Schrödinger equation

The Schrödinger equation (named after Erwin Schrödinger) is the evolution equation of quantum mechanics in the Schrödinger picture. Its simplest version results from replacing the classical expressions in the nonrelativistic, mechanical equation for the energy of a point particle by operators on a Hilbert space. We start with a point particle of mass $m$ and momentum $p$ moving in the space $\mathbb{R}^3$ with a given potential function $V$; its energy is the sum of kinetic and potential energy:
$$ E = \frac{p^2}{2 m} + V $$
Quantizing this equation means replacing the coordinate space $\mathbb{R}^3$ with the Hilbert space $L^2(\mathbb{R}^3)$ and
$$ E \to i \hbar \frac{\partial}{\partial t}, \qquad p \to -i \hbar \nabla $$
with $h$ the Planck constant and $\hbar = \frac{h}{2\pi}$ the reduced Planck constant. This results in the Schrödinger equation for a single particle in a potential:
$$ i \hbar \frac{\partial}{\partial t} \psi(t, x) = - \frac{\hbar^2}{2 m} \nabla^2 \psi(t, x) + V(t, x)\, \psi(t, x) $$
The last term is the multiplication of the functions $V$ and $\psi$. The right-hand side is called the Hamilton operator $H$, so the Schrödinger equation is mostly stated in the form
$$ i \hbar\, \psi_t = H \psi $$

Decomposition into phase and amplitude

Consider, for simplicity, the mechanical system of a particle of mass $m$ propagating on the real line $\mathbb{R}$ and subject to a potential $V \in C^\infty(\mathbb{R})$, so that the Schrödinger equation is the differential equation on complex-valued functions $\Psi \colon \mathbb{R}\times \mathbb{R} \to \mathbb{C}$ given by
$$ i \hbar \frac{\partial}{\partial t} \Psi = - \frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} \Psi + V \Psi \,, $$
where $\hbar$ denotes the reduced Planck constant. By the nature of complex numbers and by the discussion at phase and phase space in physics, it is natural to parameterize $\Psi$ – away from its zero locus – by a phase function
$$ S \colon \mathbb{R}\times \mathbb{R} \longrightarrow \mathbb{R} $$
and an absolute value (amplitude) function $\sqrt{\rho}$
$$ \sqrt{\rho} \colon \mathbb{R}\times \mathbb{R} \longrightarrow \mathbb{R} $$
which is positive, $\sqrt{\rho} \gt 0$, as
$$ \Psi \coloneqq \exp\left(\tfrac{i}{\hbar} S\right) \sqrt{\rho} \,. $$
Entering this Ansatz into the above Schrödinger equation, that complex equation becomes equivalent to the following two real equations:
$$ \frac{\partial S}{\partial t} = - \frac{1}{2m} \left(\frac{\partial S}{\partial x}\right)^2 - V + \frac{\hbar^2}{2m} \frac{1}{\sqrt{\rho}}\frac{\partial^2 \sqrt{\rho}}{\partial x^2} $$
$$ \frac{\partial \rho}{\partial t} = - \frac{\partial}{\partial x} \left( \frac{1}{m} \frac{\partial S}{\partial x}\, \rho \right) \,. $$
In this form one may notice a similarity between these two equations and other equations from classical mechanics and statistical mechanics:

1. The first equation is similar to the Hamilton-Jacobi equation that expresses the classical action functional $S$ and the canonical momentum $p \coloneqq \frac{\partial S}{\partial x}$, except that in addition to the ordinary potential energy $V$ there is an additional term
$$ Q \coloneqq -\frac{\hbar^2}{2m} \frac{1}{\sqrt{\rho}}\frac{\partial^2 \sqrt{\rho}}{\partial x^2} $$
which is unlike what may appear in an ordinary Hamilton-Jacobi equation. The perspective of Bohmian mechanics is to regard this as a correction of quantum physics to classical Hamilton-Jacobi theory; it is then called the quantum potential. Notice that unlike ordinary potentials, this "quantum potential" is a function of the density that is subject to the potential. (Notice that this works only away from the zero locus of $\rho$.)

2. The second equation has the form of the continuity equation for the flow expressed by $\frac{1}{m} p$.

(In the context of Bohmian mechanics one regards this equivalent rewriting of the Schrödinger equation as providing a hidden-variable-theory formulation of quantum mechanics.)

Any introductory textbook about quantum mechanics will explain the Schrödinger equation (mostly from the viewpoint of physicists).
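The equivalence can be checked mechanically. Below is a minimal symbolic sketch using SymPy (the names S, rho and V simply mirror the symbols above, and the check is stated in one spatial dimension): it substitutes the two real equations back into the Schrödinger equation for the Ansatz and confirms that the residual reduces to zero.

```python
# Minimal symbolic check (SymPy) of the phase/amplitude decomposition above:
# substituting dS/dt and drho/dt from the two real equations into the
# Schrödinger equation for Psi = exp(i S/hbar) * sqrt(rho) should give zero.
import sympy as sp

t, x = sp.symbols('t x', real=True)
hbar, m = sp.symbols('hbar m', positive=True)
S = sp.Function('S')(t, x)        # phase
rho = sp.Function('rho')(t, x)    # density (taken positive away from the zero locus)
V = sp.Function('V')(t, x)        # potential

A = sp.sqrt(rho)                  # amplitude sqrt(rho)
Psi = sp.exp(sp.I * S / hbar) * A

# Right-hand sides of the two real equations
dS_dt = -sp.diff(S, x)**2 / (2*m) - V + hbar**2/(2*m) * sp.diff(A, x, 2) / A
drho_dt = -sp.diff(sp.diff(S, x) * rho / m, x)

# Residual of  i*hbar*d/dt Psi = -hbar^2/(2m) d^2/dx^2 Psi + V*Psi
residual = sp.I*hbar*sp.diff(Psi, t) + hbar**2/(2*m)*sp.diff(Psi, x, 2) - V*Psi
residual = residual.subs({sp.Derivative(S, t): dS_dt,
                          sp.Derivative(rho, t): drho_dt})

print(sp.simplify(residual))      # expected output: 0
```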
Imagine you're teaching a first course on quantum mechanics in which your students are well-versed in classical mechanics, but have never seen any quantum before. How would you motivate the subject and convince your students that in fact classical mechanics cannot explain the real world and that quantum mechanics, given your knowledge of classical mechanics, is the most obvious alternative to try?

If you sit down and think about it, the idea that the state of a system, instead of being specified by the finitely many particles' positions and momenta, is now described by an element of some abstract (rigged) Hilbert space, and that the observables correspond to self-adjoint operators on the space of states, is not at all obvious. Why should this be the case, or at least, why might we expect this to be the case?

Then there is the issue of measurement, which is even more difficult to motivate. In the usual formulation of quantum mechanics, we assume that, given a state $|\psi \rangle$ and an observable $A$, the probability of measuring a value between $a$ and $a+da$ is given by $|\langle a|\psi \rangle |^2da$ (and furthermore, if $a$ is not an eigenvalue of $A$, then the probability of measuring a value in this interval is $0$). How would you convince your students that this had to be the case?

I have thought about this question of motivation for a couple of years now, and so far, the only answers I've come up with are incomplete, not entirely satisfactory, and seem to be much more non-trivial than I feel they should be. So, what do you guys think? Can you motivate the usual formulation of quantum mechanics using only classical mechanics and minimal appeal to experimental results?

Note that, at some point, you will have to make reference to experiment. After all, this is the reason why we needed to develop quantum mechanics. In principle, we could just say "The Born Rule is true because it's experimentally verified.", but I find this particularly unsatisfying. I think we can do better. Thus, I would ask that when you do invoke the results of an experiment, you do so only to justify fundamental truths, by which I mean something that cannot itself just be explained in terms of more theory. You might say that my conjecture is that the Born Rule is not a fundamental truth in this sense, but can instead be explained by more fundamental theory, which itself is justified via experiment.

Edit: To clarify, I will try to make use of a much simpler example. In an ideal gas, if you fix the volume, then the temperature is proportional to pressure. So we may ask "Why?". You could say "Well, because experiment.", or alternatively you could say "It is a trivial corollary of the ideal gas law.". If you choose the latter, you can then ask why that is true. Once again, you can just say "Because experiment." or you could try to prove it using more fundamental physical truths (using the kinetic theory of gases, for example). The objective, then, is to come up with the most fundamental physical truths, prove everything else we know in terms of those, and then verify the fundamental physical truths via experiment. And in this particular case, the objective is to do this with quantum mechanics.

"making as little reference to experiment as possible" !!! The only reason we have developed quantum mechanics is because the experimental evidence demanded it and demands it.
– anna v Dec 5 '12 at 17:59 Are you looking for a derivation from simple physical principles a la Einstein's derivation of relativity from his two postulates? That is the basic open question in quantum foundations, isn't it? – Emilio Pisanty Dec 5 '12 at 18:58 right, then the definitive answer is that such an argument does not exist. plenty of people devote their academic careers to answering your question, as Emilio implied, and no one is in agreement as to the correct answer, yet. if you are interested in this, then you should look up the work of Rob Spekkens. also, Chris Fuchs, Lucien Hardy, Jonathan Barrett, and probably a bunch of other people too. – Mark Mitchison Dec 5 '12 at 20:16 um...not necessarily. It's just that I think that I now understand the intent of the OP's question, and if I do - it cannot be put better than Emilio did - it is simply 'the standard open question of quantum foundations'. I know enough people working in this field to know that the experts do not consider this question at all resolved. – Mark Mitchison Dec 5 '12 at 20:26 Hey Johnny! Hope all is well. As to your question, I really feel that you can not talk about quantum mechanics in the way you envision while giving your students as solid an understanding when you approach from an experimental view point. I think the closest you could get would be to talk about the problems encountered before quantum mechanics and how quantum mechanics was realized and built after that; this would merely omit information on the experiments, equally unsatisfying. It is a tough question! – Dylan Sabulsky Dec 6 '12 at 1:17 show 13 more comments 11 Answers Why would you ever try to motivate a physical theory without appealing to experimental results??? The motivation of quantum mechanics is that it explains experimental results. It is obvious that you would choose a simpler, more intuitive picture than quantum mechanics if you weren't interested in predicting anything. If you are willing to permit some minimal physical input, then how about this: take the uncertainty principle as a postulate. Then you know that the effect on a system of doing measurement $A$ first, then measurement $B$, is different from doing $B$ first then $A$. That can be written down symbolically as $AB \neq BA$ or even $[A,B] \neq 0$. What kind of objects don't obey commutative multiplication? Linear operators acting on vectors! It follows that observables are operators and "systems" are somehow vectors. The notion of "state" is a bit more sophisticated and doesn't really follow without reference to measurement outcomes (which ultimately needs the Born rule). You could also argue that this effect must vanish in the classical limit, so then you must have $[A,B] \sim \hbar $, where $\hbar$ is some as-yet (and never-to-be, if you refuse to do experiments) undetermined number that must be small compared to everyday units. I believe this is similar to the original reasoning behind Heisenberg's matrix formulation of QM. The problem is that this isn't physics, you don't know how to predict anything without the Born rule. And as far as I know there is no theoretical derivation of the Born rule, it is justified experimentally! If you want a foundations viewpoint on why QM rather than something else, try looking into generalised probabilistic theories, e.g. this paper. But I warn you, these provide neither a complete, simple nor trivial justification for the QM postulates. share|improve this answer See edit to question. 
Obviously, you're going to have to appeal to experiment somewhere, but I feel as if the less we have to reference experiment, the more eloquent the answer would be. – Jonathan Gleason Dec 5 '12 at 18:15

i disagree entirely on that point, but obviously that's personal aesthetics. surely if you don't find experiments a beautiful proof, it would be better to argue that quantum mechanics is the most mathematically elegant physical theory possible, and thereby remove the pesky notion of those dirty, messy experiments completely! – Mark Mitchison Dec 5 '12 at 19:35

This seems to be a good starting point, but the problem is that measurements are not linear operators acting on vectors... But perhaps the example can be adapted. – Bzazz Feb 2 at 11:35

@Bzazz Huh? The outcome of a (von Neumann) measurement is given by the projection of the initial state vector onto one of the eigenspaces of the operator describing the observable. That projection certainly is a linear, Hermitian operator. If the observables don't commute, they don't share the same eigenvectors, and therefore the order of projections matters. – Mark Mitchison Feb 4 at 20:40

(contd.) In the more general case, a measurement is described by a CP map, which is a linear operator over the (vector) space of density matrices. The CP map can always be described by a von Neumann projection in a higher-dimensional space, and the same argument holds. – Mark Mitchison Feb 4 at 20:41

You should use the history of physics to ask them questions where classical physics fails. For example, you can tell them the result of Rutherford's experiment and ask: if an electron is orbiting around the nucleus, it means a charge is accelerating, so the electron should radiate electromagnetic energy. If that's the case, electrons would lose their energy and collapse onto the nucleus, which would end the existence of the atom within a fraction of a second (you can tell them to calculate this). But, as we know, atoms have survived for billions of years. How? Where's the catch?

+1 I also think using the history of physics is an excellent strategy, and it has the added value of learning the history of physics! The conundrum of the electron not collapsing into the nucleus is a wonderful example; I also suggested the UV catastrophe, which doesn't appeal to any experimental results. – Joe Feb 2 at 9:22

If I were designing an introduction to quantum physics course for physics undergrads, I would seriously consider starting from the observed Bell-GHZ violations. Something along the lines of David Mermin's approach. If there is one thing that makes clear that no form of classical physics can provide the deepest law of nature, this is it. (This does make reference to experimental facts, albeit more of a gedanken nature. As others have commented, some link to experiments is, and should be, unavoidable.)

Excellent answer. What would be really fascinating would be to show Einstein the Bell-GHZ violations. I can't help but wonder what he would make of it. To me, these experiments confirm his deepest concern -- spooky action at a distance! – user7348 Dec 7 '12 at 16:33

Some time ago I have contemplated Einstein's reaction (…): "Einstein would probably have felt his famous physics intuition had lost contact with reality, and he would certainly happily have admitted that Feynman's claim "nobody understands quantum physics" makes no exception for him.
I would love to hear the words that the most quotable physicist would have uttered at the occasion. Probably something along the lines "Magical is the Lord, magical in subtle and deceitful ways bordering on maliciousness"." – Johannes Dec 7 '12 at 19:14

Near the end of Dirac's career, he wrote “And, I think it might turn out that ultimately Einstein will prove to be right, ... that it is quite likely that at some future time we may get an improved quantum mechanics in which there will be a return to determinism and which will, therefore, justify the Einstein point of view. But such a return to determinism could only be made at the expense of giving up some other basic idea which we now assume without question. We would have to pay for it in some way which we cannot yet guess at, if we are to re-introduce determinism.” Directions in Physics p.10 – joseph f. johnson Feb 11 at 9:43

All the key parts of quantum mechanics may be found in classical physics. 1) In statistical mechanics the system is also described by a distribution function. No definite coordinates, no definite momenta. 2) Hamilton made his formalism for classical mechanics. His ideas were pretty much in line with ideas which were put into modern quantum mechanics long before any experiments: he tried to make physics as geometrical as possible. 3) From Lie algebras people knew that the translation operator has something to do with the derivative. From momentum conservation people knew that translations have something to do with momentum. It was not that strange to associate momentum with the derivative. Now you should just mix everything: merge statistical mechanics with the Hamiltonian formalism and add the key ingredient which was obvious to radio-physicists: that you cannot have a short (i.e., localized) signal with a narrow spectrum. Voila, you have quantum mechanics. In principle, for your purposes, Feynman's approach to quantum mechanics may be more "clear". It was found long after the other two approaches, and is much less productive for the simple problems people usually consider while studying. That's why it is not that popular for starters. However, it might be simpler from the philosophical point of view. And we all know that it is equivalent to the other approaches.

Though there are many good answers here, I believe I can still contribute something which answers a small part of your question. There is one reason to look for a theory beyond classical physics which is purely theoretical, and this is the UV catastrophe. According to the classical theory of light, an ideal black body at thermal equilibrium will emit radiation with infinite power. This is a fundamental theoretical problem, and there is no need to appeal to any experimental results to understand it: a theory which predicts infinite emitted power is wrong. The quantization of light solves the problem, and historically this played a role in the development of quantum mechanics. Of course this doesn't point to any of the modern postulates of quantum mechanics you're looking to justify, but I think it's still good to use the UV catastrophe as one of the motivations to look for a theory beyond classical physics in the first place, especially if you want to appeal as little as necessary to experimental results.

It is a shame that statistical mechanics is not more widely taught. But, hey, we live in an age when Physics depts don't even teach Optics at the undergrad level anymore... Now the O.P.
postulated a context where the students understood advanced Mechanics. So I fear the UV catastrophe, although historically and conceptually most important, will not ring a bell with that audience. – joseph f. johnson Feb 11 at 10:01

Classical mechanics is not a final theory on the one hand, yet it is not further decomposable on the other. So you can't improve it; it is given as is. For example, you can't explain why a moving body that disappears from the previous point of its trajectory should reappear at an infinitesimally close point but can't appear a metre ahead (teleporting). What constrains the trajectory points into a continuous line? No answer. This is an axiom. You can't build a MECHANISM for the constraining. Another example: you can't stop decomposing bodies into parts. You can't reach final elements (particles), and if you do, then you can't explain why these particles are indivisible any more. Matter should be continuous in classical physics, yet you can't imagine how material points exist. Also, you can't explain how the entire infinite universe can exist simultaneously in its whole information. What is happening in an absolutely closed box, or in absolutely unreachable regions of spacetime? Classical physics leads us to think that reality is real there too. But how can it be, if it is absolutely undetectable? The scientific approach says that only what is measurable exists. So how can there be reality in an absolutely closed box (with a cat in it)? In classical mechanics you can't reach absolute identity of building blocks. For example, if all atoms are built of protons, neutrons and electrons, these particles are similar, but not the same. Two electrons in two different atoms are not the same in classical physics; they are two copies of one prototype, but not the prototype itself. So, you can't define really basic building blocks of reality in classical physics. You can't define indeterminism in classical physics. You can't define unrealized possibilities, and can't say what has happened to a possibility which was possible but not realized. You can't define nonlocality in classical physics. There are only two possibilities in classical physics: one event affects another (cause and effect), or the two events are independent. You can't imagine two events that correlate but don't affect each other! This is possible but unimaginable in classical physics!

So far as I understand, you are asking for a minimalist approach to quantum mechanics which would motivate its study with little reference to experiments.

The bad. So far as I know, there is not a single experiment or theoretical concept that can motivate your students about the need to introduce Dirac kets $|\Psi\rangle$, operators, Hilbert spaces, the Schrödinger equation... all at once. There are two reasons for this, and both are related. First, the ordinary wavefunction or Dirac formulation of quantum mechanics is too different from classical mechanics. Second, the ordinary formulation was developed in pieces by many different authors who tried to explain the results of different experiments --many authors won a Nobel prize for the development of quantum mechanics--. This explains why "for a couple of years now", the only answers you have come up with are "incomplete, not entirely satisfactory".

The good. I believe that one can mostly satisfy your requirements by using the modern Wigner & Moyal formulation of quantum mechanics, because this formulation avoids kets, operators, Hilbert spaces, the Schrödinger equation...
In this modern formulation, the relations between the classical (left) and the quantum (right) mechanics axioms are $$A(p,x) \rho(p,x) = A \rho(p,x) ~~\Longleftrightarrow~~ A(p,x) \star \rho^\mathrm{W}(p,x) = A \rho^\mathrm{W}(p,x)$$ $$\frac{\partial \rho}{\partial t} = \{H, \rho\} ~~\Longleftrightarrow~~ \frac{\partial \rho^\mathrm{W}}{\partial t} = \{H, \rho^\mathrm{W}\}_\mathrm{MB}$$ $$\langle A \rangle = \int \mathrm{d}p \mathrm{d}x A(p,x) \rho(p,x) ~~\Longleftrightarrow~~ \langle A \rangle = \int \mathrm{d}p \mathrm{d}x A(p,x) \rho^\mathrm{W}(p,x)$$ where $\star$ is the Moyal star product, $\rho^\mathrm{W}$ the Wigner distribution and $\{ , \}_\mathrm{MB}$ the Moyal bracket. The functions $A(p,x)$ are the same as in classical mechanics. An example of the first quantum equation is $H \star \rho_E^\mathrm{W} = E \rho_E^\mathrm{W}$, which gives the energy eigenvalues.

Now the second part of your question. What is the minimal motivation for the introduction of the quantum expressions at the right? I think that it could be as follows. There are a number of experiments that suggest an uncertainty relation $\Delta p \Delta x \geq \hbar/2$, which cannot be explained by classical mechanics. This experimental fact can be used as motivation for the substitution of the commutative phase space of classical mechanics by a non-commutative phase space. Mathematical analysis of the non-commutative geometry reveals that ordinary products in phase space have to be substituted by star products, the classical phase-space state has to be substituted by one, $\rho^\mathrm{W}$, which is bounded to phase-space regions larger than the Planck length, and Poisson brackets have to be substituted by Moyal brackets.

This minimalist approach cannot be obtained by using the ordinary wavefunction or Dirac formalism. There are, however, three disadvantages with the Wigner & Moyal approach. (i) The mathematical analysis is very far from trivial. The first quantum equation above is easily derived by substituting the ordinary product by a star product and $\rho \rightarrow \rho^\mathrm{W}$ in the classical expression. The third quantum equation can also be obtained in this way, because it can be shown that $$ \int \mathrm{d}p \mathrm{d}x A(p,x) \star \rho^\mathrm{W}(p,x) = \int \mathrm{d}p \mathrm{d}x A(p,x) \rho^\mathrm{W}(p,x)$$ A priori one could believe that the second quantum equation is obtained in the same way. This does not work and gives an incorrect equation. The correct quantum equation of motion requires the substitution of the whole Poisson bracket by a Moyal bracket. Of course, the Moyal bracket accounts for the non-commutativity of the phase space, but there is no justification for its presence in the equation of motion from non-commutativity alone. In fact, this quantum equation of motion was originally obtained from the Liouville-von Neumann equation via the formal correspondence between the phase space and the Hilbert space, and any modern presentation of the Wigner & Moyal formulation that I know justifies the form of the quantum equation of motion via this formal correspondence. (ii) The theory is backward incompatible with classical mechanics, because the commutative geometry is entirely replaced by a non-commutative one. As a consequence, no $\rho^\mathrm{W}$ can represent a pure classical state --a point in phase space--.
Notice that this incompatibility is also present in the ordinary formulations of quantum mechanics --for instance no wavefunction can describe a pure classical state completely--. (iii) The introduction of spin in the Wigner & Moyal formalism is somewhat artificial and still under active development. The best? The above three disadvantages can be eliminated in a new phase space formalism which provides a 'minimalistic' approach to quantum mechanics by an improvement over geometrical quantisation. This is my own work and details and links will be disclosed in the comments or in a separated answer only if they are required by the community. share|improve this answer +1 for the detailed and interesting answer. But the statement that motivations of QM from a small number of physical principles 'do not work' is quite pessimistic. Much of the mathematical formulation (e.g. the Lorentz transformations) underpinning Einstein's work was already in place when he discovered relativity, precisely because people needed some equations that explained experiments. This situation may be analogous to the current state of affairs with QM, irrespective of quantisation scheme. Then Einstein came along and explained what it all means. Who's to say that won't happen again? – Mark Mitchison Dec 8 '12 at 13:28 @MarkMitchison: Thank you! I eliminated the remarks about Einstein and Weinberg (I did mean something close to what you and Emilio Pisanty wrote above) but my own explanation was a complete mess. I agree with you on that what Einstein did could happen again! Precisely I wrote in a paper dealing with foundations of QM: "From a conceptual point of view, the elimination of the wavefunctions from quantum theory is in line with the procedure inaugurated by Einstein with the elimination of the ether in the theory of electromagnetism." – juanrga Dec 8 '12 at 17:51 -1 I suppose the O.P. would like something that has some physical content or intuition linked to it, and that is lacking in what you suggest, so I do not think that pedagogically it would be very useful for this particular purpose. This is not meant as a criticism of its worth as a contribution to scholarship. – joseph f. johnson Feb 11 at 9:34 @josephf.johnson Disagree. The phase space approach has more physical content and is more intuitive than the old wavefunction approach. – juanrga Feb 14 at 19:32 I think you have misunderstood the whole point of what the O.P. was asking for, although your contribution might have been valuable as an answer to a different question. What your answer lacks is any compelling immanent critique of Classical Mechanics, an explanation of why it cannot possibly be true. And it wasn't experiments that suggested the Heisenberg uncertainty relations since the experiments then weren't good enough to get anywhere near the theoretical limits. Only recently have such fine measurements been attained. – joseph f. johnson Feb 14 at 19:40 show 1 more comment I always like to read "BERTLMANN'S SOCKS AND THE NATURE OF REALITY" * by J. Bell to remind myself when and why a classical description must fail. He basically refers to the EPR-correlations. You could motivate his reasoning by comparing common set theory (e.g. try three different sets: A,B,C and try to merge them somehow) with the same concept of "sets" in Hilbert spaces and you will see that they are not equal (Bell's theorem). 
share|improve this answer It seems to me your question is essentially asking for a Platonic mathematical model of physics, underlying principles from which the quantum formalism could be justified and in effect derived. If so, that puts you in the minority (but growing) realist physicist camp as opposed to the vast majority of traditional instrumentalists. The snag is the best if not only chance of developing a model like that requires either God-like knowledge or at least, with almost superhuman intuition, a correct guess at the underlying phenomena, and obviously nobody has yet achieved either sufficient to unify all of physics under a single rubrik along those lines. In other words, ironically, to get at the most abstract explanation requires the most practical approach, rather as seeing at the smallest scales needs the largest microscope, such as the LHC, or Sherlock Holmes can arrive at the most unexpected conclusion only with sufficient data (Facts, Watson, I need more facts!) So, despite being a fellow realist, I do see that instrumentalism (being content to model effects without seeking root causes, what might be compared with "black box testing") has been and remains indispensable. share|improve this answer -1 This is most unfair to the O.P. Why use labels like «Platonist»? The O.P. only asks for two things: an obvious and fundamental problem with Classical Mechanics, & a motivation for trying QM as the most obvious alternative. Asking for motivation is not asking for Platonist derivations nor is asking why should we give QM a chance asking for a derivation. The only physics in your answer is the Fourier transform version of the uncertainty principle, when you remark about fine resolution needing a large microscope. But the OP asks you to motivate that principle, and you merely assert it. – joseph f. johnson Feb 11 at 10:14 I highly recommend this introductory lecture on Quantum Mechanics: share|improve this answer Thomas's Calculus has an instructive Newtonian Mechanics exercise which everyone ought to ponder: the gravitational field strength inside the Earth is proportional to the distance from the centre, and so is zero at the centre. And, of course, there is the rigorous proof that if the matter is uniformly distributed in a sphere, then outside the sphere it exerts a gravitational force identical to what would have been exerted if all the mass had been concentrated at the centre. Now if one ponders this from a physical point of view, «what is matter», one ends up with logical and physical difficulties that were only answered by de Broglie and Schroedinger's theory of matter waves. This also grows out of pondering Dirac's wise remark: if «big» and «small» are mereley relative terms, there is no use in explaining the big in terms of the small...there must be an absolute meaning to size. Is matter a powder or fluid that is evenly and continuously distributed and can take on any density (short of infinity)? Then that sphere of uniformly distributed matter must shrink to a point of infinite density in a finite amount of time.... Why should matter be rigid and incompressible? Really, this is inexplicable without the wave theory of matter. Schroedinger's equation shows that if, for some reason, a matter wave starts to compress, then it experiences a restoring force to oppose the compression, so that it can not proceed past a certain point (without pouring more energy into it). See the related . 
Only this can explain why the concept of «particle» can have some validity and not need something smaller still to explain it. share|improve this answer A few millenia ago, an interesting article was published in The Annals of Mathematics about Newtonian particle mechanics: a system of seven particles and specific initial conditions were discovered whereby one of the particles is whipped up to infinite velocity in finite time without any collisions. But this is not quite decisive enough for your purposes. And Earnshaw's theorem is a little too advanced, although it is often mentioned in your context (e.g., by Feynman and my own college teacher, Prof. Davidon). – joseph f. johnson Feb 11 at 9:56 Your Answer
Department of Physics
PHYS2631 Theoretical Physics 2 (2011/12)

Details of the module's prerequisites, learning outcomes, assessment and contact hours are given in the official module description in the Faculty Handbook - follow the link above. A detailed description of the module's content, together with book lists, is given below. For an explanation of the library's categorisation system see

Classical Mechanics
Dr V. Eke
20 lectures + 3 workshops in Michaelmas Term
Syllabus: Lagrangian mechanics: dʼAlembertʼs principle, constraints and degrees of freedom, generalized coordinates, velocities and forces, definition of Lagrangian and Hamiltonian, ignorable coordinates. Variational calculus and its application: Euler equation, Hamiltonʼs principle, Lagrange multipliers and constraints. Linear oscillators: stable and unstable equilibrium, SHO and damped SHO, impulsive forces and Greenʼs function, driven oscillators and resonance. One-dimensional systems and central forces: solution by quadrature, central force problem, gravitational attraction. Noetherʼs theorem and Hamiltonian mechanics: angular momentum conservation, Noetherʼs theorem, Hamiltonʼs equations. Theoretical mechanics: canonical transformations, Poisson brackets, Hamilton-Jacobi equation, action-angle variables in 1D, integrability. Rotating coordinate systems: angular velocity vector, finite and infinitesimal rotations in 3D, rotated and rotating reference frames, centrifugal, Coriolis and Euler forces, Foucault pendulum. Dynamics of rigid bodies: kinetic energy, moment of inertia tensor, angular momentum, Euler equations, Euler angles, motion of torque-free symmetric and asymmetric tops, the heavy symmetric top. Theory of small vibrations: two coupled pendulums, normal modes and normal coordinates.
Required: Analytical Mechanics, L.N. Hand and J.D. Finch (CUP, 1998)

Quantum Theory
Dr Krauss
18 lectures + 3 workshops in Epiphany Term
Syllabus: State of a system and Dirac notation; Linear operators, eigenvalues, Hermitean operators; Expansion of eigenfunctions; Commutation relations, Heisenberg uncertainty; Unitary transforms; Matrix representations; Schrödinger equation and time evolution; Schrödinger, Heisenberg and Interaction pictures; Symmetry principles and conservation; Angular momentum (operator form); Orbital angular momentum (operator form); General angular momentum (operator form); Matrix representation of angular momentum operators; Spin angular momentum; Spin ½; Pauli spin matrices; Total angular momentum; Addition of angular momentum.
Required: Introduction to Quantum Mechanics, B.H. Bransden and C.J. Joachain (Prentice Hall, 2nd Edition)

2 lectures in Easter Term, one by each lecturer

Teaching methods
Lectures: 2 one-hour lectures per week.
Workshops: These provide an opportunity to work through and digest the course material by attempting exercises and assignments assisted by direct interaction with the lecturers and workshop leaders. Students will be divided into four groups, each of which will attend one one-hour class every three weeks.
Problem exercises: See
Pyofss is a Python-based optical fibre system simulator. Optical system components such as pulse generators, optical fibre, and filters are connected to form an optical system. Each system component modifies an optical field in sequence, with the result visualised using plots and animations.

Pyofss is free (libre) software, released under version 3 of the GNU General Public License (GPLv3). It can be installed using the pip command:

pip install pyofss

Alternatively, download the latest pyofss version from the Python Package Index (PyPI). Code development continues at the pyofss repository on github. The latest documentation for pyofss is uploaded to the Python Package Index, and can be found at the pyofss directory.

A graphical interface to pyofss can be downloaded using pip:

pip install pyofss-gui

Alternatively, download the latest pyofss-gui version from the Python Package Index (PyPI). Code development continues at the pyofss-gui repository on github. Ogg container videos may be played using the (cross-platform) VLC media player.

Pyofss is supported by multiple example simulations. Images and videos generated by the examples can be browsed using the links in the following subsections. The following images and videos, generated using pyofss, are based on figures from the book "Nonlinear Fiber Optics" (Fourth Edition) by G. P. Agrawal.

Pyofss can simulate the field evolution (representing optical pulses) within a length of optical fibre. A range of methods may be used for this propagation simulation, including the symmetric split-step (Fourier) method and the 4th-order Runge-Kutta in the interaction picture (RK4IP) method. The evolution of the field is governed by the generalised nonlinear Schrödinger equation \begin{equation*} \frac{\partial A}{\partial z} = \left[ \hat{L} + \hat{N}\{A\} \right] A \end{equation*} where \(A\) is the complex field envelope of the pulse and \(z\) is the dimension along the fibre length. The linear operator \(\hat{L}\) and non-linear operator \(\hat{N}\) can be written in the form \begin{align*} \hat{L} &= -\frac{\alpha}{2} - \frac{i \beta_2}{2} \frac{\partial^2}{\partial t^2} + \frac{\beta_3}{6} \frac{\partial^3}{\partial t^3} + \cdots \\ \hat{N} &= i \gamma \left( |A|^2 + \frac{i}{\omega_0} \frac{1}{A} \frac{\partial |A|^2 A}{\partial t} - t_R \frac{\partial |A|^2}{\partial t} \right) \end{align*} The linear operator contains terms for attenuation and (second-order and higher) dispersion. The nonlinear operator contains terms for self-phase modulation (SPM), self-steepening, and Raman scattering. Pyofss can also use an improved approximation for the Raman response function. Apart from the simulation of the previously listed effects, pyofss can also be used to study cross-phase modulation (XPM) between two separate channels, supercontinuum generation, and soliton propagation.

Pyofss is a complete rewrite of a previous simulator named pulseprop. While pulseprop was coded in C++ and was highly optimised, it lacked a certain flexibility. Pyofss is an attempt to add interactivity to pulseprop and, by using Python as the programming language, it seems to achieve this goal.
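To make the propagation step concrete, here is a minimal, self-contained split-step Fourier sketch using only NumPy. It deliberately does not use the pyofss API, the parameter values are illustrative assumptions, and only the beta2 dispersion term of the linear operator and the SPM term of the nonlinear operator are kept. It propagates a fundamental soliton, which should emerge essentially unchanged.

```python
# Minimal split-step Fourier sketch (NumPy only; not the pyofss API).
# Propagates a fundamental soliton under the simplified NLSE
#   dA/dz = -i*(beta2/2) * d^2 A/dT^2 + i*gamma*|A|^2 * A
import numpy as np

# Time/frequency grid (illustrative units: ps, km, W)
nt, t_window = 2**12, 100.0
T = np.linspace(-t_window/2, t_window/2, nt, endpoint=False)
dT = T[1] - T[0]
omega = 2*np.pi*np.fft.fftfreq(nt, d=dT)        # angular frequency grid

# Fibre and pulse parameters (assumed example values)
beta2 = -1.0                                    # anomalous GVD [ps^2/km]
gamma = 1.0                                     # nonlinear coefficient [1/(W km)]
T0 = 1.0                                        # pulse width [ps]
P0 = abs(beta2)/(gamma*T0**2)                   # peak power for an N = 1 soliton
A = np.sqrt(P0)/np.cosh(T/T0)                   # input field envelope

length, steps = 10.0, 2000                      # fibre length [km], step count
dz = length/steps
half_disp = np.exp(1j*beta2*omega**2*(dz/2)/2)  # linear operator over half a step

for _ in range(steps):
    # Symmetric split-step: half dispersion, full nonlinearity, half dispersion
    A = np.fft.ifft(half_disp*np.fft.fft(A))
    A *= np.exp(1j*gamma*np.abs(A)**2*dz)
    A = np.fft.ifft(half_disp*np.fft.fft(A))

print("input peak power :", P0)
print("output peak power:", float(np.max(np.abs(A)**2)))  # ~P0 for a fundamental soliton
```

For a fundamental soliton the output peak power should match the input to within numerical error; a full simulator such as pyofss adds the attenuation, higher-order dispersion, self-steepening and Raman terms shown above, and offers the RK4IP stepper as an alternative to the symmetric split-step method.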
(Click here for bottom) Acceptance Test. Advanced Technology. Yesterday's AT is tomorrow's joke. You might gaze upon my works and despair. IBM's PC/AT is vintage 1984. A.T., AT German, Altes Testament. English, `Old Testament' (O.T.). Anthropology Today. A journal published by Blackwell on behalf of the Royal Anthropological Institute of Great Britain and Ireland. The sister publication of at is JRAI. Antiquité Tardive. Published by la Association pour l'Antiquité Tardive, it ``aims at enriching the study of written texts from the fourth to the seventh centuries by setting these into a wider context using a multidisciplinary approach covering history, archaeology, epigraphy, law and philology.'' Did I just read the word ``enriching''? Indeed I did. I also just read that the one issue per year costs 62 euros. At those prices it better have a centerfold, and she had better not be an antique. Astatine, at atomic number 85 the heaviest known halogen. Learn more at its entry in WebElements and its entry at Chemicool. German, Atmosphäre. English, `atmosphere.' ATtention. First code in a command-set protocol defined by Hayes for its modems and become the industry standard. (Domain code for) Austria. But for 1866 and 1945, this would be Germany (.de). The US government's Country Studies website has a page of links (``Austria Country Studies'') amounting to the online version of its Austria book. Ariadne, ``The European and Mediterranean link resource for Research, Science and Culture,'' has a page of national links. There's an official government site (also in English). Rec.Travel offers some links. Telephone numbers for International direct dialing to Austria begin with 43. Academic Theme Associate. University staff responsible for advancing the designated academic theme of a house (university residence). Cf. ETA, FA. Actual Time of Arrival (of flight or of transport vehicle). In contrast with ETA. Advanced Technology Attachment. A standard for interfacing disk drives. Nothing more than the name used by ANSI group X3T10 for Integrated Drive Electronics (IDE). Air Transport Association. A trade group representing commercial airlines. All-American Twirling Academy. ``The ATA All-Stars are located in Gainesville and Lake City, Florida. Group and private lessons are offered for age 4 through high school at all skill levels.'' American Teachers Association. Founded at Nashville, Tennessee, in 1904, on the initiative of John Robert Edward Lee of the Tuskegee Institute, as the National Association of Colored Teachers. The name was changed in 1907 to the National Association of Teachers in Colored Schools, to better reflect the target membership. The name was changed to American Teachers Association in 1937. In 1966, the ATA merged with the NEA. With luck, this page of ATA history won't be history itself at the end of February. American Tinnitus Association. American Trans Air. A commercial airline. In my experience flying from South Bend, Indiana, to the coasts, ATA offers the best last-minute deals through their hubs in Detroit and Chicago (Midway). A lot of people wonder how it ended up with the not-very-mnemonic carrier code TZ. The answer is that by the time ATA got into the business (1973), all the more appropriate two-letter codes (AT, TA, TR) were taken. Getting into the business just before deregulation, ATA is sort of a 'tween company: it doesn't have the high costs of the old-line major passenger airlines, but not the low costs of a Southwest or JetBlue. 
They also don't have the name recognition of the majors. Around 2002, I encountered a travel agent at AAA in New Jersey who had never heard of it. After we finished booking on ATA, he had the cojones to tell us cheerfully that we saved 1,800 or whatever dollars -- sure, no thanks to him. ATA was the tenth-largest US carrier in 2004, ranking by passenger miles. I think ATA needs to invest in more advertising. In late October 2004 they filed for bankruptcy. Also, they're now ``ATA Airlines.'' This is supposed not to be pleonastic because ATA is no longer an acronym, just a name -- sort of a decorative collection of letters, like Kodak, but pronounced ``ayteeay.'' It's as if they had a little switch attached to the language, which turns the significance of an established usage off when flipped and prevents their name from having an expansion that ends in ``Air Airlines.'' At least they didn't claim ATA now stands for the word father translated into TURKISH. One can sympathize with the company's name problems: air and trans are as vanilla as airline word names get (as also American, in the US), and the lack of a distinctive name is probably part of their visibility problem. Indeed, as part of their bankruptcy restructuring, they were originally expecting to sell most of their main hub facilities at Midway to AirTran Airways, a low-cost carrier founded in 1993. Eventually, Southwest won the bidding war, in an agreement to buy the lease rights to six gates at Midway. The agreement involves some cash, transfer of a hangar at Midway, and very significantly a code-share agreement, the first for both ATA and Southwest. ATA will make Indianapolis, previously a secondary hub, the new center of its operations. American Translators Association. Cf. ALTA (L is for Literary). American Trucking Associations [sic, plural], Inc. A national trade association. Their Management Systems Council (MSC) has a web page. The other large trucking-industry trade association is the TCA. `Father,' in various Central Asian languages. Cf. atta. The father of modern Turkey was given the single name Mustafa at birth (1881, in Salonica). A mathematics teacher bestowed the name Kemal (`perfection') on him, and it was as ``Mustafa Kemal'' that he entered a military academy in 1895. After his graduation as a lieutenant in 1905 he was posted to Damascus, where he formed a secret society of anti-royalist (i.e., anti-Ottoman), reform-minded officers called Vatan (`Fatherland'). Other stuff happened that is not relevant to this entry. Let's just say that Mustafa Kemal was to Turkey everything Charles de Gaulle could have wanted to be for France. In 1934, he promulgated a law requiring all Turks to adopt surnames, and the Grand National Assembly gave him the surname of Atatürk, `father of Turks.' Alma-Ata (now ``Almaty,'' grumble grumble) is the largest city in Kazakhstan. The name means `father of apples.' `You,' in Hebrew (stress as usual on the final syllable). Assembly of Turkish American Associations. The nipa palm tree. It grows throughout the Scrabble forest. AT Attachment Packet Interface. Similar to SCSI. (Cf. ATA supra.) 1. n. `target' 2. interj. `on target, dead on, that's right.' Also, there's a brand of orphan computers called Atari. At least there's an FAQ for the eight-bit machines, from the <comp.sys.atari.8bit> newsgroup. We also serve a little bit on the operating system. Academy of Television Arts and Sciences. What's wrong with this picture? ATAS was founded in 1946 and is based in the Los Angeles area. 
It presents the annual prime time Emmy awards, offers other events in its LA headquarters, and publishes Emmy magazine. The similarly named National Academy of Television Arts and Sciences (NATAS) is a distinct organization based in New York. Oddly enough, NATAS is a national organization, with chapters around the US (20, as of 2004). NATAS handles the Daytime, US News, and Documentary Emmys. Sports is subsumed in one or more of those categories. NATAS chapters handle Regional Emmy Awards. Enough! PLEASE! What do you think this is, some kind of general reference encyclopedic dictionary? We're just interested in acronyms (and initialisms and abbreviations and some necessary related explanatory entries). All I ever wanted to know was, did ``Emmy'' originally stand for M.E.? (Cf. emcee.) Ah! I found an answer. (No, I'm not going to tell you here. That wouldn't be efficient. You have to follow the link.) The NYC-based NATAS has a regional chapter based in NYC: NY-NATAS. ATAS, in addition to being a ``sister organization'' to NATAS, also serves as one of its regional chapters. This begins to sound like incest. Buy the rights, it could be a hit. There's also a IATAS, which awards International Emmys (iEmmys). IATAS is a division of NATAS. It may be possible to draw the organization chart in two dimensions, but it can't be a good idea. All-Terrain Bicycle. Less common synonym of MTB. ATB, atb All The Best. Chatese, texting abbreviation. Anti-Theater Ballistic Missile. A few are still kept targeted at Broadway, although that is no longer considered a serious threat (vide ATW). People have been saying for over fifty years that Broadway is chatting with death's valet. People have probably been right, but musicals still animate the body. ATBM can also be synonymously expanded as Anti-Tactical Ballistic Missile. Again, as with ABM, confusion arises from the fact that hyphenation is not explicitly nested: ATBM is anti the TBM. These are not ballistic missiles directed against tactics, except insofar as those tactics take the form of the firing of tactical ballistic missiles. Evidently, the end of the cold war has had collateral linguistic benefits. AT Bus Variant name for ISA bus. Accelerated Thermal Cycling. Address Translation Cache. Air Traffic Control. Productive prefix (ATCA, ATCAA, ATCBI, ATCRBS, ATCS, ATCSCC, ATCT). ``All Things Considered.'' National Public Radio Program that needs some new theme music. Anatomical Therapeutic Chemical. Arquitectura y Tecnología de Computadores. [In Spanish, there has been a major struggle between ``Computador'' and ``Ordenador.'' (The latter follows French usage.) An important reason to avoid using ``computador'' is that a verb form naturally associated with that is ``computa.'' In Gulliver's Travels, Jonathan Swift has some fun with something similar, turning over in his thought various unsatisfactory alternative etymologies of ``Laputa,'' the name of the floating island. It is less important to know that in the Italian dub of ``Last Tango in Paris'' (``Ultimo Tango a Parigi'') Marlon Brando's character calls Maria Schneider's ``putana,'' but we tell you anyway.] If you're confused, read through to the end of the Pav entry. (To save time, you can start at the beginning of the Pav entry. To save frustration, wait until I publish the entry.) According to the Computer Spanglish Diccionario, a useful resource served by Yolanda M. Rivas, ordenador is seldom used. Audio TeleConferenc{ing|e}. Australian Transputer Centre. Automat{ ic | ed } Traction Control. 
Automatic Train Control. Average Total Cost. Azienda Trasporti Consorziali. Air Traffic Control Association. Air Traffic Control Assigned Airspace. Air Traffic Control Beacon Interrogator. The Association of [the] Thai Computer Industry. Antarctic Treaty Consultative Meeting. Air Traffic Controller. Air Traffic Control Radar Beacon System. Advanced Train Control Systems. Air Traffic Control {Specialist | System}. Air Traffic Control Systems Command Center. Operational since 1994. Air Traffic Control Tower. Air Transport Division of the Transport Workers Union (TWU). It represents airline mechanics and ground crew. Advanced Technical Demonstration. Automatic Thermal Desorption (TD). Address Transition Detection Circuit. aTDC, ATDC, atdc After Top Dead Center. See TDC. Asynchronous Time-Division Multiplexing. ATtention Dial Tone. Hayes modem AT command. Advanced Technological Education. A joint program of the Divisions of Undergraduate Education and of Elementary, Secondary, and Informal Education of the NSF, ``promotes exemplary improvement in advanced technological education ....'' Automat{ ic | ed } Test Equipment. For external circuit testing. Cost in the megabuck range. See, for example, Teradyne's Semiconductor Test Division. (Bureau of) Alcohol, Tobacco and Firearms of the U.S. Department of the Treasury. The ``revenooers.'' More at BATF. Australian Track & Field Coaches Association. Albert The Great. Albertus Magnus. There's an unintentionally funny site hawking his out-of-copyright-by-now works, at <http://www.AlbertTheGreat.Com/>. (``Please make payment in advance to receive over 40 volumes of truth'' from ``First Floor Rear'' somewhere in Pennsylvania.) Albertus Magnus, a Dominican priest (OP), died in 1280; he was canonized and declared a doctor of the Roman Catholic Church some time later (1931). In 1941, Pope Pius XII declared him the patron of all those who devote themselves to the natural sciences. Alexander The Great. Automatic Test Generation. [Football icon] ATH, ath, Ath. ATHlete. A symbol or abbreviation used in lieu of a specific football position. Read the athlete entry below and you'll know at least as much as I do on the subject. Advanced THermal Analysis System. A lot more people would be atheists if they didn't think that God would disapprove. A town in Alabama (home of Athens State University, founded in 1822, and... it's a county seat!), Arkansas, California, Georgia (home of UGA, the oldest state-chartered university in the US, and another county seat!), Illinois, Indiana (the University of Indianapolis has an Athens campus, but it's in Greece), Kansas, Kentucky, Louisiana, Maine, Michigan, Mississippi, Missouri, New York, Ohio (county seat, and home of Ohio's first state university), Pennsylvania, Tennessee (it's a county seat!), Texas (seat of a different county!), Vermont, Virginia, West Virginia (in Mercer County, where Princeton, naturally, is the county seat; I've been there), and Wisconsin. That's twenty-two states, and not a few college towns. In fact, Tennessee has two Athenses, because Nashville is known locally as ``the Athens of the South.'' In an article about the South that was published in 1962 (``You-All and Non-You-All,'' described within the U and non-U entry), Jessica Mitford wondered puckishly ``whether Athenians ever think of their city as `the Nashville of Greece.' '' For a similar idea, based on Emory University's self-assumed status as a ``Harvard of the South,'' see the this S.P.D. entry. 
Adelaide, capital of the state of South Australia, is also known locally as the ``Athens of the South.'' [Football icon] I just noticed a specialized use of this word. It apparently designates a football player without a single specific position, but I don't feel competent to give a certain definition, so I'll just cite a couple of instances. The back page of Notre Dame's student newspaper (The Observer) had a graphic that included this text: ``23 players signed letters of intent: 12 offense, 9 defense, 2 athletes.'' (My italics; otherwise, I've sedated the fonts and capitalization for readability. This was from the issue of February 4, 2010, the day after National Signing Day 2010. National Signing Day is the earliest date when student athletes may sign national letters of intent. There will be more about it at the link, once I sort some of it out.) The previous evening, an article on the website of the Huntington, W.Va., Herald-Dispatch reported the letter-of-intent pickings of Marshall University (the local Division-I school). The article included this: ``Quarterback Ed Sullivan [he wants to be in the ``big shoe,'' no doubt] and athletes Jermaine Kelson, Antwon Chisholm, Jazz King and [Harold `Gator'] Hoskins ranked among Marshall recruits who opted for Huntington over BCS teams. The Thundering Herd also added considerable bulk along the line of scrimmage, signing five offensive and defensive players to bolster the front.'' A list at the foot of the Herald-Dispatch article included position codes and other information. Those described as ``athletes'' in the body of the article had the position code ``ATH.'' The student athletes (a general term) were listed in no particular order that I could discern. Anyway, here are the position codes, in order of their first occurrence in the list, along with the number of players with that designation, along with their average heights and weights:

Position   #   height    weight (in lb.)
QB         1   6'2"      195
K          1   5'10"     175
OL         3   6'4.7"    283
ATH        5   5'10.6"   180
DB         3   5'11.7"   177.7
LB         2   6'2"      207.5
DE         3   6'4.3"    245
DT         2   6'4"      275
TE         2   6'4.5"    210
WR         3   6'0"      181.7

It turns out that ATH, Ath, or ath is very widely used in this context. FWIW, there don't seem to be any specific codes for special-teams positions. The ATH players aren't always relatively small. Oh, and I found an authority (Bob -- a guard... in the Notre Dame library, working beneath Touchdown Jesus!) who explained that ``an athlete'' is someone who can play more than one position. There are position names for the special teams, but everyone on those teams has a position on the main offensive or defensive team -- sort of like a day job. This entry is under construction. What that means is that I've got my feet propped up on the desk and I'm looking out the window, trying to come up with a good pun on atheism when I should be doing real work instead.

athletic shoe
This entry is under construction. But hey, we've already got a head term. Well-started is half done, so I'd say the entry is about 45% complete. The hang-up is with bowling shoes: are bowling shoes not athletic shoes because they have slippery soles, or are they not athletic shoes because bowling is not a sport? And how can I finish the entry if I don't know? How will I know if I don't do the research, and how will I do the research without funding? Send money now! And shouldn't it be the foot rather than the shoe that is called athletic?
The shoe should be an ``athlete's shoe,'' but instead we have ``athlete's foot'' and ``athletic shoe.'' This isn't working right: the more I write, the more incomplete this entry gets. You know, when people say they have to run just to stay in one place, I look at their running shoes and think: if you want to get anywhere, maybe you should run the other way. If I erased this entry completely, I'd be done. Cf. sneaker. Just to incomplete this entry more completely, I'd like to add that the odd attribution of athleticism to a shoe reminds one of homebuilding. (Well, okay, it just reminds me, but since I am one, it reminds one.) Specifically, rich folks will say something like ``I built this house in 1997'' when all they mean is that they hired a general contractor in 1996. At least with similarly misattributed corporate research and claimed accomplishments, no one doubts that the actual work was performed by humans and machines with individual identities distinct from that of the corporation. Nevertheless, have a gander at the GE entry. (Starship's ``We Built This City (on Rock and Roll)'' gets a free pass because attempting to parse rock lyrics dissolves the brain. Marconi plays the mamba. Oh noooo!) American Truck Historical Society. ``Incorporated in 1971, the not-for-profit American Truck Historical Society was formed to preserve the history of trucks, the trucking industry, and its pioneers.'' aths, Aths, ATHS Australian Tuchas for Huchas Society. My best guess, anyway. Okay, here's another try: ATHlet{e|ic}S. An abbreviation particularly common in Australia, where -- in keeping with Fowler's worst suggestion and widespread UK and Oz practice -- abbreviations are frequently written without a closing period. (There is no Australian organization, so far as I have been able to determine in way too much time devoted to the search, whose initialism is ATHS.) Addiction Treatment Inventory. A questionnaire created by TRI for drug treatment centers to report statistical data describing their programs. Used by DENS. Advanced Thin Ionization Calorimeter. A balloon-borne cosmic-ray detector. Adoption Taxpayer Identification Number. Here's an explanation from the 2004 edition of IRS publication 17 (Your Federal Income Tax: For Individuals), p. 15: If you are in the process of adopting a child who is a U.S. citizen or resident and cannot get an SSN for the child [or an ITIN either] until the adoption is final, you can apply for an ATIN to use instead of an SSN. Use form W-7A. (An ATIN is only assigned if the child has already been placed in the return-filer's home and can be claimed as a dependent. An SSN must be applied for and used as soon as possible afterwards, and use of the ATIN discontinued.) Asian Technology Information Program. ``[A] non-profit organization dedicated to providing objective and high-quality information about technology developments in Asia.'' (Link above is to US server; http://www.atip.or.jp/ is in Tokyo.) Advanced Threat InfraRed CounterMeasures (IRCM). Alliance for Telecommunications Industry Solutions. Previously called ECSA. Association of Teachers of Japanese. That URL is more permanent than it looks, but if it ever dies, the link to ATJ from <japaneseteaching.org/> will probably be kept current. Isn't that the Nahuatl word for water? Could be. I'll have to check. Hmmm. So it is. And a lot of folks have come up with interesting speculations connecting Atlantis with the Nahuatl word atl and tlan, which isn't a word in Nahuatl but occurs in a bunch of names. 
Doubtless these connections are at least as significant as various other observed coincidences. Active Template Library. For Microsoft Windows; used in creating server-side components and ActiveX controls. Association of Teachers and Lecturers. IATA abbreviation for what used to be Atlanta Hartsfield International Airport. Now it's Hartsfield-Jackson Atlanta International Airport. They extended the subway system connecting the gates and main terminal straight through to Mississippi and... Hmmm, let me check this. Okay, they added ``Jackson'' some time after the death in June 2003 of Maynard Jackson, the first black mayor of the city of Atlanta (he was first elected in 1973). He was active in the major expansion of Hartsfield, which was completed ``on time and under budget'' during his second term. (The quotation marks are standard, apparently because it was a phrase he took pride in repeating.) The ``Hartsfield'' honored an earlier mayor, William Hartsfield. American Theological Library Association. Good places to go and read comforting things after you've received reading matter from the next ATLA. Association of Trial Lawyers of America. The trial lawyers have evidently recognized that ``trial lawyer'' is not a term with positive associations. The organization has been rebranded the ``American Association for Justice'' (AAJ). I believe it was one of the Oliver Wendell Holmeses who remarked that there is no more trying experience than undergoing a trial. I don't think it was a tautological pun. I do imagine it was the jurist Oliver Wendell Holmes, Jr., who remarked this. Holmes Senior, the doctor, practiced in the days before modern anesthetics. Atlantic Monthly What can I say? I won't pretend that it's the acronym expansion of ``AM'' or ``AMM.'' Even I have standards. It was founded in 1857, so it has seen its share of ups and downs. The first years of the 21st century have been downs. Visit. Edward Weeks was the editor from 1938 to 1966. Abbreviated Test Language for All Systems. Used for test specification and test programming. IEEE standard 716. It's about the fourth item on this long page. Argonne Tandem-Linac Accelerator System. Since LINAC stands for ``linear accelerator,'' one may regard ``ATLAS'' as an abbreviation of ``Argonne Tandem-Linear-Accelerator Accelerator System.'' That is an example of what we here at SBF call an AAP pleonasm (this stands for ``acronym-assisted pleonasm pleonasm''). One would naturally expect ``ATLAS System'' as an AAP pleonasm pleonasm for ATLAS. This occurs, of course, but the AAP-assisted ``ATLAS accelerator'' pleonasm is much more common. One can also find higher-order-redundant pleonastic redundancies of higher order, like ``ATLAS LINAC accelerator at Argonne.'' ATLAS has 62 resonators. A Toroidal LHC ApparatuS. Name of one of the six particle-detector experiments at the Large Hadron Collider (LHC). The ATLAS collaboration was formed in 1992 when the proposed EAGLE (Experiment for Accurate Gamma, Lepton and Energy Measurements) and ASCOT (Apparatus with SuperCOnducting Toroids) collaborations merged their efforts into building a single, general-purpose particle detector for the LHC. at least as good as No worse than. (Doesn't sound so good that way, does it?) Adobe Type Manager. Air Traffic Management. Amateur Telescope Maker. Association of Teachers of Mathematics. UK organization; nearly 4000 members concerned with mathematical education in primary schools, secondary schools, colleges, polytechnics and universities. Asynchronous Transfer Mode. 
A nontechnical introduction is available from the ATM Forum; the text within the gifs is hard to read. ATM passes information in 53-byte cells consisting of 48 bytes of payload and 5 bytes of header. It's defined for 155Mbit/second data rates and faster. See also SDH. A tod{a|o} madre. This is a common Mexican slang expression roughly equivalent to the interjection `awesome!' The initialism occurs in graffiti or wherever else one might write it, but in speech the unabbreviated words are used. At the most basic level of grammar, the form with toda would be correct, since madre is (grammatically as well as naturally) female. In practice, todo is common. The phrase can be translated as `at full mother,' on the pattern of expressions like a toda velocidad (`at full speed'). The phrase doesn't make any more literal sense in Spanish than the translation does in English. From time to time over the past few years I've asked various Mexicans what sense they could make of the phrase, and never gotten more than admittedly ignorant speculation. It's just an idiom. At The Moment. Automat{ed|ic} Teller Machine. So far, only bank tellers, not fortune tellers. ATM's go by a variety of names. I'll list a few here someday. Until I do, however, be informed that if you write or say ``ATM machine,'' then you are a bad person. In principle, it's okay just to think it, but bad thoughts lead to bad actions, so keep that in mind. If you want to be really bad, say ``Automatic ATM Machine'' (the teller is silent). Azienda Trasporti Municipali. Transit in Milano, Italy. Asynchronous Transfer Mode Address Resolution Protocol. Asynchronous Transfer Mode (ATM)-Data eXchange Interface (DXI). A unit of pressure [abbrev. atm.] equal to 101,325 Pa. Vide bar. [Phone icon] Automated (Telephone) Trunk Measurement System. Aeronautical Telecommunications Network. Augmented Transition Network (parser). Abort To Orbit. Space shuttle landing abort plan; AOA, RTLS, and TAL are other options. Actual Time Over. Actual as opposed to targeted or predicted time that an aircraft passes a coordination point. Australian Taxation Office. Automatic Train Operation. Association of Train Operating Companies. ``[A]n unincorporated association owned by its members. It was set up by the train operating companies formed during the privatisation of [UK] railways under the Railways Act 1993.'' Automatic Transfer Of Kana kanji. Kana is the Japanese syllabary, with about 95 characters -- hiragana and katakana (about 145 including diacriticals). Kanji are Chinese characters used in Japanese (a few thousand). atomic mass Physicists' term meaning mass of an atom, when the mass is given in amu (atomic mass units). Totally different from atomic weight, you understand, although quantitatively identical. atomic names Given names without accepted shorter form. (What is ``accepted'' is, of course, a matter of opinion.) Many atomic names, such as Drew, Joe, Ron, Sam, and Tom, are short or diminutive forms of other names. Since every name that is not itself atomic must by definition have an accepted form that is shorter, and given the usual mathematical facts about phonemes, every name must be or yield at least one atomic name. Some of these are probably only rarely given names themselves, since there does seem to continue to be a tendency to avoid giving legal names that are primarily used as nicknames based on other names. Aargh! Why does everything have to get so complicated when you think about it?
I really only wanted to mention traditional atomic names like Kim, Lee, and Saul. For obvious reasons, atomic names tend to be monosyllabic. Aaron and Oscar are pretty solid exceptions, although I knew an automobile repairman who used ``Os'' for the latter. A semiconductor physicist of my acquaintance was upset when his granddaughter was given the non-atomic (molecular?) name ``Candace.'' He feared she would end up being called ``Candy,'' not be taken seriously as a student in school, drop out, and lead a miserably unambitious, unliberated existence. This is only a slightly extreme version of the theory that Nomenclature is Destiny. (Following that link you can find another kind of atomic name: Atom Egoyan.) atomic number The number of protons in a nucleus. Physicists abbreviate this by the capital letter Z. atomic weight Chemists' term, short for relative atomic weight. The atomic weight of a chemical substance is the weight in grams of one mole of the substance, divided by one twelfth of the weight in grams of one mole of carbon atoms (strictly, carbon-12). Because of the principle of equivalence (even just the weak principle of equivalence), this ratio is the same at any altitude, so it's practically a measure of mass. Physicists define a quantity -- the atomic mass unit (amu) -- that is one twelfth the mass of a carbon atom. (Or, if you prefer, defined as one twelfth the mass of a mole of carbon atoms, divided by Avogadro's number, which is the number of carbon atoms in a mole of carbon atoms.) Since a ratio of masses equals the corresponding ratio of weights (principle of equivalence, remember?) the mass of an atom of some element (its atomic mass), given in amu, equals the atomic weight of the element. Physicists prefer to distinguish mass and force (weight), so in contexts typically described or analyzed in physical terms, one tends to see the atomic mass term. (These contexts are more likely to be in solid, surface, interface, gas, or plasma phase, and to depend on detailed dynamics of individual particles of matter. Typical instance: atomic mass spectroscopy.) Chemists tend to deal primarily with weights, and in chemical contexts, one sees atomic weight. (Chemical contexts are predominantly liquid-phase, typically involving macroscopic numbers of particles. Any situation involving a molecular species or chemical reaction is likely to be analyzed in chemical terms.) It is, of course, impossible to define a sharp boundary between chemical and physical contexts or approaches. To some extent, the distinction is one of conceptual approach, even when the substantive situation is the same, and has more to do with pedagogical traditions in the different disciplines than with any great difference in effectiveness. Asynchronous Transfer Mode (ATM)-Oriented Multimedia Information System. Atoms in the Family The title of a book by Laura Fermi (née Capon) about her husband Enrico, the famous physicist who died in 1954. The book was published that year by the University of Chicago Press. As Laura explained in the acknowledgments, it was Dr. Cyril Smith who gave her the idea for the book: ``You should write your husband's biography,'' he told me. ``I cannot,'' I answered. ``My husband is the man I cook for and iron shirts for. How can I take him that seriously?'' Fermi is one of my favorite physicists, and this is one of my favorite books. atom smasher Atoms are very small. I guess that's why they're so hard to smash.
I may have something to say here later about cyclotrons and other accelerators, but for now I just wanted to have this entry here for a quote. Interviewed at a training session in Las Vegas, ahead of a non-title bout February 22, 2003, 36-year-old juvenile delinquent Mike Tyson was being philosophical about his bad-boy image: ``Every religion has a saying about throwing stones in glass houses. I can't throw a sand pebble. I can't spit, I can't throw an atom at nobody.'' (This and other reflective contemplations in the London Independent, February 10, 2003. More about this fascinating creature at the bite me entry, coming soon.) atonal music Music that has tones, alright, but no key -- or many. Sounds like it keeps slipping a cog. Generally associated with the name of Schönberg, but it was pioneered by Liszt as early as the 1830's. Schönberg (1874-1951) had to emigrate to the US to escape the Nazis, and the separation from even that small audience that could appreciate his work was a living death. Absolute Thermoelectric Power. Acceptance Test Plan. Adenosine TriPhosphate. A kind of biological fuel for internal transport in a biological cell. Energy is stored in ADP by adding a phosphate group, and extracted by removing it, elsewhere, from the product ATP. In a pinch, you can extract a bit more energy from ADP by removing another phosphate group and leaving AMP. Advanced TurboProp. Made by British Aerospace. As of this writing (8/1996), United Express flies these critters from O'Hare to South Bend seven (7) times a day. Total flight time is only 25 minutes. Most of them are only four or five years old, so you have pretty favorable odds of arriving. Airline Transport Pilot. Highest grade of pilot certificate. All Tests Pass. Alternate Transient Program. A version of Electromagnetic Transient Program (EMTP), a standard code for real-time simulation of power systems including single-phase and three-phase balanced and unbalanced circuit modeling, various equivalent-circuit models for T-lines, and time-dependent models for simulating circuit breakers, lightning arrestors, and faults. Considered user-inimical. Appletalk Transaction Protocol. Application Transaction { Protocol | Program }. Association of Tennis Professionals. Authority To Proceed. Granted by Air Traffic Control. Automatic Train Protection. A system used on some British railway lines. The system determines a maximum safe speed for the train and applies the brakes if that speed is exceeded. There were plans to install it widely in the 1990's, but costs proved greater than expected. American Technological Preeminence Act. Gee, you don't think this wording will offend anyone? Nah -- I checked it out. All our constituents are fine with it. Association of Theatrical Press Agents and Managers. Members of this trade union are not strictly required to be theatrical themselves; they just serve as publicists and managers of theater productions -- which productions are themselves theatrical in some sense of the word. You wanted that spelled out. Automatic Test-Pattern Generation. Association of Teachers of Preventive Medicine. Founded in 1942, it's ``the national association supporting health promotion and disease prevention educators and researchers.... ATPM members also include members of the Association of Preventive Medicine Residents.'' AppleTalk Print Services. Americans for Tax Reform. A group that wants taxes reduced. It's not officially affiliated with the GOP. You know, this entry used to read ``Americans for Tax Reform. 
A group not officially affiliated with the GOP that wants taxes reduced.'' That was funnier, but the edited entry is better because we want to serve browsers who visit us with precise and unambiguous definitions. Attenuated Total Reflection. Authorization To Recruit. Automat{ed|ic} Target Recogni{tion|zer}. Remember in Robocop, that behemoth with machine guns that required some adjustment? Assistive Technology Resource Alliance. Adaptive TRansform Acoustic Coding. Atom-Transfer Radical Polymerization. Abstract Test Suite. (FAA) Air Traffic Services. Asian Test Symposium. Association of Theological Schools in the United States and Canada. They're in the accreditation business. That could get interesting. Auxiliary Territorial Service. A British something or other founded in 1941. Advanced Television Systems Committee. ``ATSC was formed by the Joint Committee on Inter-Society Coordination (JCIC) to establish voluntary technical standards for advanced television systems, including digital high definition television (HDTV). ATSC suggests positions to the Department of State for their use in international standards organizations. ATSC proposes standards to the Federal Communications Commission.'' Australian Telecommunication Standardisation Committee. Agency for Toxic Substances and Disease Registry. Atchison, Topeka, and Santa Fe RailRoad (RR). (Australia's) Aboriginal and Torres Strait Islander Commission. ATSIC was created in 1990 by the Labor government of Hawke. During parliamentary discussion of the ATSIC Act in 1989, MP John Howard said that establishing ATSIC would be ``sheer national idiocy'' and described ATSIC as a ``black Parliament.'' As PM in 2004, he's getting his opportunity to replace it. It's a fascinating story, so now you know what to look out for. (Australia's) Aboriginal and Torres Strait Islander Services. In April 2003, this new government agency was created by Philip Ruddock (then the indigenous affairs minister). This agency was to manage ATSIC's budget under policy direction from ATSIC's elected leaders. Laotian monetary unit. But what would you buy with it? [Phone icon] American Telephone and Telegraph [Company]. Gothic for `father.' The first sentence of the Lord's Prayer in Gothic is Atta unsar þu in himinam, weihnái namô þein;. Attila (ca. 406-453), was the last and most powerful king of the Hun empire. His fame was such that he remains famous (in Hungary and Turkey) and infamous (in the rest of the West) to this day. His name remains a popular boy's given name today in Hungary and (also as Atilla) in Turkey. The last of his many wives was named Ildikó, and that name is still used in Hungary today. The wife of a colleague from Hungary has that name, and she explained its origin to me with pride. (But maybe she just enjoys the expected shock value.) Ildikó was a Goth, and he died shortly after marrying her. Historians tend to trust the reports of Priscus, a historian who traveled with Maximin on an embassy from Theodosius II in 448. According to Priscus, he died on the night after a feast celebrating that last marriage. After he was buried with rich funeral objects, his funeral party was killed to keep his burial place secret. Let's review: a man of moderate dietary habits, in his mid-forties, apparently healthy and with everything in the world to live for, gets a nosebleed and chokes to death. Many are dead and no one alive will admit he attended the funeral. This doesn't sound suspicious? ``The Scourge of God'' didn't have any enemies? 
Other reports say one or another of his wives killed him, but the reports that have come down to us are not contemporary. If only Dan Rather would give us his gut sense of the matter, then we could be sure. The Hun empire included many Goths, and in the Gothic language, Attila can be understood as `little father.' Ata or Atta is also a common word for `father' in various Central Asian or at least Turkic languages (see ata), and in one or another of these Attila may mean `land-father.' There are other possibilities. You could look it up. Stalin, another fellow with some blood on his hands, was known by the epithet of ``little father.'' In Romanian, that was tatucul. Here I guess we see the diminutive ending -cul preserved from Latin. According to the W. Meyer-Lübke Romanisches etymologisches Wörterbuch, the Romanian word tata, meaning `father,' has cognates in many Romance languages, though not in Latin. The meaning in some of these other languages is familial but varies. In Old Romanian taica meant `older sibling, advisor to young maidens,' and some tata cognates have referred variously to a younger sibling, older sibling, maiden, etc. Come to think of it, I've heard ``tatas'' used in English. It had something to do with mamas, iirc. Let me look that up in a slang dictionary... oh! I guess I don't want to go there. There's a cognate of tata that also meant `father' in Lombardic. This was the language of a West Germanic tribe that settled in northern Italy and ended up speaking a version of Romance with little Germanic vocabulary left in it, so this is a weak reed to support a Germanic etymology. The Meyer-Lübke doesn't draw any connection to East Germanic (i.e., Gothic) or other pre-Romance languages. It seems very hung up on the idea that the initial vowel would not have been elided. In the instance of one Romance tata variant [(l)ata], it suggests a possible connection with the word ätti in Swiss German (i.e., one of the local varieties of German spoken in Switzerland). I have one thing to say to these crazy linguists: get your head out of your ass! Before Stalin, and before he himself had much blood on his own hands, Tsar Nicholas II was known as the little father. His enemy Nestor Makhnos (a bloody anarchist military commander) was given the nickname batko by his men; this meant `little father.' When John F. Kennedy ran for president in 1960, his younger brother Bobby Kennedy served as campaign manager. He was rather bossy with the campaign staff, who used to say ``Little Brother is Watching You.'' (I just figured I'd throw that in there for a little comic relief, so it's not all about dictatorial leaders or bloody assassinations.) Okay now, back to that earlier Scourge of God. The stress in the English pronunciation of Attila is on the second syllable, but in Gothic and in modern Serbo-Croatian it is on the first syllable. All the continental German forms of the name apparently have initial stress. Middle High German documents from around 1200 record Attila's name as Etzel. This represents two systematic sound shifts: (1) umlaut, specifically assimilation of a to i (yes, even though the vowels were originally separated by a consonant; that's how umlaut works), and (b) affrication of the voiceless stop /t/ into /ts/, part of the second Germanic sound shift (LV). Attila's name provides one bit of evidence that, in at least one High Germanic dialect, the LV2 process had not ended by about 450. 
Taken all together, the various bits of evidence suggest that LV2 began spreading from the southern extreme of the West Germanic region in the sixth century (probably from Lombardy, when the Lombards still spoke a Germanic language). Etzel became an important character in medieval German folklore. Edsel is a variant form of the name. The most famous person to bear it in modern times was Edsel Ford, son of the Henry Ford who founded the car company named after himself. When the company introduced a new line of cars in the late 1950's, they got the name Edsel. The line flopped infamously, and the name Edsel came to stand for commercial failure. Studies later showed that one of the many reasons it failed was a public perception of the Edsel name as odd. Naming the new line ``Attila'' or something else better known would probably not have helped much, however: the line was introduced at the start of a recession that killed off the Nash, Packard, Hudson, and DeSoto marques, and left one or two others mortally wounded. The Ford family was partly of Dutch or Flemish descent, but if there is a particular reason for the choice of name, it is not publicly known. There have been reports that the Ford family was opposed to using Edsel as the name of a car line, but their objections can't have been too strong. The company had been family-owned, only becoming a publicly traded corporation in 1956, but the Ford family has retained a controlling interest to this day (July 24, 2005, if you must know). The company had great trouble choosing a name, even going so far as to solicit some famously terrible suggestions from the famous poet Marianne Moore (``The Intelligent Whale,'' ``The Utopian Turtletop,'' ``The Pastelogram,'' ``The Mongoose Civique''). Plato was right about poets. At the meeting that chose the name, Ernest Breech stepped into the breach. Chairing the meeting in the absence of Henry Ford II, he urged the adoption of Edsel, name of the company's second president. Agence de Transfert de Technologie Financière. ``ATTF Luxembourg was created in 1999 by the State of the Grand-Duchy of Luxembourg (Ministry of Finance) - main shareholder, the Central Bank of Luxembourg (BCL), the Chamber of Commerce of the Grand-Duchy of Luxembourg, the Financial Sector Supervisory Commission (CSSF), the Institute for Training in Banking, Luxembourg (IFBL), the Luxembourg Bankers' Association (ABBL - replaced in 2002 by the Federation of the Professionals of the Financial Sector - PROFIL) and the University of Luxembourg....'' American Telephone and Telegraph Global Information Solutions. The former NCR, after it was bought out, and before it was spun off. at the weekend British for `over the weekend' or `on the weekend.' The two translations given here have slightly different but overlapping ranges of meaning. Without venturing to specify these precisely, it seems that in Canada the semantic ranges are not the same as in the US: googling with restrictions to .ca and .us TLD's indicates that the on form (not the on reading!) is relatively more popular in the former. Gorrr, these people are making the language incomprehensible! At this time portable electronic devices may now be used. Around the time also heralded by ``At this time you are now free to move about the cabin, but we ask that otherwise you remain seated with your seat-belt fastened for your safety.'' Not long after the ``last and final boarding call'' for your flight.
attire, proper I feel certain that somewhere in this glossary there is a muddled, poorly-remembered reference to the material quoted below, but as I have only a muddled, poor recollection of where that entry is, I'll deposit the quotation here. It's taken from page 19 in my Pocket Books copy (chapter 3, at any event) of John P. Marquand's The Late George Apley. (Marquand's Apleys are fictional; the book is a satire so gentle that you have to read pagefuls just to get a laugh.)   Shortly before he [Thomas Apley, the writer's (George's) father] purchased in Beacon Street he had been drawn, like so many others, to build one of those fine bow-front houses around one of these shady squares in the South End. When he did so nearly everyone was under the impression that this district would be one of the most solid residential sections of Boston instead of becoming, as it is to-day, a region of rooming houses and worse. You may have seen those houses in the South End, fine mansions with dark walnut doors and beautiful woodwork. One morning, as Tim, the coachman, came up with the carriage, to carry your Aunt Amelia and me to Miss Hendrick's Primary School, my father, who had not gone down to his office at the usual early hour because he had a bad head cold, came out with us to the front steps. I could not have been more than seven at the time, but I remember the exclamation that he gave when he observed the brownstone steps of the house across the street.   ``Thunderation,'' Father said, ``there is a man in his shirt sleeves on those steps.'' The next day he sold his house for what he paid for it and we moved to Beacon Street. Father had sensed the approach of change; a man in his shirt sleeves had told him that the days of the South End were numbered. For more Marquand material, see the BF entry. For yet more material -- the whole nine yards, as it were -- try Sartor Resartus, by Thomas Carlyle. (No, no one really knows the origin of the expression ``the whole nine yards.'' I'm sure there's a Nobel prize in it for the fellow who cracks that nut.) American Telephone & Telegraph Information Systems. I've seen both this and ATTGIS used. ATTN, Attn. attributive noun A noun functioning as a modifier--usually as an adjective. An attributive noun may itself be a compound noun or noun phrase. In that case, the attributive noun is traditionally hyphenated. Thus, the noun phrase ``intermediate frequency,'' consisting of the adjective intermediate modifying the noun frequency, becomes the attributive noun ``intermediate-frequency'' and can modify the noun amplifier in the noun phrase ``intermediate-frequency amplifier.'' The hyphen allows a reader encountering the words intermediate and frequency in sequence to parse them immediately as a modifier. If a compound attributive noun is written without a hyphen, then a reader is likely to misinterpret it initially as a subject or predicate, and is forced to reread or rethink the text when the noun functioning as noun is finally encountered. Of particular interest in the present reference is the fact that the better literature, back in the day, preserved the hyphen in abbreviations. Hence, an intermediate-frequency amplifier was abbreviated I.-F. amp., whereas the center frequency of the signals such a device was designed to amplify was simply I.F. Sigh. For old times' sake, we've indicated the various historical abbreviated forms for the electronics abbreviations DC, AC, and IF. 
In part, this preservation of hyphenation in abbreviated forms was intended to help the reader recognize the abbreviation. It was an innocent time. A similar motivation led to the disappearance of periods in British abbreviations, as discussed in the Mr entry. We now continue with the discussion of attributive-noun hyphenation in unabbreviated cases. The hyphenation rule is applied loosely. Some noun phrases, particularly proper nouns (e.g., Dow Jones) or disciplinary titles (e.g., Fluid Mechanics) are likely to be recognized as attributive in context and are not hyphenated. Sometimes the attributive noun phrase itself consists of an attributive compound noun modifying another noun (so in formal rather than functional terms, one may have an adjective followed by three nouns). In these cases there is no generally accepted rule; one hyphenates in whatever way seems likely to make the meaning clear most immediately. In the case of attributive noun phrases that include a quantifier, American usage follows an interesting rule: when the noun phrase is transformed into a modifier, the noun component of the original phrase is put into singular form. For example, the noun phrase ``two cars'' becomes the adjective ``two-car,'' as in ``two-car garage.'' British usage does not follow this rule (hence ``two cars garage'', with the stress on the first syllable of garage and the comma after the quote for good measure). I'm not sure what the traditional rule has been, but now the plural-singular transformation seems to apply sometimes in Britain. It might just be American media influence. Canadian usage appears to coincide with US. Another example: ``nine days' wonder'' (British) vs. ``nine-day wonder'' (N. American). Of course, there are exceptions. See if you can find the one in the car alarm entry! Another difference between British and North American dialects' use of plural (but not directly concerning attributive nouns) has to do with the grammatical number of collective nouns. In North American English, collective nouns are generally grammatically singular unless the noun form is plural (``Congress meets,'' ``the Miami Heat is out of the play-offs,'' but ``the Yankees win''). In British, collective nouns are usually grammatically plural even when the noun form is singular (``Manchester United win''). Attributive nouns get a mention in the Latin lesson at the A.M. entry. Atü, atü German, Atmosphärenüberdruck. English, `above atmospheric pressure.' Advanced-Technology Vehicle. Advanced TeleVision. FCC term encompassing everything from digital HDTV to enhancements of the current analog standard. Here's their latest document on the matter, as of early 1998. The IEEE Approved Indexing Keyword List instructs that HDTV be used in place of ATV. I like this idea better than the FCC's, because frankly, ``advanced television'' is an oxymoron. All-Terrain Vehicle. ATazanaVir. A protease inhibitor used in the treatment of AIDS. All-Terrain Vehicle Association. Sister organization: AMA. American Theatre Wing. Their logo displays a mask with two of them (wings, that is; the feathered sort, not the architectural). ATW is ``devoted to promoting excellence in the American theatre.'' I infer that this is done by staging expensive productions of musicals in New York City. ATW bestows Tony Awards. ``Wing'' sounds kind of martial. Or maybe wings are intended to suggest angels' wings and death. Vide ATBM. ``As the World Turns.'' A CBS daytime soap opera. German, Abgasuntersuchung. `Gas emission investigation.' Cf. ASU. 
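Since we're on the subject, here is a toy sketch in Python -- my own illustration, not anything from the usage authorities mentioned above -- of the North American quantifier rule described at the attributive noun entry: when a quantified noun phrase is used as a modifier, it gets a hyphen and its noun goes into the singular. The singularization below is deliberately naive (it just strips a trailing s), which happens to be good enough for the examples in that entry.

def attributive(phrase):
    # ``two cars'' -> ``two-car'', as in ``two-car garage''
    number, noun = phrase.split()
    if noun.endswith("s"):
        noun = noun[:-1]    # naive singularization; fine for these examples
    return number + "-" + noun

print(attributive("two cars") + " garage")    # two-car garage
print(attributive("nine days") + " wonder")   # nine-day wonder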
French with the same meaning but not the same usage as à le. The French expression à le is used primarily to explain what au means. I suppose au can be regarded as a contraction of à le. A contraction of à   la is à la. Americans United for separation of church and state. AU, a.u. Astronomical Unit. The average earth-sun distance. Obviously this is not a very precise definition: even the two most obvious averages -- time average and angular average -- are unequal by Kepler's 2-3 law. No matter, the eccentricity of earth's orbit is small (~1%). In the most interesting units, 1 AU = 8.3 light minutes. In units that would be more meaningful to those planning to drive, it's about 149.6 million kilometers (that's 92 or 93 million miles, give or take a gas station). Even though we could do so, we do not give a more precise value at this entry. After all, el que quiere celeste, que le cueste. Also, we get more hits this way. See the IAU entry. Auburn University (in Auburn, Alabama). Latin, Aulus. A praenomen, typically abbreviated when writing the full tria nomina. Chemical symbol for gold, from the Latin aurum. For a bit on gold in semiconductor electronics, see the Gold entry. For a bit on the geology of gold mines, see the pluton entry. For a movie connection see AU1. For more general information visit the gold entry in WebElements and the entry at Chemicool, where it was #2 on the Top Five List a long time ago when I checked. AUdio. Filename extension for a Sun Unix sound file format. Australia (ISO code used in TCP/IP addresses). Country code 61 for telephone. Currently, the country consists of six states, some territories with various degrees of self-government (the Northern Territory, the Australian Capital Territory, and Norfolk Island) and various federally administered external territories. Association des Universités Africaines / Association of African Universities. Association of University Architects. African Union Broadcasters. American University of Beirut. a.u.c., A.U.C., AUC Ab Urbe Condita. Latin: `from the founding of the city [of Rome]' (around 753 BCE). Roman date designation. Area Under Curve. True, it's a count-on-your-fingers way to say `integrated,' but medical researchers apparently use this expression `professionally.' Maybe they're trying to drum up new business; the acronym certainly makes me sick. In the medical context, AUC is frequently the time integral of a solute concentration in blood or plasma. Autodefensas Unidas de Colombia. Spanish, `united self-defense [forces] of Colombia.' Nominally a union of at-least-originally independent militias fighting against the left-wing armies of Colombia (ELN and FARC), and the name acronym is construed plural in Spanish, but nevertheless it does appear to be under a single command. I seem to recall it was begun by Jesús A. Castaño, who was killed in 1980, and continues under the leadership of his sons. It is certainly in organizations of people that grammatical-number distinctions begin to blur. This is even more the case for the military and civilian ``wings,'' or what have you, or organizations regarded as terrorist. This is interesting: they seem to have a website. Association Universitaire Catholique d'Aide aux Missions. A publisher in Louvain. Association of Universities and Colleges of Canada. 
Despite the enormous difference between the vocabularies of English and French, this organization somehow managed to contrive a French name that would correspond to the same initialism (it's usually impossible): Association des universités et collèges du Canada. Academy of Upper Cervical Chiropractic Organizations. It appears that au courant French: `up to date.' Stupid: `with berries.' Sometimes I feel like I wrote a beautiful reference work and some jerk-off came along and scrawled graffiti all over it, and it turned out that I was the jerk-off. I also have an entry for au. Doctor of AUdiology. According to itself, this ADA is the ``Home of the Au.D.'' The Audi car company (fnd'd. 1909) got its name from the imperative singular of audio (Latin for `I hear') because the founder, a German named August Horch, had sold the rights to his name along with his first car company (fnd'd. 1899). The use of a Latin calque was the man's son's suggestion. Perhaps it's a slight approximation or exaggeration to call it a calque. Oh, alright, it's not a calque -- audi is the Latin translation of German horch. [The German verbs hören (`to hear') and horchen (`to listen') are cognate with the English words hear and hearken. Needless to say, all are cognate with das Ohr, `ear.' The semantic distance between horchen and hören is perhaps not so great as between listen and hear.] Im Jahre 1932, Audi and Horch combined, along with Wanderer and DKW (Das kleine Wunder), into Auto-Union, adopting a logo in the form of four interlocking rings that is still the trademark of Audi. [Kleine Wunder can be literally translated `small wonder,' but the German expression only has the sense of `small miracle,' and does not suggest `no surprise [that]' like the English expression. Little wonder the company folded and was merged away.] More details on Audi company history here. I cribbed this from a posting on the Classics list, naturally. Here it is in the archives. Incidentally, Audi is itself not, um, unheard of as a surname. Robert Audi (b. 1941), for instance, is the author of many philosophical works, such as Action, Intention, and Reason (Cornell University Press, 1993), and general editor of The Cambridge Dictionary of Philosophy (CUP, 1/e 1995, 2/e 1999). a.u.e, a.u.E, AUE, aue Alt.Usage.English, a newsgroup. AUstralian Eastern Standard Time. German noun (neut.) meaning `eye.' Spanish noun (masc.) meaning `culmination' or, in a figurative sense, `apogee.' I'd like to mention that symbol on the greenback, the eye above the pyramid, and I would, if I could see any excuse to do it. Auger process Two-stage photo-ionization process, in which the energy of a photon is initially absorbed by an electron in a deeply-bound state. This electron has not absorbed enough energy to escape (to be ionized). When the hole it leaves behind is filled, however, the energy is transferred to an electron in a higher-lying state, which does become ionized. [Pronounced ``Oh-zhay.''] Associated Universities, Inc. ``... a not-for-profit corporation based in Washington, DC. It was founded in 1946 by nine northeastern universities to manage major scientific facilities. AUI currently operates the National Radio Astronomy Observatory under a cooperative agreement with the National Science Foundation [NSF].'' Attachment Unit Interface. A type of connector. Standard (Hepburn) transliteration of Japanese version of the Indian holy syllable om. Part of the name of the Japanese poison-gas cult Aum Shinrikyo mentioned at the LPF entry.
Shinrikyo means something like `supreme truth.' Authorization for Use of Military Force. Name of an act of the US Congress passed on September 14, 2001. African Union Mission in Sudan. Officially AMIS, q.v. Acceptable Use Policy. Association of University Programs in Health Administration. It describes itself as ``a not-for-profit association of university-based educational programs, faculty, practitioners, and provider organizations. Its members are dedicated to continuously improving the field of healthcare management and practice. It is the only non-profit entity of its kind that works to improve the delivery of health services throughout the world - and thus the health of citizens - by educating professional managers at the entry level.'' au pis French expression literally meaning something like `at worst' (see au and pis aller). The English expression ``at worst'' often has a meliorating connotation, as if to suggest that the worst possible may not be so bad. The flatter connotation of au pis is apparently better captured by `if worse comes to worst.' I suggest the mnemonic ``oh piss!'' (Better yet ``aw pee!'') Incidentally, pis also means `udder,' so ``veau au pis'' does not have to mean `calf at worst.' Unfortunately, ``pis pis'' just means `worse udder.' I was kinda hoping there could be an udder-worst-type pun. Association of University Radiologists. Affiliated societies on the web: APDR and A3CR2. Auriga. Official IAU abbreviation for the constellation. Association of Universities for Research in Astronomy. AppleTalk Update-based Routing Protocol. Autonomous Undersea System[s]. (US Navy acronym.) AUStralian Computer Emergency Response Team. ``Emergencies'' are security breaches. See CERT for other relevant organizations. ausgeruhter Kopf `Well-rested head' in German. The education director of a Texas academy emailed today to praise our WAC entry. It reminds me of the classic movie Fast Times at Ridgemont High, from 1982. It was a perfect movie. For example, its main page at IMDb says that the ``plot synopsis is empty.'' See what I mean? Perfect! Anyway, one of the characters is Brad Hamilton (played by Judge Reinhold) who likes to describe himself as ``a single, successful guy,'' at least until he loses his job and his girlfriend. It just goes to demonstrate the fragility of life. But I wasn't reminded of this immediately. I just mentioned the email to mom, and read her the WACky entry. She didn't think it was so inspired. I must have read it too fast. Yeah, that's it. Then I mentioned that yesterday I had an email from a guy who wrote ``And Stammtisch Beau Fleuve means what? Table reserved by a beautiful river?'' That made her laugh, even though it's a fair interpretation. After she stopped laughing, she commented that what her grandmother would have said about the glossary was (is?) that it's the product of an ausgeruhter Kopf. Googling on this phrase and related ones (vom ausgeruhten Kopf, etc.) suggests that this is no longer, if it ever was, a common expression. Anyway, since you asked what I wrote (you did, didn't you?), here it is: ``Beau fleuve'' is believed to have been used in reference to the Niagara River, and to be the source, in corrupt form, of the name of the city of Buffalo. I started the glossary when I was an asst. prof at the University of Buffalo, and there was a bunch of friends I ate lunch with regularly. 
At the time (1995), the fellow in charge of Engineering Computing was stupidly reluctant to let me set up a web site for a small glossary of microelectronics terms (and some other words and abbreviations I used in class). To bypass him, I got a website from a different university webserver for the stated purpose of having a web presence for a university group (my lunch group). To get the relevant university official to grant my request, I tried to make it sound a bit more serious or at least established [than it actually was], so I gave our informal group a name. Asociación de Universidades confiadas a la Compañía de Jesús en América Latina. (Spanish: `Association of Universities entrusted to the Society of Jesus [SJ] in Latin America.') Corresponding US organization is AJCU. AUstralian Science and TEchnology Heritage Centre. Launched in December 1999, it was the immediate successor to ASAP. AUStralian TELecommunication Authority. See Janeite. Australia Day Previously known as Anniversary Day and Foundation Day, Australia Day commemorates the beginning of settlement in Australia, when Governor Arthur Phillip landed at Sydney Cove on January 26, 1788. Interestingly, this is a holiday that was once celebrated as a Monday holiday to make a three-day weekend, but which now is celebrated on the actual day. In the years before the 1988 bicentennial, it was celebrated on the first Monday following January 26, but in 1988 it was celebrated on the anniversary (a Tuesday that year) and has been ever since. For someone whose national holiday celebrates independence and freedom, the particulars of the event commemorated on Australia Day can induce queasiness. Governor Phillip came to found a penal colony. The ships he came with carried, in addition to 450 sailors and government personnel, over 750 prisoners (including 15 children). Australia celebrates its other national holiday in common with New Zealand: Anzac Day, described at the ANZAC entry. Australia has other public holidays, but they're not especially national: Good Friday and Easter Monday (I guess that's a three-day weekend plus a day to dry out), Christmas and Boxing Day, and New Year's Day. There are three officially observed days that are not public holidays: Commonwealth Day (second Monday in March), Mother's Day (second Sunday in May), and Father's Day (first Sunday in September). Various other holidays are widely celebrated unofficially or are official at the state level, but are not declared public holidays at the national level (so I understand). These include the Monarch's birthday and Labour Day. Labour Day in Australia is celebrated on different days in different states. The day generally commemorates the establishment of the eight-hour day, and this was won separately by various trade unions at different times in different states. The eight-hour day was an early focus of the union movement (see 888) in the nineteenth century. Austrian scientific suicides It seems like a category large enough, or at least disproportionate enough, to merit its own entry:
1. September 5, 1906: Ludwig Boltzmann
2. September 23, 1926: Paul Kammerer
3. September 25, 1933: Paul Ehrenfest
UK Association of University Teachers. According to a webpage viewed in April 2005, it was ``the trade union and professional association for over 48,700 UK higher education professionals'' (this included not just instructional personnel but also librarians and some others).
In addition to a newsletter, they had a magazine cleverly named ``AUTlook.'' Alas, this bit of cleverness will have to be abandoned. In 2006, AUT merged with NATFHE to form a new union called the University and College Union (UCU). One of those ``little magazines.'' This one is published in Puerto Rico and is dedicated to bad poetry in Spanish (subtitle: Revista Internacional de Poesía). Perfect-bound, glossy cover. The cool thing about it is the way they assign dates to the issues. Vol. 1, Núm. 8 is dated ``Noviembre 2002 a febrero 2003.'' Isn't that great? Rhyme schemes? We don' need no steenkeen rhyme schemes! Checking authorization. This is a special terminology used by DSL dialers. For example, say you launch the dialer and it reports Dialer Error629. Connection closed by remote computer. Technical support will conclude that you're successfully connecting but that there are other problems. Check the cabling. Power down and power up. Turn off all other appliances. Jog around the block. Hmm. Apparently your operating system is too old. You should spend a few hundred dollars on an OS upgrade and more memory. Look, why not just buy a new computer? Etc. Thank him politely and call back later. Talk to someone who understands the arcane terminology. ``Authorized''? Let's try another userid and password. Ah-hah -- works! The problem appears to be: your password was munged! By the way, the equivalent terminology from the ``Online Control Pad'' dialog box is Internet Connection Not Established Network connection is not available. Do you want to work offline? This typically means `password mistyped.' AUTOmobile. In Scandinavian countries, bil is common. Humphrey Carpenter has speculated that Autobiography is probably the most respectable form of lying. Maybe it's the only form. According to the back-cover copy of her An Accidental Autobiography, Barbara Grizzuti Harrison was asked to describe the book she was writing and responded, ``an autobiography in which I am not the main character.'' This doesn't strike me as particularly novel. A term from Greek roots meaning `self-headed.' It sounds like it ought to have something to do with soccer. I don't remember our ninth-grade gym teacher, Mr. Carey, using that term when he introduced us to the exotic sport of ``sock-a-bowel'' and pint-size Armando introduced us to the experience of being consistently and reliably out-dribbled, but somehow I'm not surprised. Anyway, it turns out to be a term meaning `self-governed,' used to describe different Orthodox (i.e., Eastern rite) churches. AUTOmatic DIgital Network. Part of DMS. Transfer of infection from one part of the body to another part of the same body. The standard otoscope or whatever it's called has a disposable paper cover for the cone that fits in the outer ear. After looking in my infected ear (outer-ear infection; I guess that qualifies as a sports injury if you catch it in an Olympic pool), my doctor went around to check the uninfected ear. ``Shouldn't you change that?'' ``No, the infection won't transfer.'' Supposing for the sake of argument that he's wrong, I wonder: is infection transferred from one part of the body to another part of the same body by the good offices of a physician properly ``autoinfection,'' ``iatrogenic infection,'' or what? And is the physician a ``vector'' or the 'scope a ``vehicle''? (An auto? BTW, the word transfection refers to something else entirely.) The last time I had a check-up, I asked him (same doctor) why he was examining my ears.
What was he actually looking for? He said he was looking for my brain; if it wasn't there he'd be able to see straight across. If I'd had a brain I would have pointed out that in that case, there was no need to check on both sides. The Divinyls had a hit with ``I Touch Myself.'' The middle line of the chorus is ``When I think about you I touch myself.'' Sort of like doing push-ups, I suppose. You know, the three main forms of plague -- bubonic, pneumonic, and septic, in increasing order of how soon an obituary may be needed -- all result from infection by the same bacterium (Yersinia pestis). They differ essentially in where they are or start out, and one kind can turn into another. Similarly, pulmonary tuberculosis (the usual TB), scrofula, and a host of other unpleasant diseases can all arise from the same bacterium, Mycobacterium tuberculosis. Some of these diseases, however, can be caused by other similar bacteria. Scrofula in children is usually caused by Mycobacterium scrofulaceum or Mycobacterium avium. Spontaneous ionization of a motor vehicle occurring in equilibrium, or the same process occurring with something other than an auto. The reaction H2O --> H+ + OH- is a common example of autoionization. automatic camp-on You stay on a line that rings busy, and when your called party hangs up, your call rings through. I could use this to call some people. A Spanish word meaning `able to care for oneself.' Effectively an antonym of the English word invalid. AUTOmatic VOice Network. This US military network was activated in December 1963, and became the principal long-haul, nonsecure voice communications network within the Defense Communications System. It eventually became a part of the Defense Switched Network (DSN), the replacement system activated in 1990 to provide long-distance telephone service to the military. You can get more information about this system from the ``touch tone dials'' page at telephonetribute.com and by following links from the AFCA home page. When I worked at military labs in the 1980's, my desk phone was always part of AUTOVON. I could call out of the network (and most of my calls off base were off network as well). When calling people at other government labs, I had a choice: I could call their regular number (seven-digit number, preceded by an area code if different from mine) or I could call them within AUTOVON, in which case I always dialed a seven-digit number. The last four digits of the AUTOVON number were the same as the ordinary phone number, and the first three digits essentially identified the military site. There was a slight preference for calling within AUTOVON when possible, simply for budget reasons. Otherwise, for low- or non-ranking people like me, AUTOVON was not noticeably different from the regular civilian phone network. AUTOVON, derived from the Army's Switched Circuit Automatic Network, was in fact designed to provide the Department of Defense with an internal telephone capability functionally equivalent to toll and Wide Area Telephone Service (WATS) calls. However, it was also designed to provide precedence preemption for high-priority (much-higher-priority-than-me) users. This was implemented with a fourth column of keys, the 1633-Hz column described at the DTMF entry. The column, labeled A/B/C/D from top row to bottom row there, had keys labeled FO/F/I/P, for Flash Override, Flash, Immediate, and Priority. (Also, the octothorpe key was labeled A.)
Higher keys had higher precedence, and pressing one had the effect of pre-empting any lower-precedence call that was in the way. (The precedence below ``priority'' was ``routine.'') Phones with higher-precedence keys that were functional were available only to higher ranks in the military chain of command. With a few exceptions (POTUS, Sec'y of Defense, Joint Chiefs of Staff) those with access to them were only authorized to press those keys for specific levels of emergency. Here's some more detail. ATM User-to-User. Autonomous Under{sea|water} Vehicle. A self-propelled submarine robot, intended to function with minimal control input. AUV's are still mostly experimental. Cf. ROV. French phrase meaning à les (in French). This glossary entry is on the very cusp of futility: only a vanishingly small fraction of French-nonspeakers have the requisite level of ignorance to benefit from it, and those few wouldn't know to look here. Perfect! Of course we're not going to give the English. Apple UniX. The license plate number of the Rolls Royce Phantom 337 belonging to Auric Goldfinger, in the 1964 James Bond movie Goldfinger. Goldfinger was played by Gert Fröbe (credited as Gert Frobe). Goldfinger is the chief villain in this one, of course. Do I really have to explain this? Gold cation of valence 1 (Au1+) is aurous. Auric is valence 3 (Au3+)! Honestly, sometimes I think you people don't even care. Also in that movie, Honor Blackman plays the role of Pussy Galore. Somehow I think that when her parents were considering names, the future they imagined for her was nothing like being a Bond woman. (Particularly as she was born in 1927, and Ian Fleming didn't invent James Bond until after he retired with the rank of Commander from WWII service in British Naval Intelligence.) Air-to-Vapor (mass ratio). Mechanical engineers seem to prefer to call this a ``weight ratio.'' Cf. AF. Alleged Vegetarian. Appendix Virgiliana. Authorized Version (of the Bible in English). For a very long time that was the KJV. There's an old saying that a translation is a commentary. There's a Bible commentary called The Unauthorized Version, by Robin Lane Fox. Academy of Video Arts & Sciences. Australian Veterinary Association. It ``is the professional organisation representing veterinarians across Australia.'' 1. A verb meaning `be of use.' It means just that as an intransitive verb. The construction ``avail oneself of'' means for one `to take advantage of.' (Similarly with myself, yourself, etc.) 2. A noun that is apparently short for `speaker availability.' (Availability, of course, is a noun constructed on the adjective available, from the verb avail. It's crazy, but I love this stuff.) Chris Suellentrop did a series of ``Dispatches from Campaign 2004'' for Slate. His September 8 dispatch included this: ``It's been more than five weeks since Kerry last took questions at a press conference, or an `avail,' as it's called.'' Avance Logic, Inc. Makes video and audio chips. Homepage has petulant blinking. I'm not sure in what year I wrote the preceding part of this entry. I checked back in late 2004: no more blink; no more Avance, either. Association of Veterinarians for Animal Rights. American Voter Coalition. Association of Visual Communicators. Look at me when I talk to you! Atomic Vapor Cell. Automatic Volume Control. Isn't it fun to speak progressively more softly, so people lean toward you, and listen real hard, and then suddenly to shout at the top of your lungs so their ears hurt? No? Killjoy. 
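Returning for a moment to AUTOVON: here is a minimal sketch in Python -- my own restatement of the behavior described in that entry, not any official AUTOVON or DSN specification -- of the precedence scheme, with Flash Override above Flash above Immediate above Priority above routine, and a higher-precedence call able to preempt a lower one.

# Precedence levels as described above; a larger number means higher precedence.
PRECEDENCE = {"FO": 4, "F": 3, "I": 2, "P": 1, "ROUTINE": 0}

def may_preempt(new_call, existing_call):
    # A new call may preempt an existing one only if it carries strictly higher precedence.
    return PRECEDENCE[new_call] > PRECEDENCE[existing_call]

assert may_preempt("F", "P")            # a Flash call preempts a Priority call
assert not may_preempt("ROUTINE", "I")  # a routine call never preempts an Immediate one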
(UK) Association for Veterinary Clinical Pharmacology and Therapeutics. Audio-Visual Copyright Society, Ltd. ``Based in Australia, serving the world.'' AVDP, avdp. Alta Velocidad Española. `Spanish [.es] High Speed [train].' A 300 kph TGV derivative operated by RENFE. Cf. ave. AVErage. Try to use this only if you can avoid capitalizing the a, so it isn't mistaken for an abbreviation of some oddly named avenue. In fact, avoid it altogether and use avg. Spanish, `Bird.' See also AVE. [Football icon] Ave Maria Latin, `Hail Mary.' Name and first words of a common Roman-Catholic prayer. A desperation football pass. English for AViation. Cf. ESP, EAV. Avestan. Makes you wonder why they bother to define an abbreviation. American Volunteer Group. Better known as the Flying Tigers. This was a group of personnel (pilots and ground crew) released from active duty in the air forces of the US Army and Navy, serving as volunteers on the Chinese side in the Sino-Japanese war. The group was formed by Colonel Claire Chennault. Chennault had retired from the USAAC as a captain in the 1930's and was appointed to command the largely nonexistent Chinese air forces by Chiang Kai-Shek, leader of the Nationalist Chinese government. The AVG flew Curtiss P-40B fighters purchased by the Chinese government under a special arrangement with Curtiss-Wright. (The British had taken over a French order for P-40B's after the fall of France, and Curtiss had six assembly lines working on the order. Under an arrangement proposed by Curtiss Vice-President Burdette Wright (an old friend of Chennault), the British waived priority on 100 P-40B's rolling off one of those lines, allowing them to be sold to China. In return, Curtiss added a seventh line and delivered later-model P-40's to Britain that were more suitable for combat.) The P-40's used by the AVG were less maneuverable than Japanese Zeros, and they had crude gunsights, but the Tigers developed tactics that allowed them to achieve impressive kill ratios. After the Japanese surprise attack on Pearl Harbor that brought the US into WWII as an active combatant, the Flying Tigers' success was one of the few bright spots in a Pacific war that was starting out badly for the US. (In this connection also, recall James H. Doolittle.) Chennault's status was rather irregular and his command a bit informal. According to a history page at the self-described official site, he was originally invited to China in 1937 by Madame Chiang, on a three-month mission to make a confidential survey of the Chinese Air Force, and his official status until the US entered the war was always a subject of speculation. ``Chennault himself states [probably in his Way of a Fighter] that he was a civilian advisor to the Secretary of the Commission for Aeronautical Affairs, first Madame Chiang and later T.V. Soong. ... Even while he commanded the American Volunteer Group in combat, his official job was adviser to the Central Bank of China, and his passport listed his occupation as a farmer.'' In July 1942, the AVG was incorporated into the USAAF, and Chennault was promoted to brigadier general. Chennault had great publicity, close connections with FDR and the White House, and a good relationship with Gen. Chiang Kai-Shek. In October 1942, he wrote FDR that with just 105 more fighters, and 30 medium and 12 heavy bombers, he could win the war by gaining air superiority and destroying Japanese shipping and industrial production. 
It's not clear how much of this wooly optimism FDR bought into, but Chiang's ground forces (could they even be called an army?) weren't engaging the enemy, so this approach had its attractions. In late spring 1943, Chennault was given command of the US Army's newly formed Fourteenth Air Force, and priority on supplies airlifted from India. The 14th underperformed. Chennault was eased out of command after FDR died. When the war ended in 1945, ten AVG pilots formed an air cargo company called Flying Tiger Line, originally flying Conestoga freighters purchased as war surplus from the United States Navy. It achieved a number of firsts, and after acquiring its rival cargo airline Seaboard World Airlines on October 1, 1980, it surpassed Pan Am as the world's largest air cargo carrier. As it happens, my uncle Robert flew for them in the late 1970's or early 1980's. In 1989, the company was purchased by FedEx. AVeraGe. Plural avgs. Singular also abbreviated ave. (deprecated). AViation GASoline. Advanced Video Guidance Sensor. NASA designation of a device developed for DART that gathers navigation data by capturing reflections from laser beams directed at an object at close range (within 500 meters), using them to compute relative bearing, range, and attitude. (Though not all at the maximum range. Range and attitude -- relative orientation of target craft -- were expected to be available only within 200 meters. I'm not sure they know if that's so yet.) Ambulatory Visit GroupS. Academy of Veterinary Homeopathy. The content of what ought to be the homepage has me a bit disoriented, but anyway I'm glad that even ducks can have a dose of quackery. Advanced Very-High-Resolution Radiometer. Association for Veterinary Informatics. Audio Video Interleaved. Advancement Via Individual Determination. Antelope Valley Internet Dialers. ``The Internet User Group for the Antelope Valley.'' Judging from the map on their home page, it appears that Antelope Valley is located on earth, and probably not in Antarctica. Oh, here's something: meetings are held in Lancaster, CA. Also, there are no meetings until further notice. Audio-Visual Information Systems. Latin, `bird.' Well known, of course, from the expression rara avis, `rare [i.e., strange] bird.' The Latin word avis became ave in Spanish, so the Latin prayer Ave Maria would sound like `Mary bird' in Spanish, to anyone who didn't know that it doesn't mean that. Spanish noun meaning `advertisement' and verb meaning `I notify, alert.' Spanish, `visualize, envision.' I think this may be primarily a Latin American usage. If the English verb eviscerate had a close cognate in Spanish, it would be eviscerar, which in Latin America would sound close to avisorar, except for the initial vowel. Association of Visual Language Interpreters of Canada. This appears to be one of those unrequitedly bilingual organizations. (Here ``bilingual'' and ``one of those'' are both meant in the Canadian sense or context. Then again, maybe not.) The old AVLIC logo featured a Canadian maple leaf (well, maybe a stylized sugar maple leaf; I'm no naturalist) and the text ``AVLIC/AILVC.'' The new logo has a more naturalistic maple leaf dotting the letter i of a lower-case ``avlic.'' Also, the English name of the organization is spelled out along the bottom, either alone or above the French version. 
To be fair for a change, I should probably note that there's a good reason why AVLIC/AILVC seems not to be well-represented in French-speaking parts of Canada, and why there is no provincial AILVC chapter for Quebec. According to the AVLIC Mission Statement, AVLIC is ``a national professional association which represents interpreters whose working languages are English and American Sign Language (ASL).'' (That is, they interpret between ASL and English.) Atomic Vapor Laser Isotope Separation. ArterioVenous Malformation. Here's a support page. Audio Video and Multimedia. Automated Valuation Model. Used by expert systems to generate assessments -- in real estate, at least. Automatic Vehicle Monitoring. Normally refers to remote monitoring of road vehicle location. American Veterinary Medical Association. The main publications of the AVMA are the Journal of the American Veterinary Medical Association (JAVMA) and the American Journal of Veterinary Research (AJVR). Arkansas Veterinary Medical Association. Cf. the national AVMA. American Veterinary Medical Foundation. Audio Video and Multimedia Services. Australian Vaccination Network. A currency subunit used in Macao. The basic currency unit is the pataca, equal to 100 avos. Macao is a former Portuguese colony, and avo is a much-shortened form of Portuguese oitavo, `eighth.' I think this is cute because the original word has been not merely shortened, but shortened almost to its semantically least significant component -- essentially an inflection. It's like shortening eighth to th. Similar radical shortenings (radical eliminations, literally) in European languages include auto, bil, and uncle. More generally, Japanese has a lot of much-shortened loans from European languages, particularly English. For some examples, see the perm entry. Arginine VasoPressin. Plays a rôle, along with the renin-angiotensin system and natriuretic hormones, in water homeostasis. Why can't they make a beer that doesn't take you to the bathroom? Is the current scheme a safety feature? Assistant Vice President. Association Variose pour la Promotion de la Sidénologie. The same organization serves a more Englishy site where they explain AVPS as the ``Fundation for AIDS Research & Care.'' (``Thi site is first intended to professionals,'' dontcha know.) Aortic Valve Replacement. { Adult | Age } Verification Service. You say you're over eighteen, eh? Then you must have a what -- VISA, MasterCard, American Express? What's the number? Expiration date? Hmmm... Looks like you're good! Justreadtheagreementand SIGN HERE FOR YOUR ``FREE PASS'' TO OVER 200,000 HARD-CORE SITES! American Vacuum Society. Really, nature does not abhor a vacuum -- it's the pressure outside that pushes stuff in. The first time I wore my ``Nature abhors a vacuum tube'' tee shirt to work (in 1994 or thereabouts), a student objected! Anti-Virus Software. I should probably warn you that the editor of this glossary had a cold in March, and in April the compiler came down with probably the same rhinovirus. The two are in frequent email contact, and these emails affect what you read on your computer! You shouldn't be too worried, but if I were you I'd wipe the screen and the keyboard, just in case. Heck, wipe the file system -- you can never be too careful. Use some Listerine on the speakers, too, and any other oral cavities on your PC. Application Visualization System. Association of Vision Science Librarians. 
It's ``an international organization composed of professional librarians, or persons acting in that capacity, whose collections and services include the literature of vision.'' Amphibious Vehicle, Tracked. AudioVisual Terminal. Automated Voice Technology. AutomobilVerkehrs- und -Übungsstraße. (I.e., AutomobilVerkehrsstraße und AutomobilÜbungsstraße.) German `Automobile-Traffic Streets and Test Tracks.' Formerly Rennstrecke für Autorennen in Berlin (`Racetracks for Car races in Berlin') now a part of the Autobahn system). That's about how people drive on the Autobahn too. Arbitrarily-Varying WireTap Channel. (Domain code for) Aruba. The principal export is homeward-bound tourists. The official languages are Dutch and Papiamento. Papiamento written looks like Spanish with spelling slightly adjusted -- less different from Castilian (the Iberian language called ``Spanish'' in English) than Catalan is -- plus a number of Dutch words. Aruba is a Dutch possession. On April 29, 2003, Queen Beatrix of the Netherlands knighted Aruba native Sidney Ponson. At the time, he was a 43-54 career pitcher for the Baltimore Orioles, with a 4.74 ERA. He had never had a winning season. In the subsequent three months, he caught fire, racking up a 12-5 record with a 3.45 ERA. He turned down a $21 million 3-year deal and at the July 31 non-waiver trade deadline he was dealt to the San Francisco Giants for for pitchers Kurt Ainsworth, Damian Moss and Ryan Hannaman. In San Francisco he was only 3-6, but had a 3.71 ERA. In the off-season, Baltimore lured him back for $22.5 million over three years. You know, the sports analysts talk about his not giving up the long ball so much in 2003, and mental toughness and rotator-cuff injuries and controlling his weight -- what a crock! Pitching is a science, like astrology and psychology. He just got psyched by the knighthood. After ten games in 2004, he's 3-7 with an ERA of 6.47. Addison-Wesley or Addison-Wesley Longman, or Addison-Wesley Publishing Group. Can you say ``assignment agreement''? Sure you can! A chain of root beer stands named after the founders -- Roy Allen and Fred Wright. It was the earliest restaurant franchise. ``Another World'' An NBC daytime soap opera. Another homepage, with links to NBC's. Occurs in email subject headers. Apparently stands for Antwort (German: `answer'). Application Whatnot. Okay, I confess, I made it up. A moment of weakness. ArtWork. Typesetters' abbreviation. American Whitewater Affiliation. ``[T]o conserve and restore America's whitewater resources and to enhance opportunities to enjoy them safely.'' See some relevant phonological thoughts at the AWWA entry. American Women's Association. An American expats' mutual support group. Similar organizations go by various similar names (American Women of ..., American Women's Club of ..., American Women's Organization of ..., etc.). The umbrella organization is FAWCO. See also AWA Singapore, which serves a page of AWA links in various countries. Animal Welfare Act, originally enacted in 1966. In amendments passed in 1970, the USDA is instructed to conduct an annual lab-animal census. They counted 1,213,814 in 1998. Such precision! What day was that? Uncertainties concerning what constitutes an animal under that law were resolved by Secretary of Agriculture Clifford Hardin, who exercised his administrative authority to exclude rats, mice, and birds. 
These together make up anywhere from eighty to ninety-eight percent of warmblooded lab animals, depending on which interested party's estimate you believe. The AAVS filed suit against the USDA in 1999, maintaining that the original intent of the legislation was to include them. It's a good thing no one is proposing counting fruit flies or flatworms. Here was the USDA's breakdown for 1998:

    Oooh! Bunnywabbits     287,523
    Guinea pigs            261,305
    Other Animals          142,963
    Other farm animals      53,671

``Other animals'' includes ferrets, woodchucks, armadillos, chinchillas, horses, spotted hyenas, and opossums. The categories are given above in the order in which the USDA presents them. If you don't like that order, then you could try suing the USDA. A few groups that you would expect were unhappy with the decision to exclude the most common lab animals. They took the usual multi-track approach -- direct petition, indirect pressure, lawsuit. On October 6, 2000, a lawsuit brought against the USDA by the ARDF was dismissed by US District Court Judge Ellen S. Huvelle. Airborne Warning And Control System. An electronically very souped-up Boeing 707. [Pronounced ``AY-wax.''] Alert, Well, And Keeping Energetic. The American Sleep Apnea Association (ASAA) organizes local support groups called A.W.A.K.E. groups in the fifty states and D.C., and in the seven Canadian provinces that have a land border with the lower 48 states. (Those seven turn out to be all the Canadian provinces that have a land border with any part of the territory of the US, because the Yukon Territory, oddly enough, is a territory and not a province.) Some of the groups have websites. This page leads to contact information for all groups in the A.W.A.K.E. Network. A nonglossy magazine published by the Jehovah's Witnesses, for its missionaries to hand to prospects. The Gideons leave a whole Bible on its back in your hotel room, but not even one missionary in that position. ``The week of March 14-20 2004 has been declared Severe Weather Awareness Week by the Governor of the State of Indiana and by the Commissioners of St. Joseph County.'' This isn't getting off to a very good start -- I didn't find out until the week was two days old. I guess I missed the first announcement on account of the wild festivities for Einstein's 125th birthday. ``As part of Awareness Week, the State Emergency Management Agency and the National Weather Service will be conducting two `Test Tornado Warnings' between 2:00PM-2:30PM and between 7:00PM-7:30PM, Wednesday, March 17, 2004.'' March 17th in St. Joseph County, home of the Fighting Irish. If you think the Einstein shindig was big... ``Should actual severe weather be a threat on March 17, the testing will be held on March 18.'' It's reminiscent of the day of the Doolittle raid in Tokyo. You know, this whole awareness thing was so memorable that the next year when I ran across the forgotten old email announcing it, I created an entirely new entry for it (contrast). I may be stuck in a rut, but I have deleted the announcement. awareness months Various organizations lay claim to portions of the calendar for propaganda purposes. They usually take a day, a week, or a month. Most such designations seem, individually, to be useful or at worst anodyne. To politicians, it looks like a cheap way to satisfy constituents and look public-spirited into the bargain. Thus, it's easy to get lawmakers to vote, and chief executives to proclaim, that these designations are official lah-dee-dah. Therefore we'll pretty much ignore that.
Many of these observations, celebrations, PR events or what-have-you's have names that include ``Awareness Month,'' and many don't. Months claimed in connection with health issues are frequently named ``<Foobar> Awareness Month'' or ``<Foobar> Safety Month.'' Many related to group pride or solidarity of one sort or another get names like ``Heritage Month'' or ``History Month.'' Just to shake things up, some group is bound to rename its ``<Foobarian> Pride Month'' ``<Foobarian> History Awareness Month.'' And on the other side, the shills for research on one or another disease will discover that the victims live in shame, requiring ``Oblong Somitis Incognita Awareness Month'' to be rechristened ``OSI Pride Month.'' In short, I don't think the distinction between awareness months and pride months, say, is a sharp one, so I'm going to use this entry as a central repository for designated months, however designated. The entries for awareness days (eventually) and awareness weeks will function similarly. There aren't a lot of awareness trimesters or awareness fortnights, although Prevent Blindness America does sponsor a 61-day ``month'' (see PBA). I can google up at most tens of thousands of awareness weekends, versus millions of weeks and months. Most designated months coincide with calendar months. This is a sensible approach, since ``October is Breast Cancer Awareness Month'' is a little more memorable than, for example, ``The 31 days following the fifth day after the fourth Thursday in September are Breast Cancer Awareness Month.'' In order to discourage the sensible practice, I'll go out of my way to provide more extensive publicity -- a whole entry, say -- when I become aware of month-long awareness months that don't coincide with calendar months. The only one I have an entry for just now is Hispanic Heritage Month. (``National,'' as in ``National Holiday,'' is the frequently elided first word in the official names -- as they occur in the presidential proclamations -- of many of the heritage and history months.) I'm going to have to automate this. It's too much. In connection with the business of aligning awareness months with calendar months, let me note this: When Comte created the Positivist Calendar, even though he made 28-day months and intercalated five or six year-end days that had no weekday correspondences (so that the rest of the year, days of the week corresponded to date mod 7), he did align the years. (Year 1 coincided with year 1789 of the Gregorian calendar, naturally.) awareness weeks Awareness weeks are the young of awareness months, so go to that entry for information about the species generally. Here's a list of awareness weeks that (a) I am aware of or (b) I was aware of: Automated Work Administration System. Afrikaner Weerstands Beweging. Afrikaans: `Afrikaner Resistance Movement.' A neonazi party in South Africa, led by Eugene Terreblanche, sentenced to six years in prison for the attempted murder of a black man, who was paralyzed in the beating. The party flag is essentially the same as the flag of the National Socialist (Nazi) Party of Germany (black device on white disc on red field), except that the four-armed black swastika is replaced by a three-legged black triskelion. Supposedly, this emblem represents three sevens. Auto White Balance. All-Wheel Drive. Hey, just try driving without one. AWD on a vehicle with four wheels sounds like it ought to be equivalent to 4WD, but it's not. 
4WD includes ``low-range'' (high torque) gearing for deep mud or snow or steep grades. A 4WD must be stopped or slowed to a crawl to shift in or out of low range (done by toggling a switch or lever). AWD is power to all wheels, but without the special gearing. AWD, .awd At Work Document. Microsoft-defined file type and filename extension for a compressed bitmap format used for faxes. Specifically, an OLE compound object file that stores bilevel (B&W) facsimile data. The compression algorithm used in AWD is not published, but is based on CCITT Group 4. Active Wavelength Demodulation System. Advanced Warfighting Experiments. Asian Weightlifting Federation. American Wire Gauge. A set of numbers designating of (US) standard wire thicknesses. Arbitrary Waveform Generation. Array Waveguide Grating. American Wire Gauge. Additive White Gaussian Noise. Not very realistic sometimes, but a mathematically tractable and convenient model for the systematic analysis of linear systems. Are We Going To Have To Go Through All { That | This } Again? Are We Going To Have To Go Through { That | This } Again? Alert With Info. The Strawberry Statement collects the scattered thoughts of James Kunen, a 60's student radical at Columbia University. (Bibliographic details at the AAHM entry.) It's written in diary style, so I can tell you that on a Tuesday, July 16, 1968, the author visited the programming director at WABC radio in New York City. The two had a mutually unsatisfactory meeting, but agreed that there was some news content on the mostly-music-format WABC-AM, in the form of two newscasts per hour. Kunen felt these were insufficently detailed, and characterized them for the book: ``Canada is still sinking and the Russians have bombed Detroit, now back to the Show.'' Animal Welfare Information Center. I'm out of work. Can my dog get food stamps from Animal WIC? No, AWIC is part of the National Agricultural Library. Advanced Weather Interactive Processing System. Association for Women in Science. Copyeditor's abbreviation for awkward. [This glossary entry is just begging for a juicy example, isn't it?] A pattern-matching utility in Unix. Named after the last initials of its creators Al Aho, Peter Weinberger, and Brian Kernighan. Kind of a batch version of sed. Depending on your release, this may differ from nawk (New awk). Michael Neumann's extensive list of sample short programs in different programming languages includes a couple of awk programs. Animal Welfare League. A simple tool -- something like an ice-pick -- for making holes in leather. An ice-pick usually has a long handle like that of a screwdriver. An ice-pick applies impact force; it is held in the fist, about as a dagger is held. An awl applies steady pressure to a precise point; its handle has a blunter end that can be cupped in the palm. All the awls I've seen, anyway. Nowadays, shoe repair and manual shoe manufacture have gone the way of cobblestones. I suspect that most English-speakers' first encounter with the word awl, or even with the concept, occurs in Shakespeare's tragedy ``Julius Caesar,'' in the punny opening scene. Sadly, the standard (Schlegel) German translation is missing this bit. It wouldn't have been hard to recreate the pun: English awl and all can be translated to Ahle and alle. (The respective initial vowels here are short and long in quantity, but these are close enough for a good pun -- especially with a good actor's pronunciation.) Air and Waste Management Association. American Wholesale Marketers Association. 
Ancient World Mapping Center. American Women's Organization of Greece. Absent WithOut Leave. This is a US military acronym, but even outside the military, I think it is one of the best known of military acronyms. The writer of an AP news item distributed September 8, 2004, seemed to think it necessary to define it (incorrectly, of course, as ``Away Without Leave''). It's also occasionally expanded as ``absent without official leave,'' but in the military usage it is implicit that leave must be granted offically, or rather by a commanding officer. The way the Oxford Dictionary of the US Military handles this is to expand it as ``absent without (official) leave.'' They claim the acronym came into use in the 1920's, but I think it was already in use during WWI. Various American soldiers AWOL from their units during one or another World War are complaisantly mentioned by Gertrude Stein in some of her books. Ancient World OnLine. Ancient World On TeleVision. The Association of Writers and Writing Programs. It's hardly surprising that there'd be some association. Average Wholesale Price. Arab World for Research & Development. It's ``an independent research center (registered with the Ministry of Economy)... works in social political and economic research and development... highest standards in research methods including surveys, opinion polls, focus groups, in-depth interviews, and case studies.'' It conducts projects throughout the Arab world, but it seems to be based in Morocco. American Welding Society. Automatic Warning System. Now installed on most British railway lines; first used in 1948. By each signal there is one permanent magnet and one electromagnet that is energized when the signal is green. When the train passes the signal, a bell sounds in the driver's cab if it's green, and a horn otherwise. When the horn sounds, the driver must push a button within a few seconds or else the brakes will be applied. Since the 1950's there has also been a mechanical visual display which changes to a sunburst pattern when the button is pushed, and to plain black when the bell rings. Such a system is called ``fail-safe'' because its failure modes are designed to be safe. For example, in a power failure, the electromagnet goes off and the system signals to stop; if the brakeman is incapacitated, the brake goes on automatically. A common way for fail-safe systems to fail to perform safely as designed is by being turned off. In the Jethro Tull song `Locomotive Breath,' Ian Anderson sings something like old Charlie stole the handle and the train it won't stop going no it couldn't slow down For more railway-related songs, visit this chronological listing with comments or this alphabetic list. The word fail-safe came into popular use with the novel Fail-safe, by Eugene Burdick & Harvey Wheeler, (NY: McGraw-Hill, 1962). This story of accidental nuclear war was published during the Cuban missile crisis and was made into a movie of the same name (Dr. Strangelove without the yuks). Aviation Week and Space Technology magazine. Abstract Window Toolkit. Provides the Java GUI. Contained in the java.awt package. (A package is a collection of importable classes. Don't you just love the uneven level of detail you get in this glossary?) American Water Works Association. ``[A]n international nonprofit scientific and educational society dedicated to the improvement of drinking water quality and supply. Founded in 1881, AWWA is the largest organization of water supply professionals in the world. 
Its more than 50,000 members represent the full spectrum of the drinking water community: treatment plant operators and managers, scientists, environmentalists, manufacturers, academicians, regulators, and others who hold genuine interest in water supply and public health. Membership includes more than 3,700 utilities that supply water to roughly 170 million people in North America,'' including Mexico, where the word for water (agua) sounds more like awwa than it looks, because the g in Spanish is glottal. (The Spanish word is derived from the Latin aqua; for a similar pun on this, see OCWA.) The consonantal w is a glide, and if one purses the lips slightly when pronouncing it, one produces a bilabial sound that is represented by a beta in the IPA, and which is the usual sound of b in Spanish. It is therefore not surprising that in ordinary speech, the glottal g and bilabial b of Spanish sound similar. This has led to some orthographic changes. For example, in Cervantes's original text, the word for `grandmother,' now spelled abuela, was spelled aguela. For some discussion of the Modern Greek g (gamma), see the galaxy entry. Haestad Methods sponsors a number of related electronic discussion groups. See their forums page for information about WaterTalk, SewerTalk, StormTalk, and GISTalk. They also sponsor a Spanish-language version of WaterTalk, called AquaForo. American Water Works Association (AWWA) Research Foundation. Aww, mama can this really be the end? To be stuck inside of Mobile, With the Memphis blues again. Refrain of ``Stuck Inside Of Mobile With The Memphis Blues Again.'' First released by Bob Dylan on ``Blonde on Blonde'' (1966). A Webpage Wasted On Tom Lehrer. This GeoCities site has been deactivated due to inactivity. Are you the site owner? Click here to reactivate your site. There was also A [now defunct] Webpage (Wasted) On Tom Lehrer. Maybe it was related content. The names allude to his 1959 album, ``An Evening Wasted with Tom Lehrer.'' Asociación World Wide Web Argentina. (A translation? Hmmm. Let's see if we can guess something here... maybe, em, could be sort of rough, but, uhh, well, something like ``Argentine WWW Association''?) Architecture eXtended. (Antediluvian PC/AT term.) Axe, hatchet. Advanced X-ray Astrophysics Facility. Airborne eXpendable Current Profiler. Another one of those secret North Germanic acronyms, like KLM. Its expansion is probably an off-color inside joke, but ... ``The AXE system is Ericsson's core switching platform for all narrowband and wideband public network switching applications well into the [twenty-first] century.'' axial lead Refers to a cylindrical two-lead electrical package with one lead coming out of the center of each end. Cf. radial lead. An obvious or generally accepted proposition. The word reached English via French axiome < Latin axioma < Greek axíôma, `that which is worthy or fit.' Probably the best-known statement of an axiom is the first sentence of chapter I in Jane Austen's Pride and Prejudice: ``It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.'' Axioms explicitly so-called occur most often in mathematics. Most high-school students used to make the acquaintance of axioms, even if they did not come into a friendly relationship with them (i.e., even if they didn't exactly become familiar) in standard one-year courses in formal geometry.
That was before high-school geometry courses were abased by mathematics-hating ``teachers'' and other saboteurs of children's education, who adopted wretched books full of time-wasting pictures and geometry-related stories with a very optional afterthought chapter or two about proofs at the end. Euclid's geometry text taught rigor of thought to over twenty centuries'-worth of schoolboys. Euclid made a distinction between axioms and postulates, explained at the postulate entry. Anomalous X-ray Pulsar. Academic Year. Here're the AY calendars for UB in 1995-1996 and 1996-1997. Alpha Youth Athletic Association. Funded by the Borough of Alpha, New Jersey. All You Can Eat. Ask Your Local Orthodox Rabbi. (Also: ordained rabbi.) It's a lot faster than wading through the enormous Judaism FAQ. Same as CYLOR. You have my permission to pronounce this like the word its very creation suggests. A simple two-dimensional locally-anisotropic lattice-gas model (for CuO-plane superconductivity) with nearest- and Next-Nearest-Neighbor Interactions, originally proposed by D. de Fontaine, L. T. Wille and S. C. Moss in Phys. Rev. B, vol. 36, pp. 5709ff (1987). I'm not sure if the author list includes the name of the graduate student whose job was to carry the acronym expansion tools. America's Youth on Parade. ``There's no twirling spectacular quite like AYOP. It brings together the best baton twirlers, teams and corps in the world for a series of National and World Open Championship contests - all under one umbrella. It can be appropriately called the `World Series of Baton Twirling' ... sanctioned by the NBTA INTERNATIONAL.'' And where are AYOP events held??? That's right -- they're ``held [every year in July] in the spacious, air conditioned Notre Dame University Athletic and Convocation Center (JACC)''!!!! Hip-hip hooray! Hip-hip-hooray! Hooray! Hooray! Go! Fight! Win! Hip-hip hoo--what? Oh, it's not cheerleading? Better go to the majorette entry (once it exists) and learn more. Adequate Yearly Progress. Under the terms of the NCLB Act, federal (US) funding depends on demonstrated AYP. Measures of AYP, in order to be considered valid for NCLB purposes, must have a 95% student participation rate. (There are easy ways around this requirement, I think. When similar state-level legislation was implemented in Texas, large numbers of the poorest-performing students were recategorized as learning-disabled or encouraged to drop out and enroll in GED programs, and some exam papers were doctored.) Arizona. USPS abbreviation. The Villanova University Law School provides some links to state government web sites for Arizona. USACityLink.com has a page for Arizona. Arizona is a community property state. The US is the world's second-largest copper producer after Chile. Each produces about two million tons a year. You might ask: if they both produce about that much, and if production varies by maybe 10% year-to-year (how did you know that?), then how come Chile is consistently first and the US consistently second? Go ahead, ask, I can answer. The reason is, production is driven by the market. In a year with high demand, prices go up and production everywhere increases, so while the overall numbers vary a lot, the ratio of production between major producers varies less rapidly. Part of how this works is that the cost of extraction varies for different sources. At any given time some sources are not worth using. When prices increase, it becomes profitable to use those higher-cost resources. 
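To make that marginal-cost point concrete, here is a toy sketch in which each mine produces only when the market price covers its extraction cost; the mine names, costs, and capacities are invented for illustration and are not actual industry figures.

    # Toy illustration: a mine contributes output only when the market price
    # covers its extraction cost. All numbers below are made up.
    MINES = [
        # (name, extraction cost in $/ton, capacity in kilotons/yr)
        ("low-cost Arizona mine", 2000, 600),
        ("mid-cost US mine",      3000, 300),
        ("high-cost US mine",     4500, 200),
    ]

    def us_production(price_per_ton: float) -> int:
        """Total output (kt/yr) from mines whose cost is covered by the price."""
        return sum(cap for _, cost, cap in MINES if price_per_ton >= cost)

    for price in (2500, 3500, 5000):
        print(price, us_production(price))   # 600, 900, 1100 kt/yr

As the price rises, the higher-cost mines switch on, which is why overall output swings more than the ratio between major producers does.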
Major producing countries like the US and Chile have a number of such mines, so production by both varies with world demand. Some statistics show this kicking-in of higher-cost resources. In the US, Arizona has the richest and most economically efficient copper mines, and in a typical year between a half and two thirds of US production comes from Arizona. When demand is low and increases rapidly, most of the extra production comes from Arizona, which has ready excess capacity. On the other hand, when demand increases steadily, Arizona's share declines, as higher-cost producers enter the market. Instead of saying Arizona here, I probably should be saying Phelps-Dodge. Of course, a lot of other factors affect production, such as resource depletion, lack of investment capital (a major factor for Zambia), political issues (gee, why can't Zambia just borrow abroad on the strength of its rich resources, and why did the bottom fall out of Zairian production in the early nineties?), personnel and transport (proximity to market) considerations, etc. (Domain code for) Azerbaijan. American Zinc Association. More links for Zinc at Zn. Association of Zoos and Aquariums. Founded in 1924 as the American Association of Zoological Parks and Aquariums (and abbreviated AAZPA), later known as the American Zoo and Aquarium Association. I think the current name (I write in 2009) was adopted around 1997. The abbreviation AAZA and the name ``American Association of Zoos and Aquariums'' (those are prophylactic quotation marks) have also been used. With all these different tags, I would have liked, just once, for them to have used ``aquaria'' in the name. Heck, I'll do it myself. Association zaïroise de défense des droits de l'homme. `Zaire Association for the Defense of Human Rights.' Founded in 1991. Changed its name to ASADHO when Mobutu's government fell and Laurent Kabila changed the country's name to Democratic Republic of the Congo. Spanish: `hostess, stewardess.' General term for an attendant at a public gathering or on a plane or train, etc. ``Attendant'' here is meant in the usual sense of someone who attends to the needs of the public, rather than someone who simply attends an event (attendee). That might be a public attendant. Everything would be so much easier if ``servant'' didn't have such poor connotations. Anyway, the male form of the word is azafato. Azafata and azafato are the only terms I've ever heard used in Spanish that would be translated as `flight attendant.' The fact that the attendance takes place on a plane is apparently not regarded as meriting explicit recognition. Spanish noun (masculine) meaning `luck, fortune' or `good fortune,' just as the English noun luck means luck or `good luck,' depending on whether you're speaking generally or wishing it to someone. ``Juegos de azar'' are `games of chance.' It's slightly unusual to have a noun ending in -ar that isn't the noun use of a verb infinitive, but you get used to it before the time when you can remember getting used to it. Another slight oddity: the woman's name Pilar. [Other non-infinitive nouns ending in -ar that I can think of are male: pulgar (`thumb'), collar (`necklace'). Mar is trickier; see its entry.] The word asar, which in Latin American pronunciations is a homophone of azar, is a verb meaning `cook over an open flame.' Asado, meaning precisely `grilled beef steak,' is the national dish of Argentina.
Latin had four classes of verbs, whose active infinitives (if they weren't deponent verbs they had active infinitives) ended in -are, -ire, or -ere. (That's right: mere spelling didn't quite tell you the conjugation of -ere verbs.) The -are class was the largest, I'm pretty sure. Romance languages typically collapsed these four regular conjugations into three, and the conjugation that collected the -are verbs (-ar in Spanish) was usually still the largest group. Modern Greek has a class of verbs with infinitives ending in -aro. It dates back to Byzantine times, when it was constructed on the basis of -are verbs borrowed from Italian (or perhaps more precisely Venetian). The ending is highly productive, and seems to provide the most common conjugation for loan verbs. For example, stoparo and sakaro (`to stop, to shock') are standard in Modern (demotic) Greek today. (German has a similar class of verbs, with infinitives ending in -ieren, mostly borrowed from French.) Greek-speakers living in foreign countries often use this conjugation to create hybrids used in local versions of Greek (a North American example: muvaro, `to move'). The pattern is not uniform, however. Greeks in Germany use preparizo for `to prepare,' from the German preparieren. The German verb is borrowed, in turn, from the French preparer. This verb is also an -are verb (viz., it's derived from the Latin preparare). I believe that Latin -are verbs generally ended up as -er verbs in Modern French. azide, azido- An azide is an organic chemical with an N3 functional group. That is, a chemical which can be represented by the formula R-N3, where N is nitrogen and R represents a molecule bonded to the functional group through a carbon chain. Particular azides have names including the prefix azido-. Note carefully the difference between an azide and an amine. An azide has three nitrogens bonded to one organic group; an amine has three organic groups bonded to one nitrogen (R3N). The AriZona Language Association, Inc. ``[T]he not-for-profit professional association for language teachers in Arizona, dedicated to promoting the effective teaching of all languages. AZLA is the Arizona affiliate of ACTFL (the American Council of Teachers of Foreign Languages) and SWCOLT (the Southwest Conference on Language Teaching).'' AZOmethane. (CH3)2N2. AriZona Planning Association. A chapter of the APA. A-Z soup Just give it a second. You can figure this one out. AZidoThymidine. Systematic name, minus the numbers: dihydro methyl pyridinyl carbonyl azido dideoxythymidine. It has a lot of alternate trivial names, such as retrovir and zidovudine (abbreviated ZDV). It's an important AIDS drug, in the class of NRTI's. Like all of the drugs first found effective against AIDS, it somehow blocks the action of reverse transcriptase, which a retrovirus like HIV uses to insert its RNA-encoded genetic instructions into the host cell's DNA. A time-release form of AZT. A characteristic copper ore: Cu3(CO3)2(OH)2. (A structural diagram appeared here in the original; it did not survive conversion to plain text.) The mineral takes its name from its color. For more about the occurrence of this hydroxy-carbonate, see the Fahlerz entry. For a similar mineral, see malachite. AriZona Veterinary Medical Association. See also AVMA. Indian pronunciation of English assume. Bohr Radius. The radius of the orbit of an electron in Bohr's model of the hydrogen atom; it is also the scale parameter in the eigenstates of the Schrödinger equation for the hydrogen atom. It's about 0.52917721 Å, or about two nanoïnches in, uh, customary units.
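That number is easy to check from the formula for the Bohr radius given just below, a0 = ħ / (m0 c α). A minimal numeric sketch, assuming scipy (with its bundled CODATA constants) is installed:

    # Numeric check of the Bohr radius, a0 = hbar / (m0 * c * alpha).
    from scipy.constants import hbar, m_e, c, fine_structure, angstrom, physical_constants

    a0 = hbar / (m_e * c * fine_structure)
    print(a0 / angstrom)                          # ~0.529177..., in angstroms
    print(physical_constants["Bohr radius"][0])   # CODATA value in meters, for comparison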
The Bohr radius is itself used as a unit of length (as, for example, in the definition of a dimensionless screening radius rs). As a length unit, the Bohr radius is also called a bohr (q.v.). The formula for the Bohr radius is a0 = ħ / (m0 c α), where ħ is the reduced Planck's constant (h/2π), α the fine-structure constant, c the speed of light in vacuum, and m0 the free electron mass. If you want to compute the properties of an isolated hydrogen atom, you start with the complete Hamiltonian for the nucleus and electron, and separate out the Hamiltonian for the center-of-mass motion. This leaves a Hamiltonian for the electron-nucleus separation. (In classical physics, the Hamiltonian is a function of independent momentum and coordinate variables, and ``canonical'' equations of motion equivalent to Newton's equations are obtained as first-order differential equations involving partial derivatives of the Hamiltonian. In quantum mechanics, the Hamiltonian is an operator function of momentum and coordinate operators, and it is formally identical to the classical Hamiltonian so long as intrinsic spin is ignored. The Schrödinger equation is a first-order partial differential equation involving the quantum Hamiltonian.) Anyway -- the Hamiltonian, or any equations derived from it, looks the same for the electron-nucleus separation as for an electron orbiting an infinite-mass nucleus, but with a ``reduced mass'' (its value, half the harmonic mean of the electron and nuclear masses, is about 0.05% smaller than the free electron mass). Using the reduced mass can give you a slight improvement in accuracy for an even slighter amount of computational work, if all you're dealing with is an atom with one electron, or a Rydberg atom with only one highly excited electron. (A Rydberg atom is an atom with one or few electrons in large-n states, and the other electrons not in highly excited states.) The Bohr radius, however, is defined using the free electron mass, and not the reduced mass. Diode imperfection factor (A0). The zero subscript indicates that the correction is applied to a particularly elementary model: a single-exponential (Ebers-Moll) model. A paper dimension standard used only in those corners of the world (mostly just a few remote stations in Antarctica, a bunch of Pacific islands, some parts of North America, and the continents of Australia, Europe, South America, Africa, and Asia) that stubbornly cling to centuries-old metric units. A0 sheets have a total area of 1 square meter, and a ratio of length to width that is the square root of 2. Each successive standard size (A1, A2, ...) is defined by halving the length of the longer side of the sheet, thus preserving the ratio of height to width. The earliest known suggestion of this scheme was by Georg Lichtenberg, in a letter to Johann Beckmann dated October 25, 1786. [The old quarto, octavo, 16mo, etc. are also defined by successive halvings, but have two width and length ratios (whose geometric mean, of course, is also the square root of 2). Cf. B0.]

    Name   Area (sq cm)   Width (cm)   Length (cm)   Length (in)
    A0         9999.5        84.1        118.9          46.8
    A1         4995.5        59.4         84.1          33.1
    A2         2494.8        42.0         59.4          23.4
    A3         1247.4        29.7         42.0          16.5
    A4          623.7        21.0         29.7          11.7
    A5          310.8        14.8         21.0           8.3
    A6          155.4        10.5         14.8           5.8

It is superfluous to note that Herman Melville was rather a literary naturalist. But in chapter 32 (``Cetology'') of Moby Dick, he makes a surprisingly direct connection: ``According to magnitude I divide the whales into three primary BOOKS (subdivisible into CHAPTERS), and these shall comprehend them all, both small and large. I. THE FOLIO WHALE; II. the OCTAVO WHALE; III. the DUODECIMO WHALE.
As the type of the FOLIO I present the SPERM WHALE; of the OCTAVO, the GRAMPUS; of the DUODECIMO, the PORPOISE.'' After enumerating the Folio whales, he writes (the ``books'' here are still metaphorical; we continue in chapter 32 of Moby Dick):       Thus ends BOOK I. (Folio), and now begins BOOK II. (Octavo). OCTAVOES.*--These embrace the whales of middling magnitude, among which present may be numbered:--I., the GRAMPUS; II., the BLACK FISH; III., the NARWHALE; IV., the THRASHER; V., the KILLER. *Why this book of whales is not denominated the Quarto is very plain. Because, while the whales of this order, though smaller than those of the former order, nevertheless retain a proportionate likeness to them in figure, yet the bookbinder's Quarto volume in its dimensioned form does not preserve the shape of the Folio volume, but the Octavo volume does. A paper size. See A0. Tops. In the best category. Alpha1-Antitrypsin Deficiency. ``[A] genetic condition that can cause severe early onset emphysema, liver disease in both children and adults, or more rarely, a skin condition called panniculitis. It is estimated [that] there are 80,000 to 100,000 men, women and children with A1AD in the United States, yet only a fraction of them have been identified,'' according to... Alpha1 National Association. ``[A] non-profit, membership organization, dedicated to improving the lives of individuals and their families affected by alpha1-antitrypsin deficiency.'' This was the ``number'' on the vanity plate issued by the state of California for a car belonging to Lawrence Welk. If you're much younger than me, you probably don't get it. Lawrence Welk had an orchestra and a television show (called ``The Lawrence Welk Show''), and his trademark way to set the beat to begin a piece was to say ``uh-one and-uh two and-uh.'' A paper size. See A0. You mean the UK school-leaving exams? See A-levels. Part of a system that might very well end up being a one-off for 2002. Alexander to Actium, by Peter M. Green. Atlantic Reporter, Second Series. Legal publication. Advanced Antennas for Future Combat Systems. CECOM research program. American Association for Laboratory Accreditation. ``[A] non-profit, professional membership society committed to the success of laboratories through the administration of a broad-spectrum, nationwide laboratory accreditation system and a full range of training on laboratory practices taught by experts in their field.'' ``A2LA accredits testing laboratories in the following fields: acoustics and vibration, biological, chemical, construction materials, electrical, environmental, geotechnical, mechanical, calibration, nondestructive and thermal. Accreditation is available to private, independent, in-house and government labs.'' Based in Frederick, MD. A paper size. See A0. American Association of Academic Chief Residents in Radiology. The AUR link on the A3CR2 page is less prominent or direct than the A3CR2 link on the AUR page. I guess we understand the pecking order here. The social science of small-group interactions would probably explain why the APDR doesn't get a link at A3CR2: this town ain't big enough for two alphas. ``Ay THREE cee arr two.'' It has kind of a ring to it, but they should drop the ``two'' so it scans with ``cee THREE pee oh.'' A paper size. See A0. A paper size. See A0. A paper size. See A0. A $60 value, and you also get... Oh sure, you could go to the mall today and get it for $17.98, but what do they know about value? 
And you don't get the convenience of ordering from the comfort of your own living room couch what you can see clearly right there on your TV screen, and having it delivered to your front door in ``just days.''
Provided below is a listing of BES-sponsored workshop reports that address the current status and possible future directions of some important research areas. These reports include those resulting from the "Basic Research Needs" workshop series, which are used to help identify research directions for a decades-to-century energy strategy.

Neutron and X-ray Detectors

The Basic Energy Sciences (BES) X-ray and neutron user facilities attract more than 12,000 researchers each year to perform cutting-edge science at these state-of-the-art sources. While impressive breakthroughs in X-ray and neutron sources give us the powerful illumination needed to peer into the nano- to mesoscale world, a stumbling block continues to be the distinct lag in detector development, which is slowing progress toward data collection and analysis. Urgently needed detector improvements would reveal chemical composition and bonding in 3-D and in real time, allow researchers to watch “movies” of essential life processes as they happen, and make much more efficient use of every X-ray and neutron produced by the source. The immense scientific potential that will come from better detectors has triggered worldwide activity in this area. Europe in particular has made impressive strides, outpacing the United States on several fronts. Maintaining a vital U.S. leadership in this key research endeavor will require targeted investments in detector R&D and infrastructure. To clarify the gap between detector development and source advances, and to identify opportunities to maximize the scientific impact of BES user facilities, a workshop on Neutron and X-ray Detectors was held August 1-3, 2012, in Gaithersburg, Maryland. Participants from universities, national laboratories, and commercial organizations from the United States and around the globe participated in plenary sessions, breakout groups, and joint open-discussion summary sessions. Sources have become immensely more powerful and are now brighter (more particles focused onto the sample per second) and more precise (higher spatial, spectral, and temporal resolution). To fully utilize these source advances, detectors must become faster, more efficient, and more discriminating. In supporting the mission of today’s cutting-edge neutron and X-ray sources, the workshop identified six detector research challenges (and two computing hurdles that result from the corresponding increase in data volume) for the detector community to overcome in order to realize the full potential of BES neutron and X-ray facilities. Resolving these detector impediments will improve scientific productivity both by enabling new types of experiments, which will expand the scientific breadth at the X-ray and neutron facilities, and by potentially reducing the beam time required for a given experiment. These research priorities are summarized below. Note that multiple, simultaneous detector improvements are often required to take full advantage of brighter sources.

High-efficiency hard X-ray sensors: The fraction of incident particles that are actually detected defines detector efficiency. Silicon, the most common direct-detection X-ray sensor material, is (for typical sensor thicknesses) 100% efficient at 8 keV, 25% efficient at 20 keV, and only 3% efficient at 50 keV. Other materials are needed for hard X-rays. (A rough numerical illustration of this falloff appears below, following the list of research priorities.)
Replacement for 3He for neutron detectors: 3He has long been the neutron detection medium of choice because of its high cross section over a wide neutron energy range for the reaction 3He + n —> 3H + 1H + 0.764 MeV. 3He stockpiles are rapidly dwindling, and what is available can be had only at prohibitively high prices. Doped scintillators hold promise as ways to capture neutrons and convert them into light, although work is needed on brighter, more efficient scintillator solutions. Neutron detectors also require advances in speed and resolution. Fast-framing X-ray detectors: Today’s brighter X-ray sources make time-resolved studies possible. For example, hybrid X-ray pixel detectors, initially developed for particle physics, are becoming fairly mature X-ray detectors, with considerable development in Europe. To truly enable time-resolved studies, higher frame rates and dynamic range are required, and smaller pixel sizes are desirable. High-speed spectroscopic X-ray detectors: Improvements in the readout speed and energy resolution of X-ray detectors are essential to enable chemically sensitive microscopies. Advances would make it possible to take images with simultaneous spatial and chemical information. Very high-energy-resolution X-ray detectors: The energy resolution of semiconductor detectors, while suitable for a wide range of applications, is far less than what can be achieved with X-ray optics. A direct detector that could rival the energy resolution of optics could dramatically improve the efficiency of a multitude of experiments, as experiments are often repeated at a number of different energies. Very high-energy-resolution detectors could make these experiments parallel, rather than serial. Low-background, high-spatial-resolution neutron detectors: Low-background detectors would significantly improve experiments that probe excitations (phonons, spin excitations, rotation, and diffusion in polymers and molecular substances, etc.) in condensed matter. Improved spatial resolution would greatly benefit radiography, tomography, phase-contrast imaging, and holography. Improved acquisition and visualization tools: In the past, with the limited variety of slow detectors, it was straightforward to visualize data as it was being acquired (and adjust experimental conditions accordingly) to create a compact data set that the user could easily transport. As detector complexity and data rates explode, this becomes much more challenging. Three goals were identified as important for coping with the growing data volume from high-speed detectors: • Facilitate better algorithm development. In particular, algorithms that can minimize the quantity of data stored. • Improve community-driven mechanisms to reduce data protocols and enhance quantitative, interactive visualization tools. • Develop and distribute community-developed, detector-specific simulation tools. • Aim for parallelization to take advantage of high-performance analysis platforms. Improved analysis work flows: Standardize the format of metadata that accompanies detector data and describes the experimental setup and conditions. Develop a standardized user interface and software framework for analysis and data management. The diversity of detector improvements required is necessarily as broad as the range of scientific experimentation at BES facilities. This workshop identified a variety of avenues by which detector R&D can enable enhanced science at BES facilities. 
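Picking up the hard X-ray efficiency numbers quoted in the first research priority above, the sketch below models direct-detection efficiency as 1 - exp(-thickness / attenuation length). The sensor thickness and attenuation lengths are round, assumed values chosen only to show the trend; they are not taken from the workshop report.

    import math

    # Direct-detection efficiency of a silicon sensor: 1 - exp(-t / lambda(E)).
    # Illustrative (assumed) attenuation lengths for Si, in micrometers.
    ATTENUATION_UM = {8: 70.0, 20: 1000.0, 50: 13000.0}   # photon energy in keV
    THICKNESS_UM = 320.0                                   # nominal sensor thickness (assumed)

    def efficiency(energy_kev: int, thickness_um: float = THICKNESS_UM) -> float:
        """Fraction of incident photons absorbed in the sensor."""
        return 1.0 - math.exp(-thickness_um / ATTENUATION_UM[energy_kev])

    for e_kev in (8, 20, 50):
        print(f"{e_kev:2d} keV: {efficiency(e_kev):5.1%}")
    # Prints roughly 99%, 27%, and 2% -- the same qualitative falloff described above.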
The Research Directions listed above will be addressed by focused R&D and detector engineering, both of which require specialized infrastructure and skills. While U.S. leadership in neutron and X-ray detectors lags behind other countries in several areas, significant talent exists across the complex. A forum of technical experts, facilities management, and BES could be a venue to provide further definition.

From Quanta to the Continuum: Opportunities for Mesoscale Science

We are at a time of unprecedented challenge and opportunity. Our economy is in need of a jump start, and our supply of clean energy needs to dramatically increase. Innovation through basic research is a key means for addressing both of these challenges. The great scientific advances of the last decade and more, especially at the nanoscale, are ripe for exploitation. Seizing this key opportunity requires mastering the mesoscale, where classical, quantum, and nanoscale science meet. It has become clear that—in many important areas—the functionality that is critical to macroscopic behavior begins to manifest itself not at the atomic or nanoscale but at the mesoscale, where defects, interfaces, and non-equilibrium structures are the norm. With our recently acquired knowledge of the rules of nature that govern the atomic and nanoscales, we are well positioned to unravel and control the complexity that determines functionality at the mesoscale. The reward for breakthroughs in our understanding at the mesoscale is the emergence of previously unrealized functionality. The present report explores the opportunity and defines the research agenda for mesoscale science—discovering, understanding, and controlling interactions among disparate systems and phenomena to reach the full potential of materials complexity and functionality. The ability to predict and control mesoscale phenomena and architectures is essential if atomic and molecular knowledge is to blossom into a next generation of technology opportunities, societal benefits, and scientific advances. • Imagine the ability to manufacture at the mesoscale: that is, the directed assembly of mesoscale structures that possess unique functionality that yields faster, cheaper, higher performing, and longer lasting products, as well as products that have functionality that we have not yet imagined. • Imagine the realization of biologically inspired complexity and functionality with inorganic earth-abundant materials to transform energy conversion, transmission, and storage. • Imagine the transformation from top-down design of materials and systems with macroscopic building blocks to bottom-up design with nanoscale functional units producing next-generation technological innovation. This is the promise of mesoscale science. Mesoscale science and technology opportunities build on the enormous foundation of nanoscience that the scientific community has created over the last decade and continues to create. New features arise naturally in the transition to the mesoscale, including the emergence of collective behavior; the interaction of disparate electronic, mechanical, magnetic, and chemical phenomena; the appearance of defects, interfaces and statistical variation; and the self assembly of functional composite systems. The mesoscale represents a discovery laboratory for finding new science, a self-assembly foundry for creating new functional systems, and a design engine for new technologies.
The last half-century and especially the last decade have witnessed a remarkable drive to ever smaller scales, exposing the atomic, molecular, and nanoscale structures that anchor the macroscopic materials and phenomena we deal with every day. Given this knowledge and capability, we are now starting the climb up from the atomic and nanoscale to the greater complexity and wider horizons of the mesoscale. The constructionist path up from atomic and nanoscale to mesoscale holds a different kind of promise than the reductionist path down: it allows us to re-arrange the nanoscale building blocks into new combinations, exploit the dynamics and kinetics of these new coupled interactions, and create qualitatively different mesoscale architectures and phenomena leading to new functionality and ultimately new technology. The reductionist journey to smaller length and time scales gave us sophisticated observational tools and intellectual understanding that we can now apply with great advantage to the wide opportunity of mesoscale science following a bottom-up approach. Realizing the mesoscale opportunity requires advances not only in our knowledge but also in our ability to observe, characterize, simulate, and ultimately control matter. Mastering mesoscale materials and phenomena requires the seamless integration of theory, modeling, and simulation with synthesis and characterization. The inherent complexity of mesoscale phenomena, often including many nanoscale structural or functional units, requires theory and simulation spanning multiple space and time scales. In mesoscale architectures the positions of individual atoms are often no longer relevant, requiring new simulation approaches beyond the density functional theory and molecular dynamics that are so successful at atomic scales. New organizing principles that describe emergent mesoscale phenomena arising from many coupled and competing degrees of freedom wait to be discovered and applied. Measurements that are dynamic, in situ, and multimodal are needed to capture the sequential phenomena of composite mesoscale materials. Finally, the ability to design and realize the complex materials we imagine will require qualitative advances in how we synthesize and fabricate materials and how we manage their metastability and degradation over time. We must move from serendipitous to directed discovery, and we must master the art of assembling structural and functional nanoscale units into larger architectures that create a higher level of complex functional systems. While the challenge of discovering, controlling, and manipulating complex mesoscale architectures and phenomena to realize new functionality is immense, success in the pursuit of these research directions will have outcomes with the potential to transform society. The body of this report outlines the need, the opportunities, the challenges, and the benefits of mastering mesoscale science.

Research Needs and Impacts in Predictive Simulation for Internal Combustion Engines (PreSICE)

This report is based on an SC/EERE Workshop to Identify Research Needs and Impacts in Predictive Simulation for Internal Combustion Engines (PreSICE), held March 3, 2011, to determine strategic focus areas that will accelerate innovation in engine design to meet national goals in transportation efficiency. The U.S. has reached a pivotal moment when pressures of energy security, climate change, and economic competitiveness converge.
Oil prices remain volatile and have exceeded $100 per barrel twice in five years. At these prices, the U.S. spends $1 billion per day on imported oil to meet our energy demands. Because the transportation sector accounts for two-thirds of our petroleum use, energy security is deeply entangled with our transportation needs. At the same time, transportation produces one-quarter of the nation’s carbon dioxide output. Increasing the efficiency of internal combustion engines is a technologically proven and cost-effective approach to dramatically improving the fuel economy of the nation’s fleet of vehicles in the near- to mid-term, with the corresponding benefits of reducing our dependence on foreign oil and reducing carbon emissions. Because of their relatively low cost, high performance, and ability to utilize renewable fuels, internal combustion engines—including those in hybrid vehicles—will continue to be critical to our transportation infrastructure for decades. Achievable advances in engine technology can improve the fuel economy of automobiles by over 50% and trucks by over 30%. Achieving these goals will require the transportation sector to compress its product development cycle for cleaner, more efficient engine technologies by 50% while simultaneously exploring innovative design space. Concurrently, fuels will also be evolving, adding another layer of complexity and further highlighting the need for efficient product development cycles. Current design processes, using “build and test” prototype engineering, will not suffice. Current market penetration of new engine technologies is simply too slow—it must be dramatically accelerated. These challenges present a unique opportunity to marshal U.S. leadership in science-based simulation to develop predictive computational design tools for use by the transportation industry. The use of predictive simulation tools for enhancing combustion engine performance will shrink engine development timescales, accelerate time to market, and reduce development costs, while ensuring the timely achievement of energy security and emissions targets and enhancing U.S. industrial competitiveness. In 2007 Cummins achieved a milestone in engine design by bringing a diesel engine to market solely with computer modeling and analysis tools; the only testing was after the fact, to confirm performance. Cummins realized a reduction in development time and cost and, just as important, a more robust design and improved fuel economy, while meeting all environmental and customer constraints. This important first step demonstrates the potential for computational engine design. But the daunting complexity of engine combustion and the revolutionary increases in efficiency needed require the development of simulation codes and computation platforms far more advanced than those available today. Based on these needs, a Workshop to Identify Research Needs and Impacts in Predictive Simulation for Internal Combustion Engines (PreSICE) convened over 60 U.S. leaders in the engine combustion field from industry, academia, and national laboratories to focus on two critical areas of advanced simulation, as identified by the U.S. automotive and engine industries. First, modern engines require control of fuel injection, for a broad variety of fuels, that is far more precise and subtle than is achievable to date and that can be obtained only through predictive modeling and simulation.
Second, the simulation, understanding, and control of stochastic in-cylinder combustion processes lie on the critical path to realizing more efficient engines with greater power density. Fuel sprays set the initial conditions for combustion in essentially all future transportation engines; yet today designers primarily use empirical methods that limit the efficiency achievable. Three primary spray topics were identified as focus areas in the workshop:
1. The fuel delivery system, which includes fuel manifolds and internal injector flow,
2. The multi-phase fuel–air mixing in the combustion chamber of the engine, and
3. The heat transfer and fluid interactions with cylinder walls.
Current understanding and modeling capability of stochastic processes in engines remain limited and prevent designers from achieving significantly higher fuel economy. To improve this situation, the workshop participants identified three focus areas for stochastic processes:
1. Improve fundamental understanding that will help to establish and characterize the physical causes of stochastic events,
2. Develop physics-based simulation models that are accurate and sensitive enough to capture performance-limiting variability, and
3. Quantify and manage uncertainty in model parameters and boundary conditions.
Improved models and understanding in these areas will allow designers to develop engines that have reduced design margins and that operate reliably in more efficient regimes. All of these areas require improved basic understanding, high-fidelity model development, and rigorous model validation. These advances will greatly reduce the uncertainties in current models and improve understanding of sprays and fuel–air mixture preparation that limit the investigation and development of advanced combustion technologies. The two strategic focus areas have distinctive characteristics but are inherently coupled. Coordinated activities in basic experiments, fundamental simulations, and engineering-level model development and validation can be used to successfully address all of the topics identified in the PreSICE workshop. The outcome will be:
1. New and deeper understanding of the relevant fundamental physical and chemical processes in advanced combustion technologies,
2. Implementation of this understanding into models and simulation tools appropriate for both exploration and design, and
3. Sufficient validation with uncertainty quantification to provide confidence in the simulation results.
These outcomes will provide the design tools for industry to reduce development time by up to 30% and improve engine efficiencies by 30% to 50%. The improved efficiencies, applied to the national mix of transportation applications, have the potential to save over 5 million barrels of oil per day, a current cost savings of $500 million per day (at roughly $100 per barrel).

Report of the Basic Energy Sciences Workshop on Compact Light Sources

This report is based on a BES Workshop on Compact Light Sources, held May 11-12, 2010, to evaluate the advantages and disadvantages of compact light source approaches and compare their performance to third-generation storage rings and free-electron lasers. The workshop examined the state of the technology for compact light sources and their expected progress. The workshop evaluated the cost efficiency, user access, availability, and reliability of such sources.
Working groups evaluated the advantages and disadvantages of Compact Light Source (CLS) approaches, and compared their performance to third-generation storage rings and free-electron lasers (FELs). The primary aspects of comparison were 1) cost effectiveness, 2) technical availability vs. time frame, and 3) machine reliability and availability for user access. Five categories of potential sources were analyzed: 1) inverse Compton scattering (ICS) sources, 2) mini storage rings, 3) plasma sources, 4) sources using plasma-based accelerators, and 5) laser high harmonic generation (HHG) sources. Compact light sources are not a substitute for large synchrotron and FEL light sources that typically also incorporate extensive user support facilities. Rather they offer attractive, complementary capabilities at a small fraction of the cost and size of large national user facilities. In the far term they may offer the potential for a new paradigm for future national user facilities. In the course of the workshop, we identified overarching R&D topics over the next five years that would enhance the performance potential of both compact and large-scale sources:
• Development of infrared (IR) laser systems delivering kW-class average power with femtosecond pulses at kHz repetition rates. These have application to ICS sources, plasma sources, and HHG sources.
• Development of laser storage cavities for storage of 10-mJ picosecond and femtosecond pulses focused to micron beam sizes.
• Development of high-brightness, high-repetition-rate electron sources.
• Development of continuous-wave (cw) superconducting rf linacs operating at 4 K; while not essential, this would reduce capital and operating costs.

Basic Research Needs for Carbon Capture: Beyond 2020

This report is based on an SC/FE workshop on Carbon Capture: Beyond 2020, held March 4–5, 2010, to assess the basic research needed to address the current technical bottlenecks in carbon capture processes and to identify key research priority directions that will provide the foundations for future carbon capture technologies. The problem of thermodynamically efficient and scalable carbon capture stands as one of the greatest challenges for modern energy researchers. The vast majority of U.S. and global energy use derives from fossil fuels, the combustion of which results in the emission of carbon dioxide into the atmosphere. These anthropogenic emissions are now altering the climate. Although many alternatives to combustion are being considered, the fact is that combustion will remain a principal component of the global energy system for decades to come. Today’s carbon capture technologies are expensive, cumbersome, and energy intensive. If scientists could develop practical and cost-effective methods to capture carbon, those methods would at once alter the future of the largest industry in the world and provide a technical solution to one of the most vexing problems facing humanity. The carbon capture problem is a true grand challenge for today’s scientists. Postcombustion CO2 capture requires major new developments in disciplines spanning fundamental theoretical and experimental physical chemistry, materials design and synthesis, and chemical engineering. To start with, the CO2 molecule itself is thermodynamically stable, and binding to it requires a distortion of the molecule away from its linear and symmetric arrangement.
This binding of the gas molecule cannot be too strong, however; the sheer quantity of CO2 that must be captured ultimately dictates that the capture medium must be recycled over and over. Hence the CO2, once bound, must be released with relatively little energy input. Further, the CO2 must be rapidly and selectively pulled out of a mixture that contains many other gaseous components. The related processes of precombustion capture and oxycombustion pose similar challenges. It is this nexus of high-speed capture with high selectivity and minimal energy loss that makes this a true grand challenge problem, far beyond any of today’s artificial molecular manipulation technologies, and one whose solution will drive the advancement of molecular science to a new level of sophistication. We have only to look to nature, where such chemical separations are performed routinely, to imagine what may be achieved. The hemoglobin molecule transports oxygen in the blood rapidly and selectively and releases it with minimal energy penalty. Despite our improved understanding of how this biological system works, we have yet to engineer a molecular capture system that uses the fundamental cooperativity process that lies at the heart of the functionality of hemoglobin. While such biological examples provide inspiration, we also note that newly developed theoretical and computational capabilities; the synthesis of new molecules, materials, and membranes; and the remarkable advances in characterization techniques enabled by the Department of Energy’s measurement facilities all create a favorable environment for a major new basic research push to solve the carbon capture problem within the next decade. The Department of Energy has established a comprehensive strategy to meet the nation’s needs in the carbon capture arena. This framework has been developed following a series of workshops that have engaged all the critical stakeholder communities. The strategy that has emerged is based upon a tiered approach, with Fossil Energy taking the lead in a series of applied research programs that will test and extend our current systems. ARPA-E (Advanced Research Projects Agency–Energy) is supporting potential breakthroughs based upon innovative proposals to rapidly harness today’s technical capabilities in ways not previously considered. These needs and plans have been well summarized in the report from a recent workshop—Carbon Capture 2020, held October 5 and 6, 2009—focused on near-term strategies for carbon capture improvements ( proceedings/09/CC2020/pdfs/Richards_Summary.pdf ). Yet the fact remains that when the carbon capture problem is looked at closely, we see that today’s technologies fall far short of making carbon capture an economically viable process. This situation reinforces the need for a parallel, intensive use-inspired basic research effort to address the problem. This was the overwhelming conclusion of a recent workshop—Carbon Capture: Beyond 2020, held March 4 and 5, 2010—and is the subject of the present report. To prepare for the second workshop, an in-depth assessment of current technologies for carbon capture was conducted; the result of this study was a factual document, Technology and Applied R&D Needs for Carbon Capture: Beyond 2020. This document, which was prepared by experts in current carbon capture processes, also summarized the technological gaps or bottlenecks that limit currently available carbon capture technologies.
The report considered the separation processes needed for all three CO2 emission reduction strategies—postcombustion, precombustion, and oxycombustion—and assessed three primary separation technologies based on liquid absorption, membranes, and solid adsorption. The workshop “Carbon Capture: Beyond 2020” convened approximately 80 attendees from universities, national laboratories, and industry to assess the basic research needed to address the current technical bottlenecks in carbon capture processes and to identify key research priority directions that will provide the foundations for future carbon capture technologies. The workshop began with a plenary session including speakers who summarized the extent of the carbon capture challenge, the various current approaches, and the limitations of these technologies. Workshop attendees were then given the charge to identify high-priority basic research directions that could provide revolutionary new concepts to form the basis for separation technologies in 2020 and beyond. The participants were divided into three major panels corresponding to different approaches for separating gases to reduce carbon emissions—liquid absorption, solid adsorption, and membrane separations. Two other panels were instructed to attend each of these three technology panels to assess crosscutting issues relevant to characterization and computation. At the end of the workshop, a final plenary session was convened to summarize the most critical research needs identified by the workshop attendees in each of the three major technical panels and from the two cross-cutting panels. The reports of the three technical panels included a set of high level Priority Research Directions meant to serve as inspiration to researchers in multiple disciplines—materials science, chemistry, biology, computational science, engineering, and others—to address the huge scientific challenges facing this nation and the world as we seek technologies for large-scale carbon capture beyond 2020. These Priority Research Directions were clustered around three main areas, all tightly coupled: • Understand and control the dynamic atomic-level and molecular-level interactions of the targeted species with the separation media. • Discover and design new materials that incorporate designed structures and functionalities tuned for optimum separation properties. • Tailor capture/release processes with alternative driving forces, taking advantage of a new generation of materials. In each of the technical panels, the participants identified two major crosscutting research themes. The first was the development of new analytical tools that can characterize materials structure and molecular processes across broad spatial and temporal scales and under realistic conditions that mimic those encountered in actual separation processes. Such tools are needed to examine interfaces and thin films at the atomic and molecular levels, achieving an atomic/molecular-scale understanding of gas–host structures, kinetics, and dynamics, and understanding and control of nanoscale synthesis in multiple dimensions. A second major crosscutting theme was the development of new computational tools for theory, modeling, and simulation of separation processes. Computational techniques can be used to elucidate mechanisms responsible for observed separations, predict new desired features for advanced separations materials, and guide future experiments, thus complementing synthesis and characterization efforts. 
These two crosscutting areas underscored the fact that the challenge for future carbon capture technologies will be met only with multidisciplinary teams of scientists and engineers. In addition, it was noted that success in this fundamental research area must be closely coupled with successful applied research to ensure the continuing assessment and maturation of new technologies as they undergo scale-up and deployment. Carbon capture is a very rich scientific problem, replete with opportunity for basic researchers to advance the frontiers of science as they engage with one of the most important technical challenges of our times. This workshop report outlines an ambitious agenda for addressing the very difficult problem of carbon capture by creating foundational new basic science. This new science will in turn pave the way for many additional advances across a broad range of scientific disciplines and technology sectors.

Computational Materials Science and Chemistry: Accelerating Discovery and Innovation through Simulation-Based Engineering and Science

This report is based on an SC Workshop on Computational Materials Science and Chemistry for Innovation on July 26-27, 2010, to assess the potential of state-of-the-art computer simulations to accelerate understanding and discovery in materials science and chemistry, with a focus on potential impacts in energy technologies and innovation. The urgent demand for new energy technologies has greatly exceeded the capabilities of today's materials and chemical processes. To convert sunlight to fuel, efficiently store energy, or enable a new generation of energy production and utilization technologies requires the development of new materials and processes of unprecedented functionality and performance. New materials and processes are critical pacing elements for progress in advanced energy systems and virtually all industrial technologies. Over the past two decades, the United States has developed and deployed the world's most powerful collection of tools for the synthesis, processing, characterization, and simulation and modeling of materials and chemical systems at the nanoscale, dimensions of a few atoms to a few hundred atoms across. These tools, which include world-leading X-ray and neutron sources, nanoscale science facilities, and high-performance computers, provide an unprecedented view of the atomic-scale structure and dynamics of materials and the molecular-scale basis of chemical processes. For the first time in history, we are able to synthesize, characterize, and model materials and chemical behavior at the length scale where this behavior is controlled. This ability is transformational for the discovery process and, as a result, confers a significant competitive advantage. Perhaps the most spectacular increase in capability has been demonstrated in high-performance computing. Over the past decade, computational power has increased by a factor of a million due to advances in hardware and software. This rate of improvement, which shows no sign of abating, has enabled the development of computer simulations and models of unprecedented fidelity. We are at the threshold of a new era where the integrated synthesis, characterization, and modeling of complex materials and chemical processes will transform our ability to understand and design new materials and chemistries with predictive power.
In turn, this predictive capability will transform technological innovation by accelerating the development and deployment of new materials and processes in products and manufacturing. Harnessing the potential of computational science and engineering for the discovery and development of materials and chemical processes is essential to maintaining leadership in these foundational fields that underpin energy technologies and industrial competitiveness. Capitalizing on the opportunities presented by simulation-based engineering and science in materials and chemistry will require an integration of experimental capabilities with theoretical and computational modeling; the development of a robust and sustainable infrastructure to support the development and deployment of advanced computational models; and the assembly of a community of scientists and engineers to implement this integration and infrastructure. This community must extend to industry, where incorporating predictive materials science and chemistry into design tools can accelerate the product development cycle and drive economic competitiveness. The confluence of new theories, new materials synthesis capabilities, and new computer platforms has created an unprecedented opportunity to implement a "materials-by-design" paradigm with wide-ranging benefits in technological innovation and scientific discovery. The Workshop on Computational Materials Science and Chemistry for Innovation was convened in Bethesda, Maryland, on July 26-27, 2010. Sponsored by the Department of Energy (DOE) Offices of Advanced Scientific Computing Research and Basic Energy Sciences, the workshop brought together 160 experts in materials science, chemistry, and computational science representing more than 65 universities, laboratories, and industries, and four agencies. The workshop examined seven foundational challenge areas in materials science and chemistry: materials for extreme conditions, self-assembly, light harvesting, chemical reactions, designer fluids, thin films and interfaces, and electronic structure. Each of these challenge areas is critical to the development of advanced energy systems, and each can be accelerated by the integrated application of predictive capability with theory and experiment. The workshop concluded that emerging capabilities in predictive modeling and simulation have the potential to revolutionize the development of new materials and chemical processes. Coupled with world-leading materials characterization and nanoscale science facilities, this predictive capability provides the foundation for an innovation ecosystem that can accelerate the discovery, development, and deployment of new technologies, including advanced energy systems. Delivering on the promise of this innovation ecosystem requires the following: • Integration of synthesis, processing, characterization, theory, and simulation and modeling. Many of the newly established Energy Frontier Research Centers and Energy Hubs are exploiting this integration. • Achieving/strengthening predictive capability in foundational challenge areas. Predictive capability in the seven foundational challenge areas described in this report is critical to the development of advanced energy technologies. • Developing validated computational approaches that span vast differences in time and length scales. This fundamental computational challenge crosscuts all of the foundational challenge areas. 
Similarly challenging is the coupling of analytical data from multiple instruments and techniques that are required to link these length and time scales.
• Experimental validation and quantification of uncertainty in simulation and modeling. Uncertainty quantification becomes increasingly challenging as simulations become more complex.
• Robust and sustainable computational infrastructure, including software and applications. For modeling and simulation, software equals infrastructure. Software is also critical infrastructure for validating the computational tools, effectively translating huge arrays of experimental data into useful scientific understanding. An integrated approach for managing this infrastructure is essential.
• Efficient transfer and incorporation of simulation-based engineering and science in industry. Strategies for bridging the gap between research and industrial applications and for widespread industry adoption of integrated computational materials engineering are needed.

New Science for a Secure and Sustainable Energy Future

This Basic Energy Sciences Advisory Committee (BESAC) report summarizes a 2008 study by the Subcommittee on Facing our Energy Challenges in a New Era of Science to: (1) assimilate the scientific research directions that emerged from the BES Basic Research Needs workshop reports into a comprehensive set of science themes, and (2) identify the new implementation strategies and tools required to accomplish the science. The United States faces a three-fold energy challenge:
• Energy Independence. U.S. energy use exceeds domestic production capacity by the equivalent of 16 million barrels of oil per day, a deficit made up primarily by importing oil and natural gas. This deficit has nearly tripled since 1970.
• Environmental Sustainability. The United States must reduce its emissions of carbon dioxide and other greenhouse gases that accelerate climate change. The primary source of these emissions is combustion of fossil fuel, which comprises about 85% of the U.S. national energy supply.
• Economic Opportunity. The U.S. economy is threatened by the high cost of imported energy—as much as $700 billion per year at recent peak prices. We need to create next-generation clean energy technologies that do not depend on imported oil. U.S. leadership would not only provide solutions at home but also create global economic opportunity.
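As a rough consistency check of the figures just cited (not part of the BESAC report), a 16-million-barrel-per-day energy deficit and a cost of roughly $700 billion per year line up if one assumes an oil price near $120 per barrel, in the range of recent peak prices; the price is an assumption made only for this illustration.

```python
# Rough consistency check of the deficit and annual-cost figures cited above.
# The $120/barrel "recent peak" price is an assumed value for illustration.
deficit_bbl_per_day = 16e6        # barrels of oil equivalent per day
assumed_price_per_bbl = 120.0     # dollars per barrel (assumption)
days_per_year = 365

annual_cost = deficit_bbl_per_day * assumed_price_per_bbl * days_per_year
print(f"~${annual_cost / 1e9:.0f} billion per year")  # ~$700 billion, consistent with the figure above
```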
This report identifies three strategic goals for which transformational scientific breakthroughs are urgently needed:
• Making fuels from sunlight
• Generating electricity without carbon dioxide emissions
• Revolutionizing energy efficiency and use
Meeting these goals implies dramatic changes in our technologies for producing and consuming energy. We will manufacture chemical fuel from sunlight, water, and carbon dioxide instead of extracting it from the earth. We will generate electricity from sunlight, wind, and high-efficiency clean coal and advanced nuclear plants instead of conventional coal and nuclear technology. Our cars and light trucks will be driven by efficient electric motors powered by a new generation of batteries and fuel cells. These new, advanced energy technologies, however, require new materials and control of chemical change that operate at dramatically higher levels of functionality and performance. Converting sunlight to electricity with double or triple today's efficiency, storing electricity in batteries or supercapacitors at ten times today's densities, or operating coal-fired and nuclear power plants at far higher temperatures and efficiencies requires materials with atom-by-atom design and control, tailored nanoscale structures where every atom has a specific function. Such high-performing materials would have complexity far higher than today's energy materials, approaching that of biological cells and proteins. They would be able to seamlessly control the ebb and flow of energy between chemical bonds, electrons, and light, and would be the foundation of the alternative energy technologies of the future. Creating these advanced materials and chemical processes requires characterizing the structure and dynamics of matter at levels beyond our present reach. The physical and chemical phenomena that capture, store, and release energy take place at the nanoscale, often involving subtle changes in single electrons or atoms, on timescales faster than we can now resolve. Penetrating the secrets of energy transformation between light, chemical bonds, and electrons requires new observational tools capable of probing the still-hidden realms of the ultrasmall and ultrafast. Observing the dynamics of energy flow in electronic and molecular systems at these resolutions is necessary if we are to learn to control their behavior. Fundamental understanding of complex materials and chemical change based on theory, computation, and advanced simulation is essential to creating new energy technologies. A working transistor was not developed until the theory of electronic behavior on semiconductor surfaces was formulated. In superconductivity, sweeping changes occurred in the field when a microscopic theory of the mechanism of superconductivity was finally developed. As Nobel Laureate Philip Anderson has written, more is different: at each level of complexity in science, new laws need to be discovered for breakthrough progress to be made. Without such breakthroughs, future technologies will not be realized. The digital revolution was only made possible by transistors—try to imagine the information age with vacuum tubes. Nearly as ubiquitous are lasers, the basis for modern-day read heads used in CDs, DVDs, and bar code scanners. Lasers could not be developed until the quantum theory of light emission by materials was understood.
These advances—high-performance materials enabling precise control of chemical change, characterization tools probing the ultrafast and the ultrasmall, and new understanding based on advanced theory and simulation—are the agents for moving beyond incremental improvements and creating a truly secure and sustainable energy future. Given these tools, we can imagine, and achieve, revolutionary new energy systems.

Science for Energy Technology: Strengthening the Link between Basic Research and Industry (Full Report, August 2010; Initial Report, April 2010)

This Basic Energy Sciences Advisory Committee (BESAC) report summarizes the results of a Workshop on Science for Energy Technology on January 18-21, 2010, to identify the scientific priority research directions needed to address the roadblocks and accelerate the innovation of clean energy technologies. The nation faces two severe challenges that will determine our prosperity for decades to come: assuring clean, secure, and sustainable energy to power our world, and establishing a new foundation for enduring economic and jobs growth. These challenges are linked: the global demand for clean sustainable energy is an unprecedented economic opportunity for creating jobs and exporting energy technology to the developing and developed world. But achieving the tremendous potential of clean energy technology is not easy. In contrast to traditional fossil fuel-based technologies, clean energy technologies are in their infancy, operating far below their potential, with many scientific and technological challenges to overcome. Industry is ultimately the agent for commercializing clean energy technology and for reestablishing the foundation for our economic and jobs growth. For industry to succeed in these challenges, it must overcome many roadblocks and continuously innovate new generations of renewable, sustainable, and low-carbon energy technologies such as solar energy, carbon sequestration, nuclear energy, electricity delivery and efficiency, solid-state lighting, batteries, and biofuels. The roadblocks to higher-performing clean energy technology are not just challenges of engineering design; they also reflect gaps in scientific understanding. Innovation relies on contributions from basic research to bridge major gaps in our understanding of the phenomena that limit efficiency, performance, or lifetime of the materials or chemistries of these sustainable energy technologies. Thus, efforts aimed at understanding the scientific issues behind performance limitations can have a real and immediate impact on cost, reliability, and performance of technology, and ultimately a transformative impact on our economy. With its broad research base and unique scientific user facilities, the DOE Office of Basic Energy Sciences (BES) is ideally positioned to address these needs. BES has laid out a broad view of the basic and grand challenge science needs for the development of future clean energy technologies in a series of comprehensive "Basic Research Needs" workshops and reports and has structured its programs and launched initiatives to address the challenges. The basic science needs of industry, however, are often more narrowly focused on solving specific nearer-term roadblocks to progress in existing and emerging clean energy technologies.
To better define these issues and identify specific barriers to progress, the Basic Energy Sciences Advisory Committee (BESAC) sponsored the Workshop on Science for Energy Technology, January 18-21, 2010. A wide cross-section of scientists and engineers from industry, universities, and national laboratories delineated the basic science Priority Research Directions most urgently needed to address the roadblocks and accelerate the innovation of clean energy technologies. These Priority Research Directions address the scientific understanding underlying performance limitations in existing but still immature technologies. Resolving these performance limitations can dramatically improve the commercial penetration of clean energy technologies. A key conclusion of the Workshop is that in addition to the decadal challenges defined in the "Basic Research Needs" reports, specific research directions addressing industry roadblocks are ripe for further emphasis. Another key conclusion is that identifying and focusing on specific scientific challenges and translating the results to industry requires more direct feedback, communication, and collaboration between industrial and BES-supported scientists. BES-supported scientists need to be better informed of the detailed scientific issues facing industry, and industry more aware of BES capabilities and how to utilize them. An important capability is the suite of BES scientific user facilities, which are seen as playing a key role in advancing the science of clean energy technology. Working together, industry and BES-supported scientists can achieve the required understanding and control of the performance limitations of clean energy technology, accelerate innovation in its development, and help build the workforce needed to implement the growing clean energy economy.

Next-Generation Photon Sources for Grand Challenges in Science and Energy

This Basic Energy Sciences Advisory Committee (BESAC) report summarizes the results of an October 2008 Photon Workshop of the Subcommittee on Facing our Energy Challenges in a New Era of Science to identify connections between major new research opportunities and the capabilities of the next generation of light sources. Particular emphasis was on energy-related research. The next generation of sustainable energy technologies will revolve around transformational new materials and chemical processes that convert energy efficiently among photons, electrons, and chemical bonds. New materials that tap sunlight, store electricity, or make fuel from splitting water or recycling carbon dioxide will need to be much smarter and more functional than today's commodity-based energy materials. To control and catalyze chemical reactions or to convert a solar photon to an electron requires coordination of multiple steps, each carried out by customized materials and interfaces with designed nanoscale structures. Such advanced materials are not found in nature the way we find fossil fuels; they must be designed and fabricated to exacting standards, using principles revealed by basic science. Success in this endeavor requires probing, and ultimately controlling, the interactions among photons, electrons, and chemical bonds on their natural length and time scales.
Control science—the application of knowledge at the frontier of science to control phenomena and create new functionality—realized through the next generation of ultraviolet and X-ray photon sources, has the potential to be transformational for the life sciences and information technology, as well as for sustainable energy. Current synchrotron-based light sources have revolutionized macromolecular crystallography. The insights thus obtained are largely in the domain of static structure. The opportunity is for next generation light sources to extend these insights to the control of dynamic phenomena through ultrafast pump-probe experiments, time-resolved coherent imaging, and high-resolution spectroscopic imaging. Similarly, control of spin and charge degrees of freedom in complex functional materials has the potential not only to reveal the fundamental mechanisms of high-temperature superconductivity, but also to lay the foundation for future generations of information science. This report identifies two aspects of energy science in which next-generation ultraviolet and X-ray light sources will have the deepest and broadest impact: • The temporal evolution of electrons, spins, atoms, and chemical reactions, down to the femtosecond time scale. • Spectroscopic and structural imaging of nano objects (or nanoscale regions of inhomogeneous materials) with nanometer spatial resolution and ultimate spectral resolution. The dual advances of temporal and spatial resolution promised by fourth-generation light sources ideally match the challenges of control science. Femtosecond time resolution has opened completely new territory where atomic motion can be followed in real time and electronic excitations and decay processes can be followed over time. Coherent imaging with short-wavelength radiation will make it possible to access the nanometer length scale, where intrinsic quantum behavior becomes dominant. Performing spectroscopy on individual nanometer-scale objects rather than on conglomerates will eliminate the blurring of the energy levels induced by particle size and shape distributions and reveal the energetics of single functional units. Energy resolution limited only by the uncertainty relation is enabled by these advances. Current storage-ring-based light sources and their incremental enhancements cannot meet the need for femtosecond time resolution, nanometer spatial resolution, intrinsic energy resolution, full coherence over energy ranges up to hard X-rays, and peak brilliance required to enable the new science outlined in this report. In fact, the new, unexplored territory is so expansive that no single currently imagined light source technology can fulfill the whole potential. Both technological and economic challenges require resolution as we move forward. For example, femtosecond time resolution and high peak brilliance are required for following chemical reactions in real time, but lower peak brilliance and high repetition rate are needed to avoid radiation damage in high-resolution spatial imaging and to avoid space-charge broadening in photoelectron spectroscopy and microscopy. But light sources alone are not enough. The photons produced by next-generation light sources must be measured by state-of-the-art experiments installed at fully equipped end stations. Sophisticated detectors with unprecedented spatial, temporal, and spectral resolution must be designed and created. The theory of ultrafast phenomena that have never before been observed must be developed and implemented. 
Enormous data sets of diffracted signals in reciprocal space and across wide energy ranges must be collected and analyzed in real time so that they can guide the ongoing experiments. These experimental challenges—end stations, detectors, sophisticated experiments, theory, and data handling—must be planned and provided for as part of the photon source. Furthermore, the materials and chemical processes to be studied, often in situ, must be synthesized and developed with equal care. These are the primary factors determining the scientific and technological return on the photon source investment. Of equal or greater concern is the need for interdisciplinary platforms to solve the grand challenges of sustainable energy, climate change, information technology, biological complexity, and medicine. No longer are these challenges confined to one measurement or one scientific discipline. Fundamental problems in correlated electron materials, where charge, spin, and lattice modes interact strongly, require experiments in electron, neutron, and X-ray scattering that must be coordinated across platforms and user facilities and that integrate synthesis and theory as well. The model of users applying for one-time access to single-user facilities does not promote the coordinated, interdisciplinary approach needed to solve today's grand challenge problems. Next-generation light sources and other user facilities must learn to accommodate the interdisciplinary, cross-platform needs of modern grand challenge science. Only through the development of such future sources, appropriately integrated with advanced end stations and detectors and closely coupled with broader synthesis, measurement, theory, and modeling tools, can we meet the demands of a New Era of Science.

Directing Matter and Energy: Five Challenges for Science and the Imagination

This Basic Energy Sciences Advisory Committee (BESAC) Grand Challenges report identifies the most important scientific questions and science-driven technical challenges facing BES and describes the importance of these challenges to advances in disciplinary science, to technology development, and to energy and other societal needs. The report originated from a January 25, 2005, request from the Office of Science and is the product of numerous BESAC and Grand Challenges Subcommittee meetings and conferences in 2006-2007. It is frequently said that any sufficiently advanced technology is indistinguishable from magic. Modern science stands at the beginning of what might seem by today's standards to be an almost magical leap forward in our understanding and control of matter, energy, and information at the molecular and atomic levels. Atoms—and the molecules they form through the sharing or exchanging of electrons—are the building blocks of the biological and non-biological materials that make up the world around us. In the 20th century, scientists continually improved their ability to observe and understand the interactions among atoms and molecules that determine material properties and processes. Now, scientists are positioned to begin directing those interactions and controlling the outcomes on a molecule-by-molecule and atom-by-atom basis, or even at the level of electrons.
Long the staple of science-fiction novels and films, the ability to direct and control matter at the quantum, atomic, and molecular levels creates enormous opportunities across a wide spectrum of critical technologies. This ability will help us meet some of humanity's greatest needs, including the need for abundant, clean, and cheap energy. However, generating, storing, and distributing adequate and sustainable energy to the nation and the world will require a sea change in our ability to control matter and energy. One of the most spectacular technological advances in the 20th century took place in the field of information, as computers and microchips became ubiquitous in our society. Vacuum tubes were replaced with transistors and, in accordance with Moore's Law (named for Intel co-founder Gordon Moore), the number of transistors on a microchip has doubled approximately every two years for the past two decades. However, if the time comes when integrated circuits can be fabricated at the molecular or nanoscale level, the limits of Moore's Law will be far surpassed. A supercomputer based on nanochips would comfortably fit in the palm of your hand and use less electricity than a cottage. All the information stored in the Library of Congress could be contained in a memory the size of a sugar cube. Ultimately, if computations can be carried out at the atomic or sub-nanoscale levels, today's most powerful microtechnology will seem as antiquated and slow as an abacus. For the future, imagine a clean, cheap, and virtually unlimited supply of electrical power from solar-energy systems modeled on the photosynthetic processes utilized by green plants, and power lines that could transmit this electricity from the deserts of the Southwest to the Eastern Seaboard at nearly 100-percent efficiency. Imagine information and communications systems based on light rather than electrons that could predict when and where hurricanes make landfall, along with self-repairing materials that could survive those hurricanes. Imagine synthetic materials fully compatible and able to communicate with biological materials. This is speculative to be sure, but not so very far beyond the scope of possibilities. Acquiring the ability to direct and control matter all the way down to molecular, atomic, and electronic levels will require fundamental new knowledge in several critical areas. This report was commissioned to define those knowledge areas and the opportunities that lie beyond. Five interconnected Grand Challenges that will pave the way to a science of control are identified in the regime of science roughly defined by the Basic Energy Sciences portfolio, and recommendations are presented for what must be done to meet them.
• How do we control material processes at the level of electrons? Electrons are the negatively charged subatomic particles whose dynamics determine materials properties and direct chemical, electrical, magnetic, and physical processes. If we can learn to direct and control material processes at the level of electrons, where the strange laws of quantum mechanics rule, it should pave the way for artificial photosynthesis and other highly efficient energy technologies, and could revolutionize computer technologies. Humans, through trial-and-error experiments or through lucky accidents, have been able to make only a tiny fraction of all the materials that are theoretically possible.
If we can learn to design and create new materials with tailored properties, it could lead to low-cost photovoltaics, self-repairing and self-regulating devices, integrated photonic (light-based) technologies, and nano-sized electronic and mechanical devices. Emergent phenomena, in which a complex outcome emerges from the correlated interactions of many simple constituents, can be widely seen in nature, as in the interactions of neurons in the human brain that result in the mind, the freezing of water, or the giant magnetoresistance behavior that powers disk drives. If we can learn the fundamental rules of correlations and emergence and then learn how to control them, we could produce, among many possibilities, an entirely new generation of materials that supersede present-day semiconductors and superconductors. Biology is nature's version of nanotechnology, though the capabilities of biological systems can exceed those of human technologies by a vast margin. If we can understand biological functions and harness nanotechnologies with capabilities as effective as those of biological systems, it should clear the way towards profound advances in a great many scientific fields, including energy and information technologies.
• How do we characterize and control matter away—especially very far away—from equilibrium? All natural and most human-induced phenomena occur in systems that are away from equilibrium, the state in which a system would not change with time. If we can understand system effects that take place away—especially very far away—from equilibrium and learn to control them, it could yield dramatic new energy-capture and energy-storage technologies, greatly improve our predictions for molecular-level electronics, and enable new mitigation strategies for environmental damage.
We now stand at the brink of a "Control Age" that could spark revolutionary changes in how we inhabit our planet, paving the way to a bright and sustainable future for us all. But answering the call of the five Grand Challenges for Basic Energy Science will require that we change our fundamental understanding of how nature works. This will necessitate a three-fold attack: new approaches to training and funding, development of instruments more precise and flexible than those used up to now for observational science, and creation of new theories and concepts beyond those we currently possess. The difficulties involved in this change of our understanding are huge, but the rewards for success should be extraordinary. If we succeed in meeting these five Grand Challenges, our ability to direct and control matter might one day be measured only by the limits of human imagination.

Basic Research Needs for Materials under Extreme Environments

This report is based on a BES Workshop on Basic Research Needs for Materials under Extreme Environments, held June 11-13, 2007, to evaluate the potential for developing revolutionary new materials that will meet demanding future energy requirements that expose materials to environmental extremes. Never has the world been so acutely aware of the inextricably linked issues of energy, environment, economy, and security. As the economies of developing countries boom, so does their demand for energy. Today nearly a quarter of the world does not have electrical power, yet the demand for electricity is projected to more than double over the next two decades.
Increased demand for energy to power factories, transport commodities and people, and heat/cool homes also results in increased CO2 emissions. In 2007 China, a major consumer of coal, surpassed the United States in overall carbon dioxide emissions. As global CO2 emissions grow, the urgency grows to produce energy from carbon-based sources more efficiently in the near term and to move to non-carbon-based energy sources, such as solar, hydrogen, or nuclear, in the longer term. As we look toward the future, two points are very clear: (1) the economy and security of this nation are critically dependent on a readily available, clean, and affordable energy supply; and (2) no one energy solution will meet all future energy demands, requiring investments in the development of multiple energy technologies. Materials are central to every energy technology, and future energy technologies will place increasing demands on materials performance with respect to extremes in stress, strain, temperature, pressure, chemical reactivity, photon or radiation flux, and electric or magnetic fields. For example, today's state-of-the-art coal-fired power plants operate at about 35% efficiency. Increasing this efficiency to 60% using supercritical steam requires raising operating temperatures by nearly 50% and essentially doubling the operating pressures. These operating conditions require new materials that can reliably withstand these extreme thermal and pressure environments. To lower fuel consumption in transportation, future vehicles will demand lighter-weight components with high strength. Next-generation nuclear fission reactors require materials capable of withstanding higher temperatures and higher radiation flux in highly corrosive environments for long periods of time without failure. These increasingly extreme operating environments accelerate the aging process in materials, leading to reduced performance and eventually to failure. If one extreme is harmful, two or more can be devastating. High temperature, for example, not only weakens chemical bonds, it also speeds up the chemical reactions of corrosion. Often materials fail at one-tenth or less of their intrinsic limits, and we do not understand why. This failure of materials is a principal bottleneck for developing future energy technologies that require placing materials under increasingly extreme conditions. Reaching the intrinsic limit of materials performance requires understanding the atomic and molecular origins of this failure. This knowledge would enable an increase in materials performance of an order of magnitude or more. Further, understanding how these extreme environments affect the physical and chemical processes that occur in the bulk material and at its surface would open the door to employing these conditions to make entirely new classes of materials with greatly enhanced performance for future energy technologies. This knowledge will not be achieved by incremental advances in materials science. Indeed, this knowledge will only be gained by innovative basic research that will unlock the fundamentals of how extreme environments interact with materials and how these interactions can be controlled to reach the intrinsic limits of materials performance and to develop revolutionary new materials. These new materials would have enormous impact on the development of future energy technologies: extending lifetimes, increasing efficiencies, providing novel capabilities, and lowering costs.
Beyond energy applications, these new materials would have a huge impact on other areas of importance to this nation, including national security, industry, and other areas where robust, reliable materials are required. This report summarizes the research directions identified by a Basic Energy Sciences Workshop on Basic Research Needs for Materials under Extreme Environments, held in June 2007. More than 140 invited scientists and engineers from academia, industry, and the national laboratories attended the workshop, along with representatives from other offices within the Department of Energy, including the National Nuclear Security Administration, the Office of Nuclear Energy, the Office of Energy Efficiency and Renewable Energy, and the Office of Fossil Energy. Prior to the workshop, a technology resource document, Technology and Applied R&D Needs for Materials under Extreme Environments, was prepared that provided the participants with an overview of current and future materials needs for energy technologies. The workshop began with a plenary session that outlined the technology needs and the state of the art in research of materials under extreme conditions. The workshop was then divided into four panels, focusing on specific types of extreme environments: Energetic Flux Extremes, Chemical Reactive Extremes, Thermomechanical Extremes, and Electromagnetic Extremes. The four panels were asked to assess the current status of research in each of these four areas and identify the most promising research directions that would bridge the current knowledge gaps in understanding how these four extreme environments impact materials at the atomic and molecular levels. The goal was to outline specific Priority Research Directions (PRDs) that would ultimately lead to the development of vastly improved materials across a broad range of future energy technologies. During the course of the workshop, a number of common themes emerged across these four panels and a fifth panel was charged to identify these cross-cutting research areas. Photons and energetic particles can cause damage to materials that occurs over broad time and length scales. While initiation, characterized by localized melting and re-crystallization, may occur in fractions of a picosecond, this process can produce cascades of point defects that diffuse and agglomerate into larger clusters. These nanoscale clusters can eventually reach macroscopic dimensions, leading to decreased performance and failure. The panel on energetic flux extremes noted that this degradation and failure is a key barrier to achieving more efficient energy generation systems and limits the lifetime of materials used in photovoltaics, solar collectors, nuclear reactors, optics, electronics and other energy and security systems used in extreme flux environments. The panel concluded that the ability to prevent this degradation from extreme fluxes is critically dependent on being able to elucidate the atomic- and molecular-level mechanisms of defect production and damage evolution triggered by single and multiple energetic particles and photons interacting with materials. Advances in characterization and computational tools have the potential to provide an unprecedented opportunity to elucidate these key mechanisms. In particular, ultrafast and ultra-high spatial resolution characterization tools will allow the initial atomic-scale damage events to be observed. 
Further, advanced computational capabilities have the potential to capture multiscale damage evolution from atomic to macroscopic dimensions. Elucidation of these mechanisms would allow the complex pathways of damage evolution from the atomic to the macroscopic scale to be understood. This knowledge would ultimately allow atomic and molecular structures to be manipulated in a predictable manner to create new materials that have extraordinary tolerance and can function within an extreme environment without property degradation. Further, it would provide revolutionary capabilities for synthesizing materials with novel structures or, alternatively, for forcing chemical reactions that normally result in damage to proceed along selected pathways that are either benign or that repair incipient damage.

Chemically reactive extreme environments are found in many advanced energy systems, including fuel cells, nuclear reactors, and batteries, among others. These conditions include aqueous and non-aqueous liquids (such as mineral acids, alcohols, and ionic liquids) and gaseous environments (such as hydrogen, ammonia, and steam). The panel evaluating extreme chemical environments concluded there is a lack of fundamental understanding of the thermodynamic and kinetic processes that occur at the atomic level under these important reactive environments. The chemically induced degradation of materials is initiated at the interface of a material with its environment. Chemical stability in these environments is often controlled by protective surfaces, either by self-healing, stable films that form on a surface (such as oxides) or by coatings that are applied to a surface. Besides providing surface stability, these films must also prevent facile mass transport of reactive species into the bulk of the material. While some films can have long lifetimes, increasingly severe environments can cause the films to break down, leading to costly materials failure. A major challenge therefore is to develop a new generation of surface layers that are extremely robust under aggressive chemical conditions. Before this can be accomplished, however, it is critical to understand the equilibrium and non-equilibrium thermodynamics and reaction kinetics that occur at the atomic level at the interface of the protective film with its environment. The stability of the film can be further complicated by differences in the material's morphology, structure, and defects. It is critical that these complex and interrelated chemical and physical processes be understood at the nanoscale using new capabilities in materials characterization and in theory, modeling, and simulation. Armed with this information, it will be possible to develop a new generation of robust surface films to protect materials in extreme chemical environments. Further, this understanding will provide insight into developing films that can self-heal and into synthesizing new classes of materials with exceptional stability in aggressive chemical environments.

The need for materials that can withstand thermomechanical extremes—high pressure and stress, strain and strain rate, and high and low temperature—is found across a broad range of energy technologies, such as efficient steam turbines and heat exchangers, fuel-efficient vehicles, and strong wind turbine blades. Failures of materials under thermomechanical extremes can be catastrophic and costly.
The panel on thermomechanical extremes concluded that designing new materials with properties specifically tailored to withstand thermomechanical extremes must begin with understanding the fundamental chemical and physical processes involved in materials failure, extending from the nanoscale to the collective behavior at the macroscale. Further, the behavior of materials must be understood under static, quasistatic, and dynamic thermomechanical extremes. This requires learning how atoms and electrons move within a material under extremes to provide insight into defect production and eventual evolution into microstructural components, such as dislocations, voids, and grain boundaries. This will require advanced analytical tools that can study materials in situ as these defects originate and evolve. Once these processes are understood, it will be possible to predict responses of materials under thermomechanical extremes using advanced computation tools. Further, this fundamental knowledge will open new avenues for designing and synthesizing materials with unique properties. Using these thermomechanical extremes will allow the very nature of chemical bonds to be tuned to produce revolutionary new materials, such as ultrahard materials. As electrical energy demand grows, perhaps by greater than 70% over the next 50 years, so does the need to develop materials capable of operating at extreme electric and magnetic fields. To develop future electrical energy technologies, new materials are needed for magnets capable of operating at higher fields in generators and motors, insulators resistant to higher electric fields and field gradients, and conductors/superconductors capable of carrying higher current at lower voltage. The panel on electromagnetic extremes concluded that the discovery and understanding of this broad range of new materials requires revealing and controlling the defects that occur at the nanoscale. Defects are responsible for breakdown of insulators, yet defects are needed within local structures of superconductors to trap magnetic vortices. The ability to observe these defects as materials interact with electromagnetic extremes is just becoming available with advances in characterization tools with increased spatial and time resolution. Understanding how these nanoscale defects evolve to affect the macroscale behavior of materials is a grand challenge, and advances in multiscale modeling are required to understand the behavior of materials under these extremes. Once the behavior of defects in materials is understood, then materials could be designed to prevent dielectric breakdown or to enhance magnetic behavior. For example, composite materials having appropriate structures and properties could be tailored using nanoscale self-assembly techniques. The panel projected that understanding how electric and magnetic fields affect materials at the atomic and molecular level could lead to the ability to control materials properties and synthesis. Such control would lead to a new generation of materials that is just emerging today—such as electrooptic materials that can be switched between transparency and opacity through application of electric fields. Beyond energy applications, these tailored materials could have enormous importance in security, computing, electronics, and other applications. 
During the course of the workshop, four recurring science issues emerged as important themes: (1) Achieving the Limits of Performance; (2) Exploiting Extreme Environments for Materials Design and Synthesis; (3) Characterization on the Scale of Fundamental Interactions; and (4) Predicting and Modeling Materials Performance.

All four of the workshop panels identified the need to understand the complex and interrelated physical and chemical processes that control the various performance limits of materials subjected to extreme conditions as the major technical bottleneck in meeting future energy needs. Most of these processes involve understanding the cascade of events that is initiated at atomic-level defects and propagates up to macroscopic materials properties. By understanding the various mechanisms by which materials fail, for example, it may be possible to increase the performance and lifetime limits of materials by an order of magnitude or more and thereby achieve the true limits of materials performance.

Understanding the atomic and molecular basis of the interaction of extreme environments with materials provides an exciting and unique opportunity to produce entirely new classes of materials. Today materials are made primarily by changing temperature, composition, and, sometimes, pressure. The panels concluded that extreme conditions—in the form of high temperatures, pressures, strain rates, radiation fluxes, or external fields, alone or in combination—can potentially be used as new "knobs" that can be manipulated for the synthesis of revolutionary new materials. All four of the extreme environments offer new strategies for controlling the atomic- and molecular-level structure in unprecedented ways to produce materials with tailored functionalities.

Achieving the breakthroughs needed to understand the atomic and molecular processes that occur within the bulk and at surfaces of materials in extreme environments will require advances in the final two cross-cutting areas, characterization and computation. Elucidating changes in structure and dynamics over broad timescales (femtoseconds to many seconds) and length scales (nanoscale to macroscale) is critical to realizing the revolutionary materials required for future energy technologies. Advances in characterization tools, including diffraction, scattering, spectroscopy, microscopy, and imaging, can provide this critical information. Of particular importance is the need to combine two or more of these characterization tools to permit so-called "multi-dimensional" analysis of materials and surfaces in situ. These advances will enable the elucidation of fundamental chemical and physical mechanisms that are at the heart of materials performance (and failure) and catalyze the discovery of new materials required for the next generation of energy technologies.

Complementing these characterization techniques are the computational techniques required for modeling and predicting materials behavior under extreme conditions. Recent advances in theory and algorithms, coupled with enormous and growing computational power and ever more sophisticated experimental methods, are opening up exciting new possibilities for using predictive theory and simulation to design new materials for extreme environments and to predict their properties and performance. New theoretical tools are needed to describe new phenomena and processes that occur under extreme conditions.
These various tools need to be integrated across broad length scales—atomic to macroscopic—to model and predict the properties of real materials in response to extreme environments. Together with advanced synthesis and characterization techniques, these new capabilities in theory and modeling offer exciting opportunities to accelerate scientific discovery and shorten the development cycle from discovery to application.

In concluding the workshop, the panelists were confident that today's gaps in materials performance under extreme conditions could be bridged if the physical and chemical changes that occur in bulk materials and at the interface with the extreme environment could be understood from the atomic to the macroscopic scale. These complex and interrelated phenomena can be unraveled as advances are realized in characterization and computational tools. These advances will allow structural changes, including defects, to be observed in real time and then modeled so that the response of materials can be predicted. The concept of exploiting these extreme environments to create revolutionary new materials was viewed as particularly exciting. Adding these parameters to the toolkit of materials synthesis opens unimagined possibilities for developing materials with tailored properties. The knowledge needed for bridging these technology gaps requires significant investment in basic research, and this research needs to be coupled closely with the applied research and technology communities and industry that will drive future energy technologies. These investments in fundamental research on materials under extreme conditions will have a major impact on the development of technologies that can meet future requirements for abundant, affordable, and clean energy. Moreover, this research will enable the development of materials that will have a much broader impact in other applications that are critical to the security and economy of this nation.

Basic Research Needs: Catalysis for Energy

This report is based on a BES Workshop on Basic Research Needs in Catalysis for Energy Applications, August 6-8, 2007, to identify research needs and opportunities for catalysis to meet the nation's energy needs, provide an assessment of where the science and technology now stand, and recommend the directions for fundamental research that should be pursued to meet the goals described. The United States continues to rely on petroleum and natural gas as its primary sources of fuels. As the domestic reserves of these feedstocks decline, the volumes of imported fuels grow, and the environmental impacts resulting from fossil fuel combustion become more severe, we as a nation must earnestly reassess our energy future. Catalysis—the essential technology for accelerating and directing chemical transformation—is the key to realizing environmentally friendly, economical processes for the conversion of fossil energy feedstocks. Catalysis also is the key to developing new technologies for converting alternative feedstocks, such as biomass, carbon dioxide, and water. With the declining availability of light petroleum feedstocks that are high in hydrogen and low in sulfur and nitrogen, energy producers are turning to ever-heavier fossil feedstocks, including heavy oils, tar sands, shale oil, and coal. Unfortunately, the heavy feedstocks yield less fuel than light petroleum and contain more sulfur and nitrogen.
To meet the demands for fuels, a deep understanding of the chemistry of complex fossil-energy feedstocks will be required, together with an understanding of how to design catalysts for processing these feedstocks. The United States has the capacity to grow and convert enough biomass to replace nearly a third of the nation's current gasoline use. Building on catalysis for petroleum conversion, researchers have identified potential catalytic routes for biomass. However, biomass differs so much in composition and reactivity from fossil fuels that this starting point is inadequate. The technology for economically converting biomass into widely usable fuels does not exist, and the science underpinning its development is only now starting to emerge. The challenge is to understand the chemistry by which cellulose- and lignin-derived molecules are converted to fuels and to use this knowledge as a basis for identifying the needed catalysts. To obtain energy densities similar to those of currently used fuels, the products of biomass conversion must have an oxygen content lower than that of biomass. Oxygen must be removed by using hydrogen derived from biomass or other sources in a manner that minimizes the yield of carbon dioxide as a byproduct. Catalytic conversion of carbon dioxide into liquid fuels using solar and electrical energy would enable the carbon in carbon dioxide to be recycled into fuels, thereby reducing its contribution to atmospheric warming. Likewise, the catalytic generation of hydrogen from water could provide a carbon-free source of hydrogen for fuel and for processing of fossil and biomass feedstocks. The underlying science is far from sufficient for the design of efficient catalysts and economical processes.

Grand Challenges

To realize the full potential of catalysis for energy applications, scientists must develop a profound understanding of catalytic transformations so that they can design and build effective catalysts with atom-by-atom precision and convert reactants to products with molecular precision. Moreover, they must build tools to make real-time, spatially resolved measurements of operating catalysts. Ultimately, scientists must use these tools to achieve a fundamental understanding of catalytic processes occurring in multiscale, multiphase environments.

The first grand challenge identified in this report centers on understanding the mechanisms and dynamics of catalyzed reactions. Catalysis involves chemical transformations that must be understood at the atomic scale because catalytic reactions present an intricate dance of chemical bond-breaking and bond-forming steps. Structures of solid catalyst surfaces, where the reactions occur on only a few isolated sites and in the presence of highly complex mixtures of molecules interacting with the surface in myriad ways, are extremely difficult to describe. To discover new knowledge about the mechanisms and dynamics of catalyzed reactions, scientists need to image surfaces at the atomic scale and probe the structures and energetics of the reacting molecules on varying time and length scales. They also need to apply theory to validate the results. The difficulties of developing a clear understanding of the mechanisms and dynamics of catalyzed reactions are magnified by the high temperatures and pressures at which the reactions occur and by the influence of the molecules undergoing transformation on the catalyst. The catalyst structure changes as the reacting molecules become part of it en route to forming products.
Although the scientific challenge of understanding catalyst structure and function is great, recent advances in characterization science and facilities provide the means for meeting it in the long term. The second grand challenge in the report centers on design and controlled synthesis of catalyst structures. Fundamental investigations of catalyst structures and the mechanisms of catalytic reactions provide the necessary foundation for the synthesis of improved catalysts. Theory can serve as a predictive design tool, guiding synthetic approaches for construction of materials with precisely designed catalytic surface structures at the nano and atomic scales. Success in the design and controlled synthesis of catalytic structures requires an interplay between (1) characterization of catalysts as they function, including evaluation of their performance under technologically realistic conditions, and (2) synthesis of catalyst structures to achieve high activity and product selectivity.

Priority Research Directions

The workshop process identified three priority research directions for advancing catalysis science for energy applications:

Advanced catalysts for the conversion of heavy fossil energy feedstocks. The depletion of light, sweet crude oil has caused increasing use of heavy oils and other heavy feedstocks. The complicated nature of the molecules in these feedstocks, as well as their high heteroatom contents, requires catalysts and processing routes entirely different from those used in today's petroleum refineries. To advance catalytic technologies for converting heavy feedstocks, scientists must (1) identify and quantify the heavy molecules (now possible with methods such as high-resolution mass spectrometry) and (2) determine data to represent the reactivities of the molecules in the presence of the countless other kinds of molecules interacting with the catalysts. Methods for determining reactivities of individual compounds within complex feedstocks reacting under industrial conditions soon will be available. Reactivity data, when combined with fundamental understanding of how the reactants interact with the catalysts, will facilitate the selection of new catalysts for heavy feedstocks and the prediction of properties of the fuels produced.

Understanding the chemistry of lignocellulosic biomass deconstruction and conversion to fuels. The United States potentially could harvest 1.3 billion tons of biomass annually. Converting this resource to ethanol would produce more than 60 billion gallons/year, enough to replace 30 percent of the nation's current gasoline use. Scientists must develop fundamental understanding of biomass deconstruction, either through high-temperature pyrolysis or low-temperature catalytic conversion, before engineers can create commercial biomass conversion technologies. Pyrolysis generates gases and liquids for processing into fuels or blending with existing petroleum refinery streams. Low-temperature deconstruction produces sugars and lignin for conversion into molecules with higher energy densities than the parent biomass. Scientists also must discover and develop new catalysts for targeted transformations of these biomass-derived molecules into fuels. Developing a molecular-scale understanding of deconstruction and conversion of biomass products to fuels would contribute to the development of optimal processes for particular biomass sources.
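The biomass figures quoted above invite a quick back-of-envelope consistency check. In the sketch below, the 1.3 billion tons and 60 billion gallons come from the summary itself, while the annual U.S. gasoline consumption and the ethanol-to-gasoline energy ratio are assumed round numbers introduced only for illustration.

```python
# Back-of-envelope check of the biomass-to-ethanol figures quoted above.
# Assumed round numbers (not from the report): U.S. gasoline use of roughly
# 140 billion gallons/year circa 2007, and ethanol carrying about 2/3 the
# energy of gasoline per gallon.

biomass_tons = 1.3e9          # tons of biomass per year (from the summary)
ethanol_gallons = 60e9        # gallons of ethanol per year (from the summary)
gasoline_gallons = 140e9      # gallons of gasoline per year (assumed)
energy_ratio = 2.0 / 3.0      # ethanol vs. gasoline energy per gallon (assumed)

implied_yield = ethanol_gallons / biomass_tons            # gallons per ton
gasoline_equivalent = ethanol_gallons * energy_ratio      # gasoline-equivalent gallons
displaced_fraction = gasoline_equivalent / gasoline_gallons

print(f"implied ethanol yield: ~{implied_yield:.0f} gallons per ton of biomass")
print(f"gasoline displaced:    ~{displaced_fraction:.0%} of assumed annual use")
```

Under these assumptions the numbers are self-consistent: roughly 46 gallons of ethanol per ton of biomass, and a gasoline-equivalent displacement close to the 30 percent cited.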
Knowledge of how catalyst structure and composition affect the kinetics of individual processes could lead to new catalysts with properties adjusted for maximum activity and selectivity for high- and low-temperature processing of biomass.

Photo- and electro-driven conversions of carbon dioxide and water. Catalytic conversion of carbon dioxide to liquid fuels facilitated by the input of solar or electrical energy presents an immense opportunity for new sources of energy. Furthermore, the catalytic generation of hydrogen from water could provide a carbon-free source of hydrogen for fuel and for processing of fossil and biomass feedstocks. Although these electrolytic processes are possible, they are not now economical, because they depend on expensive and rare materials, such as platinum, and require significantly more energy than the minimum dictated by thermodynamics. Scientists have explored the use of photons to drive thermodynamically uphill reactions, but the efficiencies of the best-known processes are very low. To dramatically increase efficiencies, we need to understand the elementary processes by which photocatalysts and electrocatalysts operate and the phenomena that limit their effectiveness. This knowledge would guide the search for more efficient catalysts. To address the challenge of increased efficiency, scientists must develop fundamental understanding based on novel spectroscopic methods that probe the surfaces of photocatalysts and electrocatalysts in the presence of liquid electrolytes. New catalysts will have to involve multiple-site structures and be able to drive the multiple-electron and hydrogen transfer reactions required to produce fuels from carbon dioxide and water. Theoretical investigations also are needed to understand the manifold processes occurring on photocatalysts and electrocatalysts, many of which are unique to the conditions of their use. Basic research to address these challenges will result in fundamental knowledge and expertise crucial for developing efficient, durable, and scalable catalysts.

Crosscutting Research Issues

Two broad issues cut across the grand challenges and the priority research directions for development of efficient, economical, and environmentally friendly catalytic processes for energy applications:

Experimental characterization of catalysts as they function is a theme common to all the processes mentioned here—ranging from heavy feedstock refining to carbon dioxide conversion to fuels. The scientific community needs a fundamental understanding of catalyst structures and catalytic reaction mechanisms to design and prepare improved catalysts and processes for energy conversion. Attainment of this understanding requires development of new techniques and facilities for investigating catalysts as they function in the presence of complex, real feedstocks at high temperatures and pressures. The community also needs improved methods for characterizing the feedstocks and products—to the point of identifying individual compounds in these complex mixtures. The dearth of information characterizing biomass-derived feedstocks and the growing complexity of the available heavy fossil feedstocks, as well as the intrinsic complexity of catalyst surfaces, magnify the difficulty of this challenge. Implied in the need for better characterization is the need for advanced methods and instrument hardware and software far beyond today's capabilities.
Improved spectroscopic and microscopic capabilities, specifically including synchrotron-based equipment and methods, will provide significantly enhanced temporal, spatial, and energy resolution of catalysts and new opportunities for elucidating their performance under realistic reaction conditions. Achieving these crosscutting goals for better catalyst characterization will require breakthrough developments in techniques and much improved methodologies for combining multiple complementary techniques.

Advances in theory and computation are also required to significantly advance catalysis for energy applications. A major challenge is to understand the mechanisms and dynamics of catalyzed transformations, enabling rational design of catalysts. Molecular-level understanding is essential to "tune" a catalyst to produce the right products with minimal energy consumption and environmental impact. Applications of computational chemistry and methods derived from advanced chemical theory are crucial to the development of fundamental understanding of catalytic processes and ultimately to first-principles catalyst design. Development of this understanding requires breakthroughs in theoretical and computational methods to allow treatment of the complexity of the molecular reactants and condensed-phase and interfacial catalysts needed to convert new energy feedstocks to useful products. Computation, when combined with advanced experimental techniques, is already leading to broad new insights into catalyst behavior and the design of new materials. The development of new theories and computational tools that accurately predict thermodynamic properties, dynamical behavior, and coupled kinetics of complex condensed-phase and interfacial processes is a crosscutting priority research direction to address the grand challenges of catalysis science, especially in the area of advanced energy technologies.

Scientific and Technological Impact

The urgent need for fuels in an era of declining resources and pressing environmental concerns demands a resurgence in catalysis science, requiring a massive commitment of programmatic leadership and improved experimental and theoretical methods. These elements will make it possible to follow, in real time, catalytic reactions on an atomic scale on surfaces that are nonuniform and laden with large molecules undergoing complex competing processes. The understanding that will emerge promises to engender technology for economical catalytic processing of ever more challenging fossil feedstocks and for breakthroughs needed to create an industry for energy production from biomass. These new technologies are needed for a sustainable supply of energy from domestic sources and mitigation of the problem of greenhouse gas emissions.

Future Science Needs and Opportunities for Electron Scattering: Next-Generation Instrumentation and Beyond

This report is based on a BES Workshop entitled "Future Science Needs and Opportunities for Electron Scattering: Next-Generation Instrumentation and Beyond," March 1-2, 2007, to identify emerging basic science and engineering research needs and opportunities that will require major advances in electron-scattering theory, technology, and instrumentation. The workshop was organized to help define the scientific context and strategic priorities for the U.S.
Department of Energy's Office of Basic Energy Sciences (DOE-BES) electron-scattering development for materials characterization over the next decade and beyond. Attendees represented university, national laboratory, and commercial research organizations from the United States and around the world. The workshop comprised plenary sessions, breakout groups, and joint open discussion summary sessions. Complete information about this workshop is available online.

In the last 40 years, advances in instrumentation have gradually increased the resolution capabilities of commercial electron microscopes. Within the last decade, however, a revolution has occurred, facilitating 1-nm resolution in the scanning electron microscope and sub-Ångstrom resolution in the transmission electron microscope. This revolution was a direct result of decades-long research efforts concentrating on electron optics, both theoretically and in practice, leading to implementation of aberration correctors that employ multi-pole electron lenses. While this improvement has been a remarkable achievement, it has also inspired the scientific community to ask what other capabilities are required beyond "image resolution" to more fully address the scientific problems of today's technologically complex materials. During this workshop, a number of scientific challenges requiring breakthroughs in electron scattering and/or instrumentation for characterization of materials were identified. Although the individual scientific problems identified in the workshop were wide-ranging, they are well represented by seven major scientific challenges. These are listed in Table 1, together with their associated application areas as proposed by workshop attendees. Addressing these challenges will require dedicated long-term developmental efforts similar to those that have been applied to the electron optics revolution. This report summarizes the scientific challenges identified by attendees and then outlines the technological issues that need to be addressed by a long-term research and development (R&D) effort to overcome these challenges. A recurring message voiced during the meeting was that, while improved image resolution in commercially available tools is significant, this is only the first of many breakthroughs required to answer today's most challenging problems. The major technological issues that were identified, as well as a measure of their relative priority, appear in Table 2. These issues require not only the development of innovative instrumentation but also new analytical procedures that connect experiment, theory, and modeling.

Table 1. Scientific challenges and application areas identified during the workshop:
1. The nanoscale origin of macroscopic properties: High-performance 21st century materials in both structural engineering and electronic applications.
2. The role of individual atoms, point defects, and dopants in materials: Semiconductors, catalysts, quantum phenomena and confinement, fracture, embrittlement, solar energy, nuclear power, radiation damage.
3. Characterization of interfaces at arbitrary orientations: Semiconductors, three-dimensional geometries for nanostructures, grain-boundary-dominated processes, hydrogen storage.
4. The interface between ordered and disordered materials: Dynamic behavior of the liquid-solid interface, organic/inorganic interfaces, friction/wear, grain boundaries, welding, polymer/metal/oxide composites, self-assembly.
5. Mapping of electromagnetic (EM) fields in and around nanoscale matter: Ferroelectric/magnetic structures, switching, tunneling and transport, quantum confinement/proximity, superconductivity.
6. Probing structures in their native environments: Catalysis, fuel cells, organic/inorganic interfaces, functionalized nanoparticles for health care, polymers, biomolecular processes, biomaterials, soft-condensed matter, non-vacuum environments.
7. The behavior of matter far from equilibrium: High radiation, high-pressure and high-temperature environments, dynamic/transient behavior, nuclear and fusion energy, outer space, nucleation, growth and synthesis in solution, corrosion, phase transformations.

Table 2. Functionality required to address the challenges in Table 1:
1. In-situ environments permitting observation of processes under conditions that replicate real-world/real-time conditions (temperature, pressure, atmosphere, EM fields, fluids) with minimal loss of image and/or spectral resolution.
2. Detectors that enhance by more than an order of magnitude the temporal, spatial, and/or collection efficiency of existing technologies for electrons, photons, and/or X-rays.
3. Higher temporal resolution instruments for dynamic studies with a continuous range of operating conditions from microseconds to femtoseconds.
4. Sources having higher brightness, temporal resolution, and polarization.
5. Electron-optical configurations designed to study complex interactions of nanoscale objects under multiple excitation processes (photons, fields, …).
6. Virtualized instruments operating in connection with experimental tools, allowing real-time quantitative analysis or simulation of data, and community software tools for routine and robust data analysis.

Some research efforts have already begun to address these topics. However, a dedicated and coordinated approach is needed to address these challenges more rapidly. For example, the principles of aberration correction for electron-optical lenses were established theoretically by Scherzer (Zeitschrift für Physik 101(9-10), 593-603) in 1936, but practical implementation was not realized until 1997 (a 61-year development cycle). Reducing development time to less than a decade is essential in addressing the scientific issues in the ever-growing nanoscale materials world. To accomplish this, DOE should make a concerted effort to revise how it funds advanced resources and R&D for electron beam instrumentation across its programs.

Basic Research Needs for Electrical Energy Storage

This report is based on a BES Workshop on Basic Research Needs for Electrical Energy Storage (EES), April 2-4, 2007, to identify basic research needs and opportunities underlying batteries, capacitors, and related EES technologies, with a focus on new or emerging science challenges with potential for significant long-term impact on the efficient storage and release of electrical energy. The projected doubling of world energy consumption within the next 50 years, coupled with the growing demand for low- or even zero-emission sources of energy, has brought increasing awareness of the need for efficient, clean, and renewable energy sources. Energy based on electricity that can be generated from renewable sources, such as solar or wind, offers enormous potential for meeting future energy demands.
However, the use of electricity generated from these intermittent, renewable sources requires efficient electrical energy storage. For commercial and residential grid applications, electricity must be reliably available 24 hours a day; even second-to-second fluctuations cause major disruptions, with costs estimated to be tens of billions of dollars annually. Thus, for large-scale solar- or wind-based electrical generation to be practical, the development of new EES systems will be critical to meeting continuous energy demands and effectively leveling the cyclic nature of these energy sources. In addition, greatly improved EES systems are needed to progress from today's hybrid electric vehicles to plug-in hybrids or all-electric vehicles. Improvements in EES reliability and safety are also needed to prevent premature, and sometimes catastrophic, device failure.

Chemical energy storage devices (batteries) and electrochemical capacitors (ECs) are among the leading EES technologies today. Both are based on electrochemistry, and the fundamental difference between them is that batteries store energy in chemical reactants capable of generating charge, whereas electrochemical capacitors store energy directly as charge. The performance of current EES technologies falls well short of requirements for using electrical energy efficiently in transportation, commercial, and residential applications. For example, EES devices with substantially higher energy and power densities and faster recharge times are needed if all-electric/plug-in hybrid vehicles are to be deployed broadly as replacements for gasoline-powered vehicles. Although EES devices have been available for many decades, there are many fundamental gaps in understanding the atomic- and molecular-level processes that govern their operation, performance limitations, and failure. Fundamental research is critically needed to uncover the underlying principles that govern these complex and interrelated processes. With a full understanding of these processes, new concepts can be formulated for addressing present EES technology gaps and meeting future energy storage requirements.

BES worked closely with the DOE Office of Energy Efficiency and Renewable Energy and the DOE Office of Electricity Delivery and Energy Reliability to clearly define future requirements for EES from the perspective of applications relevant to transportation and electricity distribution, respectively, and to identify critical technology gaps. In addition, leaders in EES industrial and applied research laboratories were recruited to prepare a technology resource document, Technology and Applied R&D Needs for Electrical Energy Storage, which provided the groundwork for the basic research discussions at the workshop. The invited workshop attendees, numbering more than 130, included representatives from universities, national laboratories, and industry, including a significant number of scientists from Japan and Europe. A plenary session at the beginning of the workshop captured the present state of the art in research and development and the technology needs for future EES. The workshop participants were asked to identify key priority research directions that hold particular promise for providing needed advances that will, in turn, revolutionize the performance of EES. Participants were divided between two panels focusing on the major types of EES, chemical energy storage and capacitive energy storage.
A third panel focused on cross-cutting research that will be critical to achieving the technical breakthroughs required to meet future EES needs. A closing plenary session summarized the most urgent research needs that were identified for both chemical and capacitive energy storage. The research directions identified by the panelists are presented in this report in three sections corresponding to the findings of the three workshop panels.

The panel on chemical energy storage acknowledged that progressing to the higher energy and power densities required for future batteries will push materials to the edge of stability; yet these devices must be safe and reliable through thousands of rapid charge-discharge cycles. A major challenge for chemical energy storage is developing the ability to store more energy while maintaining stable electrode-electrolyte interfaces. The need to mitigate the volume and structural changes to the active electrode sites accompanying the charge-discharge cycle encourages exploration of nanoscale structures. Recent developments in nanostructured and multifunctional materials were singled out as having the potential to dramatically increase energy capacity and power densities. However, an understanding of nanoscale phenomena is needed to take full advantage of the unique chemistry and physics that can occur at the nanoscale. Further, there is an urgent need to develop a fundamental understanding of the interdependence of the electrolyte and electrode materials, especially with respect to controlling charge transfer from the electrode to the electrolyte. Combining the power of new computational capabilities and in situ analytical tools could open up entirely new avenues for designing novel multifunctional nanomaterials with the desired physical and chemical properties, leading to greatly enhanced performance.

The panel on capacitive storage recognized that, in general, ECs have higher power densities than batteries, as well as sub-second response times. However, energy storage densities are currently lower than they are for batteries and are insufficient for many applications. As with batteries, the need for higher energy densities requires new materials. Similarly, advances in electrolytes are needed to increase voltage and conductivity while ensuring stability. Understanding how materials store and transport charge at electrode-electrolyte interfaces is critically important and will require a fundamental understanding of charge transfer and transport mechanisms. The capability to synthesize nanostructured electrodes with tailored, high-surface-area architectures offers the potential for storing multiple charges at a single site, increasing charge density. The addition of surface functionalities could also contribute to high and reproducible charge storage capabilities, as well as rapid charge-discharge functions. The design of new materials with tailored architectures optimized for effective capacitive charge storage will be catalyzed by new computational and analytical tools that can provide the needed foundation for the rational design of these multifunctional materials. These tools will also provide the molecular-level insights required to establish the physical and chemical criteria for attaining higher voltages, higher ionic conductivity, and wide electrochemical and thermal stability in electrolytes.

The third panel identified four cross-cutting research directions that were considered to be critical for meeting future technology needs in EES:
1. Advances in Characterization
2. Nanostructured Materials
3. Innovations in Electrolytes
4. Theory, Modeling, and Simulation

Exceptional insight into the physical and chemical phenomena that underlie the operation of energy storage devices can be afforded by a new generation of analytical tools. This information will catalyze the development of new materials and processes required for future EES systems. New in situ photon- and particle-based microscopic, spectroscopic, and scattering techniques with time resolution down to the femtosecond range and spatial resolution spanning the atomic and mesoscopic scales are needed to meet the challenge of developing future EES systems. These measurements are critical to achieving the ability to design EES systems rationally, including materials and novel architectures that exhibit optimal performance. This information will help identify the underlying reasons behind failure modes and afford directions for mitigating them.

The performance of energy storage systems is limited by the performance of the constituent materials—including active materials, conductors, and inert additives. Recent research suggests that synthetic control of material architectures (including pore size, structure, and composition; particle size and composition; and electrode structure down to nanoscale dimensions) could lead to transformational breakthroughs in key energy storage parameters such as capacity, power, charge-discharge rates, and lifetimes. Investigation of model systems of irreducible complexity will require the close coupling of theory and experiment in conjunction with well-defined structures to elucidate fundamental materials properties. Novel approaches are needed to develop multifunctional materials that are self-healing, self-regulating, failure-tolerant, impurity-sequestering, and sustainable. Advances in nanoscience offer particularly exciting possibilities for the development of revolutionary three-dimensional architectures that simultaneously optimize ion and electron transport and capacity.

The design of EES systems with long cycle lifetimes and high energy-storage capacities will require a fundamental understanding of charge transfer and transport processes. The interfaces of electrodes with electrolytes are astonishingly complex and dynamic. The dynamic structures of interfaces need to be characterized so that the paths of electrons and attendant trafficking of ions may be directed with exquisite fidelity. New capabilities are needed to "observe" the dynamic composition and structure at an electrode surface, in real time, during charge transport and transfer processes. With this underpinning knowledge, wholly new concepts in materials design can be developed for producing materials that are capable of storing higher energy densities and have long cycle lifetimes.

A characteristic common to chemical and capacitive energy storage devices is that the electrolyte transfers ions/charge between electrodes during charge and discharge cycles. An ideal electrolyte provides high conductivity over a broad temperature range, is chemically and electrochemically inert at the electrode, and is inherently safe. Too often the electrolyte is the weak link in the energy storage system, limiting both performance and reliability of EES. At present, the myriad interactions that occur in electrolyte systems—ion-ion, ion-solvent, and ion-electrode—are poorly understood.
Fundamental research will provide the knowledge that will permit the formulation of novel designed electrolytes, such as ionic liquids and nanocomposite polymer electrolytes, that will enhance the performance and lifetimes of EES systems. Advances in fundamental theoretical methodologies and computer technologies provide an unparalleled opportunity for understanding the complexities of processes and materials needed to make the groundbreaking discoveries that will lead to the next generation of EES. Theory, modeling, and simulation can effectively complement experimental efforts and can provide insight into mechanisms, predict trends, identify new materials, and guide experiments. Large multiscale computations that integrate methods at different time and length scales have the potential to provide a fundamental understanding of processes such as phase transitions in electrode materials, ion transport in electrolytes, charge transfer at interfaces, and electronic transport in electrodes.

Revolutionary breakthroughs in EES have been singled out as perhaps the most crucial need for this nation's secure energy future. The BES Workshop on Basic Research Needs for Electrical Energy Storage concluded that the breakthroughs required for tomorrow's energy storage needs will not be realized with incremental evolutionary improvements in existing technologies. Rather, they will be realized only with fundamental research to understand the underlying processes involved in EES, which will in turn enable the development of novel EES concepts that incorporate revolutionary new materials and chemical processes. Recent advances have provided the ability to synthesize novel nanoscale materials with architectures tailored for specific performance; to characterize materials and dynamic chemical processes at the atomic and molecular level; and to simulate and predict structural and functional relationships using modern computational tools. Together, these new capabilities provide unprecedented potential for addressing technology and performance gaps in EES devices.

Basic Research Needs for Geosciences: Facilitating 21st Century Energy Systems

This report is based on a BES Workshop on Basic Research Needs for Geosciences: Facilitating 21st Century Energy Systems, February 21-23, 2007, to identify research areas in geosciences, such as behavior of multiphase fluid-solid systems on a variety of scales, chemical migration processes in geologic media, characterization of geologic systems, and modeling and simulation of geologic systems, needed for improved energy systems. Serious challenges must be faced in this century as the world seeks to meet global energy needs and at the same time reduce emissions of greenhouse gases to the atmosphere. Even with a growing energy supply from alternative sources, fossil carbon resources will remain in heavy use and will generate large volumes of carbon dioxide (CO2). To reduce the atmospheric impact of this fossil energy use, it is necessary to capture and sequester a substantial fraction of the produced CO2. Subsurface geologic formations offer a potential location for long-term storage of the requisite large volumes of CO2. Nuclear energy resources could also reduce use of carbon-based fuels and CO2 generation, especially if nuclear energy capacity is greatly increased.
Nuclear power generation results in spent nuclear fuel and other radioactive materials that also must be sequestered underground. Hence, regardless of technology choices, there will be major increases in the demand to store materials underground in large quantities, for long times, and with increasing efficiency and safety margins. Rock formations are composed of complex natural materials and were not designed by nature as storage vaults. If new energy technologies are to be developed in a timely fashion while ensuring public safety, fundamental improvements are needed in our understanding of how these rock formations will perform as storage systems. This report describes the scientific challenges associated with geologic sequestration of large volumes of carbon dioxide for hundreds of years, and also addresses the geoscientific aspects of safely storing nuclear waste materials for thousands to hundreds of thousands of years.

The fundamental crosscutting challenge is to understand the properties and processes associated with complex and heterogeneous subsurface mineral assemblages comprising porous rock formations, and the equally complex fluids that may reside within and flow through those formations. The relevant physical and chemical interactions occur on spatial scales that range from those of atoms, molecules, and mineral surfaces, up to tens of kilometers, and time scales that range from picoseconds to millennia and longer. To predict with confidence the transport and fate of either CO2 or the various components of stored nuclear materials, we need to learn to better describe fundamental atomic, molecular, and biological processes, and to translate those microscale descriptions into macroscopic properties of materials and fluids. We also need fundamental advances in the ability to simulate multiscale systems as they are perturbed during sequestration activities and for very long times afterward, and to monitor those systems in real time with increasing spatial and temporal resolution. The ultimate objective is to predict accurately the performance of the subsurface fluid-rock storage systems, and to verify enough of the predicted performance with direct observations to build confidence that the systems will meet their design targets as well as environmental protection goals.

The report summarizes the results and conclusions of a Workshop on Basic Research Needs for Geosciences held in February 2007. Five panels met, resulting in four Panel Reports, three Grand Challenges, six Priority Research Directions, and three Crosscutting Research Issues. The Grand Challenges differ from the Priority Research Directions in that the former describe broader, long-term objectives while the latter are more focused.

Computational thermodynamics of complex fluids and solids. Predictions of geochemical transport in natural materials must start with detailed knowledge of the chemical properties of multicomponent fluids and solids. New modeling strategies for geochemical systems based on first-principles methods are required, as well as reliable tools for translating atomic- and molecular-scale descriptions to the many orders of magnitude larger scales of subsurface geologic systems. Specific challenges include calculation of equilibrium constants and kinetics of heterogeneous reactions, descriptions of adsorption and other mineral surface processes, properties of transuranic elements and compounds, and mixing and transport properties for multicomponent liquid, solid, and supercritical solutions.
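To give a rough sense of why the first-principles strategies called for here strain against the size and compositional complexity of geologic systems, the sketch below tabulates the commonly quoted formal cost scalings of standard electronic-structure methods as the number of atoms grows. The exponents are the textbook formal scalings; the relative-cost figures are arbitrary illustrative values, not benchmarks from the report.

```python
# Illustrative sketch of how the formal computational cost of common
# electronic-structure methods grows with the number of atoms N.
# The exponents are the commonly quoted formal scalings; absolute costs
# are arbitrary and shown only relative to a 10-atom reference system.

methods = {
    "DFT (conventional)": 3,  # roughly O(N^3)
    "MP2": 5,                 # roughly O(N^5)
    "CCSD(T)": 7,             # roughly O(N^7)
}

base_atoms = 10
for atoms in (10, 100, 1000):
    parts = []
    for name, power in methods.items():
        factor = (atoms / base_atoms) ** power
        parts.append(f"{name}: x{factor:,.0f}")
    print(f"{atoms:>4} atoms -> relative cost  " + ", ".join(parts))
```

Even at the mild cubic scaling of conventional density functional theory, a thousand-atom model costs on the order of a million times more than a ten-atom one under these assumptions, which is why advances in the scaling of solution methods and in carrying small-scale results up to field scales are needed.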
Significant advances are required in calculations based on the electronic Schrödinger equation, scaling of solution methods, and representation in terms of Equations of State. Calibration of models with a new generation of experiments will be critical. Integrated characterization, modeling, and monitoring of geologic systems. Characterization of the subsurface is inextricably linked to the modeling and monitoring of processes occurring there. More accurate descriptions of the behavior of subsurface storage systems will require that the diverse, independent approaches currently used for characterizing, modeling and monitoring be linked in a revolutionary and comprehensive way and carried out simultaneously. The challenges arise from the inaccessibility and complexity of the subsurface, the wide range of scales of variability, and the potential role of coupled nonlinear processes. Progress in subsurface simulation requires advances in the application of geological process knowledge for determining model structure and the effective integration of geochemical and high-resolution geophysical measurements into model development and parameterization. To fully integrate characterization and modeling will require advances in methods for joint inversion of coupled process models that effectively represent nonlinearities, scale effects, and uncertainties. Simulation of multiscale geologic systems for ultra-long times. Anthropogenic perturbations of subsurface storage systems will occur over decades, but predictions of storage performance will be needed that span hundreds to many thousands of years, time scales that reach far beyond standard engineering practice. Achieving this simulation capability requires a major advance in modeling capability that will accurately couple information across scales, i.e., account for the effects of small-scale processes on larger scales, and the effects of fast processes as well as the ultra-slow evolution on long time scales. Cross-scale modeling of complex dynamic subsurface systems requires the development of new computational and numerical methods of stochastic systems, new multiscale formulations, data integration, improvements in inverse theory, and new methods for optimization. Mineral-water interface complexity and dynamics. Natural materials are structurally complex, with variable composition, roughness, defect content, and organic and mineral coatings. There is an overarching need to interrogate the complex structure and dynamics at mineral-water interfaces with increasing spatial and temporal resolution using existing and emerging experimental and computational approaches. The fundamental objectives are to translate a molecular-scale description of complex mineral surfaces to thermodynamic quantities for the purpose of linking with macroscopic models, to follow interfacial reactions in real time, and to understand how minerals grow and dissolve and how the mechanisms couple dynamically to changes at the interface. Nanoparticulate and colloid chemistry and physics. Colloidal particles play critical roles in dispersion of contaminants from energy production, use, or waste isolation sites. New advances are needed in characterization of colloids, sampling technologies, and conceptual models for reactivity, fate, and transport of colloidal particles in aqueous environments. Specific advances will be needed in experimental techniques to characterize colloids at the atomic level and to build quantitative models of their properties and reactivity. 
Dynamic imaging of flow and transport. Improved imaging in the subsurface is needed to allow in situ multiscale measurement of state variables as well as flow, transport, fluid age, and reaction rates. Specific research needs include development of smart tracers, identification of environmental tracers that would allow age-dating of fluids in the 50-3000 year range, methods for measuring state variables such as pressure and temperature continuously in space and time, and better models for the interactions of physical fields, elastic waves, or electromagnetic perturbations with fluid-filled porous media. Transport properties and in situ characterization of fluid trapping, isolation, and immobilization. Mechanisms of immobilization of injected CO2 include buoyancy trapping of fluids by geologic seals, capillary trapping of fluid phases as isolated bubbles within rock pores, and sorption of CO2 or radionuclides on solid surfaces. Specific advances will be needed in our ability to understand and represent the interplay of interfacial tension, surface properties, buoyancy, the state of stress, and rock heterogeneity in the subsurface. Fluid-induced rock deformation. CO2 injection affects the thermal, mechanical, hydrological, and chemical state of large volumes of the subsurface. Accurate forecasting of the effects requires improved understanding of the coupled stress-strain and flow response to injection-induced pressure and hydrologic perturbations in multiphase-fluid saturated systems. Such effects manifest themselves as changes in rock properties at the centimeter scale, mechanical deformation at meter-to-kilometer scales, and modified regional fluid flow at scales up to 100 km. Predicting the hydromechanical properties of rocks over this scale range requires improved models for the coupling of chemical, mechanical, and hydrological effects. Such models could revolutionize our ability to understand shallow crustal deformation related to many other natural processes and engineering applications. Biogeochemistry in extreme subsurface environments. Microorganisms strongly influence the mineralogy and chemistry of geologic systems. CO2 and nuclear material isolation will perturb the environments for these microorganisms significantly. Major advances are needed to describe how populations of microbes will respond to the extreme environments of temperature, pH, radiation, and chemistry that will be created, so that a much clearer picture of biogenic products, potential for corrosion, and transport or immobilization of contaminants can be assembled. The microscopic basis of macroscopic complexity. Classical continuum mechanics relies on the assumption of a separation between the length scales of microscopic fluctuations and macroscopic motions. However, in geologic problems this scale separation often does not exist. There are instead fluctuations at all scales, and the resulting macroscopic behavior can then be quite complex. The essential need is to develop a scientific basis for these "emergent" macroscopic phenomena in terms of the underlying microscopic processes. Highly reactive subsurface materials and environments. The emplacement of energy system byproducts into geological repositories perturbs temperature and pressure, imposes chemical gradients, creates intense radiation fields, and can cause reactions that alter the minerals, pore fluids, and emplaced materials. Strong interactions between the geochemical environment and emplaced materials are expected.
New insight is needed on equilibria in compositionally complex systems, reaction kinetics in concentrated aqueous and other solutions, reaction kinetics under near-equilibrium undersaturated and supersaturated conditions, and transient reaction kinetics. Thermodynamics of the solute-to-solid continuum. Reactions involving solutes, colloids, particles, and surfaces control the transport of chemical constituents in the subsurface environment. A rigorous structural, kinetic, and thermodynamic description of the complex chemical reality between the molecular and the macroscopic scale is a fundamental scientific challenge. Advanced techniques are needed for characterizing particles in the nanometer-to-micrometer size range, combined with a new description of chemical thermodynamics that does not rely on a sharp distinction between solutes and solids. The Grand Challenges, Priority Research Directions, and Crosscutting Issues described in this report define a science-based approach to understanding the long-term behavior of subsurface geologic systems in which anthropogenic CO2 and nuclear materials could be stored. The research areas are rich with opportunities to build fundamental knowledge of the physics, chemistry, and materials science of geologic systems that will have impacts well beyond the specific applications. The proposed research is based on development of a new level of understanding—physical, chemical, biological, mathematical, and computational—of processes that happen at the microscopic scale of atoms, molecules and mineral surfaces, and how those processes translate to material behavior over large length scales and on ultra-long time scales. Addressing the basic science issues described would revolutionize our ability to understand, simulate, and monitor all of the subsurface settings in which transport is critical, including the movement of contaminants, the emplacement of minerals, and the management of aquifers. The results of the research will have a wide range of implications, from physics and chemistry to materials science, biology, and earth science.

Basic Research Needs for Clean and Efficient Combustion of 21st Century Transportation Fuels

This report is based on a BES Workshop on Clean and Efficient Combustion of 21st Century Transportation Fuels, October 29-November 1, 2006, to identify basic research needs and opportunities underlying utilization of evolving transportation fuels, with a focus on new or emerging science challenges that have the potential for significant long-term impact on fuel efficiency and emissions. From the invention of the wheel, advances in transportation have increased the mobility of humankind, enhancing the quality of life and altering our very perception of time and distance. Early carts and wagons driven by human or animal power allowed the movement of people and goods in quantities previously thought impossible. With the rise of steam power, propeller-driven ships and railroad locomotives shrank the world as never before. Ocean crossings were no longer at the whim of the winds, and continental crossings went from grand adventures to routine, scheduled outings. The commercialization of the internal combustion engine at the turn of the twentieth century brought about a new, and very personal, revolution in transportation, particularly in the United States. Automobiles created an unbelievable freedom of movement: A single person could travel to any point in the country in a matter of days, on a schedule of his or her own choosing.
Suburbs were built on the promise of cheap, reliable, personal transportation. American industry grew to depend on internal combustion engines to produce and transport goods, and farmers increased yields and efficiency by employing farm machinery. Airplanes, powered by internal combustion engines, shrank the world to the point where a trip between almost any two points on the globe is now measured not in days or months, but in hours. Transportation is the second largest consumer of energy in the United States, accounting for nearly 60% of our nation's use of petroleum, an amount equivalent to all of the oil imported into the U.S. The numbers are staggering—the transport of people and goods within the U.S. burns almost one million gallons of petroleum each minute of the day. Our Founding Fathers may not have foreseen freedom of movement as an inalienable right, but Americans now view it as such. Knowledge is power, a maxim that is literally true for combustion. In our global, just-in-time economy, American competitiveness and innovation require an affordable, diverse, stable, and environmentally acceptable energy supply. Currently 85% of our nation's energy comes from hydrocarbon sources, including natural gas, petroleum, and coal; 97% of transportation energy derives from petroleum, essentially all from combustion in gasoline engines (65%), diesel engines (20%), and jet turbines (12%). The monolithic nature of transportation technologies offers the opportunity for improvements in efficiency of 25-50% through strategic technical investment in advanced fuel/engine concepts and devices. This investment is not a matter of choice but an economic, geopolitical, and environmental necessity. The reality is that the internal combustion engine will remain the primary driver of transport for the next 30-50 years, whether or not one believes that the peak in oil is past or imminent, or that hydrogen-fueled and electric vehicles will power transport in the future, or that geopolitical tensions will ease through international cooperation. Rational evaluation of U.S. energy security must include careful examination of how we achieve optimally efficient and clean combustion of precious transportation fuels in the 21st century.

The Basic Energy Sciences Workshop on Clean and Efficient Combustion of 21st Century Transportation Fuels

Our historic dependence on light, sweet crude oil for our transportation fuels will draw to a close over the coming decades as finite resources are exhausted. New fuel sources, with differing characteristics, are emerging to displace crude oil. As these new fuel streams enter the market, a series of new engine technologies is also under development, promising improved efficiency and cleaner combustion. To date, however, a coordinated strategic effort to match future fuels with evolving engines is lacking. To provide the scientific foundation to enable technology breakthroughs in transportation fuel utilization, the Office of Basic Energy Sciences in the U.S. Department of Energy (DOE) convened the Workshop on Basic Research Needs for Clean and Efficient Combustion of 21st Century Transportation Fuels from October 30 to November 1, 2006. This report is a summary of that Workshop. It reflects the collective output of the Workshop participants, which included over 80 leading scientists and engineers representing academia, industry, and national laboratories in the United States and Europe.
Researchers specializing in basic science and technological applications were well represented, producing a stimulating and engaging forum. Workshop planning and execution involved advance coordination with the DOE Office of Energy Efficiency and Renewable Energy's FreedomCAR and Vehicle Technologies program, which manages applied research and development of transportation technologies. Priority research directions were identified by three panels, each made up of a subset of the Workshop attendees and interested observers. The first two panels were differentiated by their focus on engines or fuels and were similar in their strategy of working backward from technology drivers to scientific research needs. The first panel focused on Novel Combustion, as embodied in promising new engine technologies. The second panel focused on Fuel Utilization, inspired by the unique (and largely unknown) challenges of the emerging fuel streams entering the market. The third panel explored crosscutting science themes and identified general gaps in our scientific understanding of 21st-century fuel combustion. Subsequent to the Workshop, co-chairs and panel leads distilled the collective output to produce eight distinct, targeted research areas that advance one overarching grand challenge: to develop a validated, predictive, multi-scale, combustion modeling capability to optimize the design and operation of evolving fuels in advanced engines for transportation applications.

Fuels and Engines

Transportation fuels for automobile, truck and aircraft engines are currently produced by refining petroleum-based sweet crude oil, from which gasoline, diesel fuel and jet fuel are each made with specific physical and chemical characteristics dictated by the type of engine in which they are to be burned. Standardized fuel properties and restricted engine operating domains couple to provide reliable performance. As new fuels derived from oil sands, oil shale, coal, and bio-feedstocks emerge as replacements for light, sweet crude oil, both uncertainties and strategic opportunities arise. Rather than pursue energy-intensive refining of these qualitatively different emerging fuels to match current fuel formulations, we must strive to achieve a "dual revolution" by interdependently advancing both fuel and engine technologies. Spark-ignited gasoline engines equipped with catalytic after-treatment operate cleanly but well below optimal efficiency due to low compression ratios and throttle-plate losses used to control air intake. Diesel engines operate more efficiently at higher compression ratios but operate over broad ranges of fuel/air ratio, thereby producing soot and NOx for which burnout and/or removal can prove problematic. A number of new engine technologies are attempting to overcome these efficiency and emissions compromises. Direct injection gasoline engines operate without throttle plates, increasing efficiency, while retaining the use of a catalytic converter. Ultra-lean, high-pressure, low-temperature diesel combustion seeks to avoid the conditions that form pollutants, while maintaining very high efficiency. A new form of combustion, homogeneous charge compression ignition (HCCI), seeks to combine the best of diesel and gasoline engines. HCCI employs a premixed fuel-air charge that is ignited by compression, with the ignition timing controlled by in-cylinder fuel chemistry. Each of these advanced combustion strategies must permit and even exploit fuel flexibility as the 21st-century fuel stream matures.
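To make the HCCI point above more tangible, the sketch below combines a single-step Arrhenius ignition-delay correlation with the classical Livengood-Wu integral over an idealized polytropic compression, so that the predicted ignition timing emerges purely from the assumed fuel chemistry. All constants (the correlation parameters A, n, and Ea, the intake state, and the engine geometry) are hypothetical placeholders chosen only to make the example run; they are not values from the report or for any specific fuel.

```python
"""
Illustrative sketch of compression-ignition timing from in-cylinder chemistry:
a single-step Arrhenius ignition-delay correlation plus the Livengood-Wu
integral over an idealized polytropic compression.  All constants are
made-up placeholders, not data from the report.
"""
import math

# Hypothetical ignition-delay correlation: tau = A * p^-n * exp(Ea / (R*T))
A, n, Ea = 1.0e-7, 1.0, 120.0e3      # s*bar^n, -, J/mol (illustrative)
R = 8.314                            # J/(mol K)

def tau_ignition(p_bar, T_K):
    """Ignition delay in seconds from the assumed correlation."""
    return A * p_bar ** (-n) * math.exp(Ea / (R * T_K))

# Idealized single-zone compression from bottom dead center (BDC) to top dead
# center (TDC): polytropic relations and a simplified slider-crank volume.
T0, p0 = 450.0, 1.0                  # K, bar at BDC (hypothetical hot, boosted charge)
gamma, CR, rpm = 1.35, 16.0, 1500.0
omega = rpm / 60.0 * 2.0 * math.pi   # crank speed, rad/s

def vol_over_vtdc(theta):
    """Cylinder volume relative to TDC volume (rod-length effects ignored)."""
    return 1.0 + 0.5 * (CR - 1.0) * (1.0 - math.cos(theta))

theta, dt, lw_integral = -math.pi, 1.0e-6, 0.0   # start at BDC (-180 deg)
while lw_integral < 1.0 and theta < 0.0:
    v_rel = vol_over_vtdc(theta) / CR            # V / V_BDC
    T = T0 * v_rel ** (1.0 - gamma)              # polytropic temperature
    p = p0 * v_rel ** (-gamma)                   # polytropic pressure
    lw_integral += dt / tau_ignition(p, T)       # Livengood-Wu criterion: ignite at 1
    theta += omega * dt

if lw_integral >= 1.0:
    print(f"Ignition predicted {-math.degrees(theta):.1f} deg before TDC")
else:
    print("No ignition before TDC with these illustrative constants")
```

Because the integrand depends exponentially on temperature, small changes in the assumed chemistry shift the predicted ignition angle strongly, which is precisely why HCCI timing control demands quantitative fuel-chemistry models.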
The opportunity presented by new fuel sources and advanced engine concepts offers such an overwhelming design and operation parameter space that only those technologies that build upon a predictive science capability will likely mature to a product within a useful timeframe.

Research Directions

The Workshop identified a single, overarching grand challenge: The development of a validated, predictive, multi-scale, combustion modeling capability to optimize the design and operation of evolving fuels in advanced engines for transportation applications. A broad array of discovery research and scientific inquiry that integrates experiment, theory, modeling and simulation will be required. This predictive capability, if attained, will change fundamentally the process for fuels research and engine development by establishing a scientific understanding of sufficient depth and flexibility to facilitate realistic simulation of fuel combustion in existing and proposed engines. Similar understanding in aeronautics has produced the beautiful and efficient complex curves of modern aircraft wings. These designs could never have been realized through cut-and-try engineering, but rather rely on the prediction and optimization of complex air flows. An analogous experimentally validated, predictive capability for combustion is a daunting challenge for numerous reasons: (1) spatial scales of importance range from the dimensions of the atom up to that of an engine piston; (2) the combustion chemistry of 21st-century fuels is astonishingly complex with hundreds of different fuel molecules and many thousands of possible reactions contributing to the oxidative release of energy stored in chemical bonds—chemical details also dictate emissions profiles, engine knock conditions and, for HCCI, ignition timing (a toy kinetics sketch at the end of this summary illustrates the numerical stiffness such chemistry produces); (3) evolving engine designs will operate under dilute conditions at very high pressures and compression ratios—we possess neither sufficient concepts nor experimental tools to address these new operating conditions; (4) turbulence, transport, and radiative phenomena have a profound impact on local chemistry in most combustion media but are poorly understood and extremely challenging to characterize; (5) even assuming optimistic growth in computing power for existing and envisioned architectures, combustion phenomena are and will remain too complex to simulate in their complete detail, and methods that condense information and accurately propagate uncertainties across length and time scales will be required to optimize fuel/engine design and operation. Eight priority research directions, each of which focuses on crucial elements of the overarching grand challenge, are cited as most critical to the path forward by the Workshop participants. In addition to the unifying grand challenge and specific priority research directions, the Workshop produced a keen sense of urgency and opportunity for the development of revolutionary combustion technology for transportation based upon fundamental combustion science. Internal combustion engines are often viewed as mature technology, developed in an Edisonian fashion over a hundred years. The participants at the Workshop were unanimous in their view that only through the achievable goal of truly predictive combustion science will the engines of the 21st century realize unparalleled efficiency and cleanliness in the challenging environment of changing fuel streams.
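One way to see why detailed fuel chemistry strains even state-of-the-art solvers, as noted in point (2) above, is that elementary reaction rates in a single mechanism span many orders of magnitude, making the governing equations numerically stiff. The toy two-step mechanism below is invented purely for illustration (real surrogate-fuel mechanisms couple hundreds of species and thousands of reactions); it shows the standard remedy of using an implicit, stiff ODE integrator.

```python
"""
Toy illustration of stiffness in chemical kinetics: a two-step mechanism with
rate constants differing by six orders of magnitude, integrated with an
implicit (BDF) solver.  The mechanism and rates are invented for illustration.
"""
from scipy.integrate import solve_ivp

k_fast, k_slow = 1.0e6, 1.0          # 1/s, widely separated rate constants

def rhs(t, y):
    a, b, c = y
    return [-k_fast * a,              # A -> B  (fast, radical-like step)
            k_fast * a - k_slow * b,  # B -> C  (slow, product-forming step)
            k_slow * b]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0], method="BDF",
                rtol=1e-8, atol=1e-12)
print(f"implicit (BDF) solver: {sol.t.size} time points, final C = {sol.y[2, -1]:.4f}")
# An explicit solver would be forced to step on the order of 1/k_fast
# (about a microsecond) over the entire 5 s window purely for stability,
# i.e., millions of steps for even this trivial mechanism.
```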
Basic Research Needs for Advanced Nuclear Energy Systems

This report is based on a BES Workshop on Advanced Nuclear Energy Systems, July 31-August 3, 2006, to identify new, emerging, and scientifically challenging areas in materials and chemical sciences that have the potential for significant impact on advanced nuclear energy systems. The global utilization of nuclear energy has come a long way from its humble beginnings in the first self-sustaining nuclear chain reaction at the University of Chicago in 1942. Today, there are over 440 nuclear reactors in 31 countries producing approximately 16% of the electrical energy used worldwide. In the United States, 104 nuclear reactors currently provide 19% of electrical energy used nationally. The International Atomic Energy Agency projects significant growth in the utilization of nuclear power over the next several decades due to increasing demand for energy and environmental concerns related to emissions from fossil plants. There are 28 new nuclear plants currently under construction, including 10 in China, 8 in India, and 4 in Russia. In the United States, there have been notifications to the Nuclear Regulatory Commission of intentions to apply for combined construction and operating licenses for 27 new units over the next decade. The projected growth in nuclear power has focused increasing attention on issues related to the permanent disposal of nuclear waste, the proliferation of nuclear weapons technologies and materials, and the sustainability of a once-through nuclear fuel cycle. In addition, the effective utilization of nuclear power will require continued improvements in nuclear technology, particularly related to safety and efficiency. In all of these areas, the performance of materials and chemical processes under extreme conditions is a limiting factor. The related basic research challenges represent some of the most demanding tests of our fundamental understanding of materials science and chemistry, and they provide significant opportunities for advancing basic science with broad impacts for nuclear reactor materials, fuels, waste forms, and separations techniques. Of particular importance is the role that new nanoscale characterization and computational tools can play in addressing these challenges. These tools, which include DOE synchrotron X-ray sources, neutron sources, nanoscale science research centers, and supercomputers, offer the opportunity to transform and accelerate the fundamental materials and chemical sciences that underpin technology development for advanced nuclear energy systems. The fundamental challenge is to understand and control chemical and physical phenomena in multi-component systems from femtoseconds to millennia, at temperatures to 1000°C, and for radiation doses to hundreds of displacements per atom (dpa). This is a scientific challenge of enormous proportions, with broad implications in the materials science and chemistry of complex systems. New understanding is required for microstructural evolution and phase stability under relevant chemical and physical conditions, chemistry and structural evolution at interfaces, chemical behavior of actinide and fission-product solutions, and nuclear and thermo-mechanical phenomena in fuels and waste forms. First-principles approaches are needed to describe f-electron systems, design molecules for separations, and explain materials failure mechanisms.
Nanoscale synthesis and characterization methods are needed to understand and design materials and interfaces with radiation, temperature, and corrosion resistance. Dynamical measurements are required to understand fundamental physical and chemical phenomena. New multiscale approaches are needed to integrate this knowledge into accurate models of relevant phenomena and complex systems across multiple length and time scales. The Department of Energy (DOE) Workshop on Basic Research Needs for Advanced Nuclear Energy Systems was convened in July 2006 to identify new, emerging, and scientifically challenging areas in materials and chemical sciences that have the potential for significant impact on advanced nuclear energy systems. Sponsored by the DOE Office of Basic Energy Sciences (BES), the workshop provided recommendations for priority research directions and crosscutting research themes that underpin the development of advanced materials, fuels, waste forms, and separations technologies for the effective utilization of nuclear power. A total of 235 invited experts from 31 universities, 11 national laboratories, 6 industries, 3 government agencies, and 11 foreign countries attended the workshop. The workshop was the sixth in a series of BES workshops focused on identifying basic research needs to overcome short-term showstoppers and to formulate long-term grand challenges related to energy technologies. These workshops have followed a common format that includes the development of a technology perspectives resource document prior to the workshop, a plenary session including invited presentations from technology and research experts, and topical panels to determine basic research needs and recommended research directions. Reports from the workshops are available on the BES website. The workshop began with a plenary session of invited presentations from national and international experts on science and technology related to nuclear energy. The presentations included nuclear technology, industry, and international perspectives, and an overview of the Global Nuclear Energy Partnership. Frontier research presentations were given on relevant topics in materials science, chemistry, and computer simulation. Following the plenary session, the workshop divided into six panels: Materials under Extreme Conditions, Chemistry under Extreme Conditions, Separations Science, Advanced Actinide Fuels, Advanced Waste Forms, and Predictive Modeling and Simulation. In addition, there was a crosscut panel that looked for areas of synergy across the six topical panels. The panels were composed of basic research leaders in the relevant fields from universities, national laboratories, and other institutions. In advance of the workshop, panelists were provided with a technology perspectives resource document that described the technology and applied R&D needs for advanced nuclear energy systems. In addition, technology experts were assigned to each of the panels to ensure that the basic research discussions were informed by a current understanding of technology issues. The panels were charged with defining the state of the art in their topical research area, describing the related basic research challenges that must be overcome to provide breakthrough technology opportunities, and recommending basic research directions to address these challenges.
These basic research challenges and recommended research directions were consolidated into Scientific Grand Challenges, Priority Research Directions, and Crosscutting Research Themes. These results are summarized below and described in detail in the full report.

Scientific Grand Challenges

Scientific Grand Challenges represent barriers to fundamental understanding that, if overcome, could transform the related scientific field. Historical examples of scientific grand challenges with far-reaching scientific and technological impacts include the structure of DNA, the understanding of quantum behavior, and the explanation of nuclear fission. Theoretical breakthroughs and new experimental capabilities are often key to addressing these challenges. In advanced nuclear energy systems, scientific grand challenges focus on the fundamental materials and chemical sciences that underpin the performance of materials and processes under extreme conditions of radiation, temperature, and corrosive environments. Addressing these challenges offers the potential of revolutionary new approaches to developing improved materials and processes for nuclear applications. The workshop identified the following three Scientific Grand Challenges. Resolving the f-electron challenge to master the chemistry and physics of actinides and actinide-bearing materials. The introduction of new actinide-based fuels for advanced nuclear energy systems requires new chemical separations strategies and predictive understanding of fuel and waste-form fabrication and performance. However, current computational electronic-structure approaches are inadequate to describe the electronic behavior of actinide materials, and the multiplicity of chemical forms and oxidation states for these elements complicates their behavior in fuels, solutions, and waste forms. Advances in density functional theory as well as in the treatment of relativistic effects are needed in order to understand and predict the behavior of these strongly correlated electron systems. Developing a first-principles, multiscale description of material properties in complex materials under extreme conditions. The long-term stability and mechanical integrity of structural materials, fuels, claddings, and waste forms are governed by the kinetics of microstructure and interface evolution under the combined influence of radiation, high temperature, and stress. Controlling the mechanical and chemical properties of materials under these extreme conditions will require the ability to relate phase stability and mechanical behavior to a first-principles understanding of defect production, diffusion, trapping, and interaction. New synthesis techniques based on the nanoscale design of materials offer opportunities for mitigating the effects of radiation damage through the development and control of nanostructured defect sinks. However, a unified, predictive multiscale theory that couples all relevant time and length scales in microstructure evolution and phase stability must be developed. In addition, fundamental advances are needed in nanoscale characterization, diffusion, thermodynamics, and in situ studies of fracture and deformation. Understanding and designing new molecular systems to gain unprecedented control of chemical selectivity during processing. Advanced separations technologies for nuclear fuel reprocessing will require unprecedented control of chemical selectivity in complex environments.
This control requires the ability to design, synthesize, characterize, and simulate molecular systems that selectively trap and release target molecules and ions with high efficiency under extreme conditions and to understand how mesoscale phenomena such as nanophase behavior and energetics in macromolecular systems impact partitioning. New capabilities in molecular spectroscopy, imaging, and computational modeling offer opportunities for breakthroughs in this area.

Priority Research Directions

Priority Research Directions are areas of basic research that have the highest potential for impact in a specific research or technology area. They represent opportunities that align with scientific grand challenges, emerging research opportunities, and related technology priorities. The workshop identified nine Priority Research Directions for basic research related to advanced nuclear energy systems. Nanoscale design of materials and interfaces that radically extend performance limits in extreme radiation environments. The fundamental understanding of the interaction of defects with nanostructures offers the potential for the design of materials and interfaces that mitigate radiation damage by controlling defect behavior. New research is needed in the design, synthesis, nanoscale characterization, and time-resolved study of nanostructured materials and interfaces that offer the potential to control defect production, trapping, and interaction under extreme conditions. Physics and chemistry of actinide-bearing materials and the f-electron challenge. A robust theory of the electronic structure of actinides will provide an improved understanding of their physical and chemical properties and behavior, leading to opportunities for advances in fuels and waste forms. New advances in exchange and correlation functionals in density functional theory as well as in the treatment of relativistic effects and in software implementation on advanced computer architectures are needed to overcome the challenges of adequately treating the behavior of 4f and 5f electrons, namely, strong correlation, spin-orbit coupling, and multiplet complexity, as well as additional relativistic effects. Advances are needed in the application of these new electronic structure methods for f-element-containing molecules and solids to calculate the properties of defects in multi-component systems, and in the fundamental understanding of related chemical and physical properties at high temperature. Microstructure and property stability under extreme conditions. The predictive understanding of microstructural evolution and property changes under extreme conditions is essential for the rational design of materials for structural, fuels, and waste-form applications. Advances are needed to develop a first-principles understanding of the relationship of defect properties and microstructural evolution to mechanical behavior and phase stability. This will require a closely coupled approach of in situ studies of nanoscale and mechanical behavior with multiscale theory. Mastering actinide and fission product chemistry under all chemical conditions. A more accurate understanding of the electronic structure of the complexes of actinide and fission products will expand our ability to predict their behavior quantitatively under conditions relevant to all stages in fuel reprocessing (separations, dissolution, and stabilization of waste forms) and in new media that are proposed for advanced processing systems.
This knowledge must be supplemented by accurate prediction and manipulation of solvent properties and chemical reactivities in non-traditional separation systems such as modern "tunable" solvent systems. This will require quantitative, fundamental understanding of the mechanisms of solvent tunability, the factors limiting control over solvent properties, the forces driving chemical speciation, and modes of controlling reactions. Basic research needs include f-element electronic structure and bonding, speciation and reactivity, thermodynamics, and solution behavior. Exploiting organization to achieve selectivity at multiple length scales. Harnessing the complexity of organization that occurs at the mesoscale in solution or at interfaces will lead to new separation systems that provide for greatly increased selectivity in the recovery of target species and reduced formation of secondary waste streams through ligand degradation. Research directions include design of ligands and other selectivity agents, expanding the range of selection/release mechanisms, fundamental understanding of phase phenomena and self-assembly in separations, and separations systems employing aqueous solvents. Adaptive material-environment interfaces for extreme chemical conditions. Chemistry at interfaces will play a crucial role in the fabrication, performance, and stability of materials in almost every aspect of Advanced Nuclear Energy Systems, from fuel, claddings, and pressure vessels in reactors to fuel reprocessing and separations, and ultimately to long-term waste storage. Revolutionary advances in the understanding of interfacial chemistry of materials through developments in new modeling and in situ experimental techniques offer the ability to design material interfaces capable of providing dynamic, universal stability over a wide range of conditions and with much greater "self-healing" capabilities. Achieving the necessary scientific advances will require moving beyond interfacial chemistry in ultra-high-vacuum environments to the development of in situ techniques for monitoring the chemistry at fluid/solid and solid/solid interfaces under conditions of high pressure and temperature and harsh chemical environments. Fundamental effects of radiation and radiolysis in chemical processes. The reprocessing of nuclear fuel and the storage of nuclear waste present environments that include substantial radiation fields. A predictive understanding of how intense radiation, high temperatures, and extremes of acidity and redox potential affect chemical speciation is required to enable efficient, targeted separations processes and effective storage of nuclear waste. In particular, the effect of radiation on the chemistries of ligands, ionic liquids, polymers, and molten salts is poorly understood. There is a need for an improved understanding of the fundamental processes that affect the formation of radicals and ultimately control the accumulation of radiation-induced damage to separation systems and waste forms. Fundamental thermodynamics and kinetic processes in multi-component systems for fuel fabrication and performance. The fabrication and performance of advanced nuclear fuels, particularly those containing the minor actinides, is a significant challenge that requires a fundamental understanding of the thermodynamics, transport, and chemical behavior of complex materials during processing and irradiation.
Global thermochemical models of complex phases that are informed by ab initio calculations of materials properties and high-throughput predictive models of complex transport and phase segregation will be required for full fuel fabrication and performance calculations. These models, when coupled with appropriate experimental efforts, will lead to significantly improved fuel performance by creating novel tailored fuel forms. Predictive multiscale modeling of materials and chemical phenomena in multi-component systems under extreme conditions. The advent of large-scale (petaflop) simulations will significantly enhance the prospect of probing important molecular-level mechanisms underlying the macroscopic phenomena of solution and interfacial chemistry in actinide-bearing systems and of materials and fuels fabrication, performance, and failure under extreme conditions. There is an urgent need to develop multiscale algorithms capable of efficiently treating systems whose time evolution is controlled by activated processes and rare events. Although satisfactory solutions are lacking, there are promising directions, including accelerated molecular dynamics (MD) and adaptive kinetic Monte Carlo methods, which should be pursued. Many fundamental problems in advanced nuclear energy systems will benefit from multi-physics, multiscale simulation methods that can span time scales from picoseconds to seconds and longer, including fission product transport in nuclear fuels, the evolution of microstructure of irradiated materials, the migration of radionuclides in nuclear waste forms, and the behavior of complex separations media.

Crosscutting Research Themes

Crosscutting Research Themes are research directions that transcend a specific research area or discipline, providing a foundation for progress in fundamental science on a broad front. These themes are typically interdisciplinary, leveraging results from multiple fields and approaches to provide new insights and underpinning understanding. Many of the fundamental science issues related to materials, fuels, waste forms, and separations technologies have crosscutting themes and synergies. The workshop identified four crosscutting basic research themes related to materials and chemical processes for advanced nuclear energy systems: Tailored nanostructures for radiation-resistant functional and structural materials. There is evidence that the design and control of specialized nanostructures and defect complexes can create sinks for radiation-induced defects and impurities, enabling the development of highly radiation-resistant materials. New capabilities in the synthesis and characterization of materials with controlled nanoscale structure offer opportunities for the development of tailored nanostructures for structural applications, fuels, and waste forms. This approach crosscuts advanced materials synthesis and processing, radiation effects, nanoscale characterization, and simulation. Solution and solid-state chemistry of 4f and 5f electron systems. Advances in the basic science of 4f and 5f electron systems in materials and solutions offer the opportunity to extend condensed matter physics and reaction chemistry on a broad front, including applications that impact the development of nuclear fuels, waste forms, and separations technologies. This is a key enabling science for the fundamental understanding of actinide-bearing materials and solutions.
Physics and chemistry at interfaces and in confined environments. Controlling the structure and composition of interfaces is essential to ensuring the long-term stability of reactor materials, fuels, and waste forms. The fundamental understanding of interface science and related transport and chemical phenomena in extreme environments crosscuts many science and technology areas. New computational and nanoscale structure and dynamics measurement tools offer significant opportunities for advancing interface science with broad impacts on the predictive design of advanced materials and processes for nuclear energy applications. Physical and chemical complexity in multi-component systems. Advanced fuels, waste forms, and separations technologies are highly interactive, multi-component systems. A fundamental understanding of these complex systems and related structural and phase stability and chemical reactivity under extreme conditions is needed to develop and predict the performance of materials and separations processes in advanced nuclear energy systems. This is a challenging problem in complexity with broad implications across science and technology. Taken together, these Scientific Grand Challenges, Priority Research Directions, and Crosscutting Research Themes define the landscape for a science-based approach to the development of materials and chemical processes for advanced nuclear energy systems. Building upon new experimental tools and computational capabilities, they presage a renaissance in fundamental science that underpins the development of materials, fuels, waste forms, and separations technologies for nuclear energy applications. Addressing these basic research needs offers the potential to revolutionize the science and technology of advanced nuclear energy systems by enabling new materials, processes, and predictive modeling, with resulting improvements in performance and reduction in development times. The fundamental research outlined in this report offers an outstanding opportunity to advance the materials, chemical, and computational science of complex systems at multiple length and time scales, furthering both fundamental understanding and the technology of advanced nuclear energy systems.

Basic Research Needs for Solid-State Lighting

This report is based on a BES Workshop on Solid-State Lighting (SSL), May 22-24, 2006, to examine the gap separating current state-of-the-art SSL technology from an energy-efficient, high-quality, and economical SSL technology suitable for general illumination; and to identify the most significant fundamental scientific challenges and research directions that would enable that gap to be bridged. Since fire was first harnessed, artificial lighting has gradually broadened the horizons of human civilization. Each new advance in lighting technology, from fat-burning lamps to candles to gas lamps to the incandescent lamp, has extended our daily work and leisure further past the boundaries of sunlit times and spaces. The incandescent lamp did this so dramatically after its invention in the 1870s that the light bulb became the very symbol of a "good idea." Today, modern civilization as we know it could not function without artificial lighting; artificial lighting is so seamlessly integrated into our daily lives that we tend not to notice it until the lights go out.
Our dependence is even enshrined in daily language: an interruption of the electricity supply is commonly called a "blackout." This ubiquitous resource, however, uses an enormous amount of energy. In 2001, 22% of the nation's electricity, equivalent to 8% of the nation's total energy, was used for artificial light. The cost of this energy to the consumer was roughly $50 billion per year or approximately $200 per year for every person living in the U.S. The cost of this energy to the environment was approximately 130 million tons of carbon emitted into our atmosphere, or about 7% of all the carbon emitted by the U.S. Our increasingly precious energy resources and the growing threat of climate change demand that we reduce the energy and environmental cost of artificial lighting, an essential and pervasive staple of modern life. There is ample room for reducing this energy and environmental cost. The artificial lighting we take for granted is extremely inefficient primarily because all these technologies generate light as a by-product of indirect processes producing heat or plasmas. Incandescent lamps (a heated wire in a vacuum bulb) convert only about 5% of the energy they consume into visible light, with the rest emerging as heat. Fluorescent lamps (a phosphor-coated gas discharge tube, invented in the 1930s) achieve a conversion efficiency of only about 20%. These low efficiencies contrast starkly with the relatively high efficiencies of other common building technologies: heating is typically 70% efficient, and electric motors are typically 85 to 95% efficient. About 1.5 billion light bulbs are sold each year in the U.S. today, each one an engine for converting the earth's precious energy resources mostly into waste heat, pollution, and greenhouse gases. There is no physical reason why a 21st century lighting technology should not be vastly more efficient, thereby vastly reducing our energy consumption. If a 50%-efficient technology were to exist and be extensively adopted, it would reduce energy consumption in the U.S. by about 620 billion kilowatt-hours per year by the year 2025 and eliminate the need for about 70 nuclear plants, each generating a billion watts of power. Solid-state lighting (SSL) is the direct conversion of electricity to visible white light using semiconductor materials and has the potential to be just such an energy-efficient lighting technology. By avoiding the indirect processes (producing heat or plasmas) characteristic of traditional incandescent and fluorescent lighting, it can work at a far higher efficiency, "taking the heat out of lighting," it might be said. Recently, for example, semiconductor devices emitting infrared light have demonstrated an efficiency of 76%. There is no known fundamental physical barrier to achieving similar (or even higher) efficiencies for visible white light, perhaps approaching 100% efficiency. Despite this tantalizing potential, however, SSL suitable for illumination today has an efficiency that falls short of a perfect 100% by a factor of fifteen. Partly because of this inefficiency, the purchase cost of SSL is too high for the average consumer by a factor of ten to a hundred, and SSL suitable for illumination today has a cost of ownership twenty times higher than that expected for a 100% efficient light source. The reason is that SSL is a dauntingly demanding technology.
To generate light near the theoretical efficiency limit, essentially every electron injected into the material must result in a photon emitted from the device. Furthermore, the voltage required to inject and transport the electrons to the light-emitting region of the device must be no more than that corresponding to the energy of the resulting photon. It is insufficient to generate "simple" white light; the distribution of photon wavelengths must match the spectrum perceived by the human eye to render colors accurately, with no emitted photons outside the visible range. Finally, all of these constraints must be achieved in a single device with an operating lifetime of at least a thousand hours (and preferably ten to fifty times longer), at an ownership cost-of-light comparable to, or lower than, that of existing lighting technology. Where promising demonstrations of higher efficiency exist, they are typically achieved in small devices (to enhance light extraction), at low brightness (to minimize losses), or with low color-rendering quality (overemphasizing yellow and green light, to which the eye is most sensitive). These restrictions lead to a high cost of ownership for high-quality light that would prevent the widespread acceptance of SSL. For example, Cree Research recently (June 2006) demonstrated a 131 lm/W white light device, which translates roughly to 35% efficiency but with relatively low lumen output. With all devices demonstrated to date, a very large gap is apparent between what is achievable today and the 100% (or roughly 375 lm/W) efficiency that should be possible with SSL. Today, we cannot produce white SSL that is simultaneously high in efficiency, low in cost, and high in color-rendering quality. In fact, we cannot get within a factor of ten in either efficiency or cost. Doing so in the foreseeable future will require breakthroughs in technology, stimulated by a fundamental understanding of the science of light-emitting materials. To accelerate the laying of the scientific foundation that would enable such technology breakthroughs, the Office of Basic Energy Sciences in the U.S. Department of Energy (DOE) convened the Workshop on Basic Research Needs for Solid-State Lighting from May 22 to 24, 2006. This report is a summary of that workshop. It reflects the collective output of the workshop attendees, which included 80 scientists representing academia, national laboratories, and industry in the United States, Europe, and Asia. Workshop planning and execution involved advance coordination with the DOE Office of Energy Efficiency and Renewable Energy, Building Technologies program, which manages applied research and development of SSL technologies and the Next Generation Lighting Initiative. The Workshop identified two Grand Challenges, seven Priority Research Directions, and five Cross-Cutting Research Directions. These represent the most specific outputs of the workshop. The Grand Challenges are broad areas of discovery research and scientific inquiry that will lay the groundwork for the future of SSL. The first Grand Challenge aims to change the very paradigm by which SSL structures are designed—moving from serendipitous discovery towards rational design. The second Grand Challenge aims to understand and control the essential roadblock to SSL—the microscopic pathways through which losses occur as electrons produce light. Rational Design of SSL Structures.
Many materials must be combined in order to form a light-emitting device, each individual material working in concert with the others to control the flow of electrons so that all their energy produces light. Today, novel light-emitting and charge-transporting materials tend to be discovered rather than designed "with the end in mind." To approach 100% efficiency, fundamental building blocks should be designed so they work together seamlessly, but such a design process will require much greater insight than we currently possess. Hence, our aim is to understand light-emitting organic and inorganic (and hybrid) materials and nanostructures at a fundamental level to enable the rational design of low-cost, high-color-quality, near-100% efficient SSL structures from the ground up. The anticipated results are tools for rational, informed exploration of technology possibilities; and insights that open the door to as-yet-unimagined ways of creating and using artificial light. Controlling Losses in the Light-Emission Process. The key to high efficiency SSL is using electrons to produce light but not heat. That this does not occur in today's SSL structures stems from the abundance of decay pathways that compete with light emission for electronic excitations in semiconductors. Hence, our aim is to discover and control the materials and nanostructure properties that mediate the competing conversion of electrons to light and heat, enabling the conversion of every injected electron into useful photons. The anticipated results are ultra-high-efficiency light-emitting materials and nanostructures, and a deep scientific understanding of how light interacts with matter, with broad impact on science and technology areas beyond SSL. The Priority and Cross-Cutting Research Directions are narrower areas of discovery research and use-inspired basic research targeted at a particular materials set or at a particular area of scientific inquiry believed to be central to one or more roadblocks in the path towards future SSL technology. These Research Directions also support one or both Grand Challenges. The Research Directions were identified by three panels, each of which was comprised of a subset of the workshop attendees and interested observers. The first two panels, which identified the Priority Research Directions, were differentiated by choice of materials set. The first, LED Science, focused on inorganic light-emitting materials such as the Group III nitrides, oxides, and novel oxychalcogenides. The second, OLED Science, considered organic materials that are carbon-based molecular, polymeric, or dendrimeric compounds. The third panel, which identified the Cross-Cutting Research Directions, explored crosscutting and novel materials science and optical physics themes such as light extraction from solids, hybrid organic-inorganic and unconventional materials, and light-matter interactions. LED Science. Single-color, inorganic, light-emitting diodes (LEDs) are already widely used and are bright, robust, and long-lived. The challenge is to achieve white-light emission with high-efficiency and high-color rendering quality at acceptable cost while maintaining these advantages. The bulk of current research focuses on the Group III-nitride materials. Our understanding of how these materials behave and can be controlled has advanced significantly in the past decade, but significant scientific mysteries remain.
These include (1) determining whether there are as-yet undiscovered or undeveloped materials that may offer significant advantages over current materials; (2) understanding and optimizing ways of generating white light from other wavelengths; (3) determining the role of piezoelectric and polar effects throughout the device but particularly at interfaces; and (4) understanding the basis for some of the peculiarities of the nitrides, the dominant inorganic SSL materials today, such as their apparent tolerance of high defect densities, and the difficulty of realizing efficient light emission at all visible wavelengths. OLED Science. Organic light-emitting devices (OLEDs) based on polymeric or molecular thin films have been under development for about two decades, mostly for applications in flat-panel displays, which are just beginning to achieve commercial success. They have a number of attractive properties for SSL, including ease (and potential affordability) of processing and the ability to tune device properties via chemical modification of the molecular structure of the thin film components. This potential is coupled with challenges that have so far prevented the simultaneous achievement of high brightness at high efficiency and long device lifetime. Organic thin films are often structurally complex, and thin films that were long considered "amorphous" can exhibit order on the molecular (nano) scale. Research areas of particularly high priority include (1) quantifying local order and understanding its role in the charge transport and light-emitting properties of organic thin films, (2) developing the knowledge and expertise to synthesize and characterize organic compounds at a level of purity approaching that of inorganic semiconductors, and understanding the influence of various low-level impurities on device properties in order to control materials degradation under SSL-relevant conditions, and (3) understanding the complex interplay of effects among the many individual materials and layers in an OLED to enable an integrated approach to OLED design. Cross-Cutting and Novel Materials Science and Optical Physics. Some areas of scientific research are relevant to all materials systems. While research on inorganic and organic materials has thus far proceeded independently, the optimal material system and device architecture for SSL may be as yet undiscovered and, furthermore, may require the integration of both classes of materials in a single system. Research directions that could enable new materials and architectures include (1) the design, synthesis, and integration of novel, nanoscale, heterogeneous building blocks, such as functionalized carbon nanotubes or quantum dots, with properties optimized for SSL, (2) the development of innovative architectures to control the flow of energy in a light-emitting material to maximize the efficiency of light extraction, (3) the exploitation of strong coupling between light and matter to increase the quality and efficiency of emitted light, (4) the development of multiscale modeling techniques extending from the atomic or molecular scale to the device and system scale, and (5) the development and use of new experimental, theoretical, and computational tools to probe and understand the fundamental properties of SSL materials at the smallest scales of length and time. The workshop participants enthusiastically concluded that the time is ripe for new fundamental science to beget a revolution in lighting technology.
SSL sources based on organic and inorganic materials have reached a level of efficiency where it is possible to envision their use for general illumination. The research areas articulated in this report are targeted to enable disruptive advances in SSL performance and realization of this dream. Broad penetration of SSL technology into the mass lighting market, accompanied by vast savings in energy usage, requires nothing less. These new "good ideas" will be represented not by light bulbs, but by an entirely new lighting technology for the 21st century and a bright, energy-efficient future indeed.

Basic Research Needs for Superconductivity

This report is based on a BES Workshop on Superconductivity, May 8-10, 2006, to examine the prospects for superconducting grid technology and its potential for significantly increasing grid capacity, reliability, and efficiency to meet the growing demand for electricity over the next century. As an energy carrier, electricity has no rival with regard to its environmental cleanliness, flexibility in interfacing with multiple production sources and end uses, and efficiency of delivery. In fact, the electric power grid was named "the greatest engineering achievement of the 20th century" by the National Academy of Engineering. This grid, a technological marvel ingeniously knitted together from local networks growing out from cities and rural centers, may be the biggest and most complex artificial system ever built. However, the growing demand for electricity will soon challenge the grid beyond its capability, compromising its reliability through voltage fluctuations that crash digital electronics, brownouts that disable industrial processes and harm electrical equipment, and power failures like the North American blackout in 2003 and subsequent blackouts in London, Scandinavia, and Italy in the same year. The North American blackout affected 50 million people and caused approximately $6 billion in economic damage over the four days of its duration. Superconductivity offers powerful new opportunities for restoring the reliability of the power grid and increasing its capacity and efficiency. Superconductors are capable of carrying current without loss, making the parts of the grid they replace dramatically more efficient. Superconducting wires carry up to five times the current carried by copper wires that have the same cross section, thereby providing ample capacity for future expansion while requiring no increase in the number of overhead access lines or underground conduits. Their use is especially attractive in urban areas, where replacing copper with superconductors in power-saturated underground conduits avoids expensive new underground construction. Superconducting transformers cut the volume, weight, and losses of conventional transformers by a factor of two and do not require the contaminating and flammable transformer oils that violate urban safety codes. Unlike traditional grid technology, superconducting fault current limiters are smart. They increase their resistance abruptly in response to overcurrents from faults in the system, thus limiting the overcurrents and protecting the grid from damage. They react fast in both triggering and automatically resetting after the overload is cleared, providing a new, self-healing feature that enhances grid reliability.
Superconducting reactive power regulators further enhance reliability by instantaneously adjusting reactive power for maximum efficiency and stability in a compact and economic package that is easily sited in urban grids. Not only do superconducting motors and generators cut losses, weight, and volume by a factor of two, but they are also much more tolerant of voltage sag, frequency instabilities, and reactive power fluctuations than their conventional counterparts. The challenge facing the electricity grid to provide abundant, reliable power will soon grow to crisis proportions. Continuing urbanization remains the dominant historic demographic trend in the United States and in the world. By 2030, nearly 90% of the U.S. population will reside in cities and suburbs, where increasingly strict permitting requirements preclude bringing in additional overhead access lines, underground cables are saturated, and growth in power demand is highest. The power grid has never faced a challenge so great or so critical to our future productivity, economic growth, and quality of life. Incremental advances in existing grid technology are not capable of solving the urban power bottleneck. Revolutionary new solutions are needed — the kind that come only from superconductivity. The Basic Energy Sciences Workshop on Superconductivity The Basic Energy Sciences (BES) Workshop on Superconductivity brought together more than 100 leading scientists from universities, industry, and national laboratories in the United States, Europe, and Asia. Basic and applied scientists were generously represented, creating a valuable and rare opportunity for mutual creative stimulation. Advance planning for the workshop involved two U.S. Department of Energy offices: the Office of Electricity Delivery and Energy Reliability, which manages research and development for superconducting technology, and the Office of Basic Energy Sciences, which manages basic research on superconductivity. Performance of superconductors The workshop participants found that superconducting technology for wires, power control, and power conversion had already passed the design and demonstration stages. The discovery of copper oxide superconductors in 1986 was a landmark event, bringing forth a new generation of superconducting materials with transition temperatures of 90 K or above, which allow cooling with inexpensive liquid nitrogen or mechanical cryocoolers. Cables, transformers, and rotating machines using first-generation (1G) wires based on Bi2Sr2Ca2Cu3Ox allowed new design principles and performance standards to be established that enabled superconducting grid technology to compete favorably with traditional copper devices. The early 2000s saw a paradigm shift to second-generation (2G) wires based on YBa2Cu3O7 that use a very different materials architecture; these have the potential for better performance over a larger operating range with respect to temperature and magnetic field. 2G wires have advanced rapidly; their current-carrying ability has increased by a factor of 10, and their usable length has increased to 300 meters, compared with only a few centimeters five years ago. While 2G superconducting wires now considerably outperform copper wires in their capacity for and efficiency in transporting current, significant gaps in their performance improvements remain. 
The alternating-current (ac) losses in superconductors are a major source of heat generation and refrigeration costs; these costs decline significantly as the maximum lossless current-carrying capability increases. For the same operating current, a tenfold increase in the maximum current-carrying capability of the wire cuts the heat generated as a result of ac losses by the same factor of 10. For transporting current on the grid, an order-of-magnitude increase in current-carrying capability is needed to reduce the operational cost of superconducting lines and cables to competitive levels. Transformers, fault current limiters, and rotating machinery all contain coils of superconducting wire that create magnetic fields essential to their operation. 2G wires carry significantly less current in magnetic fields as small as 0.1 to 0.5 T, which are found in transformers and fault current limiters, and in fields of 3 to 5 T, which are needed for motors and generators. The fundamental factors that limit the current-carrying performance of 2G wires in magnetic fields must be understood and overcome to produce a five- to tenfold increase in their performance rating. Increasing the current-carrying capability of superconductors requires blocking the motion of "Abrikosov vortices" — nanoscale tubes of magnetic flux that form spontaneously inside superconductors upon exposure to magnetic fields. Vortices are immobilized by artificial defects in the superconducting material that attract the vortices and pin them in place. To pin vortices effectively, an understanding not only of the pinning strength of individual defects for individual vortices but also of the collective effects of many defects interacting with many vortices is needed. The similarities of vortex pinning and flow to glacier flow around rock obstacles, avalanche flow in landslides, and earthquake motion at fault lines are reflected in the colloquial name "vortex matter." To achieve a five- to tenfold increase in vortex pinning and current-carrying ability in superconductors, we must learn how to bridge the scientific gap separating the microscopic behavior of individual vortices and pinning sites in a superconductor from its macroscopic current-carrying ability. Cost of superconductors Although superconducting wires perform significantly better than copper wires in transmitting electricity, their cost is still too high. The cost of manufactured superconducting wires must be reduced by a factor of 10 to 100 to make them competitive with copper. Much of the manufacturing cost arises from the complex architecture of 2G wires, which are made up of a flexible metallic substrate (often of a magnetic material) on which up to seven additional layers must be sequentially deposited while a specific crystalline orientation is maintained from layer to layer. Significant advances in materials science are needed to simplify the architecture and the manufacturing process while maintaining crystalline orientation, flexibility, superconductor composition, and protection from excessive heat if there is an accidental loss of superconductivity. Beyond their manufacturing cost, the operating cost of superconductors must be reduced. Copper wires require no active cooling to operate, while superconductors must be cooled to temperatures of between 50 and 77 K for most applications. The added cost of refrigeration is a significant factor in superconductor operating cost. 
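The factor-of-10 ac-loss argument quoted at the start of this summary can be made concrete with a standard textbook estimate. The sketch below uses the Norris expression for the hysteretic transport loss of a superconductor with an elliptical cross section; the operating current and the two critical-current values are illustrative assumptions, not figures from the report.

```python
import math

MU_0 = 4.0e-7 * math.pi  # vacuum permeability, H/m

def norris_ellipse_loss(i_peak, i_c):
    """Hysteretic transport loss per cycle and per metre of conductor (J/m)
    for an elliptical superconducting wire carrying a sinusoidal current of
    peak value i_peak, with critical current i_c (Norris-type estimate)."""
    i = i_peak / i_c
    return (MU_0 * i_c**2 / math.pi) * ((1.0 - i) * math.log(1.0 - i) + (2.0 - i) * i / 2.0)

I_OP = 100.0  # assumed operating current amplitude, A
baseline = norris_ellipse_loss(I_OP, 1_000.0)   # wire with an assumed Ic of 1 kA
improved = norris_ellipse_loss(I_OP, 10_000.0)  # same wire with 10x the critical current
print(f"loss ratio: {baseline / improved:.1f}")  # ~10
```

In the small-current limit this expression reduces to roughly mu0 * I^3 / (6 * pi * Ic) per cycle, so raising the critical current tenfold at a fixed operating current cuts the hysteretic loss by about the same factor of 10, which is the scaling invoked above.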
Reducing refrigeration costs for future generations of superconducting applications is a major technology driver for the discovery or design of new superconducting materials with higher transition temperatures. Phenomena of superconductivity These achievements and challenges in superconducting technology are matched by equally promising achievements and challenges in the fundamental science of superconductivity. Since 1986, new materials discoveries have pushed the superconducting transition temperature in elements from 12 to 20 K (for Li under pressure), in heavy fermion compounds from 1.5 to 18.5 K (for PuCoGa5), in noncuprate oxides from 13 to 30 K (for Ba1-xKxBiO3), in binary borides from 6 to 40 K (for MgB2), and in graphite intercalation compounds from 4.05 to 11.5 K (for CaC6). In addition, superconductivity has been discovered for the first time in carbon compounds like boron-doped diamond (11 K) and fullerides (up to 40 K for Cs3C60 under pressure), as well as in borocarbides (up to 16.5 K with metastable phases up to 23 K). We are finding that superconductivity, formerly thought to be a rare occurrence in special compounds, is a common behavior of correlated electrons or "electron matter" in materials. As of this writing, fully 55 elements display superconductivity at some combination of temperature and pressure; this number is up from 43 in 1986, an increase of 28%. As the number and classes of materials displaying superconductivity have mushroomed, so also has the variety of pairing mechanisms and symmetries of superconductivity. The superconducting state is built of "Cooper pairs" — composite objects composed of two electrons bound by a pairing mechanism. The spatial relation of the charges in a pair is described by its pairing symmetry. Copper oxides are known to have d-wave pairing symmetry, in contrast to the s-wave pairing of conventional superconductors; Sr2RuO4 and certain organic superconductors appear to be p-wave. Superconductivity has been found close to magnetic order and can either compete against it or coexist with it, suggesting that spin plays a role in the pairing mechanism. Tantalizing glimpses of superconducting-like states at very high temperatures have been seen in the underdoped phase of yttrium barium copper oxide (YBCO), in the form of pseudogaps and of strong transverse electric fields induced by temperature gradients (the "vortex Nernst effect") that typically imply vortex motion. The proliferation of new classes of superconducting materials; of record-breaking transition temperatures in the known classes of superconductors; of unconventional pairing mechanisms and symmetries of superconductivity; and of exotic, superconducting-like features well above the superconducting transition temperature all imply that superconducting electron matter is a far richer field than we suspected even 10 years ago. While there are many fundamental puzzles in this profusion of intriguing effects, the central challenge with the biggest impact is to understand the mechanisms of high-temperature superconductivity. This is difficult precisely because the mechanisms are entangled with these anomalous normal state effects. Such effects are noticeably absent in the normal states of conventional superconductors. In the underdoped copper oxides (as in other complex oxides), there are many signs of highly correlated normal states, like the spontaneous formation of stripes and pseudogaps that exist above the superconducting transition temperature. 
They may be necessary precursors to the high-temperature superconducting state, or perhaps competitors, and it seems clear that an explanation of superconductivity will include these correlated normal states in the same framework. For two decades, theorists have struggled and failed to find a solution, even as experimentalists tantalize them with ever more fascinating anomalous features. The more than 50 superconducting compounds in the copper oxide family demonstrate that the mechanism of superconductivity is robust, and that it is likely to apply widely in nature among other complex metals with highly correlated normal states. Although finding the mechanism is frustratingly difficult, its value, once found, makes the struggle compelling. Research directions The BES Workshop on Superconductivity identified seven "priority research directions" and two "cross-cutting research directions" that capture the promise of revolutionary advances in superconductivity science and technology. The first seven directions set a course for research in superconductivity that will exploit the opportunities uncovered by the workshop panels in materials, phenomena, theory, and applications. These research directions extend the reach of superconductivity to higher transition temperatures and higher current-carrying capabilities, create new families of superconducting materials with novel nanoscale structures, establish fundamental principles for understanding the rich variety of superconducting behavior within a single framework, and develop tools and materials that enable new superconducting technology for the electric power grid that will dramatically improve its capacity, reliability, and efficiency for the coming century. The seven priority research directions identified by the workshop take full advantage of the rapid advances in nanoscale science and technology of the last five years. Superconductivity is ultimately a nanoscale phenomenon. Its two composite building blocks — Cooper pairs mediating the superconducting state and Abrikosov vortices mediating its current-carrying ability — have dimensions ranging from a tenth of a nanometer to a hundred nanometers. Their nanoscale interactions among themselves and with structures of comparable size determine all of their superconducting properties. The continuing development of powerful nanofabrication techniques, by top-down lithography and bottom-up self-assembly, creates promising new horizons for designer superconducting materials with higher transition temperatures and current-carrying ability. Nanoscale characterization techniques with ever smaller spatial and temporal resolution — including aberration-corrected electron microscopy, nanofocused x-ray beams from high-intensity synchrotrons, scanning probe microscopy, and ultrafast x-ray laser spectroscopy — allow us to track the motion of a single vortex interacting with a single pinning defect or to observe Cooper pair making and pair breaking near a magnetic impurity atom. The numerical simulation of superconducting phenomena in confined geometries using computer clusters of a hundred or more nodes allows the interaction of Cooper pairs and Abrikosov vortices with nanoscale boundaries and architectures to be isolated. Understanding these nanoscale interactions with artificial boundaries enables the numerical design of functional superconductors. 
The promise of nanoscale fabrication, characterization, and simulation for advancing the fundamental science of superconductivity and rational design of functional superconducting materials for next-generation grid technology has never been higher. A key outcome of the BES Workshop on Superconductivity has been a strong sense of optimism and awareness of the opportunity that spans the community of participants in the basic and applied sciences. In the last decade, enormous strides have been made in understanding the science of high-temperature superconductivity and exploiting it for electricity production, distribution, and use. The promise of developing a smart, self-healing grid based on superconductors that require no cooling is an inspiring "grand energy challenge" that drives the frontiers of basic science and applied technology. Meeting this 21st century challenge would rival the 20th century achievement of providing electricity for everyone at the flick of a switch. The seven priority and two cross-cutting research directions identified by the workshop participants offer the potential for achieving this challenge and creating a transformational impact on our electric power infrastructure.

The Path to Sustainable Nuclear Energy: Basic and Applied Research Opportunities for Advanced Fuel Cycles

This report is based on a small DOE-sponsored workshop held in September 2005 to identify new basic science that will be the foundation for advances in nuclear fuel-cycle technology in the near term, and for changing the nature of fuel cycles and of the nuclear energy industry in the long term. The goals are to enhance the development of nuclear energy, to maximize energy production in nuclear reactor parks, and to minimize radioactive wastes, other environmental impacts, and proliferation risks. The limitations of the once-through fuel cycle can be overcome by adopting a closed fuel cycle, in which the irradiated fuel is reprocessed and its components are separated into streams that are recycled into a reactor or disposed of in appropriate waste forms. The recycled fuel is irradiated in a reactor, where certain constituents are partially transmuted into heavier isotopes via neutron capture or into lighter isotopes via fission. Fast reactors are required to complete the transmutation of long-lived isotopes. Closed fuel cycles are encompassed by the Department of Energy's Advanced Fuel Cycle Initiative (AFCI), to which basic scientific research can contribute. Two nuclear reactor system architectures can meet the AFCI objectives: a "single-tier" system or a "dual-tier" system. Both begin with light water reactors and incorporate fast reactors. The "dual-tier" systems transmute some plutonium and neptunium in light water reactors and all remaining transuranic elements (TRUs) in a closed-cycle fast reactor. Basic science initiatives are needed in two broad areas:
• Near-term impacts that can enhance the development of either "single-tier" or "dual-tier" AFCI systems, primarily within the next 20 years, through basic research.
Examples:
  • Dissolution of spent fuel, separations of elements for TRU recycling and transmutation
  • Design, synthesis, and testing of inert matrix nuclear fuels and non-oxide fuels
  • Invention and development of accurate on-line monitoring systems for chemical and nuclear species in the nuclear fuel cycle
  • Development of advanced tools for designing reactors with reduced margins and lower costs
• Long-term nuclear reactor development requires basic science breakthroughs:
  • Understanding of materials behavior under extreme environmental conditions
  • Creation of new, efficient, environmentally benign chemical separations methods
  • Modeling and simulation to improve nuclear reaction cross-section data, design new materials and separation systems, and propagate uncertainties within the fuel cycle
  • Improvement of proliferation resistance by strengthening safeguards technologies and decreasing the attractiveness of nuclear materials
A series of translational tools is proposed to advance the AFCI objectives and to bring the basic science concepts and processes promptly into the technological sphere. These tools have the potential to revolutionize the approach to nuclear engineering R&D by replacing lengthy experimental campaigns with a rigorous approach based on modeling, key fundamental experiments, and advanced simulations.

Basic Research Needs for Solar Energy Utilization

This report is based on a BES Workshop on Solar Energy Utilization, April 18-21, 2005, to examine the challenges and opportunities for the development of solar energy as a competitive energy source and to identify the technical barriers to large-scale implementation of solar energy and the basic research directions showing promise to overcome them. World demand for energy is projected to more than double by 2050 and to more than triple by the end of the century. Incremental improvements in existing energy networks will not be adequate to supply this demand sustainably. Finding sufficient supplies of clean energy for the future is one of society's most daunting challenges. Sunlight provides by far the largest of all carbon-neutral energy sources. More energy from sunlight strikes the Earth in one hour (4.3 × 10^20 J) than all the energy consumed on the planet in a year (4.1 × 10^20 J). We currently exploit this solar resource through solar electricity — a $7.5 billion industry growing at a rate of 35-40% per annum — and solar-derived fuel from biomass, which provides the primary energy source for over a billion people. Yet, in 2001, solar electricity provided less than 0.1% of the world's electricity, and solar fuel from modern (sustainable) biomass provided less than 1.5% of the world's energy. The huge gap between our present use of solar energy and its enormous undeveloped potential defines a grand challenge in energy research. Sunlight is a compelling solution to our need for clean, abundant sources of energy in the future. It is readily available, secure from geopolitical tension, and poses no threat to our environment through pollution or to our climate through greenhouse gases. This report of the Basic Energy Sciences Workshop on Solar Energy Utilization identifies the key scientific challenges and research directions that will enable efficient and economic use of the solar resource to provide a significant fraction of global primary energy by the mid-21st century.
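The one-hour figure quoted above is easy to sanity-check from first principles. The short sketch below is a back-of-the-envelope estimate, not a calculation from the report: it assumes a solar constant of about 1361 W/m^2, an Earth radius of 6.371 x 10^6 m, and a planetary albedo of roughly 0.3.

```python
import math

SOLAR_CONSTANT = 1361.0    # W/m^2 at the top of the atmosphere (assumed nominal value)
EARTH_RADIUS = 6.371e6     # m
ALBEDO = 0.3               # rough fraction of sunlight reflected straight back to space

# Sunlight intercepted by the Earth's cross-sectional disc, minus the reflected part.
absorbed_power = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2 * (1.0 - ALBEDO)  # W
energy_per_hour = absorbed_power * 3600.0                                      # J

ANNUAL_CONSUMPTION = 4.1e20  # J per year, the figure quoted in the summary above
print(f"sunlight absorbed in one hour: {energy_per_hour:.1e} J")                   # ~4.4e20 J
print(f"average consumption rate: {ANNUAL_CONSUMPTION / 3.156e7 / 1e12:.0f} TW")   # ~13 TW
```

Under these assumed inputs the estimate lands within a few percent of the 4.3 × 10^20 J quoted above, and dividing the annual consumption by the number of seconds in a year shows that it corresponds to a mean demand of roughly 13 TW.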
The report reflects the collective output of the workshop attendees, who included 200 scientists representing academia, national laboratories, and industry in the United States and abroad, and the U.S. Department of Energy's Office of Basic Energy Sciences and Office of Energy Efficiency and Renewable Energy. Solar energy conversion systems fall into three categories according to their primary energy product: solar electricity, solar fuels, and solar thermal systems. Each of the three generic approaches to exploiting the solar resource has untapped capability well beyond its present usage. Workshop participants considered the potential of all three approaches, as well as the potential of hybrid systems that integrate key components of individual technologies into novel cross-disciplinary paradigms. The challenge in converting sunlight to electricity via photovoltaic solar cells is dramatically reducing the cost/watt of delivered solar electricity — by approximately a factor of 5-10 to compete with fossil and nuclear electricity and by a factor of 25-50 to compete with primary fossil energy. New materials to efficiently absorb sunlight, new techniques to harness the full spectrum of wavelengths in solar radiation, and new approaches based on nanostructured architectures can revolutionize the technology used to produce solar electricity. The technological development and successful commercialization of single-crystal solar cells demonstrate the promise and practicality of photovoltaics, while novel approaches exploiting thin films, organic semiconductors, dye sensitization, and quantum dots offer fascinating new opportunities for cheaper, more efficient, longer-lasting systems. Many of the new approaches outlined by the workshop participants are enabled by (1) remarkable recent advances in the fabrication of nanoscale architectures by novel top-down and bottom-up techniques; (2) advances in nanoscale characterization using electron, neutron, and x-ray scattering and spectroscopy; and (3) sophisticated computer simulations of electronic and molecular behavior in nanoscale semiconductor assemblies using density functional theory. Such advances in the basic science of solar electric conversion, coupled to the new semiconductor materials now available, could drive a revolution in the way that solar cells are conceived, designed, implemented, and manufactured. The inherent day-night and sunny-cloudy cycles of solar radiation necessitate an effective method to store the converted solar energy for later dispatch and distribution. The most attractive and economical method of storage is conversion to chemical fuels. The challenge in solar fuel technology is to produce chemical fuels directly from sunlight in a robust, cost-efficient fashion. For millennia, cheap solar fuel from biomass has been the primary energy source on the planet. For the last two centuries, however, energy demand has outpaced biomass supply. The use of existing types of plants requires large land areas to meet a significant portion of primary energy demand. Almost all of the arable land on Earth would need to be covered with the fastest-growing known energy crops, such as switchgrass, to produce the amount of energy currently consumed from fossil fuels annually.
Hence, the key research goals are (1) application of the revolutionary advances in biology and biotechnology to the design of plants and organisms that are more efficient energy conversion "machines," and (2) design of highly efficient, all-artificial, molecular-level energy conversion machines exploiting the principles of natural photosynthesis. A key element in both approaches is the continued elucidation — by means of structural biology, genome sequencing, and proteomics — of the structure and dynamics involved in the biological conversion of solar radiation to sugars and carbohydrates. The revelation of these long-held secrets of natural solar conversion by means of cutting-edge experiment and theory will enable a host of exciting new approaches to direct solar fuel production. Artificial nanoscale assemblies of new organic and inorganic materials and morphologies, replacing natural plants or algae, can now use sunlight to directly produce H2 by splitting water and hydrocarbons via reduction of atmospheric CO2. While these laboratory successes demonstrate the appealing promise of direct solar fuel production by artificial molecular machines, there is an enormous gap between the present state of the art and a deployable technology. The current laboratory systems are unstable over long time periods, too expensive, and too inefficient for practical implementation. Basic research is needed to develop approaches and systems to bridge the gap between the scientific frontier and practical technology. The key challenge in solar thermal technology is to identify cost-effective methods to convert sunlight into storable, dispatchable thermal energy. Reactors heated by focused, concentrated sunlight in thermal towers reach temperatures exceeding 3,000°C, enabling the efficient chemical production of fuels from raw materials without expensive catalysts. New materials that withstand the high temperatures of solar thermal reactors are needed to drive applications of this technology. New chemical conversion sequences, like those that split water to produce H2 using the heat from nuclear fission reactors, could be used to convert focused solar thermal energy into chemical fuel with unprecedented efficiency and cost effectiveness. At lower solar concentration temperatures, solar heat can be used to drive turbines that produce electricity mechanically with greater efficiency than the current generation of solar photovoltaics. When combined with solar-driven chemical storage/release cycles, such as those based on the dissociation and synthesis of ammonia, solar engines can produce electricity continuously 24 h/day. Novel thermal storage materials with an embedded phase transition offer the potential of high thermal storage capacity and long release times, bridging the diurnal cycle. Nanostructured thermoelectric materials, in the form of nanowires or quantum dot arrays, offer a promise of direct electricity production from temperature differentials with efficiencies of 20-30% over a temperature differential of a few hundred degrees Celsius. The much larger differentials in solar thermal reactors make even higher efficiencies possible. New low-cost, high-performance reflective materials for the focusing systems are needed to optimize the cost effectiveness of all concentrated solar thermal technologies. 
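To put the quoted 20-30% thermoelectric efficiency in context, the sketch below evaluates the standard maximum-efficiency expression for a thermoelectric generator as a function of its dimensionless figure of merit ZT. The reservoir temperatures are assumed values chosen to represent a differential of a few hundred degrees, not numbers from the report.

```python
import math

def thermoelectric_max_efficiency(t_hot, t_cold, zt):
    """Standard maximum conversion efficiency of a thermoelectric generator
    operating between t_hot and t_cold (in kelvin) with average figure of merit zt."""
    carnot = (t_hot - t_cold) / t_hot
    m = math.sqrt(1.0 + zt)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

T_COLD, T_HOT = 300.0, 600.0   # assumed reservoirs, roughly a 300 degree differential
print(f"Carnot limit: {(T_HOT - T_COLD) / T_HOT:.0%}")   # 50%
for zt in (1.0, 2.0, 4.0):
    print(f"ZT = {zt}: {thermoelectric_max_efficiency(T_HOT, T_COLD, zt):.0%}")
# Roughly 11%, 16%, and 23%: reaching the 20-30% range over this differential
# implies nanostructured materials with ZT of about 4 or more.
```

Read this only as an order-of-magnitude check: the Carnot ceiling over such a differential is about 50%, and the quoted efficiencies therefore presume figure-of-merit values well beyond today's bulk thermoelectrics, which is exactly where the nanostructuring strategies described above are aimed.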
Workshop attendees identified thirteen priority research directions (PRDs) with high potential for producing scientific breakthroughs that could dramatically advance solar energy conversion to electricity, fuels, and thermal end uses. Many of these PRDs address issues of concern to more than one approach or technology. These cross-cutting issues include (1) coaxing cheap materials to perform as well as expensive materials in terms of their electrical, optical, chemical, and physical properties; (2) developing new paradigms for solar cell design that surpass traditional efficiency limits; (3) finding catalysts that enable inexpensive, efficient conversion of solar energy into chemical fuels; (4) identifying novel methods for self-assembly of molecular components into functionally integrated systems; and (5) developing materials for solar energy conversion infrastructure, such as transparent conductors and robust, inexpensive thermal management materials. A key outcome of the workshop is the sense of optimism in the cross-disciplinary community of solar energy scientists spanning academia, government, and industry. Although large barriers prevent present technology from producing a significant fraction of our primary energy from sunlight by the mid-21st century, workshop participants identified promising routes for basic research that can bring this goal within reach. Much of this optimism is based on the continuing, rapid worldwide progress in nanoscience. Powerful new methods of nanoscale fabrication, characterization, and simulation — using tools that were not available as little as five years ago — create new opportunities for understanding and manipulating the molecular and electronic pathways of solar energy conversion. Additional optimism arises from impressive strides in genetic sequencing, protein production, and structural biology that will soon bring the secrets of photosynthesis and natural bio-catalysis into sharp focus. Understanding these highly effective natural processes in detail will allow us to modify and extend them to molecular reactions that directly produce sunlight-derived fuels that fit seamlessly into our existing energy networks. The rapid advances on the scientific frontiers of nanoscience and molecular biology provide a strong foundation for future breakthroughs in solar energy conversion.

Advanced Computational Materials Science: Application to Fusion and Generation IV Fission Reactors

This report is based on a workshop held March 31-April 2, 2004, to determine the degree to which an increased effort in modeling and simulation could help bridge the gap between the data that is needed to support the implementation of advanced nuclear technologies and the data that can be obtained in available experimental facilities. The need to develop materials capable of performing in the severe operating environments expected in fusion and fission (Generation IV) reactors represents a significant challenge in materials science. There is a range of potential Gen-IV fission reactor design concepts, and each concept has its own unique demands. Improved economic performance is a major goal of the Gen-IV designs. As a result, most designs call for significantly higher operating temperatures than the current generation of LWRs to obtain higher thermal efficiency.
In many cases, the desired operating temperatures rule out the use of the structural alloys employed today. The very high operating temperature (up to 1000°C) associated with the NGNP is a prime example of an attractive new system that will require the development of new structural materials. Fusion power plants represent an even greater challenge to structural materials development and application. The operating temperatures, neutron exposure levels and thermo-mechanical stresses are comparable to or greater than those for proposed Gen-IV fission reactors. In addition, the transmutation products created in the structural materials by the high energy neutrons produced in the DT plasma can profoundly influence the microstructural evolution and mechanical behavior of these materials. Although the workshop addressed issues relevant to both Gen-IV and fusion reactor materials, much of the discussion focused on fusion; the same focus is reflected in this report. Most of the physical models and computational methods presented during the workshop apply equally to both types of nuclear energy systems. The primary factor that differentiates the materials development path for the two systems is that nearly prototypical irradiation environments for Gen-IV materials can be found or built in existing fission reactors. This is not the case for fusion. The only fusion-relevant, 14 MeV neutron sources ever built (such as the rotating target neutron sources, RTNS-I and -II at LLNL) were relatively low-power accelerator based systems. The RTNS-II "high" flux irradiation volume was quite small, less than 1 cm3, and only low doses could be achieved. The maximum dose data obtained was much less than 0.1 dpa. Thus, RTNS-II, which last operated in 1986, provided only a limited opportunity for fundamental investigations of the effects of 14 MeV neutrons characteristic of DT fusion. Historically, both the fusion and fission reactor programs have taken advantage of and built on research carried out by the other program. This leveraging can be expected to continue over the next ten years as both experimental and modeling activities in support of the Gen-IV program grow substantially. The Gen-IV research will augment the fusion studies (and vice versa) in areas where similar materials and exposure conditions are of interest. However, in addition to the concerns that are common to both fusion and advanced fission reactor programs, designers of a future DT fusion reactor have the unique problem of anticipating the effects of the 14 MeV neutron source term. In particular, the question arises whether irradiation data obtained in a near-prototypic irradiation environment such as the IFMIF are needed to verify results obtained from computational materials research. The need for a theory and modeling effort to work hand-in-hand with a complementary experimental program for the purpose of model development and verification, and for validation of model predictions was extensively discussed at the workshop. There was a clear consensus that an IFMIF-like irradiation facility is likely to be required to contribute to this research. However, the question of whether IFMIF itself is needed was explored from two different points of view at the workshop. These complementary (and in some cases opposing) points of view can be coarsely characterized as "scientific" and "engineering." 
The recent and anticipated progress in computational materials science presented at the workshop provides some confidence that many of the scientific questions whose answers will underpin the successful use of structural materials in a DT fusion reactor can be addressed in a reasonable time frame if sufficient resources are devoted to this effort. For example, advances in computing hardware and software should permit improved (and in some cases the first) descriptions of relevant properties in alloys based on ab initio calculations. Such calculations could provide the basis for realistic interatomic potentials for alloys, including alloy-He potentials, that can be applied in classical molecular dynamics simulations. These potentials must have a more detailed description of many-body interactions than accounted for in the current generation which are generally based on a simple embedding function. In addition, the potentials used under fusion reactor conditions (very high PKA energies) should account for the effects of local electronic excitation and electronic energy loss. The computational cost of using more complex potentials also requires the next generation of massively parallel computers. New results of ab initio and atomistic calculations can be coupled with ongoing advances in kinetic and phase field models to dramatically improve predictions of the non-equilibrium, radiation-induced evolution in alloys with unstable microstructures. This includes phase stability and the effects of helium on each microstructural component. However, for all its promise, computational materials science is still a house under construction. As such, the current reach of the science is limited. Theory and modeling can be used to develop understanding of known critical physical phenomena, and computer experiments can, and have been used to, identify new phenomena and mechanisms, and to aid in alloy design. However, it is questionable whether the science will be sufficiently mature in the foreseeable future to provide a rigorous scientific basis for predicting critical materials' properties, or for extrapolating well beyond the available validation database. Two other issues remain even if the scientific questions appear to have been adequately answered. These are licensing and capital investment. Even a high degree of scientific confidence that a given alloy will perform as needed in a particular Gen-IV or fusion environment is not necessarily transferable to the reactor licensing or capital market regimes. The philosophy, codes, and standards employed for reactor licensing are properly conservative with respect to design data requirements. Experience with the U.S. Nuclear Regulatory Commission suggests that only modeling results that are strongly supported by relevant, prototypical data will have an impact on the licensing process. In a similar way, it is expected that investment on the scale required to build a fusion power plant (several billion dollars) could only be obtained if a very high level of confidence existed that the plant would operate long and safely enough to return the investment. These latter two concerns appear to dictate that an experimental facility capable of generating a sufficient, if limited, body of design data under essentially prototypic conditions (i.e. with ~14 MeV neutrons) will ultimately be required for the commercialization of fusion power. 
An aggressive theory and modeling effort will reduce the time and experimental investment required to develop the advanced materials that can perform in a DT fusion reactor environment. For example, the quantity of design data may be reduced to that required to confirm model predictions for key materials at critical exposure conditions. This will include some data at a substantial fraction of the anticipated end-of-life dose, which raises the issue of when such an experimental facility is required. Long lead times for construction of complex facilities, coupled with several years of irradiation to reach the highest doses, imply that the decision to build any fusion-relevant irradiation facility must be made on the order of 10 years before the design data is needed. Two related areas of research can be used as reference points for the expressed need to obtain experimental validation of model predictions. Among the lessons learned from ASCI, the importance of code validation and verification was emphasized at the workshop. Despite an extensive investment in theory and modeling of the relevant physics, the NIF is being built at LLNL to verify the performance of the physics codes. Similarly, while the U.S. and international fusion community has invested considerable resources in simulating the behavior of magnetically-confined plasmas, a series of experimental devices (e.g. DIII-D, TFTR, JET, NSTX, and NCSX) have been, or will be, built and numerous experiments carried out to validate the predicted plasma performance on the route to ITER and a demonstration fusion power reactor.

Opportunities for Discovery: Theory and Computation in Basic Energy Sciences

This report is based on the deliberations of the BESAC Subcommittee on Theory and Computation following meetings on February 22 and April 16-17, 2004, to obtain testimony and discuss input from the scientific community on research directions for theory and computation to advance the scientific mission of the Office of Basic Energy Sciences (BES). New scientific frontiers, recent advances in theory, and rapid increases in computational capabilities have created compelling opportunities for theory and computation to advance the science. The prospects for success in the experimental programs of BES will be enhanced by pursuing these opportunities. This report makes the case for an expanded research program in theory and computation in BES. The Subcommittee on Theory and Computation of the Basic Energy Sciences Advisory Committee was charged on October 17, 2003, by the Director, Office of Science, with identifying current and emerging challenges and opportunities for theoretical research within the scientific mission of BES, paying particular attention to how computing will be employed to enable that research. A primary purpose of the Subcommittee was to identify those investments that are necessary to ensure that theoretical research will have maximum impact in the areas of importance to BES, and to assure that BES researchers will be able to exploit the entire spectrum of computational tools, including leadership class computing facilities. The Subcommittee's Findings and Recommendations are presented in Section VII of the report. A confluence of scientific events has enhanced the importance of theory and computation in BES.
After considering both written and verbal testimony from members of the scientific community, the Subcommittee observed that a confluence of developments in scientific research over the past fifteen years has quietly revolutionized both the present role and future promise of theory and computation in the disciplines that comprise the Basic Energy Sciences. Those developments fall into four broad categories: 1. a set of striking recent scientific successes that demonstrate the increased impact of theory and computation; 2. the appearance of new scientific frontiers in which innovative theory is required to lead inquiry and unravel the mysteries posed by new observations; 3. the development of new experimental capabilities, including large-scale facilities, that provide challenging new data and demand both fundamental and computationally intensive theory to realize their promise; 4. the ongoing increase of computational capability provided by continued improvements in computers and algorithms, which has dramatically amplified the power and applicability of theoretical research. • The sum of these events argues powerfully that now is the time for an increase in the investment by BES in theory and computation, including modeling and simulation. Emerging themes in the Basic Energy Sciences and nine specific areas of opportunity for scientific discovery. The report identifies nine specific areas of opportunity in which expanded investment in theory and computation holds great promise to enhance discovery in the scientific mission of BES. While this list is not exhaustive, it represents a range of persuasive prospects broadly characterized by the themes of "Complexity" and "Control" that describe much of the BES portfolio. The challenges and promise of theory in each of these nine areas are described in detail. Connecting theory with experiment. Connecting the BES theory and computation programs with experimental research taking place at existing or planned BES facilities deserves a high priority. BES should undertake a major new thrust to significantly augment its theoretical and computational programs coupled to experimental research at its major facilities. We also urge that such a new effort not be limited only to research at the facilities but also address the coupling of theory and computation with new capabilities involving "tabletop" experimental science as well. The unity of modern theory and computation. For a number of the research problems in BES, we are fortunate to know the equations that must be solved. For this reason many BES disciplines are presently exploiting high-end computation and are poised to use it at the leadership scale. However, in many other areas of BES, we do not know all the equations, nor do we have all the mathematical and physical insights we need, and therefore we have not yet invented the required algorithms. In an expanded yet balanced theory effort in BES, enhancements in computation must be accompanied by enhancements in the rest of the theoretical endeavor. Conceptual theory and computation are not separate enterprises. Resources necessary for success in the BES theory enterprise. A successful BES theory effort must provide the full spectrum of computational resources, as well as support the development and maintenance of scientific computer codes as shared scientific instruments. 
We find that BES is ready for and requires access to leadership-scale computing to perform calculations that cannot be done elsewhere, but also that a large amount of essential BES computation falls between the leadership and the desktop scales. Moreover, BES should provide support for the development and maintenance of shared scientific software to enhance the scientific impact of the BES-supported theory community and to remove a key obstacle to the effective exploitation of high-end computing resources and facilities. In summary, the Subcommittee finds that there is a compelling need for BES to expand its programs to capture opportunities created by the combination of new capabilities in theory and computation and the opening of new experimental frontiers. Providing the right resources, supporting new styles of theoretical inquiry, and building a properly balanced program are all essential for the success of an expanded effort in theory and computation. The experimental programs of BES will be enhanced by such an effort.

Nanoscience Research for Energy Needs

This report is based upon a BES-cosponsored National Nanotechnology Initiative (NNI) Workshop held March 16-18, 2004, by the Nanoscale Science, Engineering, and Technology (NSET) Subcommittee of the National Science and Technology Council (NSTC) to address the Grand Challenge in Energy Conversion and Storage set out in the NNI. This report was originally released on June 24, 2004, during the Department of Energy NanoSummit. The second edition that is provided here was issued in June 2005. The world demand for energy is expected to double to 28 terawatts by the year 2050. Compounding the challenge presented by this projection is the growing need to protect our environment by increasing energy efficiency and through the development of "clean" energy sources. These are indeed global challenges, and their resolution is vital to our energy security. Recent reports on Basic Research Needs to Assure a Secure Energy Future and Basic Research Needs for the Hydrogen Economy have recognized that scientific breakthroughs and truly revolutionary developments are demanded. Within this context, nanoscience and nanotechnology present exciting and requisite approaches to addressing these challenges. An interagency workshop to identify and articulate the relationship of nanoscale science and technology to the nation's energy future was convened on March 16-18, 2004, in Arlington, Virginia. The meeting was jointly sponsored by the Department of Energy and, through the National Nanotechnology Coordination Office, the other member agencies of the Nanoscale Science, Engineering and Technology Subcommittee of the Committee on Technology, National Science and Technology Council. This report is the outcome of that workshop. The workshop had 63 invited presenters, with 32 from universities, 26 from national laboratories, and 5 from industry. This workshop is one in a series intended to provide input from the research community on the next NNI strategic plan, which the NSTC is required to deliver to Congress on the first anniversary of the signing of the 21st Century Nanotechnology R&D Act, Dec. 3, 2003. At the root of the opportunities provided by nanoscience to impact our energy security is the fact that all the elementary steps of energy conversion (charge transfer, molecular rearrangement, chemical reactions, etc.) take place on the nanoscale.
Thus, the development of new nanoscale materials, as well as the methods to characterize, manipulate, and assemble them, creates an entirely new paradigm for developing new and revolutionary energy technologies. The primary outcome of the workshop is the identification of nine research targets in energy-related science and technology in which nanoscience is expected to have the greatest impact:
• Scalable methods to split water with sunlight for hydrogen production
• Highly selective catalysts for clean and energy-efficient manufacturing
• Harvesting of solar energy with 20 percent power efficiency and 100 times lower cost
• Solid-state lighting at 50 percent of the present power consumption
• Super-strong, light-weight materials to improve efficiency of cars, airplanes, etc.
• Reversible hydrogen storage materials operating at ambient temperatures
• Power transmission lines capable of 1 gigawatt transmission
• Low-cost fuel cells, batteries, thermoelectrics, and ultra-capacitors built from nanostructured materials
• Materials synthesis and energy harvesting based on the efficient and selective mechanisms of biology
The report contains descriptions of many examples indicative of outcomes and expected progress in each of these research targets. For successful achievement of these research targets, participants recognized six foundational and vital crosscutting nanoscience research themes:
• Catalysis by nanoscale materials
• Using interfaces to manipulate energy carriers
• Linking structure and function at the nanoscale
• Assembly and architecture of nanoscale structures
• Theory, modeling, and simulation for energy nanoscience
• Scalable synthesis methods

DOE-NSF-NIH Workshop on Opportunities in THz Science

This report is based on a Workshop on Opportunities in Terahertz (THz) Science held February 12-14, 2004, to discuss basic research problems that can be answered using THz radiation. The workshop did not focus on the wide range of potential applications of THz radiation in engineering, defense and homeland security, or the commercial and government sectors of the economy. The workshop was jointly sponsored by DOE, NSF, and NIH. The region of the electromagnetic spectrum from 0.3 to 20 THz (10-600 cm^-1, 1 mm-15 µm wavelength) is a frontier area for research in physics, chemistry, biology, medicine, and materials sciences. Sources of high quality radiation in this area have been scarce, but this gap has recently begun to be filled by a wide range of new technologies. Terahertz radiation is now available in both cw and pulsed form, down to single cycles or less, with peak powers up to 10 MW. New sources have led to new science in many areas, as scientists begin to become aware of the opportunities for research progress in their fields using THz radiation. Science at a Time Scale Frontier: THz-frequency electromagnetic radiation, with a fundamental period of around 1 ps, is uniquely suited to study and control systems of central importance: electrons in highly-excited atomic Rydberg states orbit at THz frequencies. Small molecules rotate at THz frequencies. Collisions between gas phase molecules at room temperature last about 1 ps. Biologically-important collective modes of proteins vibrate at THz frequencies. Frustrated rotations and collective modes cause polar liquids (such as water) to absorb at THz frequencies.
Electrons in semiconductors and their nanostructures resonate at THz frequencies. Superconducting energy gaps are found at THz frequencies. An electron in Intel's THz Transistor races under the gate in ~1 ps. Gaseous and solid-state plasmas oscillate at THz frequencies. Matter at temperatures above 10 K emits black-body radiation at THz frequencies. This report also describes a tremendous array of other studies that will become possible when access to THz sources and detectors is widely available. The opportunities are limitless. Electromagnetic Transition Region: THz radiation lies above the frequency range of traditional electronics, but below the range of optical and infrared generators. The fact that the THz frequency range lies in the transition region between photonics and electronics has led to unprecedented creativity in source development. Solid-state electronics, vacuum electronics, microwave techniques, ultrafast visible and NIR lasers, single-mode continuous-wave NIR lasers, electron accelerators ranging in size from a few inches to a mile-long linear accelerator at SLAC, and novel materials have been combined to yield a large variety of sources with widely-varying output characteristics. For the purposes of this report, sources are divided into four categories according to their (low, high) peak power and their (small, large) instantaneous bandwidth. THz experiments: Many classes of experiments can be performed using THz electromagnetic radiation. Each of these will be enabled or optimized by using a THz source with a particular set of specifications. For example, some experiments will be enabled by high average and peak power with impulsive half-cycle excitation. Such radiation is available only from a new class of sources based on sub-ps electron bunches produced in large accelerators. Some high-resolution spectroscopy experiments will require cw THz sources with kHz linewidths but only a few hundred microwatts of power. Others will require powerful pulses with ≤1% bandwidth, available from free-electron lasers and, very recently, regeneratively-amplified lasers and nonlinear optical materials. Time-domain THz spectroscopy, with its time coherence and extremely broad spectral bandwidth, will continue to expand its reach and range of applications, from spectroscopy of superconductors to sub-cutaneous imaging of skin cancer. What is needed The THz community needs a network: Sources of THz radiation are, at this point, very rare in physics and materials science laboratories and almost non-existent in chemistry, biology and medical laboratories. The barriers to performing experiments using THz radiation are enormous. One needs not only a THz source, but also an appropriate receiver and an understanding of many experimental details, ranging from the absorption characteristics of the atmosphere and common materials, to where to purchase or construct various simple optics components such as polarizers, lenses, and waveplates, to a solid understanding of electromagnetic wave propagation, since diffraction always plays a significant role at THz frequencies. There is also significant expense, both in terms of time and money, in setting up any THz apparatus in one's own lab, even if one is the type of investigator who enjoys building things. Because of the enormous barriers to entry into THz science, the community of users is presently much smaller than the potential based on the scientific opportunities.
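Because THz work constantly switches among frequency, wavenumber, wavelength, photon energy, and equivalent temperature, a small conversion aid helps when reading the figures above (the 0.3-20 THz range, the ~1 ps period, and the 10 K black-body threshold). The sketch below is an illustrative unit-conversion helper using standard physical constants; it is not material from the report.

```python
# Convert a frequency in THz to the other units commonly used in THz spectroscopy.
C = 2.998e8          # speed of light, m/s
H = 6.626e-34        # Planck constant, J*s
K_B = 1.381e-23      # Boltzmann constant, J/K
E_CHARGE = 1.602e-19 # elementary charge, C (to express energies in eV)

def thz_conversions(f_thz):
    f = f_thz * 1e12                                  # Hz
    return {
        "period_ps": 1e12 / f,                        # oscillation period, ps
        "wavelength_um": C / f * 1e6,                 # free-space wavelength, micrometres
        "wavenumber_cm-1": f / (C * 100.0),           # spectroscopist's wavenumber
        "photon_energy_meV": H * f / E_CHARGE * 1e3,  # photon energy, meV
        "temperature_K": H * f / K_B,                 # temperature with kT equal to one photon
    }

for f_thz in (0.3, 1.0, 20.0):
    print(f_thz, thz_conversions(f_thz))
# 1 THz corresponds to a 1 ps period, ~300 um wavelength, ~33 cm^-1,
# ~4.1 meV, and an equivalent temperature of ~48 K.
```

Evaluating it at 0.3 and 20 THz approximately reproduces the wavenumber and wavelength ranges quoted above, and the ~48 K equivalent temperature at 1 THz shows why matter only modestly above 10 K already radiates appreciably in this band.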
Symposia on medical applications of THz radiation are already attracting overflow crowds at conferences. The size of the community is increasing with a clear growth potential to support a large THz user's network including user facilities. The opportunities are great. The most important thing we can do is lower research barriers. A THz User's Network would leverage the large existing investment in THz research and infrastructure to considerably grow the size of the THz research community. The Network would inform the scientific community at large of opportunities in THz science, bring together segments of the community of THz researchers who are currently only vaguely aware of one another, and lower the barriers to entry into THz research. Specific ideas for network activities include disseminating information about techniques and opportunities in THz science through the worldwide web, sponsoring sessions about THz technology at scientific conferences, co-location of conferences from different communities within the THz field, providing funding for small-scale user facilities at existing centers of excellence, directing researchers interested in THz science to the most appropriate technology and/or collaborator, encouraging commercialization of critical THz components, outreach to raise public awareness of THz science and technology, and formation of teams to work on problems of common interest, such as producing higher peak fields or pulse-shaping schemes. Interagency support is crucial: NIH, NSF, and DOE will all benefit, and all must be involved. Eventually, the network will provide the best and most efficient path to defining what new facilities may be needed. New users of THz methodology will also find it easier to learn about the field when there is a network. Defining common goals: During the workshop, the community articulated several common and unmet technical needs. This list is far from exhaustive, and it will grow with the network:
1. Higher peak fields.
2. Coverage to 10 THz (or higher) with coherent broad-band sources.
3. Full pulse-shaping.
4. Excellent stability in sources with the above characteristics.
5. Easy access to components such as emitters and receivers for time-domain THz spectroscopy.
6. Near-field THz microscopy.
7. Sensitive non-cryogenic detectors.

Basic Research Needs for the Hydrogen Economy

This report is based upon the BES Workshop on Hydrogen Production, Storage, and Use, held May 13-15, 2003, to identify fundamental research needs and opportunities in hydrogen production, storage, and use, with a focus on new, emerging and scientifically challenging areas that have the potential for significant impact in science and technologies. The coupled challenges of a doubling in the world's energy needs by the year 2050 and the increasing demands for "clean" energy sources that do not add more carbon dioxide and other pollutants to the environment have resulted in increased attention worldwide to the possibilities of a "hydrogen economy" as a long-term solution for a secure energy future. The hydrogen economy offers a grand vision for energy management in the future.
Its benefits are legion, including an ample and sustainable supply, flexible interchange with existing energy media, a diversity of end uses to produce electricity through fuel cells or to produce heat through controlled combustion, convenient storage for load leveling, and a potentially large reduction in harmful environmental pollutants. These benefits provide compelling motivation to mount a major, innovative basic research program in support of a broad effort across the applied research, development, engineering, and industrial communities to enable the use of hydrogen as the fuel of the future. There is an enormous gap between our present capabilities for hydrogen production, storage, and use and those required for a competitive hydrogen economy. To be economically competitive with the present fossil fuel economy, the cost of fuel cells must be lowered by a factor of 10 or more and the cost of producing hydrogen must be lowered by a factor of 4. Moreover, the performance and reliability of hydrogen technology for transportation and other uses must be improved dramatically. Simple incremental advances in the present state of the art cannot bridge this gap. The only hope of narrowing the gap significantly is a comprehensive, long-range program of innovative, high-risk/high-payoff basic research that is intimately coupled to and coordinated with applied programs. The best scientists from universities and national laboratories and the best engineers and scientists from industry must work in interdisciplinary groups to find breakthrough solutions to the fundamental problems of hydrogen production, storage, and use. The objective of such a program must not be evolutionary advances but revolutionary breakthroughs in understanding and in controlling the chemical and physical interactions of hydrogen with materials. The detailed findings and research directions identified by the three panels are presented in this report. They address the four research challenges for the hydrogen economy outlined by Secretary of Energy Spencer Abraham in his address to the National Hydrogen Association: (1) dramatically lower the cost of fuel cells for transportation, (2) develop a diversity of sources for hydrogen production at energy costs comparable to those of gasoline, (3) find viable methods of onboard storage of hydrogen for transportation uses, and (4) develop a safe and effective infrastructure for seamless delivery of hydrogen from production to storage to use. The essence of this report is captured in six cross-cutting research directions that were identified as being vital for enabling the dramatic breakthroughs to achieve lower costs, higher performance, and greater reliability that are needed for a competitive hydrogen economy: • Catalysis • Nanostructured Materials • Membranes and Separations • Characterization and Measurement Techniques • Theory, Modeling, and Simulation • Safety and Environmental Issues In addition to these research directions, the panels identified biological and bio-inspired science and technology as richly promising approaches for achieving the revolutionary technical advances required for a hydrogen economy. 
Theory and Modeling in Nanoscience

This report is based upon the May 10-11, 2002, workshop conducted jointly by the Basic Energy Sciences Advisory Committee and the Advanced Scientific Computing Advisory Committee to identify challenges and opportunities for theory, modeling, and simulation in nanoscience and nanotechnology and to investigate the growing and promising role of applied mathematics and computer science in meeting those challenges. During the past 15 years, the fundamental techniques of theory, modeling, and simulation have undergone a revolution that parallels the extraordinary experimental advances on which the new field of nanoscience is based. This period has seen the development of density functional algorithms, quantum Monte Carlo techniques, ab initio molecular dynamics, advances in classical Monte Carlo methods and mesoscale methods for soft matter, and fast-multipole and multigrid algorithms. Dramatic new insights have come from the application of these and other new theoretical capabilities. Simultaneously, advances in computing hardware increased computing power by four orders of magnitude. The combination of new theoretical methods together with increased computing power has made it possible to simulate systems with millions of degrees of freedom. The application of new and extraordinary experimental tools to nanosystems has created an urgent need for a quantitative understanding of matter at the nanoscale. The absence of quantitative models that describe newly observed phenomena increasingly limits progress in the field. A clear consensus emerged at the workshop that without new, robust tools and models for the quantitative description of structure and dynamics at the nanoscale, the research community would miss important scientific opportunities in nanoscience. The absence of such tools would also seriously inhibit widespread applications in fields of nanotechnology ranging from molecular electronics to biomolecular materials. To realize the unmistakable promise of theory, modeling, and simulation in overcoming fundamental challenges in nanoscience requires new human and computer resources.

Fundamental Challenges and Opportunities

With each fundamental intellectual and computational challenge that must be met in nanoscience come opportunities for research and discovery utilizing the approaches of theory, modeling, and simulation. In the broad topical areas of (1) nano building blocks (nanotubes, quantum dots, clusters, and nanoparticles), (2) complex nanostructures and nano-interfaces, and (3) the assembly and growth of nanostructures, the workshop identified a large number of theory, modeling, and simulation challenges and opportunities.
Among them are:

• to bridge electronic through macroscopic length and time scales
• to determine the essential science of transport mechanisms at the nanoscale
• to devise theoretical and simulation approaches to study nano-interfaces, which dominate nanoscale systems and are necessarily highly complex and heterogeneous
• to simulate with reasonable accuracy the optical properties of nanoscale structures and to model nanoscale opto-electronic devices
• to simulate complex nanostructures involving "soft" biologically or organically based structures and "hard" inorganic ones as well as nano-interfaces between hard and soft matter
• to simulate self-assembly and directed self-assembly
• to devise theoretical and simulation approaches to quantum coherence, decoherence, and spintronics
• to develop self-validating and benchmarking methods

The Role of Applied Mathematics

Since mathematics is the language in which theory is expressed and advanced, developments in applied mathematics are central to the success of theory, modeling, and simulation for nanoscience, and the workshop identified important roles for new applied mathematics in the above-mentioned challenges. Novel applied mathematics is required to formulate new theory and to develop new computational algorithms applicable to complex systems at the nanoscale. The discussion of applied mathematics at the workshop focused on three areas that are directly relevant to the central challenges of theory, modeling, and simulation in nanoscience: (1) bridging time and length scales, (2) fast algorithms, and (3) optimization and predictability. Each of these broad areas has a recent track record of developments from the applied mathematics community. Recent advances range from fundamental approaches, like mathematical homogenization (whereby reliable coarse-scale results are made possible without detailed knowledge of finer scales), to new numerical algorithms, like the fast-multipole methods that make very large scale molecular dynamics calculations possible. Some of the mathematics of likely interest (perhaps the most important mathematics of interest) is not fully knowable at the present, but it is clear that collaborative efforts between scientists in nanoscience and applied mathematicians can yield significant advances central to a successful national nanoscience initiative.

The Opportunity for a New Investment

The consensus of the workshop is that the country's investment in the national nanoscience initiative will pay greater scientific dividends if it is accelerated by a new investment in theory, modeling, and simulation in nanoscience. Such an investment can stimulate the formation of alliances and teams of experimentalists, theorists, applied mathematicians, and computer and computational scientists to meet the challenge of developing a broad quantitative understanding of structure and dynamics at the nanoscale. The Department of Energy is uniquely situated to build a successful program in theory, modeling, and simulation in nanoscience. Much of the nation's experimental work in nanoscience is already supported by the Department, and new facilities are being built at the DOE national laboratories. The Department also has an internationally regarded program in applied mathematics, and much of the foundational work on mathematical modeling and computation has emerged from DOE activities. Finally, the Department has unique resources and experience in high performance computing and algorithms.
The combination of these areas of expertise makes the Department of Energy a natural home for nanoscience theory, modeling, and simulation.

Opportunities for Catalysis in the 21st Century

This report is based upon a Basic Energy Sciences Advisory Committee subpanel workshop that was held May 14-16, 2002, to identify research directions to better understand how to design catalyst structures to control catalytic activity and selectivity. Chemical catalysis affects our lives in myriad ways. Catalysis provides a means of changing the rates at which chemical bonds are formed and broken and of controlling the yields of chemical reactions to increase the amounts of desirable products from these reactions and reduce the amounts of undesirable ones. Thus, it lies at the heart of our quality of life: The reduced emissions of modern cars, the abundance of fresh food at our stores, and the new pharmaceuticals that improve our health are made possible by chemical reactions controlled by catalysts. Catalysis is also essential to a healthy economy: The petroleum, chemical, and pharmaceutical industries, contributors of $500 billion to the gross national product of the United States, rely on catalysts to produce everything from fuels to "wonder drugs" to paints to cosmetics. Today, our Nation faces a variety of challenges in creating alternative fuels, reducing harmful by-products in manufacturing, cleaning up the environment and preventing future pollution, dealing with the causes of global warming, protecting citizens from the release of toxic substances and infectious agents, and creating safe pharmaceuticals. Catalysts are needed to meet these challenges, but their complexity and diversity demand a revolution in the way catalysts are designed and used. This revolution can become reality through the application of new methods for synthesizing and characterizing molecular and material systems. Opportunities to understand and predict how catalysts work at the atomic scale and the nanoscale are now appearing, made possible by breakthroughs in the last decade in computation, measurement techniques, and imaging and by new developments in catalyst design, synthesis, and evaluation.

A Grand Challenge

In May 2002, a workshop entitled "Opportunities for Catalysis Science in the 21st Century" was conducted in Gaithersburg, Maryland. The impetus for the workshop grew out of a confluence of factors: the continuing importance of catalysis to the Nation's productivity and security, particularly in the production and consumption of energy and the associated environmental consequences, and the emergence of new research tools and concepts associated with nanoscience that can revolutionize the design and use of catalysts in the search for optimal control of chemical transformations. While research opportunities of an extraordinary variety were identified during the workshop, a compelling, unifying, and fundamental challenge became clear. Simply stated, the Grand Challenge for catalysis science in the 21st century is to understand how to design catalyst structures to control catalytic activity and selectivity.
The Present Opportunity In his address to the 2002 meeting of the American Association for the Advancement of Science, Jack Marburger, the President's Science Advisor, spoke of the revolution that will result from our emerging ability to achieve an atom-by-atom understanding of matter and the subsequent unprecedented ability to design and construct new materials with properties that are not found in nature. " The revolution I am describing," he said, " is one in which the notion that everything is made of atoms finally becomes operational… We can actually see how the machinery of life functions, atom by atom. We can actually build atomic-scale structures that interact with biological or inorganic systems and alter their functions. We can design new tiny objects 'from scratch' that have unprecedented optical, mechanical, electrical, chemical, or biological properties that address needs of human society." Nowhere else can this revolution have such an immediate payoff as in the area of catalysis. By investing now in new methods for design, synthesis, characterization, and modeling of catalytic materials, and by employing the new tools of nanoscience, we will achieve the ability to design and build catalytic materials atom by atom, molecule by molecule, nanounit by nanounit. The Importance of Catalysis Science to DOE For the present and foreseeable future, the major source of energy for the Nation is found in chemical bonds. Catalysis affords the means of changing the rates at which chemical bonds are formed and broken. Catalysis also allows chemistry of extreme specificity, making it possible to select a desired product over an undesired one. Materials and materials properties lie at the core of almost every major issue that the U.S. Department of Energy (DOE) faces, including energy, stockpile stewardship, and environmental remediation. Much of the synthesis of new materials is certainly going to happen through catalysis. When scientists and engineers understand how to design catalysts to control catalytic chemistry, the effects on energy production and use and on the creation of exciting new materials will be profound. A Recommendation for Increased Federal Investment in Catalysis Research We are approaching a renaissance in catalysis science in this country. With the availability of exciting new laboratory tools for characterization, new designer approaches to synthesis, advanced computational capabilities, and new capabilities at user facilities, we have unparalleled potential for making significant advances in this vital and vibrant field. The convergence of the scientific disciplines that is a growing trend in the catalysis field is spawning new ideas that reach beyond conventional thinking. This revolution unfortunately comes at a time when industry has largely abandoned its support of basic research in catalysis. As the only Federal agency that supports catalysis as a discipline, DOE is uniquely positioned to lead the revolution. Our economy and our quality of life depend on catalytic processes that are efficient, clean, and effective. An increased investment in catalysis science in this country is not only important, it is essential. Successful research ventures in this area will have an impact on all levels of daily life, leading to enhanced energy efficiency for a range of fuels, reductions in harmful emissions, effective synthesis of new and improved drugs, enhanced homeland security and stockpile stewardship, and new materials with tailored properties. 
Federal investment is vital for building the scientific workforce needed to address the challenging issues that lie ahead in this field — a workforce that comprises our best and brightest scientists, developing creative new ideas and approaches. This investment is also vital to ensuring that we have the best scientific tools possible for exploiting creative ideas, and that our scientists have ready access to these experimental and computational tools. These tools include both state-of-the-art instrumentation in individual investigator laboratories and unique instrumentation that is only available, because of its size and cost, at DOE's national user facilities.

Biomolecular Materials

This report is based upon the January 13-15, 2002, workshop sponsored by the Basic Energy Sciences Advisory Committee to explore the potential impact of biology on the physical sciences, in particular the materials and chemical sciences. Twenty-two scientists from around the nation and the world met to discuss the way that the molecules, structures, processes and concepts of the biological world could be used or mimicked in designing novel materials, processes or devices of potential practical significance. The emphasis was on basic research, although the long-term goal is, in addition to increased knowledge, the development of applications to further the mission of the Department of Energy. The charge to the workshop was to identify the most important and potentially fruitful areas of research in the field of Biomolecular Materials and to identify challenges that must be overcome to achieve success. This report summarizes the response of the workshop participants to this charge, and provides, by way of example, a description of progress that has been made in selected areas of the field. The participants felt that a DOE program in this area should focus on the development of a greater understanding of the underlying biology, and tools to manipulate biological systems both in vitro and in vivo, rather than on the attempted identification of narrowly defined applications or devices. The field is too immature to be subject to arbitrary limitations on research and the exclusion of areas that could have great impact. These limitations aside, the group developed a series of recommendations. Three major areas of research were identified as central to the exploitation of biology for the physical sciences: 1) Self Assembled, Templated and Hierarchical Structures; 2) The Living Cell in Hybrid Materials Systems; and 3) Biomolecular Functional Systems. Workshop participants also discussed the challenges and impediments that stand in the way of our attaining the goal of fully exploiting biology in the physical sciences. Some are cultural, others are scientific and technical. Recommendations from the report are:

Program Relevance. In view of what has recently developed into a generally recognized opinion that biology offers a rich source of structures, functions and inspiration for the development of novel materials, processes and devices, support for this research should be a component of the broad Office of Basic Energy Sciences Program.

Broad Support. The field is in its early stages and is not as well defined as other areas. Thus, although it is recommended that support be focused in the three areas identified in this report, it should be broadly applied.
Good ideas in other areas proposed by investigators with good track records should be supported as well. There should not be an emphasis on "picking winning applications" because it is simply too difficult to reliably identify them at this time.

Support of the Underlying Biology. Basic research focused on understanding the biological structures and processes in areas that show potential for applications supporting the DOE mission should be supported.

Multidisciplinary Teams. Research undertaken by multidisciplinary teams across the spectrum of materials science, physics, chemistry and biology should be encouraged but not artificially arranged.

Training. Research that involves the training of students and postdocs in multiple disciplines, preferably co-advised by two or more senior investigators representing different relevant disciplines, should be encouraged without sacrificing the students' thorough studies within the individual disciplines.

Long-Term Investment. Returns, in terms of functioning materials, processes or devices, should not be expected in the very short term, although it can reasonably be assumed that applications will, as they have already, arise unexpectedly.

Basic Research Needs To Assure A Secure Energy Future

This report is based upon a Basic Energy Sciences Advisory Committee workshop that was held in October 2002 to assess the basic research needs for energy technologies to assure a reliable, economic, and environmentally sound energy supply for the future. The workshop discussions produced a total of 37 proposed research directions. Current projections estimate that the energy needs of the world will more than double by the year 2050. This is coupled with increasing demands for "clean" energy – sources of energy that do not add to the already high levels of carbon dioxide and other pollutants in the environment. These coupled challenges simply cannot be met by existing technologies. Major scientific breakthroughs will be required to provide reliable, economic solutions. The results of the BESAC workshop are a compilation of 37 Proposed Research Directions. At a higher level, these fell into ten general research areas, all of which are multidisciplinary in nature:

• Materials Science to Transcend Energy Barriers
• Energy Biosciences
• Basic Research Towards the Hydrogen Economy
• Innovative Energy Storage
• Novel Membrane Assemblies
• Heterogeneous Catalysis
• Fundamental Approaches to Energy Conversion
• Basic Research for Energy Utilization Efficiency
• Actinide Chemistry and Nuclear Fuel Cycles
• Geosciences

Nanoscale science, engineering, and technology were identified as cross-cutting areas where research may provide solutions and insights to long-standing technical problems and scientific questions. The need for developing quantitative predictive models was also identified in many cases, and this requires better understanding of the underlying fundamental mechanisms of the relevant processes. Often this in turn requires characterization with very high physical, chemical, structural, and temporal precision: DOE's existing world-leading user facilities currently provide these capabilities, and these capabilities must be continuously enhanced and new ones developed. In addition, requirements for theory, modeling, and simulation will demand advanced computational tools, including high-end computer user facilities.
All the participants agreed that the education of the next generation of research scientists is of crucial importance; and this should include making the importance of the energy security issue clear to everyone. It is clear that assuring the security of the energy supply for the U.S. over the next few decades will present major problems. There are a number of reasons for this. The most important of these is the current reliance on fossil fuels for a high proportion of the energy, of which a significant fraction is imported. The Developing World countries will have greatly increased needs for energy, in part because of the expected population increase, and in part because of the increase in their presently very low standards of living. A second problem is related to concerns over the environmental effects of the use of fossil fuels. Third, the peaking of the production of fossil fuels is likely within the next several decades. For these reasons, it is very important that the U.S. undertakes a vigorous research and development program to address the issues identified in this report. There are a number of actions that can help in the nearer term: increased efficiency in the conversion and use of energy; increased conservation; and aggressive environmental control requirements. However, while these may delay the major impact, they will not in the longer run provide the assured energy future that the U.S. requires. It is also clear that there is no single answer to this problem. There are several options that are available at the moment, and many – or indeed all – of them must be pursued. Basic research will make an important contribution to the solution to this problem by providing the basis on which entities which include DOE's applied missions programs will develop new technological approaches; and by leading to the discovery of new concepts. The time between the basic research and its contribution to new or significantly improved technical solutions that can make major contributions to the future energy supply is often measured in decades. Major new discoveries are needed, and these will largely come from basic research programs. It is clear from the analysis presented in this report that there are a number of opportunities. Essentially all of these are interdisciplinary in character. The Office of Basic Energy Sciences should review its current research portfolio to assess how it is contributing to the research directions proposed by this study. BESAC expects, however, that a much larger effort will be needed than the current BES program. The magnitude of the energy challenge should not be underestimated. With major scientific discoveries and development of the underlying knowledge base, we must enable vast technological changes in the largest industry in the world (energy), and we must do it quickly. If we are successful, we will both assure energy security at home and promote peace and prosperity worldwide. Recommendation: Considering the urgency of the energy problem, the magnitude of the needed scientific breakthroughs, and the historic rate of scientific discovery, current efforts will likely be too little, too late. Accordingly, BESAC believes that a new national energy research program is essential and must be initiated with the intensity and commitment of the Manhattan Project, and sustained until this problem is solved. 
BESAC recommends that BES review its research activities and user facilities to make sure they are optimized for the energy challenge, and develop a strategy for a much more aggressive program in the future.

Basic Research Needs for Countering Terrorism

This report documents the results of the Department of Energy, Office of Basic Energy Sciences (BES) Workshop on Basic Research Needs to Counter Terrorism. This two-day Workshop, held in Gaithersburg, MD, February 28-March 1, 2002, brought together BES research participants and experts familiar with counter-terrorism technologies, strategies, and policies. The purpose of the workshop was to: (1) identify direct connections between technology needs for countering terrorism and the critical, underlying science issues that will impact our ability to address those needs and (2) recommend investment strategies that will increase the impact of basic research on our nation's efforts to counter terrorism. The workshop focused on science and technology challenges associated with our nation's need to detect, prevent, protect against, and respond to terrorist attacks involving Radiological and Nuclear, Chemical, and Biological threats. While the organizers and participants of this workshop recognize that the threat of terrorism is extremely broad, including food and water safety as well as protection of our public infrastructure, we necessarily limited the scope of our discussions to the principal weapons of mass destruction. In order to set the stage for the discussions of critical science and technology challenges, the workshop began with keynote and plenary lectures that provided a realistic context for understanding the broad challenges of countering terrorism. The plenary speakers emphasized the socio-political complexity of terrorism problems, reinforced the need for basic research in addressing these problems, and provided critical advice on how basic research can best contribute to our nation's needs. Their advice highlighted the need to:

• Invest Strategically – Focus on Cross-Cutting Research that has the potential to have an impact on a broad set of technology needs, thereby providing the greatest return on the research investment.
• Build Team Efforts – Countering terrorism will require broad, collaborative teams. The research community should focus on: (1) Research Environments and Infrastructures that encourage and enable cross-disciplinary science and technology teams to explore and integrate new scientific discoveries and (2) Exploring Relationships with Other Programs that will strengthen connections between new scientific advances and those groups responsible for technology development and implementation.
• Consider Dual Use – Identify areas of research that present significant Dual-Use Opportunities for application to countering terrorism and other complementary technology needs.

During the workshop, participants identified several critical technology needs and the underlying science challenges that, if met, can help to reduce the threat of terrorist attacks in the United States.
Some of the key technology needs and limitations that were identified include: Detection– Nonintrusive, stand-off, and imaging detection systems; sampling from complex backgrounds and environments; inexpensive and field-deployable sensor systems; highly selective and ultra-sensitive detectors; early warning triggers for continuous monitoring Prevention – Methods and materials to control, track, and reduce the availability of hazardous materials; techniques to rapidly characterize and attribute the source of terrorist threats Protection – Personal protective equipment; light-weight barrier materials and fabrics; filtration systems; explosive containment structures; methods to protect people, animals, crops, and public spaces Response–Coupled models and measurements that can predict fate and transport of toxic materials including pre-event background data; pre-symptomatic and point of care medical diagnostics; methods to immobilize and neutralize hazardous materials including self-cleaning and self-decontaminating surfaces The workshop discussions of these technology needs and the underlying science challenges are fully documented in the major sections of this report. The results of these discussions, combined with the broad perspective and advice from our plenary speakers, were used to develop a set of high-level workshop recommendations. The following recommendations are offered to help guide our nation's basic research investments in order to maximize our ability to reduce the threat of terrorism. • We recommend continuing or increasing funding for a selected set of research directions that are identified in the Workshop Summary and Recommendations (Section 5) of this report. These areas of research underpin many of the technologies that have high probability to impact our nation's ability to counter terrorism. • New programs should be supported to stimulate the formation of, and provide needed resources for, cross-disciplinary and multi-institutional teams of scientists and technologists that are needed to address these critical problems. An important component of this strategy is investment in DOE national laboratories and user facilities because they can provide an ideal environment to carry out this highly collaborative work. • Governmental organizations and agencies should explore their complementary goals and capabilities and, where appropriate, work to develop agreements that facilitate the formation of multi-organizational teams and the sharing of research and technology capabilities that will improve our nation's ability to counter the threat of terrorism. • Increased emphasis should be placed on identifying dual-use applications for key counter-terrorism technologies. Efforts should be focused on building partnerships between government, university, and industry to capitalize on these opportunities. In summary, this workshop made significant progress in identifying the basic research needs and in outlining a strategy to enhance the research community's ability to impact our nation's counter-terrorism needs. We wish to acknowledge the enthusiasm and hard work of all the workshop participants. Their extraordinary contributions were key to the success of this workshop, and their dedication to this endeavor provides strong evidence that the basic research community is firmly committed to supporting our nation's goal of reducing the threat of terrorism in the United States. 
Workshop Presentations - February 28, 2002

• The Role of Science and Technology in Countering Terrorism (Keynote Lecture), Jay Davis, LLNL
• Welcome and Brief Overview, Walter Stevens, BES
• Introduction and Purpose, Terry Michalske, SNL
• Radiological and Nuclear Threat Area, Michael Anastasio, LLNL
• Chemical Threat Area, Michael Sailor, UC San Diego
• Biological Threat Area, David Franz, Southern Research Institute

Complex Systems: Science for the 21st Century

This report is based upon a BES workshop, March 5-6, 1999, which was designed to help define new scientific directions related to complex systems in order to create new understanding about the nano world and complicated, multicomponent structures. As we look further into this century, we find science and technology at yet another threshold: the study of simplicity will give way to the study of "complexity" as the unifying theme. The triumphs of science in the past century, which improved our lives immeasurably, can be described as elegant solutions to problems reduced to their ultimate simplicity. We discovered and characterized the fundamental particles and the elementary excitations in matter and used them to form the foundation for interpreting the world around us and for building devices to work for us. We learned to design, synthesize, and characterize small, simple molecules and to use them as components of, for example, materials, catalysts, and pharmaceuticals. We developed tools to examine and describe these "simple" phenomena and structures. The new millennium will take us into the world of complexity. Here, simple structures interact to create new phenomena and assemble themselves into devices. Here also, large complicated structures can be designed atom by atom for desired characteristics. With new tools, new understanding, and a developing convergence of the disciplines of physics, chemistry, materials science, and biology, we will build on our 20th century successes and begin to ask and solve questions that were, until the 21st century, the stuff of science fiction. Complexity takes several forms. The workshop participants identified five emerging themes around which research could be organized.

Collective Phenomena — Can we achieve an understanding of collective phenomena to create materials with novel, useful properties? We already see the first examples of materials with properties dominated by collective phenomena — phenomena that emerge from the interactions of the components of the material and whose behavior thus differs significantly from the behavior of those individual components. In some cases collective phenomena can bring about a large response to a small stimulus — as seen with colossal magnetoresistance, the basis of a new generation of recording memory media. Collective phenomena are also at the core of the mysteries of such materials as the high-temperature superconductors.

Materials by Design — Can we design materials having predictable, and yet often unusual properties? In the past century we discovered materials, frequently by chance, determined their properties, and then discarded those materials that did not meet our needs.
Now we will see the advent of structural and compositional freedoms that will allow the design of materials having specific desired characteristics directly from our knowledge of atomic structure. Of particular interest are "nanostructured" materials, with length scales between 1 and 100 nanometers. In this regime, dimensions "disappear," with zero-dimensional dots or nanocrystals, one-dimensional wires, and two-dimensional films, each with unusual properties distinctly different from those of the same material with "bulk" dimensions. We could design materials for lightweight batteries with high storage densities, for turbine blades that can operate at 2500°C, and perhaps even for quantum computing.

Functional Systems — Can we design and construct multicomponent molecular devices and machines? We have already begun to use designed building blocks to create self-organized structures of previously unimagined complexity. These will form the basis of systems such as nanometer-scale chemical factories, molecular pumps, and sensors. We might even stretch and think of self-assembling electronic/photonic devices.

Nature's Mastery — Can we harness, control, or mimic the exquisite complexity of Nature to create new materials that repair themselves, respond to their environment, and perhaps even evolve? This is, perhaps, the ultimate goal. Nature tells us it can be done and provides us with examples to serve as our models. We learn about Nature's design rules and try to mimic green plants which capture solar energy, or genetic variation as a route to "self-improvement" and optimized function. These concepts may seem fanciful, but with the revolution now taking place in biology, progressing from DNA sequence to structure and function, the possibilities seem endless. Nature has done it. Why can't we?

New Tools — Can we develop the characterization instruments and the theory to help us probe and exploit this world of complexity? Radical enhancement of existing techniques and the development of new ones will be required for the characterization and visualization of structures, properties, and functions — from the atomic, to the molecular, to the nanoscale, to the macroscale. Terascale computing will be necessary for the modeling of these complex systems.

Now is the time. We can now do this research, make these breakthroughs, and enhance our lives as never before imagined. The work of the past few decades has taken us to this point, solving many of the problems that underlie these challenges, teaching us how to approach problems of complexity, giving us the confidence needed to achieve these goals. This work also gave us the ability to compute on our laps with more power than available to the Apollo astronauts on their missions to the moon. It taught us to engineer genes, "superconduct" electricity, visualize individual atoms, build "plastics" ten times stronger than steel, and put lasers on chips for portable CD players. We are ready to take the next steps.

Complexity pays dividends. We think of simple silicon for semiconductors, but our CD players depend on dozens of layers of semiconductors made of aluminum, gallium, and arsenic. Copper conducts electricity and iron is magnetic. Superconductors and giant magnetoresistive materials have eight or more elements, all of which are essential and interact with one another to produce the required properties. Nature, too, shows us the value of complexity.
Hemoglobin, the protein that transports oxygen from the lungs to, for example, the brain, is made up of four protein subunits which interact to vastly increase the efficiency of delivery. As individual subunits, these proteins cannot do the job.

The new program. The very nature of research on complexity makes it a "new millennium" program. Its foundations rest on four pillars: physics, chemistry, materials science, and biology. Success will require an unprecedented level of inter-disciplinary collaboration. Universities will need to break down barriers between established departments and encourage the development of teams across disciplinary lines. Interactions between universities and national laboratories will need to be increased, both in the use of the major facilities at the laboratories and also through collaborations among research programs. Finally, understanding the interactions among components depends on understanding the components themselves. Although a great deal has been accomplished in this area in the past few decades, far more remains to be done. A complexity program will complement the existing programs and will ensure the success of both. The benefits are, as they have been at the start of all previous scientific "revolutions," beyond anything we can now foresee.

Nanoscale Science, Engineering and Technology Research Directions

This report illustrates the wide range of research opportunities and challenges in nanoscale science, engineering and technology. It was prepared in 1999 in connection with the interagency national research initiative on nanotechnology. The principal missions of the Department of Energy (DOE) in Energy, Defense, and Environment will benefit greatly from future developments in nanoscale science, engineering and technology. For example, nanoscale synthesis and assembly methods will result in significant improvements in solar energy conversion; more energy-efficient lighting; stronger, lighter materials that will improve efficiency in transportation; greatly improved chemical and biological sensing; use of low-energy chemical pathways to break down toxic substances for environmental remediation and restoration; and better sensors and controls to increase efficiency in manufacturing. The DOE's Office of Science has a strong focus on nanoscience discovery, the development of fundamental scientific understanding, and the conversion of these into useful technological solutions. A key challenge in nanoscience is to understand how deliberate tailoring of materials on the nanoscale can lead to novel and enhanced functionalities. The DOE National Laboratories are already making a broad range of contributions in this area. The enhanced properties of nanocrystals for novel catalysts, tailored light emission and propagation, and supercapacitors are being explored, as are hierarchical nanocomposite structures for chemical separations, adaptive/responsive behavior and impurity gettering. Nanocrystals and layered structures offer unique opportunities for tailoring the optical, magnetic, electronic, mechanical and chemical properties of materials. The Laboratories are currently synthesizing layered structures for electronics/photonics, novel magnets and surfaces with tailored hardness.
This report supplies numerous other examples of new properties and functionalities that can be achieved through nanoscale materials control. These include: • Nanoscale layered materials that can yield a four-fold increase in the performance of permanent magnets; • Addition of aluminum oxide nanoparticles that converts aluminum metal into a material with wear resistance equal to that of the best bearing steel; • New optical properties achieved by fabricating photonic band gap superlattices to guide and switch optical signals with nearly 100% transmission, in very compact architectures; • Layered quantum well structures to produce highly efficient, low-power light sources and photovoltaic cells; • Novel optical properties of semiconducting nanocrystals that are used to label and track molecular processes in living cells; • Novel chemical properties of nanocrystals that show promise as photocatalysts to speed the breakdown of toxic wastes; • Meso-porous inorganic hosts with self-assembled organic monolayers that are used to trap and remove heavy metals from the environment; and • Meso-porous structures integrated with micromachined components that are used to produce high-sensitivity and highly selective chip-based detectors of chemical warfare agents. These and other nanostructures are already recognized as likely key components of 21st century optical communications, printing, computing, chemical sensing and energy conversion technologies. The DOE is well prepared to make major contributions to developing nanoscale scientific understanding, and ultimately nanotechnology, through its materials characterization, synthesis, in situ diagnostic and computing capabilities. The DOE and its National Laboratories maintain a large array of major national user facilities that are ideally suited to nanoscience discovery and to developing a fundamental understanding of nanoscale processes. Synchrotron and neutron sources provide exquisite energy control of radiation sources that are able to probe structure and properties on length scales ranging from Ångstroms to millimeters. Scanning Probe Microscope (SPM) and Electron Microscopy facilities provide unique capabilities for characterizing nanoscale materials and diagnosing processes. DOE also maintains synthesis and prototype manufacturing centers where fundamental and applied research, technology development and prototype fabrication can be pursued simultaneously. Finally, the large computational facilities at the DOE National Laboratories can be key contributors in nanoscience discovery, modeling and understanding. In order to increase the impact of major DOE facilities on the national nanoscience and technology initiative, it is proposed to establish several new Nanomaterials Research Centers. These Centers are intended to exploit and be associated with existing radiation sources and materials characterization and diagnostic facilities at DOE National Laboratories. Each Center would focus on a different area of nanoscale research, such as materials derived from or inspired by nature; hard and crystalline materials, including the structure of macromolecules; magnetic and soft materials, including polymers and ordered structures in fluids; and nanotechnology integration. The Nanomaterials Research Centers will facilitate interdisciplinary research and provide an environment where students, faculty, industrial researchers and national laboratory staff can work together to rapidly advance nanoscience discovery and its application to nanotechnology. 
Establishment of these Centers will permit focusing DOE resources on the most important nanoscale science questions and technology needs, and will ensure strong coupling with the national nanoscience initiative. The synergy of these DOE assets in partnership with universities and industry will provide the best opportunity for nanoscience discoveries to be converted rapidly into technological advances that will meet a variety of national needs and enable the United States to reap the benefits of a technological revolution.
.. _WorkedExamples: Worked Examples =============== One of the best ways to learn XMDS2 is to see several illustrative examples. Here are a set of example scripts and explanations of the code, which will be a good way to get started. As an instructional aid, they are meant to be read sequentially, but the adventurous could try starting with one that looked like a simulation they wanted to run, and adapt for their own purposes. :ref:`NonLinearSchrodingerEquation` (partial differential equation) :ref:`Kubo` (stochastic differential equations) :ref:`Fibre` (stochastic partial differential equation using parallel processing) :ref:`IntegerDimensionExample` (integer dimensions) :ref:`WignerArguments` (two dimensional PDE using parallel processing, passing arguments in at run time) :ref:`GroundStateBEC` (PDE with continual renormalisation - computed vectors, filters, breakpoints) :ref:`HermiteGaussGroundStateBEC` (Hermite-Gaussian basis) :ref:`2DMultistateSE` (combined integer and continuous dimensions with matrix multiplication, aliases) All of these scripts are available in the included "examples" folder, along with more examples that demonstrate other tricks. Together, they provide starting points for a huge range of different simulations. .. _NonLinearSchrodingerEquation: The nonlinear Schrödinger equation ---------------------------------- This worked example will show a range of new features that can be used in an **XMDS2** script, and we will also examine our first partial differential equation. We will take the one dimensional nonlinear Schrödinger equation, which is a common nonlinear wave equation. The equation describing this problem is: .. math:: \frac{\partial \phi}{\partial \xi} = \frac{i}{2}\frac{\partial^2 \phi}{\partial \tau^2} - \Gamma(\tau)\phi+i|\phi|^2 \phi where :math:`\phi` is a complex-valued field, and :math:`\Gamma(\tau)` is a :math:`\tau`-dependent damping term. Let us look at an XMDS2 script that integrates this equation, and then examine it in detail. .. code-block:: xpdeint nlse Joe Hope The nonlinear Schrodinger equation in one dimension, which is a simple partial differential equation. We introduce several new features in this script. xi phi Gamma 10 100 10 wavefunction Ltt dampingVector density wavefunction normalisation wavefunction densityK wavefunction Let us examine the new items in the ```` element that we have demonstrated here. The existence of the ```` element causes the simulation to be timed. The ```` element causes the computer to make a sound upon the conclusion of the simulation. The ```` element is used to pass options to the `FFTW libraries for fast Fourier transforms `_, which are needed to do spectral derivatives for the partial differential equation. Here we used the option `plan="patient"`, which makes the simulation test carefully to find the fastest method for doing the FFTs. More information on possible choices can be found in the `FFTW documentation `_. Finally, we use two tags to make the simulation run faster. The ```` element switches on several loop optimisations that exist in later versions of the GCC compiler. The ```` element turns on threaded parallel processing using the OpenMP standard where possible. These options are not activated by default as they only exist on certain compilers. If your code compiles with them on, then they are recommended. Let us examine the ```` element. .. code-block:: xpdeint xi This is the first example that includes a transverse dimension. We have only one dimension, and we have labelled it "tau". 
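As a minimal sketch of how such a dimension is typically declared in an XMDS2 geometry block (standard XMDS2 element and attribute names; the propagation dimension ``xi``, the 128-point lattice and the domain from -6 to 6 are the values discussed in this example):

.. code-block:: xpdeint

    <geometry>
      <propagation_dimension> xi </propagation_dimension>
      <transverse_dimensions>
        <!-- one continuous dimension with 128 grid points on the domain (-6, 6) -->
        <dimension name="tau" lattice="128" domain="(-6, 6)" />
      </transverse_dimensions>
    </geometry>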
It is a continuous dimension, but only defined on a grid containing 128 points (defined with the lattice variable), and on a domain from -6 to 6. The default is that transforms in continuous dimensions are fast Fourier transforms, which means that this dimension is effectively defined on a loop, and the "tau=-6" and "tau=6" positions are in fact the same. Other transforms are possible, as are discrete dimensions such as an integer-valued index, but we will leave these advanced possibilities to later examples. Two vector elements have been defined in this simulation. One defines the complex-valued wavefunction "phi" that we wish to evolve. We define the transverse dimensions over which this vector is defined by the ``dimensions`` tag in the description. By default, it is defined over all of the transverse dimensions in the ```` element, so even though we have omitted this tag for the second vector, it also assumes that the vector is defined over all of tau. The second vector element contains the component "Gamma" which is a function of the transverse variable tau, as specified in the equation of motion for the field. This second vector could have been avoided in two ways. First, the function could have been written explicitly in the integrate block where it is required, but calculating it once and then recalling it from memory is far more efficient. Second, it could have been included in the "wavefunction" vector as another component, but then it would have been unnecessarily complex-valued, it would have needed an explicit derivative in the equations of motion (presumably ``dGamma_dxi = 0;``), and it would have been Fourier transformed whenever the phi component was transformed. So separating it as its own vector is far more efficient. The ```` element for a partial differential equation has some new features: .. code-block:: xpdeint 10 100 10 wavefunction Ltt dampingVector There are some trivial changes from the tutorial script, such as the fact that we are using the ARK45 algorithm rather than ARK89. Higher order algorithms are often better, but not always. Also, since this script has multiple output groups, we have to specify how many times each of these output groups are sampled in the ```` element, so there are three numbers there. Besides the vectors that are to be integrated, we also specify that we want to use the vector "dampingVector" during this integration. This is achieved by including the ```` element inside the ```` element. The equation of motion as written in the CDATA block looks almost identical to our desired equation of motion, except for the term based on the second derivative, which introduces an important new concept. Inside the ```` element, we can define any number of operators. Operators are used to define functions in the transformed space of each dimension, which in this case is Fourier space. The derivative of a function is equivalent to multiplying by :math:`i*k` in Fourier space, so the :math:`\frac{i}{2}\frac{\partial^2 \phi}{\partial \tau^2}` term in our equation of motion is equivalent to multiplying by :math:`-\frac{i}{2}k_\tau^2` in Fourier space. In this example we define "Ltt" as an operator of exactly that form, and in the equation of motion it is applied to the field "phi". Operators can be explicit (``kind="ex"``) or in the interaction picture (``kind="ip"``). The interaction picture can be more efficient, but it restricts the possible syntax of the equation of motion. 
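As a sketch of what such an operator definition and the corresponding equation of motion can look like (this follows the usual XMDS2 conventions that the Fourier-space variable conjugate to ``tau`` is called ``ktau`` and that ``mod2()`` returns the modulus squared; treat the exact form as illustrative rather than a verbatim copy of the example script):

.. code-block:: xpdeint

    <operators>
      <operator kind="ex" constant="yes">
        <operator_names>Ltt</operator_names>
        <![CDATA[
          // multiplication by -i/2 * ktau^2 in Fourier space implements the i/2 d^2/dtau^2 term
          Ltt = -i*0.5*ktau*ktau;
        ]]>
      </operator>
      <integration_vectors>wavefunction</integration_vectors>
      <dependencies>dampingVector</dependencies>
      <![CDATA[
        // the remaining terms of the equation of motion are written in ordinary space
        dphi_dxi = Ltt[phi] - Gamma*phi + i*mod2(phi)*phi;
      ]]>
    </operators>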
Safe utilisation of interaction picture operators will be described later, but for now let us emphasise that **explicit operators should be used** unless the user is clear what they are doing. That said, **XMDS2** will generate an error if the user tries to use interaction picture operators incorrectly. The ``constant="yes"`` option in the operator block means that the operator is not a function of the propagation dimension "xi", and therefore only needs to be calculated once at the start of the simulation. The output of a partial differential equation offers more possibilities than an ordinary differential equation, and we examine some in this example. For vectors with transverse dimensions, we can sample functions of the vectors on the full lattice or a subset of the points. In the ```` element, we must add a string called "basis" that determines the space in which each transverse dimension is to be sampled, optionally followed by the number of points to be sampled in parentheses. If the number of points is not specified, it will default to a complete sampling of all points in that dimension. If a non-zero number of points is specified, it must be a factor of the lattice size for that dimension. .. code-block:: xpdeint density wavefunction The first output group samples the mod square of the vector "phi" over the full lattice of 128 points. If the lattice parameter is set to zero points, then the corresponding dimension is integrated. .. code-block:: xpdeint normalisation wavefunction This second output group samples the normalisation of the wavefunction :math:`\int d\tau |\phi(\tau)|^2` over the domain of :math:`\tau`. This output requires only a single real number per sample, so in the integrate element we have chosen to sample it many more times than the vectors themselves. Finally, functions of the vectors can be sampled with their dimensions in Fourier space. .. code-block:: xpdeint densityK wavefunction The final output group above samples the mod square of the Fourier-space wavefunction phi on a sample of 32 points. .. _Kubo: Kubo Oscillator --------------- This example demonstrates the integration of a stochastic differential equation. We examine the Kubo oscillator, which is a complex variable whose phase is evolving according to a Wiener noise. In a suitable rotating frame, the equation of motion for the variable is .. math:: \frac{dz}{dt} = i z \;\eta where :math:`\eta(t)` is the Wiener differential, and we interpret this as a Stratonovich equation. In other common notation, this is sometimes written: .. math:: dz = i z \;\circ dW Most algorithms employed by XMDS require the equations to be input in the Stratonovich form. Ito differential equations can always be transformed into Stratonovich euqations, and in this case the difference is equivalent to the choice of rotating frame. This equation is solved by the following XMDS2 script: .. code-block:: xpdeint kubo Graham Dennis and Joe Hope Example Kubo oscillator simulation t eta z 100 main drivingNoise zR zI main The first new item in this script is the ```` element. This element enables us to change top level management of the simulation. Without this element, XMDS2 will integrate the stochastic equation as described. With this element and the option ``name="multi-path"``, it will integrate it multiple times, using different random numbers each time. The output will then contain the mean values and standard errors of your output variables. The number of integrations included in the averages is set with the ``paths`` variable. 
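As a sketch of the driver line being described (standard XMDS2 attribute names; the path count of 10000 matches the sample output shown below):

.. code-block:: xpdeint

    <driver name="multi-path" paths="10000" />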
In the ```` element we have included the ```` element. This performs the integration first with the specified number of steps (or with the specified tolerance), and then with twice the number of steps (or equivalently reduced tolerance). The output then includes the difference between the output variables on the coarse and the fine grids as the 'error' in the output variables. This error is particularly useful for stochastic integrations, where algorithms with adaptive step-sizes are less safe, so the number of integration steps must be user-specified. We define the stochastic elements in a simulation with the ```` element. .. code-block:: xpdeint eta This defines a vector that is used like any other, but it will be randomly generated with particular statistics and characteristics rather than initialised. The name, dimensions and type tags are defined just as for normal vectors. The names of the components are also defined in the same way. The noise is defined as a Wiener noise here (``kind = "wiener"``), which is a zero-mean Gaussian random noise with an average variance equal to the discretisation volume (here it is just the step size in the propagation dimension, as it is not defined over transverse dimensions). Other noise types are possible, including uniform and Poissonian noises, but we will not describe them in detail here. We may also define a noise method to choose a non-default pseudo random number generator, and a seed for the random number generator. Using a seed can be very useful when debugging the behaviour of a simulation, and many compilers have pseudo-random number generators that are superior to the default option (posix). The integrate block is using the semi-implicit algorithm (``algorithm="SI"``), which is a good default choice for stochastic problems, even though it is only second order convergent for deterministic equations. More will be said about algorithm choice later, but for now we should note that adaptive algorithms based on Runge-Kutta methods are not guaranteed to converge safely for stochastic equations. This can be particularly deceptive as they often succeed, particularly for almost any problem for which there is a known analytic solution. We include elements from the noise vector in the equation of motion just as we do for any other vector. The default SI and Runge-Kutta algorithms converge to the *Stratonovich* integral. Ito stochastic equations can be converted to Stratonovich form and vice versa. Executing the generated program 'kubo' gives slightly different output due to the "multi-path" driver. .. code-block:: none $ ./kubo Beginning full step integration ... Starting path 1 Starting path 2 ... many lines omitted ... Starting path 9999 Starting path 10000 Beginning half step integration ... Starting path 1 Starting path 2 ... many lines omitted ... Starting path 9999 Starting path 10000 Generating output for kubo Maximum step error in moment group 1 was 4.942549e-04 Time elapsed for simulation is: 2.71 seconds The maximum step error in each moment group is given in absolute terms. This is the largest difference between the full step integration and the half step integration. While a single path might be very stochastic: .. figure:: images/kuboSingle.* :align: center The mean value of the real and imaginary components of the z variable for a single path of the simulation. The average over multiple paths can be increasingly smooth. .. 
figure:: images/kubo10000.* :align: center The mean and standard error of the z variable averaged over 10000 paths, as given by this simulation. It agrees within the standard error with the expected result of :math:`\exp(-t/2)`. .. _Fibre: Fibre Noise ----------- This simulation is a stochastic partial differential equation, in which a one-dimensional damped field is subject to a complex noise. This script can be found in ``examples/fibre.xmds``. .. math:: \frac{\partial \psi}{\partial t} = -i \frac{\partial^2 \psi}{\partial x^2} -\gamma \psi+\beta \frac{1}{\sqrt{2}}\left(\eta_1(x)+i\eta_2(x)\right) where the noise terms :math:`\eta_j(x,t)` are Wiener differentials and the equation is interpreted as a Stratonovich differential equation. On a finite grid, these increments have variance :math:`\frac{1}{\Delta x \Delta t}`. .. code-block:: xpdeint fibre Joe Hope and Graham Dennis Example fibre noise simulation t Eta phi 50 L drivingNoise main pow_dens main Note that the noise vector used in this example is complex-valued, and has the argument ``dimensions="x"`` to define it as a field of delta-correlated noises along the x-dimension. This simulation demonstrates the ease with which XMDS2 can be used in a parallel processing environment. Instead of using the stochastic driver "multi-path", we simply replace it with "mpi-multi-path". This instructs XMDS2 to write a parallel version of the program based on the widespread `MPI standard `_. This protocol allows multiple processors or clusters of computers to work simultaneously on the same problem. Free open source libraries implementing this standard can be installed on a linux machine, and come standard on Mac OS X. They are also common on many supercomputer architectures. Parallel processing can also be used with deterministic problems to great effect, as discussed in the later example :ref:`WignerArguments`. Executing this program is slightly different with the MPI option. The details can change between MPI implementations, but as an example: .. code-block:: none $xmds2 fibre.xmds xmds2 version 2.1 "Happy Mollusc" (r2543) Copyright 2000-2012 Graham Dennis, Joseph Hope, Mattias Johnsson and the xmds team Generating source code... ... done Compiling simulation... ... done. Type './fibre' to run. Note that different compile options (and potentially a different compiler) are used by XMDS2, but this is transparent to the user. MPI simulations will have to be run using syntax that will depend on the MPI implementation. Here we show the version based on the popular open source `Open-MPI `_ implementation. .. code-block:: none $ mpirun -np 4 ./fibre Found enlightenment... (Importing wisdom) Planning for x <---> kx transform... done. Beginning full step integration ... Rank[0]: Starting path 1 Rank[1]: Starting path 2 Rank[2]: Starting path 3 Rank[3]: Starting path 4 Rank[3]: Starting path 8 Rank[0]: Starting path 5 Rank[1]: Starting path 6 Rank[2]: Starting path 7 Rank[3]: Starting path 4 Beginning half step integration ... Rank[0]: Starting path 1 Rank[2]: Starting path 3 Rank[1]: Starting path 2 Rank[3]: Starting path 8 Rank[0]: Starting path 5 Rank[2]: Starting path 7 Rank[1]: Starting path 6 Generating output for fibre Maximum step error in moment group 1 was 4.893437e-04 Time elapsed for simulation is: 20.99 seconds In this example we used four processors. The different processors are labelled by their "Rank", starting at zero. 
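The division of labour behind the ``mpi-multi-path`` driver is conceptually simple: each rank integrates its own share of the paths with independent noises, and the partial results are combined at the end. As a rough illustration only (using the mpi4py Python bindings rather than the MPI-enabled C code that XMDS2 actually generates, with an invented round-robin assignment of paths to ranks, and reusing the scalar Kubo step from earlier for brevity rather than the full fibre field):

.. code-block:: python

    import numpy as np
    from mpi4py import MPI

    # Hedged illustration only: XMDS2 generates and compiles its own MPI code.
    # This sketch merely shows the kind of work division we assume the
    # mpi-multi-path driver performs: paths shared round-robin over ranks,
    # followed by a reduction of the partial sums.
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_paths, n_steps, dt = 8, 1000, 0.01
    rng = np.random.default_rng(seed=rank)     # independent noise per rank (illustrative seeding only)

    local_sum = 0.0 + 0.0j
    for path in range(rank, n_paths, size):    # e.g. rank 0 of 4 handles paths 1 and 5
        print(f"Rank[{rank}]: Starting path {path + 1}", flush=True)
        z = 1.0 + 0.0j
        for _ in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt))
            z_pred = z + 1j * z * dW           # stochastic Heun step, as before
            z = z + 0.5j * (z + z_pred) * dW
        local_sum += z

    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    if rank == 0:
        print("path average of z(t_final):", total / n_paths)

Run under a hypothetical ``mpirun -np 4 python paths_sketch.py``, each rank prints its own "Starting path" lines.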
Because the processors are working independently, the output from the different processors can come in a randomised order. In the end, however, the .xsil and data files are constructed identically to the single processor outputs. The analytic solution to the stochastic averages of this equation is given by .. math:: \langle |\psi(k,t)|^2 \rangle = \exp(-2\gamma t)|\psi(k,0)|^2 +\frac{\beta^2 L_x}{4\pi \gamma} \left(1-\exp(-2\gamma t)\right) where :math:`L_x` is the length of the x domain. We see that a single integration of these equations is quite chaotic: .. figure:: images/fibreSingle.* :align: center The momentum space density of the field as a function of time for a single path realisation. while an average of 1024 paths (change ``paths="8"`` to ``paths="1024"`` in the ```` element) converges nicely to the analytic solution: .. figure:: images/fibre1024.* :align: center The momentum space density of the field as a function of time for an average of 1024 paths. .. _IntegerDimensionExample: Integer Dimensions ------------------ This example shows how to handle systems with integer-valued transverse dimensions. We will integrate the following set of equations .. math:: \frac{dx_j}{dt} = x_j \left(x_{j-1}-x_{j+1}\right) where :math:`x_j` are complex-valued variables defined on a ring, such that :math:`j\in \{0,j_{max}\}` and the :math:`x_{j_{max}+1}` variable is identified with the variable :math:`x_{0}`, and the variable :math:`x_{-1}` is identified with the variable :math:`x_{j_{max}}`. .. code-block:: xpdeint integer_dimensions Graham Dennis XMDS2 script to test integer dimensions. t x 0) = 1.0; ]]> 1000 main j) = x(j => j)*(x(j => j_minus_one) - x(j => j_plus_one)); ]]> xR main The first extra feature we have used in this script is the ```` element. It performs run-time checking that our generated code does not accidentally attempt to access a part of our vector that does not exist. Removing this tag will increase the speed of the simulation, but its presence helps catch coding errors. The simulation defines a vector with a single transverse dimension labelled "j", of type "integer" ("int" and "long" can also be used as synonyms for "integer"). In the absence of an explicit type, the dimension is assumed to be real-valued. The dimension has a "domain" argument as normal, defining the minimum and maximum values of the dimension's range. The lattice element, if specified, is used as a check on the size of the domain, and will create an error if the two do not match. Integer-valued dimensions can be called non-locally. Real-valued dimensions are typically coupled non-locally only through local operations in the transformed space of the dimension, but can be called non-locally in certain other situations as described in :ref:`the reference`. The syntax for calling integer dimensions non-locally can be seen in the initialisation CDATA block: .. code-block:: xpdeint x = 1.0e-3; x(j => 0) = 1.0; where the syntax ``x(j => 0)`` is used to reference the variable :math:`x_0` directly. We see a more elaborate example in the integrate CDATA block: .. code-block:: xpdeint dx_dt(j => j) = x(j => j)*(x(j => j_minus_one) - x(j => j_plus_one)); where the vector "x" is called using locally defined variables. This syntax is chosen so that multiple dimensions can be addressed non-locally with minimal possibility for confusion. .. _WignerArguments: Wigner Function --------------- This example integrates the two-dimensional partial differential equation .. 
math:: \begin{split} \frac{\partial W}{\partial t} &= \Bigg[ \left(\omega + \frac{U_{int}}{\hbar}\left(x^2+y^2-1\right)\right) \left(x \frac{\partial}{\partial y} - y \frac{\partial}{\partial x}\right)\\ &\phantom{=\Bigg[} - \frac{U_{int}}{16 \hbar}\left(x\left(\frac{\partial^3}{\partial x^2 \partial y} +\frac{\partial^3}{\partial y^3}\right)-y\left(\frac{\partial^3}{\partial y^2 \partial x}+\frac{\partial^3}{\partial x^3}\right)\right)\Bigg]W(x,y,t) \end{split} with the added restriction that the derivative is forced to zero outside a certain radius. This extra condition helps maintain the long-term stability of the integration. The script can be found in ``examples/wigner_arguments_mpi.xmds`` under your XMDS2 installation directory. .. code-block:: xpdeint wigner Graham Dennis and Joe Hope Simulation of the Wigner function for an anharmonic oscillator with the initial state being a coherent state. t W damping _max_x-width) damping = 0.0; else damping = 1.0; ]]> 50 Lx Ly Lxxx Lxxy Lxyy Lyyy main dampConstants WR WI main This example demonstrates two new features of XMDS2. The first is the use of parallel processing for a deterministic problem. The FFTW library only allows MPI processing of multidimensional vectors. For multidimensional simulations, the generated program can be parallelised simply by adding the ``name="distributed-mpi"`` argument to the ```` element. .. code-block:: xpdeint $ xmds2 wigner_argument_mpi.xmds xmds2 version 2.1 "Happy Mollusc" (r2680) Copyright 2000-2012 Graham Dennis, Joseph Hope, Mattias Johnsson and the xmds team Generating source code... ... done Compiling simulation... ... done. Type './wigner' to run. To use multiple processors, the final program is then called using the (implementation specific) MPI wrapper: .. code-block:: xpdeint $ mpirun -np 2 ./wigner Planning for (distributed x, y) <---> (distributed ky, kx) transform... done. Planning for (distributed x, y) <---> (distributed ky, kx) transform... done. Sampled field (for moment group #1) at t = 0.000000e+00 Current timestep: 5.908361e-06 Sampled field (for moment group #1) at t = 1.400000e-05 Current timestep: 4.543131e-06 ... The possible acceleration achievable when parallelising a given simulation depends on a great many things including available memory and cache. As a general rule, it will improve as the simulation size gets larger, but the easiest way to find out is to test. The optimum speed up is obviously proportional to the number of available processing cores. The second new feature in this simulation is the ```` element in the ```` block. This is a way of specifying global variables with a given type that can then be input at run time. The variables are specified in a self explanatory way .. code-block:: xpdeint ... where the "default_value" is used as the valuable of the variable if no arguments are given. In the absence of the generating script, the program can document its options with the ``--help`` argument: .. code-block:: none $ ./wigner --help Usage: wigner --omega --alpha_0 --absorb --width --Uint_hbar Details: Option Type Default value -o, --omega real 0.0 -a, --alpha_0 real 3.0 -b, --absorb real 8.0 -w, --width real 0.3 -U, --Uint_hbar real 1.0 We can change one or more of these variables' values in the simulation by passing it at run time. .. code-block:: none $ mpirun -np 2 ./wigner --omega 0.1 --alpha_0 2.5 --Uint_hbar 0 Found enlightenment... (Importing wisdom) Planning for (distributed x, y) <---> (distributed ky, kx) transform... done. 
Planning for (distributed x, y) <---> (distributed ky, kx) transform... done. Sampled field (for moment group #1) at t = 0.000000e+00 Current timestep: 1.916945e-04 ... The values that were used for the variables, whether default or passed in, are stored in the output file (wigner.xsil). .. code-block:: xpdeint Script compiled with XMDS2 version 2.1 "Happy Mollusc" (r2680) See http://www.xmds.org for more information. Variables that can be specified on the command line: Command line argument omega = 1.000000e-01 Command line argument alpha_0 = 2.500000e+00 Command line argument absorb = 8.000000e+00 Command line argument width = 3.000000e-01 Command line argument Uint_hbar = 0.000000e+00 Finally, note the shorthand used in the output group .. code-block:: xpdeint which is short for .. code-block:: xpdeint .. _GroundStateBEC: Finding the Ground State of a BEC (continuous renormalisation) -------------------------------------------------------------- This simulation solves another partial differential equation, but introduces several powerful new features in XMDS2. The nominal problem is the calculation of the lowest energy eigenstate of a non-linear Schrödinger equation: .. math:: \frac{\partial \phi}{\partial t} = i \left[\frac{1}{2}\frac{\partial^2}{\partial y^2} - V(y) - U_{int}|\phi|^2\right]\phi which can be found by evolving the above equation in imaginary time while keeping the normalisation constant. This causes eigenstates to exponentially decay at the rate of their eigenvalue, so after a short time only the state with the lowest eigenvalue remains. The evolution equation is straightforward: .. math:: \frac{\partial \phi}{\partial t} = \left[\frac{1}{2}\frac{\partial^2}{\partial y^2} - V(y) - U_{int}|\phi|^2\right]\phi but we will need to use new XMDS2 features to manage the normalisation of the function :math:`\phi(y,t)`. The normalisation for a non-linear Schrödinger equation is given by :math:`\int dy |\phi(y,t)|^2 = N_{particles}`, where :math:`N_{particles}` is the number of particles described by the wavefunction. The code for this simulation can be found in ``examples/groundstate_workedexamples.xmds``: .. code-block:: xpdeint groundstate Joe Hope Calculate the ground state of the non-linear Schrodinger equation in a harmonic magnetic trap. This is done by evolving it in imaginary time while re-normalising each timestep. t V1 phi Ncalc wavefunction normalisation wavefunction 25 4000 wavefunction normalisation T wavefunction potential wavefunction norm_dens wavefunction normalisation norm normalisation We have used the ``plan="exhasutive"`` option in the ```` element to ensure that the absolute fastest transform method is found. Because the FFTW package stores the results of its tests (by default in the ~/.xmds/wisdom directory), this option does not cause significant computational overhead, except perhaps on the very first run of a new program. This simulation introduces the first example of a very powerful feature in XMDS2: the ```` element. This has syntax like any other vector, including possible dependencies on other vectors, and an ability to be used in any element that can use vectors. The difference is that, much like noise vectors, computed vectors are recalculated each time they are required. This means that a computed vector can never be used as an integration vector, as its values are not stored. However, computed vectors allow a simple and efficient method of describing complicated functions of other vectors. 
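To make the role of this computed vector concrete, here is a minimal NumPy sketch (not XMDS2 code; the grid, the initial guess and the value of :math:`N_{particles}` are invented for illustration) of the quantity "Ncalc" evaluates and of the renormalisation it feeds, which is described below:

.. code-block:: python

    import numpy as np

    # Minimal sketch (not XMDS2 code) of what the computed vector "Ncalc"
    # evaluates and how the renormalisation filter uses it. Grid, field and
    # particle number are invented here purely for illustration.
    n_y, y_max = 256, 10.0
    y = np.linspace(-y_max, y_max, n_y, endpoint=False)
    dy = y[1] - y[0]
    N_particles = 5.0

    phi = np.exp(-y**2 / 2).astype(complex)    # crude initial guess for the field

    # The computed vector: integrate |phi|^2 over the transverse dimension y.
    Ncalc = np.sum(np.abs(phi)**2) * dy

    # The filter: rescale phi so that the integral equals N_particles again.
    phi *= np.sqrt(N_particles / Ncalc)

    print(np.sum(np.abs(phi)**2) * dy)         # ~ N_particles

Repeating this rescaling after every step of the imaginary-time evolution is exactly what keeps the normalisation constant while the lowest eigenstate emerges.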
Computed vectors may depend on other computed vectors, allowing for spectral filtering and other advanced options. See for example, the :ref:`AdvancedTopics` section on :ref:`Convolutions`. The difference between a computed vector and a stored vector is emphasised by the replacement of the ```` element with an ```` element. Apart from the name, they have virtually identical purpose and syntax. .. code-block:: xpdeint Ncalc wavefunction Here, our computed vector has no transverse dimensions and depends on the components of "wavefunction", so the extra transverse dimensions are integrated out. This code therefore integrates the square modulus of the field, and returns it in the variable "Ncalc". This will be used below to renormalise the "phi" field. Before we examine that process, we have to introduce the ```` element. The ```` element can be placed in the ```` element, or inside ```` elements as we will see next. Elements placed in the ```` element are executed in the order they are found in the .xmds file. Filter elements place the included CDATA block directly into the generated program at the designated position. If the element does not contain any dependencies, like in our first example, then the code is placed alone: .. code-block:: xpdeint This filter block merely prints a string into the output when the generated program is run. If the ```` element contains dependencies, then the variables defined in those vectors (or computed vectors, or noise vectors) will be available, and the CDATA block will be placed inside loops that run over all the transverse dimensions used by the included vectors. The second filter block in this example depends on both the "wavefunction" and "normalisation" vectors: .. code-block:: xpdeint normalisation wavefunction Since this filter depends on a vector with the transverse dimension "y", this filter will execute for each point in "y". This code multiplies the value of the field "phi" by the factor required to produce a normalised function in the sense that :math:`\int dy |\phi(y,t)|^2 = N_{particles}`. The next usage of a ```` element in this program is inside the ```` element, where all filters are placed inside a ```` element. .. code-block:: xpdeint wavefunction normalisation Filters placed in an integration block are applied each integration step. The "where" flag is used to determine whether the filter should be applied directly before or directly after each integration step. The default value for the where flag is ``where="step start"``, but in this case we chose "step end" to make sure that the final output was normalised after the last integration step. At the end of the sequence element we introduce the ```` element. This serves two purposes. The first is a simple matter of convenience. Often when we manage our input and output from a simulation, we are interested solely in storing the exact state of our integration vectors. A breakpoint element does exactly that, storing the components of any vectors contained within, taking all the normal options of the ```` element but not requiring any ```` elements as that information is assumed. .. code-block:: xpdeint wavefunction If the filename argument is omitted, the output filenames are numbered sequentially. Any given ```` element must only depend on vectors with identical dimensions. This program begins with a very crude guess to the ground state, but it rapidly converges to the lowest eigenstate. .. 
figure:: images/groundstateU2.* :align: center The shape of the ground state rapidly approaches the lowest eigenstate. For weak nonlinearities, it is nearly Gaussian. .. figure:: images/groundstateU20.* :align: center When the nonlinear term is larger (:math:`U=20`), the ground state is wider and more parabolic. .. _HermiteGaussGroundStateBEC: Finding the Ground State of a BEC again --------------------------------------- Here we repeat the same simulation as in the :ref:`GroundStateBEC` example, using a different transform basis. While spectral methods are very effective, and Fourier transforms are typically very efficient due to the Fast Fourier transform algorithm, it is often desirable to describe nonlocal evolution in bases other than the Fourier basis. The previous calculation was the Schrödinger equation with a harmonic potential and a nonlinear term. The eigenstates of such a system are known analytically to be Gaussians multiplied by the Hermite polynomials. .. math:: \left[-\frac{\hbar}{2 m}\frac{\partial^2}{\partial x^2} + \frac{1}{2}\omega^2 x^2\right]\phi_n(x) = E_n \phi_n(x) where .. math:: \phi_n(x,t) = \sqrt{\frac{1}{2^n n!}} \left(\frac{m \omega}{\hbar \pi}\right)^\frac{1}{4} e^{-\frac{m \omega x^2}{2\hbar}} H_n\left(\sqrt{\frac{m \omega}{\hbar}x}\right),\;\;\;\;\;\;E_n = \left(n+\frac{1}{2}\right) \omega where :math:`H_n(u)` are the physicist's version of the Hermite polynomials. Rather than describing the derivatives as diagonal terms in Fourier space, we therefore have the option of describing the entire :math:`-\frac{\hbar}{2 m}\frac{\partial^2}{\partial x^2} + \frac{1}{2}\omega^2 x^2` term as a diagonal term in the hermite-Gaussian basis. Here is an XMDS2 simulation that performs the integration in this basis. The following is a simplified version of the ``examples/hermitegauss_groundstate.xmds`` script. .. code-block:: xpdeint hermitegauss_groundstate Graham Dennis Solve for the groundstate of the Gross-Pitaevskii equation using the hermite-Gauss basis. t phi Ncalc wavefunction 100 100 wavefunction normalisation L wavefunction normalisation wavefunction wavefunction dens wavefunction dens wavefunction The major difference in this simulation code, aside from the switch back from dimensionless units, is the new transverse dimension type in the ```` element. .. code-block:: xpdeint We have explicitly defined the "transform" option, which by defaults expects the Fourier transform. The ``transform="hermite-gauss"`` option requires the 'mpmath' package installed, just as Fourier transforms require the FFTW package to be installed. The "lattice" option details the number of hermite-Gaussian eigenstates to include, and automatically starts from the zeroth order polynomial and increases. The number of hermite-Gaussian modes fully determines the irregular spatial grid up to an overall scale given by the ``length_scale`` parameter. The ``length_scale="sqrt(hbar/(M*omegarho))"`` option requires a real number, but since this script defines it in terms of variables, XMDS2 is unable to verify that the resulting function is real-valued at the time of generating the code. XMDS2 will therefore fail to compile this program without the feature: .. code-block:: xpdeint which disables many of these checks at the time of writing the C-code. .. _2DMultistateSE: Multi-component Schrödinger equation ------------------------------------ This example demonstrates a simple method for doing matrix calculations in XMDS2. We are solving the multi-component PDE .. 
math:: \frac{\partial \phi_j(x,y)}{\partial t} = \frac{i}{2}\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right)\phi_j(x,y) - i U(x,y) \sum_k V_{j k}\phi_k(x,y) where the last term is more commonly written as a matrix multiplication. Writing this term out explicitly is feasible for a small number of components, but when the number of components becomes large, or perhaps :math:`V_{j k}` should be precomputed for efficiency reasons, it is useful to be able to perform this sum over the integer dimensions automatically. This example show how this can be done naturally using a computed vector. The XMDS2 script is as follows: .. code-block:: xpdeint 2DMSse Joe Hope Schroedinger equation for multiple internal states in two spatial dimensions. t phi U V VPhi internalInteraction wavefunction k); ]]> 20 100 wavefunction Ltt spatialInteraction coupling density wavefunction normalisation wavefunction The only truly new feature in this script is the "aliases" option on a dimension. The integer-valued dimension in this script indexes the components of the PDE (in this case only two). The :math:`V_{j k}` term is required to be a square array of dimension of this number of components. If we wrote the k-index of :math:`V_{j k}` using a separate ```` element, then we would not be enforcing the requirement that the matrix be square. Instead, we note that we will be using multiple 'copies' of the j-dimension by using the "aliases" tag. .. code-block:: xpdeint This means that we can use the index "k", which will have exactly the same properties as the "j" index. This is used to define the "V" function in the "internalInteraction" vector. Now, just as we use a computed vector to perform an integration over our fields, we use a computed vector to calculate the sum. .. code-block:: xpdeint VPhi internalInteraction wavefunction k); ]]> Since the output dimensions of the computed vector do not include a "k" index, this index is integrated. The volume element for this summation is the spacing between neighbouring values of "j", and since this spacing is one, this integration is just a sum over k, as required. By this point, we have introduced most of the important features in XMDS2. More details on other transform options and rarely used features can be found in the :ref:`advancedTopics` section.
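Before leaving these worked examples, note that the coupling sum in this last script amounts to an ordinary matrix-vector product at every spatial point, and it may help to see the same bookkeeping written out independently of XMDS2. The following NumPy sketch is purely illustrative; the array names, sizes and the particular matrix are all invented here:

.. code-block:: python

    import numpy as np

    # Illustrative sketch (not XMDS2 code) of the coupling term computed above:
    # VPhi_j(x, y) = sum_k V_{jk} * phi_k(x, y), with invented names and sizes.
    n_components, n_x, n_y = 2, 32, 32

    V = np.array([[0.0, 1.0],
                  [1.0, 0.0]])                 # square coupling matrix V_{jk}

    rng = np.random.default_rng(0)
    phi = rng.normal(size=(n_components, n_x, n_y)) + 0j   # stand-in for phi_k(x, y)

    # Summing over the aliased index k: the volume element of an integer
    # dimension is 1, so the "integration" is just a plain sum, as in the text.
    VPhi = np.einsum('jk,kxy->jxy', V, phi)

    print(VPhi.shape)                          # (2, 32, 32)

Because the sum runs over an index with unit spacing, this is precisely the integration the computed vector performs when the "k" dimension is absent from its output.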
Jihad or Bust Mustafa Akyol believes that “the main obstacle to Christian religious freedom in Turkey is not Islam but Turkish nationalism and laïcité (“Render Unto Atatürk,” March). And regarding the “verses of the sword” in the Qur’an, Akyol suggests “it is possible to argue that these verses refer only to those non-Muslims who have been belligerent toward Muslims in the first place.” I am not so sure. As Akyol knows, almost all these numerous verses appear in the suras dictated after Muhammad had left Mecca, where he had been trying to compromise with hostile parties, and had arrived at Medina, where he became a powerful chieftain gaining booty from battles and demanding unquestioning fidelity. The bellicose verses he dictated there are taken by Muslims as the direct word of Allah relayed to Muhammad, and they cannot be subjected to interpretation. When a devout Muslim reads Sura 5:51, telling the faithful never to take Jews or Christians as friends, or Sura 2:61, decreeing an eternal curse of humiliation and wretchedness on Jews, or the numerous passages (for example, Suras 2:191, 9:5, 47:4) enjoining believers to slay unbelievers wherever they can be found and to continue to fight until “religion should be only for Allah” (Sura 8:39), one may vainly hope that there are not many Muslims who read their scriptures literally. Christians who commit acts of violence against those of other faiths cannot point to the New Testament for justification of their actions, but unfortunately Muslims can. Howard P. Kainz Marquette University Milwaukee, Wisconsin Mustafa Akyol replies: Thanks to Howard Kainz for his criticism. It has become an oft-repeated dictum that the later verses of the Qur’an are less tolerant than the earlier ones, but the reality is more complex. Of course, all the wars by Prophet Muhammad were in the second (“Medinan”) stage of his mission, but there are notable peaceful and tolerant passages from the same period”including the famous “there is no compulsion in religion” verse (2:256). Actually, when the Qur’an is read as a whole, one finds verses that balance the belligerent ones and put them into context. For example, the one Kainz quotes, “Sura 5:51, telling the faithful never to take Jews or Christians as friends,” should be supplemented by these, which are also Medinan: “Allah does not forbid you from being good to those who have not fought you in religion or driven you from your homes, or from being just towards them. Allah loves those who are just. Allah merely forbids you from taking as friends those who have fought you in religion and driven you from your homes and who supported your expulsion” (60:8-9). One can choose to emphasize either the peaceful or the belligerent verses of the Qur’an. (Militant Muslims and biased critics of Islam focus on the latter, whereas some Muslim apologists speak only about the former.) As a third way, one can categorize and interpret all these seemingly opposing passages in a manner that will support a “just war” argument. Michael Cook, professor of Near Eastern Studies at Princeton University, agrees. “On the basis of the Koran alone,” he noted in a 2006 Pew Forum lecture, “you could mount a decent argument for saying offensive jihad is never a duty.” Quantifying Quantum Stephen M. Barr’s “Faith and Quantum Theory” (March) gives an accurate portrayal of quantum theory as far as it goes. 
Barr, however, writes off David Bohm’s approach to the subject by arguing that it “brings back Newtonian determinism and mechanism.” Such an argument is fallacious, because Bohm’s theory gives rise to nonlocal phenomena. By nonlocal, we mean that an event at point A can be correlated with an event at point B with no discernable connection between them. Even in the classical Newtonian realm, the deterministic system of Laplace is impossible except, perhaps, with a very small number of interacting particles. For systems with very large numbers of interacting atoms and molecules, collective phenomena prevent any kind of prediction or, indeed, retrodiction. Such collective behavior was first studied by Poincaré more than a century ago. The research stagnated until the advent of computers, which were needed to study the evolution of instabilities that announce the onset of chaos (in the scientific sense) and self-ordering. The onset of such instabilities is the cause of our inability to predict reliably the weather more than about a week in advance. Robert C. Whitten NASA (retired) Cupertino, California Stephen Barr explains that Peter E. Hodgson posits the Bohmian theory as the “only metaphysically sound alternative” with respect to interpretations of quantum theory. I like the fact that its being “metaphysically sound” is Hodgson’s criterion for judging the veracity of the theory. And Barr seems to hold to the same criterion. With respect to the common themes and questions of traditional metaphysics, however, I was wondering if Barr would be able to explain in greater detail why he prefers the traditional Copenhagen interpretation of quantum theory. Adam DeMuro Naperville, Illinois Stephen Barr voices his opposition to determinism in favor of the Copenhagen interpretation of Heisenberg’s uncertainty principle, finding it sympathetic to biblical faith. Yet other religious scientists reject the Copenhagen interpretation, as Barr notes: “Hodgson insists that Bohmian theory is the only metaphysically sound alternative. He is unfazed that it brings back Newtonian determinism and mechanism.” Inasmuch as Barr holds that Christian orthodoxy requires the affirmation of free will, he stands on solid ground. That same affirmation, however, doesn’t bolster any purely scientific argument in favor of the Copenhagen interpretation. Consistent with modern science’s limits, Hodgson rightly observes that “physics can take no account of Divine intervention or of acts of free will, so in the course of scientific research the world is assumed to be strictly determined” ( Science and Belief in the Nuclear Age ). Hodgson goes on to explain that “God causes everything, but He also acts by secondary causality when he creates matter and gives it certain definite properties. Thereafter the matter behaves in accord with these properties. This does not happen by unbreakable necessity, because God has complete power over nature and can suspend or alter the laws of nature . . . . This means that Laplacian determinism is unacceptable . . . . [B]oth God and human beings can cause . . . effects” that Laplacian determinism cannot explain. If it were true that physical laws are absolutely inviolable, the natural order would seem to preclude miracles. Yet perfectly reasonable believers such as Hodgson and St. Thomas Aquinas affirm unequivocally the reality of miracles. 
Despite Barr’s earlier essay “The Miracle of Evolution” (February 2006), it isn’t entirely clear whether he would admit without qualification the reality of miracles. If he does, then why wouldn’t he accept the idea that physical laws may be broken through the insertion of human acts? I would encourage believers to take Hodgson’s work more seriously than they might otherwise do if they relied exclusively on the passing acknowledgement found in “Faith and Quantum Theory.” (One should note that the back cover of Hodgson’s Science and Belief contains several glowing recommendations, including one by Barr himself.) Peter A. Pagan Aquinas College Nashville, Tennessee While Stephen Barr is to be commended for his attempt to deal in a short essay with the complicated subject of the implications of quantum theory for religious belief, his article contains several misstatements that weaken and to some extent deflect his argument. (1) His claim that Planck proposed that light energy travels in little packets, now called photons, is historically incorrect. Planck proposed only that resonators in the solid body in equilibrium with the light absorbed light energy in discrete units. It was Einstein who proposed that this discreteness resided in light radiation itself, an interpretation that Planck opposed for many years afterward. (2) Barr’s interpretation of a light particle passing through two windows at once is not the presently accepted one. In some sense, a photon passing through one window “knows” the other window is there and acts accordingly. An extensive literature of theory and experiment has been built up by elaborating on this insight. In one example, it has been shown that a photon already heading toward the two windows can anticipate the experimenter pulling down the shade on one of them and act accordingly. These considerations point to a much stronger interaction of observer with observed than Barr posits. An answer to some of the philosophical questions Barr raises will require a deep probing of this interaction, without an a priori dismissal of the question as to what extent “minds create reality.” In making the statements in the preceding paragraph, I have dismissed alternatives, such as Bohm’s pilot-wave theory, as being without experimental justification. Also, I am a bit surprised that Barr did not mention string theory as an alternative school of thought, even though it has become quite controversial lately. (3) Barr seems to rely too heavily on Heinz Pagels’ dismissal of Buddhist philosophy. While either classical mechanics or modern physics can be the basis of the encounter, much of the recent debate between science and Buddhist thought does revolve around quantum theory, as in the dialogue between Matthieu Ricard and Trinh Xuan Thuan in The Quantum and the Lotus . The interconnectedness of all phenomena and the role of the observer in any observation (which includes the question of the relation of matter and consciousness) can be approached from either a classical or a modern perspective, but the questions are seen more acutely from the perspective of quantum theory. William D. Hobey Worcester Polytechnic Institute Worcester, Massachusetts What is curious about Barr’s “Faith and Quantum Theory” is how he completely ignores every philosophy of science except scientific realism. He proceeds as if science had direct access to ontology and gives no thought to any competing philosophy, such as instrumentalism. 
He simply presents physicists as if they were philosophers or theologians. So the abstractions of quantum theory somehow attain reality, a reality only physicists can ascertain. Last year, in “The Miracle of Evolution,” Barr agreed that science should be “metaphysically modest” (Cardinal Schönborn’s phrase), but here he presumes that physicists can determine whether determinism is real. Scientists need to decide whether they are in the philosophy business. If they are, they should compete with all philosophies. If they are not, they should stop claiming special access to truth about reality. Ralph Gillmann Burke, Virginia Stephen Barr attempts to explain for us the Copenhagen and other interpretations of quantum mechanics. Perhaps he should have cited a blunter version of Feynman’s comment on quantum mechanics, to the effect that whoever claims to understand it doesn’t know what he is talking about. And how could anyone understand it when its great prophet, Werner Heisenberg, frankly stated in his Physics and Philosophy that at the quantum level the law of noncontradiction has to be abandoned. This, of course, means that his view had to be unintelligible nonsense. Violation of the law of noncontradiction should have been a red flag to Heisenberg and Niels Bohr. Instead, Heisenberg abandoned the long-held concept of science as a description of reality and substituted an instrumental notion of science. The equations work, but they don’t describe the underlying reality. The older, realist view that the equations of science represented the underlying physical realities was grievously wounded and has yet to recover. But in 2000, with the publication of his Collective Electrodynamics: Quantum Foundations of Electromagnetism , California Institute of Technology professor Carver Mead cut the Gordian knot of wave-particle duality with his evidence that purported that such quantum particles as the electron, proton, and neutron are waves simply. In so doing, Mead restored to us a rational physics. I don’t think that the philosophical and theological implications of quantum mechanics can be discussed any longer without coming to grips with Mead’s work, which I reviewed for Touchstone ‘s September 2003 issue (“Recovering Rational Science”). Barr, of course, is merely pointing out that a quantum mechanics based on wave-particle duality doesn’t support philosophical determinism. My point is that any such quantum mechanics undermines both rationality and rational science. Physical determinism we can live with, but irrationality is fatal to both science and revealed religion. David Haddon Redding, California “Faith and Quantum Theory” is excellent on the nature of light. Yet, as the merest amateur of the philosophy of science, may I suggest that when Professor Barr speaks of the different schools of thought in physics today, he would have been clearer if he had used the Aristotelian-Scholastic principles of act and potency. For example, the Schrödinger equations leave all possibilities in play. I interpret this to mean, philosophically, that all possibilities exist, but only in the mind of God, because God is all act and knows all possibilities as actual. This can be true only of God. Also, it would be possible for both situations of the light collection to be real only in the mind of God; but in the physical-temporal world, only one could be actual, while the other must remain only possible. Paula Haigh Nazareth, Kentucky Stephen M. Barr replies: Robert C. Whitten confuses determinism with predictability. 
Physical determinism, as formulated by Laplace, asserts that an infinite mind having complete knowledge of the present state of the physical world could calculate its future development. The fact that, in practice, human beings cannot do this is beside the point. The prevalence of chaos (in the scientific sense) does make the world less predictable but does not in itself afford an escape from determinism. (In fact, chaotic behavior was first studied and is best understood in classical deterministic systems.) As for Bohmian theory, the “nonlocal” aspects of it make it no less deterministic. Peter Hodgson, a distinguished advocate of Bohmian theory, acknowledges that it is deterministic, as one can see from Peter Pagan’s letter. As noted by Pagan, the way Hodgson reconciles the putative determinism of the laws of physics with free will is by appealing to the fact that God can suspend those laws to allow humans to act freely. Since God has that power, Pagan is right to say that the reality of free will does not compel one to accept the Copenhagen interpretation of quantum theory or any other nondeterministic theory of physics. Pagan senses that I am somewhat skeptical of the idea that God suspends the laws of physics whenever human beings act freely. He therefore wonders whether I deny God’s power over nature or the reality of supernatural miracles. Of course, I affirm both. My hesitation in embracing Hodgson’s solution to the problem of free will comes from the fact that miracles seem to be extraordinary events and dramatic manifestations of divine power, whereas most free human acts are not. Because human free will, being a spiritual power, is beyond the laws of physics, it needn’t involve a suspension of them. Yet it might, and I regard Hodgson’s suggestion as perfectly reasonable. I enthusiastically second Pagan’s endorsement of Hodgson’s writings on science and religion, which are of unexcelled lucidity and soundness. Adam DeMuro asks me to explain more fully why I presently prefer standard quantum theory and its traditional (Copenhagen) interpretation to Bohmian theory. The reasons are not primarily metaphysical but scientific. In my article, I claimed that Bohm’s modification of quantum theory sacrificed much of its beauty. The beauty I had in mind lies in its theoretical unification of matter and forces, which are now understood to be two manifestations of “quantum fields.” This is one of the greatest triumphs in the history of science. Quantum theory unites in a profound way particles with waves, and therefore with fields and forces. In fact, forces can be understood in two mathematically equivalent ways: as arising from “field lines” stretching between objects, as Faraday imagined them, or as due to the exchange of “virtual particles” between the objects, as pictured by Feynman. This wonderful synthesis emerges when quantum principles are applied in a thoroughly consistent manner to produce what is called quantum field theory. Our present, extremely successful theory of all known, nongravitational physics (called the standard model of particle physics) is a quantum field theory. Since Bohmian theory severs the wave-particle duality, it unravels the particle-wave-field-force connection. This makes it quite doubtful that a full quantum field theory can be constructed within the Bohmian framework. Certainly, that has not been accomplished to date (though a few people have been working on it recently). 
For this reason, I think Bohmianism will ultimately prove inadequate as a scientific theory. As far as metaphysics is concerned, Bohmian physics, being essentially Newtonian, readily lends itself to materialist monism, whereas the traditional interpretation of quantum theory seems necessarily to refer to rational mind as something distinct from inanimate matter. William D. Hobey’s point about Planck’s and Einstein’s roles in developing the quantum idea is correct. I did not see the need, however, to go into that level of historical and technical detail. Hobey’s second point is not correct. The idea that each photon goes through just one window is emphatically not the “presently accepted” view. Indeed, the idea that particles follow many paths at once is the basis of the most modern and powerful mathematical formulation of quantum theory, called the Feynman Path Integral Formalism. Only in the “classical limit” is it a good approximation to say that particles follow a single path or trajectory. Baseballs do, but photons and electrons do not. (Ironically, it is in Bohmian theory, which Hobey says he “dismisses,” that one can say that each photon goes through just one window while sensing the openness of the other window. The “sensing” happens by the “nonlocal” forces to which Whitten’s letter refers. In Bohmianism, particles are like confetti in the wind. The wind pattern reflects the openness of both windows, but each piece of confetti goes through just one.) Hobey is also wrong in supposing that string theory is an “alternative” to quantum theory. String theory is based on standard postulates of quantum theory. The “interconnectedness of all phenomena” in quantum physics referred to by Hobey is one of the main reasons that some people see an affinity with Eastern mysticism. My point was not so much to “dismiss” Buddhist ideas as to note that on a very central issue, namely the radical distinction between knower and known, the traditional interpretation of quantum theory is closer to Western than to Eastern thought. Both Ralph Gillmann and David Haddon raise the thorny question of instrumentalism and the relation of quantum theory to reality. Many people have said that the traditional interpretation of quantum physics necessarily entails an instrumentalist view of science. That depends on what one means by instrumentalism. I believe that science, including quantum physics, makes objectively true statements about the real world and helps us to understand that world as it really is in itself. I reject views that say that modern science is merely useful or successful in manipulating (as opposed to understanding) the world, or that it merely “saves the appearances,” or that it deals only with sensible phenomena rather than with the reality underlying them, or that it studies just the accidental quantitative aspects of things rather than their essences. All such views, whether they stem from nominalism, Humeanism, Kantianism, instrumentalism, or positivism, involve a gross underestimation of the explanatory successes of modern science and the power of human reason. Gillmann is right that science has no special “direct access to ontology.” What science, philosophy, and other branches of knowledge do have is the access to reality afforded by the use of reason, which may be fairly direct or extremely indirect. 
Gillmann is also right that the abstractions of science do not “attain reality” in the sense of becoming concrete objects, but they do refer to real aspects of the world and are necessary for correctly understanding it, at least for finite minds such as ours, which must use discursive reason. Science is not philosophy (if we use the terms in the modern sense). A scientific theory is one thing and its philosophical interpretation or significance another. That is why quantum theory, on whose mathematical rules everyone agrees, is given wildly divergent philosophical interpretations by intelligent and informed people. It can be interpreted in ways that are materialist or nonmaterialist, deterministic or nondeterministic, realist or instrumentalist. It is the same with evolution and the Darwinian mechanism of natural selection, which can be interpreted in atheistic ways or in religious ways. Scientists become arrogant (rather than “metaphysically modest”) when they pretend such divergent interpretations do not exist or claim all but one are “unscientific.” Scientists as scientists are not “in the philosophy business,” as Gillmann puts it. That should not debar them, however, from holding philosophical views. Moreover, it happens that philosophizing usefully about physics requires knowing something about it; so physicists, while not in the business of philosophy, can certainly be helpful to those who are. David Haddon notes that Heisenberg once wrote that the law of noncontradiction is violated in quantum theory. This raises the question of what is meant by the “Copenhagen interpretation.” Does it include every comment ever made by Bohr or Heisenberg? If so, then it is, just as Haddon says, “unintelligible nonsense.” That is why some prefer the less loaded terms “traditional interpretation” or “standard interpretation.” In quantum physics, one faces a crucial question: When an observer judges an outcome to have happened, is that the only outcome that really happened, or did all the other possible outcomes also really happen in other slices of the universe (seen by the observer’s alter egos)? If one says the former, one has adopted some version of the traditional interpretation. If one says the latter, one has adopted some version of the many-worlds interpretation. The traditional interpretation, so defined, can be further interpreted in a variety of directions. It can lead to solipsism, extreme instrumentalism, subjectivism, or nutty ideas about logic. I don’t think it has to lead to any of those disasters, however, or I wouldn’t take it seriously. I heartily agree with Haddon that irrationality is inimical to true religion, but so, I would say, is the tendency to equate the mysterious or paradoxical with the irrational. As for the Carver Mead book that Haddon refers to, it proposes some interesting speculations but only sketchily develops them. They are only the germs of ideas, and it seems to me that little will grow from them. The fact that his book has been almost completely ignored by theoretical physicists shows that I am not alone in this judgment. Paula Haigh’s suggestion that the Aristotelian concepts of act and potency may be of relevance to the issues raised by quantum physics is interesting. Others have had the same thought, including Heisenberg himself, and more recently the physicist Wolfgang Smith. 
I find it strange that few of the many letters FIRST THINGS received about my article dwelt on what I took to be its most significant point, namely that a powerful antimaterialist argument can be developed from quantum physics. Even many materialist physicists would agree with the following proposition: If quantum theory as we have known it for more than eighty years is correct, and if one also takes the materialist view that human beings are merely complicated physical systems, then some version of the many-worlds interpretation is probably unavoidable. This means that materialism must now carry some very heavy baggage. Missing the Point The noun leadership is traditionally defined as “to cause to follow.” If a high school teacher had asked the class to write a short paper on “The Leadership of George W. Bush: Con & Pro” (March), I respectfully suggest that both Joseph Bottum and Michael Novak would fail because of their mutual failure to so much as intimate an opinion on the central issue of the post-September 11 era: the necessity of the war against Iraq. All other issues pale in significance to that issue. I cannot find where either author offers an opinion on the necessity of the United States’ use of deadly force on a massive scale. It is impossible to conclude that each believes that the case for the necessity of war in March 2003 was so compelling as not to warrant discussion relative to the leadership of Bush. For example, no one applauds Roosevelt’s leadership because he recommended the day after Pearl Harbor that the Congress pass a bill authorizing war against Japan. By their joint failure to discuss the necessity of war, Bottum and Novak imply that any discussion of the issue of the necessity of war is irrelevant to the issue of Bush’s leadership. John B. Day Philadelphia, Pennsylvania The assertions that Bush is hapless and incompetent simply don’t hold up to scrutiny. Long before September 11, the Washington beltway was intently watching the economy slowly deliver a recession that was especially devastating to the Midwest. In the face of this downturn, by itself one of the most daunting challenges any president can face, Bush cut taxes. It was a bold move stridently argued against by the Washington elite, and it proved to be exactly what the economy needed. Hardly the fruits of one so hapless and incompetent. In response to the attacks and large-scale slaughter of innocent people on American soil, Bush single-handedly changed the paradigm of the West versus terror. The Bush doctrine needs to be carefully studied by every American, for it contains the structure we will use to fight terror for the next fifty years. Neither the Democrats in America nor opponents in any Western nation have been able to provide a better framework. Hardly the work of an incompetent leader. Has Bush made mistakes? What a pointless question! We so earnestly avoid paradigm shifts that we have in large measure lost an understanding of the nature of radical change. It is not just that avoiding mistakes during paradigm shifts is an impossibility; the bigger issue is that mistakes are a crucial part of the exploration and discovery process that establishes a path within the context of the new framework. Failure is a necessity. Have we all forgotten that? The critical element in the process of failure is to avoid catastrophic failure, and to manage the exploration and discovery period such that we fail within our means. This Bush has done. This is not a manifestation of haplessness but of resoluteness. 
The best analogy is that of Ferdinand Magellan finding his way around the tip of South America. Look at the Magellan Straits by means of a satellite image. Every one of the inlets and channels had to be explored. Most turned out to be mistakes, but those mistakes had to be made. Magellan was lucky he didn’t have the editors of the New York Times harping on his quarterdeck. Surely they would have instigated mutiny and doomed the mission to failure. And thank goodness also that Joseph Bottum was not a crew member. Julian Tonning Holland, Michigan Put not your trust in princes. This is what I think about the arguments around G.W. Bush. You tie yourself to a political party or a politician, and you sink or swim with him. Now you are sinking. You gambled and you lost, so do what losing gamblers do: Pay what you owe, leave the table, and don’t whine. There is a reason why there is a separation between church and state, and between politics and church. When you hold eternal verities hostage to the vagaries of current events, you put them at risk. Forget about G.W. Bush. Concentrate on what you do best, and do not gamble what you cannot afford to lose. What has Iraq to do with the godly life anyway? Adriana I. Pena State College, Pennsylvania Michael Novak replies: Just-war decisions are not geometry but practical wisdom”prudence. In the case of Iraq, there were strong reasons against and strong reasons for, and the issue was well argued for some months before the decision was finally made. Someone once asked Gandhi, “Was Christianity good for Europe?” “Too early to tell,” he replied. A dozen times since January 28, the insurgents in Iraq have used weapons of mass destruction”weaponized canisters of chlorine gas”chiefly against other Muslims. Given the testimony of the Clinton administration and Hans Blix about the existing chemical weapons in Iraq before the war, no American president could have afforded to take the chance, at least not after September 11, that these weapons might be used in the United States, or he would have been held culpable and derelict in his duty. The decision was not “necessary,” but on balance it was wise. Others may argue against this”and some did then”and in due course we shall know the full effects on the politics of the Middle East. I thank Mr. Tonning for his arguments, with which I mostly agree. The question of human rights in Iraq and of democratic progress in the Middle East generally was, and is, an important question of justice. Sometimes “justice” requires that wars should be fought, lest worse sins be committed. On matters of practical judgment such as this, on which persons of goodwill can and do disagree, we often do not know who was right until some time after the event. In the meantime, we do our best to act wisely, nobly, and bravely, “with a firm reliance on the protection of Divine Providence.” Joseph Bottum replies: My thanks to our correspondents”and to Michael Novak, a member of the FIRST THINGS editorial board and a treasured friend. When we publish a magazine on religion and public life, one of the things we have to cover is public life, and so from time to time we will have political essays in the magazine. I have little to add to my original thoughts, which prompted Michael Novak’s reply in the March issue. Justices Roberts and Alito joined the Supreme Court’s decision to uphold the federal ban on partial-birth abortion, and their presence on the Court is a good example of why I supported President Bush in 2000 and 2004. 
The current mess of the Justice Department under Attorney General Gonzales is a good example of why the incompetence of the Bush White House makes me want to throw my hands up in despair. Noll and Void Thanks to Mark Noll and Fr. Neuhaus for adding depth and texture to my understanding of Christianity in Canada (The Public Square, March). I would only add that it’s debatable whether the “old Canada” was more religious or more conservative than the “old United States.” I would propose that each teased out and developed different strains in British Protestantism. To generalize for brevity’s sake: Canada was born on the Plains of Abraham, out of a deal reached between the Anglican establishment of Britain and the (Gallican-tinged) Catholic establishment of New France. The culture of deference that this engendered was only reinforced by the influx of Tory refugees from the American Revolution. In religion, as in other things, most Canadians followed the lead of their “betters,” who in turn followed the lead of their British (née European) betters. From this perspective, the crisis of faith in Canada is the problem of the magisterial Reformation: If the state can set the terms of religion, then why not cut out the middleman and simply worship at the altar of the state? In fact, when the “best people” decided that church was no place for a gentleman, most Canadians followed them into apostasy. The United States, on the other hand, was settled largely by religious dissenters whom the Crown was pleased to see as far from the center of power as possible. Dissenter Protestant ideas (for example, skepticism about the divine-right monarchy) were both a major cause of the American Revolution and enshrined by the revolution as nowhere else on earth. Almost from the beginning of the republic, it was assumed that religion was to be decided by personal conscience and that churches were by their nature voluntary organizations. Likewise, legitimacy of government was believed to reside in the consent of the governed and the laws of nature “self-evident” to all. It’s not hard to see why Americans would be less susceptible to being told that God is dead by their “betters.” (In a similar vein, few Americans who opposed abortion were cowed when the Supreme Court “settled” the issue!) But lest we become too triumphal, the Dissenter Protestant tradition has a terminal logic of its own: If every man is his own priest, why not his own church? And if his own church, why not his own god? Sean Degidon New York, New York The idea that Canada has ceased to be a “Christian” nation has received considerable support since Mark Noll’s address. Such a view is consistent with that held by many of the country’s cultural elite: Canada is a multifaith society moving, albeit with sporadic setbacks, toward a purely secular future. And it is true that the secular authority of the Roman Catholic Church has fallen, especially in Quebec and Newfoundland. Certainly, measured by churchgoing, Canada is less Christian than it was. But the falling away from churchgoing ought not to be confused with a collapse of faith. It is true that few Canadians were unaffiliated with a specific religious group in the early 1960s, but formal religious affiliation does not always reflect religious belief. Mainstream churches in Canada tend to reflect a strong social commitment with a less firm focus on theological concerns. As a result, mainstream churches offer good works (which is good and proper) but not much theological depth. 
Faced with social work as theology, many Canadians have simply left their churches; good works are done well by others, so why keep a religious trapping? The remaining churches focus on a theology that often has a simplistic dualist approach (God vs. Satan”a battle till the End Times) and that offers little to persons of insight. Churches rooted in Canadian history have become more involved in secular or political matters (divestment from Israel, for example) than in matters of faith. As a result, Canadians seeking spiritual leadership do not always find it in the church of their fathers (and mothers). Canadians overwhelmingly believe in God and overwhelmingly believe in a Christian God. My experience, based on traveling across the country, is that such belief is quiet, unimposing, and seldom expressed in public. But the belief is profound and deep-set. It is not comfortable with pious platitudes (such have not served Canadians well in the past), but it is real. The reports of the death of faith in Canada are premature. James Morton Toronto, Ontario Reopening the Discussion Richard John Neuhaus helpfully describes papal infallibility as a “charism that ensures that the Church will never invoke its full teaching authority to require assent to anything in faith and morals that is false” (The Public Square, February). But his affirmation that it is such “a narrowly prescribed charism” that “it has been exercised in a manner beyond dispute only once, namely in the 1950 definition of the bodily assumption of Mary,” does not seem quite right. He implicitly provides a good counterexample a few pages later. He praises Sr. Sara Butler for pointing out that the Church requires the “full and unconditional assent of the faithful” to its ban on women priests and that the “discussion . . . is closed.” Sr. Butler shows that this was irrevocably made the case by the Servant of God Pope John Paul II in his 1994 Apostolic Letter Ordinatio Sacerdotalis, which confirmed the tradition “in virtue of my ministry of confirming the brethren ( Luke 22:32).” In formally requiring such assent, this papal letter is similar to the papal definition of the Assumption and is surely therefore, given Fr. Neuhaus’ own definition, an exercise of the charism of infallibility. The somewhat simpler formula of pronouncement in 1994 from that of 1950 does not imply a lesser degree of guaranteed doctrinal inerrancy. The inerrancy of both acts of the Church’s highest teaching authority is “beyond (legitimate) dispute.” Twentieth-century papal encyclical condemnations of artificial contraception in Casti Connubii and Humanae Vitae, each of which also explicitly applies the authority of Christ to traditional teaching, would, I think, be other examples. The ecclesial presence of Christ’s definitive “But I say to you” may not be quite as “narrowly prescribed” as Fr. Neuhaus implies. Fr. Hugh MacKenzie Editor, Faith Magazine London, England RJN replies: Fr. MacKenzie might have added the statements on abortion and euthanasia in John Paul II’s encyclical Evangelium Vitae . By “beyond dispute” I mean that the 1950 definition explicitly invokes the charism of infallibility defined by the First Vatican Council. This is not the case with Evangelium Vitae and the other instances cited by Fr. MacKenzie, and some Catholic theologians in good standing hold that they do not meet the strict criteria for an infallible definition.
Does the Hamiltonian always translate to the energy of a system? What about in QM? So by the Schrödinger equation, is it true then that $i\hbar{\partial\over\partial t}|\psi\rangle=H|\psi\rangle$ means that $i\hbar{\partial\over\partial t}$ is also an energy operator? How can we interpret this? Thanks.

2 Answers

I will formulate the following in such a way that the language doesn't change too much within the answer. This also emphasizes the analogies between related concepts.

• Classically, you have a configuration/state $\Psi$, which is characterised by coordinates $x^i,v^i$ or $q^i,p_i$ and/or any other relevant parameters. Then an energy is a function or functional of this configuration $$H:\Psi\mapsto E_\Psi,\ \ \mbox{where}\ \ E_\Psi:=H[\Psi].$$ Here $E_\Psi$ is some real (energy-)value associated with the configuration $\Psi$. To name an example: let $q$ and $p$ be the coordinates of your two-dimensional phase space; then every point $\Psi=(q,p)$ characterises a possible configuration. The configuration/state $\Psi$ here is really just the pair of coordinates. The scalar function $H(p,q)=\frac{1}{2m}p^2+\frac{\omega}{2}q^2$ clearly is a map which assigns a scalar energy value $E_\Psi$ to every possible configuration $\Psi$. The evolution of $\Psi$ in time is determined by $H$; see Hamilton's equations. This might be viewed as the point of coming up with the Hamiltonian in the first place, and it is typically done in such a way that the energy value $E_\Psi$ will not change with time. What you call "energy" is pretty much determined by this criterion. In the case of a time-independent Hamiltonian (as in the example), and if the time development of observables $f$ is governed by $\frac{\mathrm{d}f}{\mathrm{d}t} = \{f, H\} + \frac{\partial f}{\partial t}$, then you have $\frac{\mathrm{d}H}{\mathrm{d}t} = \{H, H\} = 0$ and the conservation of the quantity $E_\Psi:=H[\Psi]$ is evident. Of course, you might want to model friction processes and whatnot, and it then might be difficult to define all the relevant quantities.

• In quantum mechanics, your configuration $\Psi$ is given by a state vector $|\Psi\rangle$ (or an equivalence class of such vectors) in some Hilbert space. There are many vectors in this Hilbert space, but there are some vectors $|\Psi_n\rangle$ which span the whole vector space and which are special in the following sense: they are eigenvectors of the Hamiltonian operator, $H|\Psi_n\rangle = E_n|\Psi_n\rangle$. Here $E_n$ is just the real eigenvalue, and I assume that I can enumerate the eigenstates by a discrete index $n$. Now for every point in time, your state vector $\Psi$ is just a linear combination of the special states $\{\Psi_n\}$. (As a remark, notice that all the time dependencies of states are left implicit in this post.) Therefore, if you know how $H$ acts on all the $\Psi_n$'s, you know how $H$ acts on any $\Psi$. Since a Hilbert space naturally comes with an inner product, i.e.
a map $$\omega:|\Psi\rangle\times|\Phi\rangle\mapsto\omega(|\Psi\rangle,|\Phi\rangle)\equiv\langle\Psi|\Phi\rangle\in\mathbb{C},\ \ \mbox{satisfying}\ \ \langle\Psi|\Psi\rangle>0\ \ \forall\ \ |\Psi\rangle\ne 0,$$ you can define a new map $$\omega_H:\Psi\mapsto E_\Psi,\ \ \mbox{where}\ \ E_\Psi:=\omega_H[\Psi],$$ $$\omega_H[\Psi]:=\omega(|\Psi\rangle,H|\Psi\rangle)\equiv\langle\Psi| H|\Psi\rangle.$$ Compare the lines above with the classical case. Here $E_\Psi=\ ...=\langle\Psi| H|\Psi\rangle$ is then called the expectation value of the Hamiltonian in the physical state. It is the energy value associated with $\Psi$, which is real due to hermiticity of the Hamiltonian. Also, as in the classical case, the time evolution of any state $\Psi$ (resp. state vector $|\Psi\rangle$) is determined by the observable $H$, an operator in the QM case. And as stated above, exactly this $H$, together with the state/configuration $\Psi$, gives you the energy value $E_\Psi$ associated with $\Psi$. This relation of time and energy is by construction: the Schrödinger equation is an axiom (but a natural one, see conservation of probability), which relates time evolution and the Hamiltonian. Now, if the time dependence of the state is governed by the Hamiltonian (whatever it might look like in your scenario), then so is the time dependence of $\langle\Psi| H|\Psi\rangle$. And if $\ i\hbar\frac{\partial}{\partial t}|\Psi\rangle=H|\Psi\rangle\ $ is true for all vectors in your Hilbert space, i.e. if $i\hbar\frac{\partial}{\partial t}=H$ holds as an operator equation, then these two really are just the same operator. If you ask for an interpretation of this, then I'd suggest you hold on to the quantum mechanical relation between frequency and energy. Regarding the equation which determines time evolution, quantum mechanics is much easier than classical mechanics in a sense, especially if you come with some Lie group theory intuition in your backpack.

The classical example of something where the Hamiltonian is different from the total energy is a particle in an accelerating constraint, like a bead sliding on a rotating wire. I will use a different system: a particle of mass m in a long, uniformly accelerating box. If the box is accelerating with acceleration g, then in the comoving system there is a fictitious force on the particle which is derived from a fictitious potential. The comoving Hamiltonian description is the same as for a particle in gravity, so that $$ H = {p^2\over 2m} + mg x,$$ which is valid for positive x, while the potential is infinite for negative x. Viewing the same particle in the non-accelerated frame, the total energy is just the kinetic energy, and the wall restricts the particle from entering the region $x<{gt^2\over 2}$. The comoving Hamiltonian is not the energy of the particle, which increases without bound with time, but it gives the dynamical law for the comoving-frame wavefunction. The wavefunction of the particle will (if it can radiate) settle down to the ground state of the moving Hamiltonian. The particle will be in a bound profile against the wall, where the binding is by a linear potential. For the inertial frame, this profile will be accelerating steadily, and its energy does not settle down. The relation between the two is given by boosting the wavefunction by an amount which depends on time. For systems which are not constrained, the Hamiltonian is always the total energy.
This is also true for systems where the constraints do not add energy to the system. The Hamiltonian for systems which add energy is usually explicitly time dependent, but not so in the case where the dynamics is time independent from the point of view of the particle. Mathematically, in such a system you have a nontrivial time translation invariance which is a symmetry, and in the accelerated particle case, this time translation symmetry mixes up inertial frame time translation and boosts.
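To make the accepted answer's point concrete, here is a minimal numerical sketch, not taken from either answer: for a time-independent H (a 1D harmonic oscillator on a grid, with ℏ = m = ω = 1 and grid parameters chosen arbitrarily), the expectation value ⟨ψ|H|ψ⟩ is conserved under iℏ∂_t|ψ⟩ = H|ψ⟩, while other observables such as ⟨x⟩ keep evolving.

```python
import numpy as np

# Illustrative sketch (assumed grid and time-step values): evolve a wave packet
# under i d|psi>/dt = H|psi| for a 1D harmonic oscillator, hbar = m = omega = 1,
# and check that <psi|H|psi> stays constant while <x> oscillates.

N, L = 400, 20.0                      # grid points, box size (assumed values)
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Finite-difference Hamiltonian: H = -(1/2) d^2/dx^2 + (1/2) x^2
lap = (np.diag(np.full(N - 1, 1.0), -1) - 2 * np.eye(N)
       + np.diag(np.full(N - 1, 1.0), 1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)

# Initial state: a displaced Gaussian (coherent-like state), normalized
psi = np.exp(-(x - 1.5)**2 / 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Propagate with the unitary U = exp(-i H dt), built from the eigenbasis of H
E, V = np.linalg.eigh(H)
dt, steps = 0.05, 200
U = V @ np.diag(np.exp(-1j * E * dt)) @ V.conj().T

for n in range(steps):
    energy = np.real(np.vdot(psi, H @ psi)) * dx   # <psi|H|psi>
    mean_x = np.real(np.vdot(psi, x * psi)) * dx   # <psi|x|psi>
    if n % 50 == 0:
        print(f"t = {n*dt:5.2f}   <H> = {energy:.6f}   <x> = {mean_x:+.3f}")
    psi = U @ psi
```

The printed ⟨H⟩ stays fixed to machine precision because the propagator commutes with the time-independent H, which is exactly the conservation statement made in the first answer.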
Publisher of the most cited open-access journals in their fields. Original Research ARTICLE Front. Phys., 01 October 2013 | Umbral Vade Mecum Thomas L. Curtright1 and Cosmas K. Zachos2* • 1Department of Physics, University of Miami, Coral Gables, FL, USA • 2High Energy Physics Division, Argonne National Laboratory, Argonne, IL, USA In recent years the umbral calculus has emerged from the shadows to provide an elegant correspondence framework that automatically gives systematic solutions of ubiquitous difference equations—discretized versions of the differential cornerstones appearing in most areas of physics and engineering—as maps of well-known continuous functions. This correspondence deftly sidesteps the use of more traditional methods to solve these difference equations. The umbral framework is discussed and illustrated here, with special attention given to umbral counterparts of the Airy, Kummer, and Whittaker equations, and to umbral maps of solitons for the Sine-Gordon, Korteweg–de Vries, and Toda systems. 1. Introduction Robust theoretical arguments have established an anticipation of a fundamental minimum measurable length in Nature, of order LPlanckGN/c3=1.6162×1035m, the corresponding mass and time being MPlanckc/GN=2.1765×108kg and LPlanck/c=5.3911×1044s. The essence of such arguments is the following (in relativistic quantum geometrical units, wherein ħ, c, and MPlanck are all unity). In a system or process characterized by energy E, no lengths smaller than L can be measured, where L is the larger of either the Schwarzschild horizon radius of the system (~ E) or, for energies smaller than the Planck mass, the Compton wavelength of the aggregate process (~1/E). Since the minimum of max(E, 1/E) lies at the Planck mass (E = 1), the smallest measurable distance is widely believed to be of order LPlanck. Thus, continuum laws in Nature are expected to be deformed, in principle, by modifications at that minimum length scale. Remarkably, however, if a fundamental spacetime lattice of spacing a = O(LPLanck) is the structure that underlies conventional continuum physics, then it turns out that continuous symmetries, such as Galilei or Lorentz invariance, can actually survive unbroken under such a deformation into discreteness, in a non-local, umbral realization (10, 11, 18). Umbral calculus, pioneered by Rota and associates in a combinatorial context (4, 16), specifies, in principle, how functions of discrete variables in infinite domains provide systematic “shadows” of their familiar continuum limit properties. By preserving Leibniz's chain rule, and by providing a discrete counterpart of the Heisenberg algebra, observables built from difference operators shadow the Lie algebras of the standard differential operators of continuum physics. [For a review relevant to physics, see (13).] Nevertheless, while the continuous symmetries and Lie algebras of umbrally deformed systems might remain identical to their continuum limit, the functions of observables themselves are modified, in general, and often drastically so. Traditionally, the controlling continuum differential equations of physics are first discretized (2, 5, 18), and then those difference equations are solved to yield umbral deformations of the continuum solutions. But quite often, routine methods to solve such discrete equations become unwieldy, if not intractable. On the other hand, some technical difficulties may be bypassed by directly discretizing the continuum solutions. 
That is, through appropriate umbral deformation of the continuum solutions, the corresponding discrete difference equations may be automatically solved. However, as illustrated below for the simplest cases of oscillations and wave propagation, the resulting umbral modifications may present some subtleties when it comes to extracting the underlying physics. In (21) the linearity of the umbral deformation functional was exploited, together with the fact that the umbral image of an exponential is also an exponential, albeit with interesting modifications, to discretize well-behaved functions occurring in solutions of physical differential equations through their Fourier expansion. This discrete shadowing of the Fourier representation functional should thus be of utility in inferring wave disturbance propagation in discrete spacetime lattices. We continue to pursue this idea here with some explicit examples. We do this in conjunction with the umbral deformation of power series, especially those for hypergeometric functions. We compare both Fourier and power series methods in some detail to gain further insight into the umbral framework. Overall, we utilize essentially all aspects of the elegant umbral calculus to provide systematic solutions of discretized cornerstone differential equations that are ubiquitous in most areas of physics and engineering. We pay particular attention to the umbral counterparts of the Airy, Kummer, and Whittaker equations, and their solutions, and to the umbral maps of solitons for the Sine-Gordon, Korteweg–de Vries, and Toda systems. 2. Overview of the Umbral Correspondence For simplicity, consider discrete time, t = 0, a, 2a, …, na, …. Without loss of generality, broadly following the summary review of (13), consider an umbral deformation defined by the forward difference discretization of ∂t, and whence of the elementary oscillation Equation, (t) = −x(t), namely, Now consider the solutions of this second-order difference equation. Of course, (2) can be easily solved directly by the textbook Fourier-component Ansatz x(t) ∝ rt, (2), to yield (1 ± ia)t/a. However, to illustrate instead the powerful systematics of umbral calculus (13, 18), we produce and study the solution in that framework. The umbral framework considers associative chains of operators, generalizing ordinary continuum functions by ultimately acting on a translationally-invariant “vacuum” 1, after manipulations to move shift operators to the right and have them absorbed by that vacuum, which we indicate by T · 1 = 1. Using the standard Lagrange-Boole shift generator Teat, so that Tf(t)1=f(t+a)T1=f(t+a),(3) the umbral deformation is then   ttT1,     (5) so that [t]0 = 1, and, for n > 0, [0]n = 0. The [t]n are called “basic polynomials”1 for positive n (5, 13, 16), and they are eigenfunctions of tT−1Δ. A linear combination of monomials (a power series representation of a function) will thus transform umbrally to the same linear combination of basic polynomials, with the same series coefficients, f(t) ↦ f(tT−1). All observables in the discretized world are thus such deformation maps of the continuum observables, and evaluation of their direct functional form is in order. Below, we will be concluding the correspondence by casually eliminating translation operators at the very end, first through operating on the vacuum and then leaving it implicit, so that F(t) ≡ f(tT−1) · 1. 
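As a quick numerical aside, the solution quoted above, x(t) ∝ (1 ± ia)^{t/a}, can be checked directly against the forward-difference oscillator equation; in the minimal sketch below the lattice spacing is an arbitrary illustrative value.

```python
# Sketch: check numerically that x(t) = (1 + i a)^(t/a) satisfies the
# forward-difference oscillator equation  Delta^2 x(t) = -x(t),  with
# Delta f(t) = (f(t + a) - f(t)) / a.  The spacing a is an arbitrary choice.
a = 0.3

def x(t):
    return (1 + 1j * a) ** (t / a)

def delta(f, t):
    return (f(t + a) - f(t)) / a

def delta2(f, t):
    return (delta(f, t + a) - delta(f, t)) / a

for t in (0.0, a, 5 * a, 20 * a):
    print(f"t = {t:5.2f}   |Delta^2 x + x| = {abs(delta2(x, t) + x(t)):.2e}")
```

The residual vanishes up to rounding, since x(t + a) = (1 + ia) x(t) makes the second forward difference exactly (ia)² x(t) = −a² x(t)/a².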
The umbral deformation relies on the respective umbral entities obeying operator combinatorics identical to their continuum limit (a → 0), by virtue of obeying the same Heisenberg commutation relation (18), Thus, e.g., by shift invariance, TΔT−1 = Δ, [t,tn]=ntn1  [Δ,[t]nTn]=n[t]n1T1n,(8) so that, ultimately, Δ[t]n = n[t]n − 1. For commutators of associative operators, the umbrally deformed Leibniz rule holds (10, 11), ultimately to be dotted onto 1. Formally, the umbral deformation reflects (unitary) equivalences of the unitary irreducible representation of the Heisenberg-Weyl group, provided for by the Stone-von Neumann theorem. Here, these equivalences reflect the alternate consistent realizations of all continuum physics structures through systematic maps such as the one we have chosen. It is worth stressing that the representations of this algebraic relation on the real or complex number fields can only be infinite dimensional, that is, the lattices covered must be infinite. Now note that, in this case the basic polynomials [t]n are just scaled falling factorials, for n ≥ 0, i.e., generalized Pochhammer symbols, which may be expressed in various ways: [t]n(tT1)n·1=t(ta)(t(n1)a)=an(t/a)!(t/an)!      =anΓ(ta+1)Γ(tan+1)=(a)nΓ(nta)Γ(ta) .(10) Thus [−t]n = (−)n[t + a(n − 1)]n. Furthermore, [an]n = ann!; [t]m[tam]nm = [t]n for 0 ≤ mn; and for integers 0 ≤ m < n, [am]n = 0. Thus, Δm[t]n = [an]m[t]nm/am. Negative umbral powers, by contrast, are the inverse of rising factorials, instead: [ 1t ]n=(T1t)n1=1(t+a)(t+2a)(t+na)       =an(t/a)!(t/a+n)!       =anΓ(ta+1)Γ(ta+n+1)=(a)nΓ(tan)Γ(ta) .(11) These correspond to the negative eigenvalues of tT−1Δ. The standard umbral exponential is then natural to define as (6, 11, 16)2 E(λt,λa)eλ[t]eλtT11=n=0λnn![t]n             =n=0(λa)n(t/an)=(1+λa)t/a,(12) the compound interest formula, with the proper continuum limit (a → 0). N.B. There is always a 0 at λ = −1/a. Evidently, since Δ · 1 = 0, and, as already indicated, one could have solved this equation directly3 to produce the above Et, λa). Serviceably, the umbral exponential E happens to be an ordinary exponential, and it actually serves as the generating function of the umbral basic polynomials, Conversely, then, this construction may be reversed, by first solving directly for the umbral eigenfunction of Δ, and effectively defining the umbral basic polynomials through the above parametric derivatives, in situations where these might be more involved, as in the next section. As a consequence of linearity, the umbral deformation of a power series representation of a function is given formally by This may not always be easy to evaluate, but, in fact, the same argument may be applied to linear combinations of exponentials, and hence the entire Fourier representation functional, to obtain F(t)=dτ f(τ)dω2πeiωτ(1+iωa)t/a      =(1+aτ)t/af(τ)|τ=0 .(17) The rightmost equation follows by converting iω into ∂τ derivatives and integrating by parts away from the resulting delta function. Naturally, it identifies with Equation (16) by the (Fourier) identity f(∂x)g(x)|x = 0 = g(∂x)f(x)|x = 0. It is up to individual ingenuity to utilize the form best suited to the particular application at hand. It is also straightforward to check that this umbral transform functional yields tfΔF ,(18) and to evaluate the umbral transform of the Dirac delta function, which amounts to a cardinal sine or sampling function, or to evaluate umbral transforms of rational functions, such as to obtain an incomplete Gamma function (1), and so on. 
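As an aside before continuing, the identities just stated are easy to confirm numerically; in this minimal sketch the values of a, λ, t, and n are arbitrary illustrative choices.

```python
from math import factorial

# Sketch (arbitrary illustrative values of a, lam, t, n):
#   (i)   Delta [t]_n = n [t]_{n-1},   [t]_n = t (t - a) ... (t - (n-1) a)
#   (ii)  Delta E = lam * E,           E(t) = (1 + lam * a)^(t / a)
#   (iii) sum_k lam^k [t]_k / k!  ->  E(t)   (umbral exponential as generating function)
a, lam, t, n = 0.5, 0.2, 2.3, 4

def basic(t, n):                       # basic polynomial [t]_n
    out = 1.0
    for k in range(n):
        out *= t - k * a
    return out

def delta(f, t):                       # forward difference operator
    return (f(t + a) - f(t)) / a

E = lambda s: (1 + lam * a) ** (s / a)

print(delta(lambda s: basic(s, n), t), "=?", n * basic(t, n - 1))     # (i)
print(delta(E, t), "=?", lam * E(t))                                  # (ii)
series = sum(lam ** k * basic(t, k) / factorial(k) for k in range(40))
print(series, "=?", E(t))                                             # (iii)
```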
Note how the last of these is distinctly, if subtly, different from the umbral transform of negative powers, as given in (11). In practical applications, evaluation of umbral transforms of arbitrary functions of observables may be more direct, at the level of solutions, through this deforming functional, Equation (17). For example, one may evaluate in this way the umbral correspondents of trigonometric functions, Sin[t]ei[t]ei[t]2i, Cos[t]ei[t]+ei[t]2,(21) so that ΔSin[t]=Cos[t], ΔCos[t]=Sin[t].(22) As an illustration, consider phase-space rotations of the oscillator. The umbral deformation of phase-space rotations, x˙=p, p˙=x  ΔX(t)=P(t), ΔP(t)=X(t),(23) readily yields, by directly deforming continuum solutions, the oscillatory solutions, In view of (14), and also the umbral sines and cosines in (24) are seen to amount to discrete phase-space spirals, with a frequency decreased from the continuum value (i.e., 1) to So the frequency has become, effectively, the inverse of the cardinal tangent function.4 Note that the umbrally conserved quantity is, such that Δ=0, with the proper energy as the continuum limit. 3. Reduction from Second-Order Differences to Single Term Recursions In this section and the following, to conform to prevalent conventions, the umbral variable will be denoted by x, instead of t. In this case there is a natural way to think of the umbral correspondence that draws on familiar quantum mechanics language (16): The discrete difference equations begin as operator statements, for operator xs and Ts, but are then reduced to equations involving classical-valued functions just by taking the matrix element 〈x|…|vac〉 where |vac〉 is translationally invariant. The overall x-independent non-zero constant 〈x|vac〉 is then ignored. To be specific, consider Whittaker's equation (1) for μ = 1/2, This umbrally maps to the operator statement Considering either y(xT−1)·1 ≡ Y(x), or else 〈x|y(xT−1)|vac〉 = Y(x) 〈x|vac〉, this operator statement reduces to a classical difference equation, Before using umbral mapping to convert continuous solutions of (29) into discrete solutions (14, 15) of (31), here we note a simplification of the latter equation upon choosing a = 2, which amounts to setting the scale of x. With this choice (31) collapses to a mere one-term recursion. Shifting xx − 2 this is Despite being a first-order difference equation, however, the solutions of this equation still involve two independent “constants of summation” even for x restricted to only integer values, because the choice a = 2 has decoupled adjacent vertical strips of unit width on the complex x plane. To be explicit, for integer x > 0, forward iteration gives (2) Y(2k+1)=2k(j=1kj2kj)Y(1)   andY(2k+2)=2k(j=1kjkj)Y(2)    for integer k0,(33) with Y(1) and Y(2) the two independent constants that determine values of Y for all larger odd and even integer points, respectively. Or, if generic x is contemplated, the Equation (32) has elementary solutions, for arbitrary complex constants C1 and C2, given by Y(x)=2x/2Γ(x2κ)Γ(x2)C1+(2)x/2Γ(x2)Γ(1x2+κ) C2(34) In the second expression, we have used Γ(z)Γ(1 − z) = π/sin πz. Note the C2 part of this elementary solution differs from the C1 part just through multiplication by a particular complex function with period 2. This is typical of solutions to difference equations since any such periodic factors are transparent to Δ, as mentioned in an earlier footnote (12). 
As expected, even for generic x the constants C1 and C2 may be determined given Y(x) at two judiciously chosen points, not necessarily differing by an integer. For example, if 0 < κ < 1, C1=Γ(1+κ)21+κY(2+2κ) ,   C2=πsinπκC112Γ(κ)Y(2).(36) Moreover, poles and zeros of the solution are manifest either from the Γ functions in (34), or else from continued product representations such as (33). For the latter, either forward or backward iterations of the first-order difference Equation (32) may be used. Schematically, or alternatively, Although both terms in (34) have zeroes, the C1 term also has poles while the C2 term has none—it is an entire function of x—and it is complex for any nonzero choice of C2. Of course, since the Equation (32) is linear, real and imaginary parts may be taken as separate real solutions. All this is evident in the following plots for various selected integer κ. 2x/2Γ(12xκ)Γ(12x) for κ = 1, 2, and 3 in red, blue, and green. 2x/2cosπ(12x)Γ(12x)Γ(112x+κ) for κ = 1, 2, and 3 in red, blue, and green. Collapse to a mere one-term recursion also occurs for an inverse-square potential, For μa2 = 1, which amounts to setting the scale of the energy of the solution, the umbral version of this equation reduces to Y(x)=12(1+κa2x(x+a))Y(x+a)      =12(1+aκxaκa+x)Y(x+a).(40) That is to say, Y(x+a)=2(1+xa)xa(xa+1+14κ2)(xa+114κ2)Y(x) .(41) Elementary solutions for generic x, for arbitrary complex constants C1 and C2, are given by Again, the C2 part of this elementary solution differs from the C1 part just through multiplication by a particular complex function with period a. And again, poles and zeros of these and other solutions are manifest either from those of the Γ functions, or else from a continued product form, e.g. It is not surprising that (29) and (39) share the privilege to become only first-order difference equations for specific choices of a, as in (32) and (41), because they are both special cases of Whittaker's differential equation, as discussed in the next section. No other linear second-order ODEs lead to umbral equations with this property. 4. Discretization Through Hypergeometric Recursion In this section we discuss several examples using umbral transform methods to convert solutions of continuum differential equations directly into solutions of the corresponding discretized equations. We use both Fourier and power series umbral transforms. As an explicit illustration of the umbral transform functional (17), inserting the Fourier representation of the Airy function (1) yields AiryAi(x)  UmAiryAi(x,a)Re(1π0+e13ik3(1+ika)xadk).(45) This integral is expressed in terms of hypergeometric functions and evaluated numerically in Appendix A. Likewise, gaussians also map to hypergeometric functions, as may be obtained by formal series manipulations: ex2G(x,a)         n=0()n[x]2nn!         =n=01n!(1)na2nΓ(xa+1)Γ(xa2n+1)(46) where the reflection and duplication formulas were used to write While the series (47) actually has zero radius of convergence, it is Borel summable, and the resulting regularized hypergeometric function is well-defined. See Appendix B for some related numerics. For another example drawn from the familiar repertoire of continuum physics, consider the confluent hypergeometric equation of Kummer (A&S 13.1.1): x y+(βx)yαy=0,(50) whose regular solution at x = 0, expressed in various dialects, is with series and integral representations 1F1(α;β;x)=n=0Γ(α+n)Γ(α)Γ(β)Γ(β+n)xnn!                 
=Γ(β)Γ(α)Γ(βα)01exssα1(1s)βα1ds                 =1+αβx+12α(α+1)β(β+1)x2+16α(α+1)(α+2)β(β+1)(β+2)x3+O(x4).(52) The second, independent solution of (50), with branch point at x = 0, is given by Tricomi's confluent hypergeometric function (1), sometimes known as HypergeometricU: U(α,β,x)=πsinπβ (M(α,β,x)Γ(1+αβ)Γ(β)x1βM(1+αβ,2β,x)Γ(α)Γ(2β)).(53) Invoking the umbral calculus for x, either of these confluent hypergeometric functions can be mapped onto their umbral counterparts using 1F1(α;β;x)  2F1(α,xa;β;a),(54) where 2F1 is the well-known Gauss hypergeometric function (1). This map from 1F1 to 2F1 follows from the basic monomial umbral map, and from the series (52). When combined, these give the well-known series representation of 2F1. Next, reconsider the one-dimensional Coulomb problem defined by Whittaker's equation for general μ (1): Since κ and μ are both arbitrary, this also encompasses the inverse-square potential, (39). Exact solutions of this differential equation are whittakerW(κ,μ,x)=xμ+1/2ex/2            (Γ(2μ)Γ(μκ+12) 1F1(μκ+12;2μ+1;x)+Γ(2μ)Γ(μκ+12)x2μ1F1(μκ+12;2μ+1;x)).(59) Umbral versions of these solutions are complicated by the exponential and overall power factors in the classical relations between the 1F1's and the Whittaker functions, but this complication is manageable. (In part this is because in the umbral calculus there are no ordering ambiguities (20).) To obtain the umbral version of the Whittaker functions, we begin by evaluating e12xT11F1(α;β;xT1)·1=m=0n=0(12)mΓ(α+n)Γ(α)Γ(β+n)Γ(β)[ x ]m+nm!n!                                       =(1a2)xa 2F1(α,xa;β;2aa2),(60) where we have performed the sum over m first, to obtain The sum over n then gives the Gauss hypergeometric function in (60). Next, to deal with the umbral deformations of the Whittaker functions, we need to use the continuation of (10) and (11) to an arbitrary power of xT−1, namely, This continuation leads to the following: (xT1)γe12xT11F1(α;β;xT1)·1    =aγΓ(xa+1)Γ(xaγ+1)Tγe12xT11F1(α;β;xT1)·1    =aγΓ(xa+1)Γ(xaγ+1)e12(xγa)T11F1(α;β;(xγa)T1)·1.(63) Thus we obtain the umbral map xγe12x1F1(α;β;x)Γ(xa+1)Γ(xaγ+1) aγ(1a2)xaγ 2F1(α,γxa;β;2aa2).(64) Finally then, specializing to the relevant α, β, and γ, we find the umbral Whittaker functions. In particular, whittakerM(κ,μ,x)Γ(xa+1)Γ(xaμ+12)aμ+1/2(1a2)xaμ12 2F1(μ+12κ,μ+12xa;2μ+1;2aa2).(65) This result for general a exhibits what is special about the choice a = 2, as exploited in the previous section. To realize that choice from (65) requires taking a limit a ↗ 2, hence it requires the asymptotic behavior of the Gauss hypergeometric function (1): Now with sufficient care, a = 2 solutions can be coaxed from the umbral version of whittakerM in (65), and/or the corresponding umbral counterpart of whittakerW, upon taking lima↗2 and making use of (66). Moreover, in principle the umbral correspondents of both Whittaker functions could be used to obtain from this limit a solution with two arbitrary constants. On the other hand, for a = 2, the umbral equation corresponding to (56) again reduces to a one-term recursion, namely, For generic x, solutions for arbitrary complex constants C1 and C2 are then given by       =2x/2Γ(1+x2)Γ(x2κ)Γ(x2+12+μ)Γ(x2+12μ)(C1+1π2 C2sin(πx2)sinπ(x2κ)),(69) which agrees with (34) when μ = 1/2, of course. As in that previous special case, the C2 part of (68) differs from the C1 part just through multiplication by a particular complex function with period 2 (12). 
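The map (54) can also be sanity-checked numerically; the sketch below assumes mpmath is available, the parameter values are arbitrary illustrative choices, and the check is simply that the umbral image 2F1(α, −x/a; β; −a) approaches the continuum Kummer function 1F1(α; β; x) as the lattice spacing a shrinks.

```python
import mpmath as mp

# Sketch: the umbral map sends 1F1(alpha; beta; x) to 2F1(alpha, -x/a; beta; -a).
# As the lattice spacing a -> 0, the umbral image should recover the continuum
# Kummer function.  alpha, beta, x below are arbitrary illustrative choices.
alpha, beta, x = mp.mpf('0.75'), mp.mpf('1.5'), mp.mpf('2.0')

print("continuum 1F1:", mp.nstr(mp.hyp1f1(alpha, beta, x), 10))
for a in ('0.5', '0.1', '0.01'):
    a = mp.mpf(a)
    umbral = mp.hyp2f1(alpha, -x / a, beta, -a)
    print(f"a = {float(a):5.2f}   umbral 2F1:", mp.nstr(umbral, 10))
```

Term by term, (−x/a)_k (−a)^k = x(x − a)⋯(x − (k − 1)a) reduces to x^k as a → 0, which is exactly how the 2F1 series collapses back to the 1F1 series.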
We graph some examples to show the differences between the Whittaker functions and their umbral counterparts, for a = 1. whittakerM(κ, 1/2, x) for κ = 1, 2, and 3 in red, blue, and green. Umbral whittakerM(κ, 1/2, x) for a = 1, and for κ = 1, 2, and 3 in red, blue, and green. The examples above are specific illustrations of combinatorics that may be summarized in a few umbral hypergeometric mapping lemmata, the simplest being Lemma 1: pFq(α1,,αp;β1,,βq;x)p+1Fq(α1,,αp,x/a;β1,,βq;a) ,(70) where the series representation of the generalized hypergeometric function pFq is5 A proof of (70) follows from formal manipulations of these series. The umbral version of a more general class of functions is obtained by replacing xxT−1 in functions of xk for some fixed positive integer k. Thus, again for hypergeometric functions, we have Lemma 2: pFq(α1,,αp;β1,,βq;xk)p+kFq( α1,,αp,1k(xa),1k(1xa),, 1k(k1xa);β1,,βq;(ak)k ).(72) And again, a proof follows from formal series expansions. Multiplication by exponentials produces only minor modifications of these general results, as was discussed above in the context of Whittaker functions, namely, Lemma 3: eλxpFq(α1,,αp;β1,,βq;xk)(1+aλ)xap+kFq( α1,,αp,1k(xa),1k(1xa),, 1k(k1xa);β1,,βq;(ak1+aλ)k ).(73) In addition, multiplication by an overall power of x gives Lemma 4: xγeλxpFq(α1,,αp;β1,,βq;xk)Γ(xa+1)aγ(1+aλ)xaγΓ(xaγ+1) p+kFq( α1,,αp,γxak,1+γxak,,k1+γxak; β1,,βq;(ak1+aλ)k ).(74) 5. Wave Propagation Given the umbral features of discrete time and space equations discussed above, separately, it is natural to combine the two. For example, the umbral version of simple plane waves in 1+1 spacetime would obey an equation of the type (6, 11, 12), (Δx2Δt2) F=0 ,(75) on a time-lattice with spacing a and a space-lattice with spacing b, not necessarily such that b = a in all spacetime regions. For generic frequency, wavenumber and velocity, the basic solutions are For right-moving waves, say, these have phase velocity v(ω,k)=ωk aarcsin(b)barcsin(a).(77) Thus, the effective index of refraction in the discrete medium is (b arcsin(a))/(a arcsin(b)), i.e., modified from 1. Small inhomogeneities of a and b in the fabric of spacetime over large regions could therefore yield interesting effects. Technically, a more challenging application of umbral methods involves nonlinear, solitonic phenomena (21), such as the one-soliton solution of the continuum Sine-Gordon equation, (x2t2)f(x,t)=sin(f(x,t)) ,  fSG(x,t)=4arctan(mexvt1v2).(78) The corresponding umbral deformation of the PDE itself would now also involve a deformed potential sin(f(xT−1x, tT−1t))·1. But rather than tackling this difficult nonlinear difference equation, one may instead use the umbral transform (17) to infer that fSG(x, t) maps to The continuum Korteweg–de Vries soliton is likewise mapped: Closed-form evaluations of these Fourier integrals are not available, but the physical effects of the discretization could be investigated numerically, and compared to the Lax pair integrability machinery of (13), or to the results on a variety of discrete KdVs in (17), or to other studies (8, 9). However, a more accessible example of umbral effects on solitons may be found in the original Toda lattice model (19). For this model the spatial variable is already discrete, usually with spacing b = 1 so x = n is an integer, while the time t is continuous. The equations of motion in that case are for integer n. Though x = n is discrete, nevertheless there are exact multi-soliton solutions valid for all continuous t, as is well-known. 
Specific one-soliton Toda solutions are given for constant α, β, γ, and q0 by provided that So the soliton's velocity is just v=±2βsinh(β2). While obtained only for discrete x = n, for plotting purposes q(n, t) may be interpolated for any x (see graph below). To carry out the complete umbral deformation of this system, it is then only necessary to discretize t in the equations of motion (81). Consider what effects this approach to discrete time has on the specified one-soliton solutions. To that end, expand the exact solutions in (82) as series, Upon umbralizing t, the one-soliton solutions then map as and these are guaranteed to give solutions to the umbral operator equations of motion, Δp(n,tT1)1a(T1)p(n,tT1)                   =(e(q(n+1,tT1)q(n,tT1))e(q(n,tT1)q(n1,tT1))),(88) upon projecting onto a translationally invariant “vacuum” (i.e., Q(n, t) ≡ q(n, tT−1) · 1). Now, for integer time steps, t/a = m, consider the series at hand: S(m,c,z)=k=1zkk(ekβ1)(1+ck)m             =ln(1z1zeβ)+j=1mcj(mj)R(j,z),(89) where c = γa, z = −αe−βn, and where for j > 0, R(j,z)=k=0(ekβ1)zkkj1         Φ(eβz,1j,0)Φ(z,1j,0).(90) Fortunately, for positive integer t/a, we only need the Lerch transcendent function, for those cases where the sums are expressible as elementary functions. For example, The ln(…) term on the RHS of (89) then reproduces the specified classical one-soliton solutions at t = 0, while the remaining terms give umbral modifications for t ≠ 0. Altogether then, we have These umbral results are compared to some time-continuum soliton profiles for t/a = 0, 1, 2, 3, and 4 in the following Figure (with q0 = 0, α = 1 = β, and γ = 2 sinh(1/2) = 1.042). Toda soliton profiles q interpolated for all x ∈ [−5, 5] at integer time slices superimposed with their time umbral maps Q (thicker curves) for a = 1. Thus, the umbral-mapped solutions no longer evolve just by translating the profile shape. Rather, they develop oscillations about the classical fronts that dramatically increase with time, that evince not only dispersion but also generation of harmonics, and that, strictly speaking, disqualify use of the term soliton for their description. Be that as it may, this model is referred to in some studies as integrable (8, 9). These umbral effects on wave propagation evoke scattering and diffraction by crystals. But here the “crystal” is spacetime itself. It is tempting to speculate based on this analogy. In particular, were a well-formed wave packet to pass through a localized region of crystalline spacetime, with sufficiently large lattice spacings, the packet could undergo dramatic deformations in shape, wavelength, and frequency—far greater than and very different from what would be expected just from the dispersion of a free packet propagating through continuous space and time. 6. Concluding Remarks We have emphasized how the umbral calculus has visibly emerged to provide an elegant correspondence framework that automatically gives solutions of ubiquitous difference equations as maps of well-known continuous functions. This correspondence systematically sidesteps the use of more traditional methods to solve these difference equations. We have used the umbral calculus framework to provide solutions to discretized versions of several differential equations that are widespread building-blocks in many if not all areas of physics and engineering, thereby avoiding the rather unwieldy frontal assaults often engaged to solve such discrete equations directly. 
We have paid special attention to the Airy, Kummer, and Whittaker equations, and illustrated several basic principles that transform their continuum solutions to umbral versions through the use of hypergeometric function maps. The continuum limits thereof are then manifest. Finally, we have applied the solution-mapping technique to single solitons of the Sine-Gordon, Korteweg–de Vries, and Toda systems, and we have noted how their umbral counterparts—particular solutions of corresponding discretized equations—evince dispersion and other non-solitonic behavior, in general. Such corrections to the continuum result may end up revealing discrete spacetime structure in astrophysical wave propagation settings. We expect to witness several applications of the framework discussed and illustrated here. Conflict of Interest Statement This work was supported in part by NSF Award PHY-1214521; and in part, the submitted manuscript has been created by UChicago Argonne, LLC, Operator of Argonne National Laboratory. Argonne, a U.S. Department of Energy Office of Science laboratory, is operated under Contract No. DE-AC02-06CH11357. The U.S. Government retains for itself, and others acting on its behalf, a paid-up nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government. Thomas L. Curtright was also supported in part by a University of Miami Cooper Fellowship. 1. ^We stress that the notation [t]n is shorthand for the product t(ta)…(t − (n − 1)a). It is not just the nth power of [t] = t. 2. ^Again we stress that eλ[t] is a short-hand notation, and not just the usual exponential of λ[t] = λt. 3. ^N.B. There is an infinity of “non-umbral” extensions of the Et, λa) solution (12): Multiplying the umbral exponential by an arbitrary periodic function g(t + a) = g(t) will pass undetected through Δ, and thus will also yield an eigenfunction of Δ. Often, such extra solutions have either a vanishing continuum limit, or else an ill-defined one. 4. ^That is, for Θ ≡ arctan(a), the spacing of the zeros, period, etc, are scaled up by a factor of tanc(Θ)tan(Θ)Θ1. For complete periodicity on the time lattice, one further needs return to the origin in an integral number of N steps, thus a solution of N = 2πn/arctan a. Example: For a = 1, the solutions' radius spirals out as 2t/2, while ω = π/4, and the period is τ = 8. 5. ^Recall results from using the ratio test to determine the radius of convergence for the pFq1, …, αp1,…,βq;x) series: If p < q + 1 then the ratio of coefficients tends to zero. This implies that the series converges for any finite value of x. If p = q + 1 then the ratio of coefficients tends to one, hence the series converges for |x| < 1 and diverges for |x| > 1. If p > q + 1 then the ratio of coefficients grows without bound. The series is then divergent or asymptotic, and is a symbolic shorthand for the solution to a differential equation. 1. Abramowitz M, Stegun I. Handbook of Mathematical Functions., National Bureau of Standards. AMS 55, (1964). 2. Bender C, Orszag S. Advanced Mathematical Methods for Scientists and Engineers, McGraw-Hill (1978). 3. Cholewinski F, Reneke J. Electron J Diff Eq. (2003) 2003:1–64. 4. Di Bucchianico A, Loeb D. A selected survey of umbral calculus. Electron J Combin. (1995) DS3. 5. Dimakis A, Müller-Hoissen F, Striker T. Umbral calculus, discretization, and quantum mechanics on a lattice. J Phys. (1996) A 29:6861–76. 6. 
Floreanini R, Vinet L. Lie symmetries of finite-difference equations. J Math Phys. (1995) 36:7024–42 7. Levi D, Negro J, del Olmo M. Discrete q-derivatives and symmetries of q-difference equations. J Phys. (2004) 37:3459–73. 8. Grammaticos B, Kosmann-Schwarzbach Y, Tamizhmani T. (eds.). Discrete Integrable Systems. In: Lect Notes Phys 644:Springer (2004). doi: 10.1007/b94662. CrossRef Full Text 9. Grammaticos B, Ramani A, Willox R. A sine-Gordon cellular automaton and its exotic solitons. J Phys A Math Theor. (2013) 46:145204. 10. Levi D, Negro J, del Olmo M. Discrete derivatives and symmetries of difference equations. J. Phys. (2001) 34:2023–2030. 11. Levi D, Tempesta P, Winternitz P. Lorentz and galilei invariance on lattices. Phys Rev. (2004a) D69:105011. 12. Levi D, Tempesta P, Winternitz P. Umbral calculus, difference equations and the discrete Schrödinger equation. J Math Phys. (2004b) 45: 4077–4105. 13. Levi D, Winternitz P. Continuous symmetries of difference equations Schiff: Loop groups and discrete KdV equations. J Phys. (2006) A39:R1–R63. 14. López-Sendino J, Negro J, Del Olmo M, Salgado E. “Quantum mechanics and umbral calculus” J. Phys. (2008) 128:012056. doi: 10.1088/1742-6596/128/1/012056 CrossRef Full Text 15. López-Sendino J, Negro J, Del Olmo M. “Discrete coulomb potential” Phys Atom. Nuclei (2010) 73.2:384–90. 16. Rota G-C. Finite Operator Calculus, Academic Press (1975). 17. Schiff J. Loop groups and discrete KdV equations. Nonlinearity (2003) 16:257–75. 18. Smirnov Y, Turbiner A. Lie algebraic discretization of differential equations. Mod Phys Lett. (1995) A10:1795 [Erratum-ibid. A10:3139 (1995)]. 19. Toda M. Theory of Nonlinear Lattices (2nd Edn.) Springer (1989). 20. Ueno K. Umbral calculus and special functions; Hypergeometric series formulas through operator calculus. Adv Math. (1988) 67:174–229; Funkcialaj Ekvacioj (1990) 33:493–518. 21. Zachos CK. Umbral deformations on discrete space-time. Int J Mod Phys. (2008) A23:2005. Appendix A: Umbral Airy Functions Formally, these can be obtained by expressing the Airy functions in terms of hypergeometric functions and then umbral mapping the series. The continuum problem is given by yxy=0, y(x)=C1AiryAi(x)+C2AiryBi(x),(94) AiryAi(x)=132/3Γ(2/3) 0F1(;23;19x3)131/3Γ(1/3) 0F1(;43;19x3),(95) AiryBi(x)=131/6Γ(2/3) 0F1(;23;19x3)+31/6zΓ(1/3) 0F1(;43;19x3).(96) The yY umbral images of these, solving the umbral discrete difference equation (3, 12) are then given by (72) for k = 3. In particular, UmAiryAi(x,a)=132/3Γ(2/3) 3F1(13xa,13(1xa),13(2xa);23;3a3)                       131/3Γ(1/3) 3F1(13xa,13(1xa),13(2xa);43;3a3).(98) Since the number of “numerator parameters” in the hypergeometric function 3F1 exceeds the number of “denominator parameters” by 2, the series expansion is at best asymptotic. However, the series is Borel summable. In this respect, the situation is the same as for the umbral gaussian (see Appendix B). Alternatively, as previously mentioned in the text, using the familiar integral representation of AiryAi(x), the umbral map devolves to that of an exponential. That is to say,   AiryAi(xT1)=12π+exp(13is3+isxT1)ds                        (99) Just as AiryAi(x) is a real function for real x, UmAiryAi(x, a) is a real function for real x and a, After some hand-crafting, the final result may be expressed in terms of just three 2F2 generalized hypergeometric functions. 
To wit, where the hypergeometric functions 2F2 (a, b; c, d; z) appear in the expression as H1(w,z)=Γ(13z)Γ(13+13z) 2F2(13z,13+13z;13,23;13w3),(103) H2(w,z)=Γ(13+13z)Γ(23+13z) 2F2(13+13z,23+13z;23,43;13w3),(104) H3(w,z)=Γ(23+13z)Γ(1+13z) 2F2(23+13z,1+13z;43,53;13w3),(105) and where the coefficients in (102) are C3(w,z)=336cos(12πz12πzsignum(w))636cos(16πz+12πzsignum(w))                +636cos(56πz+12πzsignum(w))+3×323sin(12πz12πzsignum(w))                336cos(12πz12πzsignum(w))+2×323sin(16πz+12zπsignum(w))                +323sin(12πz+12πzsignum(w))+2×323sin(56πz+12πzsignum(w)).(109) While the coefficient functions C0−3 are not pretty, they are comprised of elementary functions, and they are nonsingular functions of z. On the other hand, the hypergeometric functions do have singularities and discontinuities for negative z. However, the net result for UmAiryAi is reasonably well-behaved. We plot UmAiryAi(x, a) for a=0, ±14, ±12, and ±1. UmAiryAi(x, a) for a = ±1, ±1/2, and ±1/4 (red, blue, & green dashed/solid curves, resp.) compared to AiryAi(x) = UmAiryAi(x, 0) (black curve). Appendix B: Umbral Gaussians As discussed in the text, straightforward discretization of the series yields the umbral gaussian map: (NB G(x, a) ≠ G(−x, a).) Now, it is clear that term by term the series (110) reduces back to the continuum gaussian as a → 0. Nonetheless, since the series is asymptotic and not convergent for |a| > 0, it is interesting to see how this limit is obtained from other representations of the hypergeometric function in (111), in particular from using readily available numerical routines to evaluate 2F0 for specific small values of a. Some examples are shown here. G(x, 1/2n) vs. x ∈ [−3,2], for n = 1, 2, and 3, in red, blue, and green, respectively, compared to G(x, 0) = exp(−x2), in black. Mathematicaő code is available online to produce similar graphs, for those interested. It is amusing that Mathematica manipulates the Borel regularized sum to render the 2F0 in question in terms of Tricomi's confluent hypergeometric function U, as discussed above in the context of Kummer's Equation, cf. (53). Thus G can also be expressed in terms of 1F1s. The relevant identities are: Keywords: umbral correspondence, discretization, difference equations, umbral transform, hypergeometric functions Citation: Curtright TL and Zachos CK (2013) Umbral Vade Mecum. Front. Physics 1:15. doi: 10.3389/fphy.2013.00015 Received: 27 June 2013; Paper pending published: 04 September 2013; Accepted: 10 September 2013; Published online: 01 October 2013. Edited by: Manuel Asorey, Universidad de Zaragoza, Spain Reviewed by: Apostolos Vourdas, University of Bradford, UK An Huang, Harvard University, USA Mariano A. Del Olmo, Universidad de Valladolid, Spain *Correspondence: Cosmas K. Zachos, High Energy Physics Division 362, Argonne National Laboratory, Argonne, IL 60439-4815, USA e-mail:
quantum mechanics

Ab initio quantum chemistry methods (http://en.wikipedia.org/wiki/Ab_initio_quantum_chemistry_methods): Ab initio quantum chemistry methods are computational chemistry methods based on quantum chemistry. [1] The term ab initio was first used in quantum chemistry by Robert Parr and coworkers, including David Craig, in a semiempirical study on the excited states of benzene. [2] [3] The background is described by Parr. [4] In its modern meaning ("from first principles of quantum mechanics") the term was used by Chen [5] (when quoting an unpublished 1955 MIT report by Allen and Nesbet), by Roothaan [6] and, in the title of an article, by Allen and Karo, [7] who also clearly define it. Almost always the basis set (which is usually built from the LCAO ansatz) used to solve the Schrödinger equation is not complete, and does not span the Hilbert space associated with ionization and scattering processes (see continuous spectrum for more details).

Density functional theory

Spin (physics) (http://en.wikipedia.org/wiki/Spin_(physics)): In quantum mechanics and particle physics, spin is an intrinsic form of angular momentum carried by elementary particles, composite particles (hadrons), and atomic nuclei. [1] [2] Spin is a solely quantum-mechanical phenomenon; it does not have a counterpart in classical mechanics (despite the term spin being reminiscent of classical phenomena such as a planet spinning on its axis). [2]

Fermi hole, Fermi heap (http://en.wikipedia.org/wiki/Fermi_heap_and_Fermi_hole): Fermi heap and Fermi hole refer to two closely related quantum phenomena that occur in many-electron atoms.

Fermi holes and Fermi heaps, Fall 2002, CH352 Physical Chemistry (http://quantum.bu.edu/notes/GeneralChemistry/FermniHolesAndHeaps.html): You may have learned the "rule" that no more than two electrons can be in the same orbital. If you have, you may also have puzzled about why such a rule is so.

Hilbert space
Bose–Einstein condensate

A Bose–Einstein condensate (BEC) is a state of matter of bosons confined in an external potential and cooled to temperatures very near to absolute zero. Under such supercooled conditions, a large fraction of the atoms collapse into the lowest quantum state of the external potential, at which point quantum effects become apparent on a macroscopic scale. This state of matter was first predicted by Satyendra Nath Bose in 1925. Bose submitted a paper to the Zeitschrift für Physik but was turned down by peer review. Bose then took his work to Einstein, who recognized its merit and had it published under the names Bose and Einstein, hence the hyphen. Seventy years later, the first gaseous condensate was produced by Eric Cornell and Carl Wieman in 1995 at the University of Colorado at Boulder NIST-JILA lab, using a gas of rubidium atoms cooled to 170 nanokelvin (nK). Eric Cornell, Carl Wieman and Wolfgang Ketterle at MIT were awarded the 2001 Nobel Prize in Physics in Stockholm, Sweden. "Condensates" are extremely low-temperature fluids which contain properties and exhibit behaviors that are currently not completely understood, such as spontaneously flowing out of their containers. The effect is a consequence of quantum mechanics, which states that, since continuous spectral regions can typically be neglected, systems can almost always acquire energy only in discrete steps. If a system is at such a low temperature that it is in the lowest energy state, it is no longer possible for it to reduce its energy, not even by friction. Without friction, the fluid will easily overcome gravity because of adhesion between the fluid and the container wall, and it will take up the most favorable position, all around the container.

Bose–Einstein condensation is an exotic quantum phenomenon that was observed in dilute atomic gases for the first time in 1995, and is now the subject of intense theoretical and experimental study. The slowing of atoms by use of cooling apparatuses produces a singular quantum state known as a Bose condensate or Bose–Einstein condensate. This phenomenon was predicted in 1925 by generalizing Satyendra Nath Bose's work on the statistical mechanics of (massless) photons to (massive) atoms. (The Einstein manuscript, believed to be lost, was found in a library at Leiden University in 2005.) The result of the efforts of Bose and Einstein is the concept of a Bose gas, governed by Bose–Einstein statistics, which describes the statistical distribution of identical particles with integer spin, now known as bosons. Bosonic particles, which include the photon as well as atoms such as helium-4, are allowed to share quantum states with each other. Einstein demonstrated that cooling bosonic atoms to a very low temperature would cause them to fall (or "condense") into the lowest accessible quantum state, resulting in a new form of matter. The transition to the condensed phase occurs below a critical temperature

$$T_c=\left(\frac{n}{\zeta(3/2)}\right)^{2/3}\frac{h^2}{2\pi m k_B},$$

where $T_c$ is the critical temperature, $n$ is the particle density, $m$ is the mass per boson, $h$ is Planck's constant, $k_B$ is the Boltzmann constant, and $\zeta$ is the Riemann zeta function; $\zeta(3/2)\approx 2.6124$.

Einstein's Argument

Consider a collection of N noninteracting particles which can each be in one of two quantum states, $|0\rangle$ and $|1\rangle$. If the two states are equal in energy, each different configuration is equally likely.
If we can tell which particle is which, there are $2^N$ different configurations, since each particle can be in $|0\rangle$ or $|1\rangle$ independently. In almost all the configurations, about half the particles are in $|0\rangle$ and the other half in $|1\rangle$. The balance is a statistical effect: the number of configurations is largest when the particles are divided equally.

If the particles are indistinguishable, however, there are only $N+1$ different configurations. If there are $K$ particles in state $|0\rangle$, there are $N-K$ particles in state $|1\rangle$. Whether any particular particle is in state $|0\rangle$ or in state $|1\rangle$ cannot be determined, so each value of $K$ determines a unique quantum state for the whole system. If all these states are equally likely, there is no statistical spreading out: it is just as likely for all the particles to sit in $|0\rangle$ as for the particles to be split half and half.

Suppose now that the energy of state $|1\rangle$ is slightly greater than the energy of state $|0\rangle$ by an amount $E$. At temperature $T$, a particle will have a lesser probability to be in state $|1\rangle$ by a factor $\exp(-E/T)$ (in units where $k_B = 1$). In the distinguishable case, the particle distribution will be biased slightly towards state $|0\rangle$ and the distribution will be slightly different from half and half. But in the indistinguishable case, since there is no statistical pressure toward equal numbers, the most likely outcome is that most of the particles will collapse into state $|0\rangle$.

In the distinguishable case, for large $N$, the fraction in state $|0\rangle$ can be computed. It is the same as coin flipping with a coin which has probability $p = \exp(-E/T)$ to land tails. The fraction of heads is $1/(1+p)$, which is a smooth function of $p$, and hence of the energy.

In the indistinguishable case, each value of $K$ is a single state with its own Boltzmann probability,
$$P(K) = C\, e^{-K E/T} = C\, p^K.$$
For large $N$, the normalization constant $C$ is $(1-p)$. The expected total number of particles which are not in the lowest energy state, in the limit that $N \rightarrow \infty$, is equal to $\sum_{n>0} C\, n\, p^n = p/(1-p)$. It does not grow when $N$ is large; it just approaches a constant. This will be a negligible fraction of the total number of particles. So a collection of enough Bose particles in thermal equilibrium will mostly be in the ground state, with only a few in any excited state, no matter how small the energy difference.

Consider now a gas of particles, which can be in different momentum states labelled $|k\rangle$. If the number of particles is less than the number of thermally accessible states, for high temperatures and low densities, the particles will all be in different states. In this limit the gas is classical. As the density increases or the temperature decreases, the number of accessible states per particle becomes smaller, and at some point more particles will be forced into a single state than the maximum allowed for that state by statistical weighting. From this point on, any extra particle added will go into the ground state.

To calculate the transition temperature at any density, integrate over all momentum states the expression for the maximum number of excited particles, $p/(1-p)$:
$$N = V \int \frac{d^3k}{(2\pi)^3}\, \frac{p(k)}{1-p(k)} = V \int \frac{d^3k}{(2\pi)^3}\, \frac{1}{e^{k^2/2mT}-1}, \qquad p(k) = e^{-k^2/2mT}.$$
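Before the integral is evaluated, here is a short numerical sketch of that step (assuming NumPy and SciPy are available; the particle density used at the end is an arbitrary illustrative value, not data from any experiment). It checks that the momentum integral above reproduces the $\zeta(3/2)$ critical-density formula, and then evaluates the critical-temperature formula quoted earlier.

```python
# Check that n_c = ∫ d^3k/(2π)^3 1/(e^{ħ²k²/2mk_BT} - 1) equals ζ(3/2) (m k_B T / 2π ħ²)^{3/2},
# which is equivalent to the formula T_c = (n/ζ(3/2))^{2/3} h²/(2π m k_B) quoted above.
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

# Dimensionless form: substituting u = ħk / sqrt(2 m k_B T) reduces the check to
#   ∫_0^∞ u² / (e^{u²} - 1) du  =  (√π / 4) ζ(3/2)
lhs, _ = quad(lambda u: u**2 / np.expm1(u**2), 1e-12, np.inf)
rhs = np.sqrt(np.pi) / 4 * zeta(1.5)
print(f"integral = {lhs:.6f},  (sqrt(pi)/4) * zeta(3/2) = {rhs:.6f}")

# Critical temperature for an illustrative (hypothetical) gas of rubidium-87 atoms.
h  = 6.626e-34          # Planck constant, J s
kB = 1.381e-23          # Boltzmann constant, J/K
m  = 87 * 1.66e-27      # mass of one Rb-87 atom, kg
n  = 1e19               # particle density in m^-3 -- an assumed value, for illustration only
Tc = (n / zeta(1.5))**(2/3) * h**2 / (2 * np.pi * m * kB)
print(f"T_c ≈ {Tc * 1e9:.1f} nK for n = {n:.0e} m^-3")
```

The two printed numbers in the first line agree, which is exactly the dimensional-analysis statement made in the next paragraph.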
When the integral is evaluated with the factors of $k_B$ and $\hbar$ restored by dimensional analysis, it gives the critical temperature formula of the preceding section. Therefore, this integral defines the critical temperature and particle number corresponding to the conditions of zero chemical potential ($\mu = 0$ in the Bose–Einstein statistics distribution).

The Gross–Pitaevskii equation

The state of the BEC can be described by the wavefunction of the condensate $\psi(\vec{r})$. For a system of this nature, $|\psi(\vec{r})|^2$ is interpreted as the particle density, so the total number of atoms is
$$N = \int d\vec{r}\; |\psi(\vec{r})|^2.$$
Provided essentially all atoms are in the condensate (that is, have condensed to the ground state), and treating the bosons using mean field theory, the energy $E$ associated with the state $\psi(\vec{r})$ is
$$E = \int d\vec{r} \left[ \frac{\hbar^2}{2m}\, |\nabla\psi(\vec{r})|^2 + V(\vec{r})\, |\psi(\vec{r})|^2 + \frac{1}{2}\, U_0\, |\psi(\vec{r})|^4 \right].$$
Minimising this energy with respect to infinitesimal variations in $\psi(\vec{r})$, and holding the number of atoms constant, yields the Gross–Pitaevskii equation (GPE), a non-linear Schrödinger equation:
$$i\hbar\, \frac{\partial \psi(\vec{r})}{\partial t} = \left( -\frac{\hbar^2}{2m}\nabla^2 + V(\vec{r}) + U_0\, |\psi(\vec{r})|^2 \right) \psi(\vec{r}),$$
where $m$ is the mass of the bosons, $V(\vec{r})$ is the external potential, and $U_0$ is representative of the inter-particle interactions. The GPE provides a good description of the behavior of BECs and is the approach often applied to their theoretical analysis.

Velocity-distribution data graph

In the image accompanying this article, the velocity-distribution data confirms the discovery of the Bose–Einstein condensate out of a gas of rubidium atoms. The false colors indicate the number of atoms at each velocity, with red being the fewest and white being the most. The areas appearing white and light blue are at the lowest velocities. The peak is not infinitely narrow because of the Heisenberg uncertainty principle: since the atoms are trapped in a particular region of space, their velocity distribution necessarily possesses a certain minimum width. This width is given by the curvature of the magnetic trapping potential in the given direction. More tightly confined directions have bigger widths in the ballistic velocity distribution. This anisotropy of the peak on the right is a purely quantum-mechanical effect and does not exist in the thermal distribution on the left. This famous graph served as the cover design for the 1999 textbook Thermal Physics by Ralph Baierlein.

As in many other systems, vortices can exist in BECs. These can be created, for example, by "stirring" the condensate with lasers, or by rotating the confining trap. The vortex created will be a quantum vortex. These phenomena are allowed for by the non-linear term in the GPE (the $|\psi(\vec{r})|^2$ term, that is). As the vortices must have quantised angular momentum, the wavefunction will be of the form $\psi(\vec{r}) = \phi(\rho,z)\, e^{i\ell\theta}$, where $\rho$, $z$ and $\theta$ are cylindrical coordinates and $\ell$ is the angular quantum number. To determine $\phi(\rho,z)$, the energy of $\psi(\vec{r})$ must be minimised subject to the constraint $\psi(\vec{r}) = \phi(\rho,z)\, e^{i\ell\theta}$. This is usually done computationally; however, in a uniform medium an analytic form in the rescaled radius $x = \rho/(\ell\xi)$, where $n^2$ is the density far from the vortex and $\xi$ is the healing length of the condensate, demonstrates the correct behavior and is a good approximation. A singly-charged vortex ($\ell = 1$) is in the ground state, with its energy $\epsilon_v$ given approximately by
$$\epsilon_v \approx \pi n\, \frac{\hbar^2}{m}\, \ln\!\left(\frac{b}{\xi}\right),$$
where $b$ is the farthest distance from the vortex considered.
(To obtain an energy which is well defined it is necessary to include this boundary $b$.) For multiply-charged vortices ($\ell > 1$) the energy is approximated by
$$\epsilon_v \approx \ell^2\, \pi n\, \frac{\hbar^2}{m}\, \ln\!\left(\frac{b}{\xi}\right),$$
which is greater than that of $\ell$ singly-charged vortices, indicating that such multiply-charged vortices are unstable to decay. Research has, however, indicated that they are metastable states, so they may have relatively long lifetimes.

Unusual characteristics

Further experimentation by the JILA team in 2000 uncovered a hitherto unknown property of Bose–Einstein condensates. Cornell, Wieman, and their coworkers originally used rubidium-87, an isotope whose atoms naturally repel each other, making a more stable condensate. The JILA team's instrumentation now gave better control over the condensate, so experiments were performed on naturally attracting atoms of another rubidium isotope, rubidium-85 (which has a negative atom-atom scattering length). Through a process called Feshbach resonance, involving a sweep of the magnetic field that causes spin-flip collisions, the JILA researchers lowered the characteristic, discrete energies at which the rubidium atoms bond into molecules, making their Rb-85 atoms repulsive and creating a stable condensate. The reversible flip from attraction to repulsion stems from quantum interference among condensate atoms, which behave as waves. When the interaction was instead swept to the attractive side, the condensate first imploded and then underwent a small explosion, ejecting a burst of atoms. Because supernova explosions are also preceded by an implosion, the explosion of a collapsing Bose–Einstein condensate was named "bosenova", a pun on the musical style bossa nova. The atoms that seem to have disappeared almost certainly still exist in some form, just not in a form that could be detected in that experiment. Most likely they formed molecules consisting of two bonded rubidium atoms. The energy gained by making this transition imparts a velocity sufficient for them to leave the trap without being detected.

Current research

Compared to more commonly encountered states of matter, Bose–Einstein condensates are extremely fragile. The slightest interaction with the outside world can be enough to warm them past the condensation threshold, forming a normal gas and losing their interesting properties. It is likely to be some time before any practical applications are developed. Nevertheless, they have proved to be useful in exploring a wide range of questions in fundamental physics, and the years since the initial discoveries by the JILA and MIT groups have seen an explosion in experimental and theoretical activity. Examples include experiments that have demonstrated interference between condensates due to wave-particle duality, the study of superfluidity and quantized vortices, and the slowing of light pulses to very low speeds using electromagnetically induced transparency. Vortices in Bose–Einstein condensates are also currently the subject of analogue-gravity research, studying the possibility of modeling black holes and their related phenomena in such environments in the lab. Experimentalists have also realized "optical lattices", where the interference pattern from overlapping lasers provides a periodic potential for the condensate. These have been used to explore the transition between a superfluid and a Mott insulator, and may be useful in studying Bose–Einstein condensation in fewer than three dimensions, for example the Tonks–Girardeau gas. Related experiments in cooling fermions rather than bosons to extremely low temperatures have created degenerate gases, where the atoms do not congregate in a single state due to the Pauli exclusion principle.
To exhibit Bose–Einstein condensation, the fermions must "pair up" to form compound particles (e.g. molecules or Cooper pairs) that are bosons. The first molecular Bose–Einstein condensates were created in November 2003 by the groups of Rudolf Grimm at the University of Innsbruck, Deborah S. Jin at the University of Colorado at Boulder and Wolfgang Ketterle at MIT. Jin quickly went on to create the first fermionic condensate composed of Cooper pairs.

In 1999, Danish physicist Lene Vestergaard Hau led a team from Harvard University which succeeded in slowing a beam of light to about 17 metres per second and, in 2001, was able to momentarily stop a beam. She was able to achieve this by using a superfluid. Hau and her associates at Harvard University have since successfully transformed light into matter and back into light using Bose–Einstein condensates: details of the experiment are discussed in an article in the journal Nature, 8 February 2007.

Some subtleties

One should not overlook that the effect involves subtleties, which are not always mentioned. One may already be "used" to the prejudice that the effect really needs the mentioned ultralow temperatures of 10⁻⁷ K or below, and is mainly based on the nuclear properties of (typically) alkali atoms, i.e. properties which fit to working with "traps". However, the situation is more complicated. It is true that, up to 2004, using the above-mentioned "ultralow temperatures", Bose–Einstein condensation had been found for a multitude of isotopes involving mainly alkali and alkaline-earth atoms (7Li, 23Na, 41K, 52Cr, 85Rb, 87Rb, 133Cs and 174Yb). Not astonishingly, even with hydrogen, condensation research was finally successful, although with special methods.

In contrast, the superfluid state of the bosonic 4He at temperatures below the "rather high" (many people would say "rather low"!) temperature of 2.17 K is not a good example for Bose–Einstein condensation, because the interaction between the 4He bosons is simply too strong, so that at zero temperature, in contrast to the Bose–Einstein theory, not 100% but only 8% of the atoms are in the ground state.

Even the fact that the mentioned alkali gases show bosonic, and not fermionic, behaviour, as solid state physicists or chemists would expect, is based on a subtle interplay of electronic and nuclear spins: at the mentioned ultralow temperatures and corresponding excitation energies, the (half-integer, in units of $\hbar$) total spin of the electronic shell and the (also half-integer) total spin of the nucleus of the atom are coupled by the (very weak) hyperfine interaction to the (integer!) total spin of the atom. Only the fact that this last-mentioned total spin is an integer implies that, at the mentioned ultralow temperatures, the behaviour of the atom is bosonic, whereas e.g. the "chemistry" of the system at room temperature is determined by the electronic properties, i.e. essentially fermionic, since at room temperature thermal excitations have typical energies which are much higher than the hyperfine values. (Here one should remember the spin-statistics theorem of Wolfgang Pauli, which states that half-integer spins lead to fermionic behaviour, e.g. the Pauli exclusion principle forbidding two electrons from occupying the same single-particle state, whereas integer spins lead to bosonic behaviour, e.g. condensation of identical bosonic particles in a common ground state.)
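Returning for a moment to the Gross–Pitaevskii equation quoted earlier: here is a minimal one-dimensional sketch of how its ground state can be found numerically (assuming NumPy is available; the trap, the interaction strength g, and the grid parameters are arbitrary illustrative choices in units ħ = m = 1, not values tied to any experiment). It propagates the wavefunction in imaginary time with a split-step scheme, renormalising after each step, which damps out excited states and converges toward the condensate ground state.

```python
# Imaginary-time split-step solver for the 1D GPE (ħ = m = 1):
#   i ∂ψ/∂t = -(1/2) ∂²ψ/∂x² + V(x) ψ + g |ψ|² ψ,   with V(x) = x²/2 (harmonic trap).
import numpy as np

N, L = 512, 20.0
x  = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k  = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # angular wavenumbers for the FFT grid
V  = 0.5 * x**2                            # trap potential (illustrative)
g  = 5.0                                   # interaction strength (illustrative)
dtau = 2e-3                                # imaginary-time step

psi = np.exp(-x**2)                              # arbitrary starting guess
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx) # normalise to one particle

for _ in range(10000):
    # half kinetic step (in Fourier space), full potential + interaction step, half kinetic step
    psi = np.fft.ifft(np.exp(-0.5 * dtau * 0.5 * k**2) * np.fft.fft(psi))
    psi = psi * np.exp(-dtau * (V + g * np.abs(psi)**2))
    psi = np.fft.ifft(np.exp(-0.5 * dtau * 0.5 * k**2) * np.fft.fft(psi))
    psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

# chemical potential μ = ∫ ψ* ( -½∂²/∂x² + V + g|ψ|² ) ψ dx for the converged state
kin = np.fft.ifft(0.5 * k**2 * np.fft.fft(psi))
mu = np.real(np.sum(np.conj(psi) * (kin + (V + g * np.abs(psi)**2) * psi)) * dx)
print(f"approximate GPE ground-state chemical potential: mu ≈ {mu:.4f}")
```

This is only a sketch of the standard imaginary-time technique, not the method used by any of the groups mentioned above; real GPE studies of vortices and traps use the same idea in two or three dimensions.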
In contrast to the above properties, Bose–Einstein condensation is not necessarily restricted to ultralow temperatures: in 2006 physicists around S. Demokritov in Münster, Germany, found Bose–Einstein condensation of magnons (i.e. quantized spin waves) at room temperature, admittedly with the help of pumping processes.

Use in popular science

A prominent example of the use of Bose–Einstein condensation in popular science is at the Physics 2000 web site developed at the University of Colorado at Boulder. In the context of popularizations, atomic BEC is sometimes called a Super Atom.

References

• Bose, S. N. (1924). "Plancks Gesetz und Lichtquantenhypothese". Zeitschrift für Physik 26: 178.
• Einstein, A. (1925). "Quantentheorie des einatomigen idealen Gases". Sitzungsberichte der Preussischen Akademie der Wissenschaften 1: 3.
• Landau, L. D. (1941). "The theory of superfluidity of Helium II". J. Phys. USSR 5: 71–90.
• L. Landau (1941). "Theory of the Superfluidity of Helium II". Physical Review 60: 356–358.
• M.H. Anderson, J.R. Ensher, M.R. Matthews, C.E. Wieman, and E.A. Cornell (1995). "Observation of Bose–Einstein Condensation in a Dilute Atomic Vapor". Science 269: 198–201.
• C. Barcelo, S. Liberati and M. Visser (2001). "Analogue gravity from Bose–Einstein condensates". Classical and Quantum Gravity 18: 1137–1156.
• P.G. Kevrekidis, R. Carretero-González, D.J. Frantzeskakis and I.G. Kevrekidis (2006). "Vortices in Bose–Einstein Condensates: Some Recent Developments". Modern Physics Letters B 5.
• K.B. Davis, M.-O. Mewes, M.R. Andrews, N.J. van Druten, D.S. Durfee, D.M. Kurn, and W. Ketterle (1995). "Bose–Einstein condensation in a gas of sodium atoms". Physical Review Letters 75: 3969–3973.
• D. S. Jin, J. R. Ensher, M. R. Matthews, C. E. Wieman, and E. A. Cornell (1996). "Collective Excitations of a Bose–Einstein Condensate in a Dilute Gas". Physical Review Letters 77: 420–423.
• M. R. Andrews, C. G. Townsend, H.-J. Miesner, D. S. Durfee, D. M. Kurn, and W. Ketterle (1997). "Observation of interference between two Bose condensates". Science 275: 637–641.
• M. R. Matthews, B. P. Anderson, P. C. Haljan, D. S. Hall, C. E. Wieman, and E. A. Cornell (1999). "Vortices in a Bose–Einstein Condensate". Physical Review Letters 83: 2498–2501.
• E.A. Donley, N.R. Claussen, S.L. Cornish, J.L. Roberts, E.A. Cornell, and C.E. Wieman (2001). "Dynamics of collapsing and exploding Bose–Einstein condensates". Nature 412: 295–299.
• M. Greiner, O. Mandel, T. Esslinger, T. W. Hänsch, I. Bloch (2002). "Quantum phase transition from a superfluid to a Mott insulator in a gas of ultracold atoms". Nature 415: 39–44.
• S. Jochim, M. Bartenstein, A. Altmeyer, G. Hendl, S. Riedl, C. Chin, J. Hecker Denschlag, and R. Grimm (2003). "Bose–Einstein Condensation of Molecules". Science 302: 2101–2103.
• Markus Greiner, Cindy A. Regal and Deborah S. Jin (2003). "Emergence of a molecular Bose–Einstein condensate from a Fermi gas". Nature 426: 537–540.
• M. W. Zwierlein, C. A. Stan, C. H. Schunck, S. M. F. Raupach, S. Gupta, Z. Hadzibabic, and W. Ketterle (2003). "Observation of Bose–Einstein Condensation of Molecules". Physical Review Letters 91: 250401.
• C. A. Regal, M. Greiner, and D. S. Jin (2004). "Observation of Resonance Condensation of Fermionic Atom Pairs". Physical Review Letters 92: 040403.
• C. J. Pethick and H. Smith, Bose–Einstein Condensation in Dilute Gases, Cambridge University Press, Cambridge, 2001.
• Lev P. Pitaevskii and S. Stringari, Bose–Einstein Condensation, Clarendon Press, Oxford, 2003.
• Amandine Aftalion, Vortices in Bose–Einstein Condensates, PNLDE Vol. 67, Birkhäuser, 2006.
• M. Mackie, K. A. Suominen, and J. Javanainen (2002). "Mean-field theory of Feshbach-resonant interactions in 85Rb condensates". Physical Review Letters 89: 180403.
Wednesday, September 08, 2010

The Narrow Cosmic Performance Envelope

The Cosmos must have a very particular performance envelope if evolution is going to get anywhere very fast (i.e. 0 to Life in a mere 15 billion years). Brian Charlwood has posted a comment on my blog post Not a Lot of People Know That. As it's difficult to work with those narrow comment columns I thought I would put my reply here. Brian's comments are in italics.

You say //So evolution is not a fluke process as it has to be resourced by probabilistic biases.// so it is either a deterministic system or it is a random system.

I am not happy with this determinism vs. randomness dichotomy. To appreciate this, consider the tossing of a coin. The average coin gives a random configuration of heads/tails with a fifty/fifty mix. But imagine some kind of "tossing" system where the mix was skewed in favour of heads. In fact, imagine that on average tails only turned up once a year. This system is much closer to a "deterministic" system than it is to the maximally random system of a 50/50 mix. To my mind the lesson here is that the apparent dichotomy of randomness vs. determinism does no justice to what is in fact a continuum.

A deterministic system requires two ingredients: 1/ a state space and 2/ an updating rule. For example, a pendulum has as its state space all possible positions of the pendulum, and as updating rules the laws of Newton (gravity, F = ma), which tell you how to go from one state to another, for instance from the pendulum in the lowest position to the pendulum in the highest position on the left.

Fine, I'm not averse to that neat way of modeling general deterministic systems as they develop in time, but for myself I've scrapped the notion of time. I think of applied mathematics as a set of algorithms for embodying descriptive information about the "timeless" structure of systems. This is partly a result of an acquaintance with relativity, which makes the notion of a strict temporal sequencing across the vastness of space problematical. Also, don't forget that these mathematical systems can also be used to make "predictions" about the past (or post-dictions), a fact which also suggests that mathematical models are "information"-bearing descriptive objects rather than being what I can only best refer to here as "deeply causative ontologies".

A random system is a bit more intricate. It can be built up with 1/ a state space and 2/ an updating rule. Huh? Looks the same. Yeah, but I can now add that the rule updates probabilities. Contrary to deterministic systems, the updating rule does not tell us what the next state is going to look like given a previous state; it only tells us how to update the probability of a certain state. Actually, that is only one possible kind of random system; one could also build updating rules which are themselves random. So you have a lot of possibilities: on the level of probabilities, a random system can look like a deterministic system, but it is really only predicting probabilities. It can also be random on the level of probabilities, requiring a kind of meta-probabilistic description.

If I understand you right then the Schrödinger equation is an example of a system that updates probabilities deterministically. The meta-probabilistic description you talk of is, I think, mathematically equivalent to conditional probabilities. This comes up in the random walk, where steps to the left or right by a given distance are assigned probabilities.
But conceivably step sizes could also vary in a probabilistic way, thus superimposing probabilities on probabilities, i.e. conditional probabilities. In the random walk scenario the fascinating upshot of this intricacy is that it has no effect on the general probability distribution as it develops in space (see the "central limit theorem").

Anyway, these are technical details, but let's look at what happens when we have a deterministic system and we introduce the slightest bit of randomness. Take again the pendulum. What might happen is that we don't know the initial state with certainty; the result is that you still have a deterministic updating rule, but you can now only predict how the probability of having a certain state will evolve. Now, this is still a deterministic system; the probability only creeps in because we have no knowledge of the initial state. But suppose the pendulum was driven by a genuine random system. Say that the initial state of the pendulum is chosen by looking at the state of a radioactive atom. If the atom decayed in a certain time interval, we let the pendulum start on the left, if not, on the right. The pendulum as such is still a deterministic system. But because we have coupled it to a random system, the system as a whole becomes random. This randomness would be irreducible.

This would classify as one of those systems on the deterministic/random spectrum. The mathematics of classical mechanics would mean that any old behavior is not open to the pendulum system, and therefore it is not maximally random; the system is constrained by classical mechanics to behave within certain limits. The uncertainty in initial conditions, when combined with the mathematical constraint of classical mechanics, would produce a system that behaves randomly only within a limited envelope of randomness. The important point to note is that it is an envelope, that is, an object with limits, albeit fuzzy limits like a cloud. Limits imply order. Thus, we have here a system that is a blend of order and randomness; think back to that coin tossing system where tails turned up randomly but very infrequently.

So, if you want to say that there is a part of evolution that is random, the consequence is that the whole of it is random and therefore it is all one big undesigned fluke.

No, I don't believe we can yet go this far. Your randomly perturbed pendulum provides a useful metaphor: relative to the entire space of possibility the pendulum's behavior is highly organized, its degrees of freedom very limited. Here, once again, the probabilities are concentrated in a relatively narrow envelope of behavior, just as they must be in any working evolutionary system, unless, of course, one invokes some kind of multiverse, which is one (speculative) way of attempting to maintain the "It's just one big fluke" theory. Otherwise, just how we ended up with a universe that has a narrow probability envelope (i.e. an ordered universe) is, needless to say, the big contention that gets people hot under the collar.
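A small Monte Carlo sketch of two of the claims above (assuming only NumPy; all the numbers, the bias, the walker counts and step sizes, are arbitrary illustrative choices): a heavily biased coin is still a perfectly respectable random system, just one whose probability mass sits in a narrow region of outcome space; and a random walk whose step sizes are themselves random still spreads into the familiar bell-shaped envelope, as the central limit theorem says it must.

```python
# Two toy illustrations of the "narrow probability envelope" idea.
import numpy as np

rng = np.random.default_rng(0)

# 1. A heavily biased coin: tails comes up with probability 1/365 ("once a year").
#    Far from 50/50, yet still a random system.
tosses = rng.random(365 * 100) < 1 / 365
print(f"tails per 365 tosses (average over 100 'years'): {tosses.sum() / 100:.2f}")

# 2. A random walk with random step sizes: each step is +s or -s with equal
#    probability, where the size s is itself drawn at random.  Despite the
#    "probabilities on probabilities", the end-point distribution still looks
#    Gaussian (central limit theorem): roughly 68% of walkers end within one
#    standard deviation of the origin and about 95% within two.
n_walkers, n_steps = 5000, 500
signs = rng.choice([-1.0, 1.0], size=(n_walkers, n_steps))
sizes = rng.uniform(0.5, 1.5, size=(n_walkers, n_steps))   # random step lengths
endpoints = (signs * sizes).sum(axis=1)
sigma = endpoints.std()
within_1 = np.mean(np.abs(endpoints) < sigma)
within_2 = np.mean(np.abs(endpoints) < 2 * sigma)
print(f"fraction within 1 sigma: {within_1:.3f}  (Gaussian: ~0.683)")
print(f"fraction within 2 sigma: {within_2:.3f}  (Gaussian: ~0.954)")
```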
While investigating the EPR Paradox, it seems like only two options are given, when there could be a third that is not mentioned - Heisenberg's Uncertainty Principle being given up.

The setup is this (in the Wikipedia article): given two entangled particles, separated by a large distance, if one is measured then some additional information is known about the other; the example is that Alice measures the z-axis and Bob measures the x-axis position, but to preserve the uncertainty principle it's thought that either information is transmitted instantaneously (faster than light, violating the special theory of relativity) or information is pre-determined in hidden variables, which looks to be not the case.

What I'm wondering is why the HUP is not questioned? Why don't we investigate whether a situation like this does indeed violate it, instead of no mention of its possibility? Has the HUP been verified experimentally to the point where it is foolish to question it (like gravity, perhaps)?

It seems that all the answers are not addressing my question, but addressing waveforms/commutation relations/Fourier transforms. I am not arguing against commutation relations or Fourier transforms. Is not QM the theory that particles can be represented via these Fourier transforms/commutation relations? What I'm asking is this: is it conceivable that QM is wrong about this in certain instances, for example a zero state energy, or at absolute zero, or in some area of the universe or under certain conditions we haven't explored? As in: Is the claim then that if momentum and position of a particle were ever to be known somehow under any circumstance, Quantum Mechanics would have to be completely tossed out? Or could we say QM doesn't represent particles at {absolute zero or some other bizarre condition} the same way we say Newtonian physics is pretty close but doesn't represent objects moving at a decent fraction of the speed of light?

These are from the Wikipedia article on the EPR Paradox. This seems to me to be a false dichotomy; the third option being: we could measure the momentum of one entangled particle and the position of the other simultaneously, and just know both momentum and position and beat the HUP. However, this is just 'not an option,' apparently.

I'm not disputing that two quantities that are Fourier transforms of each other are non-commutative / cannot both be known simultaneously, as a mathematical construct. Nor am I arguing that the HUP is indeed false. I'm looking for justification not just that subatomic particles can be modeled as waveforms under certain conditions (Earth-like ones, notably), but that a waveform is the only thing that can possibly represent them, and any other representation is wrong. You can verify the positive all day long, that still doesn't disprove the negative. It is POSSIBLE that waveforms do not correctly model particles in all cases at all times. This wouldn't automatically mean all of QM is false, either - just that QM isn't the best model under certain conditions. Why is this not discussed?

I +1d to get rid of the downvote you had. It's the last line that did it for me. – Olly Price Aug 13 '12 at 22:37

Anyone who is downvoting care to elaborate on where my question is unclear, unuseful or shows no effort? I'd be glad to improve it if I can. – Ehryk Aug 13 '12 at 23:40

Try Bohmian mechanics.
– MBN Sep 6 '12 at 11:02

@Ehryk: Not my downvote, but this question is a waste of time. You misunderstood what EPR is all about. The EPR effects have nothing to do with HUP, and you can show that they are inconsistent with local variables determining experimental outcomes without doing quantum mechanics, just from the experimental outcomes themselves. This means the weirdness is not due to the formalism, but really there in nature. – Ron Maimon Sep 11 '12 at 6:15

So in a universe without the commutation relation/HUP, where the commutation relation was sometimes zero / position and momentum could both be known, where's the paradox with EPR? You could just determine the values of both entangled particles, no paradox necessary. – Ehryk Sep 11 '12 at 7:24

12 Answers

In precise terms, the Heisenberg uncertainty relation states that the product of the expected uncertainties in position and in momentum of the same object is bounded away from zero. Your entanglement example at the end of your edit does not fit this, as you measure only once, hence have no means to evaluate expectations. You may claim to know something but you have no way to check it. In other entanglement experiments, you can compare statistics on both sides, and see that they conform to the predictions of QM. In your case, there is nothing to compare, so the alleged knowledge is void.

The reason why the Heisenberg uncertainty relation is undoubted is that it is a simple algebraic consequence of the formalism of quantum mechanics and the fundamental relation $[x,p]=i\hbar$ that stood at the beginning of an immensely successful development. Its invalidity would therefore imply the invalidity of most of current physics.

Bell inequalities are also a simple algebraic consequence of the formalism of quantum mechanics, but already in a more complex set-up. They were tested experimentally mainly because they shed light on the problem of hidden variables, not because they are believed to be violated. The Heisenberg uncertainty relation is mainly checked for consistency using Gedanken experiments, which show that it is very difficult to come up with a feasible way of defeating it. In the past, there have been numerous Gedanken experiments along various lines, including intuitive and less intuitive settings, and none could even come close to establishing a potential violation of the HUP.

Edit: One reaches experimental limitations long before the HUP requires it. Nobody has found a Gedankenexperiment for how to defeat the HUP, even in principle. We don't know of any mechanism to stop an electron, thereby bringing it to rest. It is not enough to pretend such a mechanism exists; one must show a way to achieve it in principle. For example, electron traps only confine an electron to a small region a few atoms wide, where it will roam with a large and unpredictable momentum, due to the confinement. Thus until QM is proven false, the HUP is considered true. Any invalidation of the foundations of QM (and this includes the HUP) would shake the world of physicists, and nobody expects it to happen.

Why wouldn't it just invalidate it under certain conditions? For example: by some means, we completely arrest an electron. Position = center of device, momentum = 0. Both known simultaneously. Couldn't we just say QM is 'not a valid model for arrested particles but works for moving ones' without invalidating most of current physics?
– Ehryk Sep 7 '12 at 21:43

Any invalidation of the foundations of QM would shake the world of physicists. - But the center of a device is usually poorly definable, and an electron cannot be arrested completely, neither in position nor in momentum. One reaches experimental limitations long before the HUP requires it. - In the past, there have been numerous Gedanken experiments along similar and many other lines, and none could even come close to establishing a violation of the HUP. – Arnold Neumaier Sep 9 '12 at 13:12

until QM is proven false, the HUP is true. – Arnold Neumaier Sep 10 '12 at 11:53

@Ehryk: here's why you're seeming nonsensical to everyone here: at small length scales, an electron looks very, very much like a wave. You get interference patterns and everything. Now, you want to 'stop' it. Well, a "slower" electron has a longer wavelength than a "faster" one, but this longer wavelength is going to spread it out farther. By the time you get to your limit of a 'stopped' electron, the electron will be spread out over all of space. – Jerry Schirmer Sep 14 '12 at 23:13

@Ehryk: sure. Under circumstances not observed, maybe anything could happen. There's just no reason to believe that will be the case. The default assumption should be that things we haven't observed will act like things we have observed. – Jerry Schirmer Oct 12 '12 at 19:42

In quantum mechanics, two observables that cannot be simultaneously determined are said to be non-commuting. This means that if you write down the commutation relation for them, it turns out to be non-zero. A commutation relation for any two operators $A$ and $B$ is just the following
$$[A, B] = AB - BA$$
If they commute, it's equal to zero. For position and momentum, it is easy to calculate the commutation relation for the position and momentum operators. It turns out to be
$$[\hat x ,\hat p] = \hat x \hat p - \hat p \hat x = i \hbar$$
As mentioned, it will always be some non-zero number for non-commuting observables. So, what does that mean physically? It means that no state can exist that has both a perfectly defined momentum and a perfectly defined position (since $|\psi \rangle$ would have to be a right eigenstate of both momentum and position, in which case the commutator applied to it would give zero; and we see that it doesn't). So, if the uncertainty principle were false, so would be the commutation relations, and therefore the rest of quantum mechanics. Considering the mountains of evidence for quantum mechanics, this isn't a possibility.

I think I should clarify the difference between the HUP and the classical observer effect. In classical physics, you also can't determine the position and momentum of a particle. Firstly, knowing the position to perfect accuracy would require you to use light of infinite frequency (I said wavelength in my comment, that's a mistake), which is impossible. See Heisenberg's microscope. Also, determining the position of a particle to better accuracy requires you to use higher frequencies, which means higher-energy photons. These will disturb the velocity of the particle. So, knowing the position better means knowing the momentum less.

The uncertainty principle is different from this. Not only does it say you can't determine both, but that the particle literally doesn't have a well-defined momentum to be measured if you know the position to a high accuracy. This is a part of the more general fact in quantum mechanics that it is meaningless to speak of the physical properties of a particle before you take measurements on them.
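A quick numerical illustration of the commutation relation used in this answer (assuming NumPy; the grid size and the test wave packet are arbitrary choices, and ħ is set to 1): it builds the position and momentum operators as matrices on a 1D grid, with $\hat p = -i\, d/dx$ approximated by a central finite difference, and checks that $[\hat x, \hat p]$ applied to a smooth packet reproduces $i\,\psi$ up to the discretisation error.

```python
# Numerical check that [x, p] acts as i*ħ (with ħ = 1) on a smooth wave packet.
import numpy as np

N, L = 400, 40.0
x  = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

X = np.diag(x)                                                       # position operator
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)
P = -1j * D                                                          # momentum operator, ħ = 1

psi = np.exp(-x**2 / 2) * np.exp(1j * 0.7 * x)   # arbitrary smooth test packet
commutator_psi = X @ (P @ psi) - P @ (X @ psi)

# Away from the grid boundaries this should equal i * psi, up to the
# second-order finite-difference error of the discretised derivative.
interior = slice(10, N - 10)
error = np.max(np.abs(commutator_psi[interior] - 1j * psi[interior]))
print(f"max |[x,p]psi - i psi| on the interior of the grid: {error:.2e}")
```

This is only a discretised sketch of the operator statement, of course; the exact relation $[\hat x, \hat p] = i\hbar$ holds at the level of the continuum operators.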
So, the EPR paradox is as follows - if the particles don't have well-defined properties (such as spin in the case of EPR), then observing them will 'collapse' the wavefunction to a more precise value. Since the two particles are entangled, this would seem to transfer information FTL, violating special relativity. However, it certainly doesn't. Even if you now know the state of the other particle, you need to use slower-than-light transfer of information to do anything with it. Also, Bell's theorem, and Aspect's tests based off of it, show that quantum mechanics is correct, not local realism.

So how do we know that all particles have a non-commuting relationship, always and forever, under all conditions, even the ones we aren't able to measure or with technology or knowledge we don't yet possess? – Ehryk Sep 6 '12 at 10:47

What if you define position and momentum as the two real numbers that you measure at time t from experiment? (That's what most people consider "position" and "momentum" to be anyway.) What is this "new" definition of position and momentum? – Nick Sep 8 '12 at 23:11

Let me add this: I've taken QM and done those calculations for the commutation plenty of times to figure out what sets of compatible observables there are. But I could give someone a random formula for some random integral and divide by 6.3 and say "look, this always comes out to a real value -- thus position and momentum can't be simultaneously well-defined!" and that makes no sense whatsoever. Yeah, I know the whole spiel about eigenvalues and eigenstates and identical preparations of quantum systems, but what kind of physical experiment demonstrates this limit? – Nick Sep 8 '12 at 23:15

Noncommutativity of operators nicely explains emission spectra, which I believe were the subject of Heisenberg's (?) initial ponderings. There's a nice bit of this history explained at page 40 of this book by Alain Connes (there is probably a more focused reference for this history, but I don't know of one) – Ryan Thorngren Sep 9 '12 at 0:51

Heisenberg's relation is not tied to quantum mechanics. It is a relation between the width of a function and the width of its Fourier transform. The only way to get rid of it is to say that x and p are not a Fourier-transform pair, i.e. to get rid of QM.

So if by any means at all (entanglement, future machines, or divine powers) one could measure both position and momentum simultaneously, then all of quantum mechanics is false? There could be no QM in a universe in which this is possible?
There is no possible objections to such a statement, but for it to get accepted by the physicists, you need to prove that you can explain things in a simpler way and that you can predict something measurable. – Shaktyai Sep 6 '12 at 10:46 Because there is no proof whatsoever that QM fails. The day it fails we shall reconsider the question. However, there are many theorists working on alternative theories, so you have your chances. – Shaktyai Sep 6 '12 at 15:09 The wave formulation has in its seed the uncertainty relation. Let me be precise what is meant by the wave formulation: the amplitude over space points will give information about localization on space, while amplitude over momenta will give information about localization in momentum space. But for a function, the amplitude over momenta is nothing else but the Fourier transform of the space amplitude. The following is jut a mathematical fact, not up to physical discussion: the standard deviation, or the spread of the space amplitude, multiplied by the spread of the momenta amplitude (given by the Fourier transform of the former) will be bounded from below by one. So, it should be pretty clear that, as long as we stick to a wave formulation for matter fields, we are bound mathematically by the uncertainty relation. No work around over that. Why we stick to a wave formulation? because it works pretty nicely. The only way someone is going to seriously doubt that is the right description is to either: 1) find an alternate description that at least explains everything that the wave formulation describes, and hopefully some extra phenomena not predicted by wave formulation alone. 2) find an inconsistency in the wave formulation. In fact, if someone ever manages to measure both momenta and position for some electron below the Planck factor, it would be definitely an inconsistency in the wave formulation. It would mean we would have to tweak the De Broglie ansatz or something equally fundamental about it. Needless to say, nothing like that has happened share|cite|improve this answer It's a mathematical fact IF the particle can indeed be wholly represented by that specific function, right? So in the entanglement experiment, perhaps that function does not represent the state of TWO entangled particles? Maybe we have entanglement wrong, or maybe that function does not represent particles in certain conditions? Why are these possibilities not even discussed? – Ehryk Sep 6 '12 at 18:05 @Ehryk, because scientists, as all humans, tend to do the least amount of effort that will get the job done, it really does not make economical sense to do otherwise. As i said, there would be something to discuss if something in the experiment would not turn out as expected, but it does. If you want to do your life's mission to prove false the wave representation, then you need to build an experiment that will either confirm it or disprove it. then, people will likely start seriously discussing other possibilities. – lurscher Sep 6 '12 at 18:13 We can't prove Zeus doesn't exist, yet we don't accept his existence because of this. An idea shouldn't have to be 'debunked' to have a healthy amount of doubt in it, yet the wave formulation representing all particles, everywhere, at all times and locations seems to be presented 'beyond doubt' - so why is it stated with such certainty about unknowability and when challenged, the opposition gives in without so much as a mention? 
– Ehryk Sep 6 '12 at 18:26

(I'm not trying to prove it wrong, or stating that it is, I'm asking if it can be false and if so, why it's not treated as such) – Ehryk Sep 6 '12 at 18:28

@Ehryk, suppose someone starts asking why physicists assume that we only have one time dimension, and why we don't try to debunk that. We would reply the same thing; we have no reason to devote resources to debunk something that seems to fit so nicely with existing phenomena, so the ball is in the court of the person that insists that, say, two-dimensional time makes a great deal of sense for X or Y experiment. Then, if the experiment sounds like something that has not been tested, and is under budget to implement, maybe some experimentalists will try to do it. That is how science works – lurscher Sep 6 '12 at 18:30

If we want the position and the momentum to be well-defined at each moment of time, the particle has to be classical. We inherited these notions from classical mechanics, where they apply successfully. They also apply at the macroscopic level. So, it is a natural question to ask if we can keep their good behavior in QM. Frankly, there is nothing to stop us from doing this. We can conceive a world in which the particles are pointlike all the time, and move along definite trajectories, and this will "beat HUP". This was the first solution to be looked for. Einstein and de Broglie tried it, and not only them. Even Bohr, in his model, envisioned electrons as moving along definite trajectories in the atom (before QM). David Bohm was able to develop a model which has this property at a sub-quantum level, and in the meantime behaves like QM at the quantum level. The price to be paid is to allow interactions which "beat the speed of light", and to adjust the model whenever something incompatible with QM is found.

IMHO, this process of adjustments still continues today, and this looks very much like adding epicycles in the heliocentric model. But I don't want to be unfair to Bohm and others: it is possible to emulate QM like this, and if we learn QM facts which contradict it, it will always be possible to find such a model which behaves like QM, but also has a subquantum level which consists of classical-like point particles with definite positions and momenta. At this time, these examples prove that what you want is possible. One may argue that they are unaesthetic, because they are indeed more complicated than QM. But this doesn't mean that they are not true. Also, at this time they don't offer anything testable which QM can't offer. So, while QM describes what we observe, the additional features of hidden variable theories are not observable, more complicated, and violate special relativity. Or, if they don't violate special relativity, they contradict what QM predicts and what we observed in entanglement experiments like that of Alain Aspect.

If EPR presents us with two alternatives, (1) spooky action at a distance, (2) QM is incomplete, and that which you propose, (3) HUP is false, let's not forget that Aspect's experiment and many others confirmed alternative (1). Now, it would be much better for such models if they would stop adjusting themselves to mimic QM, and predict something new, like a violation of HUP. This would really be something.

In conclusion, yes, you are right and in principle it is possible to beat HUP. The reason why most physicists don't care too much about this is that the known ways to beat HUP are ugly, have hidden elements, and violate other principles.
But others consider them beautiful and useful, and if you are interested, start with Bohm's theory and the more recent developments of it.

Synopsis: The Certainty of Uncertainty
Violation of Heisenberg's Measurement-Disturbance Relationship by Weak Measurements (arXiv link)

This was rather helpful, so I appreciate it. I'm still just having difficulty wrestling with unknowability in relation to this; for example if we ever found a way to arrest a particle completely, we'd know its position and momentum (0) both at the same time, and while it violated HUP, it could just be said 'this particle cannot be represented by a wavefunction.' The reach of the HUP seems to include this though, with no provisions, and just be accepted so OBVIOUSLY you can't stop a particle. Would we just say the particle is classical in that instance? – Ehryk Sep 6 '12 at 18:20

@Cristi I see (and generally have no objections to) your argument, but that conclusion seems misleading. Yes, it's possible to beat HUP (by discarding quantum mechanics) in the same sort of sense that it's possible to create a macroscopic stable wormhole: not strictly ruled out, but there is no evidence to support it. So I think it's misleading to be saying that this is possible. – David Z Sep 6 '12 at 18:45

@David Zaslavsky: Thanks. To make my conclusion clear, and less misleading, I wrote the first, rather lengthy, paragraph. This contains for instance the statement "while QM describes what we observe, the additional features of hidden variable theories are not observable, more complicated, and violate special relativity." Anyway, I considered it would be more misleading to claim that one knows HUP can't be violated no matter what. – Cristi Stoica Sep 6 '12 at 19:31

@Ehryk: "What happened to particle-wave duality?". Particles are represented as wavefunctions. They are defined on the full space, but may have a small support (bump functions). In the limit, when concentrated at a point, the bump becomes Dirac's $\delta$ function. Then, it has definite position $x$, but indefinite wave vector, so it spreads immediately (this corresponds to HUP). Its "dual" is a pure wave (with definite wave vector $k_x$, hence momentum $p_x$). The "particle-wave" duality refers to these two extreme cases. But most of the time the "wavicles" are somewhere between these two extremes. – Cristi Stoica Sep 6 '12 at 20:41

@Ehryk: "How does a wave have mass?" They have momentum and energy: multiply the wave 4-vector by $\hbar$ and obtain the 4-momentum, so yes, they have mass. Interesting thing: the rest mass $m_0$ is the same, even though in general the wave 4-vector is undetermined. By "undetermined" you can understand that the wavefunction is a superposition of a large (usually infinite) number of pure wavefunctions. Pure wavefunctions have definite wave vector (hence momentum), but totally undetermined position. – Cristi Stoica Sep 6 '12 at 20:50

You are asking if a more complete theory might show that HUP is wrong and that position and momentum do exist simultaneously. But a more complete theory has to explain all the observations that QM already explains, and those observations already show that position and momentum cannot have definite values simultaneously.
This is known because when particles such as photons, electrons, or even molecules are sent through a pair of slits one at a time, an interference pattern appears on the detector plate which shows that the probability of the measured location and time follows a specific mathematical relationship. The fact that certain regions have zero probability shows that before measurement, the particles exist in a superposition of possible states, such that the wave function for those states can cancel out against other states, resulting in areas of low probability of observation. The observed relationships through increasingly complex experiments rule out possibilities other than what is described by QM.

The only way that QM could be superseded by a new theory is for new observations to be made that violate QM, but the new theory would still result in the same predictions as QM in the circumstances in which QM has already been tested. Since the HUP results directly from QM, the HUP would also follow from a new theory, with the only possible exception being extreme conditions, such as energies so high that a single particle is nearly a black hole. Basically you have to get used to the idea that particles are really quantized fluctuations in a field and that the field exists in a superposition of states. Any better theory will simply provide additional details about why the field behaves in that way.

"Accept it as true until it's debunked" is not scientific. "When a particle can be perfectly represented by a waveform and ONLY a waveform, then it cannot have definite momentum and position" is acceptable. Asserting the "When" is "Always and Forever" is not. – Ehryk Sep 15 '12 at 0:05

If it can help: Open timelike curves violate Heisenberg's uncertainty principle
"...and show that the Heisenberg uncertainty principle between canonical variables, such as position and momentum, can be violated in the presence of interaction-free CTCs..." Foundations of Physics, March 2012, Volume 42, Issue 3, pp 341-361
"...considering that a D-CTC-assisted quantum computer can violate both the uncertainty principle..." Phys Rev Lett, 102(21):210402, May 2009. arxiv 0811.1209
"...how a party with access to CTCs, or a 'CTC-assisted' party, can perfectly distinguish among a set of non-orthogonal quantum states..." Phys. Rev. A 82, 062330 (2010). arxiv 1003.1987v2
"...and can be interacted with in the way described by this simple model, our results confirm those of Brun et al that non-orthogonal states can be discriminated..."
"...Our work supports the conclusions of Brun et al that an observer can use interactions with a CTC to allow them to discriminate unknown, non-orthogonal quantum states – in contradiction of the uncertainty principle..."

The only way to make Heisenberg's principle irrelevant is to measure the speed and the position (to make it simple) of a fundamental particle. In other words, you would have to observe a particle without having it collide with a photon or react to a magnetic force, i.e. without interacting with it. There might be another way, which would be to find a very general law (but not statistical) which describes the characteristics (spin, speed, position etc.) of an elemental particle in an absolute way....
I think that's just the observer effect, described in another answer, and I can beat that by hypothesizing a future race that has developed a gravitational particle-position-and-momentum sensor machine, which does not use photons or interact with the particle in any way that would change the position or momentum (a read-only sensor). Even in this case, the HUP says they CANNOT be known simultaneously. – Ehryk Sep 6 '12 at 10:56

I want to know what evidence there is to support this, even in the case of such a hypothetical machine. – Ehryk Sep 6 '12 at 10:57

In this case you interact using the gravitational interaction, so that's almost the same. – Yves Sep 6 '12 at 11:21

Not really. Bombarding it with photons involves distinct events; surrounding it by a machine that is sensitive to the gravitation inside of it would only exert the same gravity that any other matter around it would, and if done as stated in my hypothetical, would not alter the position or momentum in any way once the particle has settled inside the machine. – Ehryk Sep 6 '12 at 11:24

Very interesting, and it would be possible if such a machine existed (my first point). But how would you measure anything other than a change in the surrounding gravitational field (which would imply an interaction with the particle), and how would you measure a spin? It sounds like your method is equivalent to trying to measure an absolute quantity of energy, or to "forcing" the position or momentum of your particle, a case which doesn't fall under Heisenberg's principle. This reasoning might end up as an Ouroboros. – Yves Sep 6 '12 at 11:33

"Heisenberg uncertainty principle" is a school term that is used in popular literature. It simply does not matter. What matters is the wavefunction and the Schrödinger equation. The EPR paradox experiment never used any explicit "uncertainty principle" in the proof.

As @MarkM pointed out above, what I meant but wasn't able to espouse was a 'non-commutation' property (a term I've not heard of in this context), or the claim that the exact position and momentum of a particle cannot be known simultaneously. I thought this was semantically equivalent to the Heisenberg Uncertainty Principle, which I guess it is not. – Ehryk Aug 13 '12 at 23:30

Also, from Wikipedia: "The uncertainty principle is a fundamental concept in quantum physics." (from the disambiguation page, main article here: ). Could you explain or give sources for it 'not matter'ing? Further, the wiki article on the EPR Paradox explicitly uses the Heisenberg Uncertainty Principle - I'm not claiming WP is any authority, but it would be the source of my confusion. – Ehryk Aug 13 '12 at 23:38

@Annix This isn't true. Firstly, Heisenberg's matrix mechanics is an equally valid formulation of QM as wave mechanics, see Zettili page 3. Second, the uncertainty principle is a part of wave mechanics. As you say, you can easily derive it from the Schrödinger equation. I find it odd that you say that this somehow makes the uncertainty principle irrelevant. You can't simultaneously know position and momentum to perfect accuracy, since localizing the position of the particle involves adding plane waves, which then makes the momentum uncertain. – Mark M Aug 13 '12 at 23:58

@Anixx If you claim that you may derive the HUP from the Schrödinger equation, you should show it. I actually think it is not possible, but I'm curious.
One usually derives the HUP from the commutation relations, and later one shows it is preserved by the unitary evolution. The Schr. equation tells us how the states evolve in time, while the HUP must be verified even in the initial state, so I'm very skeptical about your derivation. In any case, the HUP is at least as fundamental as the Schr. equation and it is a term very often used in technical papers and seminars. – drake Aug 14 '12 at 0:28

@drake You can't derive it from the SE, but from the wave mechanics formulation (which is what I guess Annix means). See the 'Proof of Kennard Inequality using Wave Mechanics' sub-section here:… However, I agree with you that the HUP is fundamental (see my above post.). – Mark M Aug 14 '12 at 1:32

Without gravity: The uncertainty principle is not really a principle because it is a derivable statement; it is not postulated. It is derivable and proven mathematically. Once you prove something you cannot unprove it. That means it cannot turn out to be false. For experimental verifications, see for example this article by Zeilinger et al and the references inside. Zeilinger is a world expert on quantum phenomena and it is expected that he will get a Nobel prize in the future.

With gravity (and that matters only at extremely high energy, as high as the Planck scale): Intuitively you can use the uncertainty principle to give an estimate of the energy needed to resolve a tiny region of space. For a sufficiently small region in space you will create a black hole. So there is a limit on the spatial resolution one can achieve, because of gravity. If you try to use higher energy you will create a bigger black hole. The bottom line is, the uncertainty principle does not make sense in this case because space loses its meaning and it cannot be defined operationally.

Things can be unproven if one of the axioms or postulates they are based on is proven false. HUP may be true if <x, y and z> are true, but it certainly is based on foundations (waveforms representing matter, for one) that are not infallible. – Ehryk Aug 14 '12 at 11:37

@Ehryk You cannot unprove something by changing the postulates, because then you are talking about a totally different problem. You can compare only 2 situations given the same postulates/axioms. The axioms are true and not false in the sense that the coherent structure coming out of those postulates leads to predictions that are consistent with experimental observations. The world is quantum mechanical. – Revo Aug 14 '12 at 16:03

You cannot unprove it as a model of how things could work, no, but you could show that it is just not the most accurate model of the world we live in - just like we can theorize about hyperbolic geometry as a model, though it's unlikely to be the model of reality. Is it the case that you could not have a variant of something like QM that produces similar results while in some instances allowing precise position and momentum values, in the same way Newton's laws were 'good enough' for the values we had measured at non-relativistic speeds up until that point? – Ehryk Aug 15 '12 at 1:43

@Ehryk No. You could not have had something similar to Newtonian mechanics that underlies Quantum Mechanics. What you are thinking of has been thought of a long time ago; it is known as hidden variable theories. It has been proven experimentally that something like Newtonian mechanics or any deterministic theory cannot be the basis of Quantum Mechanics.
May be you should also keep in mind the following main point: QM is more general than CM, hence it is more fundamental. Since QM is more general than CM, one should understand how CM emerges from QM, not the other way around. – Revo Aug 15 '12 at 1:50 @Ehryk One should understand CM in terms of QM not QM in terms of CM. – Revo Aug 15 '12 at 1:52 The way I see it, HUP cannot be disproven "at absolute zero", because absolute zero cannot be physically reached, er... due to HUP... is circular reasoning good enough? Let's try something else. Maybe try to imagine what would happen if HUP was to be violated? For one, I guess the proton - electron charges would cause one or two electrons to fall down into the nucleus, as HUP normally prevents that (if the electron fell down on nucleus we'd know it's position with great precision, requiring it to have indeterminate but large momentum, so it kind of settles for orbiting around nucleus). If you know more about the stuff than I do, try to imagine what else would happen, and how likely is that effect. For example, if HUP violation would imply violation of 2nd law of thermodynamics, this would render HUP violation pretty unlikely. That much from a layman. share|cite|improve this answer But then why can't we just say 'HUP is only for particles not at absolute zero'? It seems like violating it is 'not an option', even as above - so an electron falls into the nucleus. It has a measurable position and momentum. Why does HUP have to hold so strongly that we instead are comfortable with 'that particle must always have energy'? – Ehryk Sep 6 '12 at 18:31 The way I see it "absolute zero" is purely theoretical concept. Look up Bose-Einstein condensate, get a feeling for what happens at extremely low temperatures and then try to project that further to zero. Doesn't click. So saying "HUP is only for particles at absolute zero" is like saying "HUP is for all particles", for absolute zero can't be reached. – pafau k. Sep 6 '12 at 18:54 Do you have evidence or citations that nothing can be absolute zero? Or are you just asserting it? Note that saying 'we can't get to absolute zero' is different than 'no particle anywhere, at any time, can be at absolute zero.' – Ehryk Sep 6 '12 at 19:10 Let me quote the beginning of Wikipedia entry on absolute zero :) "Absolute zero is the theoretical temperature at which entropy reaches its minimum value", note the word theoretical. Temperature always flows from + to -, so the simple explanation is: you'd have to have something below absolute zero to cool something else to absolute zero. (this would violate laws of thermodynamics). – pafau k. Sep 6 '12 at 19:59 Transfer heat from hot to hotter? Decrease the volume of the container. Cool matter? Increase the volume of the container. In both cases, heat is not 'transferred', but temperature (average kinetic energy) has been changed without the interaction of other matter, either hotter or colder. – Ehryk Sep 10 '12 at 11:18 The Heisenberg uncertainty principle forms one of the most important pillars in physics. It can't be proven wrong because too many experimentally determined phenomena are a result of the uncertainty principle. However, something may be discovered in the future that can make a modification to the uncertainty principle - in a similar way that Newton's laws were modified by Einstein's special relativity. Saying that the uncertainty principle is wrong is like saying that Newton's law is wrong. In reply to the comments, I'm not saying that it can be falsified. 
It can't. In a classical sense, it will always be correct, in a similar way that Newton's law will always be correct.However, it can be modified. Until the day that all the open questions in physics have been resolved, how can you claim that the uncertainty principle can't be modified further? Do we know everything about extra dimensions? Do we know everything about string theory and physics at the Planck scale? By the way, it has already been modified. Please check this link. The uncertainty principle will always be correct. However, it can and has been modified. In its current formalism and interpretation, it could represent a special case of a larger underlying theory. The claim that the current formalism and limitations to the uncertainty principle are absolute and can never be modified under any circumstance in the universe, is a claim that does not obey the uncertainty principle itself. share|cite|improve this answer The uncertainty principle is a lot closer to uncertainty law than your answer lets on. It's not really about measurement so much as it's about a Fourier Transform. – Brandon Enright Jan 26 '14 at 23:47 The Heisenberg Uncertainty Principle is an unfalsifiable claim? All of (good) science is falsifiable. See the first paragraph: – Ehryk Jan 28 '14 at 6:12 protected by Qmechanic Jan 26 '14 at 23:37 Would you like to answer one of these unanswered questions instead?
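For reference, since the thread above turns on whether the uncertainty principle is derivable rather than postulated, here is the standard textbook statement being argued about (a sketch for orientation, not part of any of the posts). For any state and any two observables A and B, the Robertson relation gives

\sigma_A \, \sigma_B \;\geq\; \tfrac{1}{2} \left| \langle [A,B] \rangle \right| ,

and taking A = \hat{x}, B = \hat{p} with [\hat{x},\hat{p}] = i\hbar reduces this to the Kennard bound

\sigma_x \, \sigma_p \;\geq\; \frac{\hbar}{2} .

In wave mechanics the same bound follows from the Fourier relationship between a wavefunction and its momentum-space transform, which is the sense in which the comments above describe it as derived rather than assumed.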
Fundamental Physics 2013: What is the Big Picture?

November 26, 2013

2013 has been a great year for viXra. We already have more than 2000 new papers taking the total to over 6000. Many of them are about physics but other areas are also well covered. The range is bigger and better than ever and could never be summarised, so as the year draws to its end here instead is a snapshot of my own view of fundamental physics in 2013. Many physicists are reluctant to speculate about the big picture and how they see it developing. I think it would be useful if they were more willing to stick their neck out, so this is my contribution. I don't expect much agreement from anybody, but I hope that it will stimulate some interesting discussion and thoughts. If you don't like it you can always write your own summaries of physics or any other area of science and submit to viXra.

The discovery of the Higgs boson marks a watershed moment for fundamental physics. The standard model is complete but many mysteries remain. Most notably the following questions are unanswered and appear to require new physics beyond the standard model:

• What is dark matter?
• What was the mechanism of cosmic inflation?
• What mechanism led to the early production of galaxies and structure?
• Why does the strong interaction not break CP?
• What is the mechanism that led to matter dominating over anti-matter?
• What is the correct theory of neutrino mass?
• How can we explain fine-tuning of e.g. the Higgs mass and cosmological constant?
• How are the four forces and matter unified?
• How can gravity be quantised?
• How is information loss avoided for black holes?
• What is the small scale structure of spacetime?
• What is the large scale structure of spacetime?
• How should we explain the existence of the universe?

It is not unreasonable to hope that some further experimental input may provide clues that lead to some new answers. The Large Hadron Collider still has decades of life ahead of it while astronomical observation is entering a golden age with powerful new telescopes peering deep into the cosmos. We should expect direct detection of gravitational waves and perhaps dark matter, or at least indirect clues in the cosmic ray spectrum. But the time scale for new discoveries is lengthening and the cost is growing. It might be unrealistic to imagine the construction of new colliders on larger scales than the LHC.

A theist vs atheist divide increasingly polarises Western politics and science. It has already pushed the centre of big science out of the United States over to Europe. As the jet stream invariably blows weather systems across the Atlantic, so too will come their political ideals, albeit at a slower pace. It is no longer sufficient to justify fundamental science as a pursuit of pure knowledge when the men with the purse strings see it as an attack on their religion. The future of fundamental experimental science is beginning to shift further East and its future hopes will be found in Asia along with the economic prosperity that depends on it. The GDP of China is predicted to surpass that of the US and the EU within 5 years.

But there is another avenue for progress. While experiment is limited by the reality of global economics, theory is limited only by our intellect and imagination. The beasts of mathematical consistency have been harnessed before to pull us through. We are not limited by just what we can see directly, but there are many routes to explore.
Without the power of observation the search may be longer, but the constraints imposed by what we have already seen are tight. Already we have strings, loops, twistors and more. There are no dead ends. The paths converge back together taking us along one main highway that will lead eventually to an understanding of how nature works at its deepest levels. Experiment will be needed to show us what solutions nature has chosen, but the equations themselves are already signposted. We just have to learn how to read them and follow their course. I think it will require open minds willing to move away from the voice of their intuition, but the answer will be built on what has come before. Thirteen years ago at the turn of the millennium I thought it was a good time to make some predictions about how theoretical physics would develop. I accept the mainstream views of physicists but have unique ideas of how the pieces of the jigsaw fit together to form the big picture. My millennium notes reflected this. Since then much new work has been done and some of my original ideas have been explored by others, especially permutation symmetry of spacetime events (event symmetry), the mathematical theory of theories, and multiple quantisation through category theory. I now have a clearer idea about how I think these pieces fit in. On the other hand, my idea at the time of a unique discrete and natural structure underlying physics has collapsed. Naturalness has failed in both theory and experiment and is now replaced by a multiverse view which explains the fine-tuning of the laws of the universe. I have adapted and changed my view in the face of this experimental result. Others have refused to. Every theorist working on fundamental physics has a set of ideas or principles that guides their work and each one is different. I do not suppose that I have a gift of insight that allows me to see possibilities that others miss. It is more likely that the whole thing is a delusion, but perhaps there are some ideas that could be right. In any case I believe that open speculation is an important part of theoretical research and even if it is all wrong it may help others to crystallise their own opposing views more clearly. For me this is just a way to record my current thinking so that I can look back later and see how it succeeded or changed. The purpose of this article then is to give my own views on a number of theoretical ideas that relate to the questions I listed. The style will be pedagogical without detailed analysis, mainly because such details are not known. I will also be short on references, after all nobody is going to cite this. Here then are my views. Causality has been discussed by philosophers since ancient times and many different types of causality have been described. In terms of modern physics there are only two types of causality to worry about. Temporal causality is the idea that effects are due to prior causes, i.e. all phenomena are caused by things that happened earlier. Ontological causality is about explaining things in terms of simpler principles. This is also known as reductionism. It does not involve time and it is completely independent of temporal causality. What I want to talk about here is temporal causality. Temporal causality is a very real aspect of nature and it is important in most of science. Good scientists know that it is important not to confuse correlation with causation. Proper studies of cause and effect must always use a control to eliminate this easy mistake. 
Many physicists, cosmologists and philosophers think that temporal causality is also important when studying the cosmological origins of the universe. They talk of the evolving cosmos,  eternal inflation, or numerous models of pre-big-bang physics or cyclic cosmologies. All of these ideas are driven by thinking in terms of temporal causality. In quantum gravity we find Causal Sets and Causal Dynamical Triangulations, more ideas that try to build in temporal causality at a fundamental level. All of them are misguided. The problem is that we already understand that temporal causality is linked firmly to the thermodynamic arrow of time. This is a feature of the second law of thermodynamics, and thermodynamics is a statistical theory that emerges at macroscopic scales from the interactions of many particles. The fundamental laws themselves can be time reversed (along with CP to be exact). Physical law should not be thought of in terms of a set of initial conditions and dynamical equations that determine evolution forward in time. It is really a sum over all possible histories between past and future boundary states. The fundamental laws of physics are time symmetric and temporal causality is emergent. The origin of time’s arrow can be traced back to the influence of the big bang singularity where complete symmetry dictated low entropy. The situation is even more desperate if you are working on quantum gravity or cosmological origins. In quantum gravity space and time should also be emergent, then the very description of temporal causality ceases to make sense because there is no time to express it in terms of. In cosmology we should not think of explaining the universe in terms of what caused the big bang or what came before. Time itself begins and ends at spacetime singularities. When I was a student around 1980 symmetry was a big thing in physics. The twentieth century started with the realisation that spacetime symmetry was the key to understanding gravity. As it progressed gauge symmetry appeared to eventually explain the other forces. The message was that if you knew the symmetry group of the universe and its action then you knew everything. Yang-Mills theory only settled the bosonic sector but with supersymmetry even the fermionic  side would follow, perhaps uniquely. It was not to last. When superstring theory replaced supergravity the pendulum began its swing back taking away symmetry as a fundamental principle. It was not that superstring theory did not use symmetry, it had the old gauge symmetries, supersymmetries, new infinite dimensional symmetries, dualities, mirror symmetry and more, but there did not seem to be a unifying symmetry principle from which it could be derived. There was even an argument called Witten’s Puzzle based on topology change that seemed to rule out a universal symmetry. The spacetime diffeomorphism group is different for each topology so how could there be a bigger symmetry independent of the solution? The campaign against symmetry strengthened as the new millennium began. Now we are told to regard gauge symmetry as a mere redundancy introduced to make quantum field theory appear local. Instead we need to embrace a more fundamental formalism based on the amplituhedron where gauge symmetry has no presence. While I embrace the progress in understanding that string theory and the new scattering amplitude breakthroughs are bringing, I do not accept the point of view that symmetry has lost its role as a fundamental principle. 
In the 1990s I proposed a solution to Witten’s puzzle that sees the universal symmetry for spacetime as permutation symmetry of spacetime events. This can be enlarged to large-N matrix groups to include gauge theories. In this view spacetime is emergent like the dynamics of a soap bubble formed from intermolecular interaction. The permutation symmetry of spacetime is also identified with the permutation symmetry of identical particles or instantons or particle states. My idea was not widely accepted even when shortly afterwards matrix models for M-theory were proposed that embodied the principle of event symmetry exactly as I envisioned. Later the same idea was reinvented in a different form for quantum graphity with permutation symmetry over points in space for random graph models, but still the fundamental idea is not widely regarded. While the amplituhedron removes the usual gauge theory it introduces new dual conformal symmetries described by Yangian algebras. These are quantum symmetries unseen in the classical Super-Yang-Mills theory but they combine permutations symmetry over states with spacetime symmetries in the same way as event-symmetry. In my opinion different dual descriptions of quantum field theories are just different solutions to a single pregeometric theory with a huge and pervasive universal symmetry. The different solutions preserve different sectors of this symmetry. When we see different symmetries in different dual theories we should not conclude that symmetry is less fundamental. Instead we should look for the greater symmetry that unifies them. After moving from permutation symmetry to matrix symmetries I took one further step. I developed algebraic symmetries in the form of necklace Lie algebras with a stringy feel to them. These have not yet been connected to the mainstream developments but I suspect that these symmetries will be what is required to generalise the Yangian symmetries to a string theory version of the amplituhedron. Time will tell if I am right. We know so much about cosmology, yet so little. The cosmic horizon limits our view to an observable universe that seems vast but which may be a tiny part of the whole. The heat of the big bang draws an opaque veil over the first few hundred thousand years of the universe. Most of the matter around us is dark and hidden. Yet within the region we see the ΛCDM standard model accounts well enough for the formation of galaxies and stars. Beyond the horizon we can reasonably assume that the universe continues the same for many more billions of light years, and the early big bang back to the first few minutes or even seconds seems to be understood. Cosmologists are conservative people. Radical changes in thinking such as dark matter, dark energy, inflation and even the big bang itself were only widely accepted after observation forced the conclusion, even though evidence built up over decades in some cases. Even now many happily assume that the universe extends to infinity looking the same as it does around here, that the big bang is a unique first event in the universe, that space-time has always been roughly smooth, that the big bang started hot, and that inflation was driven by scalar fields. These are assumptions that I question, and there may be other assumptions that should be questioned. These are not radical ideas. They do not contradict any observation, they just contradict the dogma that too many cosmologist live by. 
The theory of cosmic inflation was one of the greatest leaps in imagination that has advanced cosmology. It solved many mysteries of the early universe at a stroke and Its predictions have been beautifully confirmed by observations of the background radiation. Yet the mechanism that drives inflation is not understood. It is assumed that inflation was driven by a scalar inflaton field. The Higgs field is mostly ruled out (exotic coupling to gravity not withstanding), but it is easy to imagine that other scalar fields remain to be found. The problem lies with the smooth exit from the inflationary period. A scalar inflaton drives a DeSitter universe. What would coordinate a graceful exit to a nice smooth universe? Nobody knows. I think the biggest clue is that the standard cosmological model has a preferred rest frame defined by commoving galaxies and the cosmic background radiation. It is not perfect on small scales but over hundreds of millions of light years it appears rigid and clear. What was the origin of this reference frame? A DeSitter inflationary model does not possess such a frame, yet something must have co-ordinated its emergence as inflation ended. These ideas simply do not fit together if the standard view of inflation is correct. In my opinion this tells us that inflation was not driven by a scalar field at all. The Lorentz geometry during the inflationary period must have been spontaneously broken by a vector field with a non-zero component pointing in the time direction. Inflation must have evolved in a systematic and homogenous way through time while keeping this fields direction constant over large distances smoothing out any deviations as space expanded. The field may have been a fundamental gauge vector or a composite condensate of fermions with a non-zero vector expectation value in the vacuum. Eventually a phase transition ended the symmetry breaking phase and Lorentz symmetry was restored to the vacuum, leaving a remnant of the broken symmetry in the matter and radiation that then filled the cosmos. The required vector field may be one we have not yet found, but some of the required features are possessed by the massive gauge bosons of the weak interaction. The mass term for a vector field can provide an instability favouring timelike vector fields because the signature of the metric reverses sign in the time direction. I am by no means convinced that the standard model cannot explain inflation in this way, but the mechanism could be complicated to model. Another great mystery of cosmology is the early formation of galaxies. As ever more powerful telescopes have penetrated back towards times when the first galaxies were forming, cosmologists have been surprised to find active galaxies rapidly producing stars, apparently with supermassive black holes ready-formed at their cores. This contradicts the predictions of the cold dark matter model according to which the stars and black holes should have formed later and more slowly. The conventional theory of structure formation is very Newtonian in outlook. After baryogenesis the cosmos was full of gas with small density fluctuations left over from inflation. As radiation decoupled, these anomalies caused the gas and dark matter to gently coalesce under their own weight into clumps that formed galaxies. This would be fine except for the observation of supermassive black holes in the early universe. How did they form? 
I think that the formation of these black holes was driven by large scale gravitational waves left over from inflation rather than density fluctuations. As the universe slowed its inflation there would be parts that slowed a little sooner and others a little later. Such small differences would have been amplified by the inflation, leaving a less than perfectly smooth universe for matter to form in. As the dark matter followed geodesics through these waves in spacetime it would be focused just as light on the bottom of a swimming pool is focused by surface waves into intricate light patterns. At the caustics the dark matter would come together at high speed to be compressed in structures along lines and surfaces. Large black holes would form at the sharpest focal points and along strands defined by the caustics. The stars and remaining gas would then gather around the black holes, pulled in by their gravitation to form the galaxies. As the universe expanded the gravitational waves would fade, leaving the structure of galactic clusters to mark where they had been.

The greatest question of cosmology asks how the universe is structured on large scales beyond the cosmic horizon. We know that dark energy is making the expansion of the universe accelerate so it will endure for eternity, but we do not know if it extends to infinity across space. Cosmologists like to assume that space is homogeneous on large scales, partly because it makes cosmology simpler and partly because homogeneity is consistent with observation within the observable universe. If this is assumed then the question of whether space is finite or infinite depends mainly on the local curvature. If the curvature is positive then the universe is finite. If it is zero or negative the universe is infinite unless it has an unusual topology formed by tessellating polyhedrons larger than the observable universe. Unfortunately observation fails to tell us the sign of the curvature. It is near zero but we can't tell which side of zero it lies on. This then is not a question I can answer, but the holographic principle in its strongest form contradicts a finite universe. An infinite homogeneous universe also requires an explanation of how the big bang can be coordinated across an infinite volume. This leaves only more complex solutions in which the universe is not homogeneous. How can we know if we cannot see past the cosmic horizon? There are many inhomogeneous models such as the bubble universes of eternal inflation, but I think that there is too much reliance on temporal causality in that theory and I discount it. My preference is for a white hole model of the big bang where matter density decreases slowly with distance from a centre and the big bang singularity itself is local and finite with an outer universe stretching back further. Because expansion is accelerating we will never see much outside the universe that is currently visible, so we may never know its true shape.

It has long been suggested that the laws of physics are fine-tuned to allow the emergence of intelligent life. This strange illusion of intelligent design could be explained in atheistic terms if in some sense many different universes existed with different laws of physics. The observation that the laws of physics suit us would then be no different in principle from the observation that our planet suits us. Despite the elegance of such anthropomorphic reasoning many physicists including myself resisted it for a long time. Some still resist it.
The problem is that the laws of physics show some signs of being unique according to theories of unification. In 2001 I, like many, thought that superstring theory and its overarching M-theory demonstrated this uniqueness quite persuasively. If there was only one possible unified theory with no free parameters, how could an anthropic principle be viable? At that time I preferred to think that fine-tuning was an illusion. The universe would settle into the lowest energy stable vacuum of M-theory and this would describe the laws of physics with no room for choice. The ability of the universe to support life would then just be the result of sufficient complexity. The apparent fine-tuning would be an illusion resulting from the fact that we see only one form of intelligent life so far. I imagined distant worlds populated by other forms of intelligence in very different environments from ours based on other solutions to evolution making use of different chemical combinations and physical processes. I scoffed at science fiction stories where the alien life looked similar to us except for different skin textures or different numbers of appendages.

My opinion started to change when I learnt that string theory actually has a vast landscape of vacuum solutions and they can be stabilized to such an extent that we need not be living at the lowest energy point. This means that the fundamental laws of physics can be unique while different low energy effective theories can be realized as solutions. Anthropic reasoning was back on the table.

It is worrying to think that the vacuum is waiting to decay to a lower energy state at any place and moment. If it did so, an expanding sphere of energy would expand at the speed of light changing the effective laws of physics as it spread out, destroying everything in its path. Many times in the billions of years and billions of light years of the universe in our past light cone, there must have been neutron stars that collided with immense force and energy. Yet not once has the vacuum been toppled to bring doom upon us. The reason is that the energies at which the vacuum state was forged in the big bang are at the Planck scale, many orders of magnitude beyond anything that can be repeated in even the most violent events of astrophysics. It is the immense range of scales in physics that creates life and then allows it to survive.

The principle of naturalness was spelt out by 't Hooft in the 1980s, except he was too smart to call it a principle. Instead he called it a "dogma". The idea was that the mass of a particle or other physical parameters could only be small if they would be zero given the realisation of some symmetry. The smallness of fermion masses could thus be explained by chiral symmetry, but the smallness of the Higgs mass required supersymmetry. For many of us the dogma was finally put to rest when the Higgs mass was found by the LHC to be unnaturally small without any sign of the accompanying supersymmetric partners. Fine-tuning had always been a feature of particle physics but with the Higgs it became starkly apparent. The vacuum would not tend to squander its range of scope for fine-tuning, limited as it is by the size of the landscape. If there is a cheaper way the typical vacuum will find it, so that there is enough scope left to tune nuclear physics and chemistry for the right components required by life. Therefore I expect supersymmetry or some similar mechanism to come in at some higher scale to stabilise the Higgs mass and the cosmological constant.
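To make the naturalness problem referred to above slightly more concrete, here is the standard back-of-the-envelope statement (a schematic reminder, not an argument from the original post; \lambda stands for a generic coupling): quantum corrections drag the Higgs mass-squared up towards the highest scale \Lambda in the theory,

m_H^2 \;=\; m_{H,\mathrm{bare}}^2 + \delta m_H^2 , \qquad \delta m_H^2 \;\sim\; \frac{\lambda}{16\pi^2}\,\Lambda^2 ,

so if \Lambda is anywhere near the Planck scale the bare value must cancel the correction to dozens of decimal places to leave the observed m_H of about 125 GeV. Supersymmetry removes the quadratic sensitivity by cancelling these loops against superpartner loops, which is the sense in which it would stabilise the Higgs mass.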
It may be a very long time indeed before that can be verified. Now that I have learnt to accept anthropomorphism, the multiverse and fine-tuning, I see the world in a very different way. If nature is fine-tuned for life it is plausible that there is only one major route to intelligence in the universe. Despite the plethora of new planets being discovered around distant stars, the Earth appears as a rare jewel among them. Its size and position in the goldilocks zone around a long-lived stable star in a quiet part of a well-behaved galaxy is not typical. Even the moon and the outer gas giants seem to play their role in keeping us safe from natural instabilities. Yet if we were too safe, life would have settled quickly into a stable form that could not evolve to higher functions. Regular cataclysmic events in our history were enough to cause mass extinction events without destroying life altogether, allowing it to develop further and further until higher intelligence emerged. Microbial life may be relatively common on other worlds but we are exquisitely rare. No sign of alien intelligence drifts across time and space from distant worlds.

I now think that where life exists it will be based on DNA and cellular structures much like all life on Earth. It will require water and carbon, and to evolve to higher forms it will require all the commonly available elements, each of which has its function in our biology or the biology of the plants on which we depend. Photosynthesis may be the unique way in which a stable carbon cycle can complement our need for oxygen. Any intelligent life will be much like us and it will be rare. This I see as the most significant prediction of fine-tuning and the multiverse.

String Theory

String theory was the culmination of twentieth century developments in particle physics leading to ever more unified theories. By 2000 physicists had what appeared to be a unique mother theory capable of including all known particle physics in its spectrum. They just had to find the mechanism that collapsed its higher dimensions down to our familiar 4 dimensional spacetime. Unfortunately it turned out that there were many such mechanisms and no obvious means to figure out which one corresponds to our universe. This leaves string theorists in a position unable to predict anything useful that would confirm their theory. Some people have claimed that this makes the theory unscientific and that physicists should abandon the idea and look for a better alternative. Such people are misguided.

String theory is not just a random set of ideas that people tried. It was the end result of exploring all the logical possibilities for the ways in which particles can work. It is the only solution to the problem of finding a consistent interaction of matter with gravity in the limit of weak fields on flat spacetime. I don't mean merely that it is the only solution anyone could find, it is the only solution that can work. If you throw it away and start again you will only return to the same answer by the same logic. What people have failed to appreciate is that quantum gravity acts at energy scales well above those that can be explored in accelerators or even in astronomical observations. Expecting string theory to explain low energy particle physics was like expecting particle physics to explain biology.
In principle it can, but to derive biochemistry from the standard model you would need to work out the laws of chemistry and nuclear physics from first principles and then search through the properties of all the possible chemical compounds until you realised that DNA can self-replicate. Without input from experiment this is an impossible program to put into practice. Similarly, we cannot hope to derive the standard model of particle physics from string theory until we understand the physics that controls the energy scales that separate them. There are about 12 orders of magnitude in energy scale that separate chemical reactions from the electroweak scale and 15 orders of magnitude that separate the electroweak scale from the Planck scale. We have much to learn.

How then can we test string theory? To do so we will need to look beyond particle physics and find some feature of quantum gravity phenomenology. That is not going to be easy because of the scales involved. We can't reach the Planck energy, but sensitive instruments may be able to probe very small distance scales as small variations of effects over large distances. There is also some hope that a remnant of the initial big bang remains in the form of low frequency radio or gravitational waves. But first string theory must predict something to observe at such scales and this presents another problem. Despite nearly three decades of intense research, string theorists have not yet found a complete non-perturbative theory of how string theory works. Without it, predictions at the Planck scale are not in any better shape than predictions at the electroweak scale.

Normally quantised theories explicitly include the symmetries of the classical theories they quantise. As a theory of quantum gravity, string theory should therefore include diffeomorphism invariance of spacetime, and it does, but not explicitly. If you look at string theory as a perturbation on a flat spacetime you find gravitons, the quanta of gravitational interactions. This means that the theory must respect the principles of general relativity in small deviations from the flat spacetime, but it is not explicitly described in a way that makes the diffeomorphism invariance of general relativity manifest. Why is that?

Part of the answer coming from non-perturbative results in string theory is that the theory allows the topology of spacetime to change. Diffeomorphisms on different topologies form different groups, so there is no way that we could see diffeomorphism invariance explicitly in the formulation of the whole theory. The best we could hope would be to find some group that has every diffeomorphism group as a subgroup and look for invariance under that. Most string theorists just assume that this argument means that no such symmetry can exist and that string theory is therefore not based on a principle of universal symmetry. I on the other hand have proposed that the universal group must contain the full permutation group on spacetime events. The diffeomorphism group for any topology can then be regarded as a subgroup of this permutation group. String theorists don't like this because they see spacetime as smooth and continuous whereas permutation symmetry would suggest a discrete spacetime. I don't think these two ideas are incompatible. In fact we should see spacetime as something that does not exist at all in the foundations of string theory. It is emergent.
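The subgroup claim above can be stated in one line (this just makes the logic explicit, it adds no physics): a diffeomorphism is in particular a bijection of the underlying set of events, so for a spacetime M with any choice of topology

\mathrm{Diff}(M) \;\subset\; \mathrm{Sym}(M) ,

where \mathrm{Sym}(M) is the group of all permutations of the points of M. Different topologies pick out different subgroups, but once the underlying set of events is fixed they all sit inside the same permutation group.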
The permutation symmetry on events is really to be identified with the permutation symmetry that applies to particle states in quantum mechanics. A smooth picture of spacetime then emerges from the interactions of these particles, which in string theory are the partons of the strings. This was an idea I formulated twenty years ago, building symmetries that extend the permutation group first to large-N matrix groups and then to necklace Lie algebras that describe the creation of string states. The idea was vindicated when matrix string theory was invented shortly after, but very few people appreciated the connection. The matrix theories vindicated the matrix extensions in my work. Since then I have been waiting patiently for someone to vindicate the necklace Lie algebra symmetries as well.

In recent years we have seen a new approach to quantum field theory for supersymmetric Yang-Mills which emphasises a dual conformal symmetry rather than the gauge symmetry. This is a symmetry found in the quantum scattering amplitudes rather than the classical limit. The symmetry takes the form of a Yangian symmetry related to the permutations of the states. I find it plausible that this will turn out to be a remnant of necklace Lie algebras in the more complete string theory. There seems to be still some way to go before this new idea expressed in terms of an amplituhedron is fully worked out, but I am optimistic that I will be proven right again, even if few people recognise it again.

Once this reformulation of string theory is complete we will see string theory in a very different way. Spacetime, causality and even quantum mechanics may be emergent from the formalism. It will be non-perturbative and rigorously defined. The web of dualities connecting string theories and the holographic nature of gravity will be derived exactly from first principles. At least that is what I hope for. In the non-perturbative picture it should be clearer what happens at high energies when space-time breaks down. We will understand the true nature of the singularities in black holes and the big bang. I cannot promise that these things will be enough to provide predictions that can be observed in real experiments or cosmological surveys, but it would surely improve the chances.

Loop Quantum Gravity

If you want to quantise a classical system such as a field theory there is a range of methods that can be used. You can try a Hamiltonian approach, or a path integral approach for example. You can change the variables or introduce new ones, or integrate out some degrees of freedom. Gauge fixing can be handled in various ways, as can renormalisation. The answers you get from these different approaches are not quite guaranteed to be equivalent. There are some choices of operator ordering that can affect the answer. However, what we usually find in practice is that there are natural choices imposed by symmetry principles or other requirements of consistency, and the different results you get using different methods are either equivalent or very nearly so, if they lead to a consistent result at all.

What should this tell us about quantum gravity? Quantising the gravitational field is not so easy. It is not renormalisable in the same way that other gauge theories are, yet a number of different methods have produced promising results. Supergravity follows the usual field theory methods while string theory uses a perturbative generalisation derived from the old S-matrix approach.
Loop Quantum Gravity makes a change of variables and then follows a Hamiltonian recipe. There are other methods such as Twistor Theory, Non-Commutative Geometry, Dynamical Triangulations, Group Field Theory, Spin Foams, Higher Spin Theories etc. None has met with success in all directions but each has its own successes in some directions. While some of these approaches have always been known to be related, others have been portrayed as rivals. In particular the subject seems to be divided between methods related to string theory and methods related to Loop Quantum Gravity. It has always been my expectation that the two sides will eventually come together, simply because of the fact that different ways of quantising the same classical system usually do lead to equivalent results.

Superficially strings and loops seem like related geometric objects, i.e. one dimensional structures in space tracing out two dimensional world sheets in spacetime. String theorists and Loop Quantum Gravitists alike have scoffed at the suggestion that these are the same thing. They point out that strings pass through each other, unlike the loops which form knot states. String theory also works best in ten dimensions while LQG can only be formulated in 4. String theory needs supersymmetry and therefore matter, while LQG tries to construct first a consistent theory of quantum gravity alone.

I see these differences very differently from most physicists. I observe that when strings pass through each other they can interact, and the algebraic diagrams that represent this are very similar to the skein relations used to describe the knot theory of LQG. String theory does indeed use the same mathematics of quantum groups to describe its dynamics. If LQG has not been found to require supersymmetry or higher dimensions it may be because the perturbative limit around flat spacetime has not yet been formulated and that is where the consistency constraints arise. In fact the successes and failures of the two approaches seem complementary. LQG provides clues about the non-perturbative background independent picture of spacetime that string theorists need.

Methods from Non-Commutative Geometry have been incorporated into string theory and other approaches to quantum gravity for more than twenty years, and in the last decade we have seen Twistor Theory applied to string theory. Some people see this convergence as surprising but I regard it as natural and predictable given the nature of the process of quantisation. Twistors have now been applied to scattering theory and to supergravity in 4 dimensions in a series of discoveries that has recently led to the amplituhedron formalism. Although the methods evolved from observations related to supersymmetry and string theory, they seem in some ways more akin to the nature of LQG. Twistors were originated by Penrose as an improvement on his original spin-network idea, and it is these spin-networks that describe states in LQG.

I think that what has held LQG back is that it separates space and time. This is a natural consequence of the Hamiltonian method. LQG respects diffeomorphism invariance, unlike string theory, but it is really only the spatial part of the symmetry that it uses. Spin networks are three dimensional objects that evolve in time, whereas Twistor Theory tries to extend the network picture to 4 dimensions.
People working on LQG have tended to embrace the distinction between space and time in their theory and have made it a feature, claiming that time is philosophically different in nature from space. I don't find that idea appealing at all. The clear lesson of relativity has always been that they must be treated the same up to a sign.

The amplituhedron makes manifest the dual conformal symmetry of Yang-Mills theory in the form of an infinite dimensional Yangian symmetry. These algebras are familiar from the theory of integrable systems, where they are deformed to bring in quantum groups. In fact the scattering amplitude theory that applies to the planar limit of Yang-Mills does not use this deformation, but here lies the opportunity to unite the theory with Loop Quantum Gravity, which does use the deformation. Of course LQG is a theory of gravity, so if it is related to anything it would be supergravity or string theory, not Yang-Mills.

In the most recent developments the scattering amplitude methods have been extended to supergravity by making use of the observation that gravity can be regarded as formally the square of Yang-Mills. Progress has thus been made on formulating 4D supergravity using twistors, but so far without this deformation. A surprise observation is that supergravity in this picture requires a twistor string theory to make it complete. If the Yangian deformation could be applied to these strings then they could form knot states just like the loops in LQG. I can't say if it will pan out that way, but I can say that it would make perfect sense if it did. It would mean that LQG and string theory would finally come together, and methods that have grown out of LQG such as spin foams might be applied to string theory.

The remaining mystery would be why this correspondence worked only in 4 spacetime dimensions. Both twistors and LQG use related features of the symmetry of 4 dimensional spacetime that mean it is not obvious how to generalise to higher dimensions, while string theory and supergravity have higher forms that work up to 11 dimensions. Twistor theory is related to conformal field theory, whose symmetry is a reduced symmetry coming from a geometry that is 2 dimensions higher. E.g. the 4 dimensional conformal group is the same as the 6 dimensional spin groups. By a unique coincidence the 6 dimensional symmetries are isomorphic to unitary or special linear groups over 4 complex variables, so these groups have the same representations. In particular the fundamental 4 dimensional representation of the unitary group is the same as the Weyl spinor representation in six real dimensions. This is where the twistors come from, so a twistor is just a Weyl spinor. Such spinors exist in any even number of dimensions but without the special properties found in this particular case. It will be interesting to see how the framework extends to higher dimensions using these structures.

Quantum Mechanics

Physicists often chant that quantum mechanics is not understood. To paraphrase some common claims: If you think you understand quantum mechanics you are an idiot. If you investigate what it is about quantum mechanics that is so irksome you find that there are several features that can be listed as potentially problematical: indeterminacy, non-locality, contextuality, observers, wave-particle duality and collapse. I am not going to go through these individually; instead I will just declare myself a quantum idiot if that is what understanding implies.
All these features of quantum mechanics are experimentally verified and there are strong arguments that they cannot be easily circumvented using hidden variables. If you take a multiverse view there are no conceptual problems with observers or wavefunction collapse. People only have problems with these things because they are not what we observe at macroscopic scales and our brains are programmed to see the world classically. This can be overcome through logic and mathematical understanding in the same way as the principles of relativity. I am not alone in thinking that these things are not to be worried about, but there are some other features of quantum mechanics that I have a more extraordinary view of.

Another aspect of quantum mechanics that gives some cause for concern is its linearity. Theories that are linear are usually too simple to be interesting. Everything decouples into modes that act independently in a simple harmonic way. In quantum mechanics we can in principle diagonalise the Hamiltonian to reduce the whole universe to a sum over energy eigenstates. Can everything we experience be encoded in that one dimensional spectrum? In quantum field theory this is not a problem, but there we have spacetime as a frame of reference relative to which we can define a privileged basis for the Hilbert space of states. It is no longer just the energy spectrum that counts. But what if spacetime is emergent? What then do we choose our Hilbert basis relative to? The symmetry of the Hilbert space must be broken for this emergence to work, but linear systems do not break their symmetries. I am not talking about the classical symmetries of the type that gets broken by the Higgs mechanism. I mean the quantum symmetries in phase space.

Suppose we accept that string theory describes the underlying laws of physics, even if we don't know which vacuum solution the universe selects. Doesn't string theory also embody the linearity of quantum mechanics? It does so long as you already accept a background spacetime, but in string theory the background can be changed by dualities. We don't know how to describe the framework in which these dualities are manifest, but I think there is reason to suspect that quantum mechanics is different in that space, and it may not be linear.

The distinction between classical and quantum is not as clear-cut as most physicists like to believe. In perturbative string theory the Feynman diagrams are given by string worldsheets which can branch when particles interact. Is this the classical description or the quantum description? The difference between classical and quantum is that the worldsheets will extremise their area in the classical solutions but follow any history in the quantum. But then we already have multi-particle states and interactions in the classical description. This is very different from quantum field theory. Stepping back though, we might notice that quantum field theory also has some schizophrenic characteristics. The Dirac equation is treated as classical with non-linear interactions even though it is a relativistic Schrödinger equation, with quantum features such as spin already built-in. After you second quantise you get a sum over all possible Feynman graphs much like the quantum path integral sum over field histories, but in this comparison the Feynman diagrams act as classical configurations. What is this telling us? My answer is that the first and second quantisations are the first two in a sequence of multiple iterated quantisations.
Each iteration generates new symmetries and dimensions. For this to work the quantised layers must be non-linear, just as the interaction between electrons and photons is non-linear in the so-called first-quantised field theory. The idea of multiple quantisations goes back many years and did not originate with me, but I have a unique view of its role in string theory based on my work with necklace Lie algebras, which can be constructed in an iterated procedure where one necklace dimension is added at each step. Physicists working on scattering amplitudes are at last beginning to see that the symmetries in nature are not just those of the classical world. There are dual-conformal symmetries that are completed only in the quantum description. These seem to merge with the permutation symmetries of the particle statistics. The picture is much more complex than the one painted by the traditional formulations of quantum field theory.

What then is quantisation? When a Fock space is constructed the process is formally like an exponentiation. In the category picture we start to see an origin of what quantisation is, because exponentiation generalises to the process of constructing all functions between sets, or all functors between categories, and so on to higher n-categories. Category theory seems to encapsulate the natural processes of abstraction in mathematics. This I think is what lies at the base of quantisation. Variables become functional operators, objects become morphisms. Quantisation is a particular form of categorification, one we don't yet understand. Iterating this process constructs higher categories until the unlimited process itself forms an infinite omega-category that describes all natural processes in mathematics and in our multiverse. Crazy ideas? Ill-formed? Yes, but I am just saying – that is the way I see it.

Black Hole Information

We have seen that quantum gravity can be partially understood by using the constraint that it needs to make sense in the limit of small perturbations about flat spacetime. This led us to strings and supersymmetry. There is another domain of thought experiments that can tell us a great deal about how quantum gravity should work, and it concerns what happens when information falls into a black hole. The train of arguments is well known so I will not repeat them here. The first conclusion is that the entropy of a black hole is given by its horizon area in Planck units and the entropy in any other volume is less than the same Bekenstein bound taken from the surrounding surface. This leads to the holographic principle that everything that can be known about the state inside the volume can be determined from a state on its surface. To explain how the inside of a black hole can be determined from its event horizon or outside, we use a black hole correspondence principle which uses the fact that we cannot observe both the inside and then the outside at a later time. Although the reasoning that leads to these conclusions is long and unsupported by any observation, it is in my opinion quite robust and is backed up by theoretical models such as AdS/CFT duality.

There are some further conclusions that I would draw from black hole information that many physicists might disagree with. If the information in a volume is limited by the surrounding surface then it means we cannot be living in a closed universe with a finite volume like the surface of a 4-sphere.
If we did, you could extend the boundary until it shrank back to zero and conclude that there is no information in the universe. Some physicists prefer to think that the Bekenstein bound should be modified on large scales so that this conclusion cannot be drawn, but I think the holographic principle holds perfectly at all scales and the universe must be infinite, or finite with a different topology.

Recently there has been a claim that the holographic principle leads to the conclusion that the event horizon must be a firewall through which nothing can pass. This conclusion is based on the assumption that information inside a black hole is replicated outside through entanglement. If you drop two particles with fully entangled spin states into a black hole you cannot have another particle outside that is also entangled with them, so this does not make sense. I think the information is replicated on the horizon in a different way. It is my view that the apparent information in the bulk volume field variables must be mostly redundant and that this implies a large symmetry where the degrees of symmetry match the degrees of freedom in the fields or strings. Since there are fundamental fermions it must be a supersymmetry. I call a symmetry of this sort a complete symmetry. We know that when there is gauge symmetry there are corresponding charges that can be determined on a boundary by measuring the flux of the gauge field. In my opinion a generalization of this using a complete symmetry accounts for holography. I don't think that this complete symmetry is a classical symmetry. It can only be known properly in a full quantum theory, much as dual conformal gauge symmetry is a quantum symmetry.

Some physicists assume that if you could observe Hawking radiation you would be looking at information coming from the event horizon. It is not often noticed that the radiation is thermal, so if you observe it you cannot determine where it originated from. There is no detail you could focus on to measure the distance of the source. It makes more sense to me to think of this radiation as emanating from a backward singularity inside the black hole. This means that a black hole, once formed, is also a white hole. This may seem odd but it is really just an extension of the black hole correspondence principle. I also agree with those who say that as black holes shrink they become indistinguishable from heavy particles that decay by emitting radiation.

Every theorist working on fundamental physics needs some background philosophy to guide their work. They may think that causality and time are fundamental or that they are emergent, for example. They may have the idea that deeper laws of physics are simpler. They may like reductionist principles or instead prefer a more anthropomorphic world view. Perhaps they think the laws of physics must be discrete, combinatorial and finite. They may think that reality and mathematics are the same thing, or that reality is a computer simulation, or that it is in the mind of God. These things affect the theorist's outlook and influence the kind of theories they look at. They may be meta-physical and sometimes completely untestable in any real sense, but they are still important to the way we explore and understand the laws of nature. In that spirit I have formed my own elaborate ontology as my way of understanding existence and the way I expect the laws of nature to work out.
It is not complete or finished and it is not a scientific theory in the usual sense, but I find it a useful guide for where to look and what to expect from scientific theories. Someone else may take a completely different view that appears contradictory but may ultimately come back to the same physical conclusions. That I think is just the way philosophy works. In my ontology it is universality that counts most. I do not assume that the most fundamental laws of physics should be simple or beautiful or discrete or finite. What really counts is universality, but that is a difficult concept that requires some explanation. It is important not to be misled by the way we think. Our mind is a computer running a program that models space, time and causality in a way that helps us live our lives but that does not mean that these things are important in the fundamental laws of physics. Our intuition can easily mislead our way of thinking. It is hard understand that time and space are interlinked and to some extent interchangeable but we now know from the theory of relativity that this is the case. Our minds understand causality and free will, the flow of time and the difference between past and future but we must not make the mistake of assuming that these things are also important for understanding the universe. We like determinacy, predictability and reductionism but we can’t assume that the universe shares our likes. We experience our own consciousness as if it is something supernatural but perhaps it is no more than a useful feature of our psychology, a trick to help us think in a way that aids our survival. Our only real ally is logic. We must consider what is logically possible and accept that most of what we observe is emergent rather than fundamental. The realm of logical possibilities is vast and described by the rules of mathematics. Some people call it the Platonic realm and regard it as a multiverse within its own level of existence, but such thoughts are just mindtricks. They form a useful analogy to help us picture the mathematical space when really logical possibilities are just that. They are possibilities stripped of attributes like reality or existence or place. Philosophers like to argue about whether mathematical concepts are discovered or invented. The only fair answer is both or neither. If we made contact with alien life tomorrow it is unlikely that we would find them playing chess. The rules of chess are mathematical but they are a human invention. On the other hand we can be quite sure that our new alien friends would know how to use the real numbers if they are at least as advanced as us. They would also probably know about group theory, complex analysis and prime numbers. These are the universal concepts of mathematics that are “out there” waiting to be discovered. If we forgot them we would soon rediscover them in order to solve general problems. Universality is a hard concept to define. It distinguishes the parts of mathematics that are discovered from those that are merely invented, but there is no sharp dividing line between the two. Universal concepts are not necessarily simple to define. The real numbers for example are notoriously difficult to construct if you start from more basic axiomatic constructs such as set theory. To do that you have to first define the natural numbers using the cardinality of finite sets and Peano’s axioms. This is already an elaborate structure and it is just the start. 
You then extend to the rationals and then to the reals using something like the Dedekind cut. Not only is the definition long and complicated, but it is also very non-unique. The aliens may have a different definition and may not even consider set theory as the right place to start, but it is sure and certain that they would still possess the real numbers as a fundamental tool with the same properties as ours.  It is the higher level concept that is universal, not the definition. Another example of universality is the idea of computability. A universal computer is one that is capable of following any algorithm. To define this carefully we have to pick a particular mathematical construction of a theoretical computer with unlimited memory space. One possibility for this is a Turing machine but we can use any typical programming language or any one of many logical systems such as certain cellular automata. We find that the set of numbers or integer sequences that they can calculate is always the same. Computability is therefore a universal idea even though there is no obviously best way to define it. Universality also appears in complex physical systems where it is linked to emergence. The laws of fluid dynamics, elasticity and thermodynamics describe the macroscopic behaviour of systems build form many small elements interacting, but the details of those interactions are not important. Chaos arises in any nonlinear system of equations at the boundary where simple behaviour meets complexity. Chaos we find is described by certain numbers that are independent of how the system is constructed. These examples show how universality is of fundamental importance in physical systems and motivates the idea that it can be extended to the formation of the fundamental laws too. Universality and emergence play a key role in my ontology and they work at different levels. The most fundamental level is the Platonic realm of mathematics. Remember that the use of the word realm is just an analogy. You can’t destroy this idea by questioning the realms existence or whether it is inside our minds. It is just the concept that contains all logically consistent possibilities. Within this realm there are things that are invented such as the game of chess, or the text that forms the works or Shakespeare or Gods. But there are also the universal concepts that any advanced team of mathematicians would discover to solve general problems they invent. I don’t know precisely how these universal concepts emerge from the platonic realm but I use two different analogies to think about it. The first is emergence in complex systems that give us the rules of chaos and thermodynamics. This can be described using statistical physics that leads to critical systems and scaling phenomena where universal behaviour is found. The same might apply to to the complex system consisting of the collection of all mathematical concepts. From this system the laws of physics may emerge as universal behaviour. This analogy is called the Theory of Theories by me or the Mathematical Universe Hypothesis by another group. However this statistical physics analogy is not perfect. Another way to think about what might be happening is in terms of the process of abstraction. We know that we can multiply some objects in mathematics such as permutations or matrices and they follow the rules of an abstract structure called a group. Mathematics has other abstract structures like fields and rings and vector spaces and topologies. 
These are clearly important examples of universality, but we can take the idea of abstraction further. Groups, fields, rings etc. all have a definition of isomorphism and also something equivalent to homomorphism. We can look at these concepts abstractly using category theory, which is a generalisation of set theory encompassing these concepts. In category theory we find universal ideas such as natural transformations that help us understand the lower level abstract structures. This process of abstraction can be continued giving us higher dimensional n-categories.  These structures also seem to be important in physics. I think of emergence and abstraction as two facets of the deep concept of universality. It is something we do not understand fully but it is what explains the laws of physics and the form they take at the most fundamental level. What physical structures emerge at this first level? Statistical physics systems are very similar in structure to quantum mechanics both of which are expressed as a sum over possibilities. In category theory we also find abstract structures very like quantum mechanics systems including structures analogous to Feynman diagrams. I think it is therefore reasonable to assume that some form of quantum physics emerges at this level. However time and unitarity do not. The quantum structure is something more abstract like a quantum group. The other physical idea present in this universal structure is symmetry, but again in an abstract form more general than group theory. It will include supersymmetry and other extensions of ordinary symmetry. I think it likely that this is really a system described by a process of multiple quantisation where structures of algebra and geometry emerge but with multiple dimensions and a single universal symmetry. I need a name for this structure that emerges from the platonic realm so I will call it the Quantum Realm. When people reach for what is beyond M-Theory or for an extension of the amplituhedrom they are looking for this quantum realm. It is something that we are just beginning to touch with 21st century theories. From this quantum realm another more familiar level of existence emerges. This is a process analogous to superselection of a particular vacuum. At this level space and time emerge and the universal symmetry is broken down to the much smaller symmetry. Perhaps a different selection would provide different numbers of space and time dimensions and different symmetries. The laws of physics that then emerge are the laws of relativity and particle physics we are familiar with. This is our universe. Within our universe there are other processes of emergence which we are more familiar with. Causality emerges from the laws of statistical physics within our universe with the arrow of time rooted in the big bang singularity. Causality is therefore much less fundamental than quantum mechanics and space and time. The familiar structures of the universe also emerge within including life. Although this places life at the least fundamental level we must not forget the anthropic influence it has on the selection of our universe from the quantum realm. Experimental Outlook Theoretical physics continues to progress in useful directions but to keep it on track more experimental results are needed. Where will they come from? In recent decades we have got used to mainly negative results in experimental particle physics, or at best results that merely confirm theories from 50 years ago. 
The significance of negative results is often understated to the extent that the media portray them as failures. This is far from being the case. The LHC’s negative results for SUSY and other BSM exotics may be seen as disappointing but they have led to the conclusion that nature appears fine-tuned at the weak scale. Few theorists had considered the implications of such a result before, but now they are forced to. Instead of wasting time on simplified SUSY theories they will turn their efforts to the wider parameter space or they will look for other alternatives. This is an important step forward. A big question now is what will be the next accelerator? The ILS or a new LEP would be great Higgs factories, but it is not clear that they would find enough beyond what we already know. Given that the Higgs is at a mass that gives it a narrow width I think it would be better to build a new detector for the LHC that is specialised for seeing diphoton and 4 lepton events with the best possible energy and angular resolution. The LHC will continue to run for several decades and can be upgraded to higher luminosity and even higher energy. This should be taken advantage of as much as possible. However, the best advance that would make the LHC more useful would be to change the way it searches for new physics. It has been too closely designed with specific models in mind and should have been run to search for generic signatures of particles with the full range of possible quantum numbers, spin, charge, lepton and baryon number. Even more importantly the detector collaborations should be openly publishing likelihood numbers for all possible decay channels so that theorists can then plug in any models they have or will have in the future and test them against the LHC results. This would massively increase the value of the accelerator and it would encourage theorists to look for new models and even scan the data for generic signals. The LHC experimenters have been far too greedy and lazy by keeping the data to themselves and considering only a small number of models. There is also a movement to construct a 100 TeV hadron collider. This would be a worthwhile long term goal and even if it did not find new particles that would be a profound discovery about the ways of nature.  If physicists want to do that they are going to have to learn how to justify the cost to contributing nations and their tax payers. It is no use talking about just the value of pure science and some dubiously justified spin-offs. CERN must reinvent itself as a postgraduate physics university where people learn how to do highly technical research in collaborations that cross international frontiers. Most will go on to work in industry using the skills they have developed in technological research or even as technology entrepreneurs. This is the real economic benefit that big physics brings and if CERN can’t track how that works and promote it they cannot expect future funding. With the latest results from the LUX experiments hope of direct detection of dark matter have faded. Again the negative result is valuable but it may just mean that dark matter does not interact weakly at all. The search should go on but I think more can be done with theory to model dark matter and its role in galaxy formation. If we can assume that dark matter started out with the same temperature as the visible universe then it should be possible to model its evolution as it settled into galaxies and estimate the mass of the dark matter particle. 
This would help in searching for it. Meanwhile the searches for dark matter will continue, including other possible forms such as axions. Astronomical experiments such as AMS-2 may find important evidence, but it is hard to be optimistic there. A better prospect exists for observations of the dark age of the universe using new radio telescopes such as the Square Kilometre Array, which could detect hydrogen gas clouds as they formed the first stars and galaxies. Neutrino physics is one area that has seen positive results that go beyond the standard model. This is therefore an important area to keep going. They need to settle the question of whether neutrinos are Majorana spinors and produce figures for neutrino masses. Observation of cosmological high energy neutrinos is also an exciting area, with the IceCube experiment proving its value. Gravitational wave searches have continued to be a disappointment but this is probably due to over-optimism about the nature of cosmological sources rather than a failure of the theory of gravitational waves themselves. The new run with Advanced LIGO must find them, otherwise the field will be in trouble. The next step would be LISA or a similar detector in space. Precision measurements are another area that could bring results. Measurements of the electron dipole moment can be further improved and there must be other similar opportunities for inventive experimentalists. If a clear anomaly is found it could set the scale for new physics and justify the next generation of accelerators. There are other experiments that could yield positive results, such as cosmic ray observatories and low frequency radio antennae that might find an echo from the big bang beyond the veil of the primordial plasma. But if I had to nominate one area for new effort it would have to be the search for proton decay. So far results have been negative, pushing the proton lifetime to at least 10^34 years, but this has helped eliminate the simplest GUT models that predicted a shorter lifetime. SUSY models predict lifetimes of over 10^36 years, but this can be reached if we are willing to set up a detector around a huge volume of clear Antarctic ice. IceCube has demonstrated the technology, but for proton decay a finer array of light detectors is needed to catch the lower energy radiation from proton decay. If decays were detected they would give us positive information about physics at the GUT scale. This is something of enormous importance and its priority must be raised. Apart from these experiments we must rely on the advance of precision technology and the inventiveness of the experimental physicist. Ideas such as the holometer may have little hope of success, but each negative result tells us something, and if someone gets lucky a new flood of experimental data will nourish our theories. There is much that we can still learn.
Super Yang-Mills vs Loop Quantum Gravity
July 19, 2013
Some of you may remember my Xtranormal video from a few years back, "A Double Take on the String Wars." Here it is if you missed it. If you enjoyed that you will be pleased to know that there is a new sequel called "SYM and LQG: The Same Bloody Thing". No offense intended to anyone who may accidentally resemble the characters in this video :-) This is just for fun.
We need to find the Theory of Everything
January 27, 2013
Each week the New Scientist runs a one minute interview with a scientist and last week it was Lisa Randall who told us that we shouldn't be obsessed with finding a theory of everything.
It is certainly true that there is a lot more to physics than this goal, but it is an important one and I think more effort should be made to get the right people together to solve this problem now. It is highly unlikely that NS will ever feature me in their column, but there is nothing to stop me answering questions put to others, so here are the answers I would give to the questions asked of Lisa Randall, which also touch on the recent discovery of the Higgs(-very-like) boson.
Doesn't every physicist dream of one neat theory of everything?
Most physicists work on completely different things, but ever since Einstein's attempts at a unified field theory (and probably well before) many physicists at the leading edge of theoretical physics have indeed had this dream. In recent years scientific goals have been dictated more by funding agencies who want realistic proposals for projects. They have also noticed that all previous hopes that we were close to a final theory have been dashed by further discoveries that were not foreseen at the time. So physicists have drifted away from such lofty dreams.
So is a theory of everything a myth?
No. Although the so-called final theory won't explain everything in physics it is still the most important milestone we have to reach. Yes it is a challenging journey and we don't know how far away it is, but it could be just round the corner. We must always try to keep moving in the right direction. Finding it is crucial to making observable predictions based on quantum aspects of gravity. Instead people are trying to do quantum gravity phenomenology based on very incomplete theories and it is just not working out.
But isn't beautiful mathematics supposed to lead us to the truth?
Beauty and simplicity have played their part in the work of individual physicists such as Einstein and Dirac, but what really counts is consistency. By that I mean consistency with experiment and mathematical self-consistency. Gauge theories were used in the standard model, not really because they embody the beauty of symmetry, but because gauge theories are the only renormalisable theories for vector bosons that were seen to exist. It was only when the standard model was shown to be renormalisable that it became popular and replaced other approaches. Only renormalisable theories in particle physics can lead to finite calculations that predict the outcome of experiments, but there are still many renormalisable theories and only consistency with experiment can complete the picture. Consistency is also the guide that takes us into theories beyond the standard model, such as string theory, which is needed for quantum gravity to be consistent at the perturbative level, and the holographic principle, which is needed for a consistent theory of black hole thermodynamics.
Is it a problem, then, that our best theories of particle physics and cosmology are so messy?
Relatively speaking they are not messy at all. A few short equations are enough to account for almost everything we can observe over an enormous range of scales from particle physics to cosmology. The driving force now is the need to combine gravity and the other forces in a form that is consistent non-perturbatively and to explain the few observational facts that the standard models don't account for, such as dark matter and inflation.
This may lead to a final theory that is more unified, but some aspects of physics may be determined by historical events not determined by the final theory, in which case particle physics could always be just as messy and complicated as biology. Even aside from those aspects, the final theory itself is unlikely to be simple in the sense that you could describe it fully to a non-expert.
Did the discovery of the Higgs boson – the "missing ingredient" of particle physics – take you by surprise last July?
We knew that it would be discovered or ruled out by the end of 2012 in the worst case. In the end it was found a little sooner. This was partly because it was not quite at the hardest place to find in the mass range, which would have been around 118 GeV. Another factor was that the diphoton excess was about 70% bigger than expected. If it had been as predicted they would have required three times as much data to get it from the diphoton excess, but the ZZ channel would have helped. This over-excess could be just the luck of the statistics or due to theoretical underestimates, but it could also be a sign of new physics beyond the standard model. Another factor that helped them push towards the finish line in June was that it became clear that a CMS+ATLAS combination was going to be sufficient for discovery. If they could not reach the 5-sigma goal for at least one of the individual experiments then they would have to face the embarrassment of an unofficial discovery announced on this blog and elsewhere. This drove them to use the harder multivariate analysis methods and include everything that bolstered the diphoton channel, so that in the end they both got the discovery in July and not a few weeks later when an official combination could have been prepared.
Are you worried that the Higgs is the only discovery so far at the LHC?
It is a pity that nothing else has been found so far because the discovery of any new particles beyond the standard model would immediately lead to a new blast of theoretical work that could take us up to the next scale. If nothing else is found at the LHC after all its future upgrades it could be the end of accelerator-driven physics until they invent a way of reaching much higher energies. However, negative results are not completely null. They have already ruled out whole classes of theories that could have been correct, and even if there is nothing else to be seen at the electroweak scale it will force us to some surprising conclusions. It could mean that physics is fine-tuned at the electroweak scale just as it is at the atomic scale. This would not be a popular outcome, but you can't argue with experiment and accepting it would enable us to move forward. Further discoveries would have to come from cosmology, where inflation and dark matter remain unexplained. If accelerators have had their day then other experiments that look to the skies will take over and physics will still progress, just not quite as fast as we had hoped.
What would an extra dimension look like?
They would show up as the existence of heavy particles that are otherwise similar to known particles, plus perhaps even black holes and massive gravitons at the LHC. But the theory of large extra dimensions was always an outsider with just a few supporters. Theories with extra dimensions such as string theory probably only show these features at much higher energy scales that are inaccessible to any collider.
What if we don't see one?
Some argue that seeing nothing else at the LHC would be best, as it would motivate new ideas. I think you are making that up. I never heard anyone say that finding nothing beyond the Higgs would be the best result. I did hear some people say that finding no Higgs would be the best result because it would have been so unexpected and would have forced us to find the alternative correct theory that would have been there. The truth of course is that this was a completely hypothetical situation. The reason we did not have a good alternative theory to the Higgs mechanism is because there isn’t one and the Higgs boson is in fact the correct answer. Update: Motl has a followup with similar views and some additional points here Dirac Medal for Chris Isham July 1, 2011 Chris Isham has been awarded this years Dirac Medal of the Institute of Physics for his work on quantum gravity. For information about his many contributions to the field you can just look at the IOP page about the award. In addition to the Dirac Medal the IOP has just announced a whole slew of other medals named after British Physicists. The Newton Medal this year goes to Leo Kadanoff who noticed the important role of scale invariance and universality in critical systems. The Faraday Medal is taken by Alan Watson for leadership of the Pierre Auger Observatory that studies ultra high energy cosmic rays. The Chadwick Medal is won by Terry Wyatt for work on Hadron Colliders. Another Imperial College prof being honoured is Arkady Tseytlin for string theory research who got the Rayleigh Medal. There are a load more which you can read about here. Congratulations to them all. Rebooting the Cosmos June 5, 2011 Mike Duff on M-Theory in New Scientist June 2, 2011 This weeks New Scientist features four articles by Mike Duff on M-Theory in which he explains the motivations behind it and answers his critics. It is worthy that New Scientist has allowed him to attack some earlier articles in the magazine that attempted to compare cosmic strings with pseudoscience, and M-Theory with religion. My impression is that more people are beginning to realize that there are good reasons why many of the best theorists are not giving up on string theory just because a few people use such rhetoric to try to discredit its successes. M-Theory came to prominence in 1995 when Ed Witten started to take the idea of supermemberane theories in 11 dimensions seriously, but its history goes back to at least 1987 when Mike Duff and others classified the possibilities for membrane theories in various dimensions. They showed that the recently discovered superstring theories might emerge from dimensional reductions with the membranes wrapped round to form the strings. Physicists still don’t have a full description of the dynamics of these membranes but a partial solution is provided by Matrix Models. In his New Scientist article, Mike Duff explains how M-Theory came about. It is important to appreciate that it is not just a wild idea that someone came up with at random. It follows from a need to bring together the standard model of particle physics with general relativity in a way free of the infinities that plague some approaches. The five Superstring theories in 10 dimensions are the only obvious solutions to this problem and they can all be unified into a unique framework using M-Theory. No other approach answers the same questions. But M-theory is not without its problems. 
There is an embarrassment of choice when you look at ways to reduce it to 4 spacetime dimensions in order to match it to physics accessible to experiment. It is hoped that the Large Hadron Collider will discover supersymmetry, which would bring some hope that a connection between string theories and physics at reachable energies is possible. The trouble is that string theory does not make a definitive prediction that supersymmetry will be observed, and conversely the existence of supersymmetry does not necessarily imply string theory. The best we can say is that there is a correlation between these two ideas, so the discovery or not of supersymmetry in the Higgs sector will have a strong influence on the acceptability of string theory. A second unresolved problem with M-Theory is the absence of a full non-perturbative formulation that is required to make possible any analysis of its phenomenology at the Planck scale. These shortcomings have been explored in a paper on the arXiv last week by Steve Giddings. Mike Duff has identified some relationships between string theories and the information theory of qubits that might just be the first signs of where to look for such a formulation. In work with Borsten, Dahanayake, Ebrahim, Marrani and Rubens, Duff has explored a subtle relationship between the classification of STU black holes and 4-qubit entanglement. He takes pains to stress that for the moment at least they "are only claiming that it is useful, not deep." The idea that the laws of physics emerge from the dynamics of information has been around for some time and has been boosted in recent years by the theoretical success of the holographic principle and entropic gravity. Whether or not this is a way to understand the fundamentals of M-theory is unclear. It's a hard problem but not without hope. Having been lucky enough to meet Mike Duff and some of his students, I know that he remains committed to his work on M-theory and the search for a deeper understanding of its principles. He is unusually open to new ideas but is quick to get to the mathematical details and dismiss anything that simply does not work out. It is not so hard to invent ideas using some persuasive numerology that sound good through the written word, but nature prefers the sound logic of equations.
Horizon: Before the Big Bang
October 15, 2010
• Andrei Linde: Multiverse inspired eternal inflation
• Lee Smolin: Black holes spawning baby universes
• Michio Kaku: Vacuum fluctuation from empty space
• Neil Turok: Colliding branes
• Laura Mersini Houghton: String cosmology
Superfluid helium-4
A superfluid is a state of matter in which the matter behaves like a fluid with zero viscosity and zero entropy. The substance, which looks like a normal liquid, will flow without friction past any surface, which allows it to continue to circulate over obstructions and through pores in containers which hold it, subject only to its own inertia. Known as a major facet in the study of quantum hydrodynamics and macroscopic quantum phenomena, the superfluidity effect was discovered by Pyotr Kapitsa[1] and by John F. Allen and Don Misener[2] in 1937. It has since been described through phenomenological and microscopic theories. The formation of the superfluid is known to be related to the formation of a Bose–Einstein condensate. This is made obvious by the fact that superfluidity occurs in liquid helium-4 at far higher temperatures than it does in helium-3. Each helium-4 atom is a boson, by virtue of its zero spin. Helium-3 atoms, however, are fermions, which can form bosons only by pairing with each other at much lower temperatures, in a process similar to the electron pairing in superconductivity. In the 1950s, Hall and Vinen performed experiments establishing the existence of quantized vortex lines in superfluid helium.[3] In the 1960s, Rayfield and Reif established the existence of quantized vortex rings.[4] Packard has observed the intersection of vortex lines with the free surface of the fluid,[5] and Avenel and Varoquaux have studied the Josephson effect in superfluid helium-4.[6] In 2006 a group at the University of Maryland visualized quantized vortices by using small tracer particles of solid hydrogen.[7]
Fig. 1. Phase diagram of 4He, also showing the λ-line.
Fig. 2. Heat capacity of liquid 4He at saturated vapor pressure as a function of the temperature. The peak at T = 2.17 K marks a (second-order) phase transition.
Fig. 3. Temperature dependence of the relative normal and superfluid densities ρn/ρ and ρs/ρ as functions of T.
Figure 1 is the phase diagram of 4He.[8] It is a p-T diagram indicating the solid and liquid regions, separated by the melting curve (between the liquid and solid state), and the liquid and gas regions, separated by the vapor-pressure line. The latter ends in the critical point, where the difference between gas and liquid disappears. The diagram shows the remarkable property that 4He is liquid even at absolute zero; helium-4 is only solid at pressures above 25 bar. Figure 1 also shows the λ-line. This is the line that separates the two fluid regions in the phase diagram, indicated by He-I and He-II. In the He-I region the helium behaves like a normal fluid; in the He-II region the helium is superfluid. The name lambda-line comes from the specific heat versus temperature plot, which has the shape of the Greek letter λ.[9][10] See figure 2, which shows a peak at 2.172 K, the so-called λ-point of 4He. Below the lambda line the liquid can be described by the so-called two-fluid model. It behaves as if it consists of two components: a normal component, which behaves like a normal fluid, and a superfluid component with zero viscosity and zero entropy. The ratios of the respective densities ρn/ρ and ρs/ρ, with ρn (ρs) the density of the normal (superfluid) component and ρ the total density, depend on temperature and are represented in figure 3.[11] By lowering the temperature, the fraction of the superfluid density increases from zero at Tλ to one at zero kelvin.
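As a rough quantitative picture of the curves in figure 3, a commonly quoted empirical fit for the normal-fluid fraction below the λ-point is ρn/ρ ≈ (T/Tλ)^5.6. The short sketch below uses this fit; the power law is my own illustrative addition and is an approximation, not a result taken from this article.

```python
# Sketch of the He-II two-fluid fractions, assuming the commonly quoted
# empirical fit rho_n/rho ~ (T/T_lambda)**5.6 (an approximation).

T_LAMBDA = 2.172  # lambda-point of 4He in kelvin

def normal_fraction(T):
    """Approximate normal-fluid fraction rho_n/rho for 0 <= T <= T_lambda."""
    if not 0.0 <= T <= T_LAMBDA:
        raise ValueError("fit only applies below the lambda-point")
    return (T / T_LAMBDA) ** 5.6

def superfluid_fraction(T):
    """Superfluid fraction rho_s/rho = 1 - rho_n/rho."""
    return 1.0 - normal_fraction(T)

if __name__ == "__main__":
    for T in (0.5, 1.0, 1.5, 2.0, T_LAMBDA):
        print(f"T = {T:5.3f} K  rho_n/rho = {normal_fraction(T):.3f}  "
              f"rho_s/rho = {superfluid_fraction(T):.3f}")
```

With this fit the normal fraction at 1 K is only of order one percent, consistent with the statement below that the helium is almost completely superfluid below 1 K.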
Below 1 K the helium is almost completely superfluid. It is possible to create density waves of the normal component (and hence of the superfluid component, since ρn + ρs = constant) which are similar to ordinary sound waves. This effect is called second sound. Due to the temperature dependence of ρn (figure 3) these waves in ρn are also temperature waves.
Fig. 4. Helium II will "creep" along surfaces in order to find its own level – after a short while, the levels in the two containers will equalize. The Rollin film also covers the interior of the larger container; if it were not sealed, the helium II would creep out and escape.
Fig. 5. The liquid helium is in the superfluid phase. As long as it remains superfluid, it creeps up the wall of the cup as a thin film. It comes down on the outside, forming a drop which will fall into the liquid below. Another drop will form – and so on – until the cup is empty.
Film flow
Many ordinary liquids, like alcohol or petroleum, creep up solid walls, driven by their surface tension. Liquid helium also has this property, but, in the case of He-II, the flow of the liquid in the layer is restricted not by its viscosity but by a critical velocity which is about 20 cm/s. This is a fairly high velocity, so superfluid helium can flow relatively easily up the wall of containers, over the top, and down to the same level as the surface of the liquid inside the container, in a siphon effect as illustrated in figure 4. In a container lifted above the liquid level, it forms visible droplets as seen in figure 5.
Superfluid hydrodynamics
The equation of motion for the superfluid component, in a somewhat simplified form,[12] is given by Newton's law

$$\vec F = M_4 \frac{\mathrm{d}\vec v_s}{\mathrm{d}t}.$$

The mass M4 is the molar mass of 4He and $\vec v_s$ is the velocity of the superfluid component. The time derivative is the so-called hydrodynamic derivative, i.e. the rate of increase of the velocity when moving with the fluid. In the case of superfluid 4He in the gravitational field the force is given by[13][14]

$$\vec F = - \vec \nabla (\mu + M_4 g z).$$

In this expression μ is the molar chemical potential, g the gravitational acceleration, and z the vertical coordinate. Thus we get

$$M_4 \frac{\mathrm{d}\vec v_s}{\mathrm{d}t} = - \vec \nabla (\mu + M_4 g z). \qquad (1)$$

Eq. (1) only holds if vs is below a certain critical value, which usually is determined by the diameter of the flow channel.[15][16] In classical mechanics the force is often the gradient of a potential energy. Eq. (1) shows that, in the case of the superfluid component, the force contains a term due to the gradient of the chemical potential. This is the origin of the remarkable properties of He-II such as the fountain effect.
Fig. 6. Integration path for calculating μ at arbitrary p and T.
Fig. 7. Demonstration of the fountain pressure. The two vessels are connected by a superleak through which only the superfluid component can pass.
Fig. 8. Demonstration of the fountain effect. A capillary tube is "closed" at one end by a superleak and is placed into a bath of superfluid helium and then heated. The helium flows up through the tube and squirts like a fountain.
Fountain pressure
In order to rewrite Eq. (1) in more familiar form we use the general formula

$$\mathrm{d}\mu = V_m\,\mathrm{d}p - S_m\,\mathrm{d}T. \qquad (2)$$

Here Sm is the molar entropy and Vm the molar volume. With Eq. (2), μ(p,T) can be found by a line integration in the p-T plane. First we integrate from the origin (0,0) to (p,0), so at T = 0.
Next we integrate from (p,0) to (p,T), so with constant pressure (see figure 6). In the first integral dT = 0 and in the second dp = 0. With Eq. (2) we obtain

$$\mu(p,T) = \mu(0,0) + \int_0^p V_m(p^\prime,0)\,\mathrm{d}p^\prime - \int_0^T S_m(p,T^\prime)\,\mathrm{d}T^\prime. \qquad (3)$$

We are interested only in cases where p is small, so that Vm is practically constant. So

$$\int_0^p V_m(p^\prime,0)\,\mathrm{d}p^\prime = V_{m0}\,p, \qquad (4)$$

where Vm0 is the molar volume of the liquid at T = 0 and p = 0. The other term in Eq. (3) is also written as a product of Vm0 and a quantity pf which has the dimension of pressure:

$$\int_0^T S_m(p,T^\prime)\,\mathrm{d}T^\prime = V_{m0}\,p_f. \qquad (5)$$

The pressure pf is called the fountain pressure. It can be calculated from the entropy of 4He which, in turn, can be calculated from the heat capacity. For T = Tλ the fountain pressure is equal to 0.692 bar. With a density of liquid helium of 125 kg/m³ and g = 9.8 m/s² this corresponds with a liquid-helium column of 56 meter height. So, in many experiments, the fountain pressure has a bigger effect on the motion of the superfluid helium than gravity. With Eqs. (4) and (5), Eq. (3) obtains the form

$$\mu(p,T) = \mu_0 + V_{m0}(p - p_f). \qquad (6)$$

Substitution of Eq. (6) in (1) gives

$$\rho_0 \frac{\mathrm{d}\vec v_s}{\mathrm{d}t} = - \vec\nabla (p + \rho_0 g z - p_f), \qquad (7)$$

with ρ₀ = M4/Vm0 the density of liquid 4He at zero pressure and temperature. Eq. (7) shows that the superfluid component is accelerated by gradients in the pressure and in the gravitational field, as usual, but also by a gradient in the fountain pressure. So far Eq. (5) has only mathematical meaning, but in special experimental arrangements pf can show up as a real pressure. Figure 7 shows two vessels both containing He-II. The left vessel is supposed to be at zero kelvin (Tl = 0) and zero pressure (pl = 0). The vessels are connected by a so-called superleak. This is a tube, filled with a very fine powder, so the flow of the normal component is blocked. However, the superfluid component can flow through this superleak without any problem (below a critical velocity of about 20 cm/s). In the steady state vs = 0, so Eq. (7) implies

$$p_l + \rho_0 g z_l - p_{fl} = p_r + \rho_0 g z_r - p_{fr},$$

where the index l (r) applies to the left (right) side of the superleak. In this particular case pl = 0, zl = zr, and pfl = 0 (since Tl = 0). Consequently

$$p_r = p_{fr}.$$

This means that the pressure in the right vessel is equal to the fountain pressure at Tr. In an experiment, arranged as in figure 8, a fountain can be created. The fountain effect is used to drive the circulation of 3He in dilution refrigerators.[17][18]
Fig. 9. Transport of heat by a counterflow of the normal and superfluid components of He-II.
Heat transport
Figure 9 depicts a heat-conduction experiment between two temperatures TH and TL connected by a tube filled with He-II. When heat is applied to the hot end a pressure builds up at the hot end according to Eq. (7). This pressure drives the normal component from the hot end to the cold end according to

$$\Delta p = -\eta_n Z \dot V_n.$$

Here ηn is the viscosity of the normal component,[19] Z some geometrical factor, and $\dot V_n$ the volume flow. The normal flow is balanced by a flow of the superfluid component from the cold to the hot end. At the end sections a normal-to-superfluid conversion takes place, and vice versa. So heat is transported, not by heat conduction, but by convection. This kind of heat transport is very effective, so the thermal conductivity of He-II is very much better than that of the best materials.
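The liquid-helium column height quoted above for the fountain pressure can be checked with a one-line calculation, h = pf/(ρg); the sketch below simply restates the values given in the text (0.692 bar at Tλ, 125 kg/m³, 9.8 m/s²) rather than introducing new data.

```python
# Check of the fountain-pressure column height quoted in the text:
# h = p_f / (rho * g)

p_f = 0.692e5   # fountain pressure at T_lambda, in Pa (0.692 bar)
rho = 125.0     # density of liquid helium, kg/m^3
g = 9.8         # gravitational acceleration, m/s^2

h = p_f / (rho * g)
print(f"equivalent liquid-helium column: {h:.1f} m")  # ~56 m, as stated above
```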
The situation is comparable with heat pipes, where heat is transported via gas–liquid conversion. The high thermal conductivity of He-II is applied for stabilizing superconducting magnets, such as in the Large Hadron Collider at CERN.
Landau two-fluid approach
L. D. Landau's phenomenological and semi-microscopic theory of superfluidity of helium-4 earned him the Nobel Prize in Physics in 1962. Assuming that sound waves are the most important excitations in helium-4 at low temperatures, he showed that helium-4 flowing past a wall would not spontaneously create excitations if the flow velocity was less than the sound velocity. In this model, the sound velocity is the "critical velocity" above which superfluidity is destroyed. (Helium-4 actually has a lower critical velocity than the sound velocity, but this model is useful to illustrate the concept.) Landau also showed that the sound wave and other excitations could equilibrate with one another and flow separately from the rest of the helium-4, which is known as the "condensate". From the momentum and flow velocity of the excitations he could then define a "normal fluid" density, which is zero at zero temperature and increases with temperature. At the so-called lambda temperature, where the normal fluid density equals the total density, the helium-4 is no longer superfluid. To explain the early specific heat data on superfluid helium-4, Landau posited the existence of a type of excitation he called a "roton", but as better data became available he considered that the "roton" was the same as a high-momentum version of sound. The Landau theory does not elaborate on the microscopic structure of the superfluid component of liquid helium. The first attempts to create a microscopic theory of the superfluid component itself were made by London[20] and Tisza.[21][22] Subsequently, other microscopic models were proposed by different authors. Their main objective is to derive the form of the inter-particle potential between helium atoms in the superfluid state from first principles of quantum mechanics. To date, a number of models of this kind have been proposed: models with vortex rings, hard-sphere models, Gaussian cluster theories, etc.
Vortex ring model
Landau thought that vorticity entered superfluid helium-4 by vortex sheets, but such sheets have since been shown to be unstable. Lars Onsager and, later independently, Feynman showed that vorticity enters by quantized vortex lines. They also developed the idea of quantum vortex rings. Bijl in the 1940s[23] and Richard Feynman around 1955[24] developed microscopic theories for the roton, which was shortly afterwards observed in inelastic neutron experiments by Palevsky. Later on, Feynman admitted that his model gives only qualitative agreement with experiment.[25][26]
Hard-sphere models
The models are based on a simplified form of the inter-particle potential between helium-4 atoms in the superfluid phase. Namely, the potential is assumed to be of the hard-sphere type.[27][28][29] In these models the famous Landau (roton) spectrum of excitations is qualitatively reproduced.
Gaussian cluster approach
This is a two-scale approach which describes the superfluid component of liquid helium-4. It consists of two nested models linked via parametric space.
The short-wavelength part describes the interior structure of the fluid element using a non-perturbative approach based on the logarithmic Schrödinger equation; it suggests Gaussian-like behaviour of the element's interior density and interparticle interaction potential. The long-wavelength part is the quantum many-body theory of such elements, which deals with their dynamics and interactions. The approach provides a unified description of the phonon, maxon and roton excitations, and has noteworthy agreement with experiment: with one essential parameter to fit, one reproduces to high accuracy the Landau roton spectrum, sound velocity and structure factor of superfluid helium-4.[30] This model utilizes the general theory of quantum Bose liquids with logarithmic nonlinearities,[31] which is based on introducing a dissipative-type contribution to the energy related to the quantum Everett–Hirschman entropy function.[32][33]
Although the phenomenologies of the superfluid states of helium-4 and helium-3 are very similar, the microscopic details of the transitions are very different. Helium-4 atoms are bosons, and their superfluidity can be understood in terms of the Bose–Einstein statistics that they obey. Specifically, the superfluidity of helium-4 can be regarded as a consequence of Bose–Einstein condensation in an interacting system. On the other hand, helium-3 atoms are fermions, and the superfluid transition in this system is described by a generalization of the BCS theory of superconductivity. In it, Cooper pairing takes place between atoms rather than electrons, and the attractive interaction between them is mediated by spin fluctuations rather than phonons. (See fermion condensate.) A unified description of superconductivity and superfluidity is possible in terms of gauge symmetry breaking.
Superfluids, such as helium-4 below the lambda point, exhibit many unusual properties (see the Helium II state). A superfluid acts as if it were a mixture of a normal component, with all the properties of a normal fluid, and a superfluid component. The superfluid component has zero viscosity and zero entropy. Application of heat to a spot in superfluid helium results in a flow of the normal component which takes care of the heat transport at relatively high velocity (up to 20 cm/s), which leads to a very high effective thermal conductivity. Another fundamental property becomes visible if a superfluid is placed in a rotating container. Instead of rotating uniformly with the container, the rotating state consists of quantized vortices. That is, when the container is rotated at speeds below the first critical angular velocity, the liquid remains perfectly stationary. Once the first critical angular velocity is reached, the superfluid will form a vortex. The vortex strength is quantized; that is, a superfluid can only spin at certain "allowed" values. Rotation in a normal fluid, like water, is not quantized. If the rotation speed is increased, more and more quantized vortices will be formed, which arrange in regular patterns similar to the Abrikosov lattice in a superconductor.
Practical application
Recently in the field of chemistry, superfluid helium-4 has been successfully used in spectroscopic techniques as a quantum solvent.
Referred to as Superfluid Helium Droplet Spectroscopy (SHeDS), it is of great interest in studies of gas molecules, as a single molecule solvated in a superfluid medium has effective rotational freedom, allowing it to behave similarly to how it would in the "gas" phase. Droplets of superfluid helium also have a characteristic temperature of about 0.4 K, which cools the solvated molecule(s) to its ground or nearly ground rovibronic state. Superfluids are also used in high-precision devices such as gyroscopes, which allow the measurement of some theoretically predicted gravitational effects (for an example, see the Gravity Probe B article). In 1999, one type of superfluid was used to trap light and greatly reduce its speed. In an experiment performed by Lene Hau, light was passed through a Bose–Einstein condensed gas of sodium (analogous to a superfluid) and found to be slowed to 17 metres per second (61 km/h) from its normal speed of 299,792,458 metres per second in vacuum.[34] This does not change the absolute value of c, nor is it completely new: any medium other than vacuum, such as water or glass, also slows down the propagation of light to c/n, where n is the material's refractive index. The very slow speed of light and high refractive index observed in this particular experiment, moreover, is not a general property of all superfluids. The Infrared Astronomical Satellite IRAS, launched in January 1983 to gather infrared data, was cooled by 73 kilograms of superfluid helium, maintaining a temperature of 1.6 K (−271.55 °C). Furthermore, when used in conjunction with helium-3, temperatures as low as 40 mK are routinely achieved in extreme low temperature experiments. The helium-3, in liquid state at 3.2 K, can be evaporated into the superfluid helium-4, where it acts as a gas due to the latter's properties as a Bose–Einstein condensate. This evaporation pulls energy from the overall system, which can be pumped out in a way completely analogous to normal refrigeration techniques. Superfluid-helium technology is used to extend the temperature range of cryocoolers to lower temperatures. So far the limit is 1.19 K, but there is a potential to reach 0.7 K.[35]
21st-century developments
In the early 2000s, physicists created a fermionic condensate from pairs of ultra-cold fermionic atoms. Under certain conditions, fermion pairs form diatomic molecules and undergo Bose–Einstein condensation. At the other limit, the fermions (most notably superconducting electrons) form Cooper pairs which also exhibit superfluidity. This work with ultra-cold atomic gases has allowed scientists to study the region in between these two extremes, known as the BEC–BCS crossover. Supersolids may also have been discovered in 2004 by physicists at Penn State University. When helium-4 is cooled below about 200 mK under high pressures, a fraction (~1%) of the solid appears to become superfluid.[36][37] By quench cooling or lengthening the annealing time, thus increasing or decreasing the defect density respectively, it was shown, via torsional oscillator experiments, that the supersolid fraction could be made to range from 20% to completely non-existent.
This suggested that the supersolid nature of helium-4 is not intrinsic to helium-4 but a property of helium-4 and disorder.[38][39] Some emerging theories posit that the supersolid signal observed in helium-4 was actually an observation of either a superglass state[40] or intrinsically superfluid grain boundaries in the helium-4 crystal.[41] See also[edit] 1. ^ Kapitza, P. (1938). "Viscosity of Liquid Helium Below the λ-Point". Nature 141 (3558): 74. Bibcode:1938Natur.141...74K. doi:10.1038/141074a0.  2. ^ Allen, J. F.; Misener, A. D. (1938). "Flow of Liquid Helium II". Nature 142 (3597): 643. Bibcode:1938Natur.142..643A. doi:10.1038/142643a0.  3. ^ Hall, H. E.; Vinen, W. F. (1956). "The Rotation of Liquid Helium II. II. The Theory of Mutual Friction in Uniformly Rotating Helium II". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 238 (1213): 215. Bibcode:1956RSPSA.238..215H. doi:10.1098/rspa.1956.0215.  4. ^ Rayfield, G.; Reif, F. (1964). "Quantized Vortex Rings in Superfluid Helium". Physical Review 136 (5A): A1194. Bibcode:1964PhRv..136.1194R. doi:10.1103/PhysRev.136.A1194.  5. ^ Packard, Richard E. (1982). "Vortex photography in liquid helium". Physica B+C. 109–110: 1474. Bibcode:1982PhyBC.109.1474P. doi:10.1016/0378-4363(82)90510-1.  6. ^ Avenel, O.; Varoquaux, E. (1985). "Observation of Singly Quantized Dissipation Events Obeying the Josephson Frequency Relation in the Critical Flow of Superfluid ^{4}He through an Aperture". Physical Review Letters 55 (24): 2704–2707. Bibcode:1985PhRvL..55.2704A. doi:10.1103/PhysRevLett.55.2704. PMID 10032216.  7. ^ Bewley, Gregory P.; Lathrop, Daniel P.; Sreenivasan, Katepalli R. (2006). "SUPERFLUID HELIUM: Visualization of quantized vortices". Nature 441 (7093): 588. Bibcode:2006Natur.441..588B. doi:10.1038/441588a. PMID 16738652.  8. ^ Swenson, C. (1950). "The Liquid-Solid Transformation in Helium near Absolute Zero". Physical Review 79 (4): 626. Bibcode:1950PhRv...79..626S. doi:10.1103/PhysRev.79.626.  9. ^ Keesom, W.H.; Keesom, A.P. (1935). "New measurements on the specific heat of liquid helium". Physica 2: 557. Bibcode:1935Phy.....2..557K. doi:10.1016/S0031-8914(35)90128-8.  10. ^ Buckingham, M.J.; Fairbank, W.M. (1961). "The nature of the λ-transition in liquid helium". Progress in Low Temperature Physics 3. p. 80. doi:10.1016/S0079-6417(08)60134-1. ISBN 9780444533098.  |chapter= ignored (help) 11. ^ E.L. Andronikashvili Zh. Éksp. Teor. Fiz, Vol.16 p.780 (1946), Vol.18 p. 424 (1948) 12. ^ S.J. Putterman, Superfluid Hydrodynamics (North-Holland Publishing Company, Amsterdam, 1974) ISBN 0444106812 13. ^ L.D. Landau, J. Phys. USSR, Vol.5 (1941) p.71. 14. ^ I.M. Khalatnikov, An introduction to the theory of superfluidity (W.A. Benjamin, Inc., New York, 1965) ISBN 0738203009. 15. ^ Van Alphen, W.M.; Van Haasteren, G.J.; De Bruyn Ouboter, R.; Taconis, K.W. (1966). "The dependence of the critical velocity of the superfluid on channel diameter and film thickness". Physics Letters 20 (5): 474. Bibcode:1966PhL....20..474V. doi:10.1016/0031-9163(66)90958-9.  16. ^ De Waele, A.Th.A.M.; Kuerten, J.G.M. (1992). "Thermodynamics and hydrodynamics of 3He-4He mixtures". Progress in Low Temperature Physics 13. p. 167. doi:10.1016/S0079-6417(08)60052-9. ISBN 9780444891099.  |chapter= ignored (help) 17. ^ Staas, F.A.; Severijns, A.P.; Van Der Waerden, H.C.M. (1975). "A dilution refrigerator with superfluid injection". Physics Letters A 53 (4): 327. Bibcode:1975PhLA...53..327S. doi:10.1016/0375-9601(75)90087-0.  18. 
^ Castelijns, C.; Kuerten, J.; De Waele, A.; Gijsman, H. (1985). "3He flow in dilute 3He-4He mixtures at temperatures between 10 and 150 mK". Physical Review B 32 (5): 2870. Bibcode:1985PhRvB..32.2870C. doi:10.1103/PhysRevB.32.2870.  19. ^ J.C.H. Zeegers Critical velocities and mutual friction in 3He-4He mixtures at low temperatures below 100 mK', thesis, Appendix A, Eindhoven University of Technology, 1991. 20. ^ F. London (1938). "The λ-Phenomenon of Liquid Helium and the Bose-Einstein Degeneracy". Nature 141 (3571): 643–644. Bibcode:1938Natur.141..643L. doi:10.1038/141643a0.  21. ^ L. Tisza (1938). "Transport Phenomena in Helium II". Nature 141 (3577): 913. Bibcode:1938Natur.141..913T. doi:10.1038/141913a0.  22. ^ L. Tisza (1947). "The Theory of Liquid Helium". Phys. Rev. 72 (9): 838–854. Bibcode:1947PhRv...72..838T. doi:10.1103/PhysRev.72.838.  23. ^ Bijl, A; de Boer J (1941). "Properties of liquid helium II". Physica 8 (7): 655–675. Bibcode:1941Phy.....8..655B. doi:10.1016/S0031-8914(41)90422-6.  24. ^ Braun, L. M., ed. (2000). Selected papers of Richard Feynman with commentary. World Scientific Series in 20th century Physics 27. World Scientific. ISBN 978-9810241315.  Section IV (pages 313 to 414) deals with liquid helium. 25. ^ R. P. Feynman (1954). "Atomic Theory of the Two-Fluid Model of Liquid Helium". Phys. Rev. 94 (2): 262. Bibcode:1954PhRv...94..262F. doi:10.1103/PhysRev.94.262.  26. ^ R. P. Feynman and M. Cohen (1956). "Energy Spectrum of the Excitations in Liquid Helium". Phys. Rev. 102 (5): 1189–1204. Bibcode:1956PhRv..102.1189F. doi:10.1103/PhysRev.102.1189.  27. ^ T. D. Lee, K. Huang and C. N. Yang (1957). "Eigenvalues and Eigenfunctions of a Bose System of Hard Spheres and Its Low-Temperature Properties". Phys. Rev. 106 (6): 1135–1145. Bibcode:1957PhRv..106.1135L. doi:10.1103/PhysRev.106.1135.  28. ^ L. Liu, L. S. Liu and K. W. Wong (1964). "Hard-Sphere Approach to the Excitation Spectrum in Liquid Helium II". Phys. Rev. 135 (5A): A1166–A1172. Bibcode:1964PhRv..135.1166L. doi:10.1103/PhysRev.135.A1166.  29. ^ A. P. Ivashin and Y. M. Poluektov (2011). "Short-wave excitations in non-local Gross-Pitaevskii model". Cent. Eur. J. Phys. 9 (3): 857–864. Bibcode:2010CEJPh.tmp..120I. doi:10.2478/s11534-010-0124-7.  30. ^ K. G. Zloshchastiev (2012). "Volume element structure and roton-maxon-phonon excitations in superfluid helium beyond the Gross-Pitaevskii approximation". Eur. Phys. J. B 85 (8): 273. arXiv:1204.4652. Bibcode:2012EPJB...85..273Z. doi:10.1140/epjb/e2012-30344-3.  31. ^ A. V. Avdeenkov and K. G. Zloshchastiev (2011). "Quantum Bose liquids with logarithmic nonlinearity: Self-sustainability and emergence of spatial extent". J. Phys. B: At. Mol. Opt. Phys. 44 (19): 195303. arXiv:1108.0847. Bibcode:2011JPhB...44s5303A. doi:10.1088/0953-4075/44/19/195303.  32. ^ Hugh Everett, III. The Many-Worlds Interpretation of Quantum Mechanics: the theory of the universal wave function. Everett's Dissertation 33. ^ I.I. Hirschman, Jr., A note on entropy. American Journal of Mathematics (1957) pp. 152–156 34. ^ Hau, Lene Vestergaard; Harris, S. E.; Dutton, Zachary; Behroozi, Cyrus H. (1999). "Light speed reduction to 17 metres per second in an ultracold atomic gas". Nature 397 (6720): 594. Bibcode:1999Natur.397..594V. doi:10.1038/17561.  35. ^ Tanaeva, I. A. (2004). "AIP Conference Proceedings" 710. p. 1906. doi:10.1063/1.1774894.  |chapter= ignored (help) 36. ^ E. Kim and M. H. W. Chan (2004). "Probable Observation of a Supersolid Helium Phase". Nature 427 (6971): 225–227. 
Bibcode:2004Natur.427..225K. doi:10.1038/nature02220. PMID 14724632. 37. ^ Moses Chan's Research Group. "Supersolid." Penn State University, 2004. 38. ^ Sophie, A; Rittner C (2006). "Observation of Classical Rotational Inertia and Nonclassical Supersolid Signals in Solid 4He below 250 mK". Phys. Rev. Lett. 97 (16): 165301. Bibcode:2006PhRvL..97p5301R. doi:10.1103/PhysRevLett.97.165301. PMID 17155406. 39. ^ Sophie, A; Rittner C (2007). "Disorder and the Supersolid State of Solid 4He". Phys. Rev. Lett. 98 (17): 175302. arXiv:cond-mat/0702665. Bibcode:2007PhRvL..98q5302R. doi:10.1103/PhysRevLett.98.175302. 40. ^ Boninsegni, M; Prokofev (2006). "Superglass Phase of 4He". Phys. Rev. Lett. 96 (13): 135301. PMID 16711998. 41. ^ Pollet, L; Boninsegni M (2007). "Superfluidity of Grain Boundaries in Solid 4He". Phys. Rev. Lett. 98 (13): 135301. arXiv:cond-mat/0702159. Bibcode:2007PhRvL..98m5301P. doi:10.1103/PhysRevLett.98.135301. PMID 17501209.
Decoupling of Equations in Quantum Mechanics

Recall that the time-dependent Schrödinger equation is

\begin{displaymath} i \hbar \frac{d \Psi({\bf r}, t)}{dt} = {\hat H} \Psi({\bf r}, t), \end{displaymath} (29)

where ${\bf r}$ represents the set of all Cartesian coordinates of all particles in the system. If we assume that ${\hat H}$ is time-independent, and if we pretend that ${\hat H}$ is just a number, then we can be confident that the solution is just

\begin{displaymath} \Psi({\bf r}, t) = e^{- i {\hat H} t / \hbar} \Psi({\bf r}, 0). \end{displaymath} (30)

In fact, this remains true even though ${\hat H}$ is of course an operator, not just a number. So, the propagator in quantum mechanics is

\begin{displaymath} {\hat G}(t) = e^{- i {\hat H} t / \hbar}. \end{displaymath} (31)

C. David Sherrill
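As an aside not found in Sherrill's notes, the action of the propagator is easy to check numerically when ${\hat H}$ is represented by a finite Hermitian matrix. The sketch below (Python; the two-level Hamiltonian is an arbitrary stand-in and units with ℏ = 1 are assumed) applies G(t) = exp(-iHt/ℏ) to an initial state and confirms that the evolution preserves the norm, as a unitary propagator must.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0  # work in units where hbar = 1 (assumption for the example)

# A small Hermitian matrix standing in for H (an arbitrary two-level system).
H = np.array([[1.0, 0.3],
              [0.3, 2.0]])

psi0 = np.array([1.0, 0.0], dtype=complex)  # initial state Psi(0)

def propagate(psi, t):
    """Apply the propagator G(t) = exp(-i H t / hbar) to a state vector."""
    G = expm(-1j * H * t / hbar)
    return G @ psi

psi_t = propagate(psi0, t=5.0)
print(psi_t)
print(np.vdot(psi_t, psi_t).real)  # stays 1: G(t) is unitary for Hermitian H
```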
I am an electronics and communication engineer, specializing in signal processing. I have some touch with the mathematics concerning communication systems and also with signal processing. I want to utilize this knowledge to study and understand quantum mechanics from the perspective of an engineer. I am not interested in reading about the historical development of QM, and I am also not interested in the particle formalism. I know things started from wave-particle duality, but my current interest is not to study QM from that angle. What I am interested in is starting from very abstract notions such as "what is an observable?" (without referring to any particular physical system) and "what is meant by incompatible observables?", and then going on to what a state vector is and its mathematical properties. I am okay with dealing with the mathematics and abstract notions, but I somehow do not like the notions of a particle, velocity, momentum and such physical things, as they directly contradict my intuition, which is based on classical mechanics (basic stuff, not the mathematical treatment involving phase space, as I am not much aware of it). I request suggestions on advantages and pitfalls in venturing into such a thing, and also good reference books or textbooks which give such a treatment of QM without assuming any previous knowledge of QM.

Why not Sakurai? – user7757 Feb 3 '13 at 12:08

Possible duplicate: – Qmechanic May 23 '13 at 19:51

7 Answers

(Accepted answer) Try "Mathematical Foundations of Quantum Mechanics" by George Mackey. It is about 130 pages with a chapter on classical mechanics. The author is a very well known mathematician, and I think the book is what you are looking for. Also, on a higher level, there is the book "Quantum Mechanics for Mathematicians" by Leon Takhtadzhian. The book by Folland on quantum field theory has a chapter on quantum mechanics, which can be read independently of the rest of the book. Edit: Since this came up on the first page, I'll add one more: F. Strocchi, "An Introduction to the Mathematical Structures of Quantum Mechanics. A Short Course for Mathematicians."

That's quite an unusual request, i.e. to start with an abstract formulation of quantum mechanics, especially for someone in a profession so closely connected with the "real world". However, to answer your question, I think what you're looking for is an axiomatic approach to quantum mechanics. Such treatments keep the physical examples to a minimum and skip to the mathematics straight away. You could start with this reference (just for a chatty treatment of what the postulates look like!), and maybe search for quantum mechanics textbooks with "axiomatic" in the title. For many people, the fact that the quantum mechanical predictions of physical quantities are sometimes counterintuitive is precisely what gives the subject its appeal. Hope this helps. Edit: It appears not so easy to find "Axiomatic Quantum Mechanics" in textbook titles! However, Google returns a few articles featuring those words. Of course there is also von Neumann's "Mathematical Foundations of Quantum Mechanics".

An unconventional approach would be to study quantum computation or quantum information theory first. What is "unusual" about quantum mechanics is the mathematical underpinnings, which are essentially a generalization of probability theory. (I have heard more than one colleague say that quantum mechanics is simply physics which involves "non-commutative probability", i.e. in testing whether some collection of events is realized for some sample space, there is a pertinent sense of the order in which one tests those events.) To the extent that this is true, it is not important to learn the actual physics alongside that mathematical underpinning, so long as you can learn about something evolving, e.g. under the Schrödinger equation, or collapsing under measurement. Studying quantum information evolving under a computational process is one way you could achieve that. Because the narrative of the field is less about the crisis in physics in the 20s–40s, and more about physicists and computer scientists struggling to find a common language, the development is clearer and there is a better record of justifying the elements of the formalism from a fundamental standpoint. By studying quantum information and/or quantum computation, you will be able to decouple the learning of the underpinnings from the learning of the physics, and thereby get to the heart of any conceptual troubles you may have; and it will give you a tidier sandbox in which to play with ideas. To this end, I recommend "Nielsen & Chuang", which is the standard introductory text of the field. It is suitable as an introduction both for those coming from a computer science background and for those coming from a quantum physics background; so apart from learning some of the formalism, you can get some exposure to some of the physics as well. There are other texts which I have not read, though, and about a bazillion pages of lecture notes floating around on the web.

I strongly advise you to read Quantum Theory: Concepts and Methods by Asher Peres. I think this book answers the questions you're asking, like "what is an observable?".

You might find articles by Leon Cohen of interest. He has considered the relationship between classical and quantum theory from a signal-processing perspective since the 1960s. For example, Proceedings of the IEEE, Vol. 77, No. 7, July 1989, "Time-Frequency Distributions-A Review". This concentrates on the relationship between the Wigner function in quantum theory and various concepts in signal processing. This might not answer your question so much as point to something that you might find more broadly helpful because of its signal-processing provenance. The mathematics of Hilbert spaces only enters into a small fraction of the signal-processing literature, but the vast majority of it could be put into such mathematical terms (signal processing is, after all, preoccupied with Fourier and other integral transforms).

I always recommend Tony Sudbery's Quantum Mechanics and the Particles of Nature; don't be put off by the bad word in the title: he is fairly axiomatic and has both the abstract part and the concrete part. I recommend it more highly than either Mackey, already cited, or Varadarajan, both of which are idiosyncratic. Prof. Sudbery is an expert in quantum information theory but does not take a biased or idiosyncratic approach in his text.

Here is a little book by a physicist trained as an engineer in fluid dynamics applied to aircraft: "Foundations of Quantum Physics" by Toyoki Koga (1912–2010), Wood and Jones, Pasadena, CA, 1980. This book has forewords by Henry Margenau and Karl Popper. Another book by him is "Inquiries into Foundations of Quantum Physics", 1983.
Angular momentum coupling

In quantum mechanics, the procedure of constructing eigenstates of total angular momentum out of eigenstates of separate angular momenta is called angular momentum coupling. For instance, the orbit and spin of a single particle can interact through spin-orbit interaction, in which case it is useful to couple the spin and orbit angular momentum of the particle. Or two charged particles, each with a well-defined angular momentum, may interact by Coulomb forces, in which case coupling of the two one-particle angular momenta to a total angular momentum is a useful step in the solution of the two-particle Schrödinger equation. In both cases the separate angular momenta are no longer constants of motion, but the sum of the two angular momenta usually still is. Angular momentum coupling in atoms is of importance in atomic spectroscopy. Angular momentum coupling of electron spins is of importance in quantum chemistry. Also in the nuclear shell model angular momentum coupling is ubiquitous.

Spin-orbit coupling in astronomy reflects the general law of conservation of angular momentum, which holds for celestial systems as well. In simple cases, the direction of the angular momentum vector is neglected, and the spin-orbit coupling is the ratio between the frequency with which a planet or other celestial body spins about its own axis to that with which it orbits another body. This is more commonly known as orbital resonance. Often, the underlying physical effects are tidal forces.

General theory and detailed origin

Angular momentum is a property of a physical system that is a constant of motion[1] (is time-independent and well-defined) in two situations: (i) the system experiences a spherically symmetric potential field; (ii) the system moves (in the quantum mechanical sense) in isotropic space. In both cases the angular momentum operator commutes with the Hamiltonian of the system. By Heisenberg's uncertainty relation this means that the angular momentum can assume a sharp value simultaneously with the energy (eigenvalue of the Hamiltonian).

An example of the first situation is an atom whose electrons only feel the Coulomb field of its nucleus. If we ignore the electron-electron interaction (and other small interactions such as spin-orbit coupling), the orbital angular momentum l of each electron commutes with the total Hamiltonian. In this model the atomic Hamiltonian is a sum of kinetic energies of the electrons and the spherically symmetric electron-nucleus interactions. The individual electron angular momenta l(i) commute with this Hamiltonian. That is, they are conserved properties of this approximate model of the atom.

An example of the second situation is a rigid rotor moving in field-free space. A rigid rotor has a well-defined, time-independent angular momentum. These two situations originate in classical mechanics. The third kind of conserved angular momentum, associated with spin, does not have a classical counterpart. However, all rules of angular momentum coupling apply to spin as well. In general the conservation of angular momentum implies full rotational symmetry (described by the groups SO(3) and SU(2)) and, conversely, spherical symmetry implies conservation of angular momentum.
If two or more physical systems have conserved angular momenta, it can be useful to add these momenta to a total angular momentum of the combined system—a conserved property of the total system. The building of eigenstates of the total conserved angular momentum from the angular momentum eigenstates of the individual subsystems is referred to as angular momentum coupling. Application of angular momentum coupling is useful when there is an interaction between subsystems that, without interaction, would have conserved angular momentum. By the very interaction the spherical symmetry of the subsystems is broken, but the angular momentum of the total system remains a constant of motion. Use of the latter fact is helpful in the solution of the Schrödinger equation. As an example we consider two electrons, 1 and 2, in an atom (say the helium atom). If there is no electron-electron interaction, but only electron nucleus interaction, the two electrons can be rotated around the nucleus independently of each other; nothing happens to their energy. Both operators, l(1) and l(2), are conserved. However, if we switch on the electron-electron interaction depending on the distance d(1,2) between the electrons, then only a simultaneous and equal rotation of the two electrons will leave d(1,2) invariant. In such a case neither l(1) nor l(2) is a constant of motion but L = l(1) + l(2) is. Given eigenstates of l(1) and l(2), the construction of eigenstates of L (which still is conserved) is the coupling of the angular momenta of electron 1 and 2. In quantum mechanics, coupling also exists between angular momenta belonging to different Hilbert spaces of a single object, e.g. its spin and its orbital angular momentum. Reiterating slightly differently the above: one expands the quantum states of composed systems (i.e. made of subunits like two hydrogen atoms or two electrons) in basis sets which are made of direct products of quantum states which in turn describe the subsystems individually. We assume that the states of the subsystems can be chosen as eigenstates of their angular momentum operators (and of their component along any arbitrary z axis). The subsystems are therefore correctly described by a set of l, m quantum numbers (see angular momentum for details). When there is interaction between the subsystems, the total Hamiltonian contains terms that do not commute with the angular operators acting on the subsystems only. However, these terms do commute with the total angular momentum operator. Sometimes one refers to the non-commuting interaction terms in the Hamiltonian as angular momentum coupling terms, because they necessitate the angular momentum coupling. 1. ^ Also referred to as a conserved property Spin-orbit coupling The behavior of atoms and smaller particles is well described by the theory of quantum mechanics, in which each particle has an intrinsic angular momentum called spin and specific configurations (of e.g. electrons in an atom) are described by a set of quantum numbers. Collections of particles also have angular momenta and corresponding quantum numbers, and under different circumstances the angular momenta of the parts add in different ways to form the angular momentum of the whole. Angular momentum coupling is a category including some of the ways that subatomic particles can interact with each other. In atomic physics, spin-orbit coupling also known as spin-pairing describes a weak magnetic interaction, or coupling, of the particle spin and the orbital motion of this particle, e.g. 
the electron spin and its motion around an atomic nucleus. One of its effects is to separate the energy of internal states of the atom, e.g. spin-aligned and spin-antialigned that would otherwise be identical in energy. This interaction is responsible for many of the details of atomic structure. In the macroscopic world of orbital mechanics, the term spin-orbit coupling is sometimes used in the same sense as spin-orbital resonance. LS coupling In light atoms (generally Z<30), electron spins si interact among themselves so they combine to form a total spin angular momentum S. The same happens with orbital angular momenta li, forming a single orbital angular momentum L. The interaction between the quantum numbers L and S is called Russell-Saunders coupling or LS coupling. Then S and L add together and form a total angular momentum J: \mathbf J = \mathbf L + \mathbf S where \mathbf L = \sum_i \mathbf{l}_i and \mathbf S = \sum_i \mathbf{s}_i. This is an approximation which is good as long as any external magnetic fields are weak. In larger magnetic fields, these two momenta decouple, giving rise to a different splitting pattern in the energy levels (the Paschen-Back effect.), and the size of LS coupling term becomes small. For an extensive example on how LS-coupling is practically applied, see the article on Term symbols. jj coupling In heavier atoms the situation is different. In atoms with bigger nuclear charges, spin-orbit interactions are frequently as large or larger than spin-spin interactions or orbit-orbit interactions. In this situation, each orbital angular momentum li tends to combine with each individual spin angular momentum si, originating individual total angular momenta ji. These then add up to form the total angular momentum J \mathbf J = \sum_i \mathbf j_i = \sum_i (\mathbf{l}_i + \mathbf{s}_i). This description, facilitating calculation of this kind of interaction, is known as jj coupling. Spin-spin coupling See also: J-coupling and Dipolar coupling in NMR spectroscopy Spin-spin coupling is the coupling of the intrinsic angular momentum (spin) of different particles. Such coupling between pairs of nuclear spins is an important feature of Nuclear Magnetic Resonance spectroscopy as it can provide detailed information about the structure and conformation of molecules. Spin-spin coupling between nuclear spin and electronic spin is responsible for hyperfine structure in atomic spectra. Term symbols Term symbols are used to represent the states and spectral transitions of atoms, they are found from coupling of angular momenta mentioned above. When the state of an atom has been specified with a term symbol, the allowed transitions can be found through selection rules by considering which transitions would conserve angular momentum. A photon has spin 1, and when there is a transition with emission or absorption of a photon the atom will need to change state to conserve angular momentum. The term symbol selection rules are. ΔS=0, ΔL=0,±1, Δl=±1, ΔJ=0,±1 Relativistic effects In very heavy atoms, relativistic shifting of the energies of the electron energy levels accentuates spin-orbit coupling effect. Thus, for example, uranium molecular orbital diagrams must directly incorporate relativistic symbols when considering interactions with other atoms. Nuclear coupling In atomic nuclei, the spin-orbit interaction is much stronger than for atomic electrons, and is incorporated directly into the nuclear shell model. 
In addition, unlike atomic-electron term symbols, the lowest energy state is not L - S, but rather, l + s. All nuclear levels whose l value (orbital angular momentum) is greater than zero are thus split in the shell model to create states designated by l + s and l - s. Due to the nature of the shell model, which assumes an average potential rather than a central Coulombic potential, the nucleons that go into the l + s and l - s nuclear states are considered degenerate within each orbital (e.g. The 2p3/2 contains four nucleons, all of the same energy. Higher in energy is the 2p1/2 which contains two equal-energy nucleons). See also Clebsch-Gordan coefficients This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Angular_momentum_coupling". A list of authors is available in Wikipedia.
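To make the LS-coupling and jj-coupling rules above concrete, here is a small sketch (not part of the original article) of the underlying angular momentum addition rule: coupling two angular momenta with quantum numbers l and s yields total quantum numbers running from |l - s| to l + s in unit steps, the so-called triangle rule.

```python
from fractions import Fraction

def allowed_j(l, s):
    """Total angular momentum quantum numbers allowed when coupling two
    angular momenta with quantum numbers l and s: |l - s|, ..., l + s
    in unit steps (the triangle rule)."""
    l, s = Fraction(l), Fraction(s)
    j = abs(l - s)
    values = []
    while j <= l + s:
        values.append(j)
        j += 1
    return values

# LS coupling for a 3P term (L = 1, S = 1): J = 0, 1, 2,
# i.e. the fine-structure levels 3P0, 3P1, 3P2.
print(allowed_j(1, 1))

# A single p electron (l = 1, s = 1/2): j = 1/2 or 3/2, as used in jj coupling.
print(allowed_j(1, Fraction(1, 2)))
```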
The WKB Approximation I hope that this is a clear explanation of what the important WKB approximation in elementary quantum mechanics is all about. History of the WKB Approximation The WKB, or BWK, or WBK, or BWKJ, or adiabatic, or semiclassical, or phase integral approximation or method, is known under more names than any confidence man. It approximates a real Schrödinger wave function by a sinusoidal oscillation whose phase is given by the space integral of the classical momentum, the phase integral, and whose amplitude varies inversely as the fourth root of the classical momentum. This approximation was already known for the physical waves of optics and acoustics, and was quickly applied to the new Schrödinger "probability" waves. The new quantum theory that was introduced by Planck, Bohr and Sommerfeld after the turn of the 20th century, now the "old" quantum theory, was based on classical (Newtonian) mechanics supplemented by arbitrary postulates. These new postulates were completely absurd, but nevertheless explained a great deal of surprising phenomena, such as the black-body thermal radiation spectrum, the line spectra of hydrogen, Compton scattering, Landé's vector model, and the principal aspects of the photoelectric effect. The principal tool of this analysis was the "correspondence principle" that the large-scale behavior of quantum systems agreed with classical analysis. Much use was also made of advanced mechanics, such as Hamiltonian theory. Nevertheless, arbitrariness of the quantum postulates, and lack of progress in understanding more complex systems, created a hunger for a better theory. The better theory arrived with Heisenberg's matrix mechanics (1925), de Broglie's matter waves (1924), and the Schrödinger wave equation (1926), soon elaborated by Born, Jordan and others. It was the introduction of waves that gave a necessary mental model to support the abstract algebra of Heisenberg matrices. It is necessary to repeat that the Schrödinger waves are not waves in the sense of electromagnetic waves, but include the essence of quantum behavior in combining phase and amplitude in their description. The waves can, in fact, be complex (consist of real and imaginary parts). Born's interpretation of the absolute value squared as the probability density for the coordinate x of the system described is the best intimation of its meaning. The wave function is not a wave in physical space, but a mathematical device. Any model that combines classical waves and particles to explain wave mechanics is essentially incorrect, and leads to erroneous predictions. States are described in new ways in wave mechanics, of which the Schrödinger wave function is only one example, based on the position coordinate. The wave function includes information on position and momentum simultaneously. Heisenberg's uncertainty relations express this fact concisely. There is no surprise in uncertainty relations for waves, only in the application to what may be interpreted as particles. Heisenberg's famous thought experiments on the process of measurement do not show that measurement causes the quantum behavior, but rather that quantum behavior is implicit in any such measurement. There is no disagreement whatever between the predictions of wave mechanics and experimental observations. The new wave mechanics gave complete explanations for the arbitrary postulates of the old quantum theory. There was no necessity for postulating quantum behavior--the behavior fell out of the theory naturally. 
The quantization of oscillators was quite analogous to the normal modes and proper frequencies of electromagnetic waves in a resonator. However, although the wave picture of the new mechanics gave many such wavelike properties naturally, there is always more to the picture, and essential differences with classical waves. These differences are hard to find in optics, but are certainly there, as in the photoelectric effect. The fascinating thing was the existence of quantum effects in particles. Wave mechanics is now called simply quantum mechanics, as it is no longer necessary to emphasize the difference from the Newtonian mechanics of the old quantum theory. No sooner was wave mechanics abroad, than a method of applying it to the most important problems of the day was devised. These problems were the new phenomenon of tunnelling through a potential barrier, and the energy eigenstates of a potential well, either of an oscillator or the radial problem in atomic spectra. Solving problems in wave mechanics generally meant the solution of differential equations, for which even in one dimension there were no analytic solutions, except in a few special cases. By approximating the wave function as a oscillatory wave depending on a phase integral, many useful problems could be solved by a mere quadrature. Almost simultaneously, G. Wentzel [Zeits. f. Phys. 38, 518 (1926)], H. A. Kramers [Zeits. f. Phys. 39, 828 (1926)] and L. Brillouin [Comptes Rendus 183, 24 (1926)] published applications of this theory to the Schrödinger equation. Their initials give the term WKB approximation. The developments were independent, and it would not be fair to recognize one author rather than another. Why the inverse alphabetic order was chosen, I do not know. BWK and WBK are also found. The latter may be accurately chronological by publication date. The question of the relation of quantum and classical mechanics is a large and important one, of which the WKB approximation is only one part. I think the term should be restricted to the one-dimensional approximation that is so valuable in applications, and not to the general subject of semiclassical approximations, as is done by P. J. E. Peebles (1992), who relegates it to an historical chapter and does not mention the important applications at all. Abbreviated references to authors in this page refer to well-known quantum mechanics or spectra texts of the dates given. The name adiabatic or semiclassical may also be applied with justice. Ehrenfest showed that the quantum expectation values of mechanical quantities behaved like their classical analogues. Also, Schrödinger's equation can be approximated in a form resembling Hamilton's principal function theory. We won't be concerned with these more general interesting questions here. The method used by WKB was very similar to the theory developed by H. Jeffreys [Proc. London Math. Soc. (2)23,428 (1923)], who apparently went into a huff at not being mentioned. Now, this was before wave mechanics, so he could not have applied the theory to the Schrödinger equation, which is really what is in question. Whether WKB ever read his paper or used it I do not know. My suspicion is that they did not, since the theory, as needed by them, is not very difficult, as we shall see. Some authors, such as Condon and Odobasi, give Jeffreys his due by calling it the WBKJ approximation. 
However, the only difficult part of the theory, the connection of solutions on opposite sides of a turning point, was published much earlier by Rayleigh [Proc. Roy. Soc. A86, 207 (1912)], so possibly WBKJR approximation would be fairer. The general matter was actually treated by J. Liouville [Jour. de Math. 2, 168, 418 (1837)], and the function used by Rayleigh was invented by G. B. Airy [Trans. Cambr. Phil. Soc. 6, 379 (1849)] in connection with the theory of the rainbow. Anyone who thinks Physics is advancing at the present time should carefully consider the short period 1925-1930, and what was done in these few years. Later contributions were made by J. L. Denham [Phys. Rev. 41, 713 (1932)], R. E. Langer [Phys. Rev. 51, 669 (1937)] and W. H. Furry [Phys. Rev. 71, 360 (1947)]. This would make it the WBKJRLDLF approximation, I suppose. The WKB approximation appears in most quantum mechanics texts, with the notable exception of Dirac's. Pauling and Wilson (1935) have a short account (pp 198-201), E. C. Kemble (1937) (pp 90-112) with his own contributions, W. V. Houston (1951) (pp 87-90), N. F. Mott (1952) passim, A. Messiah (1962) (pp 194-202), A. S. Davydov (1963) (pp 73-86), L. I. Schiff (1968) (pp 268-279), E. U. Condon and H. Odobasi (1980) (pp 130-135) and P. J. E. Peebles (1992) (pp 44-47). These accounts vary in comprehensibility, and some are unnecessarily mathematical. Unfortunately, I do not have a copy of Rojansky's text with its exceptionally understandable treatments of many topics. The WKB was probably included. I borrowed the copy I studied from the Billings public library long ago, and have never been able to find a copy to buy.

Explanation of the WKB Approximation

The Schrödinger equation results if we make the operator substitution p = -i(h/2π)d/dx in the eigenvalue equation Hψ = Wψ, where H is the Hamiltonian p²/2m + V(x), and W is the energy eigenvalue. We are dealing with a single coordinate x. Recall that ψ may be chosen real in this case, and has (unexpressed) time dependence exp(iWt). The result is ψ'' + (8π²m/h²)[W - V(x)]ψ = 0, or ψ'' + k²(x)ψ = 0, which is simply Helmholtz's equation, very familiar in wave theory. The "wave vector" k = 2π/λ, and the momentum p = hk/2π (Internet Explorer does not support h bar, so I have to insert the cumbersome 2π explicitly). This is de Broglie's relation. Remember that low momentum means a long wavelength. Let's try for a solution in the form ψ = A(x)exp[iS(x)]. To make things a little simpler, use ψ = exp[iS(x) + T(x)]. This merely expresses the amplitude in a different way, and puts the functions S(x) and T(x) on a more equal basis. Substitute this expression in the Schrödinger equation, and write the real and imaginary parts separately. What you find is: -S'² + T'' + T'² + k² = 0 and S'' + 2S'T' = 0. Now suppose we are considering cases where the oscillation of the wave function is very rapid compared to changes in amplitude. This will be true when the momentum p is large (and so λ is small) and does not change very rapidly--that is, when V(x) varies slowly enough. The condition is something like p'/p << 1. In this case, T'' and T' will be very much smaller than S', and if they are neglected in the first equation, we have the happy result that S'² = k², or S' = ±k. This gives S(x) = ±∫ k dx, which is our phase integral, the fundamental quantity in this approximation. Because we neglected T'' and T', it is indeed an approximation, which we must not forget, however good the results.

The second derivative S'' is just ±k', so the imaginary equation gives 2kT' = -k', from which T = -(1/2)ln k plus a constant of integration. Then exp(T) = A/k^(1/2). This factor makes the amplitude vary with momentum so that the particle flux is constant, and is essential to a reasonable answer. Finally, therefore, the approximate solution is ψ = (A/k^(1/2)) cos[S(x) + δ], where there are two arbitrary constants, A and δ. The function S was originally expressed as a series of powers of h: S₀ + hS₁ + h²S₂ + ..., where the first term gave the classical result (Hamiltonian theory), the second term a quantum correction, and so on. In what we did above, S₀ corresponds to S(x) and S₁ to T(x), and the separation of real and imaginary parts was the separation of the zeroth and first powers of h. Unfortunately, this series is not convergent, merely asymptotic, and taking higher-order terms into account is unprofitable. A lot of effort seems to have been wasted along this line.

Let's try to apply the approximation to a particle in a potential well. If we first consider an infinite square well, the well has hard, infinite walls at, say, x = 0 and a, and the boundary condition is that ψ = 0 there. Then (A/k^(1/2))sin[S(x)] satisfies the left-hand boundary condition, and we must have sin[S(a)] = 0, or S(a) = nπ. Hence [(8π²m/h²)W]^(1/2) a = nπ, or W = n²h²/8ma², the energy eigenvalues for the infinite square well. This is the exact result, not surprising since our approximation is exact in this case (the amplitude is constant). So the WKB approximation works here.

Thus encouraged, let's study the harmonic oscillator, for which V(x) = mω²x²/2. For a given energy W, the particle bounces back and forth classically between turning points at which the momentum is zero and reverses. In wave mechanics, we have a region to the left of the turning point where the total energy is negative, and the wave function decreases to zero exponentially as we go deeper into this forbidden region. If we carry out the WKB approximation in this region, we find an exponential solution instead of the oscillatory solution, but everything else is the same. There is a similar region to the right of the right-hand turning point. Between the turning points we have the positive-energy region in which the wave function is oscillatory, as we have seen. The great difficulty that now arises is that at the turning points, our approximation fails utterly. We have approximate solutions except in small regions at the turning points, and do not know how to connect the solutions in the different regions. Connection, however, is necessary for the success of the approximation.

One way to get a connection is to consider the x-axis as the real axis in a complex plane. Our solutions are analytic functions of the complex variable z = x + iy. Perhaps we can somehow start with a function on the positive x-axis (considering the turning point at x = 0) and move around to the other side of the origin by the principles of analytic continuation to find out what kind of solution it connects to there. This works, but not easily. Stokes discovered that asymptotic series can, disconcertingly, jump to represent different solutions as arg z (the angle θ in z = re^(iθ)) changes. This is known as Stokes's Phenomenon. A lot of work was done along these lines, but however interesting and arcane it is, it is also unnecessary for practical applications.

An easier way, due to Rayleigh, is simply to find an exact solution valid in the neighborhood of the turning point, and to fit it to the asymptotic solutions on either side. If the potential is replaced by a linear variation at the turning point, then the desired solution is the Airy function Ai(z) from the theory of the rainbow. Let's assume that V(x) = -ax near the turning point x = 0, so that k² = (8π²ma/h²)x, which gives us S(x) = (2/3)(x/α)^(3/2), where α = (h²/8π²ma)^(1/3) is a characteristic distance near the turning point. The dimensionless variable s = x/α is very convenient to use. In terms of s, our approximate solution is (A/s^(1/4))cos[(2/3)s^(3/2) + δ]. The exact solution Ai(s), valid in the neighborhood of the turning point, has the asymptotic form (1/π^(1/2)s^(1/4))cos[(2/3)s^(3/2) - π/4] to the right of the turning point. To the left, it decreases exponentially as required. The net effect is that the WKB approximate solution is pushed away from the turning point by an eighth of a wavelength, or phase π/4, in the asymptotic region. The Airy function can be expressed in terms of Bessel functions of order 1/3. Therefore, we can carry the phase integral from turning point to turning point, as in the case of the infinite square well, and subtract the π/2 from the two ends to allow for the connection. This gives us S = (n + 1/2)π. The phase integral for a harmonic oscillator with energy W is S = Wπ/hν (the integral is easily done with the aid of a table of integrals), so we find W = (n + 1/2)hν. Surprisingly, this is the exact result, in spite of the fact that our method is approximate. The connection relations supply the 1/2 that implies a zero-point energy, which is not present in the old quantum theory.

To apply the WKB approximation to an arbitrary potential V(x), simply find the phase integral S as a function of the energy W. This can always be done by computer integration, where all the tedium is eliminated by the automatic calculations. Then the energy eigenvalues are the values of W for which S/π = n + 1/2. This can be done for anharmonic oscillators, such as vibrating diatomic molecules, or for the radial function in atomic spectra or in atomic collision problems, where the centrifugal potential is added to the self-consistent potential V(r). A large number of such calculations were done in the 30's and 40's, before great computing power was available for more elaborate schemes, and the results were quite satisfactory. The WKB approximation is second only to perturbation theory as a fruitful method of calculation.

One interesting problem to which the WKB approximation is convenient is tunnelling through a region in which W < V(x), and the approximate wave function is, therefore, exponential. I'll not work out the details here, except to state the resulting formula, as given in Mott (1951). The tunnelling probability is P = exp{-2∫ [(8π²m/h²)(V(x) - W)]^(1/2) dx}, which is the absolute value squared of the ratio of the amplitudes on the two sides of the barrier. This result was used by Gamow, Gurney and Condon in the theory of alpha decay. The integral is extended from turning point to turning point, and is analogous to the phase integral S. The subject of asymptotic expansions is treated in H. and B. S. Jeffreys, Mathematical Physics (Cambridge, 1956), Chapter 17, and in E. T. Whittaker and G. N. Watson, A Course in Modern Analysis (Cambridge, 1927), Chapter VIII. Such expansions are of great utility in physics. The rainbow problem is treated in R. A. R.
Tricker, Atmospheric Optics (American Elsevier, 1970), Chapter VI.

Composed by J. B. Calvert
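The "computer integration" recipe described above is easy to try. The following sketch (not from Calvert's page; it assumes units with ħ = m = ω = 1, so the exact oscillator levels are W = n + 1/2) finds the WKB eigenvalues of the harmonic oscillator by locating the energies at which the phase integral equals (n + 1/2)π.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def V(x):
    """Harmonic oscillator potential in units with hbar = m = omega = 1."""
    return 0.5 * x**2

def phase_integral(W):
    """S(W) = integral of k(x) dx between the classical turning points,
    with k(x) = sqrt(2 (W - V(x)))."""
    xt = np.sqrt(2.0 * W)                               # turning points at +/- xt
    k = lambda x: np.sqrt(max(2.0 * (W - V(x)), 0.0))   # clip rounding noise
    val, _ = quad(k, -xt, xt)
    return val

def wkb_level(n):
    """Solve the quantization condition S(W)/pi = n + 1/2 for W."""
    target = (n + 0.5) * np.pi
    return brentq(lambda W: phase_integral(W) - target, 1e-6, 10.0 * (n + 1))

for n in range(4):
    print(n, wkb_level(n))   # approximately 0.5, 1.5, 2.5, 3.5, exact for this potential
```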
Testing Helps Us Understand Physics In our testing industry we’ve borrowed ideas from the physics realm to provide ourselves some glib phrases. For example, you’ll hear about “Schrödinger tests” and “Heisenbugs.” It’s all in good fun but, in fact, the way that physics developed over time certainly has a great deal of corollaries with our testing discipline. I already wasted people’s precious time talking about what particle physics can teach us about testing. But now I’m going to double down on that, broaden the scope a bit, and look at a wider swath of physics. So let’s start at the seventeenth century. It was in this century (1687, to be exact) that a guy named Isaac Newton listed some laws for the motion of particles experiencing a force. That subject was initially called Newtonian mechanics and, eventually, after some radical new developments that I’ll talk about later, came to be called classical mechanics. In Newtonian mechanics, particles are characterized fairly simply by a mass (how much they “weigh”, basically) and two vectors that evolve in time. If you’re not too sure on what I mean by vector, just think of a line with an arrow. That right there is pretty much all there is to a vector. Testing is often about keeping things simple, right? KISS: Keep It Simple (Sort of) A vector has a size — usually referred to as magnitude — and a direction. In the case of those two Newtonian mechanics vectors, one vector gives the position of the particle, the other provides its momentum. So what does this mean? It means that given a force acting on a particle, you can solve for the evolution in time of the position and momentum vectors. Uh huh. Whatever. Yeah, that’s a mouthful. Putting that more simply, this really just means you can figure out how a particle moves. Simple things are simple, right? This happens in testing all the time. We can start with a concept, we have a few terms. They all seem to line up. Even our visuals are fairly easy to express. And it all boils down to something pretty simple. You figure out the specifics of how a particle moves by solving a differential equation that is given by Newton’s second law. What this law, and thus equation, specifically says is this: Uh huh. Whatever. Well, you might recognize this better when it is stated more simply: “Force equals mass times acceleration.” Notice that a couple of times there I reframed ideas in a simpler fashion. We often have to do this in testing, particularly in a technical context. The question then is whether the lower fidelity we are going for in terms of explanation compromises the meaning we want to convey. Consider the two definitions above, for example. Sometimes we simply use both descriptions of something, but choose one based on our audience at any given time. This is a balancing act we’re always dealing with. Yet, even when using a simpler means of expression, we can’t deny the underlying complexity … If you were talking to an “expert” they would tell you that the important point here is something like this: “A force is equal to the differential change in momentum per some unit of time as described by a certain calculus of reasoning.” In the context of mathematics, a “calculus of reasoning” means a way to construct relatively simple quantitative models of change and to determine the consequences of those changes. If that’s hard to wrap your head around, consider that poor old Newton had to develop both the mechanics named after him and the differential and integral calculus at the same time. 
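Since the post leans on Newton's second law, here is a minimal sketch of what "figuring out how a particle moves" looks like in practice (my own illustration, not from the original post; the spring-like force and the constants are made up), using a crude explicit Euler step of F = dp/dt together with dx/dt = p/m.

```python
import numpy as np

def step(x, p, force, m, dt):
    """One explicit Euler step of Newton's second law, F = dp/dt,
    together with dx/dt = p/m."""
    p_new = p + force(x) * dt
    x_new = x + (p / m) * dt
    return x_new, p_new

# Example: a 1 kg particle pulled back toward the origin by F = -k x.
k, m, dt = 1.0, 1.0, 0.01
force = lambda x: -k * x

x = np.array([1.0, 0.0, 0.0])   # position vector
p = np.array([0.0, 0.0, 0.0])   # momentum vector
for _ in range(1000):
    x, p = step(x, p, force, m, dt)
print(x, p)   # the particle oscillates about the origin
```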
Frame the Language I should probably state that calculus is the language that allows the precise expression of the ideas of Newtonian mechanics. Without that language, Newton’s ideas about physics couldn’t be formulated in any useful way. Basically, differential calculus has you looking at the rate of change of something at any point. It’s very unit-like in nature. Integral calculus allows you to take the sum of many infinitesimal points. It’s very integration-like in nature. In testing, we create our own languages that allow expression of the ideas of how to test an application. This is no different than how code, often in vey different types of programming languages, is used to express the ideas of a business domain. Notice, also, that we have a very unit-based style of reasoning as well as an integration-based style of reasoning. But Don’t Attribute It All To One Person Did you, like I sometimes do, feel slightly stupid when I said that Newton had to develop both classical mechanics and the differential and integral calculus at the same time? It’s a little humbling, right? Well, cheer up! Newton didn’t just invent calculus one day. In fact, the ideas and concepts of mathematical calculus were developed and worked on over the course of a very long time. What Newton did do is provide one form of simplifying algorithm for doing the “work” of calculus. That’s nothing to sneeze at, to be sure. But don’t get hung up on one guy. We see this in the testing industry all the time, such as with “Dan North invented BDD” or “Kent Beck invented extreme programming” or “Martin Fowler said everything that is ever worth saying about programming ever.” It’s important not to discount the public work these people did but it’s also important to realize that they were simply following in a tradition. This is important because that tradition needs to continue to evolve. And often their solutions are only “approximately right.” Keep that “approximately right” part in the back of your mind; I’ll return to it. Expanding Our Field Of Knowledge Okay, so Newton knew that particles experienced a force. And when they did, they moved. And Newton helped us understand the laws for that movement. That brief paragraph could actually describe, at a very high level, the work of the seventeenth and eighteenth century. Moving forward a bit, in the nineteenth century, one of those forces — the electromagnetic force that acts between charged particles — was starting to be understood. A new language for this understanding was required and the subject eventually became known as electrodynamics. Eventually it would be called classical electrodynamics. Electrodynamics adds a new element — the field — to the particles of Newtonian mechanics. Specifically, an electromagnetic field was added. Much like in testing, we might need a new language, or means of expression, when a new element is added. Consider how we used the “language” of Selenium for a long time to test web sites. But then mobile sites came along and we needed a new language for expression, which we find in Appium. Then Windows-specific applications got Winium. In truth, these are all speaking the same language — delegating down to a common Selenium API — but using different means of expression. Let’s stick with this idea of the field for a moment. A field is basically something that exists throughout an extended area of space. This is quite different from something like a particle that exists at just one point in space. 
Now hang on to your socks because here we’re going to reuse our vector idea from particle motion. The electromagnetic field at a given time is described by two vectors for every point in space. In this context, one vector describes the electric field, the other the magnetic field. The above visual shows this with the particle vector from before added in. So notice that before we had: • particles with two vectors that evolved in time And now here we have: • a field with two vectors for every point Are these an equivalence class, though? Meaning, even given the slightly different wording as I did here, are we still fundamentally talking about the same kinds of concepts? A question that we often have to ask in testing. Eventually a guy named James Clerk Maxwell formulated what came to be known as the Maxwell equations. (It was actually a guy named Oliver Heaviside who summarized Maxwell’s theory into what would become known as the four “Maxwell equations.” So, again, not everything is always due to one person, even if everyone acts like it.) These were a set of differential equations whose solution determined how the electric and magnetic field vectors interacted with charged particles and how the field itself evolved in time. So earlier I said this: • the evolution in time of particles (their position and momentum) was described by differential equations Now here I’m saying this: • the evolution in time of fields (and their interactions with particles) is also described by differential equations As you would imagine, the equations are probably different, but again: are we still fundamentally dealing with the same thing? Notice how spotting the similarity in wording when describing concepts is critical. Shifting Contexts, But Same Domain So as I ramble my way through all this, what you should get here is that we were learning better ways to communicate about nature. It became necessary to take one context — particles acting under a force — and apply them to a wider context — fields that exist throughout all of space. This required further refinement as we realized that since the fields exist throughout space, they must co-exist with the particles. And if that’s the case, perhaps there are some interactions between the particles and the fields. Historically speaking, this is a perfect example of the broadening of context and a better understanding of scope within a particular domain. It’s not any different from what we have to do as we understand a business domain, or consider how tests and code have to interact to provide a more full picture of reality. When Things Integrate… So we have particles all over space. We have fields that exist throughout space. This means particles and fields are not just “units” but rather are “integrated.” As it turned out, the solutions to Maxwell’s equations could be quite complicated in the presence of charged particles. But in a vacuum the solutions were relatively simple. Further, those solutions corresponded to wavelike variations in the electromagnetic field. So here note that our description language — our means of expressing — becomes complicated once you consider two things acting together. When things start to “integrate”, things become more complicated. Yet notice how the language starts to lead to something interesting: these wavelike variations … As it turns out, these wavelike variations were light waves. Imagine that! 
Being slightly more descriptive, not to mention accurate, these waves seemingly could have arbitrary frequency and they traveled at the speed of light. So yeah: light waves. This is really important to understand. What Maxwell’s equations did was describe simultaneously two seemingly different sorts of things — the interactions of charged particles and the phenomenon of light. This is often the quest of test tools. Find the means to describe the application and the ways by which the application is tested. In reality, we have two bits: the production code and the test code. Our “field” that permeates all of space is the business domain that our code sets operate within. That domain is impacted by its interaction with code — i.e., the attempt of code to make the domain realizable. But think of all the language attempts we use to describe all this. Just in the testing sphere alone, we use natural language requirements, fluent interfaces, APIs, DSLs, structured abstraction layers like Gherkin, and so forth. Things Get … Complicated In the twentieth century, it became clear that the explanation of certain sorts of physical phenomena required ideas that went beyond those of Newtonian mechanics and electrodynamics as put forward by Maxwell. There’s a whole lot of detail here that I have to skip over but just know that it became clear that electromagnetic waves had to be “quantized.” A common way to visualize what this means is via the following: This quantization idea said that for an electromagnetic wave of a given frequency, the energy carried by the wave could not be arbitrarily small, but had to be an integer multiple of a fixed energy. Going around saying the “integer multiple of a fixed energy” was probably just as cumbersome then as it is now. So a simplifying term was used that became the justification for everything in science fiction: the “quantum.” What you see here is the emergence of a new way of thinking. This happens a lot in testing and coding, right? We get exposed to design patterns that make us entirely reconsider how we think of expression. We encounter entirely new libraries of code that do certain things — like, say, create a Virtual DOM — and that impacts how we express tests to handle that new way of doing things. The basic ideas for a consistent theory of the “quantum” were worked out over a relatively short period of time after initial breakthroughs by Werner Heisenberg and Erwin Schrödinger. (Yes, these are the people for whom “Heisenbugs” and “Schrödinger tests” are named. They would no doubt be so proud.) We can all relate to certain individuals in the field who guide ways of thinking. Testers might point to James Bach. Developers might point to Martin Fowler. And so on. It’s still important not to get too stuck on the people however. This overall set of ideas came to be known as quantum mechanics. Once the broader implications of these ideas became known, this revolutionized our picture of the physical world. In fact, it was here that “mechanics” and “electrodynamics” as they were understood up to that point were given the title “classical.” And here we have not just a new way of thinking, but a new way of doing. A new mechanics was on the scene. And yet … Old and New Coexist Quantum mechanics subsumes classical mechanics, which now appears merely as a limiting case that is approximately valid under certain conditions. But those “certain conditions” are pretty wide. 
For example, we still uses Newton’s equations to launch rockets and satellites even though they are only ever going to be an approximation to the true state of affairs. So we don’t necessarily throw out everything we had before. We just realize that we have different levels of abstraction that we can work on. The amount of fidelity — i.e., the correspondence with reality — required determines what level of abstraction we work with. Certainly approaches like TDD and BDD have introduced many layers of abstraction and/or expression. It cannot be overstated that the picture of the world provided by quantum mechanics is completely alien to that of our everyday experience. Our everyday experiences are in a realm of distance and energy scales where classical mechanics is highly accurate, at least so far as we can tell. We have to either consider the very, very small or the very, very large before quantum theory starts to “actually” matter. Think here of the cognitive friction testers or developers go through as they have to become acquainted with a new way of thinking and behaving. A perfect example is idea of BDD where the emphasis is much more on communication and collaboration. But also consider those cases where sometimes we don’t have to go for the “latest and greatest” because the domain we’re working in doesn’t require the “true” or “pure” vision, but rather the one that is “merely” highly accurate. This goes back to what I said before about the ‘big names’ in our industry often only being “approximately right.” The Space of Ideas Okay, let’s not lose our path on the winding trail here. Remember those vectors we talked about before? Yeah, so, it turns out that in quantum mechanics, the state of the world at a given time is not given by a bunch of particles, each with a well-defined position and momentum vector. Rather, the state of the world is given by a mathematical abstraction. Literally. What we have are vectors specified not by three spatial coordinates, but by an infinite number of coordinates. How often have you felt like you started with something on your projects that was relatively simple and then, as you learned more, it felt like you ended up with something approaching infinity? Yeah, me too. I don’t know about you but I have a hard time visualizing an infinite state space. To make things even harder to visualize, these spatial coordinates don’t have the decency to be real numbers. Instead they are called complex numbers, which basically just means a number that has a real and an imaginary part. Try presenting your next set of test metrics that way (“the real part looks bad, granted, but the imaginary part — fantastic!”). Certainly as we get into more distributed architectures, it can be difficult to visualize things. New applications, likewise, demand a level of code complexity that can make it very difficult to determine what is happening. Machine learning, expert systems with rules-based languages, and data science are often dealing with the very complex. This infinite dimensional set of vectors to represent a “state space” was eventually given a name. It was called Hilbert space, named after a mathematician by the name of David Hilbert. The Space of Ideas Needs Simplicity An interesting thing about complexity is that it often presents itself as a fundamental simplicity. I bet all of this above talk can seem very complex if you’ve never been exposed to the ideas. But you can focus on the simplicity initially. 
For example, if you’re trying to impress your spouse and/or significant other how smart you are, you might say that the fundamental conceptual and mathematical structure of quantum mechanics is really quite simple. After all, it only has two components. 1. At every moment in time, the state of a system is completely described by a vector in Hilbert space. 2. Observable quantities correspond to operators on Hilbert space. That’s it! What this simplistic description did is frame itself in terms that can be asked about. For example, going with the second point, the most important of those operators is referred to as the Hamiltonian operator. It basically tells you how the state vector changes with time. Here’s a perfect example where the “business rules” can be stated quite simply and concisely but they introduce terminology that needs context and provides the path to the complexity. The Space of Ideas Also Needs Complexity Okay, hold on! What’s Hilbert space? Basically it’s a Euclidean space — that which we all likely learned about in grade school — generalized to infinite dimensions. Consider that three-dimensional space has three independent dimensions, which we can combine in different amounts to give a specific “address” or “identifier” to any point in space. So to address, or identify, the point (2,4,6) we might say “go two units in the x-dimension, go 4 units in the y-dimension and go 6 units in the z-dimension.” With Hilbert spaces the dimensions can be arbitrary. So, just being a bit silly here, consider the set of all test cases which can be executed an integer amount of times in a given time period. I could, if were a madman, create some arbitrary function by combining all these test cases together. I then consider each test case to be an independent “dimension” in a Hilbert space. The function I created is now a “point” in this abstract space. Huh. Nifty. Okay, so what’s the Hamiltonian operator? The Hamiltonian operator is the thing that “causes translation in time.” This is just jargon for the fact that the Hamiltonian tells you how the system moves forward in time. In particular, how that system movies forward in time as it is describe by something called the Schrödinger equation. Uh … yeah. Okay, and the Schrödinger equation is …? Well, like some other stuff we talked about, it’s a differential equation. That equation is used to find the allowed energy levels of any system that can be described as “quantum.” So going with my test case example, I may have a Hamiltonian operator that tells you how the test case is executed over a time period. That is it’s “translation in time.” The Schrödinger equation would then limit the possible states that the test could be in at any time: it can be, say, passed or failed, but not somewhere in between those states. So if you do what I do and pretend all this makes sense, here’s what you get: the state of any system made up of particles and subject to forces is described by some vector in an infinite dimensional, abstract space (Hilbert space). These vectors have operators applied to them (Hamiltonians) that cause them to evolve in time and equations (Schrödinger) can be applied to all of this to tell us what the allowed energy is as that evolution in time occurs. How many times do we, particularly when working with business, have to figure out a bunch of domain terminology and then try to explain it in some way that is concise, necessarily abstracts away some details, but doesn’t give a false picture of whatever is we’re talking about? 
We often have to come up with examples that are illustrative of what I’m talking about. My test case idea above was one, probably poor, attempt to do just that. Method to the Madness? A bit of an aside, but perhaps a relevant one: a lot of people hear that quantum mechanics is entirely probabilistic. The idea being that the entire world is nothing more than a set of fortuitous happenings based on some sort of “roll of the dice” occurring below our levels of observation. C’mon, be honest: don’t a lot of our projects feel that same way sometimes? However, you should note that the fundamental structure I’ve been describing to you so far is not probabilistic. Rather it’s just as deterministic as the classical mechanics we started off with. It is? Sure. If you know precisely the state vector at a given time, then the Hamiltonian operator tells you precisely what it will be at all later times. The explicit equation you solve to do this is the Schrödinger equation. Sounds pretty deterministic to me. Well, here’s the rub. Probability comes in because you don’t directly measure the state vector. Instead you measure the values of certain states that are called “classical observables.” These are things like position, momentum, and energy. The tricky part is that most states don’t correspond to well-defined values of these observables. In fact, the only states with well-defined values of the classical observables are special ones that are called “eigenstates” of the corresponding operator. Oh, just what I needed: a new term. What’s an eigenstate? This actually gets a little complicated. Just know that it’s basically a quantum mechanical state — described by something called a wave function — and that wave function is an eigenvector — not just any vector, notice — that corresponds to a physical quantity in real space. What all this means is that our operator acts on the state in a very simple way. If you remember that part of these states is an imaginary number, the operator just multiplies the state by some real number. This real number is then the value of the classical observable — such as momentum, for example. So when you perform a measurement, you don’t directly measure the state of a system. Rather, you interact with it in some way that puts the system into one of those special states that does have well-defined values for the particular classical observables that you are measuring. This must sound like testing to you, right? Putting systems in a state so that you can measure something about them. Yet that very act of putting them into a state means, by definition, you are interacting with the thing being observed and thus you might change what can be observed about it. It’s at this very point that probability comes in. This means you can predict only what the probabilities will be for finding various values for the different possible classical observables. I’m willing to bet that many of you were able to follow along with all of this relatively well until we got into the probability and, more particularly, the eigenstates. I presented the material this way on purpose to show that, eventually, you can’t abstract away the details any more. True knowledge does require specialized knowledge of some sort. There does come a point in our work — whether it’s understanding the requirements, understanding how best to apply automation, and so on — where we start to run into cognitive friction.
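If you want to poke at that distinction yourself, here’s a minimal sketch of both halves of the story — deterministic evolution of a state vector, probabilities only at measurement. It’s a toy two-level system of my own invention (think “passed/failed”), assuming Python with NumPy, and not a model of any real physical system.

```python
import numpy as np

# A made-up two-level "system under test": basis states |passed> and |failed>.
# Observables are Hermitian matrices; their eigenstates are the special states
# with well-defined values.
H = np.array([[1.0, 0.3],
              [0.3, 2.0]])          # toy Hamiltonian (units with hbar = 1)

def evolve(psi0, t):
    """Deterministic evolution: psi(t) = exp(-i H t) psi0."""
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    return U @ psi0

psi0 = np.array([1.0, 0.0], dtype=complex)   # start definitely "passed"
psi_t = evolve(psi0, t=5.0)

# Same input, same Hamiltonian -> exactly the same state vector, every time.
assert np.allclose(psi_t, evolve(psi0, 5.0))

# Probability only enters at measurement: project onto the eigenstates of an
# observable (here simply the "passed/failed" basis) and square the amplitudes.
probs = np.abs(psi_t) ** 2
print("P(passed) =", probs[0], " P(failed) =", probs[1], " sum =", probs.sum())
```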
Don’t Count on Your Intuition While I’m necessarily skipping a whole lot of details that could prove what I’m saying, there is actually a fairly important point I hope you understand. The disjunction between the classical observables that you measure and the underlying conceptual structure of the theory is what leads to all sorts of results that violate our intuitions based on classical physics. A corollary I find here is when I try to train testers in modern testing practices, particularly when they have been more “traditional” — dare I say, classical? — testers. They have a very hard time building up their intuition for this brave new world. I’ve talked before about how there are limits to our intuitions in testing. The same applies in physics. Probably one of the most famous examples of this intuition-violation is the uncertainty principle attributed to Werner Heisenberg. This principle’s name originates in the fact that there is no vector in Hilbert space that describes a particle with a well-defined position and momentum. The reason for this is that the position and momentum operators don’t commute. This means that applying them in different orders gives different results. That’s really weird, by the way. It’s like saying a × b does not equal b × a. The result of this oddness is that state vectors that are “eigenstates” for both position and momentum simply don’t exist. Read that again: they literally do not exist. How this applies in the real world is that if you consider the state vector corresponding to a particle at a fixed position, it contains no information at all about the momentum of the particle. Similarly, the state vector for a particle with a precisely known momentum contains no information about where the particle actually is. If you think way back to our vectors in classical mechanics (ah, how simple it all was then), you will realize that this new “quantum” way of viewing the world totally upends everything we know. Which is odd because “everything we know” still seems to work pretty much like it did when classical mechanics was formulated. So we get a double-dose of intuition-violation. (Thank you very much, universe!) Can I Have a Visual for This? If you find a good one, do me a favor and let me know. But here’s one that gets passed around a lot: Well, that’s helpful, huh? What that visual means is “wave functions exist in the context of Hilbert space”. That’s what you got out of that, right? Don’t worry. No one else did either. Well, to be fair, you might get that if you knew the terms. And the context behind them. But you also might not. While states in quantum mechanics can abstractly be thought of as vectors in Hilbert space, a somewhat more concrete alternative is to think of them as “wave functions.” This is a term I introduced above but didn’t really get into. Being simplistic, wave functions are complex-valued functions of the space and time coordinates. When a function is described as “complex-valued” it just means functions whose values are complex numbers. What this wave function idea basically says is that the probability of finding a given particle with one of the classical observable states (say, a particular position) is determined by all of the values of the wave function (throughout all space) as well as the localized values of the wave function (at each point in space). So you end up with this kind of visual: I talked about the dogma and tradition problems with visuals in testing. 
I bet it comes as no surprise at all to you that other domains have this same problem. Compare the words I’ve used and the recent visuals I’ve shown. If you have a problem drawing the correspondence, fear not: you’re normal. But contrast this with how our visuals earlier on did seem to correspond fairly well to what we were talking about. Unified Vision for Explanation As all of these thoughts on physics were developing, there were two competing views of quantum mechanics. One was the wave mechanics proposed by Schrödinger and the other was referred to as matrix mechanics, led mostly by Heisenberg and a few other folks. (Max Born, Pascual Jordan, and Paul Dirac, if curious.) It turns out they both work the same, just using different methods to get to the same conclusions. The point I want to make, however, is that as this picture continued to emerge, you might notice how we’re bringing in specific determined states — like the particles of classical mechanics — and the waves — like those in classical electrodynamics. We’re also correlating these new functions to the previous idea of vectors, which applied in both mechanics and dynamics. The reason this matters is because, depending on context, it can be useful to think of quantum mechanical states either as abstract vectors in Hilbert space or as wave functions. These two points of view are completely equivalent. Remember before how I kept asking about whether what we were dealing with is equivalent? Well, here’s an example of where it can be — you just need those higher level ideas to tie everything together. But, again, you probably didn’t get that from the visuals. Then again, you probably didn’t get that from any of my explanations either. But I hope, at the very least, that you started to see the convergence of the ideas. Are You Done Yet? (Please be done.) Yeah, I think we’re at a good point to close out here. So did, as the title of this post states, testing help us understand physics? Or was it the other way around? A key thing to understand is that all of these ideas — regardless of which approach you take and which visuals you use and what explanations you suffer through — require a lot of effort to develop intuitions about. However, that practice of developing intuition, and using those intuitions to further your own insights, is critically important. And that’s what I hope this post helped you do a little bit. One of my goals is to get people to start thinking a bit more broadly about the discipline of testing as well as quality assurance. Sometimes one of the most effective ways to do that is to present the context of testing as embedded within the thinking of a different discipline. 6 Responses to Testing Helps Us Understand Physics 1. Tatiana says: I mostly followed all this. The whole eigenstate concept makes very little sense to me so I was glad to see you affirm this was a likely spot we might get confused! 🙂 Two things you said make me wonder about the equivalence point you were trying to draw. “As it turned out, the solutions to Maxwell’s equations could be quite complicated” “What Maxwell’s equations did was describe simultaneously two seemingly different sorts of things — the interactions of charged particles and the phenomenon of light.” But was there really an equivalence class here then? It sounds like you would have different equations.
So this reads to me like they would be tests that work one way (in a test environment; “charged particles”) but another way (in a production environment, “vacuum”). • Jeff Nyman says: Indeed, yes, they are an equivalence. And, yes, you also have different equations. There are many ways — all of them of equal validity — that these equations can be written. Which particular way you want to use really depends on the application you are going for. The most popular incarnation of the equations, and the one that Heaviside formulated for Maxwell, has to do with how electrostatics was merged with magnetism, giving us electromagnetism. But a key point is that regardless of which equation set you use, they give identical results. This is in the same way that the wave mechanics and matrix mechanics formulations of quantum mechanics similarly give the same results, even though the formulations are very different. If something didn’t work in one, it wouldn’t work in another and vice versa. Thus like an equivalence class in testing wherein if a bug is found with one test, it will be found with another — if those tests are in the equivalence class. So, in your scenario, if you have tests that respond differently in that test environment than they do in the production environment, it’s a clear sign that there is not an equivalence in something. It could be a configuration in the environment. It could be due to the lack of certain data that the tests rely on. It could be due to certain sensitivities in one environment that are not present in another. 2. Bob Lane says: So part of what I get out of this is that Newton’s theories still work but we now know they are incomplete or wrong. But how can they work if they are wrong? • Jeff Nyman says: Putting on the science hat, it has to do with reductionist observation and the use of approximation in the reliability of measurement over (relatively) short-term effects. Consider that Copernicus pretty much demolished any notion of an Earth-centered system. Einstein smacked around Newton’s illusion of absolute space and time. Quantum theory put a somewhat decisive end to the notion of a controllable measurement process. Yet prior to those developments, those ideas — geocentrism, absolute space and time, always measurable processes — did work in some very real sense. They explained observations and even allowed certain amount of prediction. They simply had a breaking point where the scale they were considering was no longer “short-term.” Putting on the testing hat, Newton’s mechanics are an abstraction. Just like we may have, say, a series of specifically worded tests about a particular aspect of our application. For example, say we have a major billing component. I can write tests about that billing component but billing, as a concept, is not just spatial in nature; it’s temporal. So any tests I write are, by definition, a static representation of the temporal duration that is the reality of how billing works. The tests are correct insofar as they allow me to accurately test aspects of billing. But there is a wider aspect that they don’t capture. When I reach that wider aspect, I need to refine the tests. 3. George Dekruif says: Sheesh, reminders of the school days. I never did quite get all the wave functions and probabilities. Good read, though. • Jeff Nyman says: Yeah, it’s by no means intuitive in many cases. 
One way that I sometimes present it to people — when it’s okay to skimp on details — is just to say that in quantum mechanics, this thing called the wave function essentially stores up a lot of complex numbers to represent particles. These numbers can be added and combined in lots of ways and, when they are, the absolute values of the squares of these sums are interpreted as probabilities. The probability can be visualized as an amplitude: the higher the amplitude, the higher the probability of finding the particle right there or the more likely the particle has an energy of this or a momentum of that. The wave function is (apparently, anyway) nature’s way of encapsulating a certain amount of complexity but providing an API of sorts that we can call on. Each call is like a REST endpoint that returns us something tangible. So if I want to know something about a particle, I query the API. But the trick is that I can’t query the API about two “complementary” aspects of the particle. So I can’t query for its speed and its direction. I can only query for one of those. To be sure, I can query for an approximation of, say, the speed. And that will then allow me to have at least an approximation of the direction. But the more I query one aspect and the more refined my knowledge becomes, the less I can query the other aspect. So I can’t have “complete knowledge” of the system under observation/test.
How to derive the Schrodinger equation for a system with position dependent effective mass? For example, I encountered this equation when I first studied semiconductor hetero-structures. All the books that I referred to take the equation as the starting point without at least giving an insight into the form of the Schrodinger equation which reads as $$\big[-\frac{\hbar^2}{2}\nabla \frac{1}{m^*}\nabla + U \big]\Psi ~=~ E \Psi. $$ I feel that it has something to do with conserving the probability current and should use the continuity equation, but I am not sure. Hi ballkikhaal - I edited out the part of your question asking about a book, because we limit the number of book recommendation questions on the site. See if anything in this question helps you. –  David Z Oct 10 '12 at 16:23 5 Answers The derivation is straightforward if you consider the source of the effective mass is a slowly varying hopping parameter on a tight-binding (lattice particle) model. Here you have a particle on a square lattice with a probability amplitude to go left, right, up and down, forward, backwards. The main physical requirement is Hermiticity, which in 1d can be used (with a phase choice on the wavefunction) to make the hopping amplitudes everywhere real. Once you do this, there is a real amplitude at site n to hop one square to the right r(n) and an amplitude to hop one square to the left, which by hermiticity and reality, must be r(n-1)--- it must be the complex conjugate of the amplitude to hop right from position n-1. So the amplitude equation is $$ i{dC\over dt} = r(n-1) C_{n-1} - (r(n-1)+r(n))C_n + r(n) C_{n+1} $$ This is, when r is slowly varying, equivalent to the continuum equation found by Taylor expanding and keeping only the most relevant terms: $$ i {d\psi \over dt} = {1\over 2} {\partial \over \partial x} (r(x) {\partial\over \partial x} \psi(x)) $$ As Feynman noted but never published (Dyson published this comment posthumously, in a paper in American Journal of Physics titled something like "Feynman's derivation of the Maxwell equations from Schrodinger equation"), Dirac's phase trick doesn't work in higher dimensions, because you can't fix all the phases. Then the commutators have a magnetic field addition, and to make it consistent, the magnetic field has to end up obeying Maxwell's equation, since the phase rotation gives a U(1) symmetry. This is not a true derivation of Maxwell's physics from quantum mechanics, it is just a way of showing you need the extra assumption of CP invariance to make the hopping hamiltonian real (which is true). Then with the extra assumption, you just get $$ i {d\psi\over dt} = {1\over 2} \nabla \cdot (t(x) \nabla \psi) + V(x) \psi $$ Where I have added back the potential. This is the continuum limit of a tight binding model with spatially slowly varying hopping, or inverse effective mass. A Hamiltonian must be self-adjoint. The equation must also reduce to the familiar equation in the case of a constant mass. Now the form of the operator is already determined as the only simple self-adjoint generalization of the position-independent Schroedinger equation to the position dependent case. If you specialize to 1 dimension, you get the Sturm-Liouville equation. At you can find a discussion of its self-adjointness. Everything generalizes to the PDE case. For derivation of the PDM Schrodinger equation see K. Young, Phys. Rev.
B 39, 13434–13441 (1989) "Position-dependent effective mass for inhomogeneous semiconductors". Abstract: A systematic approach is adopted to extract an effective low-energy Hamiltonian for crystals with a slowly varying inhomogeneity, resolving several controversies. It is shown that the effective mass $m_R$ is, in general, position dependent and enters the kinetic energy operator as $-\nabla (m_R^{-1}) \nabla/2$. The advantage of using a basis set that exactly diagonalizes the Hamiltonian in the homogeneous limit is emphasized. Link in answer (v2) behind paywall. @MKB: In the future please link to abstract pages rather than pdf files, e.g. prb.aps.org/abstract/PRB/v39/i18/p13434_1 –  Qmechanic Nov 30 '12 at 22:54 In addition to Claudius' and Ron Maimon's answers, I would like to make three comments: 1. Classically, the Hamiltonian function for the effective mass approximation reads $$\tag{1} H({\bf r}, {\bf p})~:=~\frac{{\bf p}^2}{2m^*({\bf r})}+V({\bf r}).$$ 2. Quantum mechanically, when one quantizes the classical model (1), one should pick a self-consistent choice for the Hamiltonian operator $\hat{H}$. It is natural to replace the classical variables ${\bf r}$ and ${\bf p}$ in the Hamiltonian (1) with the operators $$\tag{2}\hat{\bf r}~=~{\bf r} \qquad\text{and}\qquad \hat{\bf p}~=~\frac{\hbar}{i}\nabla $$ (in the Schrodinger representation). But which operator ordering prescription should one choose? One natural choice, which (under appropriate boundary conditions) makes the Hamiltonian Hermitian, is $$\tag{3} \hat{H}~:=~\hat{\bf p}\cdot \frac{1}{2m^*(\hat{\bf r})}\hat{\bf p}+V(\hat{\bf r})~=~-\frac{\hbar^2}{2}\nabla\cdot \frac{1}{m^*({\bf r})}\nabla+V({\bf r}).$$ 3. Finally, let us mention a somewhat related/generalized Hermitian Hamiltonian operator $$\tag{4} \hat{H}~=~-\frac{\hbar^2}{2}\Delta_g +V({\bf r}), $$ which may give another useful (anisotropic) effective mass model. Here $\Delta_g$ is the Laplace-Beltrami operator for a Riemannian $3\times 3$ metric $g_{ij}=g_{ij}({\bf r})$, which, roughly speaking, may be viewed as an (anisotropic) effective mass tensor. I would be very surprised if you managed to find a strict mathematical derivation of the Schrödinger equation anywhere – at least I have not encountered one until now. However, it might be worth pointing out that the ‘general’ time-dependent Schrödinger equation, which is often taken as an axiom of quantum mechanics, is usually $$i \hbar \partial_t \Psi = \hat H \Psi \quad .$$ In the case of a stationary Hamiltonian (usually $U(x,t) \equiv U(x)$), this equation separates and you get the stationary Schrödinger equation, namely $$ \hat H \Psi = E \Psi \quad ,$$ that is, an eigenvalue equation for the Hamiltonian. Given this equation, it is then relatively simple to work out the form of the Hamiltonian (in your case, $-\frac{\hbar^2}{2} \nabla \frac{1}{m^\star} \nabla + U$) and plug it into the equation. The exact form of the Hamiltonian is usually guesswork based on observation and analogies to classical mechanics. In general, we have $$ \hat H = \hat T + \hat U $$ where $\hat T$ and $\hat U$ denote the operators for kinetic and potential energy, correspondingly. It is worth noting that you can derive the continuity equation (which is identical to probability conservation in this case) from the Schrödinger equation by adding the complex conjugate of the Schrödinger equation to itself.
thank you for the answer but what i am asking is how to derive it within the regime of effective mass approximation and that too when the mass has a spatial profile... for example in the case of Al/GaAs high electron mobility transistor we have a position dependent mass. –  baalkikhaal Oct 10 '12 at 16:55 Are you looking for a derivation of the Hamiltonian $\hat H$ or the Schrödinger equation? I am positive that neither probability conservation nor the continuity equation have anything to do with the earlier. –  Claudius Oct 10 '12 at 16:57 I have come across this equation in Hamaguchi on page 347 <books.google.co.in/…; –  baalkikhaal Oct 10 '12 at 18:13 Where do you think Schrodinger equations come from, if not by some sort of a derivation? –  Ron Maimon Oct 10 '12 at 19:31 The answer is not by guesswork, it is from the tight-binding approximation with CP invariance to guarantee that the hopping parameter is real, and then Hermiticity guarantees the hopping is symmetric and equal to the given Hamiltonian. If the hopping is slowly locally varying, then you get the Hamiltonian they say. The Schrodinger equation which is axiomatic is not as specific as the Schrodinger equation in space, which has a specific ansatz for the kinetic term which can be justified from tight binding, as Feynman does in his lectures. –  Ron Maimon Oct 10 '12 at 19:47
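To make the Hermitian ordering in eq. (3) concrete, here is a minimal finite-difference sketch (assuming Python with NumPy, $\hbar = 1$, and a made-up mass profile and potential — an illustration of the idea rather than a reference implementation): build the matrix for $-\frac{d}{dx}\frac{1}{2m^*(x)}\frac{d}{dx} + U(x)$ from link ("hopping") terms, confirm it is symmetric, and diagonalize it.

```python
import numpy as np

# 1D grid (arbitrary toy units, hbar = 1)
N, L = 400, 10.0
x = np.linspace(0.0, L, N)
h = x[1] - x[0]

m = 1.0 + 0.5 * np.sin(2 * np.pi * x / L) ** 2   # made-up position-dependent mass m*(x)
U = 0.5 * (x - L / 2) ** 2                       # made-up confining potential

# Symmetric discretization of -d/dx [1/(2 m(x))] d/dx using the mass at midpoints,
# the lattice analogue of the slowly varying "hopping" in the answers above.
m_half = 0.5 * (m[:-1] + m[1:])                  # m* on the links between grid points
t = 1.0 / (2.0 * m_half * h ** 2)                # link "hopping" amplitudes

H = np.zeros((N, N))
H[np.arange(N - 1), np.arange(1, N)] = -t        # hop right
H[np.arange(1, N), np.arange(N - 1)] = -t        # hop left (same real amplitude)
H[np.arange(N), np.arange(N)] = U
H[0, 0] += t[0]                                  # kinetic diagonal terms
H[np.arange(1, N - 1), np.arange(1, N - 1)] += t[:-1] + t[1:]
H[-1, -1] += t[-1]

assert np.allclose(H, H.T)                       # Hermitian by construction
E = np.linalg.eigvalsh(H)                        # hence real eigenvalues
print("lowest few energies:", E[:5])
```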
Deterministic system From Wikipedia, the free encyclopedia In mathematics and physics, a deterministic system is a system in which no randomness is involved in the development of future states of the system.[1] A deterministic model will thus always produce the same output from a given starting condition or initial state.[2] Physical laws that are described by differential equations represent deterministic systems, even though the state of the system at a given point in time may be difficult to describe explicitly. In quantum mechanics, the Schrödinger equation, which describes the continuous time evolution of a system's wave function, is deterministic. However, the relationship between a system's wave function and the observable properties of the system appears to be non-deterministic. The systems studied in chaos theory are deterministic. If the initial state were known exactly, then the future state of such a system could theoretically be predicted. However, in practice, knowledge about the future state is limited by the precision with which the initial state can be measured, and chaotic systems are characterized by a strong dependence on the initial conditions. Markov chains and other random walks are not deterministic systems, because their development depends on random choices. A finite state machine may be either deterministic or non-deterministic. A pseudorandom number generator is a deterministic algorithm, although its evolution is deliberately made hard to predict. A hardware random number generator, however, may be non-deterministic. In economics, the Ramsey–Cass–Koopmans model is deterministic. The stochastic equivalent is known as Real Business Cycle theory. 1. ^ deterministic system - definition at The Internet Encyclopedia of Science 2. ^ Dynamical systems at Scholarpedia
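As a toy illustration of the point that deterministic does not mean practically predictable (Python, arbitrary parameters, not drawn from the references above): the logistic map is a fixed rule, yet two initial conditions differing by one part in 10^12 diverge after a few dozen iterations.

```python
# Deterministic but hard to predict: the logistic map x -> r*x*(1-x) is a fixed
# rule, yet tiny initial differences grow roughly exponentially.
def logistic_orbit(x0, r=3.9, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.200000000000)
b = logistic_orbit(0.200000000001)   # initial state differs by 1e-12

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}:  |a-b| = {abs(a[n] - b[n]):.3e}")
# Rerunning gives identical output every time (deterministic), but the gap
# between the two orbits grows until long-range prediction would need
# impossibly precise knowledge of the initial state.
```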
Syllabus for physical science June 2011 I. Mathematical Methods of Physics Dimensional analysis. Vector algebra and vector calculus. Linear algebra, matrices, Cayley-Hamilton Theorem. Eigenvalues and eigenvectors. Linear ordinary differential equations of first & second order, Special functions (Hermite, Bessel, Laguerre and Legendre functions). Fourier series, Fourier and Laplace transforms. Elements of complex analysis, analytic functions; Taylor & Laurent series; poles, residues and evaluation of integrals. Elementary probability theory, random variables, binomial, Poisson and normal distributions. Central limit theorem. II. Classical Mechanics Newton's laws. Dynamical systems, Phase space dynamics, stability analysis. Central force motions. Two body Collisions - scattering in laboratory and Centre of mass frames. Rigid body dynamics- moment of inertia tensor. Non-inertial frames and pseudoforces. Variational principle. Generalized coordinates. Lagrangian and Hamiltonian formalism and equations of motion. Conservation laws and cyclic coordinates. Periodic motion: small oscillations, normal modes. Special theory of relativity- Lorentz transformations, relativistic kinematics and mass–energy equivalence. III. Electromagnetic Theory Electrostatics: Gauss's law and its applications, Laplace and Poisson equations, boundary value problems. Magnetostatics: Biot-Savart law, Ampere's theorem. Electromagnetic induction. Maxwell's equations in free space and linear isotropic media; boundary conditions on the fields at interfaces. Scalar and vector potentials, gauge invariance. Electromagnetic waves in free space. Dielectrics and conductors. Reflection and refraction, polarization, Fresnel's law, interference, coherence, and diffraction. Dynamics of charged particles in static and uniform electromagnetic fields. IV. Quantum Mechanics Wave-particle duality. Schrödinger equation (time-dependent and time-independent). Eigenvalue problems (particle in a box, harmonic oscillator, etc.). Tunneling through a barrier. Wave-function in coordinate and momentum representations. Commutators and Heisenberg uncertainty principle. Dirac notation for state vectors. Motion in a central potential: orbital angular momentum, angular momentum algebra, spin, addition of angular momenta; Hydrogen atom. Stern-Gerlach experiment. Time-independent perturbation theory and applications. Variational method. Time dependent perturbation theory and Fermi's golden rule, selection rules. Identical particles, Pauli exclusion principle, spin-statistics connection. V. Thermodynamic and Statistical Physics Laws of thermodynamics and their consequences. Thermodynamic potentials, Maxwell relations, chemical potential, phase equilibria. Phase space, micro- and macro-states. Micro-canonical, canonical and grand-canonical ensembles and partition functions. Free energy and its connection with thermodynamic quantities. Classical and quantum statistics. Ideal Bose and Fermi gases. Principle of detailed balance. Blackbody radiation and Planck's distribution law. VI. Electronics and Experimental Methods Semiconductor devices (diodes, junctions, transistors, field effect devices, homo- and hetero-junction devices), device structure, device characteristics, frequency dependence and applications. Opto-electronic devices (solar cells, photo-detectors, LEDs). Operational amplifiers and their applications. Digital techniques and applications (registers, counters, comparators and similar circuits). A/D and D/A converters. 
Microprocessor and microcontroller basics. Data interpretation and analysis. Precision and accuracy. Error analysis, propagation of errors. Least squares fitting I. Mathematical Methods of Physics Green's function. Partial differential equations (Laplace, wave and heat equations in two and three dimensions). Elements of computational techniques: root of functions, interpolation, extrapolation, integration by trapezoid and Simpson's rule, Solution of first order differential equation using Runge-Kutta method. Finite difference methods. Tensors. Introductory group theory: SU(2), O(3). II. Classical Mechanics Dynamical systems, Phase space dynamics, stability analysis. Poisson brackets and canonical transformations. Symmetry, invariance and Noether's theorem. Hamilton-Jacobi theory. III. Electromagnetic Theory Dispersion relations in plasma. Lorentz invariance of Maxwell's equation. Transmission lines and wave guides. Radiation- from moving charges and dipoles and retarded potentials. IV. Quantum Mechanics Spin-orbit coupling, fine structure. WKB approximation. Elementary theory of scattering: phase shifts, partial waves, Born approximation. Relativistic quantum mechanics: Klein-Gordon and Dirac equations. Semi-classical theory of radiation. V. Thermodynamic and Statistical Physics First- and second-order phase transitions. Diamagnetism, paramagnetism, and ferromagnetism. Ising model. Bose-Einstein condensation. Diffusion equation. Random walk and Brownian motion. Introduction to nonequilibrium processes. VI. Electronics and Experimental Methods Linear and nonlinear curve fitting, chi-square test. Transducers (temperature, pressure/vacuum, magnetic fields, vibration, optical, and particle detectors). Measurement and control. Signal conditioning and recovery. Impedance matching, amplification (Op-amp based, instrumentation amp, feedback), filtering and noise reduction, shielding and grounding. Fourier transforms, lock-in detector, box-car integrator, modulation techniques. High frequency devices (including generators and detectors). VII. Atomic & Molecular Physics Quantum states of an electron in an atom. Electron spin. Spectrum of helium and alkali atom. Relativistic corrections for energy levels of hydrogen atom, hyperfine structure and isotopic shift, width of spectrum lines, LS & JJ couplings. Zeeman, Paschen-Bach & Stark effects. Electron spin resonance. Nuclear magnetic resonance, chemical shift. Frank-Condon principle. Born-Oppenheimer approximation. Electronic, rotational, vibrational and Raman spectra of diatomic molecules, selection rules. Lasers: spontaneous and stimulated emission, Einstein A & B coefficients. Optical pumping, population inversion, rate equation. Modes of resonators and coherence length. VIII. Condensed Matter Physics Bravais lattices. Reciprocal lattice. Diffraction and the structure factor. Bonding of solids. Elastic properties, phonons, lattice specific heat. Free electron theory and electronic specific heat. Response and relaxation phenomena. Drude model of electrical and thermal conductivity. Hall effect and thermoelectric power. Electron motion in a periodic potential, band theory of solids: metals, insulators and semiconductors. Superconductivity: type-I and type-II superconductors. Josephson junctions. Superfluidity. Defects and dislocations. Ordered phases of matter: translational and orientational order, kinds of liquid crystalline order. Quasi crystals. IX. 
Nuclear and Particle Physics Basic nuclear properties: size, shape and charge distribution, spin and parity. Binding energy, semi-empirical mass formula, liquid drop model. Nature of the nuclear force, form of nucleon-nucleon potential, charge-independence and charge-symmetry of nuclear forces. Deuteron problem. Evidence of shell structure, single-particle shell model, its validity and limitations. Rotational spectra. Elementary ideas of alpha, beta and gamma decays and their selection rules. Fission and fusion. Nuclear reactions, reaction mechanism, compound nuclei and direct reactions. Classification of fundamental forces. Elementary particles and their quantum numbers (charge, spin, parity, isospin, strangeness, etc.). Gell-Mann–Nishijima formula. Quark model, baryons and mesons. C, P, and T invariance. Application of symmetry arguments to particle reactions. Parity non-conservation in weak interaction. Relativistic kinematics.
Anderson localization From Wikipedia, the free encyclopedia In condensed matter physics, Anderson localization, also known as strong localization, is the absence of diffusion of waves in a disordered medium. This phenomenon is named after the American physicist P. W. Anderson, who was the first one to suggest the possibility of electron localization inside a semiconductor, provided that the degree of randomness of the impurities or defects is sufficiently large.[1] Anderson localization is a general wave phenomenon that applies to the transport of electromagnetic waves, acoustic waves, quantum waves, spin waves, etc. This phenomenon is to be distinguished from weak localization, which is the precursor effect of Anderson localization (see below), and from Mott localization, named after Sir Nevill Mott, where the transition from metallic to insulating behaviour is not due to disorder, but to a strong mutual Coulomb repulsion of electrons. In the original Anderson tight-binding model, the evolution of the wave function $\psi$ on the d-dimensional lattice $\mathbb{Z}^d$ is given by the Schrödinger equation $i \hbar \dot{\psi} = H \psi$, where the Hamiltonian H is given by $(H \phi)(j) = E_j \phi(j) + \sum_{k \neq j} V(|k-j|) \phi(k)$, with $E_j$ random and independent, and interaction $V(r)$ falling off as $r^{-2}$ at infinity. For example, one may take $E_j$ uniformly distributed in $[-W, +W]$, and $V(|r|) = \begin{cases} 1, & |r| = 1 \\ 0, & \text{otherwise.} \end{cases}$ Starting with $\psi_0$ localised at the origin, one is interested in how fast the probability distribution $|\psi|^2$ diffuses. Anderson's analysis shows the following: • if d is 1 or 2 and W is arbitrary, or if d ≥ 3 and W/ħ is sufficiently large, then the probability distribution remains localized: $\sum_{n \in \mathbb{Z}^d} |\psi(t,n)|^2 |n| \leq C$ uniformly in t. This phenomenon is called Anderson localization. • if d ≥ 3 and W/ħ is small, $\sum_{n \in \mathbb{Z}^d} |\psi(t,n)|^2 |n| \approx D \sqrt{t}$, where D is the diffusion constant. Example of a multifractal electronic eigenstate at the Anderson localization transition in a system with 1367631 atoms. The phenomenon of Anderson localization, particularly that of weak localization, finds its origin in the wave interference between multiple-scattering paths. In the strong scattering limit, the severe interferences can completely halt the waves inside the disordered medium. For non-interacting electrons, a highly successful approach was put forward in 1979 by Abrahams et al.[2] This scaling hypothesis of localization suggests that a disorder-induced metal-insulator transition (MIT) exists for non-interacting electrons in three dimensions (3D) at zero magnetic field and in the absence of spin-orbit coupling. Much further work has subsequently supported these scaling arguments both analytically and numerically (Brandes et al., 2003; see Further Reading). In 1D and 2D, the same hypothesis shows that there are no extended states and thus no MIT. However, since 2 is the lower critical dimension of the localization problem, the 2D case is in a sense close to 3D: states are only marginally localized for weak disorder and a small magnetic field or spin-orbit coupling can lead to the existence of extended states and thus an MIT.
Consequently, the localization lengths of a 2D system with potential-disorder can be quite large so that in numerical approaches one can always find a localization-delocalization transition when either decreasing system size for fixed disorder or increasing disorder for fixed system size. Most numerical approaches to the localization problem use the standard tight-binding Anderson Hamiltonian with onsite-potential disorder. Characteristics of the electronic eigenstates are then investigated by studies of participation numbers obtained by exact diagonalization, multifractal properties, level statistics and many others. Especially fruitful is the transfer-matrix method (TMM) which allows a direct computation of the localization lengths and further validates the scaling hypothesis by a numerical proof of the existence of a one-parameter scaling function. Direct numerical solution of Maxwell equations to demonstrate Anderson localization of light has been implemented (Conti and Fratalocchi, 2008). The phenomenon has also been observed in numerical simulation of the non-relativistic Schrödinger equation. Experimental evidence Two reports of Anderson localization of light in 3D random media exist up to date (Wiersma et al., 1997 and Storzer et al., 2006; see Further Reading), even though absorption complicates interpretation of experimental results (Scheffold et al., 1999). Anderson localization can also be observed in a perturbed periodic potential where the transverse localization of light is caused by random fluctuations on a photonic lattice. Experimental realizations of transverse localization were reported for a 2D lattice (Schwartz et al., 2007) and a 1D lattice (Lahini et al., 2006). Transverse Anderson localization of light has also been demonstrated in an optical fiber medium (Karbasi et al., 2012) and has also been used to transport images through the fiber (Karbasi et al., 2014). It has also been observed by localization of a Bose–Einstein condensate in a 1D disordered optical potential (Billy et al., 2008; Roati et al., 2008). Anderson localization of elastic waves in a 3D disordered medium has been reported (Hu et al., 2008). The observation of the MIT has been reported in a 3D model with atomic matter waves (Chabé et al., 2008). Random lasers can operate using this phenomenon. 1. ^ Anderson, P. W. (1958). "Absence of Diffusion in Certain Random Lattices". Phys. Rev. 109 (5): 1492–1505. Bibcode:1958PhRv..109.1492A. doi:10.1103/PhysRev.109.1492. 2. ^ Abrahams, E.; Anderson, P.W.; Licciardello, D.C.; Ramakrishnan, T.V. (1979). "Scaling Theory of Localization: Absence of Quantum Diffusion in Two Dimensions". Phys. Rev. Lett. 42 (10): 673–676. Bibcode:1979PhRvL..42..673A. doi:10.1103/PhysRevLett.42.673. Further reading • Brandes, T. & Kettemann, S. (2003). "The Anderson Transition and its Ramifications --- Localisation, Quantum Interference, and Interactions". Berlin: Springer Verlag
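As a toy illustration of the exact-diagonalization approach mentioned above (Python/NumPy with arbitrary parameters, not code from any of the cited works): diagonalize a 1D Anderson chain with on-site energies uniform in [−W, +W] and nearest-neighbour hopping 1, and use the inverse participation ratio Σ|ψ(n)|⁴ as a crude localization measure — it grows markedly with the disorder strength W.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_ipr(W, N=400):
    """Average inverse participation ratio of the eigenstates of a 1D
    Anderson chain: on-site energies uniform in [-W, W], hopping = 1."""
    E_onsite = rng.uniform(-W, W, size=N)
    H = np.diag(E_onsite)
    H += np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    _, vecs = np.linalg.eigh(H)
    ipr = np.sum(np.abs(vecs) ** 4, axis=0)   # ~1/N for extended, ~O(1) for localized states
    return ipr.mean()

for W in (0.5, 2.0, 8.0):
    print(f"W = {W:4.1f}   mean IPR = {mean_ipr(W):.4f}")
# Larger W gives a larger mean IPR, i.e. eigenstates concentrated on fewer sites.
```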
Every so often mariners report the sighting of a huge wave towering up to 30 m above the regular swells of the ocean surface. No one is sure why these rogue waves form, but now physicists in the US and Germany have managed to produce equivalent optical rogue waves by launching laser pulses into photonic-crystal fibres. Having performed computer simulations of the optical system, the researchers suggest that optical rogue waves, and therefore oceanic rogue waves, are seeded by noise. A photonic-crystal fibre is a transparent strand containing hundreds of regularly-spaced air holes running throughout its length. The alternating refractive index produced by this structure has a non-linear effect on light waves, shifting their frequency depending on the wave intensity. When a wave pulse — which comprises many waves with a bell-shaped distribution of frequencies — enters a photonic crystal fibre, its frequency spectrum is broadened. Rogue waves are examples of wave pulses, but their short, sharp nature requires too broad a frequency spectrum to be produced by this process alone. Now, Daniel Solli and colleagues at the University of California at Los Angeles, together with Claus Ropers from the Max Born Institute for Non-linear Optics and Short Pulse Spectroscopy in Berlin, have discovered that noise on either side of a wave pulse’s frequency spectrum can occasionally strike just the right wavelength and intensity to make the broadening process in a photonic-crystal fibre much faster, leading to the production of an optical rogue wave (Nature 450 1054). “Understanding optical rogue waves can help us to understand the oceanic phenomenon,” Ropers told physicsworld.com. “It could, in the future, enable us to predict when and where oceanic waves form.” Unusual distribution The first hint of the underlying cause of optical rogue waves came when Solli, Ropers and colleagues used a laser to send trains of pulses into a photonic fibre without attempting to reduce noise. They found that more high-amplitude rogue waves were produced than would be expected from a usual Gaussian distribution. To make sense of the findings, the researchers modelled the pulses’ propagation numerically using the non-linear Schrödinger equation. It appeared that, with the photonic-crystal fibre shifting all the frequencies differently, sometimes the noise by chance sums with the main pulse to make it very broad. As soon as this happens, part of the pulse detaches into a soliton — a special type of wave pulse that resists having its shape modified while it propagates because of a balancing act between dispersion and the fibre’s properties. Taking much of the main pulse’s energy and having a very broad frequency spectrum, this soliton stands out from the pulse train as a rogue wave. Similar mathematics How is this analogous to the ocean? Just like a photonic-crystal fibre, the ocean is a non-linear medium. It also has a lot of noise, produced, among other things, by the bombardment of wind. “The mathematics that describes optical wave production is extremely similar to that which describes water waves in the deep sea,” said Solli. Still, to check if rogue waves really are produced by the same mechanism, scientists will have to find ways of accurately measuring the parameters of the non-linear Schrödinger equation — the degree of non-linearity and dispersion — for the open ocean.
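The simulations reported above include the fibre's real dispersion profile, Raman scattering and other higher-order effects, none of which are reproduced here. Purely as a toy sketch of the kind of computation involved (Python/NumPy, made-up parameters), the following split-step Fourier integrator propagates a weakly noisy pulse under the focusing one-dimensional non-linear Schrödinger equation i∂u/∂z + ½∂²u/∂t² + |u|²u = 0 and records the peak intensity of each realization.

```python
import numpy as np

# Toy focusing NLSE: i u_z + 0.5 u_tt + |u|^2 u = 0, split-step Fourier method.
Nt, T = 1024, 40.0
t = np.linspace(-T / 2, T / 2, Nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(Nt, d=t[1] - t[0])   # angular frequencies

dz, steps = 0.005, 2000
rng = np.random.default_rng(1)

def propagate(u):
    lin = np.exp(-0.5j * w**2 * dz)                  # dispersion step in Fourier space
    for _ in range(steps):
        u = u * np.exp(1j * np.abs(u)**2 * dz / 2)   # half nonlinear step
        u = np.fft.ifft(lin * np.fft.fft(u))         # full linear step
        u = u * np.exp(1j * np.abs(u)**2 * dz / 2)   # half nonlinear step
    return u

peaks = []
for _ in range(20):                                  # many noisy copies of the same pulse
    u0 = np.exp(-t**2) * (1 + 0.01 * rng.standard_normal(Nt))
    peaks.append(np.max(np.abs(propagate(u0))**2))

print("peak intensities:", np.round(sorted(peaks), 2))
# The spread of peak intensities across realizations hints at how noise seeding
# can matter; the full supercontinuum simulations show far heavier-tailed statistics.
```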
History or Physics? Nobody can quite say which of these two men’s great words outdoes the other. As Thomas Carlyle said, “History may be called, more generally still, the Message, verbal or written, which all Mankind delivers to everyman,” or, as Lord Kelvin himself put it, “When you can measure what you are talking about and express it in numbers, you know something about it.” Although I knew the answer, I have been pondering over the need to study history ever since fifth grade when I first remember studying it. I still have no justification to study it except for the sake of entertainment. But a few learned men beg to differ. While I have pondered over this question for quite a while now, it was not until I came upon a similar question on another website recently that I decided to write about it. History or Physics? The website asked. And that set me thinking once again! Why study history? Apparently, there are many reasons to study/pursue the study of history. The American Historical Association has come up with an extensive list, from which I will pick the five most important of the lot: 1. History helps us understand people and society. In the first place, history offers a storehouse of information about how people and societies behave. Understanding the operations of people and societies is difficult, though a number of disciplines make the attempt. Why study physics? As I have said already, I side with physics. The reasons, most as stated by the University of Saskatchewan, are more than just convincing; I have, however, contained my enthusiasm and picked five: 1. Physics is the most fundamental of the sciences. It is concerned with the most basic building blocks of all things in existence. It explores the very fabric of nature and is the foundation on which other sciences stand. In a strict, true sense, every other scientific discipline is basically a form of applied physics. 2. Physics is beautiful. Physicists love simplicity and elegance. They are constantly striving to find the most fundamental ideas that can be used to describe even the most complex of phenomena. For example, Newton found that only a very small number of concepts could be used to describe just about all of the mechanical world – from steam engines to the motion of the planets. Not only is this beautiful, it’s downright amazing! 3. Physics encourages one to think and question. This might seem like a strange statement. The study of all subjects teaches you to think. But because physics deals with the most basic concepts, the application of such techniques as “Separation of Variables” and “The Scientific Method” is never more clear than it is in the study of physics. Once mastered you will find that these methods can be applied to all subjects, including the business world and just coping with everyday life. 4. Physics gives one a new appreciation of the world. You can look at a rainbow and say “Wow, pretty colours!”, or you can marvel at the amazing interactions between photons and electrons that come together in that particular way when light from the sun strikes spherical water droplets in the sky, and that you perceive as a multicolored arc suspended in the air. Now that’s awe! 5. Physics gives good earning.
I ought to have put an exclamation mark at the end of that sentence, but I will let the latest international job trend statistics speak for themselves: an engineer earns a starting salary of $30,000, and an average of $60,000; a historian earns a starting salary of $25,000 and an average of $50,000; and a physicist earns a starting salary of $60,000 and goes up to $95,000! Adding to that, a Physics Bachelor’s degree alone will leave you with an average salary of $52,000. The Verdict While history would be most promising in shaping a man in his responsibilities and character, physics would hardly allow one to develop questionable attributes. While history can be useful as a good afternoon pastime, it would only make sense to study physics to attempt to answer those questions far deeper within our universe which would make the questions tackled by history—no offense here, but—shallow. But opinions differ, and I would like to hear your side of the debate. Share it below. Peter Walker The supreme arrogance of religious thinking: that a carbon-based bag of mostly water on a speck of iron-silicate dust around a boring dwarf star in a minor galaxy in an underpopulated local group of galaxies in an unfashionable suburb of a supercluster would look up at the sky and declare, ‘It was all made so that I could exist!’ Our Universe: marble or wood? —Steven Weinberg If there is any fundamental quality of nature that has eluded physicists and sparked debates of a fearful scale, it is the question as to whether the universe has a simple (beautiful) underlying principle that runs quite everything in existence. Undoubtedly, the dream of every physicist is, as Leon Lederman creatively summed it up, ‘…to live to see all of physics reduced to a formula so elegant and simple that it will fit easily on the front of a T-shirt.’ On a serious note, this highlights the strong belief in most physicists that nature is elegance and simplicity bundled into one. Einstein’s world of marble Physicist Albert Einstein likened this world to marble. In the pith, his idea was that the world as we first see it would appear to the observer like wood. Various observations would seem vastly different, unpredictable and complicated. It was his strong belief that we could, on further investigation, chop off these wooden structures to reveal an inside made of marble. Marble he likened to an elegant and simple universe with predictability. Apart from the fact that wood and marble seemed, for some reason, to represent chaos and cosmos to Einstein, it was also an idea that would hold true for almost all discoveries in physics preceding, and including, relativity. From Newton’s laws to Maxwell’s equations to Einstein’s relativity, with every passing discovery we seemed to have united another chunk of our universe to form a whole; and it was as if we either found new explanations for phenomena or found that new, unexplained phenomena happened to be in coherence with previously explained processes/laws. It was like a work of fiction where everything, right up to the climax, would remain perfectly alright, going without a hitch and then, out of the blue, the whole idea of a world of marble collapses.
At first the idea appeared perfectly alright (and Einstein went on to win the Nobel prize for this many years later). To physicists, by and large, the idea seemed to boil down to one phrase: waves could, at times, act as particles. The problem arose when Erwin Schrödinger took it upon himself—under the belief that the vice versa could also be true—to theorise how particles could convert into waves. After a year-long vacation, he returned to the physics society with his magnum opus, what is called the Schrödinger equation. Einstein and Schrödinger both believed then that this would describe the wave equation of a solid particle; but the reality was—as other quantum physicists like Max Planck, Wolfgang Pauli and Werner Heisenberg were quick to point out—this did not completely make sense. At first both Einstein and Schrödinger were pleased with the resulting equation, but they had made a fatal flaw: they had failed to fully analyse the implications of the equation as soon as they realised it satisfied all properties they originally expected of it. And when the others studied it completely, they realised that it allowed for almost fantastic occurrences. Quantum physics was predicting that we needed to crack the marble to reveal a world of wood—and nothing could prove it wrong. The debate rages Einstein rejected the theory almost in its entirety simply because it went head-on against his personal belief of a world made of marble. When chance was introduced to the very core of physics; when we realised that these new developments allowed for the mathematical possibility of the unthinkable and the presence of a chance, however small, of contradictory occurrences, Einstein and Schrödinger shared the idea that this was outrageous. This even prompted Einstein to write, Quantum mechanics calls for a great deal of respect. But some inner voice tells me that this is not the true Jacob. The theory offers a lot, but it hardly brings us any closer to the Old Man’s secret. For my part, at least, I am convinced that He doesn’t throw dice. The Old Man was how Einstein referred to God in most of his writings. This debate rages to this day, even if subtly. While quantum theory, siding with wood, has not found a phenomenon that can prove it wrong, neither has relativity, which sides with marble. Perhaps the most powerful analogy we have to date is that of Schrödinger’s Cat. Quantum theory tells us that an observation depends entirely on the observer—literally. Let us have Schrödinger’s Cat clarify this: imagine you had a box with a cat in it. It was alive when you put it in. You keep it closed for a while (that is to say, you are not making any observation of it as yet.) Then you open the box and find the cat happily playing, or sleeping soundly. But what quantum theory suggests is that the only reason the cat is doing any of that is because you are observing it. To go a step further, when you are not observing it, the cat may be doing something else. While this may seem all too commonplace, what is startling is that quantum theory defines the clause something else very differently: the theory, in fact, allows for the possibility that the cat may be dead when you are not making an observation of it! In conclusion A further complication is that neither of these two major theories, Quantum or Relativity, come in the other’s way. While quantum physics laws rule the smallest of particles, atoms, electrons and the like, relativity seems to apply to the largest of them, like our Earth or even our whole massive galaxy.
To clarify this further, quantum theory actually breaks down at relativistic dimensions and relativistic laws break down at atomic levels. It therefore seems to me at times that there is a possibility of these two theories themselves being manifestations of the same fundamental principle; that our race to uncover the basic law that runs the universe really lies in our ability to see through these theories at the point where they appear to contradict. Perhaps, like atoms making up galaxies, the quantum theory is really a microscopic view of relativity. Perhaps the reason why quantum theory backs wood and relativity backs marble is because quantum theory really makes up relativity so long as we are able to take ten steps aside and view it from a different angle. Perhaps, in the end, wood is what observation suggests to us and marble is what investigation does. Perhaps the world is really made of marble. We are just not seeing it with a perspective that is broad enough. You may not have anything to do with physics, or you may be a physicist yourself, but do share your views below. What do you think our universe is made of? Does it have an underlying principle? Is it predictable, or does chance really rule it? Or will we just have to excuse ourselves, ultimately, like Schrödinger who said, ‘I don’t like it. I’m sorry I ever had anything to do with it.’? To commemorate my short tryst with the study of our Universe, I decided to compile a set of seven of the best photographs available, each describing one of seven phenomena/bodies that have eluded physicists or struck them as remarkable. In order, we have, 1. Neutrinos (Super-K Neutrino Observatory) 2. Wormholes 3. Black Holes (‘Black Holes ain’t so black!’ courtesy Hawking, Stephen, A Brief History of Time) 4. Supernovae (Eye Supernova) 5. Saturn’s Rings 6. Nebulae (Horse Head Nebula) 7. Galaxies
Special Announcements The 2005 ACTC will be Held July 16-21, 2005 at UCLA and organized by Prof. Emily Carter This conference in Gyeongju, Korea has a very international flavor and was quite exciting. It honored Fritz Schaefer's 1000th publication (and his good health). What is Present Day Theoretical Chemistry About? Three Primary Subdisciplines: structure theory, molecular dynamics, statistical mechanics. A. Electronic structure theory describes the motions of the electrons and produces energy surfaces. The shapes and geometries of molecules, their electronic, vibrational, and rotational energy levels and wavefunctions, as well as the interactions of these states with electromagnetic fields lie within the realm of structure theory. 1. The Underlying Theoretical Basis In the Born-Oppenheimer model of molecular structure, it is assumed that the electrons move so quickly that they can adjust their motions essentially instantaneously with respect to any movements of the heavier and slower moving atomic nuclei. This assumption motivates us to view the electrons moving in electronic wave functions (orbitals within the simplest and most commonly employed theories) that smoothly "ride" the molecule's atomic framework. These electronic functions are found by solving a Schrödinger equation whose Hamiltonian He contains the kinetic energy Te of the electrons, the Coulomb repulsions among all the molecule's electrons Vee, the Coulomb attractions Ven among the electrons and all of the molecule's nuclei treated with these nuclei held clamped, and the Coulomb repulsions Vnn among all of these nuclei. The electronic wave functions ψk and energies Ek that result from solving the electronic Schrödinger equation He ψk = Ek ψk thus depend on the locations {Qi} at which the nuclei are sitting. That is, the Ek and ψk are parametric functions of the coordinates of the nuclei. These electronic energies' dependence on the positions of the atomic centers cause them to be referred to as electronic energy surfaces such as that depicted below for a diatomic molecule where the energy depends only on one interatomic distance R. For non-linear polyatomic molecules having N atoms, the energy surfaces depend on 3N-6 internal coordinates and thus can be very difficult to visualize. A slice through such a surface (i.e., a plot of the energy as a function of two out of 3N-6 coordinates) is shown below and various features of such a surface are detailed. The Born-Oppenheimer theory of molecular structure is soundly based in that it can be derived from a starting point consisting of a Schrödinger equation describing the kinetic energies of all electrons and of all N nuclei plus the Coulomb potential energies of interaction among all electrons and nuclei. By expanding the wavefunction Ψ that is an eigenfunction of this full Schrödinger equation in the complete set of functions {ψk} and then neglecting all terms that involve derivatives of any ψk with respect to the nuclear positions {Qi}, one can separate variables such that: 1. The electronic wavefunctions and energies must obey He ψk = Ek ψk 2. The nuclear motion (i.e., vibration/rotation) wavefunctions must obey (TN + Ek) χk,L = Ek,L χk,L , where TN is the kinetic energy operator for movement of all nuclei. That is, each and every electronic energy state, labeled k, has a set, labeled L, of vibration/rotation energy levels Ek,L and wavefunctions χk,L.
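A minimal numerical sketch of step 2 (assuming Python with NumPy; the Morse-like curve, reduced mass and units below are invented for illustration rather than taken from any real molecule): once an electronic energy curve Ek(R) is in hand, the vibrational levels follow from diagonalizing TN + Ek(R) on a grid.

```python
import numpy as np

# Toy illustration of step 2: nuclear motion on a fixed electronic curve E_k(R).
# Morse-like curve with invented parameters, units with hbar = 1.
N = 800
R = np.linspace(0.5, 10.0, N)
dR = R[1] - R[0]
mu = 1000.0                                   # made-up reduced mass
D, a, Re = 0.2, 1.0, 2.0
Ek = D * (1.0 - np.exp(-a * (R - Re)))**2     # electronic energy surface E_k(R)

# (T_N + E_k) chi = E chi  with a simple three-point kinetic-energy stencil
T = (-np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
     + 2.0 * np.eye(N)) / (2.0 * mu * dR**2)
H = T + np.diag(Ek)

levels = np.linalg.eigvalsh(H)[:4]
print("lowest vibrational levels:", np.round(levels, 4))
# Nearly evenly spaced near the bottom of the well (harmonic-like), with the
# spacing shrinking slightly for higher levels, as expected for a Morse-type curve.
```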
Because the Born-Oppenheimer model is obtained from the full Schrödinger equation by making approximations (e.g., neglecting certain terms that are called non-adiabatic coupling terms), it is not exact. Thus, in certain circumstances it becomes necessary to correct the predictions of the Born-Oppenheimer theory (i.e., by including the effects of the neglected non-adiabatic coupling terms using perturbation theory). For example, when developing a theoretical model to interpret the rate at which electrons are ejected from rotationally/vibrationally hot NH- ions, we had to consider coupling between two states having the same total energy: 1. 2P NH- in its v=1 vibrational level and in a high rotational level (e.g., J >30) prepared by laser excitation of vibrationally "cold" NH- in v=0 having high J (due to natural Boltzmann populations); see the figure below, and 2. 3S NH neutral plus an ejected electron in which the NH is in its v=0 vibrational level (no higher level is energetically accessible) and in various rotational levels (labeled N). Because NH has an electron affinity of 0.4 eV, the total energies of the above two states can be equal only if the kinetic energy KE carried away by the ejected electron obeys: KE = Evib/rot (NH- (v=1, J)) - Evib/rot (NH (v=0, N)) - 0.4 eV. In the absence of any non-adiabatic coupling terms, these two isoenergetic states would not be coupled and no electron detachment would occur. It is only by the anion converting some of its vibration/rotation energy and angular momentum into electronic energy that the electron that occupies a bound N2p orbital in NH- can gain enough energy to be ejected. Energies of NH- and of NH pertinent to the autodetachment of v=1, J levels of NH- formed by laser excitation of v=0 J'' NH- . My own research efforts have, for many years, involved studying negative molecular ions (a field in which Professor Ken Jordan is a leading figure) taking into account such non Born-Oppenheimer couplings, especially in cases where vibration/rotation energy transferred to electronic motions causes electron detachment as in the NH- case detailed above. Professor Ken Jordan Professor Jack Simons in the Uinta Mts. My good friend, Professor Yngve Öhrn, has been active in attempting to avoid making the Born-Oppeheimer approximation and, instead, treating the dynamical motions of the nuclei and electrons simultaneously. Professor David Yarkony has contributed much to the recent treatment of non-adiabatic (i.e., non Born-Oppenheimer) effects and to the inclusion of spin-orbit coupling in such studies. Professor Yngve Öhrn Professor David Yarkony 2. What is Learned from an Electronic Structure Calculation? The knowledge gained via structure theory is great. The electronic energies Ek (Q) allow one to determine (see my book Energetic Principles of Chemical Reactions) the geometries and relative energies of various isomers that a molecule can assume by finding those geometries {Qi ) at which the energy surface Ek has minima Ek /Qi = 0, with all directions having positive curvature (this is monitored by considering the so-called Hessian matrix if none of its eigenvalues are negative, all directions have positive curvature). Such geometries describe stable isomers, and the energy at each such isomer geometry gives the relative energy of that isomer. Professor Berny Schlegel at Wayne State has been one of the leading figures whose efforts are devoted to using gradient and Hessian information to locate stable structures and transition states. 
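As a small, self-contained illustration of how Hessian eigenvalues are used to characterize a stationary point (it is not taken from any particular program), the following Python sketch counts negative eigenvalues of an invented 3x3 Hessian at a geometry where the gradient vanishes:

```python
import numpy as np

def classify_stationary_point(hessian, tol=1e-6):
    """Classify a geometry with vanishing gradient from the eigenvalues of the
    Hessian (second derivatives of Ek with respect to internal coordinates)."""
    eigenvalues = np.linalg.eigvalsh(hessian)
    n_negative = int(np.sum(eigenvalues < -tol))
    if n_negative == 0:
        return "minimum (stable isomer)"
    if n_negative == 1:
        return "first-order saddle point (transition state)"
    return "higher-order saddle point ({} negative eigenvalues)".format(n_negative)

# Hypothetical Hessian in some set of internal coordinates (arbitrary units):
H = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.5, 0.0],
              [0.0, 0.0, -0.2]])
print(classify_stationary_point(H))   # -> first-order saddle point (transition state)
```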
Professor Peter Pulay has done as much as anyone to develop the theory that allows us to compute the gradients and Hessians appropriate to the most commonly used electronic structure methods. His group has also pioneered the development of so-called local correlation methods which focus on using localized orbitals to compute correlation energies in a manner that scales less severely with system size than when delocalized canonical molecular orbitals are employed. Professor Bernie Schlegel Professor Peter Pulay There may be other geometries on the Ek energy surface at which all "slopes" vanish Ek /Qi = 0, but at which not all directions possess positive curvature. If the Hessian matrix has only one negative eigenvalue there is only one direction leading downhill away from the point {Qi } of zero force; all the remaining directions lead uphill from this point. Such a geometry describes that of a transition state, and its energy plays a central role in determining the rates of reactions which pass through this transition state. At any geometry {Qi }, the gradient or slope vector having components Ek /Qi provides the forces (Fi = - Ek /Qi ) along each of the coordinates Qi . These forces are used in molecular dynamics simulations (see the following section) which solve the Newton F = m a equations and in molecular mechanics studies which are aimed at locating those geometries where the F vector vanishes (i.e., the stable isomers and transition states discussed above). Also produced in electronic structure simulations are the electronic wavefunctions {yk } and energies {Ek} of each of the electronic states. The separation in energies can be used to make predictions about the spectroscopy of the system. The wavefunctions can be used to evaluate properties of the system that depend on the spatial distribution of the electrons in the system. For example, the z- component of the dipole moment of a molecule mz can be computed by integrating the probability density for finding an electron at position r multiplied by the z- coordinate of the electron and the electron's charge e: mz = Ú e yk* yk z dr . The average kinetic energy of an electron can also be computed by carrying out such an average-value integral: Ú yk* (- h2 /2me 2 ) yk dr. The rules for computing the average value of any physical observable are developed and illustrated in popular undergraduate text books on physical chemistry (e.g., Atkins text or Berry, Rice, and Ross) and in graduate level texts (e.g., Levine, McQuarrie, Simons and Nichols). Professor Steve Berry, University of Chicago one of the most broad-based members of the theory community Dr. Jeff Nichols, Oak Ridge National Labs Prof. Stuart Rice, University of Chicago. He has done as much as anyone in a tremendous variety of theoretical studies including studies of interfaces and using coherent external perturbations to control chemical processes. Not only can electronic wavefunctions tell us about the average values of all physical properties for any particular state (i.e., yk above), but they also allow us to tell how a specific "perturbation" (e.g., an electric field in the Stark effect, a magnetic field in the Zeeman effect, light's electromagnetic fields in spectroscopy) can alter the specific state of interest. For example, the perturbation arising from the electric field of a photon interacting with the electrons in a molecule is given within the so-called electric dipole approximation (see, for example, Simons and Nichols, Ch. 
14) by: Hpert =Sj e2 rj E (t) where E is the electric field vector of the light, which depends on time t in an oscillatory manner, and ri is the vector denoting the spatial coordinates of the ith electron. This perturbation, Hpert acting on an electronic state yk can induce transitions to other states yk' with probabilities that are proportional to the square of the integral Ú yk'* Hpert yk dr . So, if this integral were to vanish, transitions between yk and yk' would not occur, and would be referred to as "forbidden". Whether such integrals vanish or not often is determined by symmetry. For example, if yk were of odd symmetry under a plane of symmetry sv of the molecule, while yk' were even under sv , then the integral would vanish unless one or more of the three cartesian components of the dot product rj • E were odd under sv The general idea is that for the integral to not vanish the direct product of the symmetries of yk and of yk' must match the symmetry of at least one of the symmetry components present in Hpert . Professor Poul Jørgensen has been especially involved in developing such so-called response theories for perturbations that may be time dependent (e.g., as in the interaction of light's electromagnetic radiation). 3. Present Day Challenges in Structure Theory As a preamble to the following discussion, I wish to draw the readers' attention to the web pages of several prominent theoretical chemists who have made major contributions to the development and applications of electronic structure theory and who have agreed to allow me to cite them in this text. In addition to the people specifically mentioned in this text for whom I have already provided web links, I encourage you to look at the following web pages for further information: Professor Ernest Davidson, Univ. of Washington and Indiana University has contributed as much as anyone both to the development of the fundamentals of electronic structure theory and its applications to many perplexing problems in molecular structure and spectroscopy. The late Professor John Pople, Northwestern University, made developments leading to the suite of Gaussian computer codes that now constitute the most widely used electronic structure computer programs.His contributions to theory were recognized in his sharing the 1998 Nobel Prize in Chemistry. Professor Bill Goddard, Cal Tech. When most quantum chemists were pursuing improvements in the molecular orbital method, he returned to the valence bond theory and developed the so-called GVB methods that allow electron correlation to be included within a valence bond framework. Professor Fritz Schaefer, University of Georgia. He has carried out as many applications of modern electronic structure theory to important chemical problems as anyone. Professor Rod Bartlett, University of Florida. He brought the coupled-cluster method, developed earlier by others, into the mainstream of electronic structure theory. Professor Bernd Heb, University of Erlangen, has a special focus in his research on relativistic effects and the development of tools needed to study them. Professor Hans-Joachim Werner, University of Stuttgart, has for many years pioneered developments of multi-configurational SCF and coupled-cluster methods and applied them to a wide variety of chemical problems. Professor Sigrid Peyerimhoff, Bonn University, is one of the earliest pioneers of multi-reference configuration interaction methods and has applied these powerful tools to many important chemical species and reactions. 
She has prepared a wonderful web site that details much of the history of the development of quantum chemistry. Professor Debashis Mukherjee of the Indian Association for the Cultivation of Science has extended the basis of coupled-cluster theory to allow for multiconfigurational reference wavefunctions and for the calculation of excitation energies and ionization energies. The late Professor Mike Zerner, University of Florida, was very influential in continuing the development of semi-empirical methods within quantum chemistry. Such methods can be applied to much larger molecules than ab initio methods, so their continued evolution is an essential component to growth in this field of theoretical chemistry. Professor Poul Jørgensen of Aarhus University spent much of his early career developing the fundamentals of electron and polarization propagator theory. Following up on that early work, he moved on to develop the tools of response theory (time dependent and time independent) for computing a wide variety of molecular properties. His group has combined the power of response theory with several powerful wavefunctions including coupled-cluster and configuration interaction functions. a. Orbitals form the starting point; what are the orbitals? The full N-electron Schrödinger equation governing the movement of the electrons in a molecule is [-h2 /2me Si=1 i2 - Sa Si Za e2 /ria + Si,j e2 /rij ] y = E y . In this equation, i and j label the electrons and a labels the nuclei. Even at the time this material was written, this equation had been solved only for the case N=1 (i.e., for H, He+ , Li2+ , Be3+ , etc.). What makes the problem difficult to solve for other cases is the fact that the Coulomb potential e2/rij acting between pairs of electrons depends upon the coordinates of the two electrons ri and rj in a way that does not allow the separations of variables to be used to decompose this single 3N dimensional second-order differential equation into N separate 3-dimensional equations. However, by approximating the full electron-electron Coulomb potential Si,j e2 /rij by a sum of terms, each depending on the coordinates of only one electron Si, V(ri ), one arrives at a Schrödinger equation [-h2 /2me Si=1 i2 - Sa Si Za e2 /ria + Si V(ri)] y = E y which is separable. That is, by assuming that y (r1 , r2 , ... rN ) = f1 (r1 ) f2 (r2) ... fN (rN), and inserting this ansatz into the approximate Schrödinger equation, one obtains N separate Schrödinger equations: [-h2 /2me i2 - Sa Za e2 /ria + V(ri)] fi = Ei fi one for each of the N so-called orbitals fi whose energies Ei are called orbital energies. It turns out that much of the effort going on in the electronic structure area of theoretical chemistry has to do with how one can find the "best" effective potential V(r); that is, the V(r), which depends only on the coordinates r of one electron, that can best approximate the true pairwise additive Coulomb potential experienced by an electron due to the other electrons. The simplest and most commonly used approximation for V(r) is the so-called Hartree-Fock (HF) potential: V(r) fi (r) = Sj [ Ú|fj (r')|2 e2 /|r-r'| dr' fi (r) - Úfj*(r') fj (r') e2 /|r-r'| dr' fj (r) ]. This potential, when acting on the orbital fi , can be viewed as multiplying fi by a sum of potential energy terms (which is what makes it one-electron additive), each of which consists of two parts: a. 
An average Coulomb repulsion Ú |fj (r')|2 e2 /|r-r'| dr' between the electron in fi with another electron whose spatial charge distribution is given by the probability of finding this electron at location r' if it resides in orbital fj : |fj (r')|2 . b. A so-called exchange interaction between the electron in fi with the other electron that resides in fj.. The sum shown above runs over all of the orbitals that are occupied in the atom or molecule. For example, in a Carbon atom, the indices i and j run over the two 1s orbitals, the two 2s orbitals and the two 2p orbitals that have electrons in them (say 2px and 2py ) The potential felt by one of the 2s orbitals is obtained by setting fi = 2s, and summing j over j=1s, 1s, 2s, 2s, 2px , 2py . The term Ú |1s(r')|2 e2 /|r-r'| dr' 2s(r) gives the average Coulomb repulsion between an electron in the 2s orbital and one of the two 1s electrons; Ú |2px(r')|2 e2 /|r-r'| dr' 2s(r) gives the average repulsion between the electron in the 2px orbital and an electron in the 2s orbital; and Ú |2s(r')|2 e2 /|r-r'| dr' 2s(r) describes the Coulomb repulsion between one electron in the 2s orbital and the other electron in the 2s orbital. The exchange interactions, which arise because electrons are Fermion particles whose indistinguishability must be accounted for, have analogous interpretations. For example, Ú 1s*(r') 2s(r') e2 /|r-r'| dr' 1s(r) is the exchange interaction between an electron in the 1s orbital and the 2s electron; Ú 2px*(r') 2s(r') e2 /|r-r'| dr' 2px(r) is the exchange interaction between an electron in the 2px orbital and the 2s electron; and Ú 2s*(r') 2s(r') e2 /|r-r'| dr' 2s(r) is the exchange interaction between a 2s orbital and itself (note that this interaction exactly cancels the corresponding Coulomb repulsion Ú |2s(r')|2 e2 /|r-r'| dr' 2s(r), so one electron does not repel itself in the Hartree-Fock model). There are two primary deficiencies with the Hartree-Fock approximation: a. Even if the electrons were perfectly described by a wavefunction y (r1 , r2 , ... rN ) = f1 (r1 ) f2 (r2) ... fN (rN) in which each electron occupied an independent orbital and remained in that orbital for all time, the true interactions among the electrons Si,j e2 /rij are not perfectly represented by the sum of the average interactions. b. The electrons in a real atom or molecule do not exist in regions of space (this is what orbitals describe) for all time; there are times during which they must move away from the regions of space they occupy most of the time in order to avoid collisions with other electrons. For this reason, we say that the motions of the electrons are correlated (i.e., where one electron is at one instant of time depends on where the other electrons are at that same time). Let us consider the implications of each of these two deficiencies. b. The imperfections in the orbital-level picture are substantial To examine the difference between the true Coulomb repulsion between electrons and the Hartree-Fock potential between these same electrons, the figure shown below is useful. In this figure, which pertains to two 1s electrons in a Be atom, the nucleus is at the origin, and one of the electrons is placed at a distance from the nucleus equal to the maximum of the 1s orbital's radial probability density (near 0.13 Å). 
The radial coordinate of the second is plotted along the abscissa; this second electron is arbitrarily constrained to lie on the line connecting the nucleus and the first electron (along this direction, the inter-electronic interactions are largest). On the ordinate, there are two quantities plotted: (i) the Hartree-Fock (sometimes called the self-consistent field (SCF) potential) Ú |1s(r')|2 e2 /|r-r'| dr', and (ii) the so-called fluctuation potential (F), which is the true coulombic e2/|r-r'| interaction potential minus the SCF potential. As a function of the inter-electron distance, the fluctuation potential decays to zero more rapidly than does the SCF potential. However, the magnitude of F is quite large and remains so over an appreciable range of inter-electron distances. Hence, corrections to the HF-SCF picture are quite large when measured in kcal/mole. For example, the differences DE between the true (state-of-the-art quantum chemical calculation) energies of interaction among the four electrons in Be (a and b denote the spin states of the electrons) and the HF estimates of these interactions are given in the table shown below in eV (1 eV = 23.06 kcal/mole). Orb. Pair DE in eV These errors inherent to the HF model must be compared to the total (kinetic plus potential) energies for the Be electrons. The average value of the kinetic energy plus the Coulomb attraction to the Be nucleus plus the HF interaction potential for one of the 2s orbitals of Be with the three remaining electrons -15.4 eV; the corresponding value for the 1s orbital is (negative and) of even larger magnitude. The HF average Coulomb interaction between the two 2s orbitals of 1s22s2 Be is 5.95 eV. This data clearly shows that corrections to the HF model represent significant fractions of the inter-electron interaction energies (e.g., 1.234 eV compared to 5.95- 1.234 = 4.72 eV for the two 2s electrons of Be), and that the inter-electron interaction energies, in turn, constitute significant fractions of the total energy of each orbital (e.g., 5.95 -1.234 eV = 4.72 eV out of -15.4 eV for a 2s orbital of Be). The task of describing the electronic states of atoms and molecules from first principles and in a chemically accurate manner (± 1 kcal/mole) is clearly quite formidable. The HF potential takes care of "most" of the interactions among the N electrons (which interact via long-range coulomb forces and whose dynamics requires the application of quantum physics and permutational symmetry). However, the residual fluctuation potential is large enough to cause significant corrections to the HF picture. This, in turn, necessitates the use of more sophisticated and computationally taxing techniques to reach the desired chemical accuracy. c. Going beyond the simplest orbital model is sometimes essential What about the second deficiency of the HF orbital-based model? Electrons in atoms and molecules undergo dynamical motions in which their coulomb repulsions cause them to "avoid" one another at every instant of time, not only in the average-repulsion manner that the mean-field models embody. The inclusion of instantaneous spatial correlations among electrons is necessary to achieve a more accurate description of atomic and molecular electronic structure. Some idea of how large the effects of electron correlation are and how difficult they are to treat using even the most up-to-date quantum chemistry computer codes was given above. Another way to see the problem is offered in the figure shown below. 
Here we have displayed on the ordinate, for Helium's 1S (1s2) state, the probability of finding an electron whose distance from the He nucleus is 0.13Å (the peak of the 1s orbital's density) and whose angular coordinate relative to that of the other electron is plotted on the absissa. The He nucleus is at the origin and the second electron also has a radial coordinate of 0.13 Å. As the relative angular coordinate varies away from 0 deg, the electrons move apart; near 0 deg, the electrons approach one another. Since both electrons have the same spin in this state, their mutual Coulomb repulsion alone acts to keep them apart. What this graph shows is that, for a highly accurate wavefunction (one constructed using so-called Hylleraas functions that depend explicitly on the coordinates of the two electrons as well as on their interparticle distance coordinate) that is not of the simple orbital product type, one finds a "cusp" in the probability density for finding one electron in the neighborhood of another electron with the same spin. The probability plot for the Hylleraas function is the lower dark line in the above figure. In contrast, this same probability density, when evaluated for an orbital-product wavefunction (e.g., for the Hartree-Fock function) has no such cusp because the probability density for finding one electron at r, q, f is independent of where the other electron is (due to the product nature of the wavefunction). The Hartree-Fock probability, which is not even displayed above, would thus, if plotted, be flat as a function of the angle shown above. Finally, the graph shown above that lies above the Hylleraas plot and that has no "sharp" cusp was extracted from a configuration interaction wavefunction for He obtained using a rather large correlation consistent polarized valence quadruple atomic basis set. Even for such a sophisticated wavefunction (of the type used in many state of the art ab initio calculations), the cusp in the relative probability distribution is clearly not well represented. d. For realistic accuracy, improvements to the orbital picture are required Although highly accurate methods do exist for handling the correlated motions of electrons (e.g., the Hylleraas method mentioned above), they have not proven to be sufficiently computationally practical to be of use on atoms and molecules containing more than a few electrons. Hence, it is common to find other methods employed in most chemical studies in which so-called correlated wavefunctions are used. By far, the most common and widely used class of such wavefunctions involve using linear combinations of orbital product functions (actually, one must use so-called antisymmetrized orbital products to properly account for the fact that Fermion wavefunctions such as those describing electrons are odd under permutations of the electrons' labels): Y = S J CJ| fJ1 fJ2 fJ3 ...fJ(N-!) fJN |, with the indices J1, J2, ..., JN labeling the spin-orbitals and the coefficients CJ telling how much of each particular orbital product to include. As an example, one could use Y = C1 |1sa 1sb | - C2 [|2pza2pzb | - |2pxa2pxb | -2pya2pyb |] as a wavefunction for the 1S state of He (the last three orbital products are combined to produce a state that is spherically symmetric and thus has L = 0 electronic angular momentum just as the |1sa1sb| state does). 
Using a little algebra, and employing the fact that the orbital products |f1 f2 | = (2)-1/2 [ f1 f2 - f2 f1 ] are really antisymmetric products, one can show that the above He wavefunction can be rewritten as follows: Y = C1/3 {|fz a f'z b| - |fx a f'xb| - |fya f'y b| }, where fz = 1s + (3C2/C1)1/2 2pz and f'z = 1s - (3C2/C1)1/2 2pz , with analogous definitions for fx , f'x , fy , and f'y . The physical interpretation of the three terms ({|fz a f'z b| , |fx a f'x b| , and |fy a f'y b| ) is that |fz a f'z b| describes a contribution to Y in which one electron of a spin resides in a region of space described by fz while the other electron of b spin is in a region of space described by f'z , and analogously for |fxa f'x b| and |fy a f'y b|. Such a wavefunction thus allows the two electrons to occupy different regions of space since each orbital f in a pair is different from its partner f'. The extent to which the orbital differ depends on the C2/C1 ratio which, in turn, is governed by how strong the mutual repulsions between the two electrons are. Such a pair of so-called polarized orbitals is shown in the figure below. e. Density Functional Theory (DFT) These approaches provide alternatives to the conventional tools of quantum chemistry. The CI, MCSCF, MPPT/MBPT, and CC methods move beyond the single-configuration picture by adding to the wave function more configurations whose amplitudes they each determine in their own way. This can lead to a very large number of CSFs in the correlated wave function, and, as a result, a need for extraordinary computer resources. The density functional approaches are different. Here one solves a set of orbital-level equations [ - h2/2me 2- SA ZAe2/|r-RA| + Úr(r')e2/|r-r'|dr' + U(r)] fi = ei fi in which the orbitals {fi} 'feel' potentials due to the nuclear centers (having charges ZA), Coulombic interaction with the total electron density r(r'), and a so-called exchange-correlation potential denoted U(r'). The particular electronic state for which the calculation is being performed is specified by forming a corresponding density r(r'). Before going further in describing how DFT calculations are carried out, let us examine the origins underlying this theory. The so-called Hohenberg-Kohn theorem states that the ground-state electron density r(r) describing an N-electron system uniquely determines the potential V(r) in the Hamiltonian H = Sj {-h2/2mej2 + V(rj) + e2/2 Skj 1/rj,k }, and, because H determines the ground-state energy and wave function of the system, the ground-state density r(r) determines the ground-state properties of the system. The proof of this theorem proceeds as follows: a. r(r) determines N because Ú r(r) d3r = N. b. Assume that there are two distinct potentials (aside from an additive constant that simply shifts the zero of total energy) V(r) and V'(r) which, when used in H and H', respectively, to solve for a ground state produce E0, Y (r) and E0', Y'(r) that have the same one-electron density: Ú |Y|2 dr2 dr3 ... drN = r(r)= Ú |Y'|2 dr2 dr3 ... drN . c. If we think of Y' as trial variational wave function for the Hamiltonian H, we know that E0 < <Y'|H|Y'> = <Y'|H'|Y'> + Ú r(r) [V(r) - V'(r)] d3r = E0' + Ú r(r) [V(r) - V'(r)] d3r. d. Similarly, taking Y as a trial function for the H' Hamiltonian, one finds that E0' < E0 + Ú r(r) [V'(r) - V(r)] d3r. e. Adding the equations in c and d gives E0 + E0' < E0 + E0', a clear contradiction. Hence, there cannot be two distinct potentials V and V' that give the same ground-state r(r). 
So, the ground-state density r(r) uniquely determines N and V, and thus H, and therefore Y and E0. Furthermore, because Y determines all properties of the ground state, then r(r), in principle, determines all such properties. This means that even the kinetic energy and the electron-electron interaction energy of the ground-state are determined by r(r). It is easy to see that Ú r(r) V(r) d3r = V[r] gives the average value of the electron-nuclear (plus any additional one-electron additive potential) interaction in terms of the ground-state density r(r), but how are the kinetic energy T[r] and the electron-electron interaction Vee[r] energy expressed in terms of r? The main difficulty with DFT is that the Hohenberg-Kohn theorem shows that the ground-state values of T, Vee , V, etc. are all unique functionals of the ground-state r (i.e., that they can, in principle, be determined once r is given), but it does not tell us what these functional relations are. To see how it might make sense that a property such as the kinetic energy, whose operator -h2 /2me 2 involves derivatives, can be related to the electron density, consider a simple system of N non-interacting electrons moving in a three-dimensional cubic "box" potential. The energy states of such electrons are known to be E = (h2/2meL2) (nx2 + ny2 +nz2 ), where L is the length of the box along the three axes, and nx , ny , and nz are the quantum numbers describing the state. We can view nx2 + ny2 +nz2 = R2 as defining the squared radius of a sphere in three dimensions, and we realize that the density of quantum states in this space is one state per unit volume in the nx , ny , nz space. Because nx , ny , and nz must be positive integers, the volume covering all states with energy less than or equal to a specified energy E = (h2/2meL2) R2 is 1/8 the volume of the sphere of radius R: F(E) = 1/8 (4p/3) R3 = (p/6) (8meL2E/h2)3/2 . Since there is one state per unit of such volume, F(E) is also the number of states with energy less than or equal to E, and is called the integrated density of states. The number of states g(E) dE with energy between E and E+dE, the density of states, is the derivative of F: g(E) = dF/dE = (p/4) (8meL2/h2)3/2 E1/2 . If we calculate the total energy for N electrons, with the states having energies up to the so-called Fermi energy (i.e., the energy of the highest occupied molecular orbital HOMO) doubly occupied, we obtain the ground-state energy: = (8p/5) (2me/h2)3/2 L3 EF5/2. The total number of electrons N can be expressed as N = 2 g(e)dE= (8p/3) (2me/h2)3/2 L3 EF3/2, which can be solved for EF in terms of N to then express E0 in terms of N instead of EF: E0 = (3h2/10me) (3/8p)2/3 L3 (N/L3)5/3 . This gives the total energy, which is also the kinetic energy in this case because the potential energy is zero within the "box", in terms of the electron density r (x,y,z) = (N/L3). It therefore may be plausible to express kinetic energies in terms of electron densities r(r), but it is by no means clear how to do so for "real" atoms and molecules with electron-nuclear and electron-electron interactions operative. In one of the earliest DFT models, the Thomas-Fermi theory, the kinetic energy of an atom or molecule is approximated using the above kind of treatment on a "local" level. 
That is, for each volume element in r space, one assumes the expression given above to be valid, and then one integrates over all r to compute the total kinetic energy: TTF[r] = Ú (3h2/10me) (3/8p)2/3 [r(r)]5/3 d3r = CF Ú [r(r)]5/3 d3r , where the last equality simply defines the CF constant (which is 2.8712 in atomic units). Ignoring the correlation and exchange contributions to the total energy, this T is combined with the electron-nuclear V and Coulombic electron-electron potential energies to give the Thomas-Fermi total energy: E0,TF [r] = CF Ú [r(r)]5/3 d3r + Ú V(r) r(r) d3r + e2/2 Ú r(r) r(r')/|r-r'| d3r d3r', This expression is an example of how E0 is given as a local density functional approximation (LDA). The term local means that the energy is given as a functional (i.e., a function of r) which depends only on r(r) at points in space but not on r(r) at more than one point in space. Unfortunately, the Thomas-Fermi energy functional does not produce results that are of sufficiently high accuracy to be of great use in chemistry. What is missing in this theory are a. the exchange energy and b. the correlation energy; moreover, the kinetic energy is treated only in the approximate manner described. In the book by Parr and Yang, it is shown how Dirac was able to address the exchange energy for the 'uniform electron gas' (N Coulomb interacting electrons moving in a uniform positive background charge whose magnitude balances the charge of the N electrons). If the exact expression for the exchange energy of the uniform electron gas is applied on a local level, one obtains the commonly used Dirac local density approximation to the exchange energy: Eex,Dirac[r] = - Cx Ú [r(r)]4/3 d3r, with Cx = (3/4) (3/p)1/3 = 0.7386 in atomic units. Adding this exchange energy to the Thomas-Fermi total energy E0,TF [r] gives the so-called Thomas-Fermi-Dirac (TFD) energy functional. Professor Bob Parr Professor Weitao Yang Because electron densities vary rather strongly spatially near the nuclei, corrections to the above approximations to T[r] and Eex.Dirac are needed. One of the more commonly used so-called gradient-corrected approximations is that invented by Becke, and referred to as the Becke88 exchange functional: Eex(Becke88) = Eex,Dirac[r] -g Ú x2 r4/3 (1+6 g x sinh-1(x))-1 dr, where x =r-4/3 |r|, and g is a parameter chosen so that the above exchange energy can best reproduce the known exchange energies of specific electronic states of the inert gas atoms (Becke finds g to equal 0.0042). A common gradient correction to the earlier T[r] is called the Weizsacker correction and is given by dTWeizsacker = (1/72)(h/me) Ú |r(r)|2/r(r) dr. Although the above discussion suggests how one might compute the ground-state energy once the ground-state density r(r) is given, one still needs to know how to obtain r. Kohn and Sham (KS) introduced a set of so-called KS orbitals obeying the following equation: {-1/22 + V(r) + e2/2 Ú r(r')/|r-r'| dr' + Uxc(r) }fj = ej fj , where the so-called exchange-correlation potential Uxc (r) = dExc[r]/dr(r) could be obtained by functional differentiation if the exchange-correlation energy functional Exc[r] were known. KS also showed that the KS orbitals {fj} could be used to compute the density r by simply adding up the orbital densities multiplied by orbital occupancies nj : r(r) = Sj nj |fj(r)|2. (here nj =0,1, or 2 is the occupation number of the orbital fj in the state being studied) and that the kinetic energy should be calculated as T = Sj nj <fj(r)|-1/2 2 |fj(r)>. 
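The local functionals written above are easy to evaluate numerically once a density is in hand. The Python sketch below applies the Thomas-Fermi kinetic functional and the Dirac exchange functional, on a radial grid, to a hydrogen-atom 1s density used purely as a stand-in test density (atomic units throughout). It only illustrates the integrals involved; it is not a production DFT calculation.

```python
import numpy as np

C_F = (3.0 / 10.0) * (3.0 * np.pi ** 2) ** (2.0 / 3.0)   # ~2.871 a.u.
C_X = (3.0 / 4.0) * (3.0 / np.pi) ** (1.0 / 3.0)          # ~0.7386 a.u.

# Stand-in spherical test density: hydrogen 1s, rho(r) = exp(-2r)/pi,
# normalized to one electron (atomic units).
r = np.linspace(1e-6, 20.0, 20001)
dr = r[1] - r[0]
rho = np.exp(-2.0 * r) / np.pi
weight = 4.0 * np.pi * r ** 2 * dr                    # radial volume element

N_el = np.sum(rho * weight)                           # should come out near 1.0
T_TF = C_F * np.sum(rho ** (5.0 / 3.0) * weight)      # Thomas-Fermi kinetic energy
E_x  = -C_X * np.sum(rho ** (4.0 / 3.0) * weight)     # Dirac (local) exchange energy
print(N_el, T_TF, E_x)
```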
The same investigations of the idealized 'uniform electron gas' that identified the Dirac exchange functional, found that the correlation energy (per electron) could also be written exactly as a function of the electron density r of the system, but only in two limiting cases- the high-density limit (large r) and the low-density limit. There still exists no exact expression for the correlation energy even for the uniform electron gas that is valid at arbitrary values of r. Therefore, much work has been devoted to creating efficient and accurate interpolation formulas connecting the low- and high- density uniform electron gas expressions (see Appendix E in the Parr and Wang book for further details). One such expression is EC[r] = Ú r(r) ec(r) dr, ec(r) = A/2{ln(x/X) + 2b/Q tan-1(Q/(2x+b)) -bx0/X0 [ln((x-x0)2/X) +2(b+2x0)/Q tan-1(Q/(2x+b))] is the correlation energy per electron. Here x = rs1/2 , X=x2 +bx+c, X0 =x02 +bx0+c and Q=(4c - b2)1/2, A = 0.0621814, x0= -0.409286, b = 13.0720, and c = 42.7198. The parameter rs is how the density r enters since 4/3 prs3 is equal to 1/r; that is, rs is the radius of a sphere whose volume is the effective volume occupied by one electron. A reasonable approximation to the full Exc[r] would contain the Dirac (and perhaps gradient corrected) exchange functional plus the above EC[r], but there are many alternative approximations to the exchange-correlation energy functional. Currently, many workers are doing their best to "cook up" functionals for the correlation and exchange energies, but no one has yet invented functionals that are so reliable that most workers agree to use them. To summarize, in implementing any DFT, one usually proceeds as follows: 1. An atomic orbital basis is chosen in terms of which the KS orbitals are to be expanded. 2. Some initial guess is made for the LCAO-KS expansion coefficients Cj,a: fj = Sa Cj,a ca. 3. The density is computed as r(r) = Sj nj |fj(r)|2 . Often, r(r) is expanded in an atomic orbital basis, which need not be the same as the basis used for the fj, and the expansion coefficients of r are computed in terms of those of the fj . It is also common to use an atomic orbital basis to expand r1/3(r) which, together with r, is needed to evaluate the exchange-correlation functional's contribution to E0. 4. The current iteration's density is used in the KS equations to determine the Hamiltonian {-1/2 2 + V(r) + e2/2 Ú r(r')/|r-r'| dr' + Uxc(r) }whose "new" eigenfunctions {fj} and eigenvalues {ej} are found by solving the KS equations. 5. These new fj are used to compute a new density, which, in turn, is used to solve a new set of KS equations. This process is continued until convergence is reached (i.e., until the fj used to determine the current iteration's r are the same fj that arise as solutions on the next iteration. 6. Once the converged r(r) is determined, the energy can be computed using the earlier expression E [r] = Sj nj <fj(r)|-1/2 2 |fj(r)>+ Ú V(r) r(r) dr + e2/2 Ú r(r)r(r')/|r-r'|dr dr'+ Exc[r]. In closing this section, it should once again be emphasized that this area is currently undergoing explosive growth and much scrutiny. As a result, it is nearly certain that many of the specific functionals discussed above will be replaced in the near future by improved and more rigorously justified versions. It is also likely that extensions of DFT to excited states (many workers are actively pursuing this) will be placed on more solid ground and made applicable to molecular systems. 
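To show the shape of the iterative procedure in steps 2 through 5 above, here is a deliberately tiny one-dimensional analogue in Python: two electrons in a harmonic well, a soft-Coulomb stand-in for the Hartree term, and a crude local exchange term. Every ingredient (the grid, the potentials, the mixing scheme) is invented for illustration; a real Kohn-Sham code differs in essentially every numerical detail, but the guess-build-diagonalize-update cycle is the same.

```python
import numpy as np

n, L = 201, 10.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
v_ext = 0.5 * x ** 2                                   # external (nuclear-like) potential
T = (-0.5 / dx ** 2) * (np.diag(np.ones(n - 1), 1)     # finite-difference kinetic operator
                        + np.diag(np.ones(n - 1), -1)
                        - 2.0 * np.eye(n))

def hartree(rho):
    # soft-Coulomb stand-in for the Coulomb interaction with the total density
    return np.array([np.sum(rho / np.sqrt((x - xi) ** 2 + 1.0)) * dx for xi in x])

rho = np.full(n, 2.0 / L)                              # steps 2-3: initial guess for the density
for iteration in range(200):
    v_eff = v_ext + hartree(rho) - (3.0 * rho / np.pi) ** (1.0 / 3.0)  # crude local exchange
    eps, phi = np.linalg.eigh(T + np.diag(v_eff))      # step 4: solve the KS-like equations
    phi /= np.sqrt(dx)                                 # normalize orbitals on the grid
    rho_new = 2.0 * phi[:, 0] ** 2                     # steps 3/5: doubly occupy the lowest orbital
    if np.max(np.abs(rho_new - rho)) < 1e-8:           # step 5: self-consistency test
        break
    rho = 0.5 * rho + 0.5 * rho_new                    # simple damping of the density update
print(iteration, eps[0])                               # iterations used, lowest orbital energy
```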
Because the computational effort involved in these approaches scales much less strongly with basis set size than for conventional (SCF, MCSCF, CI, etc.) methods, density functional methods offer great promise and are likely to contribute much to quantum chemistry in the next decade. There is a nice DFT web site established by the Arias research group at Cornell devoted to a DFT project involving highly efficient computer implementation within object-oriented programming. f. Efficient and widely distributed computer programs exist for carrying out electronic structure calculations The development of electronic structure theory has been ongoing since the 1940s. At first, only a few scientists had access to computers, and they began to develop numerical methods for solving the requisite equations (e.g., the Hartree-Fock equations for orbitals and orbital energies, the configuration interaction equations for electronic state energies and wavefunctions). By the late 1960s, several research groups had developed reasonably efficient computer codes (written primarily in Fortran with selected subroutines that needed to run especially efficiently in machine language), and the explosive expansion of this discipline was underway. By the 1980s and through the 1990s, these electronic structure programs began to be used by practicing "bench chemists" both because they became easier to use and because their efficiency and the computers' speed grew (and cost dropped) to the point at which modest to large molecules could be studied at reasonable cost and effort. Even with much faster computers, there remain severe bottlenecks to extending ab initio quantum chemistry tools to larger and larger molecules (and to extended systems such as polymers, solids, and surfaces). Two of the most difficult issues involve the two-electron integrals (χa χb |1/r1,2| χc χd). Nearly all correlated electronic structure methods express the electronic energy E (as well as its gradient and second derivative or Hessian) in terms of integrals taken over the molecular orbitals, not the basis atomic orbitals. This usually then requires that the integrals be first evaluated in terms of the basis orbitals and subsequently transformed from the basis orbital to the molecular orbital representation using the LCAO-MO expansion φi = Σa Ci,a χa. For example, one such step in the transformation involves computing Σa Ci,a (χa χb |1/r1,2| χc χd) = (φi χb |1/r1,2| χc χd). Four such one-index transformations must be performed to eventually obtain the (φi φj |1/r1,2| φk φl) integrals. Given a set of M basis orbitals, there are ca. M^4/8 integrals (χa χb |1/r1,2| χc χd). Each one-index transformation step requires ca. M^5 calculations (i.e., to form sums of products such as Σa Ci,a (χa χb |1/r1,2| χc χd)). Hence the task of forming these integrals over the molecular orbitals scales as the fifth power of M; a small numerical sketch of these quarter transformations is given below. The research group of Professor Martin Head-Gordon has been attacking two aspects of the above integral bottleneck. Professor Martin Head-Gordon First, his group has been deriving and implementing in a very efficient manner expressions for the electronic energy (and its gradient with respect to nuclear positions) that are not written in terms of integrals over the molecular orbitals but in terms of integrals over the basis atomic orbitals. This allows them to produce what are called "direct" procedures for evaluating energies and gradients.
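Before turning to those direct methods, here is the promised sketch of the quarter transformations in Python, with random numbers standing in for the atomic-orbital integrals and for the LCAO-MO coefficient matrix, and with no use of permutational symmetry. Each einsum line contracts one index at a cost of order M^5, and the final comparison confirms that the four cheap steps reproduce the single, far more expensive, all-at-once contraction.

```python
import numpy as np

M = 8                                           # toy number of basis functions
rng = np.random.default_rng(0)
eri_ao = rng.standard_normal((M, M, M, M))      # stand-in for (chi_a chi_b |1/r12| chi_c chi_d)
C = rng.standard_normal((M, M))                 # stand-in LCAO-MO coefficients C[a, i]

# Four one-index ("quarter") transformations, each scaling as M^5:
tmp = np.einsum('ai,abcd->ibcd', C, eri_ao)
tmp = np.einsum('bj,ibcd->ijcd', C, tmp)
tmp = np.einsum('ck,ijcd->ijkd', C, tmp)
eri_mo = np.einsum('dl,ijkd->ijkl', C, tmp)     # (phi_i phi_j |1/r12| phi_k phi_l)

# The same result obtained in one shot, which scales as M^8:
eri_mo_direct = np.einsum('ai,bj,ck,dl,abcd->ijkl', C, C, C, C, eri_ao)
print(np.allclose(eri_mo, eri_mo_direct))       # True
```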
The advantages of such direct methods are (1) that one does not have to go through the M^5 integral transformation process, and (2) that one does not have to first compute all of the atomic-orbital integrals (χa χb |1/r1,2| χc χd). Instead, one can compute groups of these integrals (e.g., as many as one can retain within the fast main memory of the computer), calculate the contributions made by these integrals to the energy or gradient, and then delete this group of integrals and proceed to compute (and use) another group of such integrals. This allows one to handle larger basis sets than when one has to first obtain all of the integrals and store them (e.g., on disk). The second major advance that the Head-Gordon group has fostered is developing clever and efficient new tools for computing the atomic-orbital-level two-electron integrals (χa χb |1/r1,2| χc χd), especially when the product functions χa(1) χb(1) and χc(2) χd(2) involve functions that are distant from one another. When there is good reason to view these products as residing in different regions of space (e.g., when the constituent atomic orbitals are centered on atoms in different parts of a large molecule), so-called multipole expansion methods (and other tools) can be used to approximate the two-electron integrals (χa χb |1/r1,2| χc χd). In this way, the Head-Gordon group has been able to (a) calculate integrals in which all of the basis orbitals reside on the same or very nearby atoms in conventional (highly efficient) ways, (b) approximate (very rapidly and in a numerically reliable multipolar manner) the integrals where the charge densities are somewhat distant yet still significant, and (c) ignore (to controlled tolerances) integrals for product densities that are even more distant. This has allowed them to obtain integral evaluation and energy-computation schemes that display nearly linear scaling with the number of atoms (and thus basis orbitals) in the system. It is only through such efforts that there is any hope of extending ab initio electronic structure methods to large molecules and extended systems. The Head-Gordon group has also been expanding the horizons of the very powerful coupled-cluster method for treating electron correlation at a high level, especially by introducing so-called local methods for handling interactions among electrons. In particular, by making clever definitions of localized occupied and virtual orbitals, they have been able to develop new methods whose computational effort promises to scale more practically with the number of electrons (i.e., the molecule size) than do conventional coupled-cluster methods. Combining their advances in coupled-cluster theory with their breakthroughs in handling electron-electron interactions has led to a large body of important new work from this outstanding group. At present, more electronic structure calculations are performed by non-theorists than by practicing theoretical chemists. This is largely due to the proliferation of widely used computer programs. This does not mean that all that needs to be done in electronic structure theory is done. The rates at which improvements are being made in the numerical algorithms used to solve the problems, as well as at which new models are being created, remain as high as ever.
For example, Professor Rich Friesner has developed and Professor Emily Carter has implemented for correlated methods a highly efficient way to replace the list of two-electron integrals (fi fj |1/r1,2| fk fl ), which number N4 , where N is the number of atomic orbital basis functions, by a much smaller list (fi fj |l) from which the original integrals can be rewritten as: (fi fj |1/r1,2| fk fl ) = Sg (fi (g)fj (g)) Ú dr fk(r) fl (r)/|r-g| . Professor Rich Friesner Professor Emily Carter This tool, which they call pseudospectral methods, promises to reduce the CPU, memory, and disk storage requirements for many electronic structure calculations, thus permitting their applications to much larger molecular systems. In addition to ongoing developments in the underlying theory and computer implementation, the range of phenomena and the kinds of physical properties that one needs electronic structure theory to address is growing rapidly. Professor Gustavo Scuseria has been especially active in developing new methods for treating very large molecules, in particular, methods whose computer requirements scale linearaly (or nearly so) with molecular size. In addition, a great deal of progress has been made in constructing sequences of atomic orbital basis sets whose use allows one to extrapolate to essentially complete-basis quality results. Thom Dunning has, more than anyone else, been responsible for progress in this area. Professor Gustavo Scuseria Dr. Thom Dunning There are a variety of tools that aim to compute differences betweeen state energies (e.g., electron affinities, ionization potentials, and excitation energies) directly. Several workers who have been instrumental in developing these methods include those shown below. Professors Lorenz Cederbaum (l), University of Heidelberg, Professor Jan Linderberg (r), Aarhus University, and Professor Yngve Ohrn (below), University of Florida. Also, Professor Howard Taylor, University of Southern California (below) contributed much to these developments as well as to methods for treating metastable electronic states. In more recent years, the author, Professor J. V. Ortiz (below, l), Kansas State University, Professor P. Jørgensen (below, r), and Profesor J. Oddershede (bottom) have continued to develop these and related methods. In addition to the many practicing quantum chemists introduced above, I show below photos of several others whose research and educational efforts will be of interest to students reading this web site. Professor Krishnan Balusubramanian University of California, Davis Professor Jerzy Cioslowski Florida State University Professor Marcel Nooijen, University of Waterloo In addition to Prof. Balusubramanian, another expert on the effects of relativity on atomic and molecular properties is Prof. Pekka Pyykko of Helsinki University (below). This dynamic scholar always give a wonderful talk on how relativity contributes to many properties of matter in nature. Most people know that the study of transition metal containing systems is especially difficult because of the near-degeneracy of the ns and (n-1) d orbitals and the role of relativistic effects in the heavier elements. Prof. Gernot Frenking of the University of Marburg, shown below, has devoted a great deal of effort to understanding such systems. His group has also developed a powerful and useful means of decomposing the bonding interactions among atoms into various physical contributions. 
Professor Gernot Frenking Professor Piotr Piecuch, Michigan State University, has been involved in extending the coupled-cluster method to allow one to use multiconfigurational reference wave functions, which is very important when one wishes to describe diradicals and bond-breaking and bond-forming processes. He and his co-workers have formulated what they call a renormalized coupled-cluster method that can accurately describe bond breaking and excited electronic surfaces at a computational cost similar to that of a single-configuration reference calculation. They have also been looking into using explicitly correlated two-electron exponential cluster expansions of the N-electron wave function to see to what extent one can capture most (if not all) of the electron-electron correlations within such a compact framework. Professor Piotr Piecuch (upper left) with his research group at Michigan State University Professor Nicholas Handy (above), Cambridge University, has made numerous contributions to electronic structure theory, to the theory of molecular spectroscopy, and to the rapidly expanding field of density functional theory. Professor Jose Ramon Alvarez Collado (above) has made several contributions to Hartree-Fock and configuration interaction theory as well as to the treatment of vibrational Hamiltonians and vibrational motions of molecules. He recently has shown how to handle large clusters or solid materials that contain a very large number of unpaired electrons. Professor Mark Ratner, Northwestern University Professor Cliff Dykstra, Indiana University-Purdue University Professor John Stanton, University of Texas Professor Bjørn Roos, Lund University, Sweden, has been one of his nation's leading quantum chemists for many years, has developed one of the most powerful and widely used quantum chemistry codes, and has organized many schools on quantum chemistry. Professor Jerzy Leszczynski at Jackson State University has established a very strong program in quantum chemistry and has hosted many very important conferences as well as "schools". Professor Kimihiko Hirao's group in Tokyo has made many important contributions to the development and applications of modern quantum chemistry tools, especially those involving multi-configurational wave functions. There exists an approach to solving the Schrödinger equation that has proven to be extremely accurate and is based on viewing the time-dependent Schrödinger equation as a diffusion equation with its time variable defined as imaginary. The idea then is to propagate an "initial" wavefunction (chosen to possess the proper permutational and symmetry properties of the desired solution) forward in time with the diffusion equation (having a source and sink term arising from the electron-nuclear and electron-electron Coulomb potentials). It can be shown that such a propagated wavefunction will converge to the lowest energy state that has the symmetry and nodal behavior of the trial wavefunction; a toy sketch of this imaginary-time propagation is given below. The people who have done the most to propose, implement, and improve such so-called Diffusion Monte-Carlo type procedures include: Professor James Anderson of Penn State University Professor Bill Lester, University of California, Berkeley Professor Jules Moskowitz of New York University and Professor David Ceperley of the University of Illinois Professor Greg Gellene, Texas Tech, has pioneered the study of Rydberg species and of concerted reactions of small molecules and molecular complexes.
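Returning to the diffusion Monte-Carlo idea described a few sentences above, the following toy Python example shows the flavor of such an imaginary-time propagation for the simplest possible case: a single particle in a one-dimensional harmonic well, so there are no Coulomb terms, no fermionic nodes, and no importance sampling. The walker population diffuses and branches until its distribution settles into the ground state, whose exact energy in the reduced units used here is 0.5. All parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
dtau, n_steps, n_target = 0.01, 4000, 1000
walkers = rng.standard_normal(n_target)          # initial walker positions

def V(x):                                        # harmonic well; exact ground-state energy is 0.5
    return 0.5 * x ** 2

estimates = []
for step in range(n_steps):
    # kinetic energy -> free diffusion of the walkers in imaginary time
    walkers = walkers + np.sqrt(dtau) * rng.standard_normal(walkers.size)
    # potential energy -> source/sink term: walkers are duplicated or removed
    E_ref = V(walkers).mean() + (1.0 - walkers.size / n_target) / dtau
    branch = np.exp(-dtau * (V(walkers) - E_ref))
    copies = (branch + rng.random(walkers.size)).astype(int)
    walkers = np.repeat(walkers, copies)
    estimates.append(E_ref)

# crude growth-energy estimate; it fluctuates around the exact value of 0.5
print(np.mean(estimates[n_steps // 2:]))
```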
Greg Gellene Professor Anna Krylov, University of Southern California, has been developing new electronic structure methods aimed at particularly difficult classes of compounds where multiconfigurational wave functions are essential. These include diradical and triradical species. Professor Debbie Evans, University of New Mexico, has been working on electron transport and other quantum dynamics in branched macromolecules and other condensed phase systems. Professor Angela Wilson, University of North Texas has done a lot to calibrate basis sets so we know to what extent we can trust them in various kinds of electronic structure calculations. Prof. Angela Wilson Professor Thomas Cundari, University of North Texas, is very active in using electronic structure methods to study inorganic and organometallic species. Prof. Thomas Cundari (red shirt).  Professor Wes Borden recently joined the University of North Texas as Welch Professor. He has a long and distinguished record of applying quantum chemistry to important problems in organic chemistry. Prof. Wes Borden Professor Ludwik Adamowicz (below) of the University of Arizona has done a lot of work on molecular anions, especially dipole-bound anions involving bio-molecules. He has also done much work on multi-reference coupled cluster methods, method for generating non-adiabatic multiparticle wave functions, and for calculating rovibrational states of polyatomic molecules.  Professor Kwang Kim, is using a variety of theoretical methods to study functional materials with the support of Creative Research Initiativ, Ministry of Science and Technology of Korea. His laboratory has three subdivisions: (1) the quantum theoretical chemistry group, (2) a theoretical condensed matter physics group, and (3)a synthesis and property measurement group. Indiana University has had a long tradition of excellence in theoretical chemistry. Currently, its chemistry faculty include Prof. Peter Ortoleva, Prof. Krishnan Raghavachari, and Prof. Srinivasan Iyengar who are shown below. Professor Peter Ortoleva who works on pattern formations within biological systems as well as in geology. Professor Krishnan Raghavachari who has made numerous advances in quantum chemical methodologies and in the study of small to moderate size clusters of main group atoms. Prof. Srinivasan Iyengar who developed the atom-centered density matrix propogation method for combining electronic structure and collision/reaction dynamics and is applying this to a wide variety of problems. At Notre Dame University, there are also several faculty specializing in theory. They include Prof. Eli Barkai who studies single-molecule spectroscopy and fractional kinetics.  Prof. Dan Gezelter who studies condensed-phase molecular dynamics, and Prof. Dan Chipman who is interested in solvation effects, electronic structure methods developments and free radicals. Several recent new faculty hires have been made in extremely good chemistry departments including those shown below. We need to be looking out for many good new developments from these people. Prof. Garnet Chan, Cornell University, says the following about his group's work: Of particular theoretical interest are the construction of fast (polynomial) algorithms to solve the quantum many-particle problem, and the treatment of correlation in time-dependent processes. A key feature of our theoretical approach is the use of modern renormalization group and multi-scale ideas. 
These enable us to extend the range of simulation from the simple to the complex, and from the small to the very large. Some current phenomena under study include: (i) Energy and electron transfer in conjugated polymers: specifically photosynthetic carotenoids, optoelectronic polymers, and carbon nanotubes, (ii) Spin couplings in multiple-transition metal systems, including iron-sulfur proteins and molecular magnets. (iii) Lattice models of high Tc superconductors. Profesor Phillip Geissler, University of California, Berkeley Professor Troy van Voorhis, MIT, says the following about his group's work: The Van Voorhis group develops new methods that make reliable predictions about real systems for which existing techniques are inadequate. At present, our ideas center around the following major themes: the value of explicitly time-dependent theories, the importance of electron correlation and the proper treatment of delicate effects such as van der Waals forces and magnetic interactions. Electronic Structure of Molecular Magnetism Molecular magnetism is currently a ``hot'' area in chemical physics because of the technological promise of colossal magneto-resistant materials compounds and super-paramagnetic molecules. We are interested in developing a better fundamental understanding of magnetism that will allow us to predict the behavior of systems like these in an ab initio way. For example, one should be able to extract the Heisenberg exchange parameters (even for challenging oxo-bridged transition metal compounds) by simulating the response of the system to localized magnetic fields. We are also interested in extending the commonly used Heisenberg Hamiltonian to include spin orbit interactions in a local manner. This would be useful, for example, if one is interested in assembling a large molecule out of smaller building blocks - by knowing the preferred axis of each fragment one could potentially extract the magnetic axis and anisotropy of the larger compound. Modeling Real-Time Electron Dynamics Ab initio methods tend to focus the lion's share of attention on the description of electronic structure. However, there are a variety of systems where a focus on the electron dynamics is extremely fruitful. On the one hand, there are systems where it is the motion of the electrons that is interesting. This is true, for example, in conducting organic polymers and crystals - where it is charge migration that leads conductivity - and in photosynthetic and photovoltaic systems - where excited state energy transfer determines the efficiency. Also, in a very deep way, dynamic simulations can offer improved pictures of static phenomena. Here, our attention is focused on the fluctuation-dissipation theorem, an exact relation between the static correlation function and the time-dependent response of the system, and on semiclassical techniques, which provide a simple ansatz for approximating quantum results using essentially classical information. Prof. Aaron Dinner, Univ. of Chicago Prof. Misha Ovchinnikov, Univ. of Rochester Prof. David Mazziotti, Univ. of Chicago Web page links to many of the more widely used programs offer convenient access: Pacific Northwest Labs is developing a suite of programs called NWChem The MacroModel program The Gaussian suite of programs The GAMESS program The HyperChem programs of Hypercube, Inc. The CAChe software packages from Fujitsu The Spartan sofware package of Wavefunction, Inc. The MOPAC program of CambridgeSoft The Amber program of Prof. 
Peter Kollman, University of California, San Francisco The CHARMm program The programs of Accelrys, Inc. The COLUMBUS program The CADPAC program of Dr. Roger Amos The programs of Wavefunction, Inc. The ACES II program of Prof. Rod Bartlett. The MOLCAS program of Prof. Bjorn Roos. The MOLPRO quantum chemistry package of Profs. Werner and Knowles The Vienna Ab Initio Simulations Package (VASP) A nice compendium of various softwares is given in the Appendix of Reviews in Lipkowitz K B and Boyd D B (Eds) 1996 Computational Chemistry (New York, NY: VCH Publications) Vol 7 I hope the discussion I have offered has made it clear that there is every reason to believe that this sub-discipline of theoretical chemistry will continue to blossom for many years to come. Clearly, electronic structure theory provides a wealth of information about molecular structure and molecular properties. It does not, however, give us all the information we need to characterize a molecule's full motions. What is missing primarily is a description of the movements of the nuclei (or, equivalently, the bond lengths and angles and intermolecular coordinates), whose study lies within the realm of the next sub-discipline of theoretical chemistry to be discussed. B. Molecular and chemical dynamics describes the motions of the atoms within the molecule and the surrounding solvent The collisions among molecules and resulting energy transfers among translational, vibrational, rotational, and electronic modes, as well as chemical reactions that occur intramolecularly or in bi-molecular encounters lie within the realm of molecular and chemical dynamics theory. The vibrational and rotational motions that a molecule's nuclei undergo on any one of the potential energy surfaces EkQ) is also a subject for molecular dynamics and provides a logical bridge to the subject of molecular vibration-rotation spectroscopy. 1. Classical Newtonian Dynamics Can Often Be Used For any particular molecule with its electrons occupying one of its particular electronic states, the atomic centers (i.e., the N nuclei) undergo translational, rotational, and vibrational movements. The translational and rotational motions do not experience any forces and are thus "free motions" unless (1) surrounding solvent or lattice species are present or (2) an external electric or magnetic field is applied. Either of the latter influences will cause the translations and rotations to experience potential energies that depend on the location of the molecule's center of mass and the molecule's orientation in space, respectively. In contrast, the vibrational coordinates of a molecule experience forces that result from the dependence of the electronic state energy Ek on the internal coordinates Fi = - dEk({Q})/dQi. By expressing the kinetic energy T for internal vibrational motions and the corresponding potential energy V= Ek({Q}) in terms of 3N-6 internal coordinates {Qk}, classical equations of motion based on the so-called Lagrangian L = T - V: d/dt d[T-V]/di = d[T-V]/dQi can be developed. If surrounding solvent species or external fields are present, equations of motion can also be developed for the three center of mass coordinates R and the three orientational coordinates W . To do so one must express the molecule-solvent intermolecular potential energy in terms of R and W, and the translational and orientational kinetic energies must also be written in terms of the time rates of change of R and W. The equations of motion discussed above are classical. 
Most molecular dynamics theory and simulations are performed in this manner. When light atoms such as H, D, or He appear, it is often essential to treat the equations of motion that describe their motions (vibrations as well as translations and rotations in the presence of solvent or lattice surroundings) using the Schrödinger equation instead. This is a much more difficult task.

a. A collision between an atom and a diatomic molecule

Let us consider an example to illustrate the classical and quantum treatments of dynamics. In particular, consider the collision of an atom A with a diatomic molecule BC with all three atoms constrained to lie in a plane.

I. The coordinates in which the kinetic energy has no cross terms

Here the three atoms have a total of 2N = 6 coordinates because they are constrained to lie in the X,Y plane. The center of mass of the three atoms, x = (mA xA + mB xB + mC xC)/M and y = (mA yA + mB yB + mC yC)/M, requires two coordinates to specify, which leaves 2N-2 = 4 coordinates to describe the internal and overall orientational arrangement of the three atoms. It is common (the reason will be explained below) in such triatomic systems to choose the following specific set of internal coordinates: 1. r, the distance between two of the atoms (usually the two that are bound in the A + BC collision); 2. R, the distance of the third atom (A) from the center of mass of the first two atoms (BC); 3. a and b, two angles between the r and R vectors and the laboratory-fixed X axis, respectively.

II. The Hamiltonian or total energy

In terms of these coordinates, the total energy (i.e., the Hamiltonian) can be expressed as follows:

H = 1/2 m' (dR/dt)² + 1/2 m (dr/dt)² + 1/2 m' R² (da/dt)² + 1/2 m r² (db/dt)² + 1/2 M ((dx/dt)² + (dy/dt)²) + V.

Here, m is the reduced mass of the BC molecule (m = mB mC/(mB + mC)), M is the mass of the entire molecule (M = mA + mB + mC), and m' is the reduced mass of A relative to BC (m' = mA mBC/(mA + mBC)). The potential energy V is a function of R, r, and q = a + b, the angle between the r and R vectors.

III. The conjugate momenta

To eventually make a connection to the quantum mechanical Hamiltonian, it is necessary to rewrite the above H in terms of the coordinates and their so-called conjugate momenta. For any coordinate Q, the conjugate momentum PQ is defined by PQ = ∂L/∂(dQ/dt), the derivative of the Lagrangian L = T - V with respect to the time rate of change of Q. For example, PR = m' dR/dt, Pr = m dr/dt, Pa = m' R² da/dt, and Pb = m r² db/dt allow H to be rewritten as

H = PR²/2m' + Pr²/2m + Pa²/2m'R² + Pb²/2mr² + (PX² + PY²)/2M + V.

Because the potential V contains no dependence on X or Y, the center of mass motion (i.e., the kinetic energy (PX² + PY²)/2M) will time evolve in a straight-line trajectory with no change in its energy. As such, it can be removed from further consideration. The total angular momentum of the three atoms about the Z-axis can be expressed in terms of the four remaining coordinates as follows: LZ = m' R² da/dt - m r² db/dt = Pa - Pb. Since this total angular momentum is a conserved quantity (i.e., for each trajectory, the value of LZ remains unchanged throughout the trajectory), one can substitute this expression for Pa and rewrite H in terms of only three coordinates and their momenta:

H = PR²/2m' + Pr²/2m + (LZ - Pb)²/2m'R² + Pb²/2mr² + V.

Notice that Pb is the angular momentum of the BC molecule (Pb = m r² db/dt).
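For readers who like to verify such manipulations by machine, the conjugate momenta quoted above follow directly from differentiating the internal-coordinate Lagrangian with respect to the coordinate velocities. A minimal symbolic sketch is given below; the symbol names mu, mu_p, Rdot, etc. are mine, the center-of-mass term is dropped, and the angle dependence of V is suppressed for brevity.

```python
import sympy as sp

# Symbolic check of P_Q = dL/d(dQ/dt) for the planar A + BC internal coordinates.
# mu and mu_p stand for the text's m (BC reduced mass) and m' (A-BC reduced mass).
mu, mu_p, R, r = sp.symbols("mu mu_p R r", positive=True)
Rdot, rdot, adot, bdot = sp.symbols("Rdot rdot adot bdot")
V = sp.Function("V")(R, r)                      # angle dependence omitted here
T = sp.Rational(1, 2)*mu_p*(Rdot**2 + R**2*adot**2) \
    + sp.Rational(1, 2)*mu*(rdot**2 + r**2*bdot**2)
L = T - V
for name, qdot in [("P_R", Rdot), ("P_r", rdot), ("P_a", adot), ("P_b", bdot)]:
    print(name, "=", sp.diff(L, qdot))
# -> P_R = mu_p*Rdot, P_r = mu*rdot, P_a = mu_p*R**2*adot, P_b = mu*r**2*bdot
```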
IV. The equations of motion

From this Hamiltonian (or the Lagrangian L = T - V), one can obtain, using d/dt (∂L/∂(dQi/dt)) = ∂L/∂Qi, the equations of motion for the three coordinates and three momenta:

dPR/dt = -∂H/∂R = -∂V/∂R + (LZ - Pb)²/m'R³,
dPr/dt = -∂V/∂r + Pb²/mr³,
dPb/dt = -∂V/∂b = -∂V/∂q,
dR/dt = PR/m',
dr/dt = Pr/m,
db/dt = Pb/mr².

V. The initial conditions

These six classical equations of motion that describe the time evolution of R, r, b, PR, Pr, and Pb can then be solved numerically as discussed earlier. The choice of initial values of the coordinates and momenta will depend on the conditions in the A + BC collision that one wishes to simulate. For example, R will be chosen as very large, and PR will be negative (reflecting inward rather than outward relative motion) and with a magnitude determined by the energy of the collision: PR²/2m' = Ecoll. If the diatomic BC is initially in a particular vibrational state, say v, the coordinate r will be chosen from a distribution of values given as the square of the v-state's vibrational wavefunction |ψv(r)|². The value of Pr can then be determined to within a sign from the energy εv of the v state: Pr²/2m + VBC = εv, where VBC is the BC molecule's potential energy as a function of r. Finally, the angle b can be selected from a random distribution within 0 < b < 2π, and Pb would be assigned a value determined by the rotational state of the BC molecule at the start of the collision.

VI. Computer time requirements

The solution of the above six coupled first-order differential equations may seem like it presents a daunting task. This is, however, not at all the case given the speed of modern computers. For example, to propagate these six equations using time steps of dt = 10^-15 sec (one must employ a time step that is smaller than the period of the fastest motion - in this case, probably the B-C vibrational period, which can be ca. 10^-14 sec) for a total time interval of one nanosecond, Δt = 10^-9 sec, would require of the order of 10^6 applications of the above six equations. If, for example, the evaluation of the forces or potential derivatives appearing in these equations requires approximately 100 floating point operations (FPO), this 1 nanosecond trajectory would require about 10^8 FPOs. On a 100 Mflop (Mflop means million floating point operations per sec) desktop workstation, this trajectory would require only one second to run! You might wonder about how long trajectories on much larger molecules would run. The number of coordinates and momenta involved in any classical dynamics simulation is 3N-6. The evaluation of the force on any one atom due to its interactions with other atoms typically requires computer time proportional to the number of other atoms (since potentials and thus forces are often pairwise additive). In such cases, the number of FPOs would be approximately

#FPO = 100 x (Δt/dt) x ((3N-6)/3) x (3N-7)/2,

where the (3N-6)/3 factor scales the number of coordinates and momenta from 3 in the above example to the number for a general molecule containing N atoms, and (3N-7)/2 scales the number of "other" coordinates that enter into the force calculation for any given coordinate. Thus, we expect #FPO to vary as #FPO = 100 x (Δt/dt) x 3N²/2 for large N. So a trajectory run for 1 ns with time steps of 10^-15 sec on a large bio-molecule containing 1000 atoms would require approximately #FPO = 1.5 x 10^14 operations. Even with a 100 Mflop computer, this trajectory would run for 1.5 x 10^6 sec (or ca. 17 days!).
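To make steps IV-VI concrete, the sketch below integrates the six equations of motion written above for a model A + BC system. The Morse oscillator for BC, the exponential A-BC repulsion, and every parameter value are illustrative assumptions, not a real potential surface, and the angle dependence of V is omitted so that -∂V/∂q = 0; mu and mup stand for the text's m and m'.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model potential (an assumption, not a real A + BC surface): a Morse oscillator
# for BC plus a purely repulsive A--BC interaction, independent of the angle q.
De, beta, re = 0.2, 1.5, 1.2      # Morse well depth, range, equilibrium bond length
Arep, rho = 5.0, 0.8              # repulsion strength and range
mu, mup, Lz = 1.0, 2.0, 0.0       # the text's m, m' and total angular momentum LZ

def V(R, r):
    return De*(1.0 - np.exp(-beta*(r - re)))**2 + Arep*np.exp(-R/rho)

def dVdR(R, r):
    return -(Arep/rho)*np.exp(-R/rho)

def dVdr(R, r):
    return 2.0*De*beta*np.exp(-beta*(r - re))*(1.0 - np.exp(-beta*(r - re)))

def hamilton(t, y):
    R, r, b, PR, Pr, Pb = y
    return [PR/mup,                                   # dR/dt
            Pr/mu,                                    # dr/dt
            Pb/(mu*r**2),                             # db/dt
            -dVdR(R, r) + (Lz - Pb)**2/(mup*R**3),    # dPR/dt
            -dVdr(R, r) + Pb**2/(mu*r**3),            # dPr/dt
            0.0]                                      # dPb/dt = -dV/dq = 0 here

# step V: A starts far out, moving inward with collision energy Ecoll = PR^2/2m'
Ecoll = 0.1
y0 = [12.0, re, 0.0, -np.sqrt(2.0*mup*Ecoll), 0.0, 0.5]
sol = solve_ivp(hamilton, (0.0, 400.0), y0, max_step=0.05, rtol=1e-8)

R, r, Pb = sol.y[0], sol.y[1], sol.y[5]
E = sol.y[3]**2/(2*mup) + sol.y[4]**2/(2*mu) \
    + (Lz - Pb)**2/(2*mup*R**2) + Pb**2/(2*mu*r**2) + V(R, r)
print("final R:", R[-1], "  energy drift:", E.max() - E.min())
```

The printed energy drift plays the role of the time-step check discussed above: if the step is too large for the fastest (BC vibrational) motion, the drift grows.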
For such large-molecule simulations, which are routinely carried out these days, it is most common to "freeze" the movement of the high frequency coordinates (e.g., the C-H bond stretching motions in a large bio-molecule), so that longer time steps can be taken (e.g., this may allow one to take dt = 10-14 sec or longer). It is also common to ignore the forces produced on any given atom by those atoms that are far removed from the given atom. In this way, one can obtain a simulation whose FPO requirements do not scale as N2 but as NxN', where N' is the number of atoms close enough to produce significant forces. By making such approximations, classical trajectory simulations on molecules (or molecules with solvent species also present) containing 1000 or more moving atoms can be carried out in a matter of a few minutes per trajectory. b. Monitoring the outcome of a classical trajectory Let us consider a case in which the collision A + BC is followed from its initial conditions through a time at which the A atom has struck the BC molecule and been scattered so A and BC are now traveling away from one another. In such a situation, we speak of a non-reactive collision. Alternatively, the A + BC collision may result in formation of AB + C, which, of course, corresponds to a chemically reactive collision. In this case, to follow the trajectory toward its ultimate conclusion, it is common to employ a different set of internal coordinates than were used in the early part of the collision. In particular, one changes to two new distance and two new angular coordinates as shown below. These coordinates are analogous to those used to describe A + BC and are: r the distance from C to the center of mass of AB, t the AB internuclear distance, and two angles (g and f) between the x- axis and t and r, respectively. So, as the classical Newton equations are solved in a step-by-step manner, eventually one changes from solving the equations for the time evolution of R, r, a, and b and their momenta to solving the corresponding equations for r, t, f, and g and their conjugate momenta. Then, once the C atom is far enough away (and still moving outward) to declare the collision ended, one can interrogate its outcome. In particular, one can examine the vibrational and/or rotational energy content of the product (BC in the non-reactive case; AB in the reactive case) diatomic molecule, and one can compute the relative kinetic energy (e.g., 1/2 m' (dr/dt)2 in the reactive case) with which the product fragments are moving away from one another. It is through such interrogation that one extracts information from classical trajectory simulations. Professor Barbara Garrison, Penn State University, has exploited such classical trajectory simulations to study reactions and energy transfer processes taking place when an atom, ion, or molecule impacts a surface that may undergo ablation as a result of the impact. Professor Barbara Garrison, Penn State University. 2. Sometimes Quantum Dynamics are Required Let us now consider the above A + BC collision dynamics simulation but from the point of view of the quantum mechanical Schrödinger equation. This is a much more difficult problem to treat. Why? 
Because instead of having to solve six coupled first-order differential equations subject to specified initial conditions, one must solve one four-dimensional partial differential equation, Hψ = Eψ, where H is now the quantum mechanical Hamiltonian operator obtained from the classical H given above by replacing each momentum by the corresponding derivative operator (e.g., PR → -iħ ∂/∂R), or one must propagate in time ψ(R,r,a,b;t) = exp[-iHt/ħ] ψ(R,r,a,b;t=0) to find ψ at time t, given initial conditions for ψ at t=0.

a. What needs to be done to apply the Schrödinger equation?

I. An initial wavefunction must be given

Specifying an initial wavefunction ψ(R,r,a,b;t=0) is not the difficult part of this quantum simulation. One would probably form the initial wavefunction as a product of a vibrational function ψv(r) appropriate for the vth level of the BC molecule, a rotational wavefunction exp(iPb b) describing rotation of the BC molecule, a relative-motion wavefunction ψR(R) describing motion along the R coordinate, and a function exp(iPa a) describing the angular momentum of A relative to the center of mass of the BC molecule:

ψ(R,r,a,b;t=0) = ψv(r) exp(iPb b) exp(iPa a) ψR(R).

If, alternatively, the initial value of the total angular momentum is specified, the relation LZ = m' R² da/dt - m r² db/dt = Pa - Pb can be used to eliminate the initial Pa and express it in terms of LZ and Pb. The most likely way to describe a collision in which the A atom begins at a position R0 along the R axis and with a relative collision momentum along this coordinate of -PR0 is to use a so-called coherent wave packet function (see Professor Rick Heller):

ψR(R) = exp(-iPR0 R/ħ) [2π <δR²>]^(-1/2) exp(-(R - R0)²/(4<δR²>)).

Professor Rick Heller

Here, the parameter <δR²> gives the uncertainty or "spread" along the R degree of freedom for this wavefunction, defined as the mean squared displacement away from the average coordinate R0, since it can be shown for the above function that ∫ (R - R0)² ψR*(R) ψR(R) R² dR = <δR²> and that R0 = ∫ R ψR*(R) ψR(R) R² dR. It can also be shown that the parameter PR0 is equal to the average value of the momentum along the R coordinate, ∫ ψR*(R) {-iħ ∂ψR/∂R} R² dR = PR0, and that the uncertainty in the momentum along the R coordinate, <δPR²> = ∫ ψR*(R) {-iħ ∂/∂R + PR0}² ψR(R) R² dR, is related to the uncertainty in the R coordinate by <δPR²> <δR²> = ħ²/4. Note that for these coherent state wavefunctions, the uncertainties fulfill the general Heisenberg uncertainty criterion for any coordinate Q and its conjugate momentum P, <δP²> <δQ²> ≥ ħ²/4, but have the smallest possible product of uncertainties. In this sense, the coherent state function is the closest possible to a classical description (where uncertainties are absent).

II. Time propagation of the wavefunction must be effected - the basis expansion approach

The difficult part of the propagation process involves applying the time evolution operator exp(-iHt/ħ) to this initial wavefunction. The most powerful tools for applying this operator fall under the class of methods termed Feynman path-integral methods (see the links to Professors Greg Voth, Nancy Makri, and Jim Doll). However, let us first examine an approach that has been used for a longer time, for example in the hands of Professors George Schatz and John Light. Professor Greg Voth Professor Nancy Makri Professor George Schatz Professor John Light Professor Jim Doll Several other first-rate theorists have contributed much to the field of molecular dynamics, including quantum dynamics and condensed-media processes.
They include those shown below: Professor Chi Mak, USC Professor Debbie Evans, University of New Mexico Professor David Freeman, University of Rhode Island Professor Ann McKoy, Ohio State University Professor Ron Elber, Cornell Universtiy Professor David Wales, Cambridge University Professor Rob Coalson, University of Pittsburgh, one of the leaders in the applications of quantum dynamics to condensed phase and biological problems including ion channels. a. Expanding the wavefunction in a basis If, alternatively, one prefers to attempt to solve the time-independent Schrödinger equation Hyk = Ek yk the most widely used approaches involve expanding the unknown y(R,r,a,b) in a basis consisting of products of functions of R, functions of r, and functions of a and of b: yk = Sn,v,L,M Ck (n,v,L,M) Fn (R) yv(r) yL (b) yM (a). This expansion is what one would use in the case of a non-reactive collision. If a chemical reaction (e.g., A + BC Æ AB + C) is to be examined, then one must add to the sum of terms given above another sum which contains products of functions of the so-called exit-channel coordinates (r, t, g, and f). In this case, one uses + Sn,v,L,M Ck (n,v,L,M) Fn (r) yv(t) yL (g) yM (f). Of course, the yv , yL, Fn, and yM functions relating to the exit-channel are different than those for the entrance channel (e.g., yv (r) would be a AB-molecule vibrational wavefunction, but yv (r) would be a BC-molecule vibrational function). As will be seen from the discussion to follow shortly, expanding y in this manner and substituting into the Schrödinger equation produces a matrix eigenvalue equation whose eigenvalues are the Ek and the eigenvectors are the expansion coefficients Ck (n,v,L,M). Given any wavefunction y(R,r,a,b;t=0) at t=0 (e.g., the one shown above may be appropriate for a collision of A with a BC molecule in a specified vibrational/rotational state), one can then express this wavefunction at later times as follows: y(R,r,a,b;t) = Sk yk exp(-i Ekt/h) yk * y(R,r,a,b;t=0) dr . This expression arises when one uses the facts that: 1. the {yk } form a complete set of functions so one can expand y(R,r,a,b;t=0) in this set, and 2. the time evolution operator exp (-i Ht/h) can be applied to any eigenfunction yk and produces exp (-i Ht/h) yk = exp (-i Ekt/h) yk . So, it is possible to follow the time development of an initial quantum wavefunction by first solving the time-independent Schrödinger equation for all of the {yk } and then expressing the initial wavefunction in terms of the eigenfunctions (i.e, the integral Ú yk * y(R,r,a,b;t=0) dr is nothing but the expansion coefficient of y(R,r,a,b;t=0) in terms of the yk ) after which the wavefunction at a later time can be evaluated by the expression y(R,r,a,b;t) = Sk yk exp(-i Ekt/h) Ú yk * y(R,r,a,b;t=0) dr. b. The resulting matrix eigenvalue problem Let us now return to see how the expansion of y in the manner (or the generalization in which both entrance- and exit-channel product functions are used) produces a matrix eigenvalue problem. 
Substituting this expansion into Hψ = Eψ and subsequently multiplying through by the complex conjugate of one particular Φn'(R) ψv'(r) ψL'(b) ψM'(a) and integrating over R, r, b, and a, one obtains a matrix eigenvalue equation

H Ck = Ek Ck

in which the eigenvector is the vector of expansion coefficients Ck(n,v,L,M) and the H matrix has elements

Hn'v'L'M',nvLM = <Φn'(R) ψv'(r) ψL'(b) ψM'(a)| H |Φn(R) ψv(r) ψL(b) ψM(a)>

given in terms of integrals over the basis functions with the above Hamiltonian operator in the middle. Notice that in writing the integral on the right-hand side of the above equation, the so-called Dirac notation has been used. In this notation, any integral with a wavefunction on the right, a complex conjugated wavefunction on the left, and an operator in the middle is written as follows: ∫ Ψ* Op φ dq = <Ψ|Op|φ>.

c. The dimension of the matrix scales as the number of basis functions to the number-of-coordinates power

What makes the solution of the four-variable Schrödinger equation much more difficult than the propagation of the four-coordinate (plus four-momenta) Newtonian equations? The Hamiltonian differential operator is non-separable because V depends on R, r, and q (which is related to b and a by q = a + b) in a manner that cannot be broken apart into an R-dependent piece plus an r-dependent piece plus a q-dependent piece. As a result, the four-dimensional second-order partial differential equation cannot be separated into four second-order ordinary differential equations. This is what makes its solution especially difficult. To make progress solving the four-dimensional Schrödinger equation on a computer, one is thus forced either to

1. expand ψ(R,r,a,b) as Σn,v,L,M C(n,v,L,M) Φn(R) ψv(r) ψL(b) ψM(a), and then solve the resultant matrix eigenvalue problem, or to

2. represent ψ(R,r,a,b) on a grid of points in R, in r, and in a and b space and use finite-difference expressions (such as [F(R+δR) + F(R-δR) - 2F(R)]/δR² to represent d²F(R)/dR²) to also express the derivatives on this same grid, to ultimately produce a sparse matrix whose eigenvalues must be found.

In either case, one is left with the problem of finding eigenvalues of a large matrix. In the former case, the dimension of the H matrix is equal to the product of the number of Φn(R) functions used in the expansion times the number of ψv(r) functions used times the number of ψL(b) functions used times the number of ψM(a) functions. In the latter case, the dimension of H is equal to the number of grid points along the R coordinate times the number of grid points along r times the number along b times the number along a. It is the fact that the dimension of H grows quartically with the size of the basis or grid set used (because there are four coordinates in this problem; for a molecule with N atoms, the dimension of H would grow as the number of grid points or basis functions to the 3N-6 power!) and that the computer time needed to find all of the eigenvalues of a matrix grows as the cube of the dimension of the matrix (to find one eigenvalue requires time proportional to the square of the dimension) that makes the solution of this quantum problem very difficult. Let us consider an example. Suppose that a grid of 100 points along the R-coordinate and 100 points along the r-coordinate were used along with, say, only 10 points along each of the a and b coordinates. This would produce a (albeit very sparse) H matrix of dimension 10^6. (A small two-dimensional illustration of this grid-based construction is sketched below.)
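The following small two-dimensional sketch illustrates the grid route on a model problem - a 2D harmonic potential with ħ = m = 1, an assumption made purely for illustration rather than the A + BC Hamiltonian. The finite-difference Hamiltonian is stored as a sparse matrix, and an iterative eigensolver extracts only the few lowest eigenvalues, which is how one sidesteps the cubic cost of finding all of them.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Model 2D harmonic potential with hbar = m = 1 (illustrative assumptions only).
n, Lbox = 64, 10.0
x = np.linspace(-Lbox/2, Lbox/2, n)
dx = x[1] - x[0]

# 1D second derivative from the finite-difference formula quoted above
D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
I = sp.identity(n)

# kinetic energy -(1/2)(d2/dx2 + d2/dy2) built with Kronecker products
T = -0.5*(sp.kron(D2, I) + sp.kron(I, D2))
X, Y = np.meshgrid(x, x, indexing="ij")
Vgrid = 0.5*(X**2 + Y**2)                      # potential evaluated on the grid
H = (T + sp.diags(Vgrid.ravel())).tocsr()      # sparse n^2-by-n^2 Hamiltonian

# only the few lowest eigenvalues; exact values for this model are 1, 2, 2, 3, ...
vals = eigsh(H, k=4, which="SA", return_eigenvectors=False)
print(np.sort(vals))
```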
To find just one eigenvalue of a matrix of dimension 10^6 on a 100 Mflop desktop workstation would require on the order of several times (10^6)²/(100 x 10^6) sec, or at least 100 minutes. To find all 10^6 eigenvalues would require 10^6 times as long.

III. Time propagation of the wavefunction can be effected using Feynman path integrals instead

Given a wavefunction at t=0, ψ(R,r,a,b;t=0), it is possible to propagate it forward in time using so-called path integral techniques that Richard Feynman pioneered and which have become more popular and commonly used in recent years (see links to Professors Greg Voth, Jim Doll, and Nancy Makri). In these approaches, one divides the time interval between t=0 and t=t into N small increments of length τ = t/N, and then expresses the time evolution operator as a product of N short-time evolution operators:

exp[-iHt/ħ] = {exp[-iHτ/ħ]}^N.

For each of the short time steps τ, one then approximates the propagator as

exp[-iHτ/ħ] ≈ exp[-iVτ/2ħ] exp[-iTτ/ħ] exp[-iVτ/2ħ],

where V and T are the potential and kinetic energy operators that appear in H = T + V. It is possible to show that this approximation is in error only in terms of order τ³. So, for short times (i.e., small τ), these so-called symmetric split-operator approximations to the propagator should be accurate. The time-evolved wavefunction ψ(t) can then be expressed as

ψ(t) = {exp[-iVτ/2ħ] exp[-iTτ/ħ] exp[-iVτ/2ħ]}^N ψ(t=0).

The potential V is (except when external magnetic fields are present) a function only of the coordinates {qj} of the system, while the kinetic term T is a function of the momenta {pj} (assuming Cartesian coordinates are used). By making use of the completeness relation for eigenstates of the coordinate operator, 1 = ∫ dqj |qj><qj|, and inserting this identity N times (once between each factor of exp[-iVτ/2ħ] exp[-iTτ/ħ] exp[-iVτ/2ħ]), the expression given above for ψ(t) can be rewritten as follows:

ψ(qN,t) = ∫ dqN-1 ∫ dqN-2 . . . ∫ dq1 ∫ dq0 Π(j=1 to N) { exp[(-iτ/2ħ)(V(qj) + V(qj-1))] <qj| exp(-iτT/ħ) |qj-1> } ψ(q0,0).

Then, by using the analogous completeness identity for the momentum operator, 1 = (1/ħ) ∫ dpj |pj><pj|, one can write

<qj| exp(-iτT/ħ) |qj-1> = (1/ħ) ∫ dp <qj|p> exp(-ip²τ/2mħ) <p|qj-1>.

Finally, by using the fact that the momentum eigenfunctions |p>, when expressed as functions of the coordinate q, are given by <qj|p> = (1/2π)^(1/2) exp(ipqj/ħ), the above integral becomes

<qj| exp(-iτT/ħ) |qj-1> = (1/2πħ) ∫ dp exp(-ip²τ/2mħ) exp[ip(qj - qj-1)/ħ].

This Gaussian integral over p can be carried out analytically to give

<qj| exp(-iτT/ħ) |qj-1> = (m/2πiħτ)^(1/2) exp[im(qj - qj-1)²/2ħτ].

When substituted back into the multidimensional integral for ψ(qN,t), we obtain

ψ(qN,t) = (m/2πiħτ)^(N/2) ∫ dqN-1 ∫ dqN-2 . . . ∫ dq1 ∫ dq0 Π(j=1 to N) exp{(i/ħ)[m(qj - qj-1)²/2τ - τ(V(qj) + V(qj-1))/2]} ψ(q0,t=0).

Why are such quantities called path integrals? The sequence of positions q0, q1, ... , qN describes a "path" connecting q0 to qN. By integrating over all of the intermediate positions q1, q2, ... , qN-1 one is integrating over all such paths. Further insight into the meaning of the above is gained by first realizing that

m(qj - qj-1)²/2τ = (1/2) m [(qj - qj-1)/τ]² τ = ∫ T dt

is the representation, within the N discrete time steps of length τ, of the integral of T dt over the jth time step, and that

τ[V(qj) + V(qj-1)]/2 = ∫ V(q) dt

is the representation of the integral of V dt over the jth time step. (A small numerical illustration of the split-operator propagation written above is sketched below.)
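In practical wave packet calculations, the symmetric split-operator factorization written above is commonly applied by switching between the coordinate representation, where exp(-iVτ/2ħ) is diagonal, and the momentum representation, where exp(-iTτ/ħ) is diagonal, using fast Fourier transforms. A minimal one-dimensional sketch (ħ = m = 1; the Gaussian barrier and all parameters are illustrative assumptions) that propagates a coherent wave packet of the kind introduced earlier:

```python
import numpy as np

# 1D split-operator propagation with hbar = m = 1 (illustrative model, not A + BC).
n, Lbox = 1024, 200.0
x = np.linspace(-Lbox/2, Lbox/2, n, endpoint=False)
dx = x[1] - x[0]
k = 2.0*np.pi*np.fft.fftfreq(n, d=dx)            # momentum grid conjugate to x

V = 0.8*np.exp(-x**2/4.0)                        # assumed Gaussian barrier
x0, p0, sigma = -40.0, 1.0, 4.0                  # coherent (Gaussian) initial packet
psi = np.exp(1j*p0*x)*np.exp(-(x - x0)**2/(4.0*sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)

tau, nsteps = 0.05, 1600
expV = np.exp(-0.5j*V*tau)                       # exp(-i V tau/2), diagonal in x
expT = np.exp(-0.5j*k**2*tau)                    # exp(-i T tau), diagonal in k (T = k^2/2)
for _ in range(nsteps):
    psi = expV*psi
    psi = np.fft.ifft(expT*np.fft.fft(psi))      # kinetic step applied in momentum space
    psi = expV*psi

trans = np.sum(np.abs(psi[x > 10.0])**2)*dx      # probability transmitted past the barrier
print("norm:", np.sum(np.abs(psi)**2)*dx, " transmission:", trans)
```

Because each factor is unitary, the printed norm stays at 1 regardless of the time step; the accuracy of the transmission probability, by contrast, does depend on τ.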
So, for any particular path (i.e., any specific set of q0, q1, ... , qN-1, qN values), the sum over all N such terms, Σj [m(qj - qj-1)²/2τ - τ(V(qj) + V(qj-1))/2], is the discrete representation of the integral over all time from t=0 until t=t of the so-called Lagrangian L = T - V:

Σj [m(qj - qj-1)²/2τ - τ(V(qj) + V(qj-1))/2] = ∫ L dt.

This time integral of the Lagrangian is called the "action" S in classical mechanics. Hence, the N-dimensional integral in terms of which ψ(qN,t) is expressed can be written as a sum over all paths:

ψ(qN,t) = (m/2πiħτ)^(N/2) Σ(all paths) exp{(i/ħ) ∫ L dt} ψ(q0,t=0).

Here, the notion of "all paths" is realized in the earlier version of this equation by dividing the time axis from t=0 to t=t into N equal divisions, and denoting the coordinates of the system at the jth time step by qj. By then allowing each qj to assume all possible values (i.e., integrating over all possible values of qj), one visits all possible paths that begin at q0 at t=0 and end at qN at t=t. By forming the classical action S = ∫ L dt for each path, then summing exp(iS/ħ) ψ(q0,t=0) over all paths and multiplying by (m/2πiħτ)^(N/2), one is able to form ψ(qN,t).

The difficult step in implementing this Feynman path integral method in practice involves how one identifies all paths connecting q0, t=0 to qN, t. Each path contributes an additive term involving the complex exponential of its action S. This sum of many, many (actually, an infinite number of) oscillatory exp(iS/ħ) = cos(S/ħ) + i sin(S/ħ) terms is extremely difficult to evaluate because of the tendency of contributions from one path to cancel those of another path. The evaluation of such sums remains a very active research subject. The most commonly employed approximation to this sum involves finding the path(s) for which the action is smallest, because such paths produce the lowest-frequency oscillations in exp(iS/ħ) and thus are not subject to cancellation by contributions from other paths. The path(s) that minimize the action S are, in fact, the classical paths. That is, they are the paths that the system whose quantum wavefunction is being propagated in time would follow if the system were undergoing classical Newtonian mechanics subject to the conditions that the system be at q0 at t=0 and at qN at t=t. In this so-called semi-classical approximation to the propagation of the initial wavefunction using Feynman path integrals, one finds all classical paths that connect q0 at t=0 and qN at t=t, and one evaluates the action S for each such path. One then applies the formula ψ(qN,t) = (m/2πiħτ)^(N/2) Σ exp{(i/ħ) ∫ L dt} ψ(q0,t=0), but includes in the sum only the contribution from the classical path(s). In this way, one obtains an approximate quantum propagated wavefunction via a procedure that requires knowledge of only classical propagation paths.

b. How is information extracted from a quantum dynamics simulation?

Once an initial quantum wavefunction has been propagated for a time long enough for the event of interest to have occurred (e.g., long enough for an A + BC → AB + C reactive collision to take place in the above example problem), one needs to interpret the wavefunction in terms of functions that are applicable to the final state. In the example considered here, this means interpreting ψ(R,r,a,b;t) in terms of exit-channel basis states that describe an intact AB molecule with a C atom moving away from it (i.e., the terms Σn,v,L,M Ck(n,v,L,M) Fn(r) yv(t) yL(g) yM(f) introduced earlier). As explained earlier when the classical trajectory approach was treated, to describe this final-state configuration of the three atoms, one uses different coordinates than the R, r, a, b that were used for the initial state.
The appropriate coordinates are shown below in the figure. That is, one projects the long-time wavefunction onto a particular exit-channel product function Fn (r) yv(t) yL (g) yM (f) to obtain the amplitude A of that exit channel in the final wavefunction: A = < Fn (r) yv(t) yL (g) yM (f)| y(R,r,a,b;t)> = Sk < Fn (r) yv(t) yL (g) yM (f)| yk> exp(-iEkt/h) Ú yk * y(R,r,a,b;t=0) dr. Using the earlier expansion for yk , one sees that < Fn (r) yv(t) yL (g) yM (f)| yk> = Ck (n,v,L,M) for the exit channel, so the modulus squared of A, which gives the probability of finding the AB + C system in the particular final state Fn (r) yv(t) yL (g) yM (f), is given by: |A|2 = | Sk Ck (n,v,L,M) exp(-i Ekt/h) Ú yk * y(R,r,a,b;t=0) dr |2 . This result can be interpreted as saying that the probability of finding the state Fn (r) yv(t) yL (g) yM (f) is computed by (1) first, evaluating the projection of the initial wavefunction along each eigenstate (these components are the Ú yk * y(R,r,a,b;t=0) dr), (2) multiplying by the projection of the specified final wavefunction along each eigenstate (the < Fn (r) yv(t) yL (g) yM (f)| yk> = Ck (n,v,L,M)), (3) multiplying by a complex phase factor exp(-i h Ek t) that details how each eigenstate evolves in time, and (4) summing all such products after which (5) taking the modulus squared of the entire sum. 3. Present Day Challenges in Chemical Dynamics Professor Bill Reinhardt, University of Washington. His early career focused on developing theories and computational tools for studying the dynamical decay of metastable states (e.g., such as occurs in unimolecular decomposition of molecules) and the rates of energy transfer among modes in molecules. Professor Richard Stratt, Brown University. He has been involved in examining the dynamics of clusters of atoms or molecules that are small on a macroscopic level (i.e., do not contain enough atoms to be considered to follow macroscopic thermodynamic laws) yet large enough to show certain characteristics (e.g., phase transition like behavior) associated with macroscopic systems. The late Professor Kent Wilson, University of California, San Diego. He was active in simulating dynamical behavior of a wide variety of complicated chemical and physical systems. His web sites contain some of the more "exciting" depictions and animations of molecular phenomena. Professor John Tully, Yale University, has pioneered much of the study of dynamics and reactions that occur at or near surfaces and he invented the most commonly used so-called surface hopping model for treating non-adiabatic transitions among potential energy surfaces.  Professor Mark Child, Oxford University has, along with Professor Bill Miller, Berkeley, developed the area of semi-classical collision and reaction dynamics. This approch to solving quantum equations in a manner that allows one to retain a great deal of classical mechanics' conceptual clarity has been very successufl and has lead to many new insights. Professor Mark Child of Oxford University Professor Bill Miller of Berkeley Professor Don Truhlar, University of Minnesota was able to extend the Eyring idea of the transition state (the "pass" or col separating reactant and product valleys on the potential energy surface) by introducing what is now termed Variational Transition State Theory in which a pass on a free energy surface replaces the conventional transition state. He and his long-time collaborator, Dr. 
Bruce Garrett, succeeded in making this theory one of the most useful computational and conceptual tools in reaction dynamics. Don Truhlar and Bruce Garrett Professor D avid Clary, University College, London has extended the capability of quantum reactive dynamics theory to systems with several degrees of freedom by introducing clever approximations that focus attention on certain "active" coordinates while treating other coordinates in an approximate manner. Such approaches are essential if one hopes to employ quantum methods on more than the simplest chemical reactions. Professor Bob Wyatt, University of Texas, has played a primary role in developing new methods and algorithms that allow one to use quantum dynamics efficiently to study chemical reactions and energy flow in molecules and in collsions. Professor Herschel Rabitz of Princeton University has worked closely with experimentalists to show that series of specially tuned light pulses can be used to selectively excite a molecule in a manner that allows one to control which bond is broken. Professor Susan Tucker, University of California, Davis. Professor Ned Sibert, University of Wisconsin, has been studing intramolecular energy flow and the interactions of lasers with small molecules. The goal of much of this work is to determine how one might control where the energy goes in a molecule that is photo-excited. This very exciting branch of photochemistry is eventually aimed at trying to control the energy (i.e., to keep it in certain internal modes) so that one might effect the outcome (i.e., which products are formed) of chemical reactions.  The University of Wisconsin, Madison has one of the longest traditions of excellence in theoretical chemistry, dating back to when the late Prof. Joseph O. Hirschfelder started the theortical chemistry institute (TCI). Their strength in theory continues to this day with Profs. Jim Skinner, Prof. Ned Sibert, Prof. John Harriman, Prof. Frank Weinhold, Prof. Arun Yethiraj, and Prof. Qiang Cui shown below. Qiang Cui (left), whose works focus on macromolecules such as catalysis in enzymes or ATP hydrolysis in motor proteins, and John Harriman (right) whose efforts have emphasized gaining a better understanding of the electronic structure of molecules by use of reduced density matrices. Professor Jim Skinner (left) whose work is discussed elsewhere and who serves as Director of TCI, Prof. Frank Weinhold (center) who invented the Natural Bond Orbital (NBO) analysis methods that offer a mathematically rigorous "Lewis structure" representation of the wavefunction, and Prof. Arun Yethiraj, who has contributed much to the statistical mechanical study of polyme materials. Professor Jeff Krause, University of Florida has been studying the interaction of pulses of coherent electromagnetic radiation with molecules in hopes of designing laser means for controlling which bonds are broken in a photochemical process. Professor John Straub, Boston University is currently the Chair of the Theoretical Chemistry Subdivision of the Physical Chemistry Division of ACS. His work involves using molecular dynamics and Monte-Carlo simulation tools to study biomolecules, liquids, and clusters. Two of John's colleagues at Boston University are Professor David Coker whose group develops new methods for treating condensed-phase chemical dynamics and Professor Tom Keyes whose group specializes in statistical mechanics of supercooled liquids a. 
Large Biomolecules and Polymers Following the motions of large molecules even using classical Newton equations involves special challenges. First, there is the fact that large molecules contain more atoms, so the Newtonian equations that must be solved involve many coordinates (N) and momenta. Because the potential energy functions,and thus the Newtonian forces, depend on the relative positions of the atoms, of which there are N(N-1)/2 , the computer time needed to carry out a classical trajectory varies as (at least) the square of the number of atoms in the molecule. Moreover, to adequately sample an ensemble of initial coordinates and momenta appropriate to a large molecule, one needs to run many trajectories. That is, one needs to choose a range initial values for each of the molecule's internal coordinates and momenta; the number of such choices is proportional to N. The net result is that computer time proportional to N3 (or worse) is needed,and this becomes a major challenge for large biomolecules or polymers. One way this problem is being addressed is to use parallel computers which allow one to run trajectories with different initial conditions on different processors. Another problem that is special to large molecules involves the force field itself (see, for example, the web pages of Professors Andy McCammon, Charles Brooks and the late Peter Kollman, as well as of Biosym and CHARMm). Usually, the potential energy is expressed as a sum of terms involving: 1. bond stretching potentials of either harmonic V(r) = 1/2 k (r-re)2 or Morse form V(r) = De (1-exp(-b(r-re)))2 2. bond angle bending potentials of harmonic form depending on the angles between any three atoms A, B, C that are bonded to one another (e.g., the H-C-H angles in -CH2 - groups) 3. dihedral angles (e.g., the H-C-C-H angle in -H2C-CH2- groups 4. Van der Waals interactions V(r) = Ar-12 - Br-6 among all pairs of atoms (these potentials allow for repulsions among, for example, the H atoms of a methyl group in H3C-CH2-CH2-CH2-CH2-C H3 to repel the other H atoms as this "floppy" molecule moves into geometries that bring such groups into strong steric interaction) 5. electrostatic attraction (for opposite charges) and repulsions (for like charges) among charged or highly polar groups (e.g., between pairs of phosphate groups in a biopolymer, between a Na+ ion and a phosphate group, or between a charged group and a neighboring H2O molecule) 6. polarization of the electronic clouds of the large solute molecule or of surrounding solvent molecules induced by ionic or highly polar groups approaching highly polarizable parts of the molecule. Professor Andy McCammon The Late Professor Peter Kollman Professor Charles Brooks  Professor Joan-Emma Shea, University of California, Santa Barbara, has been studying bio-molecules folding and motions as well. Professor Joan Shea, University of California, Santa Barbara Professor Klaus Schulten (below) University of Illinois Physics, has studied many problems in biophysics. Most recently, he has focued descriptions and efficient computing tools for structural biology. Professor Zan Luthey-Schulten, University of Illinois Chemistry is one of the world's leading figures in using statistical mechancis techniques to predict protein structures and to understand their relations to their functions. One of the earliest pioneers in applying fundamental statistical mechanics, molecular mechanics, and solvation modelling theories to bio-molecules is Professor Arieh Warshel (below). 
His groups more recent interests include the dynamics of photobiological processes, enzyme catalysis and protein action, as well as the study of basic chemical reactions in solutions. Professor Arieh Warshel, University of Southern Califonia Professor Jeffry Madura, Duquesne Universtiy The primary difficulties in using such a multi-parameterized force field are 1. that the values of the parameters characterizing each kind of potential have to be obtained either by requiring the results of a simulation to "fit" some experimental data or by extracting these parameters from ab initio quantum chemical calculations of the interaction potentials on model systems; 2. these developments and calibrations of the parameters in such force fields are still undergoing modifications and improvements, so they are not yet well established for a wide range of functional groups, solvents, ionic strengths, etc. One more difficulty that troubles most large molecule simulations is the wide range of time scales that must be treated when solving the Newton equations. For example, in attempting to monitor the uncoiling of a large biomolecule, which may occur over a time scale of ca. 10-5 to 1 sec, one is usually forced to "freeze" the much faster motions that occur (i.e., to treat the C-H, O-H, and N-H bonds, which oscillate over ca.10-14 sec, as rigid). If one were to attempt to follow the faster motions, one would have to take time steps in solving the Newton equations of ca. 10-14 sec, as a result of which the uncoiling event would require 109 to 1014 time steps. Even with a computer that could perform 109 floating point operations per second (i.e., a one gigaflop computer), and assuming that a single time step would use at least 1000 floating point operations (which is very optimistic), a single such trajectory would use 103 to 108 seconds of computer time. This is simply unrealistic. As a result, one is forced to freeze (i.e., by constraining to fixed geometries) the faster motions so that the Newton equations can be solved only for the slower motions. It is an area of current research development to invent new methods that allow such multiple time scale issues to be handled in a more efficient and accurate manner. b. Mixed Classical and Quantum Dynamics One of the most pressing issues in chemical reaction dynamics involves how to treat a very large molecular system that is undergoing a chemical reaction or a photochemical event (i.e., a change that affects its electronic structure). Because the system's bonding and other attributes of its electronic wavefunction are changing during such a process, one is required to use quantum mechanical methods. However, such methods are simply impractical (because their computer time, memory, and disk space needs scale as the number of electrons in the system to the fourth (or higher) power) to use on very large molecules. If the chemical reaction and/or photon excitation is localized within a small portion of the molecular system (that may involve solvent molecules too), there are so-called quantum mechanics/molecular mechanics (QMMM) methods currently under much development that can be employed. In these methods, one uses an explicit quantum chemical (ab initio or semi-empirical) method to handle that part of the system that is involved in the electronic process. For example, in treating the tautomerization of formamide H2 N-CHO to formamidic acid HN=CHOH in aqueous solution, one must treat the six atoms in the formamide explicitly using quantum chemistry tools. 
The surrounding water molecules (H2O)n could then be handled using a classical force field (i.e., the formamide-water and water-water interaction potentials as well as the internal potential energy function of each H2O molecule could be expressed as a classical potential like those described earlier in this section). If, on the other hand, one wished to treat one or more of the H2O solvent molecules as actively involved in promoting the tautomerization (as shown in the figure below) one would use quantum chemistry tools to handle the formamide plus the actively involved water molecule(s), and use classical potentials to describe the interactions of the remaining solvent molecules with these active species and among one another. At present, much effort is being devoted to developing efficient and accurate ways to combine quantum treatment of part of the system with classical treatment of the remainder of the system. Professor Chris Cramer has been very active developing new ways to model the influence of surrounding solvent molecules on chemical species whose spectroscopy or reactions one wishes to study. Chris Cramer Professors Mark Gordon, Than h Truong, Weito Yang, and Keji Morokuma are especially active in developing this kind of QMMM methods. Professor Truong has made some of his programs that compute rates of chemical reactions available through a web site called TheRate. Professor Mark Gordon Professor Thanh Truong Professor Keiji Morokuma c. The Car-Parrinello Method As discussed earlier, it is often important to treat the electronic degrees of freedom, at least for the electrons involved in a chemical reaction or in a photon absorption or emission event, at a quantum level while still, at least for practical reasons, treating the time evolution of the nuclei classically. A novel approach developed by Car and Parrinello, and reviewed very nicely by Remler and Madden, incorporates the task of optimizing the electronic wavefunction into the Newtonian dynamics equations used to follow the nuclear movements. The LCAO-MO coefficients Ci,k relating molecular orbital fi to atomic orbital ck are assumed to minimize an energy function E[{Ci,k}] which can be a Hartree-Fock or DFT-type function, for example. At each time step during which the Newtonian equations ma d2Xa /dt2 = - E/Xa are propagated, a corresponding propagation equation is used to follow the time evolution of the {Ci,k } coefficients. The latter equation was put forth by Car and Parrinello by defining a (fictitious) kinetic energy T = Si, k 1/2 m |dCi,k/dt|2 in which the time derivatives of the Ci,k coefficients are viewed as analogs to the time rate of changes of the molecule's Cartesian coordinates {Xa}, and m is a (fictitious mass introduced to give this expression the units of energy. Then, using the dependence of the electronic energy E on the {Ci,k } to define the potential energy V = E[{Ci,k}], and then introducing a (fictitious) Lagrangian L = T-V whose classical action S = Ú L dt is made stationary with respect to variations in the {Ci,k } coefficients but subject to the constraint that the orbitals {fi} remain orthonormal: di,j = S k,l C*i,k Sk,l Cj,l the following dynamical equations are obtained for the Ci,k coefficients: m d2Ci,k /dt2 = -E/Ci,k - Sj,l li,j Sk,. Cj,l . The left hand side of this equation gives the mass m multiplied by the "acceleration" of the Ci,k variable. The right side gives the "force" -E/Ci,k acting on this variable, and - Sj,l li,j Sk,. 
Cj,l is the coupling force that connects Ci.k to the other Cj,l variables (this term arises, with its Lagrange multipliers li,j , from the constraints di,j = S k,l C*i,k Sk,l Cj,l relating to the orthonormality of the molecular orbitals. The result is that the time evolution of the Ci,k coefficients can be followed by solving an equation that is exactly the same in form as the Newtonian equations ma d2Xa /dt2 = - E/Xa used to follow the nuclear positions. Of course, one wonders whether the idea used by Car and Parrinello to obtain the classical-like equations for the time development of the Ci,k coefficients is valid. These equations were obtained by making stationary, subject to orthonormality constraints, an action. However, if one were to set the mass m equal to zero, the Lagrangian L would reduce to L = T-V = -V, since the (fictitious) kinetic energy would vanish. Since the Ci,k coefficients, in a conventional quantum chemical study, would be chose to make E[{Ci.k }] stationary, it can be shown that making the action stationary when T vanishes is equivalent to making V stationary, which, in turn, makes E[{Ci,k}] stationary. So, the Car-Parrinello approach is valid, but one has to be aware of the restriction to the ground-state (since E[{Ci,k }] is made stationary with no constraint of orthogonality to lower-energy states) and one must realize that finite-time-step propagations with non-zero m render the {Ci,k} amplitudes obtained via classical propagation not equivalent to those obtained by minimizing E at each geometry. Before closing this section on dynamics, I wish to bring to the readers' attention several other leading researchers in this exciting area of theoretical chemistry by showing photos of them below. Professor Joel Bowman Emory University Profess or Greg Ezra Cornell University Professor Sharon Hammes-Schiffer Penn State University. Sharon's group has made use of basis set expansion approaches to follow the quantum dynamics of light nuclei such as H and D, and has applied this novel technique to a wide variety of biological and condensed-phase systems. Professor Jan Linderberg, Aarhus University Dr. Steven Klippenstein,Sandia National Labs Pro fessor Chi Mak, University of Southern California Professor Shaul Mukamel University of California, Irvine Professor Dan Neuhauser, UCLA C. Statistical mechanics provides the framework for studying large collections of molecules and tells us how to average over positions and velocities to properly simulate the laboratory distribution of molecules When dealing with a sample containing a large number (e.g., billions) of molecules, it is impractical to attempt to monitor the time evolution of the coordinates and momenta (or analogous quantum state populations) of the individual constituent molecules. Especially for systems at or near equilibrium, these properties of individual molecules may vary irregularly on very short time scales (as the molecules undergo frequent collisions with neighbors), but the average behavior (e.g, average translational energy or average population of a particular vibrational energy level) changes much more gently with time. Such average properties can be viewed as averages for any one molecule over a time interval during which many collisions occur or as averages over all the molecules in the sample at any specific time. 
By focusing on the most probable distribution of the total energy available to a sample containing many molecules and by asking questions that relate to the average properties of the molecules, the discipline of statistical mechanics provides a very efficient and powerful set of tools (i.e., equations) for predicting and simulating the properties of such systems. In a sense, the machinery of statistical mechanics allows one to describe the "most frequent" behavior of large molecular systems; that is, how the molecules are moving and interacting most of the time. Fluctuations away from this most probable behavior can also be handled as long as these fluctuations are small.

1. The framework of statistical mechanics provides efficient equations for computing thermodynamic properties from molecular properties

a. The Boltzmann population equation

The primary outcome of asking what is the most probable distribution of energy among a large number N of molecules within a container of volume V that is maintained at a specified temperature T is the most important equation in statistical mechanics, the Boltzmann population formula:

Pj = Wj exp(-Ej/kT)/Q,

where Ej is the energy of the jth quantum state of the system (which is the whole collection of N molecules), T is the temperature in K, Wj is the degeneracy of the jth state, and the denominator Q is the so-called partition function:

Q = Σj Wj exp(-Ej/kT).

The classical mechanical equivalent of the above quantum Boltzmann population formula, for a system with M coordinates (collectively denoted q) and M momenta (denoted p), is

P(q,p) = h^-M exp(-H(q,p)/kT)/Q,

where H is the classical Hamiltonian and the partition function Q is

Q = h^-M ∫ exp(-H(q,p)/kT) dq dp.

b. The limit for systems containing many molecules

Notice that the Boltzmann formula does not say that only those states of a given energy can be populated; it gives non-zero probabilities for populating all states from the lowest to the highest. However, it does say that states of higher energy Ej are disfavored by the exp(-Ej/kT) factor; but if states of higher energy have larger degeneracies Wj (which they usually do), the overall population of such states may not be low. That is, there is a competition between state degeneracy, which tends to grow as the state's energy grows, and exp(-Ej/kT), which decreases with increasing energy. If the number of particles N is huge, the degeneracy W grows as a high power (say M) of E because the degeneracy is related to the number of ways the energy can be distributed among the N molecules; in fact, M grows at least as fast as N. As a result of W growing as E^M, the product function P(E) = E^M exp(-E/kT) becomes a very sharply peaked function of E. By taking the derivative of this function P(E) with respect to E, and finding the energy at which this derivative vanishes, one can show that this probability function has a peak at E* = MkT, and that at this energy value

P(E*) = (MkT)^M exp(-M).

By then asking at what energy E' the function P(E) drops to exp(-1) of this maximum value P(E*),

P(E') = exp(-1) P(E*),

one finds E' ≈ MkT (1 + (2/M)^(1/2)). So the width of the P(E) graph, measured as the change in energy needed to cause P(E) to drop to exp(-1) of its maximum value, divided by the value of the energy at which P(E) assumes this maximum value, is (E'-E*)/E* ≈ (2/M)^(1/2). This width gets smaller and smaller as M increases, as the small numerical check sketched below illustrates.
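The sketch below (kT = 1; the energy grid and the values of M are arbitrary choices) locates the maximum of P(E) = E^M exp(-E/kT) and the energy at which it has fallen by a factor of exp(-1), and compares the relative width with (2/M)^(1/2); the agreement improves as M grows.

```python
import numpy as np

# Check of E* = M*kT and (E'-E*)/E* ~ (2/M)^(1/2) for P(E) = E^M exp(-E/kT), kT = 1.
for M in (10, 100, 1000):
    E = np.linspace(1e-3, 3.0*M, 200000)
    lnP = M*np.log(E) - E                       # work with ln P to avoid overflow
    Estar = E[np.argmax(lnP)]                   # most probable energy, ~ M*kT
    above = E[E > Estar]                        # search above the peak for E'
    Eprime = above[np.argmin(np.abs(M*np.log(above) - above - (lnP.max() - 1.0)))]
    print(M, Estar/M, (Eprime - Estar)/Estar, np.sqrt(2.0/M))
```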
So, as the number N of molecules in the sample grows, which causes M to grow as discussed earlier, the energy probability function becomes more and more sharply peaked about the most probable energy E*. It is for the reasons just shown that, for so-called macroscopic systems, in which N (and hence M) is extremely large (i.e., 10^L with L being ca. 10-24), only the most probable distribution of the total energy among the N molecules need be considered; this is why the equations of statistical mechanics are so useful. Certainly, there are fluctuations (as evidenced by the finite width discussed above) in the energy content of the molecular system about its most probable value. However, these fluctuations become less and less important as the system size (i.e., N) becomes larger and larger.

c. The connection with thermodynamics

What are some of these equations? The first is the fundamental Boltzmann population formula shown earlier: Pj = Wj exp(-Ej/kT)/Q. Using this result, it is possible to compute the average energy <E> of a system, <E> = Σj Pj Ej, and to show that this quantity can be recast (try to do this derivation; it is not that difficult) as

<E> = kT² (∂lnQ/∂T)N,V.

If the average pressure <p> is defined as the pressure of each quantum state, pj = -(∂Ej/∂V)N, multiplied by the probability Pj for accessing that quantum state, summed over all such states, one can show that

<p> = -Σj (∂Ej/∂V)N Wj exp(-Ej/kT)/Q = kT (∂lnQ/∂V)N,T.

Without belaboring the point much further, it is possible to express all of the usual thermodynamic quantities in terms of the partition function Q. The average energy and average pressure are given above; the average entropy is given as

<S> = k lnQ + kT (∂lnQ/∂T)N,V.

So, if one were able to evaluate the partition function Q for N molecules in a volume V at a temperature T, either by summing the quantum-state degeneracy and exp(-Ej/kT) factors, Q = Σj Wj exp(-Ej/kT), or by evaluating the classical phase-space integral (phase space is the collection of coordinates and conjugate momenta), one could then compute all thermodynamic properties of the system. This is the essence of how statistical mechanics provides the tools for connecting the molecule-level properties, which ultimately determine the Ej and the Wj, to macroscopic properties such as <E>, <S>, <p>, etc.

2. Statistical mechanics gives equations for probability densities in coordinate and momentum space

Not only is statistical mechanics useful for relating thermodynamic properties to molecular behavior, but it is also necessary in molecular dynamics simulations. When one attempts, for example, to simulate the reactive collisions of an A atom with a BC molecule to produce AB + C, it is not appropriate to consider a single classical or quantal collision between A and BC. Why? Because in any laboratory setting:

1. The A atoms are moving toward the BC molecules with a distribution of relative speeds. That is, within the sample of molecules (which likely contains 10^10 or more molecules), some A + BC pairs have low relative kinetic energies when they collide, and others have high kinetic energies. There is a probability distribution P(EKE) for this relative kinetic energy.

2. The BC molecules are not all in the same rotational (J) or vibrational (v) state. There is a probability distribution function P(J,v) describing the fraction of BC molecules that are in a particular J state and a particular v state.

3. When the A and BC molecules collide with a relative motion velocity vector v, they do not all hit "head on".
Some collisions have small impact parameter b (the closest distance from A to the center of mass of BC if the collision were to occur with no attractive or repulsive forces), and some have large b-values (see below). The probability function for these impact parameters is P(b) = 2πb db, which is simply a statement of the geometrical fact that larger b-values have more geometrical volume element than smaller b-values. So, to simulate the entire ensemble of collisions that occur between A atoms and BC molecules in various J, v states and having various relative kinetic energies EKE and impact parameters b, one must:
1. run classical trajectories (or quantum propagations) for a large number of J, v, EKE, and b values,
2. with each such trajectory assigned an overall weighting (or importance) of Ptotal = P(EKE) P(J,v) 2πb db.

3. Present Day Challenges in Statistical Mechanics

In addition to the scientists whose work is discussed explicitly below or later in this section, the reader is encouraged to visit the following web sites belonging to other leaders in the area of statistical mechanics. I should stress that the "dividing line" between chemical dynamics and statistical mechanics has become less clear in recent years, especially as dynamics theory has become more often applied to condensed-media systems containing large numbers of molecules. For this reason, one often finds researchers who are active in the chemical dynamics community producing important work in statistical mechanics and vice versa.

Professor James Skinner, University of Wisconsin, has been a leader in many statistical mechanics studies of glassy materials and of relaxation processes that occur in NMR and optical spectroscopy. Professor Peter Rossky, University of Texas, was one of the earliest to use modern statistical mechanics and quantum dynamics to study the structures and optical spectroscopy of solvated electrons. Professor Hans C. Andersen, Stanford University, has pioneered many theories and computational methodologies that describe liquids and glasses. Professor Benjamin Widom, Cornell University, is one of the world's leading figures in statistical mechanics.

One of the most active research areas in statistical mechanics involves the evaluation of so-called time correlation functions. The correlation function C(t) is defined in terms of two physical operators A and B, a time dependence that is carried by a Hamiltonian H via exp(-iHt/ħ), and an equilibrium average over a Boltzmann population exp(-βH)/Q. The quantum mechanical expression for C(t) is C(t) = Σj <ψj| A exp(iHt/ħ) B exp(-iHt/ħ) |ψj> exp(-βEj)/Q, while the classical mechanical expression is C(t) = ∫dq ∫dp A(q(0),p(0)) B(q(t),p(t)) exp(-βH(q(0),p(0)))/Q, where q(0) and p(0) are the values of all the coordinates and momenta of the system at t = 0 and q(t) and p(t) are their values, according to Newtonian mechanics, at time t. An example of a time correlation function that relates to molecular spectroscopy is the dipole-dipole correlation function: C(t) = Σj <ψj| e·μ exp(iHt/ħ) e·μ exp(-iHt/ħ) |ψj> exp(-βEj)/Q, for which A and B are both the electric dipole interaction e·μ between the photon's electric field and the molecule's dipole operator. The Fourier transform of this particular C(t) relates to the absorption intensity for light of frequency ω: I(ω) ∝ ∫dt C(t) exp(iωt). The computation of correlation functions involves propagating either wavefunctions or classical trajectories.
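For the classical branch, the working recipe is exactly what the formula above says: sample initial conditions from the Boltzmann distribution, follow each trajectory, accumulate A(0)B(t), and Fourier transform. The sketch below is only a minimal illustration of that bookkeeping; the decaying-cosine "trajectories" are fabricated stand-ins for real molecular dynamics output, and the frequency, damping time and array names are arbitrary assumptions made for the example.

```python
import numpy as np

# Hypothetical stand-in for a dipole component mu(t) collected from many
# equilibrium trajectories (shape: n_traj x n_steps). In a real calculation
# these numbers would come from the dynamics; Boltzmann-sampled initial
# conditions play the role of the exp(-beta*H)/Q weighting.
rng = np.random.default_rng(0)
n_traj, n_steps, dt = 200, 1024, 1.0e-15          # time step in seconds (assumed)
t = np.arange(n_steps) * dt
omega0, t_damp = 2 * np.pi * 3.0e13, 2.0e-13      # assumed oscillation frequency and dephasing time
phase = rng.uniform(0.0, 2 * np.pi, (n_traj, 1))
mu = np.cos(omega0 * t + phase) * np.exp(-t / t_damp)

# Equilibrium-averaged correlation function C(t) = <mu(0) mu(t)>
C = np.mean(mu[:, :1] * mu, axis=0)

# I(omega) is proportional to the Fourier transform of C(t)
intensity = np.abs(np.fft.rfft(C))
freq = np.fft.rfftfreq(n_steps, dt)               # frequency axis in Hz
print(f"spectrum peaks near {freq[np.argmax(intensity)]:.2e} Hz (expected ~{omega0/(2*np.pi):.2e} Hz)")
```

In a real simulation the only change is that mu would be read from the dynamics output rather than synthesized.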
In the quantum case, one faces many of the same difficulties discussed earlier in Sec. III.B.2. However, one aspect of the task faced in the equilibrium averaging process that is also included in C(t) makes this case somewhat easier than the quantum dynamics case. To illustrate, consider the time propagation issue contained in the quantum definition of C(t). One is faced with
1. propagating |ψj> from t = 0 up to time t, using exp(-iHt/ħ) |ψj>, and then multiplying by B,
2. propagating A†|ψj> from t = 0 up to time t, using exp(-iHt/ħ) A†|ψj>.
The exp(-βH) operator can be combined with the first time propagation step as follows: exp(-iHt/ħ) |ψj> exp(-βEj)/Q = exp(-iHt/ħ) exp(-βH) |ψj>/Q = exp(-i/ħ [t + βħ/i] H) |ψj>/Q. Doing so introduces a propagation in complex time from 0 to t + βħ/i. The Feynman path integral techniques can now be used to carry out this propagation. One begins, as before, by dividing the time interval into N discrete steps, exp[-iHt/ħ] = {exp[-iHt/Nħ]}^N, and then utilizing the same kind of short-time split propagator methodology described earlier in our discussion of quantum dynamics and Feynman path integrals. Unlike the real-time propagation case, which is plagued by having to evaluate sums of oscillatory functions, the complex-time propagations that arise in statistical mechanics (through the introduction of τ = t + βħ/i) are less problematic. That is, the quantity exp[-iEjτ/Nħ] = exp[-iEjt/Nħ] exp[-Ejβ/N] contains an exponential "damping factor" exp[-Ejβ/N] in the complex-time case that causes the evaluation of correlation functions to be less difficult than the evaluation of real-time wavefunction propagation. For a good overview of how time correlation functions relate to various molecular properties, I urge you to look at McQuarrie's text book on Statistical Mechanics.

Other areas of active research in statistical mechanics tend to involve systems for which the deviations from "ideal behavior" (e.g., dilute gases whose constituents have weak intermolecular forces, nearly harmonic highly ordered solids, weakly interacting dilute liquid mixtures, species adsorbed to surfaces but not interacting with other adsorbed species, highly ordered polymeric materials) are large. Such systems include glasses, disordered solids, polymers that can exist in folded or disentangled arrangements, concentrated electrolyte solutions, solvated ions near electrode surfaces, and many more. In addition to those scientists already mentioned above, below I list several of the people who are pursuing research aimed at applying statistical mechanics to such difficult problems.

Professor Branka Ladanyi, Colorado State University, has been studying chemical reactions (including electron transfer reactions) that occur in solutions where the solvent interacts strongly with the reacting species. Professor Peter Wolynes, University of Illinois, has contributed much to developing new theories of how proteins and other biopolymers undergo folding and denaturation. Professor Michael Klein, University of Pennsylvania, has used molecular dynamics and Monte-Carlo computer simulations to study molecular overlayers, membranes, and micelles. Professor Doug Henderson, Brigham Young University, is one of the pioneers in theories of the liquid state. In particular, he and Barker developed and made numerous applications of perturbation theories of liquids.
Professor Hannes Jónsson of the University of Washington has invented numerous numerical tools for more efficiently simulating the dynamics of condensed media, in particular liquids, crystals, surfaces, and interfaces. Professor Joan-Emma Shea, University of California, Santa Barbara, studies the folding of large bio-molecules such as proteins using Monte-Carlo and molecular dynamics methods. Professor Tony Haymet of the University of Houston has contributed much to studies of hydrophobic effects and to the study of so-called antifreeze proteins that some fish use in very cold environments to survive. Professor Bill Gelbart, UCLA, uses statistical mechanics to study the structures of very complex fluids and to follow condensation of DNA and other biological macromolecules.
NMR Overview NMR Spectrometers NMR spectrometers combine three building blocks • A strong magnetic field • A radio frequency generator • A radio frequency detector NMR spectrometers are commerically available (0.1 - 10 Mega $ ) from: Modern NMR spectrometers rely on three revolutionary developments: • Pulse NMR • Fourier Transform data acquisition • 2D NMR techniques The investigation of living tissue (uncluding humans) requires special spectrometers, the technique is called • Magnetic Resonance Imaging (MRI) NMR Spectroscopy on the WWW NMR Spectroscopy is now used in so many disciplines that the true breadth of the method cannot be covered in a introductory lecture. The hyperlinks below lead to important NMR centers and should give you a first impression of NMR and its many applications. Infromation about our own NMR center: NMR Center University of Guelph Other hyperlinks: An Introduction to Magnetic Resonance (Word.doc, Uni-Duisburg) NMR Center Heidelberg (Germany) Bruker WWW Server | Bruker ftp server | Frequently Asked Questions Protein NMR Structures (Oxford) Slovenian NMR Center NMR Information Server Univ. of Florida. NMR Center Madison (Wisconsin, USA) | NMR Pulse sequences | Meetings of Interest to the NMR Community BioMagResBank Database for Protein, Peptide, and Nucleic Acid Structural Data Derived by NMR Spectroscopy NMR Spectroscopy. Principles and Application(ICSTM Chemistry Department, UK) NMR Center University of Florida NMR Manuals for Bruker Instruments (Uni-Duisburg) Documents on NMR Theory (Uni-Duisburg) (MS-Winword files)   Electron Spin: Theory Goudsmit and Uhlenbeck postulated (1925), that the of splitting of certain spectral lines in an external magnetic field is due to an "electron spin" (which causes a constant magnetic moment of the electron). The assumption of an electron spin can readily explain the observed fine structure if one assumes that the spin s (+1/2 or -1/2) combines with the orbital momentum m to give a total momentum j. j = m + s It is this total momentum that must now change by ±1 to absorb the photon's spin, in other words we have a refined selection rule: j = ±1 The electron spin is not delivered by the Schrödinger equation but follows from a more refined version that was developed by Paul Dirac and that includes relativity. Paul Dirac Nobel Prize 1933 Electron Spin: Experiment The first experimental evidence for the existence of an electron spin came from the Stern-Gerlach experiment which involved the the deflection of a beam of silver atoms in an inhomogeneous magnetic field. Silver atoms possess the valence shell of 4d10 5s1. The ten d electrons are all paired, but the spin of the 5s electron is not. If silver atoms are shot at a target, they produce a single broad spot. However, in the presence of an external inhomogeneous magnetic field, two spots are observed (O. Stern, Z. Phys. 1921, 7, 249; O. Stern, W. Gerlach, Ann. Phys. 1924, 74, 63). Otto Stern 1888 - 1969 Nobel Prize 1943 Nuclear Spin: Postulate Nuclear Magnetic resonance has the distinction to be the brainchild of no less than five Nobel prize winners. Wolfgang Pauli postulated the existence of a nuclear spin [1] to explain the so called hyperfine structure of atomic spectra even before he "invented" electron spin (the 4th quantum number). 
"Following Bohr's invitation, I went to Copenhagen in the autumn of 1922, where I made a serious effort to explain the so-called « anomalous Zeeman effect », as the spectroscopists called a type of splitting of the spectral lines in a magnetic field which is different from the normal triplet... W. Pauli, Nobel Lecture, December 13, 1946 1. W. Pauli, Naturwissenschaften, 1924, 12, 741. Nuclear Spin: First Experiments The existence of a nuclear spin was confirmed experimentally by Rabi in 1939. The Rabi experiment showed that a beam of hydrogen molecules passing through a magnetic field absorbs radio frequency of a discrete wavelength. The experiment is very similar to the Stern-Gerlach experiment but uses the absorption of radio waves to demonstrate the formation of a 2-level system.The experiment is thus not only proof for the existence of a nuclear spin but was also the first magnetic resonance experiment. Rabi received the Nobel Prize in Physics, 1944 "for his resonance method for recording the magnetic properties of atomic nuclei" (Original papers by Rabi et al.: 1 | 2| 3) Other Nobelprize winners associated with NMR: Wolfgang Pauli 1900 - 1958 Nobel Prize 1945 Isidor Isaac Rabi 1898 - 1988 Nobel Prize 1944 Felix Bloch 1905 - 1983 Nobel Prize 1955 Edward Purcell 1912 - 1997 Nobel Prize 1955 Richard Ernst Nobel Prize 1991 John B. Fenn1917- Nobel Prize 2002 Koichi Tanaka1959- Nobel Prize 2002 Kurt Wüthrich1938- Nobel Prize 2002 NMR - The First Experiments The first NMR experiment on condensed matter was carried out by Felix Bloch (Zürich & Stanford) and Edward Purcell (Harvard) who were jointly awarded the 1952 Nobel Prize in Physics for their discovery. 1. F. Bloch, W. Hansen, M. E. Packard, Phys. Rev. 1946, 69, 127. 2. E. M. Purcell, H. C. Torrey, R. V. Pound, Phys. Rev. 1946, 69, 37. The interest of chemists in NMR was limited until it was observed (1949-1950) that the chemical environment leads to a specific shift in the resonance frequency ("chemical shift") 1. W. D. Knight, Phys. Rev. 1949, 76, 1259. 2. W. G. Proctor, F. C. Yu, The Dependence of a Nuclear Magnetic Resonance Frequency upon Chemical Compound Phys. Rev. 1950, 77, 717. 3. W. C. Dickinson, "Dependence of the F19 Nuclear Resonance Position on Chemical Compound" Phys. Rev. 1950, 77, 736-737. 4. G. Lindström, Phys. Rev. 1950, 78, 817-818. 5. J. T. Arnold, S. S. Dharmatti, M. E. Packard, J. Chem. Phys. 1951, 19, 507. The first commercial NMR spectrometer (operating in CW mode) was introduced in 1953 by Varian Associates. The development of 2-dimensional and 3-dimensional experiments was initiated by Richard Ernst who was awarded the 1991 Nobel Prize in Chemistry for his contributions. NMR is also the physical basis for Magnetic Resonance Imaging (MRI) one of the most powerful diagnostic tools in modern medicine. Nuclear Spin And Nuclear Magnetic Moment The nuclear spin depends on the number of protons and neutrons. Different isotopes therefore can have different nucelar spin Nuclei with an even number of protons have all proton spins paired and the resulting nuclear spin is zero. in the same way, nuclei with an even number of neutrons have all neutron spins paired and the resulting nuclear spin is again zero. this is the physical basis for a simple rule: • even,even nuclei (sometimes called g,g) are NMR inactive Note that the magnetic moments of protons and neutrons are slightly different and this means that neutron spins and proton spins do not compensate each other. 
The nuclear spin I (a vector) has the magnitude |I| = (h/2π)·[I(I+1)]^1/2 where I (not a vector but a number) is the nuclear spin quantum number, which can assume integral and half-integral values: I = 0, 1/2, 1, 3/2, 2, 5/2, ... The magnetic moment μ (vector) of a nucleus is directly proportional to the nuclear spin (vector). The proportionality constant γ (magnetogyric ratio, a scalar) is characteristic for each individual isotope: μ = γI. It is common to classify isotopes by the nuclear spin quantum number. The most commonly measured NMR active nuclei all have I = 1/2: 1H, 19F, 31P. Nuclei that have spin quantum numbers > 1/2 are said to be "quadrupolar". They share a number of characteristic features, most notably broad absorption bands, that set them apart from the "1/2 nuclei" and make them less useful for structure determination. The distinction between "1/2 nuclei" and "quadrupolar nuclei" is very fundamental to NMR experiments, and the first question that is usually settled before an NMR experiment begins is the multiplicity of all involved nuclei.

Nuclear Magnetic Moments And B0

In classical physics, the energy of a magnetic moment μ in the presence of a magnetic field B0 is: E = -μ·B0 = -|μ||B0|·cosθ. Nuclear magnetic moments show quantized behavior; only certain energies (and hence angles) are allowed: Em = -(mh/2π)·γ|B0|. The constant m is a magnetic quantum number that describes the individual states and can assume the values I, I-1, ... 0 ... -I+1, -I. The total number of allowed states (= angles between μ and B0) depends on the spin I of the particle: there are 2I+1 allowed orientations. Note that it is only the z-component of the spin angular momentum vector that becomes quantized: Iz = mh/2π. The x and y components of I can assume any value. As a result, the vector I is not precisely defined by its angle with the magnetic field's direction but can lie anywhere on a cone: In principle, we would expect one transition for I = 1/2 nuclei, 3 transitions for I = 1 nuclei (+1 -> 0, +1 -> -1 and 0 -> -1) etc., but the selection rules allow transitions only between adjacent levels (Δm = ±1). Allowed transitions require ν = (γ/2π)·|B0|. This resonance frequency of the NMR experiment is also called the Larmor frequency.

NMR Properties of Individual Nuclei

The constant γ (also called the "γ-value") is the so-called magnetogyric ratio and is specific for each nucleus. The constant γ is important for the NMR experiment because its magnitude determines the sensitivity of the respective nucleus.

Receptivity - Sensitivity

An analysis of the Bloch equations shows that the signal intensity we can expect in measuring a specific nucleus is given by Receptivity = γ³ × isotopic abundance. The four most sensitive nuclei have relative receptivities of 1.000, 0.834, 0.140 and 0.067; the least sensitive nuclei have relative receptivities of 7.4 × 10^-7 and 2 × 10^-7. A very insensitive nucleus that is frequently investigated (usually with sensitivity enhancing techniques) is 15N (relative sensitivity 3.85 × 10^-6).
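As a quick numerical check on the two relations just quoted, the resonance condition ν = (γ/2π)·|B0| and the receptivity rule of thumb, here is a minimal sketch. The gyromagnetic ratios and abundances are rounded literature values used purely for illustration, and the 11.7 T field is an assumed example.

```python
import math

# Rounded literature gyromagnetic ratios (rad s^-1 T^-1) and natural
# abundances; treat the exact numbers as assumptions for this estimate.
gamma = {"1H": 26.752e7, "13C": 6.728e7}
abundance = {"1H": 0.99985, "13C": 0.0107}
B0 = 11.7                                   # Tesla, a typical "500 MHz" magnet (assumed example)

for nucleus, g in gamma.items():
    nu = g * B0 / (2 * math.pi)             # resonance condition: nu = (gamma/2pi)|B0|
    print(f"{nucleus}: Larmor frequency ~ {nu/1e6:.0f} MHz")

# Receptivity = gamma^3 x isotopic abundance, quoted relative to 1H
receptivity_H = gamma["1H"] ** 3 * abundance["1H"]
receptivity_C = gamma["13C"] ** 3 * abundance["13C"]
print(f"13C receptivity relative to 1H: {receptivity_C / receptivity_H:.1e}")
```

The resulting receptivity of roughly 2 × 10^-4 for 13C relative to 1H is why carbon spectra need far more scans (or more material) than proton spectra.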
Chemical Shielding

The chemical shift can be treated through the introduction of a shielding parameter σ. This shielding constant is a 3 × 3 tensor but, as a result of rotational averaging, can be reduced to a single constant for solution NMR.

Diamagnetic and Paramagnetic Shielding Contributions

A more refined analysis of the shielding phenomenon distinguishes two contributions that have an opposite effect:
• diamagnetic shielding term
• paramagnetic shielding term
The diamagnetic term σD contains an integral that describes the electron density around the nucleus. The paramagnetic term σP arises from the perturbation of the ground state wave function due to the coupling between electronic orbital momentum and the external magnetic field. The two equations should in principle allow the calculation of chemical shifts through quantum chemical methods, but the quality of the results is at present rather variable.

Quantum Chemical Treatment of NMR Shifts

The precise energies of the magnetic terms (and hence the observed frequencies and observed chemical shifts) depend on many individual factors. The quantum chemical way to obtain energies is through the solution of the Schrödinger equation with the appropriate Hamiltonian, for NMR: H = HZeeman + Hrf + HCS + HDD + HJ + HQ + HSR (rf = radio frequency, CS = chemical shielding, DD = dipole-dipole interaction, J = indirect spin-spin interaction, Q = quadrupolar interaction, SR = spin rotation interaction). The Zeeman interaction leads to 2I+1 spin energy levels and is the principal interaction. The other interactions are weaker and treated as perturbations: HZeeman = -γ·(h/2π)·B0·Iz.

Chemical Shift (I)

The chemical shift of nuclei is rarely given in Hz but as a dimensionless number, δ. In rare cases, nuclei that are in a different chemical environment can accidentally have the same chemical shift. An example is the 1H NMR spectrum of methyl 3-cyanopropionate (NC-CH2-CH2-COOMe): the close similarity in the chemical shifts of the two CH2 groups prevents us from seeing the expected coupling pattern (2 triplets). Instead, we see only a singlet. For details see the AB spin system below.

Chemical Shift: Aromatic Compounds

Aromatic compounds show a characteristic deshielding of ring protons (δ 6 - 9 ppm vs. δ 3 - 4 ppm for alkenes). The explanation of this phenomenon relies on the analogy of aromatic rings with a closed conductor. An external magnetic field induces a proportional current which in turn generates a (smaller) magnetic field whose direction is opposed to that of the external magnetic field on the inside of the conducting ring and parallel on the outside. A particularly interesting case is the aromatic [14]annulene and its antiaromatic dianion. The aromatic annulene shows strongly deshielded ring protons and strongly shielded endocyclic methyl groups. For the antiaromatic dianion, the situation is exactly reversed: Polycyclic aromatic hydrocarbons show superpositions of different ring currents:

NMR Shift Reagents

Certain lanthanide compounds form complexes with molecules featuring -O- or -N< groups. Due to the magnetic moments of the paramagnetic lanthanide ions, this coordination leads to additional chemical shifts that decay with increasing distance to the metal ion. The induced chemical shift depends on the distance between the paramagnetic atom (e.g. Eu) and the observed nucleus, but also on the angle between the two.
The quantitative relationship is the McConnell equation: Lanthanide shift reagents are very useful for disentangling complex multiplet patterns and for distinguishing between protons based on their proximity to the functional group coordinated to the lanthanide (-OH in our example of 2-β-androstanol). The angle dependency can produce shifts in either direction. Of particular interest are chiral NMR shift reagents like Eu(facam)3. Their complexation with a substrate leads to diastereomeric complexes which have different chemical shifts.
Q: Why are the signals of the compound not obscured by those of the shift reagent? A: The shift reagent is used only in small quantities (much less than the sample); the induced shift is not due to the stoichiometric formation of the complex but is the result of a dynamic process involving rapid complexation and decomplexation.
Q: Why do we have to use expensive materials like lanthanides (europium, praseodymium, terbium), why can't we use paramagnetic transition metal ions? A: Transition metal ions produce strong shifts as well but also lead to often unacceptable line broadening.

Chemical Shift And Hydrogen Bonding

Acidic protons (-OH, -NH, -SH) are rapidly exchanged through intermolecular hydrogen bonding. Two different shifts are involved in this exchange process, the shift of the free compound (1) and the shift of the hydrogen bonded species (2). Because the protons exchange rapidly between the two environments, we only observe an averaged chemical shift. For alcohols, the averaging involves the free alcohol ROH and the hydrogen bonded dimer (ROH)2, the relative amount of which depends on the concentration of the alcohol. The chemical shift of the OH proton is therefore concentration dependent. Important exceptions are compounds with intramolecular hydrogen bonding. Intramolecular hydrogen bonding (see salicylaldehyde below) is usually strong enough to suppress the exchange and we observe only one species with a shift that is largely independent of concentration and temperature. The 1H NMR spectrum below shows a mixture of ethanol (intermolecular hydrogen bonding) and salicylaldehyde (intramolecular hydrogen bonding): a) 5 % solution in CCl4, b) neat. Note the appearance of the ethanol OH proton as a triplet in the diluted sample (coupling to CH2) and its absence in the neat sample (rapid exchange).

Spin-Spin Coupling

NMR active nuclei can couple with other NMR active nuclei that are in close proximity. This coupling leads to so-called multiplets (doublet, triplet etc.). In solution, the predominant mechanism for spin-spin coupling is "through-bond". Coupling "through space" is usually suppressed by rapid rotation processes but can become important for rigid systems. Spin-spin coupling leads to the splitting of single lines into multiplets. The number of lines in a multiplet (M) is determined by the number of identical coupling nuclei (n) and their nuclear spin I: M = 2nI + 1. The overwhelming majority of multiplets are caused by the coupling of I = 1/2 nuclei, which conveniently reduces the multiplicity formula to M = n + 1. The intensities of the individual lines are determined by the spin I of the coupling nucleus (not the observed one).
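The multiplicity rule M = 2nI + 1 and the associated intensity patterns are easy to generate by brute force. The sketch below is a minimal illustration (the coupling constants are arbitrary numbers): it enumerates the spin states of n equivalent neighbours and counts how many combinations fall on each line, which for I = 1/2 reproduces the Pascal-triangle intensities discussed next.

```python
from collections import Counter
from itertools import product

def multiplet(n, I, J):
    """Stick pattern from n equivalent neighbours of spin I with coupling J (Hz):
    returns {frequency offset: relative intensity}."""
    m_values = [-I + k for k in range(int(2 * I) + 1)]   # e.g. -1, 0, +1 for I = 1
    pattern = Counter()
    for combo in product(m_values, repeat=n):
        pattern[round(J * sum(combo), 3)] += 1
    return dict(sorted(pattern.items()))

print(multiplet(2, 0.5, 7.0))   # two I = 1/2 neighbours   -> 1:2:1 triplet
print(multiplet(3, 0.5, 7.0))   # three I = 1/2 neighbours -> 1:3:3:1 quartet
print(multiplet(2, 1.0, 1.9))   # two I = 1 neighbours     -> 1:2:3:2:1 quintet
```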
The relative intensities of the lines can be derived from Pascal's triangle and similar arrays:

Spin-Spin Coupling I > 1/2

As an example of a molecule with both I = 1/2 and I > 1/2 nuclei, we will now analyze the spin system of CH2D2. The three types of NMR active nuclei (1H: I = 1/2; 13C: I = 1/2; 2D: I = 1) lead to three different types of coupling constants: 2J(H,D), 1J(C,D) and 1J(C,H). In the proton NMR, we will only observe coupling to deuterium because the 13C atoms are not abundant enough (1.1 % natural isotopic abundance) to be visible under normal recording conditions. In the carbon NMR, we have to consider the two different coupling constants 1J(C,D) and 1J(C,H). We know from the magnetogyric constants of 1H and 2D that J(C,H) = 6.5 · J(C,D); in other words, the C,H coupling is much larger than the C,D coupling. The 13C signal is therefore split into a large triplet by the presence of two protons. Each of the three lines of this multiplet is then further split by the presence of two deuterium atoms. The multiplicity is now (I = 1 for deuterium) (2 × 2 × 1) + 1 = 5. The overall splitting pattern is therefore a triplet of quintets. The intensities of the individual lines of a multiplet depend on the nuclear spin I: for our case CH2D2, the line intensities of the quintet caused by the deuterium coupling are 1 : 2 : 3 : 2 : 1.

Coupling Constants (I): 2J(H,H) and 3J(H,H)

Coupling Constants of Organic Compounds: The following is a list of H,H coupling constants observed for different structural fragments. Many different fragments give coupling constants in the range of 2-10 Hz, so that coupling constants in this range are of limited value as a diagnostic tool for structure elucidation.

Coupling Constants: 1J(C,H)

The 1J(C,H) coupling constant can give valuable information about the structure of the molecule. The magnitude of 1J(C,H) is proportional to the s-character of the CH bond: % s-character = 1J(C,H) / 500. The factor 500 is empirical and characteristic for the C,H pair of nuclei. Similar equations have been derived for many other combinations of elements, e.g.: % s-character = -0.59 · 1J(15N,H) - 17.5. The validity of this relationship is readily verified by analyzing the coupling constants of methane (sp3), ethene (sp2) and ethyne (sp). The coupling constant of CHCl3 reflects the fact that the compound is CH-acidic. Chloroform is therefore often unsuitable as a solvent under conditions where CH2Cl2 is still inert. Ring strain is partially compensated through a high p-character of the involved CC bonds. As a consequence, the exocyclic CH bonds possess a high s-character and show high 1J(C,H) coupling constants.

Coupling Constants (IX): Influence of Ring Strain on 3J(H,H)

Ring strain leads to changes in the hybridization (which ones?) and this affects coupling constants:

Coupling Constants (V): 1J(C,C)

Due to the low natural abundance of 13C (the only NMR active isotope of carbon), it is very difficult to observe 13C-13C coupling. Only one out of 8000 molecules will possess two adjacent 13C isotopes. A method that uses the 1J(C,C) coupling to establish the C-C backbone of organic molecules is the INADEQUATE pulse sequence. INADEQUATE spectra are usually presented as 2D plots.
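Since the next paragraph extends this s-character correlation to C,C couplings, it is worth seeing how little arithmetic the C,H version involves. The short sketch below simply inverts % s-character = 1J(C,H)/500 to predict one-bond couplings for the three textbook hybridizations; the comparison values quoted in the comment are approximate literature numbers.

```python
# Inverting the empirical relation  % s-character = 1J(C,H) / 500
# to predict one-bond C,H couplings from the nominal hybridization.
s_character = {"sp3 (methane)": 0.25, "sp2 (ethene)": 1 / 3, "sp (ethyne)": 0.50}

for label, s in s_character.items():
    J_predicted = 500 * s                     # Hz, empirical factor from the text
    print(f"{label:15s} predicted 1J(C,H) ~ {J_predicted:3.0f} Hz")

# Approximate experimental values are 125, 156 and 249 Hz, so the simple
# correlation reproduces the trend (and the sp3/sp values) quite well.
```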
Like the 1J(C,H) coupling constant, the 1J(C,C) coupling constant depends on the hybridization (s-character) of the two carbon atoms Ca and Cb. A good approximation is: 1J(Ca,Cb) = 550 × %s(a) × %s(b).
1) Ethane, 2 × sp3: 1J(C,C) = 550 × 0.25 × 0.25 = 34 Hz (experimental: 35 Hz)
2) Ethylene, 2 × sp2: 1J(C,C) = 550 × 0.33 × 0.33 = 60 Hz (experimental: 67.6 Hz)

Coupling Constants (VI): Quadrupolar Nuclei

Quadrupolar nuclei can in principle couple to I = 1/2 nuclei like 1H, 13C, or 31P. However, the linewidth of the quadrupolar nuclei is usually so broad that the coupling is hidden. A comparison of the 14N and 15N NMR spectra of the tin complex below illustrates the point:

Coupling Constants (VIII): Cis-Trans Isomers of Olefins

The magnitude of the 3J(H,H) coupling constant in olefins follows the general rule Jtrans > Jcis and this is a convenient way to distinguish between cis and trans isomers. 3J is also influenced by the olefin's substituents; the distinction is therefore meaningful only by direct comparison of the two isomers or for olefins with very similar substitution patterns.

Coupling Constants (VI): Long Range Coupling

While 1J, 2J or 3J coupling is commonly observed for H,H coupling and 1J and 2J for C,H coupling, coupling constants 4J or higher are usually too small to be observed. A notable exception are 4J coupling constants in rigid frameworks, particularly those of "W-geometry".

Coupling Constants (VII): Heteronuclear Coupling

Most organic molecules contain 1H as the only NMR active nucleus and the observed couplings are accordingly those between neighboring protons. The situation is more complex if other high abundance I = 1/2 nuclei are present. The only commonly encountered high abundance I = 1/2 nuclei are 19F and 31P. The following spectrum is a 1H broadband decoupled 13C spectrum (shorthand notation: 13C{1H}) of trifluoroacetic acid that shows "hydrogen-like" coupling (two quartets). Quadrupolar nuclei can in principle couple to I = 1/2 nuclei. The coupling is usually detectable by measuring the quadrupolar nucleus but rarely by measuring the I = 1/2 nucleus (lifetime issue). Coupling between two I = 1/2 nuclei is generally observed unless fast exchange disrupts the bond (loss of through-bond coupling). 14N NMR spectrum of ammonium ions (a) and 199Hg NMR spectrum of di-tert-butylmercury (b); spectrometer frequency 4.33 and 14.3 MHz. b: only 11 of the 19 expected lines are actually observed.

Coupling Constants (X): The Karplus Equation

The 3J(H,H) coupling constants of aliphatic H-C-C-H fragments depend on the dihedral angle. The quantitative relationship is given by the Karplus equation. The Karplus equation was first suggested by Martin Karplus and has become an extremely important tool for the elucidation of 3D structures through NMR methods.

The Applications of the Karplus Equation

Example: Sugars. The Karplus equation is a useful tool to distinguish between the equatorial and axial protons present in many natural products. In the following example, the 3J(H,H) coupling constant is used to distinguish between α-D-glucose and β-D-glucose:
Example: Ethane. The Karplus equation can also be used to establish equilibrium constants in conformer mixtures. Ethane consists of a mixture of rotamers. The Karplus equation allows us to predict the 3J(H,H) coupling constant by assuming that all three staggered rotamers (τ = 60°, 180° and 300°) are equally important:
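A hedged sketch of that averaging follows, using a generic Karplus-type curve 3J(φ) = A·cos²φ + B·cosφ + C. The coefficients used below (9.5, -1.6 and 1.8 Hz) are placeholder values of the typical magnitude found in the literature, not the parameters used in the original lecture, so the numbers are only meant to show the mechanics of the calculation.

```python
import math

def karplus(phi_deg, A=9.5, B=-1.6, C=1.8):
    """Generic Karplus-type curve 3J(phi) = A*cos^2(phi) + B*cos(phi) + C.
    The coefficients are illustrative placeholders, not fitted parameters."""
    phi = math.radians(phi_deg)
    return A * math.cos(phi) ** 2 + B * math.cos(phi) + C

# anti vs. gauche arrangements (the situation behind the axial/equatorial sugar protons)
print(f"phi = 180 deg (anti):   {karplus(180):4.1f} Hz")
print(f"phi =  60 deg (gauche): {karplus(60):4.1f} Hz")

# freely rotating ethane-like fragment: equal weights for the three staggered rotamers
rotamers = [60, 180, 300]
print(f"rotamer average:        {sum(karplus(p) for p in rotamers) / 3:4.1f} Hz")
```

Swapping in properly fitted coefficients turns the same few lines into the practical tool used for assigning axial and equatorial protons in sugars.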
Chemical and Magnetic Equivalence

It is usually fairly obvious which atoms in a compound are chemically equivalent. A precise definition would be:
• chemically equivalent atoms are related by at least one symmetry operation
Chemically equivalent atoms also have identical chemical shifts, and that might lead us to conclude that they are magnetically equivalent as well. However, this is not the case: magnetic equivalence is a more stringent criterion because we also have to take into account the phenomenon of spin-spin coupling. Magnetically equivalent nuclei:
• are chemically equivalent (= possess the same chemical shifts)
• have identical spin-spin interactions (= coupling constants) with other magnetic nuclei in the molecule
This issue of identical vs. non-identical spin-spin interaction is best explained (and memorized) with the following 2 examples: Difluoromethane has chemically equivalent protons and fluorine nuclei (mirror plane, C2 axis) which also have identical H-F coupling constants. The spin system is of the so-called A2X2 type. 1,1-Difluoroethylene has chemically equivalent pairs of hydrogen and fluorine atoms but there are now two different sets of coupling constants. The two hydrogens and fluorines are chemically equivalent (symmetry plane, C2 axis) but magnetically inequivalent; they form a so-called AA'XX' spin system. In view of the small number of nuclei involved, AA'XX' systems give rise to very complex spectra: The magnetic inequivalence leads to the observation of two additional coupling constants: 2J(H,H') and 2J(F,F'). Coupling between magnetically equivalent nuclei cannot be observed in the spectrum, and this is why we do not see 2J(H,H) and 2J(F,F) in the example of difluoromethane.

The AB Spin System and Higher Order Spin Systems

If the two coupling nuclei have a very similar chemical shift, the intensity pattern of the individual lines deviates from that derived by the number pyramids. Such spin systems are labeled AB (AB2, A2B, A2B2 etc.). The intensity of the individual lines, and hence the shape of the multiplets, depends on the ratio of the shift difference (in Hz) νa - νb and the coupling constant J(A,B). The hypothetical spectra of an AB system below illustrate this point: a) J/(νa - νb) = 1:3; b) J/(νa - νb) = 1:1; c) J/(νa - νb) = 5:3; d) J/(νa - νb) = 5:1. Spectra like a) - d) which show deviations from the binomial intensity pattern are called higher order spectra. Case d) can easily be mistaken for a singlet although it still is an AB system, albeit without an experimentally observed coupling constant. An example of this case is the 1H NMR spectrum of NC-CH2CH2-COOMe above. Spectra with J/(νa - νb) smaller than 1:10 (i.e. a shift difference of at least ten times the coupling constant) have undistorted intensities and are called first-order spectra.

AB Spin Systems

AM Spin Systems

AM spin systems show the beginning breakdown of first-order behavior. The signal intensities are slightly distorted; the type of the distortion ("roof effect") can sometimes be used to identify coupling protons in complex NMR spectra.

AA'BB' Spin Systems

AA'BB' spin systems are higher order spin systems with complex but symmetric line patterns. A classical example for the AA'BB' spin system is ortho-dichlorobenzene: High resolution 1H NMR spectrum of o-dichlorobenzene (AA'BB' system) at 90 MHz. AA'BB' spin systems are characterized by 4 independent coupling constants. Their determination is not readily obtained by visual inspection but is best obtained through computer simulation.
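For the simplest case, the two-spin AB system, such a simulation is only a few lines, because the exact line positions and intensities are known in closed form. The sketch below uses the standard two-spin result; the numerical J and Δν values are arbitrary examples chosen to show how the pattern changes from nearly first order to strongly second order as the shift difference shrinks.

```python
import math

def ab_spectrum(delta_nu, J):
    """Exact line positions (Hz, relative to the midpoint of the two shifts)
    and relative intensities for a two-spin AB system."""
    D = math.sqrt(delta_nu ** 2 + J ** 2)
    lines = []
    for side in (+1, -1):            # left / right half of the pattern
        for inner in (+1, -1):       # line closer to / farther from the centre
            pos = side * D / 2 - side * inner * J / 2
            intensity = 1 + inner * J / D        # inner lines gain, outer lines lose ("roof effect")
            lines.append((round(pos, 2), round(intensity, 2)))
    return sorted(lines)

# J = 10 Hz: nearly first order for a large shift difference, strongly AB for a small one
for delta_nu in (100.0, 10.0):
    print(f"delta_nu = {delta_nu:5.1f} Hz ->", ab_spectrum(delta_nu, 10.0))
```

For AA'BB' and larger systems the same idea applies, but the spin Hamiltonian has to be diagonalized numerically, which is exactly what spin-simulation programs do.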
AA'XX' Spin Systems

The spin system of furan is similar to that of ortho-dichlorobenzene, but the chemical shifts of the ring protons are further apart, which makes it an AA'XX' spin system. The AA'XX' spin system has fewer lines but is deceptive in the sense that it resembles the two triplets resulting from an A2X2 spin system (e.g. Cl-CH2-CH2-Br).

Lack of Coupling

The absence of coupling can have a variety of causes:
• magnetic equivalence, e.g. methyl protons H3C-R
• no magnetic nucleus, e.g. 1H-12C
• natural abundance too low, e.g. 1H-15N
• coupling constant too small, e.g. 5J(H,X)
• fast chemical exchange, e.g. H3C-OH
• low lifetime of states (some quadrupolar nuclei, e.g. 35Cl, 79Br, 81Br, 127I)

Coupling to Low Abundance Spin 1/2 Nuclei: Satellite Signals

Only a small number of NMR active nuclei occur with a natural abundance of 100 % (or close to 100 %): 1H, 19F, 31P. Most NMR active nuclei have natural abundances in the range of 1 - 20 %. Spin 1/2 nuclei with intermediate natural abundance give rise to so-called "satellites" with intensities reflecting the natural abundance of the isotope. I = 1/2 elements that are readily detected in this way (natural abundance in %): 107Ag/109Ag 51.8 / 48.2; 111Cd/113Cd 12.7 / 12.3; 117Sn/119Sn 7.6 / 8.6; 203Tl/205Tl 29.5 / 70.5. The presence of satellites with the correct intensity is a valuable diagnostic tool to establish the presence of the respective element. The spectrum above is the proton decoupled 15N NMR spectrum of a spirocyclic tin(IV) amide. All 4 nitrogen atoms are equivalent and couple to the central tin. The coupling to the two NMR active tin isotopes 117Sn and 119Sn is well resolved. The coupling constants for different isotopes A and B are usually different; their ratio depends on the magnetogyric ratios γA and γB: J(A)/J(B) = γA/γB. Using the γ-values for 117Sn and 119Sn: J(117Sn)/J(119Sn) = γ(117Sn)/γ(119Sn) = -9.578/-10.021 = 0.956. The 117Sn and 119Sn satellites are usually well resolved, as in the spectrum above, and the appearance of double satellites is strong NMR evidence for the presence of tin in a given compound.

Selective Decoupling

Individual nuclei can be "decoupled" if an external radio frequency exactly matches the Larmor frequency of the nucleus. Decoupling experiments are usually carried out to determine which nuclei couple and which don't. The example of 3-aminoacrolein illustrates the technique.
• Selective irradiation of M reduces the AMX spin system to AX: two doublets, the A-X coupling constant can be determined
• Selective irradiation of X reduces the AMX spin system to AM: two doublets, the A-M coupling constant can be determined

Broadband Decoupling: 13C{1H}

All modern NMR spectrometers have the capability to decouple 1H. The most common use is to record broadband decoupled 13C NMR spectra. The shorthand notation is 13C{1H}, with the curly brackets denoting the broadband decoupled nucleus. The main application of broadband decoupling is the simplification of spectra. This is particularly valuable for 13C spectra that would otherwise consist of multiplets that tend to obscure each other by superposition. The coupled (above) and 1H broadband decoupled (below) 13C NMR spectra of the hydrocarbon norbornane illustrate this point: While the triplet at ~30 ppm is readily identified with a CH2 group, the remaining multiplets obscure each other and are not readily recognized as what they are, namely a triplet and a doublet.
The decoupled 13C NMR clearly shows the presence of three different carbon atoms.

Coupling Constants of Different Isotopes

Although different isotopes of the same element have only very small differences in chemical shifts, the coupling constants are often quite different. Coupling constants of different isotopes in the same compound are related through the ratio of their magnetogyric ratios. For the specific example H,D this means that deuterium coupling constants are much smaller (~1/6.5) than the respective hydrogen coupling constants. It is therefore possible that X,H coupling produces the expected multiplet structure while the respective deuterated compound does not show the analogous multiplet resulting from X,D coupling because the coupling constant has become too small. An example from our own research (M. K. Denk, J. Rodezno, J. Organometal. Chem. 2000, 608, 122-125) is the ring deuterated stable carbene below. The deuteration was observed when the stability of the carbene towards DMSO was investigated. Dissolution of a small amount of the carbene in DMSO-D6 (D3C-SO-CD3) produced the ring deuterated species. While the ring protons showed a doublet of doublets with two well resolved coupling constants (1J(H,C) > 2J(H,C)), the smaller 2J(H,C) coupling constant remains unresolved in the deuterated compound (below, left). The IR spectrum of the deuterated compound (below, right) clearly shows the presence of a C-D bond, but the expected doublet (symmetric + antisymmetric combination) is not resolved.

Quadrupolar Nuclei

Quadrupolar Nuclei: Linewidths

Quadrupolar nuclei have an uneven charge distribution of the type shown below (after R. K. Harris, "Nuclear Magnetic Resonance Spectroscopy", Longman 1986, p. 132). While nuclei with I > 1/2 generally have shorter relaxation times and hence broader lines (lifetime broadening), the linewidth depends on the quadrupole moment of the nucleus and in particular on the symmetry of the environment.

Coupling With Quadrupolar Nuclei

The NMR properties of quadrupolar nuclei, in particular line width and coupling to other nuclei, crucially depend on the magnitude of their quadrupole moment. Nuclei with small quadrupole moments like 2D (deuterium) and 6Li resemble I = 1/2 nuclei in having small linewidths. They also couple with most I = 1/2 nuclei. Typical quadrupolar nuclei like 59Co show extremely broad lines and do not couple to other nuclei.

Quadrupolar Nuclei And Site Symmetry

The linewidth of quadrupolar nuclei depends not just on the magnitude of their respective quadrupole moment but also on how symmetric their environment is. A highly symmetric environment (tetrahedral, linear) leads to unusually small linewidths that can approach those observed for I = 1/2 nuclei. The following table of 14N NMR data illustrates the point. Note the strong difference between [NMe4]+ (tetrahedral) and :NMe3 (pyramidal).

Symmetry and Coupling in Quadrupolar Nuclei

The following example of the 17O NMR spectrum of trimethyl phosphate illustrates the importance of the relaxation time for the observation of coupling constants. While the P=O oxygen atom has a small line width that allows the easy detection of a 17O-31P coupling, the respective MeO signal is broad and the coupling to 31P is barely visible.

Dynamic NMR (I): Coupling Constants and Chemical Exchange

Coupling may be lost if one of the coupling nuclei undergoes chemical exchange. A classical example is ethanol.
In principle, we would expect that the proton of the OH group couples to the CH2 group and appears as a triplet. This is indeed observed, but only in absolutely pure and dry ethanol. The addition of water or traces of acid catalyzes a rapid proton exchange between different ethanol molecules and the coupling to the OH proton is lost. Note that the CH3 group is unaffected (no coupling to OH). In the presence of water, the OH signal of ethanol is not only "decoupled" but is also shifted towards the resonance of pure water. This is the result of a rapid proton exchange between water and ethanol.

Dynamic NMR (II): Measuring Activation Barriers through NMR

The rate of exchange of any reaction is described through the Eyring equation: For the simplest case of two singlets of equal intensity, the average frequency observed for a rapidly exchanging system will be: In the intermediate temperature range between fast and slow exchange we will observe signals that begin to coalesce: For a specific temperature, the average signal will begin to split. This temperature is the coalescence temperature Tc. The reaction rate at this temperature is: Inserting this value for k into the Eyring equation gives an equation that can be rearranged to yield the free activation energy ΔG‡ at Tc (N = Avogadro number). Or, after numerical solution with all constants, for ν in Hz and T in Kelvin: In other words, if we know the chemical shifts of two exchanging nuclei and the coalescence temperature, we can calculate ΔG‡.

Dynamic NMR (III): Hindered Rotation

A classical example for the unexpected observation of additional lines due to hindered rotation is dimethylformamide (DMF): 1H NMR spectrum of dimethylformamide at room temperature (above 120 °C the two methyl signals coalesce to a single signal).

Dynamic NMR (IV): Loss of Coupling In Floppy Rings

Chemical Shift (III): Chemical Exchange

Acetylacetone is a good example of how a seemingly simple molecule can give rise to a fairly complex spectrum. We first have to note that acetylacetone is in equilibrium with its enol tautomer. This equilibrium is slow on the NMR time scale so that two sets of signals for the two tautomers are observed in the 1H and 1H decoupled 13C NMR: The enol can exist in two prototropic isomers but the energy barrier for their interconversion must be very low (or zero) because the two forms cannot be observed separately. (How would you modify acetylacetone so that the two enol isomers become distinguishable?)

The Nuclear Overhauser Effect

The irradiation of an NMR active nucleus leads to distortions (usually: enhancements) in the intensity of neighboring nuclei. These can be signals that show very similar chemical shifts and would otherwise be difficult to distinguish: A particularly elegant way to extract the few peaks that gain intensity from the selective irradiation of a nucleus is a so-called NOE difference spectrum. The example below establishes the syn-relationship of the methyl group A and the proton H-3: irradiation of A gives a negative peak in the NOE difference spectrum and a sensitivity gain for H-3. Note that solvent peaks and other very strong signals (here: OCH3) are often not completely compensated in NOE difference spectra.

INEPT Measurements

INEPT was one of the first pulse sequences designed to boost the sensitivity of insensitive nuclei through NOE. The name stands for Insensitive Nuclei Enhancement by Polarization Transfer. INEPT effects can also be used to assign signals.
The broadband decoupled 15N NMR spectrum (top) shows two nitrogen signals of roughly the same intensity. It was suspected that the slightly more intense signal at 359.6 ppm belongs to the CH2N nitrogen because this nitrogen has two α-protons exercising their NOE effect. An INEPT experiment with a typical 2J(1H,15N) confirms this assignment.

Signals from Noise: The Power of Fourier Transformation

The classical way of measuring a spectrum is a frequency sweep, that is, we measure the intensity (absorbed, transmitted, reflected) for each frequency and slowly change the frequency. Recording times for individual spectra are then in the order of minutes. A radically different approach is the simultaneous excitation of all frequencies. This leads to emission of a superposition of many individual frequencies corresponding to the individual absorption bands, in other words: noise. However, the emitted noise is not random but contains the frequency information of the individual spectral lines. The characteristic frequencies can be extracted through a mathematical process called Fourier transformation.
"Robespierre fell from power on 27 July 1794, and Jean Baptiste Joseph Fourier, a diplomat and scientist who was to be executed the next day because of his involvement in revolutionary affairs, was spared. In 1807 Fourier was attempting to solve mathematically a problem concerning the conduction of heat by using a series expansion of sine and cosine terms that became familiar as Fourier series. The principles that Fourier established laid the foundation for mathematical methods of extraordinary power and flexibility that have not only stimulated continuing original contributions to pure mathematics, but have found a variety of applications in applied mathematics and science. Fourier transformed NMR and IR spectroscopy are now fully fledged analytical techniques that are used in laboratories throughout the world." R. P. Wayne, "Fourier Transformed", Chem. Br. 1987, 440-445.
The advantage is that the short noise bursts that follow the excitation are much faster to record than a frequency sweep. Moreover, many individual bursts (free induction decays = FIDs in NMR spectroscopy) can be accumulated in a computer and then transformed. It can be shown that the S/N ratio increases with the number of accumulated spectra n as n^1/2. The following example shows the improvement of S/N for a series of accumulated (= superimposed) spectra (after W. W. Paudler, Nuclear Magnetic Resonance, Wiley, 1987).

Pulse NMR

A simple way to explain the inner workings of pulse NMR is the vector formalism. In the absence of an external magnetic field, the magnetic moments of the NMR active nuclei will be oriented at random. If we switch on the field, they will align to assume their allowed energy values. This alignment only implies a fixed z-component, μz; in the x and y directions there is no quantization. The vectors will thus lie on a cone: The two different states m = +1/2 and m = -1/2 have a slightly different population and this leads to a net magnetization M0:
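How slight is that population difference? A quick Boltzmann estimate puts a number on it; the 11.7 T field and 298 K used below are assumed example conditions, and the gyromagnetic ratio is a rounded literature value.

```python
import math

h, k = 6.626e-34, 1.381e-23          # Planck and Boltzmann constants (SI)
gamma_H = 26.752e7                   # 1H gyromagnetic ratio in rad s^-1 T^-1 (rounded)
B0, T = 11.7, 298.0                  # assumed example: an 11.7 T magnet at room temperature

nu = gamma_H * B0 / (2 * math.pi)    # Larmor frequency, ~500 MHz
dE = h * nu                          # energy gap between m = +1/2 and m = -1/2
ratio = math.exp(-dE / (k * T))      # Boltzmann ratio of upper to lower level
excess = (1 - ratio) / (1 + ratio)   # fractional excess in the lower level

print(f"Larmor frequency : {nu/1e6:.0f} MHz")
print(f"population excess: {excess*1e6:.0f} spins per million")
```

The excess comes out at only a few tens of spins per million, which is why NMR is an inherently insensitive technique and why the signal averaging discussed in the previous section matters so much.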
The Time Evolution of M

Relaxation describes how the magnetization M0 of a sample returns to its thermodynamic equilibrium value (-> Boltzmann distribution). In the absence of an external field B0, the spins are oriented at random. If we place the sample into an external field, the spins of the individual molecules will align. This will build up magnetization along the z-axis (Mz) but will at the same time lead to a decay of the magnetization in the x,y plane (Mx,y). The time evolution of these two processes is described by the Bloch equations with their two different time constants T1 and T2: Integration gives the time dependence of Mz: The relationship says that the deviation from equilibrium (the Boltzmann distribution) decays exponentially with time.

Detection of Polarization: The 90° Pulse

The transversal or spin-spin relaxation time T2 describes the decrease of My after a 90 degree pulse. The inhomogeneity of the magnetic field B0 ensures that the nuclear spins of different molecules have a slightly different Larmor frequency. This leads to a spreading of the individual magnetization vectors in the x,y plane and a net decrease of the resulting total magnetization My over time. The determination of T2 uses a pulse sequence that was proposed by Hahn.

Magnetic Resonance Imaging (MRI) and the Value of T2 Relaxation

Magnetic resonance imaging relies on the high abundance of hydrogen (1H) in biological tissues. Most of the hydrogen is present in the form of water and that seriously limits our ability to differentiate between different types of tissues. Fortunately, the relaxation times T2 of hydrogen in biological environments are slightly different and this difference in T2 is the basis for the contrast in MRI images. The sequence of eight MRI images shows 8 subsequent T2 echoes. Note the decreasing luminosity (~ signal intensity) of the pictures. The echoes are generated with the Carr-Purcell-Meiboom-Gill pulse sequence. Time for one complete pulse sequence: 0.3 seconds. Total measurement time: 11 minutes. B0 = 1.5 Tesla. The T2 values depend on the type of tissue and allow the rapid detection of tumors or other degenerative processes.

Relaxation Time T1

The relaxation times of individual nuclei determine the linewidth and (see below) the intensity of NMR signals. Apart from the chemical shift δ and the coupling constant J, the relaxation time T is specific for a nucleus and its surroundings and can provide valuable chemical information. 1H shows only a small variation of T1 and is therefore rarely measured to gain chemical information. 13C shows a broad range of different relaxation times, and 13C T1 measurements are increasingly used for structure elucidation purposes.

Relaxation Time T1: Determination

T1 relaxation times are usually determined with the inversion-recovery method. The individual signals vanish at certain times tnull. T1 is obtained from tnull according to tnull = T1·ln 2. The tnull values of ethylbenzene above gave T1 times of 7.2 s (CH3) and 72 s (quaternary carbon).

T1 Relaxation Times of Carbon: Dipolar Coupling

The magnitude of T1 is determined by the neighborhood of dipolar nuclei (1H). This leads to the generally observed sequence of T1 times CH3 < CH2 < CH < quaternary. The example of isooctane below illustrates this rule: Isotopic substitution can lead to very selective changes of the T1 time. Particularly valuable is the selective deuteration of compounds. Deuterium is a quadrupolar isotope and does not contribute towards the dipole-dipole relaxation. As a result, the relaxation times of carbon atoms increase in the proximity of deuterium atoms:
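As a brief numerical aside on the inversion-recovery determination described above, the sketch below simulates the recovery curve Mz(t) = M0·(1 - 2·exp(-t/T1)), locates the null point, and recovers T1 from tnull = T1·ln 2. The 7.2 s value is taken from the ethylbenzene example quoted above; the sampling grid is an arbitrary choice.

```python
import numpy as np

T1_true = 7.2                          # s, the CH3 carbon of ethylbenzene quoted above
t = np.linspace(0.0, 40.0, 4001)       # recovery delays sampled in the experiment (assumed grid)
Mz = 1 - 2 * np.exp(-t / T1_true)      # inversion-recovery curve with M0 normalised to 1

t_null = t[np.argmin(np.abs(Mz))]      # delay at which the signal passes through zero
print(f"observed null point: {t_null:.2f} s")
print(f"T1 = t_null / ln 2 : {t_null / np.log(2):.2f} s")   # recovers ~7.2 s
```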
T1 Relaxation Times of Carbon: Molecular Motion

For linear molecules, the relaxation times of atoms towards the center of the long axis are significantly longer than those at the periphery of the molecules. The ends of long rigid molecules tend to relax faster than those of short molecules. This is nicely illustrated by the series Ph-Me, Ph-C≡CH, and Ph-Ph. The relaxation time of the para proton decreases with the length of the molecule (15 seconds, 8.2 seconds, 3.2 seconds) and the same trend is observed for the ortho and meta protons. Extending the length of the molecule beyond biphenyl has little effect. T1 relaxation times are uniquely suitable to distinguish individual carbon atoms in carbon chains. The first example shows the individual carbon atoms of decane, C10H22. The left half of the molecule contains the full information due to the symmetry of the molecule.

Connection Between T2* and T2

• T2* is largely determined by field inhomogeneities ΔB0 and does not contain much chemically useful information (see however MRI imaging above).
• T2 is an important parameter for multipulse experiments.
• T2 is measured through Hahn spin-echo experiments.
• T2 describes the time evolution of Mx and My (transversal magnetization) and is therefore called the transversal relaxation time.
• T2 is largely driven by spin-spin interaction and is also called spin-spin relaxation for that reason.
• The general rule is that T1 ≥ T2; in other words, the spins are often completely randomized in the x,y plane before the Mz magnetization has reached thermal equilibrium.
• T2 and T1 are of comparable magnitude for I = 1/2 nuclei in solution and lead to natural linewidths of about 0.1 Hz.
• In solids, T1 is on the order of minutes or even hours (no tumbling!) but T2 becomes very short (10^-5 s) due to strong dipole-dipole interactions.

Historic Development

2D methods have greatly contributed to the explosive growth of NMR in the investigation of chemical and biological problems. The first 2D NMR experiment was reported by the Belgian physicist Jeener. The most active developers of 2D methods were R. R. Ernst (Varian Research Center at Palo Alto and ETH Zürich) and R. Freeman (Oxford).

Types of 2D Experiments

2D methods can be broadly classified into shift-shift correlations (δ,δ methods) and those that display coupling constants and shifts (J,δ methods). A 2D NMR spectrum displays signal intensity as the function of two frequency variables. An obvious advantage of the approach is the removal of signal crowding. As an example, a 1H-13C HetCor experiment improves the resolving power by a factor of 1000! More important is the detection of spatial relationships between individual nuclei.

Measurement Time for Typical 2D NMR Experiments

2D-NMR: The Pulse Sequence

The appearance of 1D spectra is determined by the three blocks of preparation, mixing and detection (= acquisition): Detection (t2). The pulse sequence of a 2D experiment adds a 4th block, the so-called evolution time t1: Evolution (t1), Detection (t2). The detection period corresponds exactly to that of 1D experiments and the time t2 provides, after FT, the ω2 axis of the spectrum. The crucial difference that distinguishes the 2D methods is the introduction of a second, incrementally increased time, the so-called evolution time t1. Separate FIDs are collected for each incremental variation of t1. 2D NMR uses a second Fourier transformation that generates a second frequency axis ω1 from t1. While t2 is a "natural" time determined by the FID of the molecule, t1 can be manipulated to a wide degree.
The total number of increments is varied in exponents of 2 and is usually 128 or 256 although much lower numbers n are quite acceptable for some experiments. 2D-NMR: Description of Pulse Sequences While It is possible to understand many 2D techniques as arrayed 1D pulse sequences 2D techniques involving phase correlation or multi quantum transition cannot be reduced to the 1D formalism. A reasonably simple formalism to explain the results of 2D experiments in a qualitatively and quantitatively correct fashion is the so called product operator formalism. An excellent introduction to 2D-NMR with the product operator formalism is the famous Review by Kessler, Gehrke and Griesinger [H. Kessler, M. Gehrke and C. Griesinger, "Two-Dimensional NMR Spectroscopy: Background and Overview of Experiments" Angew. Chem. Int. Ed. Engl. 1988, 27, 490-536.] 2D-NMR: d,J vs. d,d Experiments The coordinate axis of a 2D NMR spectrum are invariably frequencies, but depending on whether these frequencies represent chemical shifts or coupling constants it is convenient to classify 2D NMR in those that correlate shifts with shifts (d,d- experiments) and those that correlate shifts with coupling (d,J- and d,D-experiments). A third mechanism of information exchange is the chemical exchange of atoms. Exchange processes can be intermolecular or intramolecular. 2D-NMR: d,J vs. d,D Experiments Both types of coupling, the scalar coupling (through bonds) and the dipolar coupling (through space) can be used for 2D methods. They are complementary to each other: • 1J-coupling is used to establish the connectivity of covalent frameworks. • 3J-coupling is sensitive to the dihedral angle and is used to establish conformations. • D-coupling establishes the proximity of non-connected atoms through NOE transfer experiments 2D-NMR: Common Pulse Sequences To make a selection which 2D pulse sequences are the most important to an inorganic chemist is a difficult task. Kessler's review states that in 1988 (!) more than 500 pulse sequences were known. Most pulse sequences that were developed for organic molecules apply to organometallic molecules but many more are specific to fluxional problems not often encountered in organic chemistry and of course to the correlation of hetero-atoms like 1H-11B. The following sections will discuss the most important 2D pulses: • COSY • EXSY • HMQC • Inverse Detection Techniques A particularly popular 2D NMR experiment is the COSY experiment. COSY experiments are often 1H-1H experiments but there are many other correlations in the literature. The COSY transfer , which proceeds through J-coupling relies on quantum mechanical effects and cannot be explained with classical models. By applying a single (p/2)x rf- mixing pulse, it is possible to transfer coherence of spin a into coherence of spin b where these spins couple. Only with coupling do off-diagonal peaks occur. Consequently, COSY spectra are a terrific way to establish connectivity of networks. The (H,H) COSY experiment establishes the connectivity of a molecule by giving cross peaks (these are the off diagnonal peaks) for pairs of protons that are in close proximity. For the example of Glutamic acid below, we obtain cross peaks for the proton pairs (2,3) and (3,4). We do not observe a crosspeak for the pair (2,4), because these protons are not directly adjacent. 500 MHz (H,H) COSY Spectrum of Glutamic acid. 1-D spectra left and top. 10 mg of compound in 0.5 mL of D2O, 5 mm sample tube, 256 spectra, digital resolution of 2.639 Hz/data point. 
Total measurement time ca. 3 h.

A relayed COSY experiment goes one step beyond a COSY experiment by showing cross peaks not just for pairs of adjacent protons, but for triples as well. As a result, we observe additional cross peaks like the one for the pair 2,4 in glutamic acid below. Relayed COSY experiments can give cross peaks for protons that are too distant to show coupling in the 1D NMR spectrum. With present hardware and pulse sequences, it is possible to repeat the relay step up to three times. This allows the correlation of protons that are separated by up to six bonds (δ-protons). The relaying nucleus is typically 1H, but high-abundance I = 1/2 hetero-elements like 31P or 19F can be used as well.

2D-NOE Techniques
NOE techniques complement techniques that involve scalar coupling because they establish the proximity of non-bonded atoms. Most published data are on 1H,1H experiments. Common pulse sequences:
• NOESY (3-dimensional structure of molecules, exchange processes)
• zz-NOESY (solvent peak suppression, differentiation between chemical exchange and NOE).
The cross peaks in NOESY are due to pairs of nuclei that are close enough (<2.5 Å) to have strong dipolar interactions. Moreover, the size of 1H-1H NOESY cross peaks depends strongly on the internuclear distance (the NOE falls off roughly as r⁻⁶). This is particularly valuable for the structure elucidation of proteins. Heteronuclear NOESY experiments are sometimes called HOESY experiments.

Cross peaks in EXSY spectra indicate pairs of nuclei that undergo rapid chemical exchange. EXSY and NOESY have essentially the same pulse sequence. Both require a mixing time to allow the random process of cross relaxation or exchange to occur. The following 2D spectrum combines an EXSY spectrum (top-left) and a COSY spectrum (bottom-right) in one plot. The data show that only the carbonyl groups 6 and 7 undergo chemical exchange. Note that 6 and 7 give two different peaks and are hence magnetically inequivalent.

Kinetic Data From EXSY
EXSY spectra can be used not only in a qualitative way to establish the absence or presence of chemical exchange; it is often possible to investigate the time constants of the exchange process by incremental variation of the acquisition parameters. In the following example, the exchange of complexed and uncomplexed 7Li was studied by varying the mixing time of the 2D experiment. With 7 ms mixing time, the exchange process is barely visible, while strong cross peaks are observed for a mixing time of 50 ms.

Limitations of COSY: Low-Abundance I = 1/2 Nuclei
In principle, the 13C-13C connectivity could be established through a C,C-COSY experiment, but the sensitivity of such a COSY experiment would reflect the natural abundance of 13C-13C pairs (only about one in 8000 molecules contributes to the experiment: 0.01 × 0.01 = 10⁻⁴). If an HH-COSY experiment takes 1 h, the respective CC-COSY would take 10,000 hours (more than a year) to achieve the same signal/noise ratio. Conclusion: COSY is suitable only for abundant (ideally 100%) nuclei like 1H, 31P or 19F. It must be noted that the COSY off-diagonal peaks can also become very weak for dilute spin nuclei and that 13C, with only 1% natural abundance, is an extreme case. Most other dilute I = 1/2 nuclei like 29Si, 119Sn or 183W are more abundant. The 13C-INADEQUATE experiment is the preferred substitute for a C,C-COSY and is one of the most attractive pulse sequences for the structure determination of complex organic and organometallic structures.
The experiment can be tuned in such a way that only 1J(13C,13C) coupling leads to cross peaks. The INADEQUATE pulse sequence is still plagued by the very low abundance of 13C-13C pairs, but it eliminates the very intense diagonal peaks which can otherwise obscure cross peaks. Measurement times are ca. 6-12 hours for concentrated samples (500 mg) on high-field instruments. The elimination of the diagonal peaks is due to the fact that the INADEQUATE pulse sequence is a so-called double-quantum coherence experiment. Because only those atoms that are directly connected give cross peaks, it is usually very easy to map the carbon skeleton of a molecule. (See also N. Chr. Nielsen, H. Thøgersen, O. W. Sørensen, "Doubling the sensitivity of INADEQUATE for Tracing out the Carbon Skeleton of Molecules by NMR", J. Am. Chem. Soc. 1995, 117, 11365-11366.)
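To put the dilute-spin statistics used above into numbers, a back-of-the-envelope sketch (not part of the original notes; the only input is the ~1.1% natural abundance of 13C):

```python
# Fraction of molecules in which a given pair of adjacent carbons are both 13C,
# i.e. the fraction that contributes to a C,C-COSY or INADEQUATE experiment.
c13 = 0.011                       # natural abundance of 13C (~1.1 %)
pair = c13 * c13
print(f"13C-13C pair probability: {pair:.1e}")   # ~1.2e-04
print(f"about 1 in {1/pair:.0f} molecules")      # ~1 in 8300
# For protons at essentially 100 % abundance the corresponding factor is ~1,
# which is why H,H correlation experiments are so much more sensitive.
```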
Dr Román Orús – Tensor Networks: Untangling the Mysteries of Quantum Systems
Apr 3, 2018 | Physical Science

For decades, physicists have struggled endlessly with the problem of quantum many-body systems – systems containing multiple quantum particles. Because of quantum properties, the ways in which these systems behave are unpredictable when using conventional mathematics, making theoretical simulations virtually impossible. Now, Dr Román Orús at the Johannes Gutenberg Universität in Germany (soon moving to the Donostia International Physics Centre in Spain) believes that tensor networks will become a vital tool when exploring these unconventional properties. He hopes that the cutting-edge mathematical technique will have implications in fields from artificial intelligence to quantum gravity.

The Many-body Problem
When quantum particles are on their own, theoretical physicists have little trouble in predicting how they will behave. Using the Schrödinger equation, they can predict how the characteristics of the particle will change over time, accounting for quantum properties such as the particle's spin and momentum. However, for systems where many of these particles are in play, calculations are notoriously difficult. In these so-called 'quantum many-body systems', the fates of two or more particles can become entwined through the little-understood property of entanglement. Due to this phenomenon, the locations, spins, and momenta of two particular particles can be directly connected with each other, regardless of the distance between them. Meanwhile, observers ultimately have no easy way to determine how entangled two particles are – a nightmare situation when many particles are involved.

To begin to understand the behaviour of quantum many-body systems, theoretical physicists have long realised that they need to construct simplified models in order to study the complex interactions that occur. To do this, they require reliable numerical simulations – the construction of which has created the need for an entirely new branch of mathematics. Since the 1990s, the idea of constructing so-called 'tensor networks' has propelled the mission to construct simulation methods, but the complex mathematics behind them has required years of research to perfect. Dr Román Orús and his colleagues are at the forefront of this work, and believe that tensor networks are vital in explaining a wide range of quantum phenomena. 'The topic of tensor networks has exploded a lot in recent years, and is now finding applications in many fields, including condensed matter physics, quantum gravity, lattice gauge theories, open systems, machine learning, and many other places.'

The Mathematical Power of Tensors
Ask just about any physicist, and they will tell you that among their earliest memories as students of the subject was the study of vectors. In physics, we can attach numbers to many quantities: speed, distance and mass, to name a few. These 'scalar' quantities are useful in simple problems, but most often in physics, we need information about the direction in which these scalars act. By assigning direction coordinates to scalars, they can be turned into more useful vectors: velocity, displacement, acceleration. However, as is so often the case in physics, problems rapidly become far more complex than before, as even Einstein found out the hard way. When constructing his theory of general relativity, Einstein struggled to make his calculations using vectors alone.
He famously enlisted the help of his mathematician friend Marcel Grossmann, who would enlighten him on a mathematical construct a degree of complexity higher than vectors, known as tensors. Now part of the toolbox of skills for many physicists, tensors contain information about the relationships between scalars, vectors and even other tensors. The concept was a breakthrough for Einstein, who used tensors as a fundamental building block in his explanation of the union of space and time. However, in more recent decades, problems have inevitably been thrown up that require yet another layer of complexity – perhaps the most notorious of these is the quantum many-body problem. This is the stage of complexity where Dr Orús joined, and at the heart of his solution for creating simulations of complex quantum phenomena, is the concept of tensor networks. ‘The topic of tensor networks has exploded a lot in recent years,’ he says. ‘It is now finding applications in many fields, including condensed matter physics, quantum gravity, lattice gauge theories, open systems, machine learning, and many other places.’ A New Layer of Complexity To visualise the structure of tensor networks, Dr Orús and his colleagues use the analogy of DNA, and how its structure determines the overall makeup of the human body. Molecules of DNA act as the fundamental building blocks of our biological characteristics – aspects ranging from our height to our personality traits can ultimately be explained by how these blocks are positioned and interact within a highly complex network. In the same way, individual tensors act as the DNA of an overall complex system, which can be described by its ‘wave function’. The characteristics and interactions of each individual tensor in the network play a part in determining what the overall wave function will look like, therefore providing insight into how the quantum system will behave. But to get to this stage, some highly sophisticated mathematics – the product of decades of research – is required. Essentially, constructing the simplest tensor network first involves arranging tensors in an orderly line. Known as ‘Matrix Product States’ (MPS), these one-dimensional arrays allow tensors to interact with their neighbours, already allowing some advanced predictions to take place. To improve the construct further, a two-dimensional lattice can be created by replicating the one-dimensional structure repeatedly, using a technique named ‘Projected Entangled Pair States’ (PEPS). With this infrastructure in place, reliable numerical simulations of quantum many-body systems can now take place with some further mathematical modifications. ‘My main focus now is the development of simulation techniques for quantum lattice systems using tensor networks, and the mathematical investigation of quantum many-body entanglement,’ Dr Orús explains. ‘My plan is to apply these methods to study a number of important phenomena.’ Long-awaited Numerical Simulations By exploiting the mathematics behind MPS and PEPS, Dr Orús and his colleagues have worked towards fine-tuning tensor networks to display quantum properties reliably. Using the models, they have analysed how tensor networks can be used to simulate fermions – a family of quantum particles that includes electrons and quarks, whose dynamics are incredibly hard to predict using conventional mathematics. 
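As a purely illustrative aside (this is not code from Dr Orús's group; the chain length, bond dimension and random tensors below are arbitrary), the Matrix Product State idea can be sketched in a few lines of Python: an open chain of rank-3 tensors whose contraction reproduces the amplitudes of a many-body wave function.

```python
# A random Matrix Product State for N spin-1/2 sites with bond dimension D,
# and the left-to-right contraction that recovers one basis-state amplitude.
import numpy as np

N, d, D = 8, 2, 4                       # sites, physical dimension, bond dimension
rng = np.random.default_rng(0)

# One rank-3 tensor per site: (left bond, physical index, right bond);
# the ends of the chain carry trivial bonds of dimension 1.
mps = [rng.normal(size=(1 if i == 0 else D, d, 1 if i == N - 1 else D))
       for i in range(N)]

def amplitude(mps, basis_state):
    """Contract the chain for one spin configuration, e.g. [0, 1, 0, ...]."""
    vec = np.ones((1,))
    for tensor, s in zip(mps, basis_state):
        vec = vec @ tensor[:, s, :]     # absorb one site's matrix
    return vec.item()

print(amplitude(mps, [0, 1, 0, 0, 1, 1, 0, 1]))
```

The point of the construction is the storage cost: the full wave function has 2^N amplitudes, while the MPS stores only of order N·d·D² numbers, with the bond dimension D controlling how much entanglement the ansatz can represent.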
The team has also studied symmetries in tensor network states – a special case in which the quantum state of the system was not changed during an interaction with other systems. In addition, they have explored how tensor network simulations can be used to account for complex patterns of entanglement – perhaps the most important requirement for replicating truly realistic properties of many-body systems. Using these ingredients, Dr Orús and his team have recently investigated a wide range of special cases of the behaviour of quantum many-body systems. In 2016, they studied how a theoretical system of magnetic particles in two dimensions would dissipate over time when immersed in an environment. Previous difficulties had arisen from the unpredictable behaviour of entangled particles, yet the algorithms used by Dr Orús and his colleagues managed to reliably predict how the system would evolve. One year later, the team analysed how infinite tensor networks could be constructed to analyse Quantum Electrodynamics (i.e., the theory of electromagnetic interactions), allowing them to be used to simulate a system large enough to display macroscopic properties. This was particularly useful, as it allowed a first theoretical glimpse into how the so-called ‘standard model’, which is our deepest understanding of how particles interact in nature, could be eventually studied thanks to the new simulation methods based on entanglement – an unthinkable feat just a few decades ago. Promising Potential for Tensor Networks Dr Orús is confident that tensor networks could be put to an almost bewildering array of uses in the near future. ‘Examples of problems are topological quantum order, frustrated quantum antiferromagnets, quantum dissipation, quantum transport, many-body localisation, lattice gauge theories, holographic entanglement, new numerical simulation methods, the connection to artificial intelligence, and possible connections of all these topics to experiments,’ he lists. Each of these uses is an in-depth area of physics in itself, but they are all unified in their need for the accurate simulations of the properties of quantum many-body systems. The study of antiferromagnets, for example, involves simulating the behaviour of many electrons at low temperatures. Artificial intelligence and machine learning solve problems using algorithms that act out scenarios repeatedly – gradually improving their methods for completing tasks, until becoming expertly skilled. Just as in our brains, computers will need to make complex connections between ideas and concepts in their own hardware, making simulations with tensor networks particularly useful. Perhaps the most intriguing consequence of Dr Orús’s research will be an insight into the most infamous problem in physics to date: quantum gravity. In their own right, the theories of quantum mechanics and Einstein’s theory of general relativity seem to work remarkably well, and yet a single theory that marries them both together has eluded physicists for the best part of a century. In a macroscopic system of quantum particles, however, quantum effects play a significant role in determining the behaviour of the system. Dr Orús believes that by studying such a system using tensor networks, the path towards a unified theory would be clear: the equations of gravity would be nothing but those governing the quantum many-body entanglement. This deep connection between quantum physics and gravity has become increasingly evident in recent years.   
It is fitting that the very mathematical concept that allowed Einstein to make his most important discovery, in a more advanced form, could soon provide us with a more advanced understanding of the consequences of his work.

Meet the researcher
Dr Román Orús
Johannes Gutenberg Universität

Dr Román Orús is a Junior Professor of Condensed Matter Theory at the Johannes Gutenberg Universität in Mainz, Germany. After obtaining his degree and PhD in Physics at the University of Barcelona in 2006, he has worked as a Research Fellow at the University of Queensland, Australia, and the Max Planck Institute, Germany, as well as visiting Professor at the Université Paul Sabatier – CNRS, France, and the Donostia International Physics Center – DIPC, Spain. In September 2018 he will become a tenured Ikerbasque Research Professor at DIPC and a Visiting Professor at the Barcelona Supercomputing Center. With research interests in the diverse applications of quantum many-body systems and quantum technologies, Dr Orús has received several awards for his work, including a Marie Curie Incoming International Fellowship and the Early Career Prize (2014) of the European Physical Society. He has written around 60 scientific articles about quantum research (cited around 3000 times), and is Founding Editor of the journal Quantum, member of the 'Quantum for Quants' commission of the Quantum World Association, and partner at Entanglement Partners SL.

E: roman.orus@gmail.com
W: http://www.romanorus.com

Deutsche Forschungsgemeinschaft

K. Zapp, R. Orus, Tensor network simulation of QED on infinite lattices: learning from (1 + 1)d, and prospects for (2 + 1)d, Physical Review D, 2017, 95, 114508.
A. Kshetrimayum, H. Weimer, R. Orus, A simple tensor network algorithm for two-dimensional steady states, Nature Communications, 2017, 8, 1291.
R. Orus, Advances on tensor network theory: Symmetries, entanglement, and holography, The European Physical Journal B, 2014, 87, 280.
R. Orus, A practical introduction to tensor networks: Matrix Product States and Projected Entangled Pair States, Annals of Physics, 2014, 349, 117–158.
Energy (World Heritage Encyclopedia)

Energy transformation: in a typical lightning strike, 500 megajoules of electric potential energy is converted into the same amount of energy in other forms, most notably light energy, sound energy and thermal energy.

Work and heat are two categories of processes or mechanisms that can transfer a given amount of energy. The second law of thermodynamics limits the amount of work that can be performed by energy that is obtained via a heating process; some energy is always lost as waste heat. The maximum amount that can go into work is called the available energy. Systems such as machines and living things often require available energy, not just any energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations.

There are many forms of energy, but all these types must meet certain conditions such as being convertible to other kinds of energy, obeying conservation of energy, and causing a proportional change in mass in objects that possess it. Common energy forms include the kinetic energy of a moving object, the radiant energy carried by light and other electromagnetic radiation, the potential energy stored by virtue of the position of an object in a force field such as a gravitational, electric or magnetic field, and the thermal energy comprising the microscopic kinetic and potential energies of the disordered motions of the particles making up matter. Some specific forms of potential energy include elastic energy due to the stretching or deformation of solid objects and chemical energy such as is released when a fuel burns.

Any object that has mass when stationary, such as a piece of ordinary matter, is said to have rest mass, or an equivalent amount of energy whose form is called rest energy, though this isn't immediately apparent in everyday phenomena described by classical physics. According to mass–energy equivalence, all forms of energy (not just rest energy) exhibit mass. For example, adding 25 kilowatt-hours (90 megajoules) of energy to an object in the form of heat (or any other form) increases its mass by 1 microgram; if you had a sensitive enough mass balance or scale, this mass increase could be measured. Our Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that in itself (since it still contains the same total energy even if in different forms), but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy.

Although any energy in any single form can be transformed into another form, the law of conservation of energy states that the total energy of a system can only change if energy is transferred into or out of the system. This means that it is impossible to create or destroy energy. The total energy of a system can be calculated by adding up all forms of energy in the system. Examples of energy transfer and transformation include generating or making use of electric energy, performing chemical reactions, or lifting an object.
Lifting against gravity performs work on the object and stores gravitational potential energy; if it falls, gravity does work on the object which transforms the potential energy to the kinetic energy associated with its speed. More broadly, living organisms require available energy to stay alive; humans get such energy from food along with the oxygen needed to metabolize it. Civilisation requires a supply of energy to function; energy resources such as fossil fuels are a vital topic in economics and politics. Earth's climate and ecosystem are driven by the radiant energy Earth receives from the sun (as well as the geothermal energy contained within the earth), and are sensitive to changes in the amount received.

The word "energy" is also used outside of physics in many ways, which can lead to ambiguity and inconsistency. The vernacular terminology is not consistent with technical terminology. For example, while energy is always conserved (in the sense that the total energy does not change despite energy transformations), energy can be converted into a form, e.g., thermal energy, that cannot be utilized to perform work. When one talks about "conserving energy by driving less", one talks about conserving fossil fuels and preventing useful energy from being lost as heat. This usage of "conserve" differs from that of the law of conservation of energy.[2]

The total energy of a system can be subdivided and classified in various ways. For example, classical mechanics distinguishes between kinetic energy, which is determined by an object's movement through space, and potential energy, which is a function of the position of an object within a field. It may also be convenient to distinguish gravitational energy, thermal energy, several types of nuclear energy (which utilize potentials from the nuclear force and the weak force), electric energy (from the electric field), and magnetic energy (from the magnetic field), among others. Many of these classifications overlap; for instance, thermal energy usually consists partly of kinetic and partly of potential energy.

Some types of energy are a varying mix of both potential and kinetic energy. An example is mechanical energy, which is the sum of (usually macroscopic) kinetic and potential energy in a system. Elastic energy in materials is also dependent upon electrical potential energy (among atoms and molecules), as is chemical energy, which is stored and released from a reservoir of electrical potential energy between electrons, and the molecules or atomic nuclei that attract them. The list is also not necessarily complete. Whenever physical scientists discover that a certain phenomenon appears to violate the law of energy conservation, new forms are typically added that account for the discrepancy. Heat and work are special cases in that they are not properties of systems, but are instead properties of processes that transfer energy.
In general we cannot measure how much heat or work are present in an object, but rather only how much energy is transferred among objects in certain ways during the occurrence of a given process. Heat and work are measured as positive or negative depending on which side of the transfer we view them from. Potential energies are often measured as positive or negative depending on whether they are greater or less than the energy of a specified base state or configuration such as two interacting bodies being infinitely far apart. Wave energies (such as radiant or sound energy), kinetic energy, and rest energy are each greater than or equal to zero because they are measured in comparison to a base state of zero energy: "no wave", "no motion", and "no inertia", respectively. The distinction between different kinds of energy is not always clear-cut, as Richard Feynman points out.

Some examples of different kinds of energy (form: description):
• Potential: a category comprising many forms in this list
• Mechanical: the sum of (usually macroscopic) kinetic and potential energies
• Mechanical wave: (≥ 0), a form of mechanical energy propagated by a material's oscillations
• Chemical: that contained in molecules
• Electric: that from electric fields
• Magnetic: that from magnetic fields
• Radiant: (≥ 0), that of electromagnetic radiation including light
• Nuclear: that of binding nucleons to form the atomic nucleus
• Ionization: that of binding an electron to its atom or molecule
• Gravitational: that from gravitational fields
• Intrinsic: the rest energy (≥ 0) equivalent to an object's rest mass
• Thermal: a microscopic, disordered equivalent of mechanical energy

The word energy derives from the Ancient Greek ἐνέργεια (energeia, "activity, operation"),[3] which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure. In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense.[4] Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat.

Measurement and units
A schematic diagram of a calorimeter, an instrument used by physicists to measure energy; in this example it is X-rays.
Energy, like mass, is a scalar physical quantity. The joule is the International System of Units (SI) unit of measurement for energy. It is a derived unit of energy, work, or amount of heat. It is equal to the energy expended (or work done) in applying a force of one newton through a distance of one metre. However, energy is also expressed in many other units such as ergs, calories, British Thermal Units, kilowatt-hours and kilocalories; there is always a conversion factor for these to the SI unit; for instance, one kWh is equivalent to 3.6 million joules.[6] Because energy is defined as the ability to do work on objects, there is no absolute measure of energy.
Only the transition of a system from one state into another can be defined and thus energy is measured in relative terms. The choice of a baseline or zero point is often arbitrary and can be made in whatever way is most convenient for a problem. For example, in the case of measuring the energy deposited by X-rays as shown in the accompanying diagram, the technique most often employed is calorimetry. This is a thermodynamic technique that relies on the measurement of temperature using a thermometer or of intensity of radiation using a bolometer. Energy density is a term used for the amount of useful energy stored in a given system or region of space per unit volume. For fuels, the energy per unit volume is sometimes a useful parameter. Comparing, for example, the effectiveness of hydrogen fuel to gasoline, it turns out that hydrogen has a higher specific energy than gasoline but, even in liquid form, a much lower energy density.

Scientific use

Classical mechanics
Work, a form of energy, is force times distance: W = ∫_C F · ds, that is, the work is equal to the line integral of the force F along a path C. Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This is even more fundamental than the Hamiltonian, and can be used to derive the equations of motion. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction).

Basic overview of energy and human life.
C6H12O6 + 6 O2 → 6 CO2 + 6 H2O
C57H110O6 + 81.5 O2 → 57 CO2 + 55 H2O
and some of the energy is used to convert ADP into ATP:
ADP + HPO4²⁻ → ATP + H2O
Daily food intake of a normal adult: 6–8 MJ.
It would appear that living organisms are remarkably ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology: to take just the first step in the food chain, of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants,[12] i.e. reconverted into carbon dioxide and heat.

Earth sciences
In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior,[13] while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations brought about by solar energy on the atmosphere of the planet Earth.

Quantum mechanics
In quantum mechanics, energy is defined in terms of the energy operator as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of the measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of the slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level), which results in the concept of quanta.
In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation E = hν (where h is Planck's constant and ν the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.

When calculating kinetic energy (work to accelerate a mass from zero speed to some finite speed) relativistically, using Lorentz transformations instead of Newtonian mechanics, Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest mass energy: energy which every mass must possess even when being at rest. The amount of energy is directly proportional to the mass of the body: E = mc², where m is the mass, c is the speed of light in vacuum, and E is the rest mass energy. For example, consider electron-positron annihilation, in which the rest mass of individual particles is destroyed, but the inertia equivalent of the system of the two particles (its invariant mass) remains (since all energy is associated with mass), and this inertia and invariant mass is carried off by photons, which individually are massless but as a system retain their mass. This is a reversible process - the inverse process is called pair creation - in which the rest mass of particles is created from the energy of two (or more) annihilating photons. In this system the matter (electrons and positrons) is destroyed and changed to non-matter energy (the photons). However, the total system mass and energy do not change during this interaction.

It is not uncommon to hear that energy is "equivalent" to mass. It would be more accurate to state that every energy has an inertia and gravity equivalent, and because mass is a form of energy, then mass too has inertia and gravity associated with it. In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector).[14] In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of space-time (= boosts).

There are strict limits to how efficiently heat can be converted into other forms of energy via work, as described by Carnot's theorem and the second law of thermodynamics. These limits are especially evident when an engine is used to perform work. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.

Energy is also transferred from potential energy (Ep) to kinetic energy (Ek) and then back to potential energy constantly. This is referred to as conservation of energy. In this closed system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other.
This can be demonstrated by the following: Ep,initial + Ek,initial = Ep,final + Ek,final. The equation can then be simplified further since Ep = mgh (mass times acceleration due to gravity times the height) and Ek = ½mv² (half mass times velocity squared). Then the total amount of energy can be found by adding Ep + Ek = Etotal.

Conservation of energy and mass in transformation
Matter may be converted to energy (and vice versa), but mass cannot ever be destroyed; rather, mass/energy equivalence remains a constant for both the matter and the energy during any process when they are converted into each other. However, since c² is extremely large relative to ordinary human scales, the conversion of an ordinary amount of matter (for example, 1 kg) to other forms of energy (such as heat, light, and other radiation) can liberate tremendous amounts of energy (~9 × 10¹⁶ joules = 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of a unit of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure by weight, unless the energy loss is very large. Examples of energy transformation into matter (i.e., kinetic energy into particles with rest mass) are found in high-energy nuclear physics.

Reversible and non-reversible transformations

Conservation of energy
According to conservation of energy, energy can neither be created (produced) nor destroyed by itself. It can only be transformed. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Energy is subject to a strict global conservation law; that is, whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.[15] Richard Feynman said during a 1961 lecture:[16]

Most kinds of energy (with gravitational energy being a notable exception)[17] are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.[2][16]

Transfer between systems

Closed systems
Energy transfer usually refers to movements of energy between systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat.[19] Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy,[20] and the conductive transfer of thermal energy. Energy is strictly conserved and is also locally conserved wherever it can be defined.
Mathematically, the process of energy transfer is described by the first law of thermodynamics: ΔE = W + Q, where E is the amount of energy transferred, W represents the work done on the system, and Q represents the heat flow into the system.[21] As a simplification, the heat term, Q, is sometimes ignored, especially when the thermal efficiency of the transfer is high, leaving ΔE = W.

Open systems
ΔE = W + Q + E_matter, where the last term stands for the energy carried into or out of the system by transferred matter.

Internal energy

First law of thermodynamics
The first law of thermodynamics asserts that energy (but not necessarily thermodynamic free energy) is always conserved[23] and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder full of gas), the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as dE = δQ + δW, where δQ is the heat supplied to the system and δW is the work applied to the system.

Equipartition of energy

See also

Notes and references
1. ^ Energy units are usually defined in terms of the
7. ^ The Hamiltonian. MIT OpenCourseWare website 18.013A, Chapter 16.3. Accessed February 2007.
9. ^ Bicycle calculator - speed, weight, wattage etc. [1].
13. ^ "Earth's Energy Budget". Retrieved 2010-12-12.
17. ^ "E. Noether's Discovery of the Deep Connection Between Symmetries and Conservation Laws". 1918-07-16. Retrieved 2010-12-12.
18. ^ "Time Invariance". Retrieved 2010-12-12.
21. ^ The signs in this equation follow the IUPAC convention.

Further reading
• Crowell, Benjamin (2011) [2003]. Light and Matter. Fullerton, California: Light and Matter.
• Ross, John S. (23 April 2002). "Work, Power, Kinetic Energy". Project PHYSNET. Michigan State University.
Singular Hartree equation in fractional perturbed Sobolev spaces
Title: Singular Hartree equation in fractional perturbed Sobolev spaces
Publication Type: Journal Article
Year of Publication: 2018
Authors: Michelangeli, A., Olgiati, A., Scandone, R.
Journal: Journal of Nonlinear Mathematical Physics
Abstract: We establish the local and global theory for the Cauchy problem of the singular Hartree equation in three dimensions, that is, the modification of the non-linear Schrödinger equation with Hartree non-linearity, where the linear part is now given by the Hamiltonian of point interaction. The latter is a singular, self-adjoint perturbation of the free Laplacian, modelling a contact interaction at a fixed point. The resulting non-linear equation is the typical effective equation for the dynamics of condensed Bose gases with fixed point-like impurities. We control the local solution theory in the perturbed Sobolev spaces of fractional order between the mass space and the operator domain. We then control the global solution theory both in the mass and in the energy space.
Platinum Metals Rev., 2000, 44, (4), 146

Relativistic Phenomena in the Chemistry of the Platinum Group Metals
Effects on Coordination and Chemisorption in Homogeneous and Heterogeneous Catalysis
• By Geoffrey C. Bond
• Brunel University, Uxbridge UB8 3PH, U.K.

Article Synopsis
It may at first sight seem strange that concepts developed by Albert Einstein in the first decade of the 20th century to explain the structure and dynamics of the cosmos should have any relevance to chemistry. However, it is now quite clear that the chemical behaviour of the heavier elements in particular is dominated by what are termed "relativistic effects". This article explores the implications of this for the coordination and chemisorption of carbon monoxide and unsaturated hydrocarbons, and their reactions in homogeneous and heterogeneous catalysis involving the platinum group metals.

A highly significant feature of the chemistry of the elements at the end of the three Transition Series of the Periodic Table is the close similarity that exists between the six elements with partially occupied 4d and 5d orbitals, that is, the group known as the "platinum metals", and the marked difference between them and the three elements which have only 3d electrons, that is, the base metals (iron, cobalt, nickel). This difference also extends into Group 11, where gold and silver are much "nobler" than copper. Part of the reason for this behaviour is that the size of the atoms increases on passing from the 3d to the 4d metals, but there is no further increase on going to the 5d metals, see Table I. This size difference is seen right across the Transition Series, from Group 4 to Group 10 and beyond, and it has traditionally been explained by electron occupation of the 4f orbitals to create the Rare Earth elements before the 5d orbitals start to be filled. This has been held to produce what is termed the "lanthanide contraction", caused by the inability of electrons in the 4f orbitals to shield effectively the s electrons from the increasing positive nuclear charge. The 6s orbital therefore contracts, and the expected increase in atomic size does not occur (1, 2).

Table I. Atomic Numbers, Z, Electronic Structures, Melting Temperatures, Tm, Enthalpies of Sublimation, ΔHsub, and Metallic Radii, r, for the Platinum Group Metals
Ru: Z = 44, (Kr)4d⁷5s¹, Tm = 2655 K, ΔHsub = 648 kJ mol⁻¹, r = 132.5 pm
Rh: Z = 45, (Kr)4d⁸5s¹, Tm = 2239 K, ΔHsub = 556 kJ mol⁻¹, r = 134.5 pm
Pd: Z = 46, (Kr)4d¹⁰, Tm = 1825 K, ΔHsub = 373 kJ mol⁻¹, r = 137.5 pm
Os: Z = 76, (Xe)5d⁶6s², Tm = 3318 K, ΔHsub = 784 kJ mol⁻¹, r = 133.7 pm
Ir: Z = 77, (Xe)5d⁷6s², Tm = 2713 K, ΔHsub = 663 kJ mol⁻¹, r = 135.7 pm
Pt: Z = 78, (Xe)5d⁹6s¹, Tm = 2042 K, ΔHsub = 469 kJ mol⁻¹, r = 138.5 pm

Chemistry of the Heavier Elements
The heavier elements, that is, those having 5d, 6s or 6p electrons, show a number of unusual physicochemical properties, many of which have been interpreted in terms of other measurable parameters, such as ionisation potential, or by assigning a label, such as "the 6s inert pair effect", which provides a comforting sense of understanding. However the underlying causes have remained uncertain, and there has been an increasing suspicion, especially in the last 20 years, that the lanthanide contraction is not a full, perfect and sufficient explanation for all that is observed. Among the anomalies are the greater stability of the PtIV oxidation state compared to PdIV, the stability of the AuIII state, the varied colours of the Group 11 metals, the low melting temperature of mercury and the existence of the Hg₂²⁺ ion.
Numerous other unexplained aspects of the chemistry of the heavier elements have been noted (1, 2). The quite startling properties of gold have recently been reviewed in depth (3): it has high electronegativity and can form the auride ion, Au⁻. It owes its nobility to the instability of its compounds with other electronegative elements, such as oxygen and sulfur.

The Relativistic Analogue of the Schrödinger Equation
The Schrödinger wave equation, describing subatomic particle motion, which we were taught to believe contained in principle the understanding of all chemistry, has in fact one major defect: it does not treat space and time as equivalent, as required by Einstein's Theory of Special Relativity. P. A. M. Dirac and, independently, the Dutch physicist H. A. Kramers, therefore devised a relativistic analogue of the Schrödinger equation, which incidentally predicted the existence of the positive electron (positron), and accounted for the occurrence of electrons having opposite "spins". The difference which the relativistic correction made to the energetic description of the hydrogen atom was however very small, and it was therefore concluded that the chemical consequences were insignificant; but what was overlooked was the fact that, as the positive charge on the nucleus increases, the orbiting electrons must move faster in order to overcome the greater electrostatic attraction and hence to maintain their position. When the nuclear charge is about 50 (at the element tin), electrons in the 1s orbital are moving at about 60 per cent of the speed of light, and their mass is increased according to the equation m = m₀/√(1 − v²/c²), where mass m moves with speed v, c is the speed of light and m₀ is the mass of the particle at rest. The 1s orbital therefore contracts, and the outer s orbitals have to contract in sympathy, but p orbitals are less affected, and d orbitals hardly at all: their shape determines that their electrons spend little time close to the nucleus, see Figure 1.

Fig. 1 The forms of atomic orbitals: (i) s orbitals are spherically symmetrical; (ii) p orbitals consist of three pairs of lobes, centred on each of the Cartesian axes (the px orbital is shown); (iii) d orbitals comprise the eg family, that is the dz2 and dx2-y2 (shown), and the t2g family, which contains three four-fold lobes centred on two of the Cartesian axes, each set being mutually at right angles (the dxy set is shown)

Relativistic Effects on the Properties of the Heavier Elements
The net effect of all this can be illustrated by the results of recent calculations for the metals molybdenum, tungsten and seaborgium (4), see Figure 2. We may provisionally assume this also applies to palladium, platinum and eka-platinum (element 110), respectively. The 6s orbital, having contracted, is thus lowered in energy, while the 5d levels are raised in energy because the orbitals have expanded. The spin-orbit splitting of the p and d orbitals is a particular feature of the relativistic treatment (5-7).
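To put rough numbers on this argument (an illustrative sketch, not part of the original article, using the simplest hydrogen-like estimate v/c ≈ Zα for a 1s electron; more refined treatments shift the percentages somewhat):

```python
# Crude estimate of 1s electron speeds and of the relativistic mass increase
# m/m0 = 1/sqrt(1 - (v/c)^2), taking v/c ~ Z*alpha (hydrogen-like estimate).
import math

alpha = 1 / 137.036                      # fine-structure constant
for name, Z in [("Os", 76), ("Ir", 77), ("Pt", 78), ("Au", 79)]:
    beta = Z * alpha                     # v/c for the 1s electron
    gamma = 1 / math.sqrt(1 - beta**2)   # mass increase factor m/m0
    print(f"{name} (Z = {Z}): v/c ~ {beta:.2f}, m/m0 ~ {gamma:.2f}")
# The roughly 20 % mass increase for the 5d metals and gold is what drives
# the contraction of the s orbitals described in the text.
```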
Fig. 2 This shows the calculated outermost atomic energy levels for molybdenum, tungsten and seaborgium (4) (assumed to be analogous to palladium, platinum and eka-platinum). The left-hand part for each element shows the non-relativistic values (NR) and the right-hand part the values having the relativistic correction (Rel).

The extent of the relativistic correction to the size of the s orbitals (and its smaller effect on p orbitals) increases approximately as the square of the nuclear charge, and the total effect on the atomic size is expressed as a "relativistic contraction", that is, the fractional decrease in the actual metallic radius compared to the calculated non-relativistic value, see Figure 3. This relativistic contraction is greatest for platinum and gold, but it subsequently decreases with increase in nuclear charge as the space occupied by the 6s orbital has a diminishing effect on atomic size.

Fig. 3 Relativistic contraction of the 6s electron level as a function of nuclear charge (redrawn from (4))

It is now generally thought that the relativistic contraction is at least as significant as, if not more significant than, the lanthanide contraction for the 5d metals, and that it is a dominant factor in the chemistry of elements having nuclear charge greater than 80 (mercury). From the energy level diagram of Figure 2 we can understand why the electron configuration of palladium is 4d¹⁰ while that of platinum is 5d⁹6s¹, and why osmium and iridium have the 6s² configuration while ruthenium and rhodium are 5s¹ (because the lower energy levels are filled up first). It also follows that the 5d metals have higher ionisation potentials than the 4d (because the 6s level is of lower energy) and that they should have the greater bond strengths, with atoms both of different type and of the same type, as shown by the sublimation enthalpies and melting temperatures in Table I. This also explains why the (100) and (110) surfaces of only the 5d metals undergo reconstruction in the absence of chemisorbed atoms. The small difference in the energies of the 6s and 5d levels, Figure 2, means that d-electrons can be more easily removed or mobilised for bonding purposes, thus accounting for the stabilities of the PtIV and AuIII oxidation states. Indeed, it would be an interesting exercise to try to predict the properties of eka-platinum on the basis of calculated energy levels of the type shown in Figure 2.

The above observations and their interpretation have been well described (3, 5-7), so it is of more interest to explore less well documented areas. Inorganic textbooks (1, 2) pay due regard to the phenomenon of coordination of ligands to metal atoms and ions, and to the related reactions that occur in homogeneous catalysis, but little attention is paid to the parallel phenomenon of chemisorption or to the resultant reactions of heterogeneous catalysis. These are no less interesting, and in practical terms more useful, aspects of the inorganic/organometallic chemistry of the elements. We ought therefore to look for evidence of the operation of relativistic effects in these areas, to see whether they assist in understanding and in rationalising what is seen. This will be done by considering: (a) carbonyl complexes, chemisorption of carbon monoxide and relevant reactions, and (b) complexes and chemisorption of unsaturated hydrocarbons, and their reactions. In doing so it will be necessary to engage in some rather broad generalisations.
Coordination and Chemisorption of Carbon Monoxide
A very great deal of research has been performed on the carbonyls formed by coordination of carbon monoxide to metal atoms (1, 2). Stable neutral complexes containing one or more metal atoms exist for all the metals of Groups 8 to 10 except palladium and platinum, where the presence of either negative charge or of other ligands, such as halide ion, is required for stability. The same is true for the metals of Groups 11 and 12. Palladium is different from platinum, however, in that it cannot form polynuclear ionic complexes of the Chini type (8), but neutral monatomic complexes, M(CO), of both metals can be made at very low temperature using the matrix-isolation technique (9). The carbon monoxide can bond to one, two or three metal atoms, and the nature of the bonding is well understood (1, 2). In the linear form of coordinated carbon monoxide, the 5σ molecular orbital donates charge to a vacant d orbital on the metal atom, and charge is accepted from a filled metal atomic orbital into the 2π molecular orbital, Figure 4, but since both are antibonding the first strengthens the C-O bond and the second weakens it. The two effects therefore tend to cancel out, but the strength of the C-M bond depends on the extents of the two transfers of charge.

Fig. 4 Molecular orbital diagram for the linear bonding of carbon monoxide to a metal atom (14)

The role of the relativistic effect is shown by recent density functional theory (DFT) calculations on the first bond dissociation energy of carbonyl complexes of the type M(CO)5 and M(CO)4 (10), see Table II. The agreement between theory and experiment, where possible, is satisfactory. It is one of the triumphs of modern computational chemistry that results can be obtained on molecules such as Pd(CO)4 and Pt(CO)4 which do not exist as stable entities, although the calculation correctly predicts very low M-CO dissociation energies for them. Estimates of force constants for the Group 10 molecules M(CO) in matrix isolation provide confirmation that nickel is a much better σ-donor and π-acceptor than either palladium or platinum, although the latter is the second best because the destabilised 5d levels, see Figure 2, allow more metal-to-ligand back donation (9). In Group 8, both theory and experiment show that the sequence of M-CO bond energies is Fe > Os > Ru, for the same reason (10).

Table II Calculated and Experimental First Bond Dissociation Energies, D (in kJ mol⁻¹), of Mononuclear Carbonyls of Groups 8 and 10 (10)

Much is also known about the chemisorption of carbon monoxide on metal surfaces and particles: the nature of the bonding is very similar to that in carbonyl complexes, and the fact that a surface metal atom is linked to a number of others seems to have little effect. The significant difference is however that the M-CO bond is stronger at surfaces than in complexes, so that strong chemisorption is observed with palladium and platinum, and the adsorbed state can even be studied on the Group 11 metals. In fact, it is in Group 11 that the relativistic effect is more clearly seen (11-13), but results from a recent important paper which analyses the situation with the Group 10 metals in great depth must be examined (14). In summary it concludes that the chemisorption of a carbon monoxide molecule on platinum differs from that on nickel and palladium because there is:
• a larger differential shift in C-O vibration frequency between the atop and bridge sites, and
• a smaller change in the work function of the metal.
These observations can only be explained theoretically on the basis of DFT calculations which incorporate the relativistic correction. This is however of much less importance in the case of palladium, see Figure 2.

Coordination of Hydrogen Atoms and Molecules
In homogeneously-catalysed reactions in which hydrogen is a reactant, it must first be coordinated to the metal centre: this usually occurs by dissociation and oxidative addition. The metals of Group 8 form complexes of the type M(PR3)3H4 (15). When M is osmium there are four hydride ligands, but if M is iron or ruthenium there are two hydrides and one hydrogen molecule, the latter acting as a σ* acceptor via its vacant antibonding σ* orbital. DFT calculations show that this changeover is due to the destabilisation of the osmium 5d orbitals, making osmium a stronger donor. It is not yet known whether there are parallel differences in hydrogen chemisorption.

Homogeneous and Heterogeneous Catalytic Reactions of Carbon Monoxide
The industrially and environmentally important catalysed reactions of carbon monoxide are: (i) its reaction with hydrogen, (ii) its reaction with hydrogen and an alkene, that is, hydroformylation, (iii) its reaction with methanol, and (iv) its oxidation to carbon dioxide. The reaction with hydrogen can lead to many different products, including methane, methanol and, by the Fischer-Tropsch synthesis, higher alkenes, alkanes and oxygenated products, depending on the catalyst used, and on the temperature and the pressure at which the reaction is conducted. There are no significant applications of homogeneously catalysed reactions of carbon monoxide with hydrogen alone. Laboratory studies have shown that HCo(CO)4 can catalyse the formation of methanol, and rhodium cluster complexes can give 1,2-dihydroxyethane (ethylene glycol), but vigorous conditions are needed, and they are not commercially attractive (1). All the many important reactions of carbon monoxide and hydrogen require heterogeneous catalysts. The metals which feature in Fischer-Tropsch synthesis are the three base metals of Groups 8 to 10, and ruthenium and osmium. Ruthenium is able to give very high molecular weight hydrocarbons at high pressure. Palladium can catalyse the formation of methanol, and rhodium can make C2 oxygenated products; the distinction here between these two metals and the others of Groups 8 to 10 is a clear reflection of the weaker bonding of carbon monoxide to their surfaces. In the same way, copper is the metal of choice for the industrial synthesis of methanol, although gold also works, but much less effectively (3). The carbonylation of methanol to acetic acid was for many years operated with a homogeneous rhodium catalyst, although an iridium compound is now also used (the Cativa process) (16). This is one of the very few examples of the use of iridium in industrial homogeneous catalysis.

Coordination and Chemisorption of Unsaturated Hydrocarbons
Much work has been performed on these subjects (1, 2, 17-19), so it is necessary to draw some broad generalisations and to select just a few examples for closer attention. Many metal atoms and ions coordinate alkenes, alkynes and alkadienes, and these molecules are also strongly chemisorbed by the metals of Groups 8 to 10. Most work has been done on ethene, so it will be discussed first. The archetypal complex is Zeise's salt, K[PtCl3(C2H4)]·H2O, in which the ethene molecule coordinates sideways on to the PtII ion.
The C-C bond is stretched and the hydrogen atoms move backwards, see Figure 5. The bond is very similar to that described above for carbon monoxide: electrons pass from a filled π orbital into a vacant d orbital on the metal (giving the σ component) and there is a reciprocal transfer back of electrons from a filled metal orbital into a vacant antibonding π* orbital of the ethene (the π component). Similar coordination occurs with other metal atoms and ions of Groups 8 to 10, and with the univalent cations of Group 11, although here the bonding is weaker because there are no vacant d orbitals on the metal. There are also complexes of the type M0(PR3)2(C2H4) in which the geometry is distorted tetrahedral, see Figure 6. The bonding has predominantly π character, again because of the absence (in the case of palladium) or lesser availability (in the case of platinum) of d-orbital vacancies. The coordinated ethene molecule may therefore be considered to have a metallacyclopropane structure. The variable extents of the two types of orbital overlap, depending upon the electronic structure of the metal species, the nature of the other ligands and the substituents on the alkene, mean that there is a continuous range of structures, from an almost unaltered ethene molecule through the σπ complexed Zeise’s salt structure to the ethane-like metallacyclopropane mode, rather than there being clearly divided classes.
Fig. 5 Molecular orbital diagram for the bonding of ethene to PtII in [PtCl3(C2H4)]− (the anion of Zeise’s salt). Note that there are other Cl− ions in front of and behind the platinum atom
Fig. 6 Molecular orbital diagram for the bonding of ethene to Pt0 in Pt0(PPh3)2(C2H4): electron donation from ethene is supposed to be to a partly-filled dp2 hybrid orbital on the metal. This complex is almost planar; the phenyl groups are not shown
It is difficult to find quantitative information on the strengths of the coordinate bond between ethene and metal atoms or ions. There is however a general impression that platinum complexes are more strongly bonded than those of palladium, due to the greater spatial extension of the platinum 5d orbitals, and the larger orbital overlap which is therefore possible. This impression is confirmed by measurements of the equilibrium constant for the formation of M(PR3)2(C2H4) complexes in benzene solution at 298 K: the values are ∼300 for nickel, 0.013 for palladium and 0.122 for platinum (20). Bond energies for these complexes and for those of the type M(CO)4(C2H4) have been calculated by relativistic DFT, and are, in both cases, smallest for palladium, see Table III (21).
Table III Ethene-Metal Bond Dissociation Energies (kJ mol-1) Calculated by Relativistic Density Functional Theory (21)
Similar structures to those seen in coordination complexes have also been identified when ethene is chemisorbed by metal surfaces or particles (17–19). Indeed, a simple-minded theoretical basis for the correspondence was presented in the mid-1960s (22). In particular the πσ form (as in Zeise’s salt) is seen at low temperature or high surface coverage on a number of surfaces, and an analogue of the metallacyclopropane form may also have been detected, see Figure 7.
However, because multi-atom sites are available, other structures can arise, especially the σ-diadsorbed ethane-like structure in which the molecule forms σ-bonds to two adjacent metal atoms, one from each carbon atom, as shown in Figure 7 (23). The only analogy for this type of structure in organometallic chemistry is the complex Os2(CO)8(C2H4), which may also owe its existence to the relativistic d orbital destabilisation.
Fig. 7 Molecular orbital diagrams for ethene chemisorbed on metal surfaces: (a) the σ-diadsorbed form on platinum, (b) the πσ form on a platinum atom (the analogue of the structure shown in Fig. 6) and (c) on atoms of Group 11, where back-bonding from ethene to metal cannot occur because of the lack of a suitable vacant orbital
The use of spectroscopic and structure-determining techniques such as FTIR and LEED has provided estimates of the stretching of the C-C bond following chemisorption, either by: (i) a “π,σ parameter” derived from the change in vibration frequency of the C-C bond, taking values between zero (for the free molecule) and unity (for C2H4Br2) (24), or by (ii) a bond order between unity and two derived from the length of the C-C bond (19). Some values of both parameters are given in Table IV, and show that the distortion due to stretching is slight for the Group 11 metals but large for iron, ruthenium, nickel and platinum surfaces.
Table IV Values of the π,σ Parameter (24) and of the C-C Bond Order (19) for Ethene Chemisorbed on Various Metal Surfaces
Surface | π,σ parameter | Bond order
Ag film | - | 1.88
Au foil | 0.25 | -
From the very extensive literature, the following generalisations may be made: (i) The σ-diadsorbed form of ethene is commonly seen on platinum surfaces, but rarely on palladium, which usually shows the πσ type of structure. (ii) The πσ type occurs together with the σ-diadsorbed form on stepped platinum surfaces. (iii) Pre-adsorbed oxygen atoms favour the πσ form, or structures which tend towards this form. The clear conclusion is that there is a marked tendency for ethene to be more strongly chemisorbed (as the σ-diadsorbed form) on platinum than on palladium (where the πσ form is favoured); the parallels with coordination chemistry are also very marked, and the explanations are the same.
Other Alkenes
Alkenes having a methylene group adjacent to the double bond may on coordination lose a hydrogen atom to form a π-allylic ligand (1, 2). This occurs with a number of metal atoms and ions, but more readily and extensively with nickel and palladium than with platinum. Here is another distinction that may have its origin in relativistic effects. Dienes coordinate strongly through both double bonds to many metal species: on palladium, for instance, 1,3-butadiene is chemisorbed by both double bonds in the πσ mode, but on platinum, only through one double bond in the σ-diadsorbed form (18).
Homogeneous and Heterogeneous Catalysis of Reactions of Unsaturated Hydrocarbons
It would be possible to summarise the homogeneous catalysis of alkene reactions and of metal-mediated reactions of organic molecules in general by saying simply that (with the sole exception of hydrosilylation) all require a metal compound or complex drawn from the elements in the first two rows of Groups 8 to 10. This remarkably clear generalisation is however not often noted.
The distinction between the second and third row metals is strikingly confirmed by inspecting, for example, Number 3 of this Journal for 1999, where in the reviews on pages 103 and 114, and in the ‘Homogeneous Catalysis’ Abstracts and New Patents sections, every reference concerns either ruthenium, rhodium or palladium. More generally, complexes of the three base metals: iron, cobalt and nickel, are also effective, while those of the 5d metals and compounds of the Group 11 metals are not. This illustrates the operation of a “Volcano Principle” in homogeneous catalysis: the 5d metals form complexes that are too strongly bound, those of the Group 11 metals are too weakly bound, while those of the 3d and 4d metals fall in the acceptable range. Unfortunately there seems to be little quantitative information available (apart from that mentioned above) to underpin that statement. It is only necessary to record briefly some of the observations which lead to this concept. Early work on homogeneous hydrogenation used compounds or complexes of iron, cobalt, ruthenium and rhodium (for example, Wilkinson’s complex Rh(PPh3)3Cl); hydroformylation used initially cobalt, but this was later replaced by rhodium which gave greater selectivity to terminal aldehydes (2). Complexes of NiII and RhI catalyse the dimerisation of alkenes (2), and complexes of FeII, NiII and PdII with suitable nitrogen-containing ligands catalyse the polymerisation of ethene either to α-alkenes or to high-density polyethene, depending on the type of ligand (25). NiII complexes also polymerise ethyne to either benzene or cyclo-octatetraene. In the case of hydrosilylation, the use of platinum is probably required in order that an alkene complex of sufficiently great stability can be formed in the presence of bulky -SiR3 ligands. Alkenes coordinated to nickel and palladium species are susceptible to nucleophilic attack, for example by OH− ions. The best-known case of this is the oxidation of ethene by PdII to ethanal (acetaldehyde), or in the presence of ethanoic (acetic) acid to ethenyl ethanoate (vinyl acetate). Here the PdII is reduced to Pd0, and an oxygen-carrying CuII species is needed to complete the catalytic cycle (1, 2). In heterogeneous catalysis, the clearest proof of the operation of a relativistic effect lies in the reactions of unsaturated hydrocarbons with hydrogen or deuterium. There is a very clear distinction between: (A) nickel and palladium (and copper) on the one hand, and (B) platinum (and iridium) on the other, in the following reactions (26): (i) Reaction of ethene with deuterium, where the A Group metals allow a much greater return of deuterium-substituted alkenes to the gas phase. (ii) Reactions of C4 or higher alkenes, where molecules altered by double-bond migration or E/Z-isomerisation appear in the gas phase to a much greater extent with the A Group than the B Group. (iii) Hydrogenation of alkynes and alkadienes, for which the A Group metals are more active, and on which they afford the intermediate alkene with much higher selectivity, and preferential Z-addition of hydrogen; with 1,3-butadiene, for example, a high yield of 1-butene is obtained. All of these observations are consistent with a weaker chemisorption of the alkene compared to that of the alkyne or alkadiene on the A Group metals, as indeed was suspected many years ago (26). These and other features of the reaction have been rationalised by proposing that πσ or π-allylic intermediates occur with the A Group, but σ-diadsorbed intermediates with the B Group.
Once again there is a clear correspondence between the findings of organometallic chemistry and homogeneous catalysis and with those of heterogeneous catalysis. The second point (ii) above also explains why fat-hardening is conducted with nickel (27) or palladium (28) catalysts, but not with platinum. Expressed briefly, a C18 chain containing three non-conjugated C=C bonds has to be hydrogenated so that only one C=C bond is left. This requires first an isomerisation to bring them into conjugation, and then a selective hydrogenation, for which nickel under hydrogen-diffusion-limited conditions is suitable. Palladium is also suitable. We may also note that ethyne trimerises to benzene on palladium (best on the (111) surface), just as occurs with NiII complexes (29). Finally we may ask why platinum was the metal of choice for petroleum reforming, since the opening step is dehydrogenation of an alkane, followed by desorption of an alkene and its migration to an acid site. In fact nickel was the metal first used, and recently palladium has found some use, so the answer probably lies in the fact that platinum, in conjunction with tin or rhenium, is much less susceptible to deactivation by carbon deposition than nickel or palladium would be.
Some Final Thoughts
This review has indicated that there are many respects in which the 5d metals (Os, Ir, Pt, Au) differ from the 3d base metals (Fe, Co, Ni, Cu) and the 4d metals (Ru, Rh, Pd, Ag) (30). These differences can be explained in terms of the stabilisation of the 6s level and the destabilisation of the 5d level, compared to the situation in the earlier series, see Figure 2. The origin of this phenomenon is the operation of a relativistic effect on all the s orbitals. Metal catalysts for those reactions involving only σ-bonded alkyl radicals or multiple C=M bonds (equilibration of alkanes with deuterium, alkane hydrogenolysis, etc.) do not show preferential activity related to this effect. It may be wondered what other aspects of the chemistry of the elements may be traced to this cause. In the bioinorganic field, there is platinosis, but not palladosis; there is the well-known toxicity of many of the heavy elements, but the beneficial use of platinum complexes in chemotherapy (7) (but not so much those of palladium or other metals) and of gold (but not silver) in treating arthritis. It would be surprising if there were not some underlying connection. So next time you touch gold, or drive a car with a platinum catalyst beneath it, or take your temperature with a mercury-in-glass thermometer, do remember: you are now directly in touch with some consequences of the principles that shape the Universe and are determining its evolution.
References
1. N. N. Greenwood and A. Earnshaw, “Chemistry of the Elements”, 2nd Edn., Butterworth-Heinemann, Oxford, 1997
2. F. A. Cotton and G. Wilkinson, “Advanced Inorganic Chemistry”, 5th Edn., Wiley-Interscience, New York, 1988
3. G. C. Bond and D. Thompson, Catal. Rev.: Sci. Eng., 1999, 41, 319
4. D. C. Hoffmann and D. M. Lee, J. Chem. Educ., 1999, 76, 332
5. K. S. Pitzer, Acc. Chem. Res., 1979, 12, 271
6. P. Pyykkö and J.-P. Desclaux, Acc. Chem. Res., 1979, 12, 276
7. P. Pyykkö, Chem. Rev., 1988, 88, 563
8. P. Chini, G. Longoni and V. G. Albano, Adv. Organomet. Chem., (eds. F. G. A. Stone and R. West), 1976, 14, 285
9. E. P. Kündig, M. Moskovits and G. A. Ozin, Can. J. Chem., 1972, 50, 3587
10. J. Li, G. Schreckenbach and T. Ziegler, J. Am. Chem. Soc., 1995, 117, 486
11. K. Dückers and P. Bonzel, Surf. Sci., 1989, 213, 25
12. A. Sandell, P. Bennich, A. Nilsson, B. Hernnäs, O. Björneholm and N. Mårtenson, Surf. Sci., 1994, 310, 16
13. P. Dumas, R. G. Tobin and P. L. Richards, Surf. Sci., 1986, 171, 555, 579
14. G. Pacchioni, S.-C. Chung, S. Krüger and N. Rösch, Surf. Sci., 1997, 392, 173
15. J. Li, R. M. Dickson and T. Ziegler, J. Am. Chem. Soc., 1995, 117, 11482
16. B.-J. Deelman, Platinum Metals Rev., 1999, 43, (3), 105; J. H. Jones, ibid., 2000, 44, (3), 94
17. N. Sheppard and C. De la Cruz, Adv. Catal., 1996, 41, 1; ibid., 1998, 42, 181
18. F. Zaera, Chem. Rev., 1995, 95, 2651
19. E. Yagaski and R. Masel, in “Specialist Periodical Reports: Catalysis”, Roy. Soc. Chem., London, 1994, Vol. 11, p. 164
20. C. A. Tolman, W. C. Seidel and G. H. Gerlach, J. Am. Chem. Soc., 1972, 94, 2669
21. J. Li, G. Schreckenbach and T. Ziegler, Inorg. Chem., 1995, 34, 3245
22. G. C. Bond, Discuss. Faraday Soc., 1966, 41, 200; Platinum Metals Rev., 1966, 10, (3), 87
23. A. M. Bradshaw, Surf. Sci., 1995, 331–333, 978
24. E. M. Stuve and R. J. Madix, J. Phys. Chem., 1985, 89, 105, 3183
25. V. Gibson and D. Wass, Chem. Br., 1999, 35, (7), 20
26. G. C. Bond and P. B. Wells, Adv. Catal., 1964, 15, 92
27. G. C. Bond, “Heterogeneous Catalysis – Principles and Applications”, 2nd Edn., OUP, Oxford, 1987
28. V. I. Savchenko and I. A. Makaryan, Platinum Metals Rev., 1999, 43, (2), 74
29. T. M. Gentle and E. L. Muetterties, J. Phys. Chem., 1983, 87, 2469
30. G. C. Bond, J. Mol. Catal. A: Chem., 2000, 156, 1
The Author
Geoffrey Bond has retired from Brunel University and is Emeritus Professor. He is also Visiting Professor at the University of Salford. His major interests are in metal-catalysed reactions of hydrocarbons, supported oxides and kinetics of catalysed reactions.
Quantum Matter Animated! by Jorge Cham – “I don’t remember anything I learned in college”
Watch the first installment of this series:
Transcription: Noel Dilworth
Thanks to: Spiros Michalakis, John Preskill and Bert Painter
65 thoughts on “Quantum Matter Animated!”
1. Is it still not possible that the laser gave some part of its energy to the mirror? Is it possible to detect such a small instantaneous rise in temperature (which will be dispersed to the surroundings within a fraction of a second as it is maintained at 0 K)? Because if it is not completely possible to measure such small changes in temperature in such a small time, then how can we be sure that the red shifted laser is NOT due to the laser giving off its energy? And if this is the reason then this still does not prove that the mirror was vibrating. It started vibrating only after being hit by the laser. But due to temperature dispersal the mirror was instantly damped and brought again to zero vibrations or ground state. • I am not a physicist, but the intuitive answer to your question is that if the laser were imparting energy to the mirror, and that was where the red shift was coming from, then there would still be a corresponding blue shift. • Right. When the oscillator is in its quantum ground state, it can absorb energy but cannot emit energy because it is already in its lowest possible energy state. Reflected light can be shifted toward the red (have lower energy than the incident light, because the oscillator absorbed some of the incident energy), but cannot be shifted toward the blue (have higher energy than the incident light). That’s what the experiment found. • Just a follow-on to John’s response… The inability of the mechanical resonator to give up energy when it is in its lowest energy state seems like an obvious statement (by definition of “lowest energy state”), and so why is the experiment interesting then? All it did was confirm that indeed this energy emission goes away as the object gets colder and colder and approaches its ground (lowest energy) state. It is really the fact that the mechanical resonator can absorb energy when it is in the ground state that is interesting. The classical description of the motion of a mechanical object has no way of allowing for this asymmetry in the emission and absorption of energy with the environment; the processes must be symmetric and zero when the object is not moving at temperature = 0 K. Think of it from the standpoint that the mechanical object isn’t moving when in its classical ground state, and thus it is not doing work on its environment and the environment is not doing work on it. That is what makes the quantum description of the ground state of motion interesting; it allows for the asymmetry in the process of emission and absorption of energy by the mechanical resonator to (or from) the environment. I like to make the analogy to the spontaneous emission of light from an atom, in which there is no corresponding spontaneous absorption process of light. A well defined “mode” (think of it as a particular direction and polarization) of light can be described by a similar set of quantum equations as that describing the mechanical resonator, and thus also has a ground state with intrinsic fluctuations.
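To put an approximate number on the asymmetry John and Oskar describe, here is a minimal numerical sketch (added for illustration; the mechanical frequency and temperatures are made-up placeholders, not the values from the experiment under discussion). In the standard quantum treatment of such an oscillator, the process in which it takes up one quantum from the light (red-shifting the scattered photon) occurs at a rate proportional to n + 1, while the process in which it gives one up (blue-shifting the photon) is proportional to n, where n is the thermal (Bose-Einstein) occupation of the mechanical mode:

# Minimal sketch, not from the original post: thermal occupation of a mechanical
# mode and the resulting red/blue (Stokes/anti-Stokes) sideband asymmetry.
# The frequency and temperatures below are illustrative placeholders only.
import numpy as np

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J/K

def n_thermal(freq_hz, T_kelvin):
    """Bose-Einstein occupation of a mode of frequency freq_hz at temperature T."""
    x = hbar * 2.0 * np.pi * freq_hz / (kB * T_kelvin)
    return 1.0 / np.expm1(x)

f_m = 3.7e9                      # mechanical frequency in Hz (illustrative GHz-scale resonator)
for T in [10.0, 0.3, 0.025]:     # temperatures in kelvin (illustrative)
    n = n_thermal(f_m, T)
    # red (phonon-creating) weight ~ n + 1, blue (phonon-annihilating) weight ~ n
    print(f"T = {T:6.3f} K   n = {n:10.4f}   blue/red ratio = {n / (n + 1):.4f}")

As the mode approaches its ground state (n → 0) the blue sideband vanishes while the red one survives through the leftover “+1”, which is the zero-point contribution of the mode; both the mechanical resonator and any mode of the light field retain such zero-point fluctuations in their ground states.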
These “zero-point fluctuations” or “vacuum fluctuations” can be thought of as triggers for atomic spontaneous decay and emission of light by the atom, but do not cause the reverse process of spontaneous excitation of the atom. [Aside: This used to really mystify me when I first learned about spontaneous (and the related stimulated) emission of atoms. The excellent little book by Allen and Eberly, does a nice job of de-mystifying the vacuum fluctuations.] A nice description of the above argument is also given in Aash Clerk’s Physics Viewpoint accompanying article: • Hi Oskar, John, and Paras: 0. For some odd reason, while fast browsing, I first read Oskar’s reply, and then John’s, and only after both, Paras’ original question. (Oskar’s was the longest and innermost indented reply, and so it sort of first caught the eye in the initial rapid browsing.) Even before going through your respective replies, I had happened to think of what in many ways is the same point as Paras has tried to point out above. … Ok. Let me put the query the way I thought of. 1. Here is a simple model of the above experimental arrangement, simplified very highly, just for the sake of argument. The system here consists of the mechanical oscillator and the light field. The environment consists of the light source, the optical measurements devices, the cooling devices, and then, you know, the lab, the earth, all galaxies in the universe, the dark matter, the dark energy … you get the idea. The environment also includes the mechanical support of the oscillator, which in turn, is connected to the lab, the earth, etc. *Only* the system is cooled to 0 K. [Absolutely! 😉 Absolutely, only the system is cooled “to” “0” K!!] The measurement consists of only one effect produced by the light-to-the mechanical oscillator interaction: the changes effected to the reflected light. This effect, it is experimentally found, indeed is in accordance with the QM predictions. (BTW, in fact, the experiment is much more wonderful than that: it should be valuable in studying the classical-to-QM transitions as well. But that’s just an aside as far this discussion goes.) 2. Now my question is this: what state of |ignorance> + |stupidity> + |insanity> + |sinfulness> [+ etc…] do I enter into, if I forward the following argument: At “0” K, the system gets into such a quantum mode that as far as the *reflection* of the light is concerned, if “I” is the amount of the incident light energy (say per unit time), then only some part of it (i.e. the red-shifted part of it) is found to be reflected. However, there still remains an “I – R” amount of energy that the system gives back to the environment via some *experimentally* unmeasured means. If it doesn’t, the first law of thermodynamics would get violated. We may wonder, what could be the form taken by such an energy leakage? Given the bare simplicity of the above abstract description as to what the system and environment here respectively consist of, the answer has to be: via some mechanical oscillation modes of the mechanical oscillator that we do not take account of (let alone measure) in this experiment. The leakage would affect the mechanical support of the oscillator, which, please note, lies *outside* of the system. [The oscillation modes inside the system may be taken to be quantum-mechanical ones; outside of it, as classical ones. But I won’t enter into debates of where the boundary between the quantum and the classical is to be drawn, etc. 
As far as this experiment—and this argument—goes, we know that “inside” the system, it has to be treated QMechanically; outside, it’s classical mechanically; and that’s enough, as far as I am concerned!] Since the system here is not a passive device but is *actively* being kept cooled down “to” “0” K, it means: it’s the “freeze” sitting in the environment, not to mention the earth and the rest of the universe, which absorb these leaked out vibrations of the mechanical oscillator. The missing energy corresponds to *this* leakage. 3. Of course, I recognize that my point is subtly different from Paras’. His write-up seems to suggest as if there is an otherwise classically rigid-body oscillator sitting standstill, which begins to vibrate only after being hit by the laser. In contrast, I don’t have this description in mind. He also seems to think rather in terms of a *transient* damping out of the mechanical oscillations. Though I do not rule out transients in the system, that wouldn’t be the simplest model one might suggest here: I would rather think of the situation as if there were a more or less “steady-state” leakage of the missing energy into the environment. Yet, Paras does seem to appreciate the role of the environment—the unmeasured side-effects, so to speak, that the system produces on the environment. 4. Anyway, I would appreciate it if you could kindly let me know in what final state should I collapse: |ignorance> or |stupidity> or … . And, why 🙂 [BTW, by formal training, I am just an engineer. And, sorry if my reply is too verbose and had too many diversions…] Thanks in advance. • About your parts (2) and (3), I think it is easier to think of it this way: At the low temperatures the system is subjected to (I really don’t think it even makes sense to say that “only the system is cooled down to 0K”; instead, just say that the system is cooled down to low temperatures is enough), a lot of the system’s constituent particles are in their ground states. What is happening in this experiment, is that they are observing that absorption and excitation of constituent particles up from ground states is observed without the corresponding “classical” de-exciting reflection wave that you normally get. This is predicted from the quantum physics. The special thing about this experiment, though, is that they are also saying that the entire system itself, a macroscopic body, has a quantum wavefunction just like their microscopic parts. That is the part that is interesting and worth reporting upon. Because, if a macroscopic body has a quantum wavefunction, then it can also do all the rest of the quantum weirdness, and that applies to us humans, the Earth, being able to, say, perform quantum tunnelling. Once you see the experiment in this way, it is then obvious that the loss of energy that you perceive, is merely the spontaneous emission of light by the excited particles, and, in this way, they drop back into the ground state of the entire system. This is important, because spontaneous emission is basically undetectable in our case, which is what the experiment observed. The point is that, classically, you are supposed to observe substantial energetic reflection (along with the spontaneous emission that you cannot remove), and you do not observe that in this experiment. 2. Could you add a link to the paper about the experiment for those readers who want more details about it? 3. 
Whenever someone asks me for a book to explain quantum mechanics to laymen, I always point them to this: It’s an illustrated book about the history of quantum mechanics created by Japanese translation students studying english. They chose the topic because they needed to be able to accurately translate relatively technical material. It’s wonderful for answering the questions you raise in the post above. 4. Great video describing a really interesting experiment. However, it is far from reaching the important lessons from Quantum Mechanics that have shaped the way we see the universe. Forget Quantum Computing. I am not saying that Quantum Computing is not sexy or something, but it is not where the paradigm shift is. One of the greatest thing that a Physics undergraduate degree forces you through, is to learn about Condensed Matter Physics. You might think that, in contemporary Physics education, they would certainly teach you both Quantum Mechanics, and General Relativity. After all, they are what we call the new world view, that revolutionised how we as a species see ourselves. The truth, however, is that, if I did not force them to teach me, they would have ignored General Relativity and just taught me Quantum Mechanics. Lots of it. Without motivation. It is only at the end of the Physics degree do you get to see why it is arranged in the way it is. Special Relativity, the one that Einstein published in 1905, is a really easy thing. Yes, it is bizarre, but you can easily teach it, and later on, you can tell students to apply what they have learnt. That it is reducible into small equations that are easy to memorise, is another plus point. General Relativity, on the other hand, is a pain to teach — everybody, mathematician or physicist, would be confused by the initial arguments, the mathematical notation and all that jazz, until you have completed the module. And even after that, some people just never get it (although, luckily, it is simple enough that a large chunk of people actually understand it very fully, to the contrary of Eddington’s bad joke). The deal breaker, however, is that the ideas from General Relativity, although a nice help to the other parts of Physics, is very far away from essential. i.e. People can make do without any knowledge of that, and still contribute to the rest of Physics in a proper way. That is not the same with Quantum Mechanics. The standard way they teach Quantum Mechanics these days, is to throw the mathematics at you, right at the start. Just write down your energy equation (that you can remember from high school), do your canonical quantisation (which is nothing other than replacing symbols you know about with derivative signs; a monkey can do that), and tack on something magical that we call the wavefunction, and Viola, Quantum Goodness! Since there is nothing to actually understand about it, I watched in amusement as everybody around me struggled to understand something out of nothing, congratulating myself for actually knowing the meaninglessness of it all. Boy, what do I know? The next module, aptly named “Atomic and Molecular Physics”, looked like nothing but applications of the mathematics learnt. It was HORRIBLE to go through, especially since it looked like vocational training — approximation and other calculational techniques that are hardly useful outside higher and higher corrections to the properties of materials that classical physics could have found out about (except quantisation, of course). 
It was important to have learnt it (not least because it was the first place in which Quantum Entanglement was taught), but it felt like we are just learning tricks instead of ideas. Statistical Thermodynamics was better. Building upon Thermal Physics in first year, there was a bit of Quantum effects being shown in action, especially the Quantum Degeneracy pressure that keeps stars the size they are. Then BOOM! Condensed Matter Physics (I learnt it under the older name, Solid State Physics). I had to completely rewrite what I thought I had known about Quantum Theory, for it is obvious I knew nothing. I am sure you guys have heard of the adage: “When stuff are moving fast, are large, or heavy, General Relativity cannot be neglected. When things are small, Quantum corrections cannot be neglected.” It is still true, but there is a sleight of hand here — we have yet to define what it means to be “large” or “small”. In particular, whenever you have a lot of material squeezed into a small space, i.e. high density, it is small. Thus, something can be both large and small at the same time, requiring both General Relativity and Quantum Theory to describe. A black hole is one such object. The name “Condensed Matter”, is a really good one. Any liquid or solid, really, is condensed, so condensed, that actually, it is no longer a classical system — the quantum effects DOMINATE. Without incorporating Quantum Theory right into the heart of it all, nothing you calculate even makes sense. And since our first approximations here beat the best classical calculations left-right-centre, there was also no reason to teach the classical approximation techniques either. Specifically, notice how, in high school, people teach you that heat and sound are just atoms moving about in different ways? Classical theories can talk about heat propagation and sound propagation and motion. But they are three different islands that don’t even make sense together. So different, that even their mathematical tools are different. But in Quantum Theory, the same mathematics describe all three as one united whole, on the zeroth approximation, and even give you dispersion, which is something classical theories cannot explain without complicated methods. After being floored by how it actually is done, the icing on the cake is Transistors. The theory was originally made in order to explain how metals behave, and we talk about a free electron gas, to explain how metals conduct so well. So, it came as a complete shock that any improvement, notice, ANY SIMPLE improvement to the free electron picture, be it Nearly Free Electron model, or the Tight Binding model, energy bands appear. In practical terms, the theory that sought to explain metals, now explains insulators, and even more, predicts the existence of this previously unheard of class of materials, known as semiconductors. Indeed, it does even better. It predicts the existence, how to make them, and how they would be useful. It is the first time that Physics THEORY had been faster and earlier than the experimenters at any topic. So, yeah, while you are enjoying your computers reading this piece, appreciate the sheer ingenuity and wonder that is brought to you by the Quantum revolution. Please alert Jorge to this. He can do wonders with information. Sound propagation and bulk motion can be treated the same way, because they are both forms of characteristic wave propagation and show up as eigenvalues of the same equation set. 
Heat transfer, viscous momentum transfer, and diffusive mass transfer all work basically the same way, because they are closely related effects of the same basic process. All of this can be derived in a unified framework using the principles of classical kinetic theory, because all of it is inherent in the Boltzmann equation. It’s true that you need quantized internal energy states to accurately predict something as simple as the temperature dependence of specific heat in a gas. But it seems to me that you are somewhat exaggerating the shortcomings of classical physics. I am really doubtful of that. The reason is that the mathematical apparatus is just not the same. For the propagation of heat, you have the heat equation in classical physics, with the propagation constant kappa. For sound, the Harmonic approximation gives rise to a fixed speed of sound, which you later improve upon by adding anharmonic terms so that the speed of sound becomes a variable. Those two constants are not the same. Granted, they are dimensionally inconsistent, but the fact is that you have to treat them rather differently. The reason for this discrepancy is that sound propagation exhibits higher frequency dependency, so that it is easier to look at one frequency. Heat, on the other hand, is usually averaged over in the context of classical heat propagation. This makes it really complicated, as you have to average over both spacetime and weight them according to the probabilities of being in so-and-so states. Note that this last thing is also itself temperature dependent, so classical physics is crazy. Nothing stops a person from combining the heat and sound contributions in classical physics, but they are like Frankenstein combinations — oh, this contribution is for heat, and that for sound, and this for their interaction. That is very different from truly unified descriptions in Quantum theory, where it is one term, and one term only, that we are looking at. Because of that, I do not think I am exaggerating the shortcomings of classical physics. It simply is not a unified framework, although it is frequently possible to push approximations in classical physics to really high orders of accuracy. That, I can give, but not unification. And even then, one should notice the tremendous difference in the mathematical methods involved. Yes, both approaches would heavily depend on Fourier analysis, but that is just about their similarities. Instead, a knowledge of the approximation techniques in classical physics is only useful for the continuum free-space approximation of the transport of various quantum objects, whereas proper quantum approximation techniques is frequently simpler than the classical counterparts Finally, bulk motion is very different from either of sound nor heat in any case, except the fact that they are all of zero frequency (actually, this is how the normal mode mathematical technique announces its own failure, and there are ways to compensate formally). Luckily, it is seldom a problem that this is happening — after all, bulk motion would, somewhat, be better treated with relativistic methods. • I suspect we’re talking past one another a bit here. I’m a fluid dynamicist. I’ve studied some advanced solid mechanics and continuum mechanics, but mostly I’m a fluid dynamicist. When you say stuff like “bulk motion is very different from sound”, I think of the underlying physical principles, because in the derived practical equations I use this is not true. 
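To make the claim that one quantised description yields heat, sound and dispersion together a little more concrete, here is a minimal sketch (my own illustration, with made-up parameter values) of the textbook one-dimensional monatomic chain. Quantising the lattice vibrations gives phonons with dispersion ω(k) = 2 √(K/m) |sin(ka/2)|; the small-k slope of that single curve is the sound velocity, and occupying exactly the same modes with the Bose-Einstein factor gives the lattice heat capacity.

# Minimal sketch (illustrative parameters): the quantised 1D monatomic chain.
# One phonon dispersion supplies both the sound velocity (small-k slope) and
# the lattice heat capacity (Bose-Einstein occupation of the same modes).
import numpy as np

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J/K

a = 3.0e-10    # lattice spacing, m (illustrative)
m = 4.7e-26    # atomic mass, kg (illustrative)
K = 15.0       # nearest-neighbour force constant, N/m (illustrative)

k = np.linspace(1e-8, np.pi / a, 2000)                        # first Brillouin zone
omega = 2.0 * np.sqrt(K / m) * np.abs(np.sin(k * a / 2.0))    # phonon dispersion

v_sound = omega[0] / k[0]          # slope at small k ~ speed of sound
print(f"sound velocity ~ {v_sound:.0f} m/s")

def heat_capacity_per_atom(T):
    """Lattice heat capacity per atom from the same phonon modes."""
    x = hbar * omega / (kB * T)
    c_mode = x**2 * np.exp(x) / np.expm1(x)**2   # harmonic-mode heat capacity / kB
    return kB * c_mode.mean()

for T in [5.0, 50.0, 300.0]:
    print(f"T = {T:5.1f} K   C per atom = {heat_capacity_per_atom(T) / kB:.3f} kB")

The point of the sketch is only that the dispersion relation which fixes the sound velocity is the same object that is thermally occupied to give the heat capacity, which is the sense in which sound and heat come out of one term in the quantum treatment rather than from two separately patched classical equations.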
But when you say stuff like “the heat equation in classical physics, with the propagation constant kappa”, I think of the phrase “toy equation”. Even in the engineering form of the heat equation, or the Navier-Stokes equations for a linear isotropic fluid, kappa is a coefficient, not a constant (though turbulence modellers generally ignore its thermodynamic derivatives). And it doesn’t show up at all in the Boltzmann equation, unless you do the math and derive it. Regarding “unified framework”, I expressed that poorly. Sure, in the engineering equations, first-order fluxes like acoustic propagation and bulk motion are handled differently than second-order fluxes like heat transfer and viscous stress. This is because their behaviours are different, so the simplest reasonably accurate mathematical descriptions of them will unavoidably be different. But it should never be forgotten that they can both be derived from the same statistical mechanical representation. It strikes me that what the Boltzmann equation is to fluid mechanics is somewhat analogous to what Schrödinger’s equation is to quantum condensed matter physics (though it isn’t quite as fundamental). The general form isn’t very useful by itself, but specializations and approximations can produce good enough results to translate into engineering equations. The key to the Boltzmann equation (assuming you have enough dimensions to describe all important degrees of freedom) is the collision operator, which could be said to be analogous to the Hamiltonian in the Schrödinger equation. The collision operator describes all interactions between particles and is very difficult to specify exactly for real physical systems, though a number of popular approximations exist. I gather this is a bit different from the quantum-mechanical approach you’re talking about, where a lot of condensed systems can be described surprisingly well with “noninteracting” approaches… People have tried to use the Boltzmann equation (with or without quantum effects) to model solids, with mixed success. It seems to be best at fluids, especially gases and plasmas, perhaps because the molecular chaos assumption is difficult to remove. Look, I’m not claiming that quantum physics is no better than classical physics. But you seem to be saying “classical physics” when you should be saying “classical engineering approximations”, and then drawing conclusions based on the conflation of the two. Comparing the Schrödinger equation to something like the Navier-Stokes equations, never mind the heat equation, is apples-to-oranges. You can actually derive all of the basic principles of fluid mechanics from Newtonian mechanics, without even referencing electromagnetics, though your accuracy won’t be very good… I shouldn’t have gotten involved. I have a segfault to chase down… • I better see where you are coming from. You are clearly talking about deeper stuff, and good luck with your segfault. However, I do not think that your argument is convincing enough. Yes, it is possible to derive fluid equations and so forth from Newtonian mechanics. The problem still persists, however, that after the derivation (in which kappa turns out to be a derived quantity and actually not a constant), that the treatment of heat and sound needs to be done as stitched patchworks on top of the same fundamentals. As you rightly noted, I was saying that you don’t treat it that way in quantum physics, and it is quite important to see how it is actually handled differently. 
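For readers following the Boltzmann-equation analogy a few comments up, the equation being referred to can be written out explicitly. In the widely used BGK (relaxation-time) approximation for the collision operator it reads (a standard textbook form, added here for reference rather than taken from the thread):

\frac{\partial f}{\partial t} + \vec{v} \cdot \nabla_{x} f + \frac{\vec{F}}{m} \cdot \nabla_{v} f \;=\; \left(\frac{\partial f}{\partial t}\right)_{\mathrm{coll}} \;\approx\; - \frac{f - f^{\mathrm{eq}}}{\tau}

Here f(x, v, t) is the one-particle distribution function, f^{eq} a local Maxwell-Boltzmann distribution and τ a relaxation time. Density, bulk velocity, stress and heat flux are all moments of the same f, and the collision term is the one place where the interparticle interactions enter, which is the sense of the analogy drawn above with the Hamiltonian H in i \hbar \psi_t = H \psi.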
Also, the “proper” way to deal with interacting quantum systems is to couple them. For example, phonons and photons, by interacting, means that a proper treatment is to deal with waves that are half-phonon and half-photon and then quantise them yet again. This is completely different from how classical approaches tackle these problems. Yet again, I have to reiterate that, I am not saying that you cannot get good results from classical considerations. What I am saying, is that, due to how classical ideas actually arise from quantum fundamentals (namely, that everything classical tends to just be the conflation of modal [as in, most probable] behaviour as the _only_ [or mean, if you are talking about bulk stuff] possibility), the approximation schemes are doomed to complications for little gain. One of which, is the asymmetrical treatment of heat and sound. That is, even after you derive the heat and sound from the same underlying bulk motion of continuum mechanics, you still have to treat them separately, whereas quantum physics insists that they are _exactly_ the same thing, just different limits of the same _one_ term in any approximation scheme. It is the same thing with fluids. Very few physicists are dealing with Navier-Stokes equation itself, since it is now the preferred game of applied mathematicians. Instead of asking whether Navier-Stokes equations can have solutions for so-and-so kinds of problems, the physicists working on fluids tend to be working, instead, on the quantum corrections that should be added onto Navier-Stokes equations. After all, chaos sets in earlier than Navier-Stokes equations imply, because, near the critical points, modal behaviour is nowhere near the mean behaviour that we should have been focusing upon all this while. Sadly, this is so difficult that we have yet to do something fundamentally good about it. In that case, I am not saying that the corresponding classical problems are not important or not good at describing physical systems, but that the quantum world view is very different. And since the fundamental picture clearly needs to be quantum, I merely mean to say that those quantum considerations happen to be even more important than the classical problems. • Well, I got led on a merry chase and finally found what was causing the memory error. Turns out it was my fault all along… Rather than “the problem still persists” after the derivation, I would say that the problem ARISES in the derivation of transport equations from the fundamentals. The Boltzmann equation doesn’t have separate terms for heat, sound, bulk motion, viscous stress, etc., because it directly describes the molecular motion those things are emergent properties of. It’s not continuum mechanics either; it’s perfectly capable of describing rarefied gases and even free molecular flow. Of course quantum mechanics is a much better model than classical physics for condensed matter behaviour, and even some aspects of gas/plasma behaviour. I completely agree with you there. But I maintain that the specific criticism I was responding to, that of classical physics having an inherently fragmented picture of material mechanics, was not accurate, seemingly because of a mismatch in the fundamentality of the descriptions being compared. • Sorry, I don’t know why, the comment system won’t allow me to reply to yours. I see. That would be totally my ignorance, then. 
However, I would like to point out, to replace the original wrong argument, that the natural ideal gases that we are familiar with, are actually Fermi gases in the high temperature and low density limit. If that were not the case, we would run into what is known as the Gibb’s paradox, in which a classical gas, in the equations, somehow has a lot less pressure than expected. In particular, the ideal gas equation of pV = NkT, would miss out the N which is around 10^24. That makes no sense, until one realises that the quantum indistinguishability (which is basically quantum entanglements, really) needs to be taken into account. I hope that little bit, which basically states that, even for dilute gases in which we do not expect quantum effects to be important, turn out to be critically dependent upon quantum ideas nonetheless. Of course, the rest of the system does not require quantum corrections, and there is an easy fudge factor to fix that problem, but it does show how quantum theory is still a vital component of everyday life, not some esoteric correction that only people caring about precise effects can observe. (Which is the underlying point I really wanted to outline, although my choice of example turned out to be wrong.) Thanks. It seems, however, that it may be that the “classical atoms” view that is given by Boltzmann equation thus incorporates enough physics to reproduce the important things I was caring about. Interesting. • I hate to keep doing this, but… The Gibbs paradox has to do with the definition of entropy. If you don’t assume indistinguishability, you can toggle the entropy up and down by opening and closing a door between two identical reservoirs. You can get the correct pressure just fine with classical gas kinetics. But there are other things about gas dynamics that require quantum treatment. The temperature dependence of ideal-gas specific heat in multiatomic substances, for instance, is quite substantial and entirely due to the quantization of internal energy storage modes (at lower temperatures, there usually isn’t enough energy in a collision to excite these modes). • Or something like that – I had to look up Gibbs’ paradox, and I’m not completely sure my facile description above is right… • Nah, I know the classical gas kinetics can derive the pressure just fine. Why, indeed, I was just teaching my student that elementary derivation. But it does mean that both Boltzmann and Gibbs entropy cannot be derived from classical reasoning without the indistinguishability fudge factor. You would have to rely on Sackur–Tetrode entropy (removing all the quantum stuff and replacing them with an unknown constant, of course). It might not seem like a big issue at first glance, but it actually is. Other than the fact that entropy of mixing (that you were describing) had to be discontinuously and manually handled, it does also mean that stuff like phase changes go haywire. Again, that is useless to a fluid dynamicist until you want to deal with, say, ice-water mixtures or critical phenomena. Or worse, the theory is inconsistent. Judging by how seriously you take the mathematics, it is either screaming at you that you are doing something wrong, or that phenomenology needs to be used (by curve-fitting the unknown constant there, for one). Instead, what I wanted to impress upon you is that, instead of deriving the pressure from kinetic theory (actually, what a bad name! It is not a theory, nor does kinetic make sense as its modifier. 
Instead, classical atomic model would be its rightful name), it is possible to subsume the entirety of classical thermodynamics into the 2nd Law. That is, given the existence and some assumed properties of the entropy, you can construct everything you find in classical thermodynamics, even without statistical thermodynamics. That is, 0th Law and 1st Law, in particular, are theorems if you assume the 2nd Law to be your postulate. Actually, it is even a bit less — you assume parts of 2nd Law, and prove the full form of the 2nd Law with the assumptions. The issue I was referring to, is that, if you take this view, in which pressure is just a derivative of the entropy via Maxwell’s relations, and then you try to construct the statistical thermodynamics from it, you will run into Gibbs’ paradox. At the end, there is no need to worry about you dragging the conversation out. Actually, I was still waiting for some insights from you — you have already shown me wrong once, and there is no reason why you cannot teach me more. 5. I particularly like the statement: 6. Okay, physicist, most of the things in the video are not new to me, but good presentation. Commented, though, to point out that the “everything is named after Quantum” is an interesting recurring phenomenon in the USA. Perhaps the largest one was the use of “Radio” in naming things. Radio was the internet on steroids, the “tech stock” of the 1920s bubble. One of the most famous meaningless uses of Radio from the time was the little red wagon called a Radio Flyer. The company just put two hot buzz words together, and created a legendary product. 7. Pingback: The Webcomic Guide to Quantum Physics | Slackpile 8. Dear Jorge Cham, I enjoyed your cute animation. Since you said you were looking for ways to think about quantum mechanics, I thought the resource list below might be interesting. Please feel free to contact me with questions. David Liao One of my physics professors from Harvey Mudd College (half-hour east of Caltech) wrote a wonderful book on quantum mechanics for junior physics majors: John Townsend, A Modern Approach to Quantum Mechanics, University Press: Sausalito, CA (2000) (http://www.amazon.com/A-Modern-Approach-Quantum-Mechanics/dp/1891389785). The academic pedigree of this book comes through Sakurai’s Modern Quantum Mechanics. Get a hold of Professor David Mermin at Cornell. Tell him you are working with Caltech on this animation series, and ask him to walk you through his slides on Bell’s inequalities and the Einstein-Podolsky-Rosen paradox (http://www.lassp.cornell.edu/mermin/spooky-stanford.pdf). If you can meet with Sterl Phinney at Caltech, talk to him. He seems to know a lot about a lot, and he’s really fun to be around. Fundamental concepts: There is a variety of ways to introduce quantum mechanics. The following two flavors can be provide particularly satisfying insight: Path-integral formulation — A creative child can tell a bunch of different imaginary stories to explain how a particle got from situation A to another situation B during the course of a day. A mathematician can associate with each story a complex phasor. The phasors can be added (in a vector-like head-to-tail fashion) to obtain an overall complex number for getting from A to B, whose squared magnitude is the overall probability of getting from A to B. The concept of extremized action from classical mechanics (think of light taking the path of least time) is a limiting approximation of the quantum-mechanical path-integral formulation. 
For this brief description, I skipped a variety of details. This perspective is attributed to Richard Feynman. State vector, operators — An older, more traditional description of quantum mechanics centers around the state vector (often denoted |psi>). “All that can be described” about an entity of interest is hypothetically abstracted as a vector from a vector space of all possible descriptions that can be associated with the entity. It is hypothesized that the outcomes of measurements correspond to [real] eigenvalues of [Hermitian] operators that can act on the state vector, and that when it is appropriate to describe an entity using one single eigenstate of an operator, this means that observation corresponding that operator will without doubt yield the corresponding eigenvalue as the measured result. Note: State |psi> is *not* wavefunction psi(x). psi(x) = , which is a *representation* of |psi> in terms of linear weighting coefficients for adding up basis states |x>. Risky vocabulary: It is important to be aware of verbal shortcuts that are used to make quantum seem more conceptually accessible in the short term that, unfortunately, also make quantum much more difficult to understand fundamentally in the long term: There is no motion in any energy eigenstate (ground state or otherwise). Words such as “vibration” and “zooming around” are only euphemistically associated with any *individual* energy eigenstate. As an example, the Born-Oppenheimer approximation for solving the time-independent Schrödinger equation by separating the electronic and nuclear degrees of freedom is often justified using a story that involves the phrase “the light electrons are whizzing around as the nuclei faster than the massive nuclei are slowly vibrating around their equilibrium positions.” This is shorthand for saying that the curvature term associated with the nuclear coordinates is ignored as the first term in a perturbative expansion because it is suppressed by the ratio of the nuclear mass, M, to the electron mass, m, (for details, http://www.math.vt.edu/people/hagedorn/). Even though the Heisenberg relationship is often described using phrases such as “not knowing how we disturbed a particle by looking at it,” a more fundamentally satisfying understanding is obtained by seeing that some operators don’t commute. Because some pairings of operators, such as position and momentum, don’t share eigenvectors, it is impossible for an entity to simultaneously be in an eigenvector for one operator, say, x position, while also being in an eigenvector for the other operator, in this example, x momentum. Having the momentum well defined (being in an eigenvector for momentum) corresponds to being unable to associate one particularly narrow range of position eigenvalues with the entity. This is essentially the Fourier cartoon you used in the animation (narrowness in space corresponds to less specificity in frequency/wavelength and vice versa). Beware of popular reports of the experimental observation of a wavefunction. Pull up the abstract from the underlying peer-reviewed manuscripts. I bet that the wavefunction has not been directly observed. Instead, the squared-magnitude (probability distribution) has been inferred from a large collection of individual experiments. 
As an example, a recent work inferring the nodal structure (radii where probability of finding electron around an atomic core vanishes) became popularized as direct observation of the wavefunction, which is not the claim in the original authors’ abstract. • Hi David, By and large, a good write-up. But, still… 1. A minor point: Did you miss something on the right-hand side of the equality sign? In any case, guess you could streamline the line a bit here. 2. A major point: “There is no motion in any energy eigenstate.” And, just, how do you know? [And, oh, BTW, you could expand this question to include any other eigenstates as well.] Anyway, nice to see your level of enthusiasm and interest for these *conceptual* matters as well. Coming from a physics PhD, it is only to be appreciated. • Thank you for your reply. Hope the following is helpful! 1) Thank you for catching the typo in the sentence, “psi(x) = , which is a *representation* of |psi> in terms of linear weighting coefficients for adding up basis states |x>.” This sentence should, instead, read, “psi(x) is a *representation* of |psi> in terms of linear weighting coefficients for adding up basis states |x>.” I don’t know how to edit my post to correct this sentence. 2) You asked how it is possible to know that there is no motion in an energy eigenstate. Below, I include two ways to respond. The abstruse response is an actual answer and points to the insight you are seeking. If you look closely, you will see that the graphical response is not an actual answer. Instead, it is a fun exercise for “feeling the intuition” that energy eigenstates do not have motion. Both responses are important (many physicists enjoy both casual “proofs” and fluffy intuition). Abstruse response: We argue that an object that is completely described by one energy eigenstate has no motion. An energy eigenstate is a solution to the time-INDEpendent Schrödinger equation. It’s very “boring.” The only thing that happens to it, according to the time-DEpendent Schrödinger equation is, a rotation of its overall complex phase. This phase does not appear in expectation values, and so all expectation values are constants with time. To obtain motion, it is necessary to have a superposition of more than one state corresponding to at least more than one energy eigenvalue. In such circumstances, at least some of the complex phases will rotate at different time frequencies, allowing *relative* phases between states in the superposition to change with time. I am not claiming that experimental systems that people abstract using energy eigenstates will never turn out, following additional research, to have any aspect of motion. I am saying that the *abstraction* of a single energy eigenstate itself (without reference to whether the abstraction corresponds to anything empirically familiar) is a conceptual structure that contains no concept of motion (save for the rotating overall phase factor). The mathematics described above are very similar to the mathematics that describe the propagation of waves in elastic media. A pure frequency standing wave always has the same shape (though it might be vertically scaled and upside down). A combination of standing waves of different frequencies does not always maintain the same shape. Graphical response: Go to http://crisco.seas.harvard.edu/projects/quantum.html and play with the simulator. 
Now set the applet to use a Harmonic potential, and try to sketch, using the “Function editor,” the ground state from http://en.wikipedia.org/wiki/File:HarmOsziFunktionen.png You might want to turn on the display of the Potential energy function to ensure an accurate width for the state you are sketching. Run the simulation. Notice that the function doesn’t move very much (or, in the case that you sketched the ground state with perfect accuracy, it shouldn’t move at all). Now, sketch a different state that doesn’t look like any one of the energy eigenstates in the Wikipedia image above. This should generate motion (to some extent looking like a probability mound bouncing back and forth in the well). You can also look at the animations at http://en.wikipedia.org/wiki/Quantum_harmonic_oscillator and see that the energy eigenstate examples (panels C, D, E, and F) merely rotate in complex space (red and blue get exchanged with each other), but the overall spatial probability distribution is unchanged.

3) You asked whether one would assert absence of motion for other eigenstates. Not as a general blanket statement. The reason that energy eigenstates have no motion is that they are eigenstates, specifically, of the Hamiltonian. Yes, in some examples, it is possible for an eigenstate of another operator to have no motion (i.e. when that state is an eigenstate both of that other operator and of the Hamiltonian).

• Cool. Your abstruse response really wasn’t so abstruse. But anyway, my point concerning the quantum eigenstates was somewhat like this. To continue with the same classical mechanics example that you took, consider, for instance, a plucked guitar string. The pure frequency standing wave is “standing” only in a secondary sense—in the sense that the peaks are not moving along the length of the string. Yet, physically, the elements of the string *are* experiencing motion, and thus the string *is* in motion, whether you choose to view it as an up-and-down motion, or, applying a bit of mathematics, view it as a superposition of “leftward” and “rightward” moving waves.

The issue with the eigenstates in QM is more complicated, only because of the Copenhagen/every other orthodoxy in the mainstream QM. The mainstream QM in principle looks down on the idea of any hidden variables—including those local hidden variables which still might be capable of violating the Bell inequalities. They are against the latter idea, in principle—even if the hidden variables aren’t meant to be “classical.” Leaving aside a few foundations-related journals, the mainstream QM community, on the whole, refuses to seriously entertain any idea of any kind of a hidden variable—and that’s exactly the way in which the relativists threw the aether out of physics. … I was not only curious to see what your inclinations with respect to this issue are, but also to learn the specific points with which the mainstream QM community comes to view this particular manifestation of the underlying issue. In particular, do they (even if epistemologically only wrongly) cite any principle as they proceed to wipe every form of motion out of the eigenstates, or is it just a dogma? (I do think that it is just a dogma.)

Anyway, thanks for your detailed and neatly explanatory replies. … Allow me to come back to you also later in future, by personal email, infrequently, just to check out with you how you might present some complicated ideas esp. from QM.
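A minimal numerical sketch of the point at issue in this exchange (an expectation value such as <x> is constant in time for a single energy eigenstate but changes in time for a superposition), assuming a harmonic potential in natural units with hbar = m = omega = 1; the grid and the choice of states are illustrative only:

```python
import numpy as np

# Harmonic oscillator in natural units (hbar = m = omega = 1).
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

# First two normalized eigenfunctions and their energies E_n = n + 1/2.
psi0 = np.pi**-0.25 * np.exp(-x**2 / 2)                    # n = 0
psi1 = np.pi**-0.25 * np.sqrt(2) * x * np.exp(-x**2 / 2)   # n = 1
E0, E1 = 0.5, 1.5

def expect_x(psi_t):
    """Expectation value <x> for a complex wave function sampled on the grid."""
    return float(np.real(np.sum(np.conj(psi_t) * x * psi_t) * dx))

for t in [0.0, 1.0, 2.0, 3.0]:
    # Time evolution multiplies each eigenfunction by a pure phase.
    eigenstate = psi1 * np.exp(-1j * E1 * t)
    superpos = (psi0 * np.exp(-1j * E0 * t) + psi1 * np.exp(-1j * E1 * t)) / np.sqrt(2)
    print(f"t={t:3.1f}   <x> eigenstate: {expect_x(eigenstate):+.3f}   "
          f"<x> superposition: {expect_x(superpos):+.3f}")
```

The eigenstate column stays at zero for all times (only the overall phase rotates), while the superposition column oscillates as cos t, which is the “probability mound bouncing back and forth in the well” described above.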
(It’s a personal project of mine to understand the mainstream QM really well, and to more fully develop a new approach for explaining the quantum phenomena.)

• Ah, I see better where you are coming from. You are wondering what explanations someone might give for focusing on mainstream QM interpretations and de-emphasizing hidden variables perspectives. Off the top of my head, I can imagine what people might generally say. I can also rattle off a couple of thoughts as to why my attention does not wander much into the world of hidden variables.

Anticipated general responses

(0) I imagine usual responses would refer to Occam’s Razor and/or the Church of the Flying Spaghetti Monster. People might say that Occam’s Razor (or something along the same lines) is a fundamental aesthetic aspect of the Western idea of “science.” I am not saying these references directly address the most logically reasoned versions of the concerns you might be raising.

(0.1) I think some professional scientists are laid back about conceptual cleanliness. It doesn’t bother them enough to “beat” the idea of motion in eigenstates out of students in QM. I know a couple of professional scientists who are OK with letting students think that electrons are whizzing around molecules.

Personal thoughts

(1) I don’t necessarily “believe” mainstream QM in a religious sense, but it feels natural (for my psychology). My gut feelings of certainty about existence of things somewhat vanish unless I am directly looking at them, touching them, and concentrating with my mind to force them “into existence” through brutal attention. People like to sensationalize mainstream QM by saying that it has counterintuitive indeterminacy. At the end of the day, what offends one person’s intuitions can be instinctively natural for someone else. I hear that mainstream QM is also “OK” for people who hold Eastern belief systems (I’m atheistish, so I don’t personally know).

(2) Mainstream QM has a particular pedagogical value. It offers an exercise in making reasoned deductions while resisting the urge to rely on (some) inborn intellectual instincts. I think it’s good for learning that we sometimes confuse [1] the subjective experience of *projecting* a well-defined, deterministic mental image of the dynamics of a system onto a mental blank stage representing reality with, instead, [2] the supposed process of directly perceiving and “being unified with” reality. Yes, philosophy courses can be valuable too, but in physics you can also learn to calculate the photospectra* of atoms and describe the properties of semiconductors and electronic consumer goods.

* Surprisingly difficult to do in a fully QM treatment at the undergraduate level. Perturbing the atom with a classical oscillating electric field is *not* kosher. It’s much more satisfying to quantize the EM field.

Does any of this mean that mainstream QM is true? No. No scientific theory is ever “true” (quotation marks refer to mock hippie existential gesture).

David Liao

P.S. I am happy to share my email address with you–how do I do that? Does this commenting platform share my address (sorry, not used to this system)?

• Hi David,

1. Re. Hidden variables. Philosophically, I believe in “hidden variables” to the same extent (i.e. to the 100% extent) and for the same basic reason that I believe that a train continues to exist after it enters a tunnel and before it emerges out of the same.
Lady Diana *could* suffer an accident inside a tunnel, you know… (I mean, she would have continued to exist even after entering that tunnel—whether observed by those paparazzi or not. That is, per my philosophical beliefs…) Physics-wise, I (mostly) care for only those hidden variables which appear in *my* (fledgling) approach to QM (which I have still to develop to the extent that I could publish some additional papers). I mostly don’t care for hidden variables of any other specifically physics kinds. Mostly. Out of the limitations of time at hand.

2. Oh yes, (IMO) electrons do actually whiz around. Each of them theoretically can do so anywhere in the universe, but practically speaking, each whizzes mostly around its “home” nucleus.

3. About mysticism: Check out J.M. Marin (DOI: 10.1088/0143-0807/30/4/014). Mysticism was alive and kicking in the *Western* culture even at a time when Fritjof Capra was not even born. The East could probably lay claim to the earliest and also a very highly mature development of mysticism, but then, I (honestly) am not sure to whom the credit for its fullest possible development should go: to the ancient mystics of India, or to Immanuel Kant in the West. I am inclined to believe that at least in terms of rigour, Kant definitely beat the Eastern mystics. And that, therefore, his might be taken as the fullest possible development. Accordingly, between the two, I am inclined to despise Kant even more.

4. About my email ID. This should be human readable (no dollars, brackets, braces, spaces, etc.): a j 1 7 5 t p $AT$ ya h oo [DOT} co >DOT< in . Thanks.

• Entering this comment for the third time now (and so removing bquote tags)–ARJ

Hi David,

1. A minor point:

2. A major point: >> “There is no motion in any energy eigenstate.” And, just how do you know?

9. Great idea for doing this. Just a hint for getting more non-physicists involved: talk at least half as fast as you do; people need time to absorb and self-explain, otherwise no matter how simple it is, they lose you at the beginning.

10. Pingback: Quantum Matter Animated! | Astronomy physics an...

11. Pingback: Quantum Frontiers and Tuba! | Creative Science

• Mankei, Interesting. You seem to have been having fun thinking about this field for quite some time. Anyway, here are a couple of questions for you (and for others from this field): (i) Is it possible to make a mechanical oscillator/beam detectably interact with single photons at a time (i.e. statistically very high chance of only one photon at a time in the system)? [For instance, an oscillator consisting of the tip of a small triangle protruding out of a single layer of atoms, as in a graphene sheet? … I am just guessing wildly for a possible and suitable oscillator here.] Note, for single photons, it won’t be an _oscillator_ in the usual sense of the term. However, any mechanical device that mechanically responds (i.e. bends) would be enough. (ii) If such a mechanical device (say an oscillator) is taken “to” “0” K, does/would/will it continue to show the red/blue asymmetrical behavior? [Esp. for Mankei] What do you expect?

• (i) In theory it’s possible; there have been a few recent theoretical papers on “single-photon optomechanics” that explore what would happen, but experimentally it’s probably very, very hard. Current experiments of this sort use laser beams with ~1e15 photons per second. (ii) I have no idea what would happen then, because my math and my intuition always assume the laser beam to be very strong.
Other people might be able to answer you better.

• Hi Mankei,

1. Thanks for supplying what obviously is a very efficient search string. (The ones I tried weren’t even half as efficient!) … Very interesting results!

2. Other people: Assuming that the gradual emergence of the red-blue asymmetry with the decreasing temperatures (near the absolute zero) continues to be shown even as the *light flux* is reduced down to (say) the single-photon levels, then, how might Mankei’s current model/maths be reconciled with that (as of now hypothetical) observation? I thought of the single-photon version essentially only in order to remove the idea of “noise” entirely from the picture. If there is no possibility of any noise at all, and *if* the asymmetry is still observed, wouldn’t it form sufficient evidence to demonstrate the large-scale *quantum* nature of the mechanical oscillator (including the possibilities of a transfer of a quantum state to a large-scale device)? Or would there still remain some source of a doubt?

• Hi Mankei, We also thought about the issue you brought up in arxiv:1306.2699. See, for instance, a recent paper we published with Yanbei Chen and Farid Khalili (http://pra.aps.org/abstract/PRA/v86/i3/e033840). I would consider that our experiment measured both the sum AND difference of the red and blue sideband powers. The DIFFERENCE is indeed, as shown in your arxiv post mentioned above, due to the quantum noise of the light field measuring the mechanics. The noise power of the mechanics is in the SUM of the red and blue sidebands. Our experimental data was plotted as the ratio of the red and blue sidebands, which depends upon both the sum and difference of the sideband powers, and looks very different from what would be expected even for a semi-classical picture in which the light is quantized and the motion not.

• I guess we’ve already exchanged emails and come to a consensus, but just to recap, I agree that, through your calibrations, you’ve inferred zero-point mechanical motion and your result is consistent with quantum theory. The word “quantum” of course literally means something discrete, and one could argue you haven’t observed “quantum” motion yet, but that’d be nitpicking.

• And to clarify, the asymmetry itself is not proof of zero-point mechanical motion or anything quantum. The mechanical energy was obtained from the SUM of the sidebands (as Oskar said), and the asymmetry was used as a *calibration* to compare the mechanical energy with the optical vacuum noise.

• Hi Mankei, Thanks for your response. There are two main claims in your manuscript: 1) centers around the interpretation of our result, and 2) is a strong claim about classical stochastic processes being the source of our observed asymmetry. In response to 1), the different interpretations of the result (and in particular, the relation between the optical vacuum noise and the zero-point motion) have been considered previously in great depth by our colleagues at IQIM (Haixing Miao and Yanbei Chen) and in Russia (Farid Khalili).
I would like to point you to this paper: http://pra.aps.org/abstract/PRA/v86/i3/e033840. In response to 2), you claim to “show that a classical stochastic model, without any reference to quantum mechanics, can also reproduce this asymmetry”. We also consider this possibility in a follow-up paper which came out last year (http://arxiv.org/abs/1210.2671), where we show a derivation exactly analogous to what you’ve shown, and then go to great lengths to experimentally rule out classical noise as the source of asymmetry (by varying the probe power and showing that the asymmetry doesn’t change, and by carefully characterizing the properties of our lasers). More generally, there are fundamental limits as to what can be claimed regarding `quantum-ness’ in any measurement involving only measurements of Gaussian noise. To date there have been 5 measurements of quantum effects in the field of optomechanics, our paper being the first one (the others are Brahms PRL 2012, Brooks Nature 2012, Purdy Science 2013, and Safavi-Naeini Nature 2013 (in press)). Unfortunately, all of these measurements are based on continuous measurement of Gaussian noise. There are several groups working hard on observing stronger quantum effects (as O’Connell Nature 2010 did in a circuit QED system), but we are still some months away from that. Best, Amir

• Actually, I’d like to make that 6 papers – last week Cindy Regal’s group released this beautiful paper on arXiv: http://arxiv.org/abs/1306.1268. Here as well, the `quantum-ness’ can only be inferred after careful calibration of the classical noise in the system, since the measurement is based on continuous measurement of Gaussian noise.

• Actually, I’d like to make that 7 papers – I forgot about the result from 2008 from Dan Stamper-Kurn’s group: Murch, et al. Nature Physics, 4, 561 (2008).

12. Pingback: Quantum Matter Animated! | Space & Time | S...

13. Pingback: Quantum Matter Animated! | Far Out News | Scoop.it

14. Pingback: Quantum Theory and Buddhism | Talesfromthelou's Blog

15. I get very annoyed whenever somebody uses the phrases “quantum jump” or “quantum leap” to imply a BIG change in some domain (such as “our new Thangomizer represents a quantum jump in Yoyodyne’s capabilities”). A quantum jump is the SMALLEST POSSIBLE state change in quantum mechanics, so when somebody claims their product represents a “quantum leap,” I mentally translate that as “smallest possible degree of incremental improvement over their previous product!”

16. Pingback: My comments at other blogs—part 1 | Ajit Jadhav's Weblog

17. Is it that a higher red shift and a lower blue shift indicate constant shrinking of the mirror? If that is true, then do we expect the red shift to die down, say, if we keep the mirror at 0 K for a long enough time?

18. Pingback: Squeezing light using mechanical motion | Quantum Frontiers

19. Pingback: The Most Awesome Animation About Quantum Computers You Will Ever See | Quantum Frontiers

20. Pingback: Hacking nature: loopholes in the laws of physics | Quantum Frontiers

21. Pingback: Human consciousness is simply a state of matter, like a solid or liquid – but quantum | Tucson Pool Saz: Tech - Gaming - News

22. Pingback: This Video Of Scientists Splitting An Electron Will Shock You | Quantum Frontiers
The Integration of Knowledge

ARTICLE | BY Carlos Blanco

The exponential growth of knowledge demands an interdisciplinary reflection on how to integrate the different branches of the natural sciences and the humanities into a coherent picture of world, life, and mind. Insightful intellectual tools, like evolutionary biology and neuroscience, can facilitate this project. It is the task of philosophy to identify those fundamental concepts whose explanatory power can illuminate the thread that leads from the most elementary realities to the most complex spheres. This article aims to explore the importance of the ideas of conservation, selection, and unification for achieving this goal.

We live in a fascinating time for the integration of knowledge. In particular, we have developed three great theoretical pillars whose immense explanatory power is destined to contribute to the unification of knowledge, a goal sought by so many visionary minds throughout the centuries: fundamental physics, evolutionary biology and neuroscience.

1. Physics, Biology, and Neuroscience

Physics has accomplished the feat of condensing the structure of the universe into a succinct elenchus of equations, such as the field equations of general relativity and the Schrödinger equation. It has not discovered the equation that rules the complete description of the universe, but it has notably approached this titanic dream; a utopia illusory for many, yet unquestionably legitimate.

Physics is built upon two fundamental models: general relativity and quantum mechanics. We do not know how to harmonize these two divergent pictures of reality. General relativity offers a geometrical theory of gravitation, where the idea of relativity of all inertial frames of reference is generalized to cover accelerated frames of reference. It has led to the formulation of covariant equations whose sophisticated mathematical expression—through the language of tensor calculus—has given us the finest, deepest, and most rigorous description of the large-scale structure of the cosmos. According to the theory, gravity emerges as the effect of the geometry of space-time, as the result of the curvature produced by the presence of a density of energy and momentum. However, for understanding the three remaining fundamental forces of nature, quantum mechanics has proven uniquely powerful. Unlike general relativity and its geometrical image of force, quantum mechanics recapitulates our understanding of the physical world through a theory of fields in which the force is mediated by a set of elementary particles of bosonic nature.

The 20th century has therefore seen a formidable extension of the unifying power of the human mind. Major advances in the domain of the physical sciences have stemmed from the epistemological questioning of their basic concepts. Neither the work of Einstein nor the developments in quantum physics can be fully grasped without the examination of this profound immersion, with vivid philosophical resonances, into the fundamental categories of physics and the logical criteria required to stipulate a meaning for our notions about the objects of experience. With Einstein, ideas like space, time, simultaneity, and privileged states of motion underwent an exhaustive interrogation. This reflects a search for concepts that could be unambiguously assigned to the properties observed in the course of experiments.
An analogous comment can be made about Heisenberg, whose famous Uncertainty Principle (a humbling truth for humankind) is the fruit of a careful revisiting of the meaning of basic kinematic and mechanical concepts. This criticism of our intuitive notions has triggered key theoretical—and therefore also practical—advances, propitiating the fusion of pure thought and empirical knowledge. It constitutes the most faithful reproduction of the intimate functioning of a human mind in its restless quest for unification. Biology, the science that tries to understand the world of life, bestows upon us a wonderful unifying tool: the theory of evolution. This model unifies ecological, morphological, and genetic knowledge about living beings. Through the lenses of evolution, the elucidation of the history of life allows us to delve into the structure and explore functioning of biological entities. Neuroscience is on its way to developing a unifying instrument of immense power and amplitude: the scientific understanding of mind. From the level of the nerve cells to the sphere of the activity of the brain as a whole (the synchronization of its different regions), progress has been steady, though insufficient. As soon as we understand how the mind works, the origin of its abilities and the scope of its capacities, we shall be ready to unify the domain of the Humanities, a goal which until very recently seemed unattainable for science, as if it were fragmented in irreconcilable approaches and inimical cultures. Through a neuroscientific theory of mind we will be able to examine the source of the human being’s symbolic creations. This task will contribute to building the neuroscientific foundations for the study of society, law, religion, and art. 2. Conservation, Selection, Unification One of the neuralgic principles of reality elucidated by the physical sciences refers to the idea of conservation of certain quantities in the processes experienced by the objects of nature. According to Noether’s theorem, we know that any differentiable symmetry is associated with a law of conservation. The most important concept used to express this principle of the working of nature is action, perhaps the most relevant and profound of all physical categories. Invariance under time translation yields the principle of conservation of energy; invariance under space translation yields the principle of conservation of momentum; invariance with respect to rotation yields the principle of conservation of angular momentum. In quantum physics, a gauge symmetry related to the conservation of charge has also been discovered. In summary, physics has unfolded principles of conservation which, from the realm of subatomic particles to the domain of thermodynamic systems, are capable of establishing laws of apparent inviolability (the status of the principle of conservation of energy in a cosmological scale is under discussion). “Knowing involves unifying, connecting, integrating that which is different on the basis of shared relations.” In biology, the category of selection is as important as the concept of conservation is in physics. Transmitted through the power of replication that living beings possess, variability is selected by the environment in accordance with its reproductive efficiency. If we ascend in the scale of material complexity and reach the universe of human consciousness, is it possible for us to identify a principle endowed with similar theoretical power? I believe that such a principle is the idea of unification. 
The conscious mind unifies the perceptions which it receives. The result is the integration of data susceptible to subjective assimilation. With the exception of some sensory systems (like the visual system), we do not know the precise mechanisms through which this phenomenon occurs, but we do know that the human mind holds the unusual privilege of unifying the multiplicity of the world through the filter of its rationality. This unitary grasping of reality (Kant’s “unity of apperception” in the ich denke) means the insertion of nature into logical patterns that consciously revert to the subject. It is one of the most remarkable progresses in the long path of evolution, for it represents the dawn of knowledge as the most powerful force of life and the pinnacle of its activities. Knowing involves unifying, connecting, integrating that which is different on the basis of shared relations. Behold the most genuine meaning of the Greek term logos and the philosophical scope of the verb legein since Thales and the pre-Socratics. 3. The Unity of Nature These three notions (conservation, selection, and unification) are not strictly discontinuous. Any hypothetical tripartition of the universe in matter, life, and consciousness obeys instructive and epistemological schemes, not reality as such, independent from the judgement of human intelligence. Along its history, nature has been capable of rising on its own from one level onto another, and this suggests a profound ontological continuity between all realms of reality. It is in fact possible to draw a narrow analogy between a principle like the law of stationary action in physics (the action integral of a particle will manifest extreme values—i.e. maximal or minimal—so that the value of action may be stationary) and the idea of natural selection, a mechanism that seeks an optimal point in the relationship between genetic variations and the surrounding environment. Also, to unify, the act of integrating perceptions in a unitary consciousness of external and internal reality can be contemplated as a simultaneous optimization in the value of the information coming from the world and the information elaborated by the subject, with the goal of reducing the boundless multiplicity of phenomena into the unity of the conscious being. An entity capable of extracting, from the copious concatenation of stimuli, information of greater value, more profitable and meaningful, is certainly more conscious of the world and its own being. “The integration of knowledge cannot seek to eradicate any trace of contingency or to reduce every explanation to a physical proposition, but should rather serve to expose the inextricable imbrication that binds all domains of reality.” The reduction of chemistry to physics has been accomplished, thanks to the quantum theory of orbitals. Our deep understanding of how electrons are distributed in atoms is illuminated by quantum principles like Pauli’s exclusion principle. Physics has therefore conferred upon human rationality an appropriate tool for understanding the periodic table of elements and the organization of chemical elements. The almost infinite universe of inorganic and organic reactions can be harmoniously inserted into the scientific view of the world that emanates from the physical sciences, from its small but powerful elenchus of laws and fundamental forces. 
This is one of the most admirable achievements of quantum mechanics: the complete explanation of the atomic structure of elements and the justification of their principal physical-chemical properties. With no need to incorporate theoretical principles of substantive newness, or principles that cannot be easily deduced from basic laws, physics has allowed for a fluid integration of the vast domain of chemistry. Evolutionary biology covers a new semantic field of science: life. Of course, it is based upon the fundamental laws of physics, mediated through chemistry (specifically, organic chemistry, which elucidates the structure of compounds like aminoacids and nucleic acids). However, it assumes a series of concepts which are virtually absent in the domains of physics and chemistry. These notions are essential for our understanding of life and its development. They are crystallized in the theory of evolution, a model of exceptional explanatory power. We should not forget, however, that we lack a complete theory of evolution. Research in the fields of genetics and epigenetics could actually lead to a substantial revision of some fundamental concepts of evolutionary biology. Nevertheless, as a paradigm, the evolutionary frame has not been surpassed, and it is highly improbable that it will be substantially overcome in the future, at least in its capital aspects. But just as classical physics was not suppressed by 20th century physics, which rather showed the limits of its approach and expanded its theoretical power, future progress in biology can actually broaden the scope of this science and enlarge its categories. The thread behind the transition from physical chemistry into biology has not been entirely elucidated, for we do not know how life flourished from inert organic matter. However, it is legitimate to hope that we shall soon solve this intricate problem. It is reasonable to think that life on Earth appeared by virtue of a set of chemical conditions which facilitated the creation of molecules susceptible to replication, whose increasing degrees of autonomy from the environment allowed them to induce certain metabolic reactions in the interior of cells. But in the absence of a fully convincing itinerary as to how inert matter conquered the domain of life, we still have to distinguish physics from evolutionary biology, even if a congruent framework with the scientific view of the universe clearly points to the existence of profound coherences and continuities between the inert and the living worlds. The impossibility of reducing the biological level to the physical-chemical level does not stem from an intrinsic prohibition but from the overwhelming complexity of the system. As soon as we unveil the origin of life, there is no de iure interdiction forbidding the unfolding of the fine thread connecting the world of chemistry and the realm of biology. Of course, the complexity of biological systems is not the sole result of their intrinsic elements but of a factor which becomes extremely relevant for biology: the effect of contingencies. The study of life demands knowing the prolix historical itinerary through which organisms have passed. History contains necessity but above all it is permeated with contingency. Only Laplacian intelligence could have foreseen the arrival of a meteor whose devastating consequence for most of living species triggered the massive Cretaceous extinction. Also, we know that there are unsurmountable uncertainties in the quantum scale. 
Therefore, the integration of knowledge cannot seek to eradicate any trace of contingency or to reduce every explanation to a physical proposition, but should rather serve to expose the inextricable imbrication that binds all domains of reality. This goal highlights the power of the human mind to perceive the fundamental principles behind the unity of such heterogeneous spheres. In considering history, we cannot override the shadow of contingency. However, we can understand the human constants that pervade spaces and times. Thanks to the scientific study of mind, it is possible to understand human motivations, their logic and—why not?—the seeds of their admirable creative capacity. This yields a fundamental framework for understanding great civilizations and the most sublime productions of the spirit. Even without exorcizing the specter of contingency, it is still feasible to identify the fundamental axes around which human action gravitates. In our days, this knowledge comes from the neurosciences. It is not utopian to dream of an explanation for the neurobiological bases of consciousness. Again, this goal does not exhaust the understanding of every specific consciousness, because this power of Homo sapiens is nurtured by sustained interaction with both the external and the internal environments. It is utterly impossible to reproduce every single detail that forms the vivid experiences of conscious subjects (we would need a rigorous replication of every physical and psychological condition in which this capacity is manifested, as if we were trying to draw a 1:1 scale map). But this deep obstacle does not prevent us from uncovering the neuroscientific foundations of consciousness, which probably lie in certain anatomical structures responsible for connecting perceptual and associative areas, like the claustrum and the superior longitudinal fasciculus. 4. The Integration Science is in possession of the most rigorous and universal language that the human mind has developed: mathematics. The progress of this discipline over the last few centuries, especially in the elucidation of its fundamental principles, its scopes and limits, has granted us an unsurpassed formalism for describing the structure and functioning of the universe. We know, however, that this depiction of reality cannot be complete for at least two reasons: first of all, these models tend to use the language of differential equations, while our knowledge of matter has revealed the discontinuity that exists in the fundamental levels of nature, in particular at a quantum scale. Secondly, the use of mathematical language compels us to draw a distinction between formal and material equality. When, in the field equations of general relativity, we find the number π and in the Schrödinger equation we contemplate the imaginary number i, it is clear that the notion of equality needs to be interpreted as the equivalence of pure objects of thought (abstractions which do not necessarily enjoy ontological independence in the realm of nature). The mathematical expression of physical categories represents the deepest and finest approach to the material universe conceived by the human mind, but only in an asymptotic limit, in whose ideality material objects fully converged with the pure objects of thought; it would be correct to say that one member of the equation is strictly equal to another. 
“Our mind, our logic, our intuition..., must be in a constant state of improvement through their interaction with reality, so that the deciphering of the basic axes of the universe will also unveil the true possibilities of human intelligence, of its logic and its language.” The indubitable advantage of mathematical language resides in its versatility, for it is flexible enough to cover the practical totality of natural registers. The invention of new mathematical tools throughout history is the best proof of this fruitful plasticity. This is the reason why the limits of thought do not inexorably seal the frontiers of being. Against Parmenides’ thesis, the realm of mind is eminently ductile and it can adapt itself, both in its language and its categories, to the pressing challenges posed by reality. We have even managed to expand the limits of our imagination. Before Cantor, it was generally accepted that infinity could not be properly scrutinized by reason. After Cantor, we have learned that different types of infinity exist and that we can have infinite sets which are numerable. The borders of thought have been wonderfully extended, helping us discover unexplored territories of both the real and the possible. Beyond the difficulties, it is admirable to reflect on the achievements of our Promethean longing for knowledge, in our indefatigable desire of grasping the vastness of the universe in the lightness of the concept. Every act of cognition is guided by logic, whose premises and operative rules articulate human reasoning. However, its quantitative expression has only reached an adequate expression in sciences like physics, chemistry and—to a lesser degree—biology. Attempts at extrapolating this language onto social studies have been successful only to a limited extent. But logic is equally applied regardless of the field of knowledge. A physicist’s mind is not governed by different logical rules compared to the mind of a philosopher. Any advance towards the improvement of our logical categories and the unveiling of their possibilities, their elasticity and foundation, will provide the human intellect with new and more acute tools for apprehending realms of reality which until now have remained beyond the scope of our knowledge. Of course, the struggle to integrate knowledge by founding the most complex realities upon the simplest ones cannot be claimed to exhaust our understanding of reality. The world will surely never cease to amaze us with unforeseen wonders, and blessings for our intellect. But the richness and inexhaustibility of the world do not prevent us from identifying the fundamental principles behind its vast and astonishing nature. Our mind, our logic, our intuition…, must be in a constant state of improvement through their interaction with reality, so that the deciphering of the basic axes of the universe will also unveil the true possibilities of human intelligence, of its logic and its language. About the Author(s) Carlos Blanco Professor, Comillas Pontifical University, Spain; Associate Fellow, WAAS
So, this crank John Gabriel exploded on the Mathmatical Mathematics Memes page on facebook recently, and he’s hilarious. Now, there’s cranks in every area of science of course; most notably in physics (quantum woo), biology (creationists), geology (creationists again), history (creationists again, holocaust-deniers), philosophy (theologians 😉) and of course medicine (alternative medicine, faith healing…), but in mathematics they happen to be rather rare – or at least there are few interesting ones. Or I just haven’t found their hiding place yet. But I suspect that’s because to be a crank you either have to flat-out lie to people (and what would be the point with math?) or 1. Not know enough about the subject to realize you’re wrong, while at the same time 2. think you know enough to boldly proclaim your wrongness to the public. I imagine that’s easier with e.g. physics, where people can read popular books dumbed down for a lay audience (and I don’t mean that in a derogatory way – I love pop science!) and come away thinking that they now know all the important stuff and can start drawing their own conclusions on the subject matter (Spoiler alert: No, you can’t. If you can’t solve a Schrödinger equation, you’re simply not qualified when it comes to quantum physics, period.) But with math I can imagine it being a lot harder to both think you understand something well enough to pontificate about it while at the same time not understanding it enough to realize your pontifications make no damn sense. John Gabriel manages to do both, and it’s fascinatingly weird. He’s the perfect embodiment of the Dunning-Krüger effect on steroids: He understands so little about modern mathematics that he doesn’t even realize how little he understands, and instead thinks he’s the only one who really gets how math works. In typical crank fashion he rails against “stupid academia” who get so hung up on useless concepts like “reason” or “making any sense whatsoever” that they just don’t realize what a genius he is. Or it could be that he’s just wrong and makes no fucking sense. It’s a toss-up. John, let me recite Potholer’s Trichotomy to you: If something in science doesn’t make sense to you, you have to conclude that either 1. Research scientists are all incompetent, or 2. they’re all in on a conspiracy to deceive you, or 3. they know something you don’t, and you need to find out what that is. Hint: Try option three first.Potholer54 Interestingly enough, I had read about Gabriel before – years ago on good math, bad math, where he ended up arguing with Mark Chu-Carroll about Cantor’s second diagonal argument. That article is from 2010, but apparently about a year ago Gabriel started a youtube channel, presumably in the hopes to bring more people to his more enlightened (i.e. nonsensical) side and to proclaim the fact that he invented a new calculus! That’s right, he has reinvented calculus, and his version is much better and simpler and it’s easy to understand for anyone open enough to abandon sense and rigor, unlike all those stupid academics. And given that I’ve just been made aware of his existence again, I figured I’d give it a go and dissect that guys videos, because 1. it’s fun (at least to me) and 2. it’s as good a reason as any to explain some of the stuff he gets wrong in some more detail, and any attempt to explain math to people is time well spent in my opinion. So let’s start with his first video: 1. The Arithmetic Mean This is just a short video on the arithmetic mean – i.e. the “average”. 
This isn’t as cranky as his other stuff, but it already gives a fascinating glimpse into the way Gabriel thinks. Now, as I said, the arithmetic mean is just the average of a bunch of numbers. We all know how to compute it, we all know why it’s useful – we all remember computing or getting told the average grade in exams, for example. And there is absolutely no reason why I mention that particular example. Here’s what Gabriel’s video description says about it: The arithmetic mean is one of the most important concepts in mathematics. While just about anyone knows how to construct an arithmetic mean, almost no one understands it. Right… the average of a bunch of numbers is really hard to grasp. I remember struggling with it in elementary school as well… no, wait, I didn’t. Maybe that’s just because I didn’t realize how awfully complicated it in fact is, after all, almost no one understands it. But Gabriel does, of course. To compute the arithmetic mean of a bunch of numbers, we just add them all up and divide the sum by how many numbers we had. In mathspeak: Definition: The arithmetic mean \overline{(a_n)} of a finite sequence of real numbers a_1, \ldots ,a_n is given by \displaystyle \overline{(a_n)}:=\dfrac{\sum_{i=1}^na_i}n . We’ve all done that for grades: Add up all the grades of all the students in an exam, divide the result by how many students there are and you get the average grade in that exam. Here, by contrast, is Gabriel’s “definition” (and yes, he means definition): An arithmetic mean or arithmetic average is that value which would represent all the elements of a set, if those elements are made equal through redistribution. The Arithmetic Mean (0:18) …now I don’t know about you, but… is that even a sentence? What does that mean? “That value which would represent all the elements of a set“? “If those are made equal…” …well, then the set only has one element, doesn’t it? (Sets have no multiplicity – either a number is in a set or it isn’t.) OK, at least then I can guess what he means by “represent”. But “through redistribution“? What does “redistribution” mean in this context? This is not a definition. This is at best a clumsy attempt at explaining a definition. But he actually calls this a definition, and he runs with it. So here’s a beautiful example of why definitions fucking matter. He goes on to explain, that you can compute the arithmetic mean by drawing squares. He demonstrates this with three sets of squares, the first one having one square, the second two, the third three. He moves one square from the last set to the first so that every set has two squares, thus “making them equal”, hence the arithmetic mean is two. Now at least one can understand what his so-called “definition” was supposed to mean, but the immediate problem now is: What if the total number of squares isn’t divisible by the number of sets you have? Then his “redistribution” attempt fails, so according to his definition there is no arithmetic mean in that case. But he also shows us how to compute it using “algebra“, by which he means arithmetic (pun intended – and yeah, he can’t even get that right) – i.e. summing up and dividing the result according to the definition I stated above. But that’s not what his definition says. See what I mean when I say this guy makes no sense? But yeah, he runs with it: A useful arithmetic mean is one where it makes sense to redistribute the values. Example: Three friends each need $2 to buy lunch. 
They decide to pool their money because one of the friends may not have enough. If the total they have is $6, then it’s evident there is enough money for all three to buy lunch. Redistribution is accomplished by sharing the money. A useless arithmetic mean is one where it makes no sense to redistribute the values. Example: The arithmetic mean of student grades in a given class is a senseless calculation because students cannot share their marks. Redistribution cannot be accomplished by sharing grades. The Arithmetic Mean (1:26)  …jupp. First, notice how no arithmetic mean appears in his first example. Anywhere. Something costs $2, three friends pool their money, they need at least $6. The conclusion I’m left to draw is, that a “useful arithmetic mean” is one which isn’t even used, despite the name. Quite counter-intuitive. However, the prime example for an average – namely the average grade in an exam, something everyone has seen hundreds of times in school – is, to him, a “useless arithmetic mean“, because students can’t share grades. How does that even make sense? And don’t think that’s just a term he’s introducing, and that he doesn’t mean the word “useless” in a literal sense. Listen to the derision in his voice when he talks about the “senseless computation“. Of course it makes sense to compute the average grade – it gives you a good baseline to compare your own grade to, a sense of how well you did in comparison to the others without needing to know everyone’s specific result (which are confidential, after all). It gives you a sense of how difficult the exam was, or how lenient it was graded. But no, that’s all meaningless because students can’t share grades. But also, why does this matter? Math is abstract, it doesn’t care how you apply it, what you apply it to and whether the result of that application still has any meaningful interpretation in the real world! Yeah, this is how Gabriel works in a nutshell: 1. He takes a mathematical concept with a proper definition which he either doesn’t know, like or understand (or any non-empty subset of the three), 2. he visualizes or interprets it in some vague way (“making things equal through redistribution“), 3. he insists on his ill-defined vague interpretation to be the actual definition (even though it’s hand-wavy, vague nonsense), 4. he labels everything outside of his vague interpretation as “meaningless” and therefore void and draws absurd conclusions from his “definition”, 5. he proclaims that he has found the ultimate real meaning of the mathematical concept and rails against stupid academia. It’s glorious in its arrogance and ignorance. (Next post on John Gabriel: Calculus 101 (Convergence and Derivatives))
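Just to make the contrast concrete, here's a quick Python sketch of my own (nothing Gabriel wrote): the standard definition of the arithmetic mean next to his "redistribution" picture, which only works out in whole squares when the total happens to be divisible by the number of values:

```python
from fractions import Fraction

def arithmetic_mean(values):
    """Standard definition: the sum of the values divided by how many there are."""
    return Fraction(sum(values), len(values))

def redistribute(values):
    """Gabriel-style 'redistribution' into equal whole squares.
    Only succeeds when the total is divisible by the number of sets."""
    total, n = sum(values), len(values)
    if total % n != 0:
        return None  # no whole-number redistribution possible
    return [total // n] * n

print(arithmetic_mean([1, 2, 3]))  # 2    -> redistribution works: [2, 2, 2]
print(redistribute([1, 2, 3]))     # [2, 2, 2]
print(arithmetic_mean([1, 2, 4]))  # 7/3  -> a perfectly good arithmetic mean...
print(redistribute([1, 2, 4]))     # None -> ...but no whole-square 'redistribution' exists
```

The standard definition happily returns 7/3; the "redistribution" picture simply has nothing to say in that case.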
Microwave billiards and quantum chaos

Hans-Jürgen Stöckmann (2010), Scholarpedia, 5(10):10243. doi:10.4249/scholarpedia.10243, revision #127194

Curator: Hans-Jürgen Stöckmann

Up to about 1990 the quantum mechanics of classically chaotic systems, shortly termed "quantum chaos", was essentially a domain of theory (Haake 2001). Only two classes of experimental results had been available at that time. First, there were the spectra of compound nuclei, giving rise to the development of random matrix theory in the sixties of the last century, and second, the experiments with highly excited hydrogen and alkali atoms in strong magnetic or strong radio frequency fields. The situation changed with the appearance of experiments using classical waves, starting with microwave billiards. The distinction between classical waves and matter waves is not of relevance in the present context, since all features touched upon in this article are common to all types of waves. This is why some authors prefer the term "wave chaos" to describe this field of research.

From classical to quantum mechanics

Figure 1: Classical trajectories in a circular (a) and a stadium (b) billiard.

Billiards are particularly well suited to illustrate the difficulties one faces with the concept of chaos in quantum mechanics. For a circular billiard the trajectory is regular (Figure 1(a)). There are two constants of motion, the total energy \(E\ ,\) and the angular momentum \(L\ .\) Since there are two degrees of freedom as well, the system is integrable, and the distance between two nearby trajectories increases linearly with time. The situation is qualitatively different for the stadium billiard (Figure 1(b)). There is only one constant of motion left, the total energy \(E\ ,\) and the distance between neighbouring trajectories increases exponentially with time. The stadium billiard is thus chaotic.

In quantum mechanics this distinction between integrable and chaotic systems no longer works. The initial conditions are defined only within the limits of the uncertainty relation \[ \Delta x\,\Delta p\ge \frac{1}{2}\hbar\,, \] and the concept of trajectories loses its significance. One may even ask whether quantum chaos exists at all. Since the Schrödinger equation is linear, a quantum mechanical wave packet can be constructed from the eigenfunctions by the superposition principle. There is no room left for chaos. On the other hand the correspondence principle demands that there must be a relation between linear quantum mechanics and nonlinear classical mechanics, at least in the regime of large quantum numbers. This defines the program of quantum chaos research, namely to look for the fingerprints of classical chaos in the quantum mechanical properties of the system.

Billiards are ideally suited systems for this purpose. The numerical calculation of the classical trajectories is elementary, and the stationary Schrödinger equation reduces to a simple wave equation \[ -\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2} \right)\psi_n=E_n\psi_n\,. \] The potential appears only in the boundary condition, \( \left.\psi_n\right|_S=0\ ,\) where \(S\) is the surface of the billiard.
In the absence of potentials the stationary Schrödinger equation is equivalent to the time-independent wave equation, the Helmholtz equation \[\tag{1} -\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2} \right)\psi_n=k_n^2\psi_n\,, \] where \(\psi_n\) now means the amplitude of the wave field. This opens the opportunity to study questions and to test theories, originally initiated by quantum mechanics, by means of classical waves. The boundary conditions for the classical and the corresponding quantum mechanical systems may differ, but this is not of relevance for the questions to be treated in this article.

Microwave billiards

Figure 2: Chladni figures for plates with various shapes mounted in the centre (Stöckmann 1992, Nat.Wiss. 79: 443).

The first experiment of this type dates back already more than 200 years. At the end of the 18th century E. Chladni developed a technique "to make sound visible" by decorating the nodal lines of vibrating plates with grains of sand (Chladni 1802). Figure 2 shows Chladni figures for three typical situations. The plates are fixed in the centre and were excited to vibrations by means of a loudspeaker. Figure 2(a) shows a typical pattern for a circular plate. In this case the integrability of the system is not perturbed by the mounting. One observes a regular pattern of nodal lines with many intersections. The next example in Figure 2(b) shows a rectangle. It is integrable, too, but now the integrability is slightly perturbed by the mounting, resulting in a curvature of the nodal lines and a partial conversion of crossings into anti-crossings. The last example in Figure 2(c) belongs to the class of Sinai billiards, a rectangle with a quarter circle excised from one of the corners. Now all nodal line crossings have completely disappeared, resulting in a meandering pattern of nodal lines. Two centuries after Chladni's discovery the study of nodal lines of chaotic plates has become a very active field of research again.

Figure 3: Microwave set-up to study spectra and wave functions (top), and a typical microwave reflection spectrum for a quarter-stadium (b=20 cm, l=36 cm) shaped resonator (bottom) (Stöckmann and Stein 1990, Phys. Rev. Lett. 64: 2215).

The first modern experimental billiard studies started with microwave resonators. Meanwhile the technique is used by several groups worldwide (see Section Further reading for more details). Figure 3(top) shows a typical set-up. The cavity is formed by a bottom plate supporting the entrance antenna, and by an upper part whose position can be moved with respect to the lower one. As long as a maximum frequency \(\nu_{\rm max} = c/2d\) is not exceeded, where \(d\) is the height of the resonator and \(c\) the velocity of light, the system can be considered as quasi-two-dimensional. In this situation the electro-magnetic wave equations reduce to the scalar Helmholtz equation (1), where \(\psi_n\) corresponds to the electric field pointing perpendicularly from the bottom to the top plate. Since the electric field component parallel to the wall must vanish, we have the condition \(\psi_n|_S = 0\) on the outer circumference \(S\) of the resonator. We have thus arrived at a complete equivalence between a two-dimensional quantum billiard and the corresponding quasi-two-dimensional microwave resonator, including the boundary conditions. As an example Figure 3(bottom) shows the reflection spectrum of a microwave resonator of the shape of a quarter stadium.
Each minimum in the reflection corresponds to an eigenfrequency of the resonator, and the depth of the resonance corresponds to the squared modulus \(|\psi_n(r)|^2\) of the wave function at the antenna position.

Figure 4: Wave functions in a stadium-shaped microwave resonator (Stein et al. 1992, Phys. Rev. Lett. 68: 2867). The figure shows \(\left|\psi_n(\vec{r})\right|^2\) in a colour plot.

By scanning the antenna through the billiard, \(|\psi_n(\vec{r})|^2\) may thus be spatially resolved. To get the sign as well, a transmission measurement to an additional fixed antenna has to be performed. Figure 4 shows a number of stadium wave functions obtained in this way. All wave functions show the phenomenon of scarring, meaning that the wave function amplitudes are not distributed more or less homogeneously over the area, but concentrate along classical periodic orbits. One could get the impression that scarred wave functions are dominant, but this is only true for the lowest eigenvalues. With increasing energy the fraction of scarred wave functions tends to zero.

For a quantitative description of the experiments, scattering theory, developed half a century ago in nuclear physics, has to be applied. Compared to nuclei, microwave billiards have a number of advantages: wavelengths are of the order of mm to cm, resulting in very convenient sizes for the resonators used, and all relevant parameters can be perfectly controlled. This is why a number of predictions of scattering theory have been tested not in nuclei but in microwave billiards (Mitchell et al. 2010).

Random matrices

In the middle of the last century little was known about the origin of the nuclear forces. Here one idea turned out to be extremely successful, notwithstanding its obviously oversimplifying nature: if the details of the nuclear Hamiltonian \(H\) are not known, let us just take its matrix elements in some basis as random numbers, with only some global constraints, e.g. by taking the matrix \(H\) symmetric for systems with, or non-symmetric Hermitian for systems without, time-reversal symmetry, and by fixing the variance of the matrix elements. Assuming basis invariance, the matrix elements can be shown to be uncorrelated and Gaussian distributed (Mehta 1991). The classical Gaussian ensembles are the orthogonal one (GOE) for time-reversal invariant systems with integer spin, the unitary one (GUE) for systems with broken time-reversal symmetry, and the symplectic one (GSE) for time-reversal invariant systems with half-integer spin. Here "orthogonal" etc. refers to the invariance properties of the respective ensembles.

Figure 5: Level spacing distribution for a Sinai billiard (a), a hydrogen atom in a strong magnetic field (b), the excitation spectrum of a NO\(_2\) molecule (c), the acoustic resonance spectrum of a Sinai-shaped quartz block (d), the microwave spectrum of a three-dimensional chaotic cavity (e), and the vibration spectrum of a quarter-stadium shaped plate (f) (taken from Stöckmann 1999). In all cases a Wigner distribution is found, though only in the first three cases are the spectra quantum mechanical in origin.

The quantity most often studied in this context is the distribution of level spacings \(p(s)\) normalised to a mean level spacing of one. For \(2\times 2\) matrices this quantity can be easily calculated, yielding for the GOE the famous Wigner surmise \[\tag{2} p(s)=\frac{\pi}{2}s\exp\left(-\frac{\pi}{4}s^2\right)\,. \]
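The surmise is easy to check numerically: drawing a large sample of random \(2\times 2\) real symmetric matrices with independent Gaussian entries (variance 1 on the diagonal, 1/2 off the diagonal), normalizing the eigenvalue spacings to unit mean, and histogramming them reproduces Eq. (2). A minimal Python sketch (the sample size and binning are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# 2x2 GOE matrices: diagonal entries ~ N(0, 1), off-diagonal entry ~ N(0, 1/2).
# Their eigenvalue spacing is s = sqrt((h11 - h22)^2 + 4*h12^2).
h11 = rng.normal(size=n)
h22 = rng.normal(size=n)
h12 = rng.normal(scale=np.sqrt(0.5), size=n)
s = np.sqrt((h11 - h22) ** 2 + 4 * h12 ** 2)
s /= s.mean()                      # normalize to mean spacing 1

# Compare the histogram of spacings with the Wigner surmise, Eq. (2).
hist, edges = np.histogram(s, bins=60, range=(0.0, 4.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
wigner = 0.5 * np.pi * centers * np.exp(-0.25 * np.pi * centers ** 2)
print("largest deviation from Eq. (2):", float(np.max(np.abs(hist - wigner))))
```

For this \(2\times 2\) ensemble the agreement is exact up to statistical and binning errors, so the printed deviation is small.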
(2) is still a good approximation with errors on the percent level (Haake 2001). Figure 5 shows level spacing distributions for a variety of chaotic systems, all exhibiting the same behaviour. Such observations were the motivation for the famous conjecture by Bohigas, Giannoni and Schmit (1984) that the spectra of completely chaotic time-reversal-invariant systems should show the same fluctuation properties as the GOE. The replacement of \(H\) by a random matrix means abandoning any hope of learning more about nuclei from the spectra beyond some average quantities such as the mean level spacing. But the loss of individual features in the spectra, on the other hand, suggests that it might be worthwhile to look for universal features common to all chaotic systems. This approach turned out to be extremely fruitful. It allowed results originally obtained for nuclei to be applied to many other systems, e.g. quantum-dot systems (Beenakker 1997) and microwave billiards (Stöckmann 1999); a minimal numerical check of the Wigner surmise (2) is sketched below.

Figure 6: Spectral form factor for the spectrum of a microwave hyperbola billiard (top), and for the subspectrum obtained by considering only every second resonance (bottom) (Alt et al. 1997, Phys. Rev. E 55: 6674).

In addition to the level spacing distribution, spectral correlations related to the spectral auto-correlation function \(C(E)=\langle \rho(E_2)\rho(E_1)\rangle-\langle \rho(E_2)\rangle\langle\rho(E_1)\rangle\) are considered, where \(\rho(E)\) is the density of states, \(E=E_2-E_1\ ,\) and the brackets denote a spectral average. Quantities often studied in the literature are the number variance and the spectral rigidity (Stöckmann 1999). Here another object shall be considered, the spectral form factor, which is obtained from the Fourier transform of the spectral auto-correlation function. Figure 6 shows an experimental illustration for a hyperbola microwave billiard. In the upper part of the figure the spectral form factor for the complete spectrum is shown. There is good agreement with the random matrix predictions for the GOE. This is consistent with the fact that microwave billiard systems are time-reversal invariant, and there is no spin. Spectra showing GSE statistics have not yet been studied experimentally, but there is the remarkable fact that GSE spectra can be generated by taking only every second level of a GOE spectrum (Mehta 1991). Exactly this had been done with the spectrum of the hyperbola billiard to obtain the spectral form factor in the lower part of the figure, which is in perfect agreement with the expected GSE behaviour.

Semiclassical quantum mechanics

Before the final establishment of quantum mechanics Bohr and Sommerfeld developed a technique, today known as semiclassical, to calculate the spectrum of atomic hydrogen. At that time Einstein argued that this approach must be a dead end, since semiclassical quantisation needs invariant tori in phase space, preventing a semiclassical quantisation of non-integrable systems. This was one of the rare cases where Einstein was wrong, though it needed half a century until Gutzwiller (1990) showed in a series of papers that chaotic systems, too, allow for a semiclassical quantisation.

Figure 7: Squared modulus \(|\hat{\rho}(l)|^2\) of the Fourier transform of the spectrum of a quarter Sinai billiard (a=56 cm, b=20 cm, r=7 cm). Each peak can be associated with a classical periodic orbit (Stöckmann and Stein 1990, Phys. Rev. Lett. 64: 2215).

For the density of states Gutzwiller's approach yields his famous trace formula.
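Before writing down the trace formula explicitly, it is worth a brief numerical aside on the random-matrix statistics of the previous section. The Python sketch below draws real symmetric (GOE) matrices, keeps the central part of each spectrum where the level density is roughly constant, rescales the spacings to unit mean, and compares the histogram with the Wigner surmise (2). Matrix dimension, ensemble size and binning are arbitrary illustrative choices.

```python
# Minimal sketch: nearest-neighbour spacing distribution of GOE matrices
# compared with the Wigner surmise (2). Matrix and ensemble sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
N, n_matrices = 200, 100
spacings = []
for _ in range(n_matrices):
    A = rng.normal(size=(N, N))
    H = (A + A.T) / 2.0                      # real symmetric (GOE-type) matrix
    E = np.sort(np.linalg.eigvalsh(H))
    E = E[3 * N // 8 : 5 * N // 8]           # central part: nearly constant density
    s = np.diff(E)
    spacings.extend(s / s.mean())            # crude unfolding: mean spacing -> 1

s = np.asarray(spacings)
hist, edges = np.histogram(s, bins=np.linspace(0.0, 3.0, 16), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
wigner = 0.5 * np.pi * centres * np.exp(-0.25 * np.pi * centres**2)
for c, h, w in zip(centres, hist, wigner):
    print(f"s = {c:4.2f}   GOE histogram = {h:5.3f}   Wigner surmise = {w:5.3f}")
```

With this reference point in mind we return to the trace formula.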
It becomes particularly simple in billiard systems, if the wavenumber \(k\) is taken as the variable. In terms of \(k\) the density of states reads \(\tag{3} \rho(k)=\rho_0(k)+\sum\limits_n A_ne^{\imath kl_n}\,. \) The first term varies smoothly with \(k\) and is given in its leading order by Weyl's formula \( \rho_0(k)=\frac{A}{2\pi}k\,, \) where \(A\) is the area of the billiard. The second term oscillates rapidly with \(k\ .\) The sum runs over all periodic orbits including repetitions. \(l_n\) is the length of the orbit, and \(A_n\) is a complex factor weighting the stability of the orbit. The periodic orbit sum (3) is divergent for real \(k\ ,\) and resummation techniques are needed to calculate the spectrum from the periodic orbits. But the inverse procedure, namely to extract the contributions of the different periodic orbits from the spectra, is straightforward. For billiards the Fourier transform of the fluctuating part of the density of states, \(\tag{4} \hat{\rho}_\mathrm{osc}(l)=\int\rho_\mathrm{osc}(k)e^{-\imath kl}\,dk=\sum_n A_n\delta(l-l_n)\,, \) directly yields the contributions of the orbits to the spectrum. Each orbit gives rise to a delta peak at an \(l\) value corresponding to its length, and a weight corresponding to the stability of the orbit. For illustration Figure 7 shows the squared modulus of the Fourier transform of the spectrum of a microwave resonator shaped as a quarter Sinai billiard. Each peak corresponds to a periodic orbit of the billiard. For the bouncing ball orbit, labelled by "1", three peaks associated with repeated orbits are clearly visible. The smooth part of the density of states is responsible for the increase of \(|\hat{\rho}(l)|^2\) at small lengths. Semiclassical quantum mechanics relates the spectrum to the classical periodic orbits of the system, i.e. to individual system properties. In view of this fact one may wonder where the universal features discussed in the random-matrix section above come in. Let us have another look at Eq. (4) to answer this question. For short orbits \(\hat{\rho}(l)\) exhibits a well-resolved length spectrum. This is the individual regime. But with increasing length the peaks become denser and denser, until they eventually cannot be resolved any longer. This is the universal regime. It needed more than 25 years of research to prove the equivalence between random matrix theory and semiclassical quantum mechanics in the universal regime explicitly. Occasionally people doubt whether chaotic systems really need an extra quantum-mechanical treatment. The Schrödinger equation after all gives exact results both for regular and chaotic systems. Why should one resort to old-fashioned techniques which were abandoned 80 years ago, after the development of "correct" quantum mechanics had been completed? The answer is simple: the numerical solution of the Schrödinger equation amounts to a black-box calculation, and the human brain is not adapted to performing Fourier transforms. This is why spectra such as the one shown in Figure 3 seemingly do not contain any relevant information. But the brain is extremely good at identifying paths and trajectories, and therefore the representation of the spectra in terms of classical trajectories, as shown in Figure 7, allows an immediate suggestive interpretation.

Figure 8: Snapshot of the pulse propagation in a dielectric quadrupole cavity made of teflon (length of the long axis l=113 mm). The upper figure shows the pulse intensity inside the teflon at the moment of strongest emission in a colour plot.
In addition the Poynting vector is shown in the region outside of the teflon. The lower figure shows the Husimi distribution of the pulse in a Poincarè plot. In addition the unstable manifold of the rectangular orbit is shown. See Media:movie.gif for the complete sequence (Schäfer et al. 2006, New J. of Physics 8, 46). All this is not just l'art pour l'art, as shall be demonstrated by one example. The relation between wave propagation and classical trajectories had become of practical importance for the optimization of the emission behaviour of microlasers. Again this shall be demonstrated by a microwave study. The upper part of Figure 8 shows the snapshot of the pulse propagation in a dielectric quadrupole resonator made of teflon. The pulse starts as an outgoing circular wave from an antenna close to the boundary in the lower part of the cavity, but already after a short time only two pulses survive circulating clock- and counter-clockwise close to the border. The figure shows a moment where there is a particularly strong emission to the outside. Contrary to intuition the strongest emission does not occur a the point of largest curvature. In the lower part of the figure the same situation is shown in a Poincarè plot, with the polar angle as the abscissa, and the sine of the incidence angle as the ordinate (in a Poincarè plot each trajectory is mapped onto a sequence of points representing the reflections at the boundary). The tongue-like structure is the instable manifold of the rectangular orbit. It had been obtained by pursuing a trajectory starting with a minute deviation from the ideal orbit. In addition the Husimi representation of the pulse is shown (a Husimi representation is a convenient tool to embed wave functions into the classical phase space). Now the observed emission behaviour can be explained. Teflon has an index of refraction of n=1.44 meaning a \(\sin\chi_{\rm crit}=0.69\) for the critical angle of total reflection. Thus the circulating pulses are trapped by total reflection. But whenever the critical line of total reflection is surpassed, there is a strong escape. This happens exactly in the region of the most pronounced tongues of the instable manifold of the rectangular orbit. Meanwhile "phase-space engineering" has become a standard tool in shape optimization of microcavities (Kwon et al. 2010). The number of activities on the transport of different types of waves, light, seismic waves, water waves, sound waves etc. through disordered media is steadily increasing, many of them based on theories and techniques originally developed in wave and quantum chaos. After several decades of basic research the time for applications has come. Most experimental examples presented in this article have been obtained in the author's group at the university of Marburg. I want to thank all my coworkers, in particular my senior coworker U. Kuhl. The experiments had been funded by the Deutsche Forschungsgemeinschaft by numerous grants, amongst others via the research group 760 Scattering systems with complex dynamics. • Beenakker, C. W. J. (1997). Rev. Mod. Phys. 69: 731. • Bohigas, O.; Giannoni, M. J. and Schmit, C. (1984). Phys. Rev. Lett. 52: 1. • Chladni, E. F. F. (1802). Die Akustik. Breitkopf und Härtel, Leipzig. • Gutzwiller, M. C. (1990). Chaos in Classical and Quantum Mechanics, Interdisciplinary Applied Mathematics, Vol. 1. Springer, New York. • Haake, F. (2001). Quantum Signatures of Chaos, 2nd edition. Springer, Berlin. • Mehta, M. L. (1991). Random Matrices. 2nd edition. 
Academic Press, San Diego.
• Mitchell, G. E.; Richter, A. and Weidenmüller, H. A. (2010). Random Matrices and Chaos in Nuclear Physics: Nuclear reactions. Rev. Mod. Phys. 82: 2845.
• Kwon, O.; An, K. and Lee, B. (Eds.) (2010). Trends in Nano- and Micro-Cavities. Sharjah, U.A.E.: Bentham Science Pub.
• Stöckmann, H.-J. (1999). Quantum Chaos - An Introduction. Cambridge University Press, Cambridge.
Further reading
• Heller, E. J. (1984). Bound-state eigenfunctions of classically chaotic Hamiltonian systems: Scars of periodic orbits. Phys. Rev. Lett. 53: 1515. In this paper the term "scar" had been introduced, and the phenomenon had been described for the first time.
• Kuhl, U.; Stöckmann, H.-J. and Weaver, R. (2005). Classical wave experiments on chaotic scattering. J. Phys. A 38: 10433. A discussion of the scattering aspects of wave chaotic systems.
• Nöckel, J. U. and Stone, A. D. (1997). Ray and wave chaos in asymmetric resonant cavities. Nature 385: 45. Here the relevance of the classical phase-space properties for the emission behaviour of microcavities had been pointed out for the first time.
• Richter, A. (1999). Playing billiards with microwaves - quantum manifestations of classical chaos. In: Hejhal et al.: Emerging Applications of Number Theory. The IMA Volumes in Mathematics and its Applications, Vol. 109. Springer, New York. A report on the microwave experiments of the Darmstadt group.
• Sirko, L.; Koch, P.M. and Blümel, R. (1997). Experimental identification of non-Newtonian orbits produced by ray splitting in a dielectric-loaded microwave cavity. Phys. Rev. Lett. 78: 2940. Microwave results on a generalization of Gutzwiller’s trace formula to ray splitting occurring in systems with sharp interfaces.
• Stöckmann, H.-J. (2007). Chladni meets Napoleon. Eur. Phys. J. Special Topics 145: 17. A report on Chladni's Paris visit 1809.
In addition to the groups in Marburg and Darmstadt represented by experimental results in the article, there are a number of other microwave laboratories listed here with representative publications. Only groups with at least two publications are mentioned:
• Kudrolli, A.; Kidambi, V. and Sridhar, S. (1995). Experimental studies of chaos and localization in quantum wave functions. Phys. Rev. Lett. 75: 822. The Boston group.
• So, P.; Anlage, S. M.; Ott, E. and Oerter, R. N. (1995). Wave chaos experiments with and without time reversal symmetry: GUE and GOE statistics. Phys. Rev. Lett. 74: 2662. The Maryland group.
• Hul, O.; Tymoshchuk, O.; Bauch, S.; Koch, P. and Sirko, L. (2005). Experimental investigation of Wigner's reaction matrix for irregular graphs with absorption. J. Phys. A 38: 10489. The Warsaw group.
• Barthèlemy, J.; Legrand, O. and Mortessagne, F. (2005). Complete S-matrix in a microwave cavity at room temperature. Europhys. Lett. 70: 162. The Nice group.
External links: Website of the Marburg quantum chaos group
See also: Bohigas-Giannoni-Schmit conjecture, Gutzwiller trace formula, Random matrix theory
VeloxChem is one of the software packages supported at ENCCS. Molecules are the fundamental building blocks of nature, and their microscopic properties determine the large-scale behavior of materials. Quantum chemistry deals with simulating and predicting such molecular properties by solving the Schrödinger equation, the fundamental equation of the microscopic quantum world, for the electrons in molecular aggregates. It relies heavily on efficient computational algorithms and on the availability of high-performance computational resources. VeloxChem is a new quantum chemistry code developed primarily at KTH in Stockholm. The code aims to provide implementations of many quantum chemical methods that can scale efficiently from laptops to next-generation high-performance computing clusters. The computational core of VeloxChem is written in modern C++ with bindings to Python; this hybrid language choice allows new algorithms to be prototyped quickly in the high-level language. VeloxChem can exploit the resources of large computational clusters through a hybrid MPI and OpenMP parallelization strategy. With the support of ENCCS, VeloxChem will be able to leverage emerging (pre-)exascale heterogeneous computational architectures, empowering large-scale simulations of chemical processes that are essential for the in silico design and tuning of chemical reactions and innovative materials. For more information on VeloxChem, visit the project website.
Advances in Physical Chemistry, Hindawi Publishing Corporation, vol. 2012, Article ID 164752, doi:10.1155/2012/164752.

Review Article: Constructing Potential Energy Surfaces for Polyatomic Systems: Recent Progress and New Problems

J. Espinosa-Garcia, M. Monge-Palacios, and J. C. Corchado, Departamento de Química Física, Universidad de Extremadura, 06071 Badajoz, Spain. Academic Editor: Laimutis Bytautas.

Copyright © 2012 J. Espinosa-Garcia et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Different methods of constructing potential energy surfaces in polyatomic systems are reviewed, with the emphasis put on fitting, interpolation, and analytical (defined by functional forms) approaches, based on quantum chemistry electronic structure calculations. The different approaches are reviewed first, followed by a comparison using the benchmark H + CH4 and the H + NH3 gas-phase hydrogen abstraction reactions. Different kinetics and dynamics properties are analyzed for these reactions and compared with the available experimental data, which permits one to estimate the advantages and disadvantages of each method. Finally, we analyze different problems with increasing difficulty in the potential energy construction: spin-orbit coupling, molecular size, and more complicated reactions with several maxima and minima, which test the soundness and general applicability of each method. We conclude that, although the field of small systems, typically atom-diatom, is mature, there still remains much work to be done in the field of polyatomic systems.

1. Introduction

At the heart of chemistry lies the knowledge of the reaction mechanism at atomic or molecular levels, that is, the motion of the nuclei in the potential field due to the electrons and nuclei. When the separation of the nuclear and electronic motions is possible, that is, within the Born-Oppenheimer (BO) approximation [1], the potential energy surface (PES) is the electronic energy (which includes electronic kinetic and Coulomb interaction energy) plus the nuclear repulsion of a given adiabatic electronic state as a function of geometry. The PES is then the effective potential energy for the nuclear motion when the system is in that electronic state. Our knowledge of chemical reactivity is based on quantum mechanical first principles, and the complete construction of the PES represents a very important challenge in theoretical chemistry. For small reactive systems (three or four atoms), the PES construction is a relatively mature field, although even today new surfaces are still being developed for atom-diatom systems [2–6] using high-level ab initio calculations. For instance, one of the latest atom-diatom surfaces has been developed in the present year by Jiang et al. [6] for the H + HBr hydrogen abstraction reaction. Ab initio calculations were performed based on the multireference configuration interaction (MRCI) method including Davidson’s correction, using augmented correlation-consistent polarized valence X-tuple zeta basis sets, with X up to 5, extrapolating the energies to the complete basis set (CBS) limit, and taking into account the spin-orbit correction. In order to define the complete configuration space for the BrH2 system, ab initio calculations were performed at more than 12000 geometries.
Then, a three-dimensional cubic spline interpolation was employed to yield the ground-state PES for this system. As can be seen, the computational effort (correlation energy + basis set + number of points) is enormous, and doing this for larger systems is still clearly prohibitive. The extension to larger reactive systems (more than 4 atoms) is an open and promising field, although computationally very costly. Thus, for polyatomic systems of n atoms, with 3n−6 internal degrees of freedom, if one needs m configurations for each degree of freedom, one would need a total of \(m^{3n-6}\) evaluations of the energy. For instance, taking a typical value of m of 10, for the H + CH4 hydrogen abstraction reaction \(10^{12}\) evaluations would be needed. This implies too much computer time to perform ab initio calculations with chemical accuracy (1 kcal mol−1) and becomes totally unaffordable if one aims to achieve spectroscopic accuracy (1 cm−1). The accuracy of the kinetics and dynamics description of a chemical reaction depends, aside from the quality of the PES, on the dynamics method used. If the motion of the nuclei on the PES is determined by the Hamilton equations in phase space, the dynamics methods are classical or quasiclassical trajectory (QCT) approaches [7], and if the system is described by the Schrödinger equation (time dependent or time independent), the dynamics methods are quantum mechanical (QM) [8]. Finally, variational transition-state theory with multidimensional tunneling contributions (VTST/MT) can be derived from a dynamical approach by statistical mechanics and provides an excellent and affordable method to calculate thermal rate and state-selected constants [9]. In sum, electronic structure theory, dynamics methods, and constructed potential energy surfaces are the keystones of the theoretical study of chemical reactivity. Recent years have seen a spectacular development in these strongly related disciplines, paving the way towards chemical accuracy in polyatomic systems and towards spectroscopic accuracy in smaller problems. The scope of this paper is the construction of potential energy surfaces for bimolecular gas-phase polyatomic reactive systems in their electronic ground state. Section 2 describes different methods of constructing potential energy surfaces, with special focus on the methodological approach developed by our research group. Section 3 presents a library of PESs available for the scientific community, and Section 4 presents some results for the H + CH4 benchmark polyatomic system and the H + NH3 reaction, which permits us to compare results from different methods. Section 5 is devoted to analyzing the treatment of the peculiarities found in some reactions that entail additional challenges to the construction of potential energy surfaces, such as spin-orbit coupling, molecular size, and the presence of various maxima and minima. Finally, Section 6 summarizes the main conclusions of the paper.

2. Constructing Potential Energy Surfaces

The construction of potential energy surfaces in reactive systems began with the dawn of quantum chemistry, when different quantum approaches were used, basically empirical or semiempirical. Nowadays, the data needed for this construction are obtained from high-level ab initio calculations, although density functional theory (DFT) methods are also widely used, especially in the case of larger molecular systems.
As is well known, and this special issue is a clear indication of it, the construction of PESs has a long tradition in theoretical and computational chemistry, and an exhaustive review of the literature is beyond the scope of the present paper (see, for instance, a recent review in [10]). Here we only highlight some important contributions with the aim that the reader will get an overall perspective on this wide field of research. The most straightforward procedure to describe a reactive system is from electronic structure calculations carried out “on the fly,” sometimes also called “direct dynamics” [11], where every time the dynamics algorithm requires an energy, gradient, or Hessian, it is computed from electronic structure calculations or molecular mechanics methods. The advantage of this approach is the direct application of electronic structure calculations to dynamics problems without intermediaries, but when chemical accuracy is needed, very high-level ab initio calculations are required, and hence the time and computational cost are very high. For polyatomic systems, such an approach is still prohibitive. A recent example is the direct dynamics trajectory study of the formaldehyde cation with molecular hydrogen gas-phase reaction [12]. Even using a modest MP2/6-311G(d,p) ab initio level, which yields a barrier only 0.5 kcal mol−1 below the benchmark value, the 9600 trajectories required about 2.5 CPU years on a small Linux cluster. In this regard, one has to keep in mind that QCT calculations require a large number of trajectories to adequately sample the events and obtain results that from a purely statistical point of view are tolerable, that is, with very small errors. Note that the time devoted to the QCT calculation itself is negligible on this time scale. The alternative to direct dynamics calculations is to construct a mathematical PES, that is, developing an algorithm that can provide the potential energy for any given geometry of the system without depending on “on-the-fly” electronic structure computations. In this sense, three basic approaches have been considered: fitting, interpolation, or analytical (defined by functional forms). In fact, the fitting and analytical procedures share the idea of a functional form, and this analytical function needs to be fitted to theoretical information. This classification is therefore arbitrary, and other possibilities can be found in the literature [10]. We think, however, that it can help the reader get a better general view of this field. The interpolation methods have been widely applied in the construction of potential energy surfaces, and different strategies have been developed [9, 10, 13–15]. Thus, Collins et al. [14, 16, 17] developed and applied a method for generating potential energy surfaces using a modified Shepard interpolation. The PES at any configuration \(Z\) is represented by a weighted average of Taylor series \(T_i(R)\): \[ V = \sum_{i=1}^{N_{data}} w_i(Z)\, T_i(R)\,, \] where \(N_{data}\) is the number of molecular configurations whose energy and its first (gradient) and second (Hessian) derivatives have been evaluated, \(w_i(Z)\) is the normalized weighting factor, \(R\) is a set of internal coordinates, and the Taylor series are expanded around the point \(i\): \[ T_i(R) = V\!\left(R^{(i)}\right) + \sum_{k=1}^{3n-6} \frac{\partial V}{\partial R_k}\left[R_k - R_k^{(i)}\right] + \frac{1}{2!} \sum_{k=1}^{3n-6}\sum_{j=1}^{3n-6} \left[R_k - R_k^{(i)}\right] \frac{\partial^2 V}{\partial R_k\, \partial R_j} \left[R_j - R_j^{(i)}\right]\,. \] The use of the energy and its first and second derivatives makes the method very robust, but computationally very expensive, especially if very high-level ab initio calculations are needed for a correct description of the reactive system. (A schematic one-dimensional version of this interpolation scheme is sketched below.)
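The following toy sketch, in Python, is only meant to make the structure of the weighted Taylor-series expansion concrete. A one-dimensional Morse curve stands in for the ab initio data, the reference geometries are arbitrary, and the simple inverse-distance weights are an illustrative choice rather than the weighting actually used in the modified Shepard scheme of Collins et al.

```python
# Minimal sketch of a modified-Shepard-type interpolation in one dimension.
# The "ab initio" data are faked with a Morse potential; energy, gradient and
# Hessian are known analytically at each reference point.
import numpy as np

def morse(r, De=0.2, a=1.2, re=1.4):
    """Model potential standing in for electronic-structure data."""
    x = np.exp(-a * (r - re))
    V = De * (1 - x) ** 2
    dV = 2 * De * a * x * (1 - x)
    d2V = 2 * De * a**2 * x * (2 * x - 1)
    return V, dV, d2V

r_data = np.array([1.0, 1.3, 1.6, 2.0, 2.6])           # reference geometries

def shepard(r, eps=1e-12, p=4):
    """Weighted sum of local second-order Taylor expansions (illustrative weights)."""
    w = 1.0 / ((np.abs(r - r_data) + eps) ** p)         # unnormalized weights
    w /= w.sum()
    V_interp = 0.0
    for wi, ri in zip(w, r_data):
        V, dV, d2V = morse(ri)
        T = V + dV * (r - ri) + 0.5 * d2V * (r - ri) ** 2   # Taylor series T_i
        V_interp += wi * T
    return V_interp

for r in (1.15, 1.45, 1.8, 2.3):
    exact, _, _ = morse(r)
    print(f"r = {r:4.2f}   interpolated = {shepard(r):7.4f}   exact = {exact:7.4f}")
```

In the real applications discussed next the data points are, of course, high-level ab initio energies, gradients, and Hessians in many dimensions, and the coordinates and weights are chosen much more carefully.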
For instance, very recently a new PES for the H + CH4 gas-phase reaction has been developed by Collins by interpolation of 30000 data points obtained with high-level ab initio calculations [18]. The ab initio calculations took about 600 days on a workstation with 8 CPU cores in total, while the evaluations of the potential values on all the grids for this reaction took 400 days. Three advantages of this method are the following: first, that the interpolated function agrees precisely with the values of the data points; second, that new electronic structure points may be incorporated in the process to improve the PES; third, that the PES is invariant to the permutation or exchange of indistinguishable nuclei. However, Shepard interpolation often yields slight oscillations in equipotential contours and vibrational frequencies (second derivatives of the energy); that is, the potential energy surface is not smooth enough to provide smooth changes in the vibrational frequencies. This is a limitation when one wants to compute tunneling effects at low energies, since the barrier to tunnel through is unrealistic. Fitting methods, like the interpolation methods considered above, have been widely used in the construction of potential energy surfaces [19–21], and they have been recently reviewed [10]. Thus, for instance, a widely used fitting procedure is based on the many-body expansion (MBE) method [22], extended by Varandas [23] in the so-called double many-body expansion (DMBE) method. The potential is given by a sum of terms corresponding to atoms, diatoms, triatoms, and tetra-atoms and a series of adjustable coefficients. Another fitting procedure has been developed and applied by Bowman et al. [24–27], in which the ab initio data are globally fitted to a permutational symmetry invariant polynomial. The function describing the PES has the form \[ V = p(x) + \sum_{i<j} q_{i,j}(x)\, y_{i,j}\,, \] where \(x\) is an n-dimensional vector, depending on the n internuclear distances, with components \(x_{i,j} = \exp(-r_{i,j})\), \(r_{i,j}\) being the distance between nuclei i and j, and \(y_{i,j}\) given by \[ y_{i,j} = \frac{e^{-r_{i,j}}}{r_{i,j}}\,, \] and \(p(x)\) and \(q(x)\) are polynomials constructed to satisfy the permutation symmetry with respect to the indistinguishable nuclei. For instance, for the CH5+ polyatomic system [28], the PES was least-squares fitted to 20 639 ab initio energies, obtained at the MP2/cc-pVTZ level. This fit contains 2303 coefficients and an rms fitting error of 51 cm−1. Bowman and coworkers have developed 16 such PESs for polyatomic systems, and they have been recently revised [27]. In general, the fitting approach is linear least squares, and therefore these surfaces do not exactly reproduce the ab initio data. The third alternative consists of the analytical surfaces defined by functional forms. In this method the ab initio data are fitted to a valence bond (VB) functional form, augmented with molecular mechanics (MM) terms which give great flexibility to the potential energy surface. These VB/MM functional forms have a long history in the development of potential energy surfaces [29–33], although at first the surfaces were semiempirical in the sense that theoretical and experimental data were used in the fitting procedure. This has been the methodological approach developed by our group [10, 34, 35]. The first surfaces were developed for the H + CH4 hydrogen abstraction reaction, as a paradigm of polyatomic systems [29–33].
The potential energy for a given geometry, V, is given by the sum of three terms: stretching potential, \(V_{stretch}\), harmonic bending term, \(V_{harm}\), and anharmonic out-of-plane potential, \(V_{op}\), \[ V = V_{stretch} + V_{harm} + V_{op}\,. \] The stretching potential is the sum of four London-Eyring-Polanyi (LEP) terms, each one corresponding to a permutation of the four methane hydrogens: \[ V_{stretch} = \sum_{i=1}^{4} V_3\!\left(R_{CH_i}, R_{CH_B}, R_{H_iH_B}\right)\,, \] where \(R\) is the distance between the two subscript atoms, \(H_i\) stands for one of the four methane hydrogens, and \(H_B\) is the attacking H atom. Although the functional form of the \(V_3\) LEP potential is well known, we will recall it for the sake of completeness: \[ V_3\!\left(R_{CH_i}, R_{CH_B}, R_{H_iH_B}\right) = Q(R_{CH_i}) + Q(R_{CH_B}) + Q(R_{H_iH_B}) - \left\{ \tfrac{1}{2}\left[J(R_{CH_i}) - J(R_{H_iH_B})\right]^2 + \tfrac{1}{2}\left[J(R_{H_iH_B}) - J(R_{CH_B})\right]^2 + \tfrac{1}{2}\left[J(R_{CH_B}) - J(R_{CH_i})\right]^2 \right\}^{1/2}\,, \] \[ Q(R_{XY}) = \frac{E_1(R_{XY}) + E_3(R_{XY})}{2}\,, \qquad J(R_{XY}) = \frac{E_1(R_{XY}) - E_3(R_{XY})}{2}\,, \] \[ E_1(R_{XY}) = D^1_{XY}\left\{\exp\!\left(-2\alpha_{XY}\left[R_{XY} - R^e_{XY}\right]\right) - 2\exp\!\left(-\alpha_{XY}\left[R_{XY} - R^e_{XY}\right]\right)\right\}\,, \] \[ E_3(R_{XY}) = D^3_{XY}\left\{\exp\!\left(-2\alpha_{XY}\left[R_{XY} - R^e_{XY}\right]\right) + 2\exp\!\left(-\alpha_{XY}\left[R_{XY} - R^e_{XY}\right]\right)\right\}\,, \] where there are 12 fitting parameters, four for each of the three kinds of bond, \(R_{CH_i}\), \(R_{CH_B}\), and \(R_{H_iH_B}\). In particular, these are the singlet and triplet dissociation energies, \(D^1_{XY}\) and \(D^3_{XY}\), the equilibrium bond distance, \(R^e_{XY}\), and the Morse parameter, \(\alpha_{XY}\). The Morse parameter for the \(CH_i\) bonds, \(\alpha_{CH}\), however, is not taken as a constant but rather as a function of the CH distances, \[ \alpha_{CH} = a_{CH} + b_{CH}\, \frac{\tanh\!\left[c_{CH}\left(\bar R - R^e_{CH}\right)\right] + 1}{2}\,, \] with \(\bar R\) being the average \(R_{CH_i}\) distance, \[ \bar R = \frac{1}{4}\sum_{i=1}^{4} R_{CH_i}\,. \] In this way, \(\alpha_{CH}\) changes smoothly from its value at methane, \(a_{CH} + (b_{CH}/2)\), to its value at the methyl radical, \(a_{CH} + b_{CH}\), as the reaction evolves. Therefore, 14 parameters are required to describe the stretching potential. One of the problems with this functional form was that the equilibrium C–H distances for the reactants, saddle point, and products are the same, leading to a very rigid surface. Chakraborty et al. [36], for the H + C2H6 reaction, included a modification to endow the surface with greater flexibility. The reference C–H bond distance is transformed smoothly from reactant to product using the following equation: \[ R^o_{CH} = P_1\, R^o_{CH,R} + (1 - P_1)\, R^o_{CH,P}\,, \] where \(P_1\) is \[ P_1 = \prod_{i=1}^{4} T_1(R_{CH_i})\,, \] which is symmetric with respect to all the four hydrogen atoms and goes to zero as one of the hydrogen atoms is abstracted, and \(T_1\) is a geometry-dependent switching function, given by \[ T_1(R_{CH_i}) = 1 - \tanh\!\left[w_1\left(R_{CH_i} - w_2\right)\right]\,, \] where \(w_1\) and \(w_2\) are adjustable parameters. Therefore, this adds 2 new parameters (total 16 parameters) to describe the stretching potential. (A minimal numerical sketch of this three-body LEP stretch term is given below, after the bending term has been introduced.) The \(V_{harm}\) term is the sum of six harmonic terms, one for each bond angle in methane: \[ V_{harm} = \frac{1}{2}\sum_{i=1}^{3}\sum_{j=i+1}^{4} k^0_{ij}\, k_i\, k_j \left(\theta_{ij} - \theta^0_{ij}\right)^2\,, \] where \(k^0_{ij}\) and \(k_i\) are force constants and \(\theta^0_{ij}\) are the reference angles. The \(k^0_{ij}\) force constants are allowed to evolve from their value in methane, \(k_{CH_4}\), to their value in methyl, \(k_{CH_3}\), which are two parameters of the fit, by means of switching functions: \[ k^0_{ij} = k_{CH_4} + k_{CH_4}\left[S_1(R_{CH_i})\, S_1(R_{CH_j}) - 1\right] + \left(k_{CH_4} - k_{CH_3}\right)\left[S_2(R_{CH_i})\, S_2(R_{CH_j}) - 1\right]\,, \] while \(k_i\) is a function of both the \(R_{CH_i}\) and \(R_{H_iH_B}\) distances: \[ k_i = A_1 \exp\!\left[-A_2\left(R_{CH_i} - R^e_{CH}\right)^2\right]\,, \qquad A_1 = 1 - \exp\!\left[-aa_1\left(R_{H_iH_B}\right)^2\right]\,, \qquad A_2 = aa_2 + aa_3 \exp\!\left[-aa_4\left(R_{H_iH_B} - R^e_{H_iH_B}\right)^2\right]\,. \] Thus, four adjustable parameters are involved in the definition of \(k_i\). The reference angles are also allowed to change from their value at methane, \(\tau = 109.47^\circ\), or \(\arccos(-1/3)\), to methyl, \(120^\circ\) or \(2\pi/3\) radians, by means of switching functions [37]: \[ \theta^0_{ij} = \arccos\!\left(-\tfrac{1}{3}\right) + \left[\arccos\!\left(-\tfrac{1}{3}\right) - \tfrac{\pi}{2}\right]\left[S_\varphi(R_{CH_i})\, S_\varphi(R_{CH_j}) - 1\right] + \left[\arccos\!\left(-\tfrac{1}{3}\right) - \tfrac{2\pi}{3}\right]\left[S_\theta(R_{CH_k})\, S_\theta(R_{CH_l}) - 1\right]\,. \]
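Stepping back to the stretching part for a moment: the three-body LEP term above is completely specified once the singlet and triplet Morse-type curves are given, so it is easy to evaluate numerically. In the Python sketch below all dissociation energies, Morse parameters, and equilibrium distances are invented round numbers chosen only for illustration; they are not the parameters fitted for any of the H + CH4 surfaces discussed in this paper. The bending-term switching functions are picked up again directly after this example.

```python
# Minimal sketch of the three-body LEP stretch term defined above. All numerical
# parameters (D1, D3, alpha, Re) are invented illustrative values, not the
# parameters of the fitted H + CH4 surfaces.
import numpy as np

def E1(R, D1, alpha, Re):
    """Singlet (Morse) curve."""
    return D1 * (np.exp(-2 * alpha * (R - Re)) - 2 * np.exp(-alpha * (R - Re)))

def E3(R, D3, alpha, Re):
    """Triplet (anti-Morse) curve."""
    return D3 * (np.exp(-2 * alpha * (R - Re)) + 2 * np.exp(-alpha * (R - Re)))

def QJ(R, p):
    """Coulomb-like (Q) and exchange-like (J) combinations for one bond."""
    D1, D3, alpha, Re = p
    e1, e3 = E1(R, D1, alpha, Re), E3(R, D3, alpha, Re)
    return 0.5 * (e1 + e3), 0.5 * (e1 - e3)

# invented parameters (D1, D3, alpha, Re) for the C-Hi, C-HB and Hi-HB pairs
par_CHi = (0.18, 0.05, 1.9, 1.09)
par_CHB = (0.18, 0.05, 1.9, 1.09)
par_HH  = (0.17, 0.06, 1.9, 0.74)

def V3(R_CHi, R_CHB, R_HH):
    """London (LEP) three-body term for one C-Hi...HB arrangement."""
    Q1, J1 = QJ(R_CHi, par_CHi)
    Q2, J2 = QJ(R_CHB, par_CHB)
    Q3, J3 = QJ(R_HH, par_HH)
    exch = 0.5 * ((J1 - J3) ** 2 + (J3 - J2) ** 2 + (J2 - J1) ** 2)
    return Q1 + Q2 + Q3 - np.sqrt(exch)

# crude collinear cut: the C-Hi bond breaks while the Hi-HB bond forms
for R_CHi, R_HH in [(1.09, 1.8), (1.2, 1.3), (1.4, 1.0), (1.8, 0.78)]:
    R_CHB = R_CHi + R_HH                     # collinear C-Hi-HB geometry
    print(f"R_CHi = {R_CHi:4.2f}  R_HiHB = {R_HH:4.2f}  V3 = {V3(R_CHi, R_CHB, R_HH):8.4f}")
```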
Finally, the switching functions are given byS1(RCHi)=1-tanhα1s(RCHi-RCHe)(RCHi-β1s)8,S2(RCHi)=1-tanhα2s(RCHi-RCHe)(RCHi-β2s)6,Sφ(RCHi)=1-tanh{Aφ(RCHi-RCHe)exp[Bφ(RCHi-Cφ)3]},Sθ(RCHi)=1-tanh{Aθ(RCHi-RCHe)exp[Bθ(RCHi-Cθ)3]}, involving 10 more adjustable parameters. In total, 16 terms need to be fitted for the calibration of the Vharm potential. The Vop potential is a quadratic-quartic term whose aim is to correctly describe the out-of-plane motion of methyl:Vop=i=14fΔijij=14(Δij)2+i=14hΔijij=14(Δij)4. The force constants, fΔi and hΔi, have been incorporated into a new switching function which is such that Vop vanishes at the methane limit and which directs the change of the methyl fragment from pyramidal to planar as the reaction evolves:fΔi=[1-S3(RCHi)]fΔ,hΔi=[1-S3(RCHi)]hΔ  ,S3(RCHi)=1-tanhα3s(RCHi-RCHe)(RCHi-β3s)2, with fΔ  , hΔ, α3s, and β3s being the only parameters of Vop that enter the fitting process. Δij is the angle that measures the deviation from the reference angle:Δij=acos((rk-rj)×(rl-rj)(rk-rj)×(rl-rj)riri)-θij0, where ri, rj, rk, and rl are vectors going from the carbon atom to the i, j, k, and l hydrogen atoms, respectively, and θij0 are the reference angles defined in (16). The first term to the right of (20) is therefore the angle between the CHi bond and a vector perpendicular to the plane described by the j, k, and l hydrogen atoms and centred at the j atom. To correctly calculate Δij, the motion from k to l has to be clockwise from the point of view of the i atom. In the case of the H + CH4 reaction the CH3 radical product is planar. What happens if the product presents a nonplanar geometry? In these cases of CX3 products, we have modified the reference angle θijo in (13) and (20). Thus, the original expression,θijo=τ+(τ-π2)[Sφ(RCXi)Sφ(RCXj)-1]+(τ-2π3)[Sϑ(RCXk)Sϑ(RCXl)-1], where τ  =109.47°, is replaced byθijo=τ+(τ-τ1)[Sφ(RCXi)Sφ(RCXj)-1]+(τ-τ2)[[Sϑ(RCXk)Sϑ(RCXl)-1], where τ2 is the bending angle in the non-planar product, and τ1 is related to τ2 by the expression, τ1=π-arcsin[sin(τ2/2)sin(π/3)]. In these cases of non-planar products, CX3, this correction would add new parameters in the fitting procedure. The PES, therefore, depends on at least 36 parameters, 16 for the stretching, 16 for the harmonic term, and 4 for the out-of-plane potential. These 36 parameters give great flexibility to the PES, while keeping the VB/MM functional form physically intuitive. In the case of the H + CH4 reaction the CH4 reactant presents Td symmetry. What happens if the reactant presents a different symmetry? For instance, ammonia presents symmetry C3v, characterized by an inversion mechanism through a planar structure with symmetry D3h. The functional form (5) used for the H + CH4 reaction cannot be applied without modification to any kind of system. Thus, when this potential is applied to the study of the H + NH3 → H2 + NH2 reaction, a major drawback was observed [38], namely that it wrongly describes the NH3 inversion reaction, predicting that the planar ammonia (D3h symmetry), which is a saddle point to the ammonia inversion, is about 9 kcal mol−1 more stable than the pyramidal structure (C3v symmetry). In the original expression for the H + CH4 reaction (5), the Vop term was added to obtain a correct description of the out-of-plane bending in the methyl radical, 580 cm−1. However, Yang and Corchado [38] noted that this term leads to unphysical behaviour along the ammonia inversion path. 
To avoid this drawback, our laboratory recently developed [39] a new PES where the \(V_{op}\) term is removed: \[ V = V_{stretch} + V_{harm}\,. \] Consequently, 16 parameters are required for the stretching terms and 16 for the harmonic bending terms of the PES, which were fitted to high-level ab initio CCSD(T)/cc-pVTZ calculations. In sum, starting from a basic functional form, this form must be adapted to each particular case, looking for the greatest flexibility and suitability for the problem under study. This has been the main aim of the modifications of the original functional form (5), described by (10)–(12), (22), (23), and (24). While this could represent a disadvantage with respect to the fitting or interpolation methods, because new functional forms must be developed in each case, it also represents an advantage, because simple stepwise modifications can give great flexibility to the surface. Once the functional form is available, the fitting procedure is started. A very popular approach for fitting a function is the least-squares method, which, using some local optimization algorithm, gives values of the parameters that minimize (locally) the function \[ R = \sum_{x} \left|E(x) - F(x,p)\right|^2\,, \] where \(E(x)\) is the ab initio energy associated with a particular molecular configuration specified by \(x\) and \(F(x,p)\) is the energy predicted by the analytical function at the same molecular configuration, which depends on a set of m parameters denoted as \(p\). One must note, however, that any fitting procedure has certain limitations. First, due to the large number of parameters, it is very hard to find a global minimum for the fit, which accurately describes the entire surface. Second, due to the nature of the linear least-squares method, the result is dependent on the initial parameters; third, since we use a mathematical approach without physical intuition, one usually obtains a number of distinct sets of mathematical parameters, all equally probable and good at reproducing the complete system. To make matters worse, as noted by Banks and Clary [40], while the differences between these parameter sets may be small, the dynamics information obtained from them can vary notably. This is especially true because small changes in the energy derivatives can cause large changes in dynamical properties of the PES. One therefore requires an extremely accurate fit to the topology of the given points to obtain an acceptable set of parameters. To solve at least partially some of the above problems, we adopt a different approach [41]. Firstly, we will try to obtain the values of the parameters that minimize the function \[ R = \sum_{x} w^e_x \left|E(x) - F(x,p)\right|^2 + \sum_{x} w^g_x \left|g(x) - \frac{\partial F(x,p)}{\partial x}\right|^2 + \sum_{x} w^H_x \left|H(x) - \frac{\partial^2 F(x,p)}{\partial x^2}\right|^2\,, \] where \(g(x)\) denotes the gradients (first derivatives of the energy), \(H(x)\) the Hessian elements (second derivatives of the energy), and \(w^e_x\), \(w^g_x\), and \(w^H_x\) are weights. In practice, the gradients enter the fitting process only at the stationary points, and the Hessian elements by means of the harmonic vibrational frequencies at selected points. Secondly, the choice of the number of points to be fitted is critical, and this strategy allows this number to be reduced drastically (see more details in the original paper [41]). We begin by choosing the stationary points (reactants, products, saddle point, and intermediate complexes) as data points. The geometry, energy, and frequencies are fitted and, thus, indirectly, the first and second derivatives of the energy. (A toy version of such a weighted fit is sketched below.)
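The following Python sketch mimics, on a deliberately trivial problem, the structure of the weighted objective function above: residuals in the energies and residuals in the gradients are stacked with different weights and minimized together. The "reference" data are generated from a model Morse curve rather than from ab initio calculations, and the weights are arbitrary choices; Hessian (frequency) residuals could be appended to the residual vector in exactly the same way.

```python
# Minimal sketch of a weighted least-squares fit to energies and gradients,
# in the spirit of the objective function above. The reference data come from
# a model Morse curve (standing in for ab initio points); weights are arbitrary.
import numpy as np
from scipy.optimize import least_squares

def morse(r, De, a, re):
    """Morse energy and its gradient with respect to r."""
    x = np.exp(-a * (r - re))
    return De * (1.0 - x) ** 2, 2.0 * De * a * x * (1.0 - x)

# "reference" data: energies on a grid, gradients only at selected points
r_e = np.linspace(0.9, 3.0, 12)
E_ref, _ = morse(r_e, 0.20, 1.25, 1.40)
r_g = np.array([1.1, 1.4, 2.0])
_, g_ref = morse(r_g, 0.20, 1.25, 1.40)

w_E, w_g = 1.0, 5.0        # arbitrary weights for the energy and gradient blocks

def residuals(p):
    De, a, re = p
    E_fit, _ = morse(r_e, De, a, re)
    _, g_fit = morse(r_g, De, a, re)
    return np.concatenate([w_E * (E_fit - E_ref), w_g * (g_fit - g_ref)])

fit = least_squares(residuals, x0=[0.1, 1.0, 1.2])
print("fitted (De, a, re):", np.round(fit.x, 4))   # recovers 0.20, 1.25, 1.40
```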
It is necessary to remember that if the information included was only the energy, a mesh of m3n-6 points would be required. However, when the information included is not only the energy at a stationary point but also the first and second derivatives, only one point is needed to reproduce each stationary point configuration. Moreover, additional points on the minimum energy path are included. Thus, with the inclusion of the stationary points and representative points on the reaction path, we search for a good reproduction of the topology of the path connecting reactants to products. Finally, we add the energy of a point not on the reaction path to describe zones of the reaction valley relevant to tunneling dynamics and rotational excitation of the products. The fitting procedure is currently automated on a computer with no user intervention except to analyze the results and, when called for, to change the weights. This strategy presents certain advantages over other methods. The first is transferability of the functional form and the fitted parameters. Since one can regard VB/MM as some kind of highly specific MM force field, it can be expected that some of the fitting parameters are transferable to a similar system, although obviously the parameter values are system specific. In our group we have used this feature to obtain, for example, the PES for the F + CH4 reaction [42] using as the starting point the analytical PES for H + CH4 [43]. Secondly, the PESs are guaranteed to some extent to have the capability of reproducing the energetic interactions of the chemical system in region not included in the fit. For example, high-energy collisions might require knowledge of the PES at very high energies that may not have been sampled in the fitting process. An interpolation method cannot ensure the absence of spurious wells or unphysical repulsive regions when nearby points are absent, while MM force fields can. Thirdly, VB/MM PES parameters can be refitted so as to fine-tune the PES using additional data (e.g., higher ab initio calculations at selected points) at a low computational cost. Furthermore, VB/MM surfaces and their energy derivatives (when they can be analytically calculated) are usually smoother than interpolated surfaces. As was mentioned above, Shepard interpolation sometimes gives discontinuities in the derivatives, which is a highly undesirable feature for kinetics and dynamics studies. Thus, our research group has developed economical alternatives for constructing analytical PESs of polyatomic systems, which basically are VB/MM-type surfaces. This strategy of seeking an optimal tradeoff of time and computational cost has been successfully used in several gas-phase hydrogen abstraction reactions of five [44, 45], six [42, 43, 4652], and seven [53] atoms. In general, a good correspondence between kinetics and dynamics theoretical results and experimental measurements is found. In the first phase of our research, theoretical and experimental information was used in the fitting procedure, making the analytical surface semiempirical in nature, which was a serious problem and a limitation in kinetics and dynamics studies. However, in the last few years, our surfaces have been fitted exclusively to very high-level ab initio calculations, avoiding the aforementioned limitations. 3. 
Library of Potential Energy Surfaces The FORTRAN codes and the fitted parameters of the polyatomic potential energy surfaces developed by our group are available, free of charge, for the scientific community, and can be downloaded from the POTLIB library [54]: http://comp.chem.umn.edu/potlib/ or http://users.ipfw.edu/DUCHOVIC/POTLIB2001, or obtained from the authors upon request. 4. Applications(a) The H + CH<sub>4</sub> Reaction The gas-phase H + CH4 → H2 + CH3 hydrogen abstraction reaction, as well as its deuterated isotopomers, is the prototype polyatomic reactive system and has been widely studied both theoretically and experimentally [10]. The construction of its PES has a long history, and it is one of the few reactive systems for which different approaches to the construction have been developed, which permits a direct comparison. We will focus attention, except otherwise stated, on the three most recent and accurate surfaces for this reactive system, which were constructed with different strategies. Chronologically, in 2006 Zhang et al. [55, 56] developed the family of ZBBi surfaces, using the invariant polynomial method, based on the fitting to more than 20000 ab initio energies at the RCCSD(T)/aug-cc-pVTZ level. In 2009 we developed [41] an analytical PES, CBE surface, which is a VB/MM functional form, and the 36 parameters are fitted using exclusively high-level electronic structure calculations at the CCSD(T)/cc-pVTZ level. Very recently, in 2011, Zhou et al. [18] constructed a full-dimensional surface using the modified Shepard interpolation scheme based on 30000 data points at the CCSD(T)/aug-cc-pVTZ level—the ZFWCZ surface. The energy, geometry and vibrational frequencies of the saddle point are summarized in Table 1 for the three surfaces. Comparing the wide range of properties, we conclude that the three surfaces present excellent agreement, with small differences in the CBE with respect to the other two surfaces in the <H–C–H′ bending angle, 3°, and in the ZBB3 compared to the other two surfaces in the imaginary frequency, 100 cm−1. The three surfaces present a colinear saddle point, 180.0°, with a similar barrier height, with small differences, ±0.2 kcal mol−1, that is, within the chemical accuracy, and very close to the best estimation predicted at the CCSD(T)/aug-cc-pVQZ level, 14.87 kcal mol−1 [57]. The best macroscopic measure of the accuracy of a potential energy surface is probably the rate constant, at least in the thermal bottleneck region. When we compare different dynamics methods using the same surface, the dynamics approach is tested, and when we compare theoretical and experimental results, both the dynamics method and the surface are tested. Figure 1 plots the thermal rate coefficients computed with accurate fulldimensional quantum dynamics approaches, on the analytical CBE surface; a very accurate interpolation surface developed by Wu et al., WWM surface [5759], which used the modified Shepard interpolated method developed by Collins et al., based on CCSD(T)/cc-pVQZ or CCSD(T)/aug-cc-pVQZ ab initio calculations; and a fitted surface, named ZBB2, an earlier version of the ZBB3 surface developed by Bowman et al. In the same figure, there also appear the results obtained with the variational transition-state theory with multidimensional tunneling effect [41] and the experimental data [60] for comparison. First, with quantum dynamics approaches, the rate coefficients obtained with the three surfaces agree almost perfectly in the common temperature range. 
Second, when the rate coefficients are obtained using different dynamics approaches, VTST/MT and MCDTH (multiconfigurational time-dependent Hartree approach [58, 59]) on the same CBE surface, excellent agreement is found, and both reproduce the experimental information in the common temperature range. These results indicate, first, that VTST/MT is a powerful and computationally economic tool for the kinetics study of polyatomic systems, with results comparable to those obtained with computationally more expensive quantum dynamical methods. Second, the CBE surface is accurate at least in the region of low energies, which is the most relevant region for thermal rate coefficient calculations and, therefore, provides a satisfactory description of the reaction path and the transition-state region. Next, we analyze some dynamics properties where different surfaces have been used and compared with the sparse experimental data. We focus on the H + CD4 → HD + CD3 reaction because there is more experimental information available for comparison. The product energy partitioning has been experimentally obtained by Valentini’s group. [61] only for the vibration and rotation of the HD product, and this same group found that more than 95% of the HD product is formed in the v=0 and v=1 vibrational states. These results appear in Table 2, together with the QCT results on the analytical CBE and the fitted ZBB1 [55] surfaces. The QCT calculations on these two PESs show excellent agreement for the HD product energy partitioning as well as for the HD vibrational distribution. In this latter case, the QCT results reproduce the experimental evidence, but, in the first case, they strongly contrast with the experimental measurements, which measure an internal excitation of the HD product of 7% and 9% for vibration and rotation, respectively. The agreement between the results from the two surfaces leads us to think that the discrepancies with experiment are mainly due to the dynamical method, that is, to limitations of the QCT approach. Experimental problems, however, cannot be totally ruled out, as was recently observed by Hu et al. [62], who suggested that the conclusions from Valentini et al.’s CARS experimental study might need to be reinterpreted. The product angular distribution is, doubtless, one of the most sensitive dynamics features with which to test the quality of the potential energy surface, but experimentally it is very difficult to measure in some cases. When the Photoloc technique is used, the laboratory speed depends on both the scattering angle and the speed of the CD3 product, which is influenced by the HD coproduct internal energy distribution. Uncertainties in this quantity could produce errors in the scattering angle. Camden et al. [63] reported the first study of the state-to-state dynamics differential cross-section at high energies (1.95 eV) for the H + CD4 gas-phase reaction using the Photoloc technique. They found that the CD3 products are sideways/forward scattered with respect to the incident CD4, suggesting a stripping mechanism (note that in the original papers [6365] the CD3 product is measured with respect to the incident H). Later, this same laboratory [64, 65] reported new experimental studies, also at high energy (1.2 eV), finding the same experimental behaviour. Experimentally, state-to-state dynamics studies are difficult to perform at low energies for the title reaction, because the H atoms, which are produced in a photolysis process, are hot. 
Only very recently have Zhang et al. [66] reported crossed molecular beam experiments for the H + CD4 reaction at lower collision energies, ranging from 0.72 to 1.99 eV. Note that the lower value, 0.72 eV, is close to the barrier height, and consequently its dynamics will be influenced mainly by the transition-state region. Figure 2(a) plots these experimental results. A backward angle is clearly observed at low energies (rebound mechanism), changing towards sideways when the energy increases (stripping mechanism). This is an excellent opportunity to test the quality of the PES and the dynamics methods (Figures 2(b)2(d)). Only two surfaces have been used to study this problem theoretically: an analytical surface from our group (versions 2002 [43, 67] and 2008, labeled here as CBE [41, 68]), using both QCT and QM calculations, and an “on-the-fly” B3LYP/631G(d,p) density functional theory surface [65], using QCT calculations. At low energies, 0.7 eV, our analytical surfaces, PES-2002 and CBE, using QCT and QM methods (Figures 2(b) and 2(c)), show backward scattering, associated with a rebound mechanism, reproducing the recent experimental data [66]. The B3LYP “on-the-fly” surface using QCT calculations (Figure 2(d)) yields a more sideways scattering, with large uncertainties, in contrast with experiment. These differences could be due to the poor statistics on the B3LYP surface [65] and to the severe underestimation of the barrier heights, about 5 kcal mol−1 lower than the best ab initio calculations. This low barrier artificially permits reactive trajectories with larger impact parameters, favouring the sideways scattering region. When this error in the barrier height is corrected, our PES-2002 and, more noticeably in the better CBE surface, the low impact parameters are favoured, and this explains the rebound mechanism. Note that, interestingly, this experimental behaviour was already predicted by our group in 2006 [67] before the experimental data were available. At higher collision energies, as they increase from 1.06 to 1.99 eV, Zhang et al. [66] found a shift of the product angular distribution from backwards to sideways. This behaviour agrees qualitatively with the previous observations of Camden et al. [6365], although these latter workers found a clear sideways distribution at 1.21 and 1.95 eV, with practical extinction of the backward signal. This may have been due to the use of the Photoloc technique, which neglects the internal energy contribution of the HD co-product, and, as the authors themselves recognized in 2006, “clearly a more detailed picture of the differential cross section is desirable but it will have to await more experimental work.” The QCT results using the analytical CBE surface (Figure 2(b)) reproduce the new experimental evidence. QM calculations on the same CBE surface [68] (Figure 2(c)) give more sideways scattering than the QCT calculations and experiment, although it is not clear whether this is due to significant quantum effects or simply are an artifact of the reduced dimensionality approach in the QM calculations. Note that QCT calculations on the old PES-2002 surface also predicted the subsequently experimentally observed behaviour [66]—backwards-sideways scattering. However, they contradicted the experimental information available at the time the study was carried out—in 2006. 
In addition, since direct-dynamics QCT calculations at the B3LYP/631G(d,p) level showed sideways scattering, reproducing the experimental measurements [6365], the validity of the analytical PES-2002 was questioned. The recent experiments of Zhang et al. [66], however, changed the whole picture and now the direct-dynamics QCT calculations at low collision energies are questioned because of, as noted above, the presence of reactive trajectories with erroneously large impact parameters due to the underestimation of the barrier height. In the case of higher collision energies, however, this error in the barrier should be of less concern than at lower energies. Very recently, Zhou et al. [18] have performed an exhaustive analysis of the total reaction probabilities and integral cross-section using quantum dynamics calculations on the three most recent surfaces: fitted ZBB3, analytical CBE, and interpolated ZFWCZ surfaces, with collision energies up to 1.7 eV. Figure 3 plots the integral cross-section results. At collision energies up to 1.0 eV, the three surfaces show satisfactory agreement, while at higher energies of collision, the CBE surface overestimates this dynamics property, while the remaining two surfaces show good agreement. This could be attributed to deficiencies of the CBE surface at high energies. This is not surprising since the calibration of the CBE surface was done thinking of thermal behaviour. The information used during the fit focused on the reaction path and reaction valley, and higher energy areas were neither sampled nor weighted sufficiently. Therefore, as the collision energy increases, the accuracy of the CBE surface diminishes. Finally, another severe test of the quality of the PES is the study of the effect of the vibrational excitation on the dynamics. In fact, the dynamics of a vibrationally excited polyatomic reaction presents a challenge both theoretically and experimentally. Camden et al. [69] carried out the first experimental study on the effect of the C–H stretch excitation on the gas-phase H + CH4 hydrogen abstraction reaction. They found that the excitation of the asymmetric C–H stretch mode enhances the reaction cross-section by a factor of 3.0 ± 1.5 with respect to the ground-state methane, and this enhancement is practically independent of the collision energies for the three cases analyzed—1.52, 1.85, and 2.20 eV. In the following year, 2006, two theoretical papers on this issue were published: one by Xie et al. [56] using QCT calculations on the ZBB3 surface, and another from our laboratory [70] also using QCT calculations on the older analytical PES-2002 surface. At 1.52 eV, the theoretical results are close, with computed enhancement factors of 2.3 and 1.9, respectively, both of them within the experimental uncertainties. Note that our more recent CBE surface also predicts an enhancement factor of 1.9 (unpublished results). CH5 saddle point properties.a Fitting (ZBB3)bAnalytical (CBE)cInterpolated (ZFWCZ)d Barrier height14.7815.0115.03   Vibrational frequencies aEnergy in kcal mol−1, geometry in Å and degrees, and vibrational frequency in cm−1; b[55]; c[41]; d[18]. HD product energy partitioning and vibrational distribution (percentages) for the H + CD4 reaction at 1.52 eV. Surfacefvib (HD)frot (HD)HD (v=0)HD (v=1) aCBE surface, [41] bZBB1 surface, [55] cValentini et al. [61]. 
Arrhenius plots of lnk (cm3 molecule−1 s−1) for the forward thermal rate coefficients of the H + CH4 reaction against the reciprocal of temperature (K), in the range 250–1000 K. Black line: MCDTH quantum calculations on the CBE surface; red line: MCDTH quantum calculations on the WWM surface; blue line: quantum calculations on the ZBB2 surface; black dashed line: VTST/MT calculations on the CBE surface; crosses: experimental values [60]. CD3 product angular distribution (with respect to the incident CD4) for the H + CD4 → HD + CD3 reaction at different collision energies. (a) Experimental results from [64]; (b) QCT angular distribution on the CBE analytical surface [66]; (c) QM angular distribution on the CBE analytical surface [66]; (d) QCT angular distribution on the B3LYP “on-the-fly” surface [65]. Quantum mechanical integral cross-section (ao2) versus collision energy (eV) for the H + CH4 → H2 + CH3 reaction on the CBE (black line), ZBB3 (blue line), and ZFWCZ (red line) surfaces. (b) The H + NH<sub>3</sub> Reaction The reaction of hydrogen atom with ammonia is a typical five-body reactive system, and presents a rare opportunity to study both intermolecular and intramolecular dynamics. For the intermolecular case, the gas-phase H + NH3 → H2 + NH2 hydrogen abstraction reaction is similar to the H + CH4 reaction. It presents a barrier height of 14.5 kcal mol−1 and a reaction exoergicity of 5.0 kcal mol−1, as compared to 14.87 and an endoergicity of 2.88 kcal mol−1, respectively, for the H + CH4 analogue. Also, the inversion of ammonia between two pyramidal structures (C3v symmetry) passing through a planar structure (D3h symmetry, which is a saddle point) is an example of intramolecular dynamics [71] with a barrier height in the range 5.20–5.94 kcal mol−1, depending on the level of calculation. Only three potential energy surfaces have been developed for the H + NH3 system. In 2005, Moyano and Collins [72] developed an interpolation potential energy surface for the ammonia inversion, and the hydrogen abstraction and exchange reactions of H + NH3 using a modified Shepard interpolated scheme based on 2000 data points calculated at the CCSD(T)/aug-cc-pVDZ level (PES1 version) or as single-point calculations at the CCSD(T)/aug-cc-pVTZ (PES2 version) level. The first and second derivatives of the energy were calculated by finite differences in the energy. Previously, in 1997, our group constructed the first surface for the hydrogen abstraction reaction exclusively, CE-1997 [44], which was fitted to a combination of experimental and theoretical information that is, it was semiempirical in nature, which represents a limitation for dynamics studies. Moreover, as was previously noted, recently Yang and Corchado [38] reported a major drawback of the CE-1997, namely that it describes incorrectly the NH3 inversion motion, predicting incorrectly that the planar ammonia (D3h symmetry) is about 9 kcal mol−1 more stable than the pyramidal structure (C3v symmetry). To correct this behaviour of the NH3 inversion, together with its semiempirical character, a new analytical potential energy surface, named EC-2009, was recently developed by our group [39], describing simultaneously the hydrogen abstraction and ammonia inversion reactions. This EC-2009 surface is basically a valence bond-molecular mechanics (VB/MM) surface, given by (24), and was fitted exclusively to very high level ab initio calculations at the CCSD(T)/cc-pVTZ level. 
Note that the first derivatives of this surface are analytical, which implies a significant reduction in the computer time required for dynamics calculations, as well as more accurate derivatives than are possible with numerical methods. We begin by analyzing the hydrogen abstraction reaction. Table 3 lists the energy, geometry, and vibrational frequencies of the saddle point for the interpolated MC (PES2 version) and the analytical EC-2009 surfaces. Taking the CCSD(T)/cc-pVTZ ab initio level as the target, the two surfaces present similarities, although important differences must be noted. First, while the interpolated MC surface reproduces the ab initio N–H′–H bend angle, the analytical surface yields a collinear approach. However, in previous papers [73, 74] we demonstrated that this is not a serious problem for the kinetics and dynamics description of the system. Second, the imaginary frequency obtained with the MC surface differs by about 300 cm−1 from the ab initio value. For the two surfaces, Figure 4 shows the energy along the minimum energy path (VMEP) and the ground-state vibrationally adiabatic potential curve (VaG), which is defined as the sum of the potential energy and the vibrational zero-point energy along the reaction path. Note that s is the reaction coordinate, being zero at the saddle point, positive in the product channel, and negative in the reactant channel. Firstly, the VMEP curve is smooth and shows no oscillations for either surface. However, VaG shows significant oscillations when computed on the interpolated surface. The reason is that the potential energy surface is not smooth enough to provide smooth changes in the frequencies (second derivatives of the energy). The situation becomes worse as one moves away from the saddle point and approaches regions where little ab initio information is available for interpolation. Thus, the bump in VaG at around s = −1 a.u. is due to wild oscillations of the frequencies that lead to unphysical values of VaG for |s| > 1. This is a limitation when one wants to compute the tunneling effect at low energies (see below), since the barrier to tunnel through is totally unrealistic. Figure 5 plots the thermal rate coefficients computed with the VTST/MT approach on both surfaces, where tunneling was estimated using the least-action tunneling (LAT) method [75, 76], together with experimental values [77] for comparison. In the common temperature range, 490–1780 K, both surfaces reproduce the experimental data, which is a test of both the surface and the dynamical method. The differences between the rates on the two surfaces are relatively small, 40% at 600 K, and diminish as the temperature increases. At low temperatures the differences between the two surfaces increase, with the rates computed on the MC surface being 92% larger than those with the EC-2009 surface at 200 K. In this low-temperature regime, where tunneling is important, the analytical EC-2009 surface is expected to be more reliable, because the unrealistic oscillations in the adiabatic reaction path of the MC surface negatively influence the tunneling calculation.
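The vibrationally adiabatic curve used above can be assembled directly from the reaction-path data, VaG(s) = VMEP(s) + ZPE(s), with the zero-point energy summed over the bound normal modes at each point s. The minimal sketch below shows the bookkeeping; the grid values and frequencies are invented for illustration and are not data from the MC or EC-2009 surfaces.

```python
import numpy as np

CM1_TO_KCALMOL = 0.0028591  # 1 cm^-1 expressed in kcal/mol

def vibrationally_adiabatic_curve(v_mep, frequencies):
    """V_a^G(s) = V_MEP(s) + sum_i (1/2)*h*nu_i(s).
    v_mep: potential along the path in kcal/mol; frequencies[k]: bound-mode
    frequencies (cm^-1) at grid point k, with imaginary modes already excluded."""
    zpe = np.array([0.5 * CM1_TO_KCALMOL * np.sum(f) for f in frequencies])
    return np.asarray(v_mep) + zpe

# Illustrative three-point reaction-path grid (made-up numbers):
v_mep = [0.0, 14.5, 12.0]                                             # kcal/mol
freqs = [[3300, 1600, 1600], [3200, 1500, 900], [3100, 1550, 1200]]   # cm^-1
print(vibrationally_adiabatic_curve(v_mep, freqs))
```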
The integral cross-sections in the range 10–30 kcal mol−1 have been evaluated using quasi-classical trajectory (QCT) calculations on both surfaces [72, 78] and are plotted in Figure 6. As is apparent from this figure, both surfaces agree reasonably well, showing typical threshold behaviour, with the cross-section starting from 10 kcal mol−1 and increasing with the translational energy. Unfortunately, there are no experimental cross-section data for comparison, and we think that these theoretical results might stimulate experimental work on this little-studied system. Finally, the angular distributions of the H2 product with respect to the incident H atom have only been determined using the EC-2009 surface [78]. Figure 7 plots this property for collision energies of 25 and 40 kcal mol−1. At 25 kcal mol−1 the scattering distribution lies in the sideways-backward hemisphere, associated with a rebound mechanism and low impact parameters. When the collision energy increases to 40 kcal mol−1, the scattering shifts slightly towards the sideways hemisphere, due to larger impact parameters. As was mentioned above, the EC-2009 surface describes, in addition to the hydrogen abstraction reaction, the ammonia inversion, an example of interesting intramolecular dynamics. Figure 8 plots the equipotential contours in the two significant coordinates, the r(N–H) bond length and the H–N–H angle, for the analytical EC-2009 PES and, for comparison, the CCSD(T)/cc-pVTZ ab initio level. As can be seen, the contours are smooth and the analytical PES reproduces the fitted ab initio data points [39]. A very stringent test of the quality of this surface is the ammonia inversion splitting, which demands spectroscopic accuracy. The inversion motion is represented by a symmetric double-well potential. As a consequence of this double well, each degenerate vibrational level splits into two levels separated by ΔE as a result of quantum mechanical tunneling [71], and the splitting increases rapidly with the vibrational quantum number. The splitting ΔE can be computed from the tunneling rate of inversion, ktunn, which is in turn obtained from the imaginary-action integral θ(E) using the WKB approximation [79]:

ΔE0 = ktunn/(2c),    ktunn = (2 c ν2/π) exp[−θ(E)],

with c being the speed of light and ν2 the frequency (eigenvalue) associated with the inversion mode of ammonia, 1113 cm−1. Figure 9 plots the inversion path for ammonia with the first two pairs of split eigenvalues superimposed. The computed splittings are listed in Table 4 together with experimental values [80] for comparison. The EC-2009 results overestimate the experimental values. The reason for the discrepancy lies mainly in the shape of the PES, although other factors such as the tunneling calculation cannot be discarded. Indeed, we found [39] that the overestimation of the tunneling splitting arises because our barrier to inversion is slightly lower and thinner than those of other studies. Unfortunately, the tunneling splitting is so sensitive to the shape of the PES that changes of a few tenths of a kcal mol−1 give rise to a factor of four in the computed splitting. With respect to the effect of isotopic substitution, for the ND3 case we obtain a value ΔE = 0.37 cm−1. Although this is greater than the experimentally reported value, 0.05 cm−1 [81], it correctly predicts about one order of magnitude reduction upon isotopic substitution. In sum, despite the enormous effort required for the construction of the potential energy surface, this does not suffice to reach spectroscopic accuracy, and our surface can only give a qualitative description of the splitting in ammonia.

Table 3: NH4 hydrogen abstraction saddle point properties.a
                          Interpolated (MC)b    Analytical (EC-2009)c    Ab initio d
Barrier height            14.64                 14.48                    14.73
Vibrational frequencies   —                     —                        —
aEnergy in kcal mol−1, geometry in Å and degrees, and vibrational frequencies in cm−1; b[72]; c[39]; d[39], at the CCSD(T)/cc-pVTZ level.
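A minimal numerical sketch of the WKB expression above is given below. The only fixed input is the inversion-mode frequency quoted in the text (1113 cm−1); the value of the imaginary-action integral θ(E) used here is invented for illustration and is not a result from the EC-2009 surface.

```python
import math

NU2_CM1 = 1113.0             # nu_2 inversion-mode frequency of NH3 (cm^-1), as quoted in the text
C_CM_PER_S = 2.99792458e10   # speed of light in cm/s

def tunneling_splitting(theta):
    """WKB estimate of the inversion splitting:
    k_tunn = (2*c*nu2/pi) * exp(-theta)   [s^-1]
    dE0    = k_tunn / (2*c)               [cm^-1]
    where theta is the dimensionless imaginary-action barrier integral."""
    k_tunn = (2.0 * C_CM_PER_S * NU2_CM1 / math.pi) * math.exp(-theta)
    de0 = k_tunn / (2.0 * C_CM_PER_S)
    return k_tunn, de0

# Illustrative theta chosen only to show the call:
k, de0 = tunneling_splitting(theta=6.0)
print(f"k_tunn = {k:.3e} s^-1, splitting = {de0:.3f} cm^-1")
```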
Table 4: Eigenvalues of ammonia inversion (in cm−1). a[39]; b[80].

Figure 4: Potential energy (b) and vibrational ground-state energy (a) along the reaction path of the H + NH3 → H2 + NH2 reaction computed using the EC-2009 (solid lines) and MC (dashed lines) surfaces. The zero of energy is set to the equilibrium potential energy of the reactants.

Figure 5: Arrhenius plots of ln k (cm3 molecule−1 s−1) for the forward thermal rate coefficients of the H + NH3 → H2 + NH2 reaction against the reciprocal of temperature (K) in the range 200–2000 K. Solid black line: analytical EC-2009; dashed blue line: interpolated MC; dotted red line: experimental values from [77].

Figure 6: QCT reaction cross-section (a0^2) versus the collision energy (kcal mol−1) for the H + NH3 → H2 + NH2 reaction computed using the analytical EC-2009 (solid line) and interpolated MC (dashed line) surfaces.

Figure 7: H2 product angular distribution (with respect to the incident H) for the H + NH3 → H2 + NH2 reaction at 25 kcal mol−1 (solid line) and 40 kcal mol−1 (dashed line), computed using QCT calculations on the EC-2009 surface.

Figure 8: Ammonia inversion reaction. Contour plots of the analytical EC-2009 (upper panel) and CCSD(T)/cc-pVTZ ab initio (lower panel) surfaces.

Figure 9: Classical potential for the ammonia inversion path obtained from the EC-2009 surface. The first two pairs of eigenvalues are shown.

5. When the Problems Increase

The benchmark H + CH4 hydrogen abstraction reaction, with five light atoms and a single heavy atom, which allows a large number of very high-level ab initio calculations to be performed, gives the impression that polyatomic bimolecular reactions are a "piece of cake". However, it still presents kinetics and dynamics differences depending on the potential energy surface used, as well as discrepancies with the experimental measurements. What, then, will happen when more complicated systems are studied? In this section we analyze some problems which can appear, alone or in combination, in the study of polyatomic systems and which strongly complicate the construction of potential energy surfaces.

5.1. The Spin-Orbit Problem and Multisurface Dynamics

This is a typical problem, for instance, in reactions involving halogen atoms, X(2P), with molecular systems, R–H: X(2P) + H–R → XH + R (X = F, Cl, Br). The halogen atom presents two spin-orbit electronic states, 2P3/2 and 2P1/2, with a splitting of 404 cm−1 (1.1 kcal mol−1), 882 cm−1 (2.5 kcal mol−1), and 3685 cm−1 (10.5 kcal mol−1) for F, Cl, and Br, respectively. A priori, the smaller the separation, the greater the possibility that the reaction proceeds from both states, which complicates the PES construction and the dynamics study. This problem affects all the previously described theoretical methods for developing surfaces, because it is intrinsic to the initial information required: the quantum mechanical calculations. To treat it rigorously, relativistic calculations would be needed, which would immensely increase the computational cost and would make these calculations impractical for polyatomic systems. In addition, new functional forms for the analytical functions need to be developed to include the coupling between states and its dependence on the coordinates [82, 83], in order to make it possible to include nonadiabatic effects and hopping between surfaces in the dynamics study of these multistate systems. In the case of atom-diatom reactions, some results have shed light on the spin-orbit problem, although some theory/experiment controversies still persist.
For instance, for the well-studied F(2P3/2, 2P1/2) + H2 reaction, Alexander et al. [82, 84] found that the reactivity of the excited spin-orbit state of F is small, 10–25% of the reactivity of the ground spin-orbit state, and concluded that the overall dynamics of the F + H2 reaction can be well described by calculations on a single, electronically adiabatic PES, although for a direct comparison with experiment the coupling between the ground and excited spin-orbit surfaces must be considered. For the analogous Cl(2P3/2, 2P1/2) + H2 reaction, because of the larger energy separation, one would expect the reaction to evolve on the ground-state adiabatic surface, with the contribution of the excited chlorine state, 2P1/2, being practically negligible according to the Born-Oppenheimer (BO) approach. However, a theory/experiment controversy has recently arisen on this issue. Thus, Lee and Liu [85–87] demonstrated experimentally the contrary for the Cl + H2 reaction; that is, the excited chlorine atom (Cl*) is more reactive towards H2 than the ground-state chlorine (Cl) by a factor of at least 6. In any case, even taking into account experimental error bars, the authors were confident about the reactivity relationship Cl* > Cl, which could have a significant effect on the temperature dependence of the thermal rate coefficients. The authors interpreted this surprising result by postulating a nonadiabatic transition (breakdown of the BO approximation) from the excited Cl* to the ground-state Cl surface, by either electrostatic or spin-orbit coupling, in the entrance channel. This experimental study initiated a major theoretical and experimental debate on the influence of the excited Cl* on the reactivity of the Cl + H2 reaction. Different laboratories [88–93] performed theoretical and experimental studies of the validity of the BO approximation in this reaction, concluding that the adiabatically allowed reaction (Cl(2P3/2) + H2) dominates the adiabatically forbidden reaction (Cl(2P1/2) + H2). Hence, these results are in direct contrast with the experiments of Liu et al. [85–87], suggesting that the experiment claiming high reactivity of Cl* needs to be reexamined. In the case of polyatomic systems, this level of sophistication has not been achieved and would still be computationally prohibitive. Thus, some approaches have been considered to take the spin-orbit effect into account indirectly in the construction of the PES and in the dynamics study. First, for thermochemical or rate coefficient calculations, the spin-orbit effect of the multiple electronic states is taken into account through the electronic partition function of the reactants in the usual expression

Qe = 4 + 2 exp(−ε/kBT),   (29)

where ε is the spin-orbit splitting of the halogen atom. Second, there is an additional effect on the barrier height (Figure 10). In fact, if we assume that the spin-orbit coupling is fully quenched along the entire reaction path, including the spin-orbit effect lowers the energy of the spin-orbit ground state of the halogen atom by ε/3 below its nonrelativistic energy, thereby increasing the barrier height by this amount. This represents 0.38, 0.83, and 3.5 kcal mol−1, respectively, for F, Cl, and Br. Our group considered these approaches in the construction of the surfaces for the F(2P) + CH4 [42], Cl(2P) + CH4 [49], and Cl(2P) + NH3 [94, 95] reactions.
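The two corrections just described are simple to evaluate. The sketch below computes the electronic partition function of expression (29) and the ε/3 barrier increase for the chlorine case (ε = 882 cm−1), the value used later for the Cl(2P) + NH3 surface; the temperatures chosen are arbitrary illustration values, and the function name is mine.

```python
import math

KB_CM1_PER_K = 0.6950356     # Boltzmann constant in cm^-1 K^-1
CM1_TO_KCALMOL = 0.0028591   # 1 cm^-1 in kcal/mol

def halogen_electronic_partition_function(eps_cm1, temperature):
    """Q_e = 4 + 2*exp(-eps/(kB*T)) for a 2P halogen atom:
    degeneracy 4 for the 2P3/2 ground state, 2 for the 2P1/2 state lying eps above it."""
    return 4.0 + 2.0 * math.exp(-eps_cm1 / (KB_CM1_PER_K * temperature))

eps_cl = 882.0  # Cl spin-orbit splitting (cm^-1), as quoted in the text
print("Q_e(Cl, 300 K)  =", halogen_electronic_partition_function(eps_cl, 300.0))
print("Q_e(Cl, 1000 K) =", halogen_electronic_partition_function(eps_cl, 1000.0))
# Barrier raised by one third of the splitting when spin-orbit coupling is quenched at the saddle point:
print("barrier increase =", eps_cl / 3.0 * CM1_TO_KCALMOL, "kcal/mol")  # ~0.84, quoted as ~0.8 in the text
```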
Note, however, that this approach is to some extent inconsistent, in the sense that, as we move away from the saddle point towards the reactants, the energy asymptotically tends to zero, the energy of the lower state of the halogen atom. In reality, as we move from the saddle point towards the reactants the energy should tend asymptotically to ε/3 until it reaches the point where the two states interact; from that point towards the reactants, the surface has to tend to zero. There is therefore a gap of up to ε/3 between the exact result and our approximation. However, we can assume that this gap is sufficiently small, and located so far from the dynamically important regions of the surface, that we can safely neglect it. Moreover, in our analytical surfaces the changes in the energy can be fitted so as to change the slope of the reaction path, so that this asymptotic behaviour can be corrected. In addition, in the latter case of the Cl(2P) + NH3 reaction, because of the presence of wells in the reactant channel (see below), this gap occurs before the system reaches the well, in the region connecting the well with the reactants, and its influence on the kinetics and dynamics is entirely negligible.

Figure 10: Schematic representation of the potential energy along the reaction path for a reaction with spin-orbit effects on the reactants. Red line: nonrelativistic calculations.

5.2. The Molecular "Size" Problem

Obviously, the cost of calculating the quantum chemical information needed to build the PES increases steeply with the number of electrons involved, and it is still a prohibitive task for large molecules and heavy atoms. Thus, for instance, while the benchmark H + CH4 reaction involves five hydrogen atoms and one carbon atom, with only eleven electrons, when third-row atoms are considered, for instance in H + SiH4, nineteen electrons are involved, and when larger systems are considered, for instance H + CCl4, four heavy chlorine atoms and 75 electrons must be included in the calculations. This represents an enormous computational effort, which becomes prohibitive if high-level ab initio calculations are used to obtain chemical accuracy. Obviously, fitting or interpolation approaches based on grids of 20000–30000 data points, or direct dynamics calculations, are still unaffordable, although more economical alternatives could be used to calculate the quantum mechanical data, such as the "dual level" technique, where the geometries and vibrational frequencies are calculated at a lower ab initio or DFT level and the energies are calculated as single points on these geometries at a higher quantum mechanical level. Even so, the computational effort would be enormous. In these complicated cases with a large number of electrons, our strategy for building the PES, based on a smaller number of ab initio calculations, represents an interesting and practical alternative. Thus, our laboratory has constructed surfaces for several five-body systems, H + NH3 [39, 44] (11 electrons), F + NH3 [45] (19 electrons), and Cl(2P) + NH3 [94, 95] (27 electrons); six-body systems, H + SiH4 [46] (19 electrons), H + GeH4 [96] (37 electrons), Cl(2P) + CH4 [49] (27 electrons), Br(2P) + CH4 [51] (45 electrons), and H + CCl4 [52, 97] (75 electrons); and one seven-body system, OH + CH4 [53] (19 electrons). In addition, we have also studied reactions with asymmetrically substituted methane, H + CH3Cl and Cl + CHClF2 [98, 99], which represent another challenge in PES construction for polyatomic systems because, in addition to the aforementioned problems, the possibility of several reaction channels needs to be taken into account.
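The "dual level" strategy mentioned above amounts to a simple composite workflow, summarized in the sketch below. The two driver functions are stand-ins for whatever electronic-structure codes are actually available (they are not real library calls), so this only illustrates the bookkeeping, not a recipe for any specific package.

```python
def dual_level_point(geometry_guess, optimize_low_level, single_point_high_level):
    """'Dual level' composite point: geometry and frequencies from a cheap method,
    energy refined by a single point at a higher level on that geometry."""
    geometry, frequencies = optimize_low_level(geometry_guess)
    energy = single_point_high_level(geometry)
    return {"geometry": geometry, "frequencies": frequencies, "energy": energy}

# Stand-in drivers so the example runs; a real application would call an electronic-structure
# package here (e.g. a DFT optimization plus a CCSD(T) single point).
def fake_dft_optimize(geom):
    return geom, [3000.0, 1500.0, 1300.0]   # "optimized" geometry and harmonic frequencies (cm^-1)

def fake_ccsdt_energy(geom):
    return -56.5                             # illustrative electronic energy (hartree)

print(dual_level_point("NH3 + H guess", fake_dft_optimize, fake_ccsdt_energy))
```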
Although the older surfaces were semiempirical, that is, they combined theoretical and experimental information in the fitting procedure because of the computational limitations of the time, the newer surfaces, those developed from 2007 onward, are based exclusively on quantum mechanical information: H + CH4 [41], H + CCl4 [97], H + NH3 [39], and Cl(2P) + NH3 [95]. It is noteworthy that the functional form has remained almost unchanged, being essentially that developed for the H + CH4 reaction, with different modifications added to meet the requirements of the systems under study. Thus, for the five-body systems we removed the dependency on one of the hydrogen atoms of CH4; for the seven-body system, OH + CH4, we added Morse and harmonic terms to describe the O–H bond and the H–O–H angle; and for the asymmetric reactions we allowed the four atoms bonded to the carbon atom to vary independently. In addition, we improved the analytical form when building the Cl + NH3 surface by allowing the equilibrium N–H distance to vary along the reaction path [95].

5.3. Reactions with More Complicated Topology

The last problem analyzed in this section is that associated with the presence of several maxima and minima along the reaction path of a polyatomic reaction. As was recently noted by Clary [100], "reactions with several maxima and minima in the potential energy surface present the severest challenge to calculating and fitting potential energy surfaces and carrying out quantum dynamics calculations." Basically, taking into account the topology of the potential energy surface, and independently of whether the reaction is exothermic, endothermic, or thermoneutral, bimolecular reactions can be classified into three broad categories (Figure 11): reactions with a single barrier (Figure 11(a)); barrierless reactions (Figure 11(b)); and reactions with more complicated potentials (e.g., Figures 11(c) and 11(d)). We focus here on this last case for polyatomic systems (n > 4). Obviously, the existence of wells in the entry and exit channels is associated with the presence of heavy atoms, especially halogen atoms, which favour the formation of intermediate complexes, either van der Waals or hydrogen-bonded complexes. Thus, bimolecular reactions with a complicated topology, as shown in Figures 11(c) and 11(d), also tend to pose several of the problems considered above, that is, the increase in the number of electrons involved and the spin-orbit problem. In addition, the presence of several maxima and minima requires a fine evaluation of the energies, gradients, and Hessians needed for a correct description of the PES. This obviously complicates the construction of the PES, and only very few global surfaces have as yet been developed [26, 95].

Figure 11: Schematic representation of the potential energy along the reaction path for different types of bimolecular reactions. R, P, and C denote, respectively, reactants, products, and intermediate complexes. In all cases, the vertical axis represents the energy and the horizontal axis the reaction coordinate.

We finish this section by describing the most complicated surface analyzed by our group [95]: the analytical potential energy surface for the Cl(2P) + NH3 polyatomic reaction, which presents several wells in the entry and exit channels, with a topology shown in Figure 11(d). Different intermediate complexes were found in the entry and exit channels at the CCSD(T)/cc-pVTZ level, and the intrinsic reaction path (energies, gradients, and Hessians) was calculated starting from the saddle point.
With respect to the barrier height, one of the most difficult energy properties of a PES to estimate, different laboratories have reported electronic structure calculations using different levels (correlation energy and basis sets) [94, 95, 101–103]. Gao et al. [102] constructed the reaction path using the MPWB1K density functional (DFT) method [104], finding a barrier of 5.2 kcal mol−1, and, using different correlation energy levels and basis sets, they reported values in the range 4.8–6.2 kcal mol−1. Xu and Lin [103] performed a computational study of the mechanisms and kinetics of the reaction. The geometries of the stationary points were optimized using the B3LYP DFT method, and their energies were refined with the modified Gaussian-2 (G2M) theory. They obtained a value of 7.2 kcal mol−1, in contrast with the preceding values. Finally, we [94] obtained a barrier height of 7.6 kcal mol−1 at the CCSD(T)/cc-pVTZ level and of 5.8 kcal mol−1 [95] at a higher level, CCSD(T)=FULL/aug-cc-pVTZ, close to Gao et al.'s result. These results illustrate the dramatic influence of the electronic correlation and the basis set on the correct description of the barrier, and it must be borne in mind that small differences at the saddle point produce large deviations in the kinetics and dynamics analysis. To further complicate the study of this reaction, the spin-orbit coupling must be considered, since the chlorine atom has two low-lying fine-structure electronic states, 2P1/2 and 2P3/2, with a separation of ε = 882 cm−1 ≈ 2.5 kcal mol−1. As discussed above, the spin-orbit coupling was taken into account in our nonrelativistic calculations in two ways: first, in the electronic partition function of the reactant, expression (29), and second, by adding one-third of the splitting between the two states, that is, 0.8 kcal mol−1, to the barrier height, where we have assumed that the spin-orbit coupling is essentially fully quenched at the saddle point. With this correction, our best estimate of the barrier height was 6.6 kcal mol−1. With all this information, an analytical PES was constructed, and kinetics information was obtained using VTST/MT methods [95]. The results agree well with experimental values of the rate coefficients and equilibrium constants, showing that the wells have little influence on the kinetics of the reaction. This is mainly because VTST/MT assumes that the portions of the reaction path lying lower in energy than the products, as is the case for the wells here, make no contribution to tunneling. Therefore, the kinetics is controlled by the properties of a transition state located near the saddle point. In addition, calculations showed that the recrossing of the transition state that the presence of the wells could cause is very small. Therefore, only the saddle point region determines the reaction probabilities. An exhaustive dynamics study using QCT and QM methods is currently in progress in our group and should be published soon. From these results it appears that the reaction cross-sections show significant values at very low collision energies with both methodologies. In order to analyze to what extent the presence of wells (especially the well on the reactant side) is responsible for this behaviour, further studies will be carried out with a model surface similar to the Cl + NH3 one but with the wells removed.
Note that the latter study is possible because of the availability of an analytical surface that can be refitted to remove the wells without significantly modifying other regions of the PES, which is an added value of this kind of analytical PES. Studies of this kind can help us to understand the dynamics of such a complicated system and to decide whether transition-state theory, which (as noted above) assumes that only the saddle point region is significant, needs to be challenged.

6. Final Remarks

The construction of potential energy surfaces for polyatomic reactive systems represents a major theoretical challenge, with a very high cost in computational and human time. Based on high-level electronic structure calculations, fitting, interpolation, and analytical (defined by functional forms) approaches have been developed and applied to the kinetics and dynamics (classical, quasi-classical, and quantum mechanical) study of these reactions. However, in spite of the enormous progress of the last 20–30 years in theoretical algorithms and computational power, the construction of potential energy surfaces for polyatomic systems has still not reached the level of accuracy achieved for triatomic systems. The quality of these surfaces is still an open and debatable question, and even for the benchmark H + CH4 polyatomic reaction, which involves only five hydrogen atoms, one carbon atom, and eleven electrons, small differences are found depending on the PES construction and the dynamics method. Unfortunately, the problems will increase for other important chemical systems, where effects such as spin-orbit coupling, an increase in the number of electrons and in molecular size, or potentials with more complicated topology are present. In these cases, the kinetics and dynamics results will depend even more strongly on the quality of the PES. In the last few decades there has been much theoretical effort on the part of various laboratories, but there is still much to do in this research field. Throughout this paper the emphasis has been put on the strategy developed by our group, which has constructed about 15 surfaces for polyatomic systems of five, six, and seven bodies. These are freely available for download from the POTLIB websites, http://comp.chem.umn.edu/POTLIB/ or http://users.ipfw.edu/DUCHOVIC/POTLIB2001/, or can be requested from the authors. The analytical surfaces developed in our group are based on a common functional form with slight modifications to suit each of the systems studied. In this sense, they are very specialized force fields, with mathematical functions that allow further improvements to be introduced. For example, adding terms to describe anharmonicity or mode-mode coupling, or an improved dependence of the fitting constants on the reaction coordinate, are possibilities which can improve the functional form. The inclusion of additional reaction channels, such as the exchange reactions in methane or ammonia, is another pending matter in the work on our surfaces. This would open up the possibility of analyzing competing channels and could lead to a better understanding of the behaviour of complex reactions. An additional advantage of this approach is the negligible computational cost of evaluating the potential energy surface and its derivatives, which is a very desirable feature when one wants to apply expensive QM methods to the study of these reactions.
Undoubtedly, with the evolution of computer resources, a point will eventually be reached at which direct dynamics calculations using high ab initio levels become feasible. Until then, however, the availability of an analytical function that can describe the potential energy with reasonable accuracy is the fastest option. In recent years, ad hoc surfaces have been constructed which reproduce only a single kinetics or dynamics property and do not provide a complete description of the reaction system. The aim of our work, in contrast, is to build a PES that reproduces at least qualitatively all the kinetics and dynamics features of the reactive system. Obviously, a quantitative description of all kinetics and dynamics properties is the final goal, and with this target in mind our group has adopted a flexible functional form that can be easily improved and adapted to other similar systems.

Acknowledgments

This work has been partially supported in recent years by the Junta de Extremadura, Spain, and Fondo Social Europeo (Projects nos. 2PR04A001, PRI07A009, and IB10001). One of the authors (M. Monge-Palacios) thanks the Junta de Extremadura (Spain) for a scholarship.

References

1. Born M., Oppenheimer R. Zur Quantentheorie der Molekeln. Annals of Physics 1927, 84, 457–484.
2. Ramachandran C. N., De Fazio D., Cavalli S., Tarantelli F., Aquilanti V. Revisiting the potential energy surface for the He + H2+ → HeH+ + H reaction at the full configuration interaction level. Chemical Physics Letters 2009, 469(1–3), 26–30.
3. George F., Kumar S. Ab initio ground and the first excited adiabatic and quasidiabatic potential energy surfaces of H+ + CO system. Chemical Physics 2010, 373(3), 211–218.
4. Tanaka T., Takayanagi T. Quantum reactive scattering calculations of H + F2 and Mu + F2 reactions on a new ab initio potential energy surface. Chemical Physics Letters 2010, 496(4–6), 248–253.
5. Skomorowski W., Pawlowski F., Korona T., Moszynski R., Zuchowski P. S., Hutson J. M. Interaction between LiH molecule and Li atom from the state-of-the-art electronic structure calculations. Journal of Chemical Physics 2011, 134, 114109.
6. Jiang B., Xie Ch., Xie D. New ab initio potential energy surface for BrH2 and rate constants for the H + HBr → H2 + Br abstraction reaction. Journal of Chemical Physics 2011, 134, 114301.
7. Karplus M., Porter R. N., Sharma R. D. Exchange reactions with activation energy. I. Simple barrier potential for (H, H2). Journal of Chemical Physics 1965, 43(9), 3259–3287.
8. Kouri D. J., Sun Y., Truhlar D. G. New time-dependent and time-independent computational methods for molecular collisions. In Mathematical Frontiers in Computational Chemical Physics; Springer: New York, NY, USA, 1988; pp. 207–244.
9. Fernandez-Ramos A., Miller J. A., Klippenstein S. J., Truhlar D. G. Modeling the kinetics of bimolecular reactions. Chemical Reviews 2006, 106, 4518–4584.
10. Albu T. V., Espinosa-García J., Truhlar D. G. Computational chemistry of polyatomic reaction kinetics and dynamics: the quest for an accurate CH5 potential energy surface. Chemical Reviews 2007, 107(11), 5101–5132.
11. Hase W. L., Song K., Gordon M. S. Direct dynamics simulations. Computing in Science and Engineering 2003, 5, 36–44.
12. Liu J., Song K., Hase W. L., Anderson S. L. Direct dynamics trajectory study of the reaction of formaldehyde cation with D2: vibrational and zero-point energy effects on quasiclassical trajectories. Journal of Physical Chemistry A 2005, 109(50), 11376–11384.
13. Ho T.-S., Rabitz H. A general method for constructing multidimensional molecular potential energy surfaces from ab initio calculations. Journal of Chemical Physics 1996, 104(7), 2584–2597.
14. Collins M. A. Molecular potential-energy surfaces for chemical reaction dynamics. Theoretical Chemistry Accounts 2002, 108(6), 313–324.
15. Ho T.-S., Rabitz H. Reproducing kernel Hilbert space interpolation methods as a paradigm of high dimensional model representations: application to multidimensional potential energy surface construction. Journal of Chemical Physics 2003, 119(13), 6433–6442.
16. Ischtwan J., Collins M. A. Molecular potential energy surfaces by interpolation. Journal of Chemical Physics 1994, 100(11), 8080–8088.
17. Addicoat M. A., Collins M. A. Potential energy surfaces: the forces of chemistry. In Brouard M., Vallance C. (Eds.), Tutorials in Molecular Reaction Dynamics; RSC Publishing: Cambridge, UK, 2010; p. 28.
18. Zhou Y., Fu B., Wang C., Collins M. A., Zhang D. H. Ab initio potential energy surface and quantum dynamics for the H + CH4 → H2 + CH3 reaction. Journal of Chemical Physics 2011, 134, 064323.
19. Murrell J. N., Carter S., Farantos S. C., Huxley P., Varandas A. J. C. Molecular Potential Energy Functions; Wiley: Chichester, UK, 1984.
20. Schatz G. C. The analytical representation of electronic potential-energy surfaces. Reviews of Modern Physics 1989, 61(3), 669–688.
21. Bowman J. M., Schatz G. C. Theoretical studies of polyatomic bimolecular reaction dynamics. Annual Review of Physical Chemistry 1995, 46, 169–195.
22. Sorbie K., Murrell J. N. Theoretical study of the O(1D) + H2(1Σg) reactive quenching process. Molecular Physics 1976, 31, 905–920.
23. Varandas A. J. C. Four-atom bimolecular reactions with relevance in environmental chemistry. International Reviews in Physical Chemistry 2000, 19(2), 199–245.
24. Brown A., Braams B. J., Christoffel K., Jin Z., Bowman J. M. Classical and quasiclassical spectral analysis of CH5+ using an ab initio potential energy surface. Journal of Chemical Physics 2003, 119(17), 8790–8793.
25. Jin Z., Braams B. J., Bowman J. M. An ab initio based global potential energy surface describing CH5+ → CH3+ + H2. Journal of Physical Chemistry A 2006, 110(4), 1569–1574.
26. Braams B. J., Bowman J. M. Permutationally invariant potential energy surfaces in high dimensionality. International Reviews in Physical Chemistry 2009, 28(4), 577–606.
27. Bowman J. M., Czako G., Fu B. High-dimensional ab initio potential energy surfaces for reaction dynamics calculations. Physical Chemistry Chemical Physics 2011, 13(18), 8094–8111.
28. McCoy A. B., Braams B. J., Brown A., Huang X., Jin Z., Bowman J. M. Ab initio diffusion Monte Carlo calculations of the quantum behavior of CH5+ in full dimensionality. Journal of Physical Chemistry A 2004, 108(23), 4991–4994.
29. Valencich T., Bunker D. L. Energy-dependent cross sections for the tritium-methane hot atom reactions. Chemical Physics Letters 1973, 20, 50–52.
30. Raff L. M. Theoretical investigations of the reaction dynamics of polyatomic systems: chemistry of the hot atom (T* + CH4) and (T* + CD4) systems. Journal of Chemical Physics 1974, 60, 2220–2244.
31. Steckler R., Dykema K. J., Brown F. B., Hancock G. C., Truhlar D. G., Valencich T. A comparative study of potential energy surfaces for CH3 + H2 → CH4 + H. Journal of Chemical Physics 1987, 87(12), 7024–7035.
32. Joseph T., Steckler R., Truhlar D. G. A new potential energy surface for the CH3 + H2 → CH4 + H reaction: calibration and calculations of rate constants and kinetic isotope effects by variational transition state theory and semiclassical tunneling calculations. Journal of Chemical Physics 1987, 87(12), 7036–7049.
33. Jordan M. J. T., Gilbert R. G. Classical trajectory studies of the reaction CH4 + H → CH3 + H2. Journal of Chemical Physics 1995, 102(14), 5669–5682.
34. Espinosa-García J., Corchado J. C. Global surfaces and reaction-path potentials. Applications of variational transition-state theory. Recent Research Development in Physical Chemistry 1997, 1, 165–192.
35. Espinosa-Garcia J. Analysis of economical strategies for the construction of potential energy surfaces. Trends in Physical Chemistry 2000, 8, 49–73.
36. Chakraborty A., Zhao Y., Lin H., Truhlar D. G. Combined valence bond-molecular mechanics potential-energy surface and direct dynamics study of rate constants and kinetic isotope effects for the H + C2H6 reaction. Journal of Chemical Physics 2006, 124, 044315.
37. Duchovic R. J., Hase W. L., Schlegel H. B. Analytic function for the H + CH3 → CH4 potential energy surface. Journal of Physical Chemistry 1984, 88(7), 1339–1347.
38. Yang M., Corchado J. C. Seven-dimensional quantum dynamics study of the H + NH3 → H2 + NH2 reaction. Journal of Chemical Physics 2007, 126(21), 214312.
39. Espinosa-Garcia J., Corchado J. C. Analytical potential energy surface and kinetics of the NH3 + H → NH2 + H2 hydrogen abstraction and the ammonia inversion reactions. Journal of Physical Chemistry A 2010, 114(12), 4455–4463.
40. Banks S. T., Clary D. C. Reduced dimensionality quantum dynamics of Cl + CH4 → HCl + CH3 on an ab initio potential. Physical Chemistry Chemical Physics 2007, 9(8), 933–943.
41. Corchado J. C., Bravo J. L., Espinosa-Garcia J. The hydrogen abstraction reaction H + CH4. I. New analytical potential energy surface based on fitting to ab initio calculations. Journal of Chemical Physics 2009, 130(18), 184314.
42. Rángel C., Navarrete M., Espinosa-García J. Potential energy surface for the F(2P3/2, 2P1/2) + CH4 hydrogen abstraction reaction. Kinetics and dynamics study. Journal of Physical Chemistry A 2005, 109(7), 1441–1448.
43. Espinosa-García J. New analytical potential energy surface for the CH4 + H hydrogen abstraction reaction: thermal rate constants and kinetic isotope effects. Journal of Chemical Physics 2002, 116(24), 10664–10673.
44. Corchado J. C., Espinosa-García J. Analytical potential energy surface for the NH3 + H → NH2 + H2 reaction: application of variational transition-state theory and analysis of the equilibrium constants and kinetic isotope effects using curvilinear and rectilinear coordinates. Journal of Chemical Physics 1997, 106(10), 4013–4021.
45. Espinosa-García J., Corchado J. C. Analytical surface for the reaction with no saddle-point NH3 + F → NH2 + FH. Application of variational transition state theory. Journal of Physical Chemistry A 1997, 101(40), 7336–7344.
46. Espinosa-García J., Sansón J., Corchado J. C. The SiH4 + H → SiH3 + H2 reaction: potential energy surface, rate constants, and kinetic isotope effects. Journal of Chemical Physics 1998, 109(2), 466–473.
47. Espinosa-García J., García-Bernáldez J. C. Analytical potential energy surface for the CH4 + O(3P) → CH3 + OH reaction. Thermal rate constants and kinetic isotope effects. Physical Chemistry Chemical Physics 2000, 2(10), 2345–2351.
48. Espinosa-García J., Bravo J. L., Rangel C. New analytical potential energy surface for the F(2P) + CH4 hydrogen abstraction reaction: kinetics and dynamics. Journal of Physical Chemistry A 2007, 111(14), 2761–2771.
49. Corchado J. C., Truhlar D. G., Espinosa-García J. Potential energy surface, thermal, and state-selected rate coefficients, and kinetic isotope effects for Cl + CH4 → HCl + CH3. Journal of Chemical Physics 2000, 112(21), 9375–9389.
50. Rangel C., Navarrete M., Corchado J. C., Espinosa-García J. Potential energy surface, kinetics, and dynamics study of the Cl + CH4 → HCl + CH3 reaction. Journal of Chemical Physics 2006, 124(12), 124306.
51. Espinosa-García J. Potential energy surface for the CH3 + HBr → CH4 + Br hydrogen abstraction reaction: thermal and state-selected rate constants, and kinetic isotope effects. Journal of Chemical Physics 2002, 117(5), 2076–2086.
52. Rangel C., Espinosa-García J. Potential energy surface for the CCl4 + H → CCl3 + ClH reaction: kinetics and dynamics study. Journal of Chemical Physics 2005, 122(13), 134315.
53. Espinosa-García J., Corchado J. C. Potential energy surface for a seven-atom reaction. Thermal rate constants and kinetic isotope effects for CH4 + OH. Journal of Chemical Physics 2000, 112(13), 5731–5739.
54. Duchovic R. J., Volobuev Y. L., Lynch G. C. POTLIB 2001.
55. Zhang X., Braams B. J., Bowman J. M. An ab initio potential surface describing abstraction and exchange for H + CH4. Journal of Chemical Physics 2006, 124, 021104.
56. Xie Z., Bowman J. M., Zhang X. Quasiclassical trajectory study of the reaction H + CH4 (v3 = 0, 1) → CH3 + H2 using a new ab initio potential energy surface. Journal of Chemical Physics 2006, 125(13), 133120.
57. Wu T., Werner H. J., Manthe U. Accurate potential energy surface and quantum reaction rate calculations for the H + CH4 → H2 + CH3 reaction. Journal of Chemical Physics 2006, 124(16), 164307.
58. Huarte-Larrañaga F., Manthe U. Quantum dynamics of the CH4 + H → CH3 + H2 reaction: full-dimensional and reduced dimensionality rate constant calculations. Journal of Physical Chemistry A 2001, 105(12), 2522–2529.
59. Schiffel G., Manthe U., Nyman G. Full-dimensional quantum reaction rate calculations for H + CH4 → H2 + CH3 on a recent potential energy surface. Journal of Physical Chemistry A 2010, 114(36), 9617–9622.
60. Sutherland J. W., Su M. C., Michael J. V. Rate constants for H + CH4, CH3 + H2, and CH4 dissociation at high temperature. International Journal of Chemical Kinetics 2001, 33(11), 669–684.
61. Germann G. J., Huh Y. D., Valentini J. J. State-to-state dynamics of atom + polyatom abstraction reactions. II. The H + CD4 → HD(v′,J′) + CD3 reaction. Journal of Chemical Physics 1992, 96(8), 1957–1966.
62. Hu W., Lendvay G., Troya D., Schatz G. C., Camden J. P., Bechtel H. A., Brown D. J. A., Martin M. R., Zare R. N. H + CD4 abstraction reaction dynamics: product energy partitioning. Journal of Physical Chemistry A 2006, 110(9), 3017–3027.
63. Camden J. P., Bechtel H. A., Zare R. N. Dynamics of the simplest reaction of a carbon atom in a tetrahedral environment. Angewandte Chemie 2003, 42(42), 5227–5230.
64. Camden J. P., Bechtel H. A., Brown D. J. A., Martin M. R., Zare R. N., Hu W., Lendvay G., Troya D., Schatz G. C. A reinterpretation of the mechanism of the simplest reaction at an sp3-hybridized carbon atom: H + CD4 → CD3 + HD. Journal of the American Chemical Society 2005, 127(34), 11898–11899.
65. Camden J. P., Hu W., Bechtel H. A., Brown D. J. A., Martin M. R., Zare R. N., Lendvay G., Troya D., Schatz G. C. H + CD4 abstraction reaction dynamics: excitation function and angular distributions. Journal of Physical Chemistry A 2006, 110(2), 677–686.
66. Zhang W., Zhou Y., Wu G., Lu Y., Pan H., Fu B., Shuai Q., Liu L., Liu S., Zhang L., Jiang B., Dai D., Lee S. Y., Xie Z., Braams B. J., Bowman J. M., Collins M. A., Zhang D. H., Yang X. Depression of reactivity by the collision energy in the single barrier H + CD4 → HD + CD3 reaction. Proceedings of the National Academy of Sciences of the United States of America 2010, 107(29), 12782–12785.
67. Rangel C., Sansón J., Corchado J. C., Espinosa-Garcia J., Nyman G. Product angular distribution for the H + CD4 → HD + CD3 reaction. Journal of Physical Chemistry A 2006, 110(37), 10715–10719.
68. Espinosa-García J., Nyman G., Corchado J. C. The hydrogen abstraction reaction H + CH4. II. Theoretical investigation of the kinetics and dynamics. Journal of Chemical Physics 2009, 130(18), 184315.
69. Camden J. P., Bechtel H. A., Brown D. J. A., Zare R. N. Effects of C–H stretch excitation on the H + CH4 reaction. Journal of Chemical Physics 2005, 123(13), 134301.
70. Rangel C., Corchado J. C., Espinosa-García J. Quasi-classical trajectory calculations analyzing the reactivity and dynamics of asymmetric stretch mode excitations of methane in the H + CH4 reaction. Journal of Physical Chemistry A 2006, 110(35), 10375–10383.
71. Townes C. H., Schawlow A. L. Microwave Spectroscopy; McGraw-Hill: New York, NY, USA, 1955.
72. Moyano G. E., Collins M. A. Interpolated potential energy surface for abstraction and exchange reactions of NH3 + H and deuterated analogues. Theoretical Chemistry Accounts 2005, 113(4), 225–232.
73. Espinosa-García J. Capability of LEPS surfaces to describe the kinetics and dynamics of non-collinear reactions. Journal of Physical Chemistry A 2001, 105(1), 134–139.
74. Espinosa-García J. Capability of LEP-type surfaces to describe noncollinear reactions. 2. Polyatomic systems. Journal of Physical Chemistry A 2001, 105(38), 8748–8755.
75. Garrett B. C., Truhlar D. G. A least-action variational method for calculating multidimensional tunneling probabilities for chemical reactions. Journal of Chemical Physics 1983, 79(10), 4931–4938.
76. Meana-Pañeda R., Truhlar D. G., Fernández-Ramos A. Least-action tunnelling transmission coefficient for polyatomic reactions. Journal of Chemical Theory and Computation 2010, 6, 6–17.
77. Ko T., Marshall P., Fontijn A. Rate coefficients for the H + NH3 reaction over a wide temperature range. Journal of Physical Chemistry 1990, 94(4), 1401–1404.
78. Espinosa-García J., Corchado J. C. Quasi-classical trajectory calculations of the hydrogen abstraction reaction H + NH3. Journal of Physical Chemistry A 2010, 114(21), 6194–6200.
79. Halpern A. M., Ramachandran B. R., Glendening E. D. The inversion potential of ammonia: an intrinsic reaction coordinate calculation for student investigation. Journal of Chemical Education 2007, 84(6), 1067–1072.
80. Špirko V. Vibrational anharmonicity and the inversion potential function of NH3. Journal of Molecular Spectroscopy 1983, 101, 30–47.
81. Devi V. M., Das P. P., Rao K. N., Urban S., Papousek D., Spirko V. The diode laser spectrum of the ν2 band of 14ND3 and 15ND3. Journal of Molecular Spectroscopy 1981, 88, 293–299.
82. Alexander M. H., Manolopoulos D. E., Werner H. J. An investigation of the F + H2 reaction based on a full ab initio description of the open-shell character of the F(2P) atom. Journal of Chemical Physics 2000, 113(24), 11084–11100.
83. Capecchi G., Werner H. J. Ab initio calculations of coupled potential energy surfaces for the Cl(2P3/2, 2P1/2) + H2 reaction. Physical Chemistry Chemical Physics 2004, 6(21), 4975–4983.
84. Alexander M. H., Werner H. J., Manolopoulos D. E. Spin-orbit effects in the reaction of F(2P) with H2. Journal of Chemical Physics 1998, 109(14), 5710–5713.
85. Lee S. H. State-specific excitation function for Cl(2P) + H2 (v = 0, j): effects of spin-orbit and rotational states. Journal of Chemical Physics 1999, 110(17), 8229–8232.
86. Lee S.-H., Liu K. Exploring the spin-orbit reactivity in the simplest chlorine atom reaction. Journal of Chemical Physics 1999, 111(14), 6253–6259.
87. Dong F., Lee S. H., Liu K. Direct determination of the spin-orbit reactivity in Cl(2P3/2, 2P1/2) + H2/D2/HD reactions. Journal of Chemical Physics 2001, 115(3), 1197–1204.
88. Alexander M. H., Capecchi G., Werner H. J. Theoretical study of the validity of the Born-Oppenheimer approximation in the Cl + H2 → HCl + H reaction. Science 2002, 296(5568), 715–718.
89. Balucani N., Skouteris D., Cartechini L., Capozza G., Segoloni E., Casavecchia P., Alexander M. H., Capecchi G., Werner H. J. Differential cross sections from quantum calculations on coupled ab initio potential energy surfaces and scattering experiments for Cl(2P) + H2 reactions. Physical Review Letters 2003, 91, 013201.
90. Alexander M. H., Capecchi G., Werner H.-J. Details and consequences of the nonadiabatic coupling in the Cl(2P) + H2 reaction. Faraday Discussions 2004, 127, 59–72.
91. Garand E., Zhou J., Manolopoulos D. E., Alexander M. H., Neumark D. M. Nonadiabatic interactions in the Cl + H2 reaction probed by ClH2− and ClD2− photoelectron imaging. Science 2008, 319(5859), 72–75.
92. Neumark D. M. Slow electron velocity-map imaging of negative ions: applications to spectroscopy and dynamics. Journal of Physical Chemistry A 2008, 112(51), 13287–13301.
93. Wang X., Dong W., Xiao C., Che L., Ren Z., Dai D., Wang X., Casavecchia P., Yang X., Jiang B., Xie D., Sun Z., Lee S. Y., Zhang D. H., Werner H. J., Alexander M. H. The extent of non-Born-Oppenheimer coupling in the reaction of Cl(2P) with para-H2. Science 2008, 322(5901), 573–576.
94. Monge-Palacios M., Espinosa-Garcia J. Reaction-path dynamics calculations of the Cl + NH3 hydrogen abstraction reaction: the role of the intermediate complexes. Journal of Physical Chemistry A 2010, 114(12), 4418–4426.
95. Monge-Palacios M., Rangel C., Corchado J. C., Espinosa-Garcia J. Analytical potential energy surface for the reaction with intermediate complexes NH3 + Cl → NH2 + HCl: application to the kinetics study. International Journal of Quantum Chemistry, in press (doi:10.1002/qua.23165).
96. Espinosa-García J. Analytical potential energy surface for the GeH4 + H → GeH3 + H2 reaction: thermal and vibrational-state selected rate constants and kinetic isotope effects. Journal of Chemical Physics 1999, 111(20), 9330–9336.
97. Espinosa-García J., Rangel C., Monge-Palacios M., Corchado J. C. Kinetics and dynamics study of the H + CCl4 → HCl(v′, j′) + CCl3 reaction. Theoretical Chemistry Accounts 2010, 128, 743–755.
98. Rangel C., Espinosa-García J. Analytical potential energy surface describing abstraction reactions in asymmetrically substituted polyatomic systems of type CX3Y + A → products. Journal of Physical Chemistry A 2006, 110(2), 537–547.
99. Rangel C., Espinosa-García J. Potential energy surface for asymmetrically substituted reactions of type CWXYZ + A. Kinetics study. Journal of Physical Chemistry A 2007, 111(23), 5057–5062.
100. Clary D. C. Theoretical studies on bimolecular reaction dynamics. Proceedings of the National Academy of Sciences of the United States of America 2008, 105(35), 12649–12653.
101. Kondo S., Tokuhashi K., Takahashi A., Kaise M. Ab initio study of reactions between halogen atoms and various fuel molecules by Gaussian-2 theory. Journal of Hazardous Materials 2000, 79(1–2), 77–86.
102. Gao Y., Alecu I. M., Hsieh P.-C., Morgan B. P., Marshall P., Krasnoperov L. N. Thermochemistry is not a lower bound to the activation energy of endothermic reactions: a kinetic study of the gas-phase reaction of atomic chlorine with ammonia. Journal of Physical Chemistry A 2006, 110(21), 6844–6850.
103. Xu Z. F., Lin M. C. Computational studies on the kinetics and mechanisms for NH3 reactions with ClOx (x = 0–4) radicals. Journal of Physical Chemistry A 2007, 111(4), 584–590.
104. Zhao Y., Truhlar D. G. Hybrid meta density functional theory methods for thermochemistry, thermochemical kinetics, and noncovalent interactions: the MPW1B95 and MPWB1K models and comparative assessments for hydrogen bonding and van der Waals interactions. Journal of Physical Chemistry A 2004, 108(33), 6908–6918.
Posts Tagged 'physics'

On Wavicles (June 16, 2018)

Is material reality made up of particles or waves or both? I think that the truth is that we've got it wrong. The universe is an experiential phenomenon that transcends realism, so it cannot be reduced to geometry. When we do reduce it to geometry in our interpretation, we get the geometric equivalent of an imaginary number. We get an impossible contradiction of diametrically opposed shapes… points in a void or 'waves' of energetic nothingness. Combined with clues like special relativity, quantum contextuality, and Gödel's incompleteness, I think it should be almost obvious that nature is answering the question of what matter is made of by demonstrating that the question cannot be answered in that way. It is like trying to look for parts of a rainbow inside the water vapor of clouds.

Information does not physically exist (December 31, 2017)

Alfred Korzybski famously said "the map is not the territory". To the extent that this is true, it should be understood to reveal that "information is not physics". If there is a mapping function, there is no reason to consider it part of physics, and in fact that convention comes from an assumption of physicalism rather than a discovery of physical maps. There is no valid hypothesis of a physical mechanism for one elemental phenomenon or event to begin to signify another as a "map". Physical phenomena include 'formations', but there is nothing physical which could or should transform them 'in' to anything other than different formations. A bit, or elementary unit of information, has been defined as 'a difference that makes a difference'. While physical phenomena seem *to us* to make a difference, it would be anthropomorphizing to presume that they are different or make a difference to each other. Difference, and making a difference, seem to depend on some capacity for detection, discernment, comparison, and evaluation. These seem to be features of conscious sense and sense-making rather than of physical cause and effect. The more complete context of the quote about a difference which makes a difference has to do with neural pathways and an implicit readiness to be triggered. In Bateson's paper, he says "In fact, what we mean by information—the elementary unit of information—is a difference which makes a difference, and it is able to make a difference because the neural pathways along which it travels and is continually transformed are themselves provided with energy. The pathways are ready to be triggered. We may even say that the question is already implicit in them." In my view this 'readiness' is a projection of non-physical properties of sense and sense-making onto physical structures and functions. If there are implicit 'questions' at the neural level, I suggest that they cannot be 'in them' physically, and the 'interiority' of the nervous system or other information processors is figurative rather than literal. My working hypothesis is that information is produced by sense-making, which in turn is dependent upon more elemental capacities for sense experience. Our human experience is a complex hybrid of sensations which seem to us to be embodied through biochemistry, and sense-making experiences which seem to map intangible perceptions outside of those tangible biochemical mechanisms. The gap between the biochemical sensor territories and the intangible maps we call sensations is a miniaturized view of the same gap that exists at the body-mind level.
Tangibility itself may not be an ontological fact, but rather a property that emerges from the nesting of sense experience. There may be no physical territory or abstract maps, only sense-making experiences of sense experiences. There may be a common factor which links concrete territories and abstract maps, however. The common factor cannot be limited to the concrete/abstract dichotomy, but it must be able to generate those qualities which appear dichotomous in that way. To make this common factor universal rather than personal, qualia or sense experience could be considered an absolute ground of being. George Berkeley said "Esse est percipi (To be is to be perceived)", implying that perception is the fundamental fabric of existence. Berkeley's idealism conceived of God as the ultimate perceiver whose perceptions comprise all being; however, it may be that the perceiver-perceived dichotomy is itself a qualitative distinction which relies on an absolute foundation of 'sense' that can be called 'pansense' or 'universal qualia'. In personal experience, the appearance of qualities is known by the philosophical term 'qualia' but can also be understood as received sensations, perceptions, feelings, thoughts, awareness, and consciousness. Consciousness can be understood as 'the awareness of awareness', while awareness can be 'the perception of perception'. Typically we experience the perceiver-perceived dichotomy; however, practitioners of advanced meditation techniques and experiencers of mystical states of consciousness report a quality of perceiverlessness which defies our expectation of perceiver-hood as a defining or even necessary element of perception. This could be a clue that transpersonal awareness transcends distinction itself, providing a universality which is at once unifying, diversifying, and re-unifying. Under the idea of pansense, God could either exist or not exist, or both, but God's existence would either have to be identical with or subordinate to pansense. God cannot be unconscious, and even God cannot create his own consciousness. It could be thought that making the category of perception absolute makes it just as meaningless as calling it physical; however, the term 'perception' has a meaning even in an absolute sense, in that it positively asserts the presence of experience, whereas the term 'physical' is more generic and meaningless. Physical could be rehabilitated as a term which refers to tangible geometric structures encountered directly or indirectly during waking consciousness. Intangible forces and fields should be understood to be abstract maps of metaphysical influences on physical appearances. What we see as biology, chemistry, and physics may in fact be part of a map in which a psychological sense experience makes sense of other sense experiences by progressively truncating their associated microphenomenal content. Information is associated with entropy, but entropy ultimately isn't purely physical either. The association between information and entropy is metaphorical rather than literal. The term 'entropy' is used in many different contexts with varying degrees of rigor. The connection between information entropy and thermodynamic entropy comes from statistical mechanics.
Similar statistical mechanical formulas can be applied to both the probability of physical microstates (Boltzmann, Gibbs) and the probability of ‘messages’ (Shannon), however probability derives from our conscious desire to count and predict, not from that which is being counted and predicted. “Gain in entropy always means loss of information, and nothing more”. To be more concrete, in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes–no questions needed to be answered in order to fully specify the microstate, given that we know the macrostate.” ​- Wikipedia Information can be considered negentropy also: “Shannon considers the uncertainty in the message at its source, whereas Brillouin considers it at the destination” – Information is surprise Thermodynamic entropy can be surprising in the sense that it becomes more difficult to predict the microstate of any individual particle, but unsurprising in the sense that the overall appearance of equilibrium is both a predictable, unsurprising conclusion and it is an appearance which implies the loss of potential to generate novelty or surprise.​ Also, surprise is not a physical condition.​ Heat death is a cosmological end game scenario which is maximally entropic in thermodynamic terms but lacks any potential for novelty or surprise. If information is surprise, then high information would correlate to high thermodynamic negentropy.​ The Big Bang is a cosmological creation scenario which follows from a state of minimal entropy in which novelty and surprise are also lacking until the Big Bang occurs. If information is surprise, then low information would correlate to high thermodynamic negentropy.​ The qualification of ‘physical’ has evolved and perhaps dissolved to a point where it threatens to lose all meaning.​ In the absence of a positive assertion of tangible ‘stuff’ which does not take tangibility itself for granted, the modern sense of physical has largely blurred the difference between the abstract and concrete, mathematical theory and phenomenal effects, and overlooks the significance of that blurring. Considering physical a category of perceptions gives meaning to both categories in that nature is conceived as being intrinsically experiential with physical experiences being those in which the participatory element is masked or alienated by a qualitative perceiver-subject/perceived-object sense of distinction. The physical is perceived by the subject which perceives itself to possess a participatory subjectivity that the object lacks. Information depends on a capacity to create (write) and detect (read) contrasts between higher and lower entropy. In that sense it is meta-entropic and either the high or low entropy state can be foregrounded as signal or backgrounded as noise. The absence of both signal and noise on one level can also be information, and thus a signal, on another level.​ What constitutes a signal at in the most direct frame of reference is defined by the meta-signifying capacity of “sense” to deliver sense-experience. If there is no sense experience, there is nothing to signify or make-sense-of. If there is no sense-making experience, then there is nothing to do with the sense of contrasting qualities to make them informative. The principle of causal closure in physics, would, if true, prevent any sort of ‘input’ or receptivity. Physical activity reduces to chains of causality which are defined by spatiotemporal succession. 
A physical effect differs from a physical cause only in that the cause precedes the effect. Physical causality therefore is a succession of effects or outputs acting on each other, so that any sense of inputs or affect on to physics would be an anthropomorphic projection.​ The lack of acknowlegement of input/affect as a fundamental requirement for natural phenomena is an oversight that may arise from a consensus of psychological bias toward stereotypically ‘masculine’ modes of analysis and away from ‘feminine’ modes of empathy. Ideas such as Imprinted Brain Theory, Autistic-Psychotic spectrum, and Empathizing-Systemizing theory provide a starting point for inquiries into the role that overrepresentation of masculine perspectives in math, physics, and engineering play in the development of formal theory and informal political influence in the academic adoption of theories. Criticisms? Support? Join the debate on Kialo. The Universe Has No Purpose? August 11, 2017 Leave a comment The physical universe appears purposeless because it’s only a stage upon which experiences play out. The rest of the universe is not made of forms and functions and driven by entropy, but rather made of participatory perceptions and driven by the opposite of entropy – significance. The universe is overflowing with significance. From spectacular aesthetics to mind-bogglingly sophisticated mechanisms. Our personal life is filled with purposeful agendas competing for our attention. Some agendas are powerful because they are urgently asserted from our bodies, from society, or from some immediate circumstance that we confront. Others are asserted with subtlety over years…a barely perceptible theme that connects the dots over a lifetime but which shapes our destiny or career. 21st century madman’s picture of God February 25, 2017 4 comments In/out : Electromotive-sensory force :: Around and around : Gravitoentropic-Magnetic a-motive field Are We Wrong About The Universe? December 7, 2016 4 comments Are we today as wrong about any scientific fact that is widely accepted as the belief that the earth was the center of the universe and the like? It’s not so much a particular scientific fact that we are currently wrong about, but rather the interpretation of those facts which is ultimately incomplete and inverted. In my view, the cosmological picture that we have inherited is as wrong as geocentric astronomy was, in that we presume a physical universe of forces, fields, particles, and mechanisms; forms and functions which act in the complete absence of any kind of experience or awareness. I expect that we will eventually come to understand that unconscious forms and functions cannot generate any such thing as a sensation or feeling, and that it is actually forms and functions which are presentations within a deeper context of universal perceivability. Because we have made great use of the tools of science to objectify the universe by factoring out our own subjectivity, we have fallen under a kind of spell of amnesia in which we exclude the process of objectification itself from our picture of the universe. In the effort to dispel the ghost-in-the-machine legacy of Cartesian Dualism, we have succumbed to a more insidious dualism, which is that of “illusion” vs reality, or “emergent properties” vs physical systems. From this vantage point, we are susceptible to any kind of theory which satisfies our empirical measurements, regardless of how incompatible they are with our direct experience. 
As long as a legitimate scientific authority stands behind it, the educated public happily swallows up anti-realisms in the service of realism…multi world interpretations, superposition, vacuums filled with energy. There is nothing wrong with entertaining these very legitimate possibilities, but there is a deep irony which is being overlooked. The problem is that we have taken ourselves out of the picture of the universe, but we haven’t gone far enough. We have over-estimated our objectivity in one sense and under-estimated it in another so that the universe we imagine as objectively present looks, sounds, tastes, and feels just as it would to a highly culturally conditioned Homo sapien of the early 21st century. We have failed to appreciate the profound truths revealed by Relativity, quantum uncertainty, incompleteness, the placebo effect, and the vast pool of insight provided by centuries of direct consciousness exploration. Had we been willing to connect the dots, I think that we would see the common denominator is that nature is subject to perceptual participation for its fundamental definitions. In other words, what both the empirical and rational methods of inquiry have shown is that nature is inseparable from perceivability. It is a multitude of changing types of awareness which produces and preserves all forms. We are used to thinking that consciousness is a special ability of Homo sapiens, and perhaps a few other species, but this is as naive and egocentric as Ptolemaic astronomy now seems. Just as biology has found no hard line separating living cells from genetic machinery, the study of consciousness has revealed signs of sensation and awareness in everything from ants, single celled plants, even a ball of dough. There seems to be no good reason to automatically consider the activities performed by any natural structure strictly unconscious. Indeed, we may be projecting our own complex human experience of layers of consciousness, semi-consciousness, and seeming unconsciousness onto nature at large. The reality may be that every frame of reference is actually a frame of afference… a trans-spatial, trans-temporal platform for developing temporalizing and spatializing aesthetic experiences. Afference is a neologism adapted from the function of afferent nerves. In this case I am generalizing that function of bringing signals in from the outside. Afference is conceived as a fundamental receptivity to experience which allows for the appearance of all phenomena including space (a sense of distance between tangible or visual presentations) and time (a sense of memory and evaluation of causality) within any given frame. Afference is a hypothetical sub-set or diffraction from the overall Perceivability Spectrum (pansensitivity, pan-afference, or even ‘ference’). This doesn’t mean that every ‘thing’ is conscious. That sort of ‘promiscuous’ panpsychism is only the first step away from the pseudo-dualism of contemporary science. It can help us to begin to break through our anthropocentrism and consider other scales of time and body size, however it can also lead to misguided expectations about inanimate objects ‘having’ experiences rather than their objecthood ‘being’ an experience within our body’s perceptual scales and limits. The experience of a computer for example, may be limited to the hardware level where natural sensory acquaintance and motor engagement is felt on the microphysical scale and has no emergence to genuine high level humanlike intelligence. 
By considering consciousness (not human consciousness, but universal perceivability) to be the source of all qualities and properties of nature, the Hard Problem of materialism solves itself. Physical forces and fields need not be sought out to explain the creation of bodies-with-awareness, which are impossible by definition in my view. In my view there is no room for any kind of sensation or participation as a mechanical product of geometry or computation. Instead, we should recognize that it is experiential phenomena alone which present themselves as bodies, images, thoughts, feelings, etc. Every appearance of mechanical or random force in our frame of perception is ultimately a feeling of participation and sense in a distant and alienated frame of perception. Every appearance of a ‘field’ (gravitational, electromagnetic, or otherwise) is in the same way only a range of sensitivity projected into another range of sensitivity that uses spatial terms (rather than non-spatial or trans-spatial like olfactory or emotional sense). It is the sense modality of tangibility which deals in spaces and geometries: visible and/or touchable forms. With the ‘field’ model, we are presuming regions of space as domains within which effects simply found to be present by definition. By using the afference model instead, locality is understood to be a symptom of how extra-local phenomena are translated into locality-constrained sensory modes. Afference opens the door to understanding how not to take presence for granted and to see it as a relativistic, aesthetically driven universal phenomenon (or the absolute meta-phenomenon). Supporting articles MSR Schema 3.3 November 13, 2016 1 comment Physical existence is consciousness which has been cooked by the entropy of relative unconsciousness. November 6, 2016 Leave a comment LRM: Can you please explain? Craig Weinberg Think of consciousness as the raw cookie dough which comprises the totality of nature. Nature as I’m using it includes thoughts, imagination, dreams, fiction, etc. It includes the experience of the thought “square circle” but it does not include the referent of that thought as that referent is purely artificial/unnatural/logically impossible. To get from this raw set of experiences to experiences which are ‘physical’, i.e. which persist in a tangible sense (literally, we can experience a sense of touching them, or touching something that can touch/hold/collide with them), I propose that there is a hierarchy of layers of disconnection or dissociation in which direct experience becomes increasingly indirect. The dream of the person is not composed of the function of the brain, rather the brain is the set of sub-personal dreams which have become partitioned off in the formation of personhood. Think of how language begins…we have the personal experience of learning the alphabet and how to use letters to spell words. We become so good at reading letters that it eventually becomes second nature. The personal experience now evolves into an experience of taking in entire sentences of words, while the piecing together of letters to form phonemes in our internal dialogue is relegated to the sub-personal. There is now a layer of entropy…a leveling of insensitivity which insulates our direct attention from those less-relevant experiences which are nonetheless occurring at a lower level. 
Extrapolating on that, imagine that over billions of lifetimes, and countless pre-biotic experiences before that, the sub-personal content has been subjected to this kind of objectification so many times that the sub-personal quality becomes impersonal…shared only through a geometrically summarized protocol of frozen touch…tangibility through quantification or maximally layered entropy of tactile sense. Visibility begins to recover some of that loss of sense by partially removing the tangibility restriction, such that there is an intangible medium for re-connecting with tangibly disconnected experiences. That intangible recovery of the disconnected tangible is the parent of space, time, and light. It is “c²”. LRM: “So.. consciousness is a kind of sum of an increasingly indirect references??” It is that, but it as also the direct ‘ference’ itself. Consciousness is the sole primordial absolute. It is the the necessary ingredient of any and all possible phenomena, including possibility itself. SW: “It seems that consciousness is not so much layered through the body or the physical (though this may certainly be something that happens) but, rather, it is our conceptualizations that provide layer upon layer. I think these conceptualizations happen precisely because we are not embodied enough, like an existential terror that keeps us from fully feeling and we insulate ourselves from the terror of embodiment by adding layer upon layer upon layer of insulation between ourselves and how we experience our embodiment, primarily through conceptualization. You use language as an example – language used to refer to the world in which we live – it was a direct representation of that world. Now language is seen in shapes that have no meaning in and of themselves except the way we string them together. “A” is completely meaningless until we give it meaning. Whereas a symbol like a tree has meaning without us doing anything to it. Our efforts to not be embodied can be seen in virtually every aspect of society. And by embodied I mean to feel ourselves as our bodies and the chemical bath of emotions and the intuition and connectedness that comes from physical form and for this experience to have primacy over the conceptual layering we do that is driven by our inability to deal with our embodiment. It seems to me that we are only really conscious when we are able to be in our bodies without adding layers to our experience. Perhaps it is two sides of the same coin? I don’t know.” Craig Weinberg Yes I think two sides of the same coin, or to try to be more precise: Consciousness in general* creates the experience of bodies and physical matter in the first place through layering sensitivity gaps** but when we, as human beings, expand our consciousness we become more fully aware of our humanity, which includes the direct experience of the world through the body and of the body itself. I agree with you on language, although I would not automatically assume that it begins as a way to talk about things in the world. I think before that it begins as a way of imitating each other’s natural gestures. It probably developed in both the inner and outer direction at the same time, allowing us to communicate our feelings and call and sing to each other as well as to call each others attention TO some event or condition in the shared world of our bodies. “Our efforts to not be embodied can be seen in virtually every aspect of society” Absolutely. 
I see this in a deep historical context…the swing from nature shamanism to poly and monotheism to dualism to deism to atheism/anti-theism. The pendulum swing in philosophy corresponds to the political and technological swing Westward. The rise of capitalism was concurrent with the rise of physicalism, not coincidentally, but necessarily. The dis-Orientation (Orient = East) toward the world of the body and Copernican anti-centricity has to do with converting the primacy of subjective kinds of feelings into secondary ‘properties’. This relates to the sense of property, as the proprietor herself becomes the ghostly ‘owner’ of a body and of material positions. Here’s where the trouble begins…the immateriality of ownership is projected onto objects, the trading of which is facilitated by a super-object…”currency”. Money then becomes smaller and smaller, more and more abstract until it has reached the state of purely symbolic, immaterial disembodiment. All this to say that we don’t just want to be disembodied, we want to become money. Money is always welcome. Money is always loved and appreciated. It never goes out of style, it never cares what anyone thinks of it…it can do most anything and anything it can’t do it doesn’t value. Money is our human social counterfeit essence…our ultimately refined sense of insensitivity. Through this ‘love of money’ we seek immortality and an escape from both the body and the less-than-omnipotent mortality which comes with it. *(Consciousness in general = what I call “pansensitivity”, and others call nondual fundamental awareness) **(Sensitivity gaps = what I identify as the ultimate source of entropy) First Consciousness or Reality? October 1, 2016 2 comments When answering the above question, please provide definitions for reality and consciousness because I’m not even sure I fully understand what they are. Thank you. These are my understandings and should not be taken to constitute knowledge which is considered consensus science or philosophy. These are conjectures offered to inspire a deeper understanding into the nature of consciousness and reality. Reality = Conscious experience in which relative qualities of realism are present. These qualities typically include persistence in memory, coherence, non-contradiction in causality, and shared pervasiveness, however we know that in a dream, even the most surreal conditions can be taken for reality. From this we can conclude that while on one level we believe that reality is based on qualities of realism, consciousness can be spoofed into assigning realistic qualities to any experience. Logically we might think that the experience of waking up is what creates the difference between reality and dreaming, and that our waking life is simply a dream which we have not yet awakened from. There is another possibility, which is that our personal consciousness is part of a larger hierarchy or holarchy of conscious experiences, such that our sense of waking as being conscious of that which is finally and authentically real may be a sense which is as real as anything can ever be. Consciousness = All that is not present in complete unconsciousness. We can use a lot of different terms to specify limits on this or that aspect of conscious experience. We can talk about awareness, perception, feeling, sensing, etc, or attention and being awake, being alive. In my view the point is not to make the subject more complicated but to distill it to its essence. We know what unconsciousness is. We know what general anesthesia is. 
We can look at a term like ‘local anesthetic’ and see an intuitive connection between numbing of sensation and the annihilation of consciousness in general anesthesia. Between and opposing these poles, we can triangulate a term like ‘aesthesia’ or ‘aesthetic phenomena’ to refer to all that exists which is contingent upon the presence of direct presence of sensory perception and participation. Sense can be understood as the content of all experience, including thoughts and ideas, but not limited in any way to human beings, biology, or physical substances. The point of a term like ‘aesthetic’ is to make a distinction between experiential phenomena which are indisputably concrete and anesthetic phenomena such as physical forms and logical functions (physics or information processing), which are, as far as we can ever know, hypothetical and abstract. We cannot know physics except by an indirect experience through our body and we cannot know information except by an indirect experience through our intellectual contemplation. Both of these are dependent upon conscious powers of perceptual participation and comparison. To answer the OP question then, we must first completely sever any connection between consciousness, reality and the particular context of human beings so that consciousness as sense-perception/motive participation can be fairly considered alongside the other possibilities of physical mass-energy/space-time/force-field and information-theoretic form-functions/data-processes. If we fail to detach consciousness or qualia from the human experience then we are not comparing apples to apples. It would be like mistaking all forms of matter for parts of our physical body. Next, we should see that there is no reality which cannot be dreamed. Lucid dreamers report that their dreams can be examined in excruciating detail and can contain experiences which are indistinguishable from waking reality. We should also leave the possibility open that even though our final reality could be a dream, it still could be different from any other dream. This difference could be an authentic sense that waking life is not any dream, but the only dream which is shared by all conscious experiences. It is the dream which counts more than all others because of its shared access, and because of the significance which is accumulated in a universe of experience which is felt so intensely for so many, for so long a time. I consider significance to be a concrete metaphysical feature – an aesthetic saturation which underlies both the privately impressive power of symbolic and archetypal phenomena and the publicly expressed power of energy, mass, matter and gravity. Significance manifests tangibly as an arrest of motive effects, a slowing or marking of time and intensification of attention. The physical universe is a view of significance – the persistence of all experience as viewed from an anthropocentric scope of sensitivity/insensitivity. It is collection of many layers of limits of our human awareness which we see as the gaps between ourselves and our mind, brain, body, and universe of bodies. If our awareness were to expand to a transpersonal scope, we would appreciate directly that consciousness is not only a human phenomenon, but the only possible phenomenon which can make any and all other phenomena possible. Without physics or information, we can still conceive of a universe of raw feelings, colors, sounds, etc. There could still be a dream in which things like matter or narrative activities could be present. 
Without consciousness/qualia, we can fool ourselves into thinking that a universe of Reality could ‘exist’ but when examined more carefully, our notion of ‘existence’ unravels into a purely abstract, faith-based concept which seems likely to me to be derived from our subjective sense of separation within consciousness rather than an objective sense of objectivity. When we ask why something which we imagine has no experience, like a stone, it becomes a problem to rationally expect that any sort of experience should develop at all. A universe which is a physical machine cannot include immaterial feelings and thoughts without support from physics. A universe which is immaterial ‘simulation data’ also cannot include real aesthetic qualities other than the literal qualities which constitute each separate switch or branch in the data-processing substrate (be it material or otherwise). If we include conscious experiences as ‘emergent properties’ of either physics or information, we have become guilty of chasing our tail. Since the purpose of reducing our model of nature to a single phenomenon is to rationally explain every phenomenon with that single phenomenon, resorting to emergence amounts to inventing an unacknowledged second substance which has no rational connection to the first. The solution to this in my view is to begin with the single phenomenon of sense (pan-aesthesia or pansensitivity) as the Absolute. From there, we get principles such as symmetry and reason with which to identify relations between physics and information as a Hegelian dialectic which reflects, rather than produces the original thesis of sense. Sense is the thesis, physics and information are the dual-aspect or double antithesis (antithesis of each other and of sense), significance is the synthesis, and entropy or insensitivity is the antithesis of the synthesis (the shadow of the thesis within the thesis). Because this quadruplicity is absolute, if we call it panpsychism we must be careful not to confuse it with what I call promiscuous panpsychism in which every thing, such as stones or signs have consciousness. Under pansensitivity, every “thing” is an appearance of consciousness within itself. We are not a body which has become conscious, we are a conscious experience which has foregrounded itself by back-grounding other conscious experiences as bodies. In my view, a stone is what we see through the sense perspective of a human body in an anthropocentric timescale. In its native geological-astrophysical timescale, the events associated with the formation of minerals and planets are as dynamic and creative as biology or psychology. We see a stone because our sense of the experience which stretches back billions of years is frozen, relative to the scale of our own human experience. To us, it is a stone. Without us, there is no stone, only an aesthetic dream which speeds along at sampling rates too extreme for us to sense personally. The mineral level of experience is both too fast on the molecular level and too slow on the interstellar level for us to relate to directly. The relation between a medium-rate human experience and an extreme-rate inorganic experience is presented as a truncated and collapsed aesthetic: as classical physics; density, mass, gravity, persistence of linear duration and causality, etc. 
Our human experience is nested within a deeper biological-zoological body experience, which is nested within a deeper organic-chemical experience, which is nested within a deeper inorganic-astrophysical experience. Each of these nested ‘gears’ is concurrent with our own, even down to the Big Bang, which is eternally present as both an event in time and as the diffraction of sense into physical and psychological phenomena from beyond time. In this way, the Absolute is itself in ‘superposition’ of being sense experience which is becoming more significant sense experience by its diffraction as the physics vs information duality. This duality can be seen on the quantum scale as entanglement and contextuality. I think that entanglement is the parent of space and physics, since non-locality is a contrast against locality. For space or distance to exist, there must be a parallel, simultaneous relation which juxtaposes a non-local experience of ‘here’ with multiple experiences of ‘there, and there’. For time or causality to exist, there must be a serial contextuality in which a de-contextualized or immutable time-traveller is defined against the context of its ongoing mutable experience. The physics-information duality shows up in relativity also as energy is defined in terms of spatiotemporalized mass (E = mc²). Energy, as the capacity to perform work is, in my view, the event horizon of conscious participation as it makes its teleological impact on what has been perceived. Energy is the footprint of subjectivity upon the subjective perception of objectivity, as it expresses the motive power to cause significant effects (or effect increasing significance). Gravity is the shadow of E = mc²…the significance-masking effect which we can conceive of as both physical entropy and information entropy. Gravity is the collapse of former significance in a given frame of reference which results in an increase in mass and aesthetic ‘seriousness’ of what remains. To sum up: Reality is what consciousness finds serious and significant. It is a relation between the local frame of perception (such as a human lifetime) and the larger frames of reference in which that frame is nested (the history of the human species, zoology, biology, chemistry, physics, and metaphysics). In this relation the relative insignificance of the local frame is presented as a heightened quality of significance of the distal frames. We are thus presented with a way to use our limited consciousness to partially transcend its own limitation, by recognizing its own incompleteness as a material fact. This is ironic as it is the unbounded, absolute power of consciousness to transcend itself which gives rise to the nearly-absolute boundedness of realism into ‘Reality’. In other words, reality seems real because consciousness needs to become partially unreal to itself to create realism. Realism is the local appearance of phenomena beyond local appearance: Non-local consciousness (pansensitivity) as localized, decontextualed, de-sensitized, mechanics. Reality is the way that consciousness creates the possibility of greater and greater enchantment through the appearance of disenchantment. A Quantum Analogy with Dice, Fans, and Basketball September 10, 2016 Leave a comment This is as much for my own edification as anything else, but I’m trying to get across my understanding of what is called the quantum wave function collapse. After that, it goes off into my usual attempt to say something absolutely particular about absolutely everything in general. 
From what I have gathered, the quantum wave function is a statistical mean which may or may not correspond to a physical phenomenon.

Now, in QM we try to predict the probability density for a particle's position (or momentum, or energy, or whatever). We could try to do this by writing an equation for how p(x) changes over time, but it turns out that doesn't give us enough information; there are situations where particles start with identical p(x) but do different things as time goes on. It's found that we do get enough information to make predictions if we write an equation for a complex-valued function ψ(x), and derive the probability density from it as p(x) = ψ*(x)ψ(x). The way the complex phase of ψ(x) varies from point to point encodes additional information about the particle's momentum, which is necessary to predict its future behavior. It has units of the square root of a probability density, which is a bit weird but perfectly mathematically acceptable. This is of course the wavefunction, and the equation that determines how it varies is the Schrödinger equation. – source

From another source:

An observable is "something we can observe", and it is represented in quantum mechanics by an operator, that is, something that operates on a quantum state. A very simple example of an operator is the position operator. We usually write the position operator along the x axis as x̂ (which is just x with a "hat" on top of it). If the quantum state |Ψ⟩ represents a particle, that means that it contains all the information about that particle, including its position along the x axis. So we calculate the following: ⟨x̂⟩ = ⟨Ψ|x̂|Ψ⟩. Note that the state |Ψ⟩ appears as both a bra and a ket, and the operator x̂ is "sandwiched" in the middle. This is called an expectation value. When we calculate this expression, we will get the value for the position of the particle that one would "expect" to find, according to the laws of probability. To be more accurate, this is a weighted average of all possible positions; so a position that is more probable would contribute more to the expectation value. However, in many cases the expectation value is not even a value that the observable can get. For example, if the particle can be at position x = +1 with probability ½ or at position x = −1 with probability ½, then the expectation value would be x = 0, whereas the particle could never actually be in that position. – source

In terms of the dice analogy, the table above shows a bell curve function of probability density for the observables of the dice. To make this a metaphor for quantum observations I think it would look more this way:

The difference is that we can't observe the wave function, we can only think of the set of possible observables for a given system and give it a name. This is important because in my view, quantum theory actually oversteps its mandate as a rational solution to a set of physical problems to become a faith-based solution to a set of metaphysical/mathematical problems. There can never be any observation of the quantum itself; there can only be qualitative observations from which we can infer quantitative ideas of relation*.

*note that 'relation' is itself an aesthetic quality which is dependent upon a preferred sense of grouping. This preference, so far as we can ever know, only occurs within a sensed experience in which aesthetic phenomena are presented as sharing a common quality. Physics in and of itself can have no relations, as general relation qualities cannot be decomposed into fundamental physical forces.
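To make the quoted description a little more concrete, here is a minimal numerical sketch of how a probability density p(x) = ψ*(x)ψ(x) and a position expectation value ⟨Ψ|x̂|Ψ⟩ are computed from a wavefunction. The one-dimensional grid, the Gaussian wave packet, and all of the variable names are my own illustrative assumptions, not anything taken from the quoted sources.

```python
# Minimal sketch: discretize a complex wavefunction psi(x) on a grid, form the
# probability density p(x) = psi*(x) psi(x), and compute the position
# expectation value <x> = sum of psi*(x) * x * psi(x) * dx (the "sandwich").
# The Gaussian wave packet below is an arbitrary illustrative choice.
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)      # position grid (arbitrary units)
dx = x[1] - x[0]

x0, sigma, k0 = 1.5, 1.0, 2.0           # centre, width, mean wavenumber
psi = np.exp(-(x - x0) ** 2 / (4 * sigma ** 2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize total probability to 1

p = np.abs(psi) ** 2                    # probability density p(x) = |psi(x)|^2

x_expect = (np.conj(psi) * x * psi).sum().real * dx   # weighted average position

print(f"total probability = {np.sum(p) * dx:.6f}")    # ~1.0
print(f"<x> expectation   = {x_expect:.6f}")          # ~x0 = 1.5
```

The same weighted-average idea reproduces the two-point example in the quote: (+1)(½) + (−1)(½) = 0, an "expected" position at which the particle can never actually be found.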
No physical mechanism can make quantitative ‘relations’ happen. What the quotes above are trying to say, in my view, is that the wave function itself is an imaginary square root of the inferred probability density of the mentally counted sets of actually observed phenomena. We want to think that quantum particles are the observed dice rolls: a pair of upturned faces of cubes containing a finite number of dots or ‘pips’, and that the wave function is the set of numbers 1 to 6 corresponding to each possible set of dots, but in reality there may not be two dice at all. The observable reality is that when we look at one die, the other one disappears, and we can only see both dice if we don’t look at the dots. Two more analogies illustrating the reducibility of quantum ‘particles’ to qualitative sense: 1. Looking at a ceiling fan in motion, we can either see a circular blur, or if we follow the blur with our eyes at the same frequency as the fan, we can see the fan blades (or a standing-wave of averaged images of fan blades) but not really the circular blur. 2. I’m in my house and hear noises coming from outside. One sounds like a loud motor, and one sounds like a frequent thumping. I know from experience that the neighbors do like to play basketball in their driveway when the weather is nice. I also know that the neighbors across the street are having their roof replaced which may or may not involve some kind of compressor noise. Finally, I know that Saturday morning is a time when there are a lot of neighbors mowing their lawn. The point of this example is to illustrate the common/superficial understanding of the wavefunction collapse would be analogous to me going outside and looking around. By observing, I find out whether there are roofers running some kind of noisy machine and pounding on shingles, or whether there is one neighbor mowing their lawn and another pounding on their fence or something, or whether there’s some combination of things going on which may include a basketball game. By ‘finding out’ what’s going on, I am collapsing the wave function of possibilities because I now know what the noises I heard inside my house refer to outside. This is not correct as an analogy though either. It cannot be applied to quantum observables. The delayed choice quantum eraser and other experiments show surreal phenomena such as entanglement, contextuality, and the mutual exclusivity of entanglement and contextuality. It would be like me like going outside and seeing that the hammering is definitely coming from the roofers across the street, but then going outside again later and seeing that the there is a dude playing basketball instead and there were never any roofers. Entanglement/Contextuality would be like if I went out and played basketball with the neighbors then as long as I was playing, suddenly no neighbor could have their roof repaired. In terms of the fan, it would be like if I had two fans in two separate rooms controlled by the same light switch, putting my hand in the way of one fan not only stops the other, but you can tell by filling the rooms with feathers that stopping one fan makes it so no feathers had ever blown around in the other room. Entanglement and contextuality are opposite orientations of the same thing. The entanglement view focuses on the synchronization of what has been connected experimentally while the contextuality view focuses on the strange contradiction to our expectations about causality extending from the past to the present. 
Anyhow, this too is not correct in my view. What is being overlooked is that we are taking for granted that the quality of finality in our experience is identical to the property of factuality. We want to say that because we have actually seen the blades of the fan, they are the physical objects which exist and the circular blur is an optical illusion – true enough in the case of a fan. We want to say that seeing a roofer pounding nails into a shingle is evidence that roofing is what is actually going on and the idea that the sound we heard inside could have been a basketball bouncing was a misperception. This is not what physics is telling us, however. Instead, it is telling us, in my view, that there is no fan or basketball or roofer, nor is there any mistake of misperception, there are only sensory experiences, some of which acquire a higher aesthetic density of ‘realism’ than others. We say ‘seeing is believing’ because visual sense presents such an unambiguous seeming experience most of the time but we know from optical illusions and from comparing binocular differences that even seeing should not be believed. What we are seeing when we look at something like the double slit experiment is a context in which perception itself is revealed to be 1. more fundamental than the ‘object’ which is sensed and 2. a revealing of (sense experience) itself as both a self-revealer and a self-concealer. In the phenomenon of seeing visible light we have a metaphor about the relation between metaphor and non-metaphor which is expressed non-metaphorically. It is a context in which the contextualization of contextuality is presented as an uncontextualized/absolute text. (sense = the sole abtext?) Philosophically, we should see that it is necessary to reverse the priority assigned by Galileo and Locke to tangible/physical qualities being primary and phenomenal qualities being secondary. Physics should be considered a set of phenomenal qualities which have been reduced by the subtraction of intangible modes of sensitivity. It is only in the intangible modes which nature can be fully appreciated as the self-revealing, self-concealing meta-phenomenon that it is. Finally, here’s another serendipitous experiment with light. On a polished granite surface I see the reflection of a single overhead light as two separate reflections. With one eye open, I can see the image of the light is on the edge of the surface, while with the other eye open instead, the image of the light is in the center of the surface. Try it next time you see a floor or counter like this and can play with closing one eye or the other. Notice how you can choose between two separate but entangled images of the light which move as your head moves or, you can focus your sight so that there is only a single image of the light. In the former case, the details of the surface are clear – you can see the patterns of granite and can tell exactly which colored spots seem to be illuminated by the overhead light. In the latter case, you have to look ‘through’ or passed the grain of the stone and focus your visual attention on the image which is reflected from the polished surface. To make the former view real is the materialist orientation. To make the latter view real is the information-theoretic orientation. Both orientations entail the disorientation/de-realization of the other. 
The materialist says the floor is the real thing being illuminated, while the computationalist says that the floor and light are only generic vehicles for the underlying reality of mathematical laws of relation. What is left out of both of these views is the connection to the eye and the experience of seeing. The eye’s location is what is telling my experience of where the image of the light’s reflection appears to be. Indeed, that appearance *is* the actual location of the lights reflection as seen through one eye. When seen through the other eye, there is a different actual location. When seen through both eyes, there are either two semi-actual locations or there is one actual light reflection against a single blurred semi-actual location. I cannot emphasize this enough: Quantum theory is about perceiving perception. It tells us not that the reality of nature is inconceivably weird and unfamiliar, but that nature is more than ‘reality’. The different concepts of wave function, probablility density, and observables map to quantum contextuality, quantum entanglement, and classical (collapsed) realism respectively. QM is about how appearances acquire density of realism by consensus of accumulated limits. For a quantum phenomenon (which is totally abstract) to begin to seem concretely ‘real’, the sense of contextuality or entanglement must in one frame of reference seem to be shared as an isomorphic sense in every other frame of reference, without contradiction. Thus there is no mysterious ‘classical limit’ at which quantum decoherence occurs, and no magical ‘emergent properties’ which appear out of nowhere to turn intangible figments of math into concrete objects – there is only a dynamic aesthetic phenomena (sense experiences or qualia) which merge and diffract as aesthetic meta-phenomena (veridical perceptions or ‘shared reality’). There is no ‘finding out’ what really happened, there is only an adding of dimensions of realism by sacrificing qualities that extend beyond realism. This goes for our own consensus of sense modalities as well as a consensus among peer-reviewed scientific papers. The sense of realism arises from the multiplicity of limited perspectives, which then divides the total entropy of doubt/uncertainty. With only one slit or sense or scientific mind, any given phenomenon is presented as-is – an observed effect only. With multiple senses or slits or peers, we observe a different effect which enables a cross-reference that goes beyond the observation itself to an observation of the observation process. This opens the door not only to theories which connect the particular observations but which can apply to many other kinds of observations, as well as to theories of observation in general. In this way, the general/rational/contextual/illuminating and the particular/empirical/textual-entangled/illuminated can be reconciled as opposite ends of a single spectrum of sense/aesthetic/ab-textual/visibility. Everything is Not Energy September 7, 2016 5 comments another source: 4. “Energy is… • a scalar quantity, • abstract and cannot always be perceived, • given meaning through calculation, • a central concept in science. “- source Shé Art The Art of Shé D'Montford Transform your life with Astrology Be Inspired..!! Listen to your inner has all the answers.. Rain Coast Review Thoughts on life... by Donald B. Wilson Perfect Chaos The Blog of Author Steven Colborne Multimedia Project: Mettā Programming DNA I can't believe it! 
Atom: the smallest unit of a chemical element

Atoms are very small pieces of matter. There are many different types of atoms, each with its own name, mass and size. These different types of atoms are called chemical elements. The chemical elements are organized on the periodic table. Examples of elements are hydrogen, carbon, chlorine, and gold.

(Figure: a helium atom in its ground state. Electric charge: zero (neutral), or an ion charge. Components: electrons and a compact nucleus of protons and neutrons.)

Atoms are very small, but their exact size depends on the element. Atoms range from 0.1 to 0.5 nanometers in width.[1] One nanometer is about 100,000 times smaller than the width of a human hair.[2] This makes atoms impossible to see without special tools. Scientists discover how they work and interact with other atoms through experiments.

Atoms can join together to make molecules: for example, two hydrogen atoms and one oxygen atom combine to make a water molecule, and many separate molecules make up a glass of water. When atoms join together it is called a chemical reaction. Atoms can join together without forming separate molecules: in this case every atom is connected to a giant web of atoms. These are called crystals.

Atoms are made up of three kinds of smaller particles, called protons, neutrons and electrons. The protons and neutrons are heavier, and stay in the middle of the atom, which is called the nucleus. The nucleus is surrounded by a cloud of light-weight electrons, which are attracted to the protons in the nucleus by the electromagnetic force because they have opposite electric charges. The number of protons an atom has defines what chemical element it is. This number is sometimes called its atomic number. For example, hydrogen has one proton and sulfur has 16 protons. Because the mass of neutrons and protons is very similar, and the mass of electrons is very small, we can call the number of protons and neutrons in an atom its atomic mass.[3]

Atoms move faster when they are in their gas form (because they are free to move) than they do in liquid form and solid matter. In solid materials, the atoms are tightly packed next to each other so they vibrate, but are not able to move (there is no room) as atoms in liquids do.

The word "atom" comes from the Greek (ἀτόμος) "atomos", indivisible, from (ἀ)-, not, and τόμος, a cut. The first historical mention of the word atom came from works by the Greek philosopher Democritus, around 400 BC. Atomic theory stayed as a mostly philosophical subject, with not much actual scientific investigation or study, until the development of chemistry in the 1650s.

In 1777 French chemist Antoine Lavoisier defined the term element for the first time. He said that an element was any basic substance that could not be broken down into other substances by the methods of chemistry. Any substance that could be broken down was a compound.[4]

In 1803, English philosopher John Dalton suggested that elements were made of tiny, solid balls called atoms. Dalton believed that all atoms of the same element have the same mass. He said that compounds are formed when atoms of more than one element combine. According to Dalton, in a certain compound, the atoms of the compound's elements always combine the same way.

In 1827, British scientist Robert Brown looked at pollen grains in water under his microscope. The pollen grains appeared to be jiggling.
Brown used Dalton's atomic theory to describe patterns in the way they moved. This was called Brownian motion. In 1905 Albert Einstein used mathematics to prove that the seemingly random movements were caused by collisions with the atoms and molecules of the water, and by doing this he conclusively proved the existence of the atom.[5]

In 1869, Russian scientist Dmitri Mendeleev published the first version of the periodic table. The periodic table groups elements by their atomic number (how many protons they have; this is usually the same as the number of electrons). Elements in the same column, or group, usually have similar properties. For example, helium, neon, argon, krypton and xenon are all in the same column and have very similar properties. All these elements are gases that have no colour and no smell. Also, they are unable to combine with other atoms to form compounds. Together they are known as the noble gases.[4]

The physicist J.J. Thomson was the first person to discover electrons. This happened while he was working with cathode rays in 1897. He realized they had a negative charge, which meant the rest of the atom had to carry a positive charge. Thomson created the plum pudding model, which stated that an atom was like plum pudding: the dried fruit (electrons) were stuck in a mass of pudding (a sphere of positive charge). In 1909, a scientist named Ernest Rutherford used the Geiger–Marsden experiment to prove that most of an atom is in a very small space, the atomic nucleus. Rutherford took a photo plate and covered it with gold foil, and then shot alpha particles (made of two protons and two neutrons stuck together) at it.[6] Many of the particles went through the gold foil, which proved that atoms are mostly empty space. Electrons are so light that they make up far less than 1% of an atom's mass.[7]

In 1913, Niels Bohr introduced the Bohr model. This model showed that electrons travel around the nucleus in fixed circular orbits. This was more accurate than the Rutherford model. However, it was still not completely right. Improvements to the Bohr model have been made since it was first introduced.

In 1913, chemist Frederick Soddy found that some elements in the periodic table had more than one kind of atom.[8] For example, any atom with 2 protons should be a helium atom. Usually, a helium nucleus also contains two neutrons. However, some helium atoms have only one neutron. This means they truly are helium, because an element is defined by the number of protons, but they are not normal helium, either. Soddy called an atom like this, with a different number of neutrons, an isotope. To get the name of the isotope we look at how many protons and neutrons it has in its nucleus and add this to the name of the element. So a helium atom with two protons and one neutron is called helium-3, and a carbon atom with six protons and six neutrons is called carbon-12. However, when he developed his theory Soddy could not be certain neutrons actually existed. Mass spectrometers, instruments that measure the mass of individual atoms, later confirmed that atoms of the same element can have different masses, and in 1932 physicist James Chadwick proved that the extra mass comes from a neutral particle, the neutron.[9]

In 1938, German chemist Otto Hahn became the first person to create nuclear fission in a laboratory.
He discovered this by chance when he was shooting neutrons at a uranium atom, hoping to create a new isotope.[10] However, he noticed that instead of a new isotope the uranium simply changed into a barium atom, a smaller atom than uranium. Apparently, Hahn had "broken" the uranium atom. This was the world's first recorded nuclear fission reaction. This discovery eventually led to the creation of the atomic bomb.

Further into the 20th century, physicists went deeper into the mysteries of the atom. Using particle accelerators they discovered that protons and neutrons were actually made of other particles, called quarks. The most accurate model so far comes from the Schrödinger equation. Schrödinger realized that the electrons exist in a cloud around the nucleus, called the electron cloud. In the electron cloud, it is impossible to know exactly where electrons are. The Schrödinger equation is used to find out where an electron is likely to be. This area is called the electron's orbital.

Structure and parts

An atom is made up of three main particles: the proton, the neutron and the electron. Hydrogen-1, an isotope of hydrogen, has no neutrons, just the one proton and one electron. A positive hydrogen ion has no electrons, just the one proton. These two examples are the only known exceptions to the rule that all other atoms have at least one proton, one neutron and one electron each.

The mass of atoms is measured using the atomic mass unit (amu), which is about 1.7 × 10⁻²⁴ grams.[11] Electrons are by far the smallest of the three atomic particles; their size is too small to be measured using current technology,[12] and their mass is 0.00055 amu (about 9.1 × 10⁻²⁸ grams). They have a negative charge. Protons and neutrons are of similar size and weight to each other, about 1 amu in mass. Protons are positively charged and neutrons have no charge.[11] Most atoms have a neutral charge; because the number of protons (positive) and electrons (negative) are the same, the charges balance out to zero. However, in ions (with a different number of electrons) this is not always the case, and they can have a positive or a negative charge.

Protons and neutrons are made out of quarks of two types: up quarks and down quarks. A proton is made of two up quarks and one down quark, and a neutron is made of two down quarks and one up quark.

The nucleus is in the middle of an atom. While it makes up almost all of the atom's mass, it is very small: about 1 femtometre (10⁻¹⁵ m) across, which is around 100,000 times smaller than the diameter of an atom.[11] It is made up of protons and neutrons. Usually in nature, two things with the same charge repel or shoot away from each other. So for a long time it was a mystery to scientists how the positively charged protons in the nucleus stayed together. They solved this by finding particles called mesons that hold together these protons and neutrons.[13][14] Later, scientists found that the quarks in a proton or neutron are held together by a particle called a gluon. Its name comes from the word glue, as gluons act like atomic glue, sticking the quarks together using the strong interaction.[15] Mesons are also made of quarks, so the strong interaction explains how mesons hold the nucleus together.[14]

(Diagram: the main difficulty in nuclear fusion is that protons, which have positive charges, repel each other when forced together.)

The number of neutrons in relation to protons defines whether the nucleus is stable or goes through radioactive decay.
When there are too many neutrons or protons, the atom tries to make the numbers the same by getting rid of the extra particles. It does this by emitting radiation in the form of alpha, beta or gamma decay.[16] Nuclei can change through other means too. Nuclear fission is when the nucleus splits into two smaller nuclei, releasing a lot of stored energy. This release of energy is what makes nuclear fission useful for making bombs and electricity, in the form of nuclear power. The other way nuclei can change is through nuclear fusion, when two nuclei join together, or fuse, to make a heavier nucleus. This process requires extreme amounts of energy in order to overcome the electrostatic repulsion between the protons, as they have the same charge. Such high energies are most common in stars like our Sun, which fuses hydrogen for fuel. Electrons orbit, or travel around, the nucleus. They are called the atom's electron cloud. They are attracted towards the nucleus because of the electromagnetic force. Electrons have a negative charge and the nucleus always has a positive charge, so they attract each other. Around the nucleus, some electrons are further out than others, in different layers. These are called electron shells. In most atoms the first shell has two electrons, and all after that have eight. Exceptions are rare, but they do happen and are difficult to predict.[17] The further away the electron is from the nucleus, the weaker the pull of the nucleus on it. This is why bigger atoms, with more electrons, react more easily with other atoms. The electromagnetism of the nucleus is not strong enough to hold onto their electrons and atoms lose electrons to the strong attraction of smaller atoms.[18] Electrons that are further from the nucleus generally have more energy. An electron can move into a higher energy shell by absorbing energy, or move into a lower energy shell by releasing energy. This energy comes as a burst of light called a photon. When an electron moves from one particular energy shell to another, it will always absorb or release the exact same frequency photon, so an element can be identified by which colors of light it absorbs: its absorption spectrum, or by which colors of light it emits: its emission spectrum.[19] Atoms tend to react with each other in a way that fills or empties their outer electron shell. The most reactive elements are those that need to lose or gain a small number of electrons to have a full outer shell.[20] Radioactive decayEdit Some elements, and many isotopes, have what is called an unstable nucleus. This means the nucleus is either too big to hold itself together[21] or has too many protons or neutrons. When this happens the nucleus has to get rid of the excess mass or particles. It does this through radiation. An atom that does this can be called radioactive. Unstable atoms continue to be radioactive until they lose enough mass/particles that they become stable. All atoms above atomic number 82 (82 protons, lead) are radioactive.[21] There are three main types of radioactive decay; alpha, beta and gamma.[22] • Alpha decay is when the atom shoots out a particle having two protons and two neutrons. This is essentially a helium nucleus. The result is an element with atomic number two less than before. So for example if a beryllium atom (atomic number 4) went through alpha decay it would become helium (atomic number 2). Alpha decay happens when an atom is too big and needs to get rid of some mass. 
• Beta decay is when a neutron turns into a proton or a proton turns into a neutron. In the first case the atom shoots out an electron. In the second case it is a positron (like an electron but with a positive charge). The end result is an element with one higher or one lower atomic number than before. Beta decay happens when an atom has either too many protons or too many neutrons.

• Gamma decay is when an atom shoots out a gamma ray, or wave. It happens when there is a change in the energy of the nucleus. This is usually after a nucleus has already gone through alpha or beta decay. There is no change in the mass or atomic number of the atom, only in the stored energy inside the nucleus.

Every radioactive element or isotope has what is called a half-life. This is how long it takes for half of any sample of atoms of that type to decay into a different isotope or element.[23] Half-lives vary enormously from one isotope to another: in general, the further a nucleus is from a stable balance of protons and neutrons, the faster it tends to decay.

Marie Curie was one of the first scientists to study radioactivity. She discovered a new radioactive element and named it radium. She was also the first female recipient of the Nobel Prize.

Frederick Soddy conducted an experiment to observe what happens as radium decays. He placed a sample in a sealed glass bulb and waited for it to decay. After some time, helium (containing 2 protons and 2 neutrons) appeared in the bulb, and from this experiment he concluded that this type of radiation has a positive charge.

James Chadwick discovered the neutron by observing the decay products of different radioactive isotopes. Chadwick noticed that the atomic number of an element was lower than its total atomic mass. He concluded that electrons could not be the cause of the extra mass because they have hardly any mass.

Enrico Fermi used neutrons to bombard uranium. He discovered that the uranium decayed a lot faster than usual and produced a lot of alpha and beta particles. He also believed that uranium had been changed into a new element, which he named hesperium.

Otto Hahn and Fritz Strassmann repeated Fermi's experiment to see if the new element hesperium had actually been created. They discovered two things Fermi had not observed. First, by using a lot of neutrons the nucleus of the atom would split, producing a lot of heat energy. Second, the fission products of uranium were elements that had already been discovered: thorium, palladium, radium, radon and lead. Fermi then noticed that the fission of one uranium atom shot off more neutrons, which then split other atoms, creating chain reactions. He realised that this process, called nuclear fission, could create huge amounts of heat energy. That discovery led to the development of the first nuclear bomb, tested under the code name 'Trinity'.

References

1. "Size of an Atom". Archived from the original on 2007-11-04. Retrieved 2009-11-29.
2. "Diameter of a Human Hair".
3. Di Risio, Cecilia D.; Roverano, Mario; Vasquez, Isabel M. (2018). Química Básica (6ta ed.). Buenos Aires, Argentina: Universidad De Buenos Aires. pp. 58–59. ISBN 9789508070395.
4. "A Brief History of the Atom". Archived from the original on 2009-12-09. Retrieved 2009-11-30.
5. "Brownian motion - a history". Archived from the original on 2007-12-18. Retrieved 2009-11-30.
6. Education, Reeii (2020-05-30). "Structure of Atom: Class 11 Chemistry NCERT Chapter 2". Reeii Education. Archived from the original on 2020-10-22. Retrieved 2020-10-18.
7. "Ernest Rutherford on Nuclear spin and Alpha Particle interaction" (PDF).
8. "Frederick Soddy, the Nobel Prize in chemistry: 1921".
9. "James Chadwick: The Nobel Prize in Physics 1935, a lecture on the Neutron and its properties".
10. "Otto Hahn, Lise Meitner and Fritz Strassmann".
11. Flowers, Paul (2019). Chemistry 2e: Atoms First. Klaus Theopold, Richard Langley, Edward J. Neth, William R. Robinson, OpenStax College, OpenStax (2e ed.). Houston, Texas: OpenStax. pp. 67–86. ISBN 978-1-947172-63-0. OCLC 1089692119.
12. "Particle Physics - Structure of a Matter". Archived from the original on 2017-07-21.
13. "The Nobel Prize in Physics 1949". Retrieved 2022-05-13.
14. Aoki, Sinya; Hatsuda, Tetsuo; Ishii, Noriyoshi (January 2010). "Theoretical Foundation of the Nuclear Force in QCD and Its Applications to Central and Tensor Forces in Quenched Lattice QCD Simulations". Progress of Theoretical Physics. 123.
15. Flegel, Ilka; Söding, Paul (2004-11-12). "Twenty-five years of gluons". CERN Courier. Retrieved 2022-05-13.
16. "How does radioactive decay work?".
17. "Chemtutor on atomic structure".
18. "Chemical reactivity". Archived from the original on 2009-12-03. Retrieved 2009-12-02.
19. "Atomic Emission Spectra - Origin of Spectral Lines". Archived from the original on 2006-02-28. Retrieved 2022-05-02.
21. "Radioactivity".
22. "S-Cool: Types of radiation".
23. "What is half-life?". Archived from the original on 2013-08-30. Retrieved 2009-12-03.

Other websites
Mean-Field Dynamics for the Nelson Model with Fermions title={Mean-Field Dynamics for the Nelson Model with Fermions}, author={Nikolai Leopold and Soren Petrat}, journal={Annales Henri Poincar{\'e}}, We consider the Nelson model with ultraviolet cutoff, which describes the interaction between non-relativistic particles and a positive or zero mass quantized scalar field. We take the non-relativistic particles to obey Fermi statistics and discuss the time evolution in a mean-field limit of many fermions. In this case, the limit is known to be also a semiclassical limit. We prove convergence in terms of reduced density matrices of the many-body state to a tensor product of a Slater determinant…  Bogoliubov Dynamics and Higher-order Corrections for the Regularized Nelson Model Derivation of the Maxwell-Schr\"odinger Equations: A note on the infrared sector of the radiation field We slightly extend prior results about the derivation of the Maxwell-Schrödinger equations from the bosonic Pauli-Fierz Hamiltonian. More concretely, we show that the findings from [25] about the The Landau–Pekar equations : adiabatic theorem and accuracy We prove an adiabatic theorem for the Landau-Pekar equations. This allows us to derive new results on the accuracy of their use as effective equations for the time evolution generated by the Frohlich D ec 2 01 9 An optimal semiclassical bound on certain commutators We prove an optimal semiclassical bound on the trace norm of the following commutators [1(−∞,0](H~), x], [1(−∞,0](H~),−i~∇] and [1(−∞,0](H~), e], where H~ is a Schrödinger operator with a An optimal semiclassical bound on certain commutators We prove an optimal semiclassical bound on the trace norm of the following commutators $[\boldsymbol{1}_{(-\infty,0]}(H_\hbar),x]$, $[\boldsymbol{1}_{(-\infty,0]}(H_\hbar),-i\hbar\nabla]$ and Derivation of the Time Dependent Gross–Pitaevskii Equation in Two Dimensions In both cases, the convergence of the reduced density corresponding to the exact time evolution to the projector onto the solution of the corresponding nonlinear Schrödinger equation in trace norm is proved. An optimal semiclassical bound on commutators of spectral projections with position and momentum operators We prove an optimal semiclassical bound on the trace norm of the following commutators $$[{\varvec{1}}_{(-\infty ,0]}(H_\hbar ),x]$$ , $$[{\varvec{1}}_{(-\infty ,0]}(H_\hbar ),-i\hbar \nabla ]$$ Towards a derivation of Classical ElectroDynamics of charges and fields from QED . The purpose of this article is twofold: • On one hand, we rigorously derive the Newton–Maxwell equation in the Coulomb gauge from first principles of quantum electrodynamics in agreement with the Partially Classical Limit of the Nelson Model Mean–Field Evolution of Fermionic Systems The mean field limit for systems of many fermions is naturally coupled with a semiclassical limit. This makes the analysis of the mean field regime much more involved, compared with bosonic systems. Classical limit of the Nelson model with cutoff In this paper we analyze the classical limit of the Nelson model with cutoff, when both non-relativistic and relativistic particles number goes to infinity. We prove convergence of quantum Mean Field Evolution of Fermions with Coulomb Interaction We study the many body Schrödinger evolution of weakly coupled fermions interacting through a Coulomb potential. 
We are interested in a joint mean field and semiclassical scaling, that emerges Hartree corrections in a mean-field limit for fermions with Coulomb interaction* We consider the many-body dynamics of fermions with Coulomb interaction in a mean-field scaling limit where the kinetic and potential energy are of the same order for large particle numbers. In the Mean-field dynamics of fermions with relativistic dispersion We extend the derivation of the time-dependent Hartree-Fock equation recently obtained by Benedikter et al. [“Mean-field evolution of fermionic systems,” Commun. Math. Phys. (to be published)] to Effective N-Body Dynamics for the Massless Nelson Model and Adiabatic Decoupling without Spectral Gap Abstract. The Schrödinger equation for N particles interacting through effective pair potentials is derived from the massless Nelson model with ultraviolet cutoffs. We consider a scaling limit where Interaction of Nonrelativistic Particles with a Quantized Scalar Field Mean-field evolution of fermions with singular interaction We consider a system of N fermions in the mean-field regime interacting though an inverse power law potential $V(x)=1/|x|^{\alpha}$, for $\alpha\in(0,1]$. We prove the convergence of a solution of
Scalar Torsion – Unified Field Theory Key Scalar Torsion is the New Symmetry of General Relativity scalar torsion toroidal donut Image credit: Under Cartan transformations, the new formalism of the General Theory of Relativity (GTR) leads to different pictures of the same gravitational phenomena. “We reformulate the general theory of relativity in the language of Riemann-Cartan geometry (J. B. Fonseca-Neto, C. Romero, S. P. G. Martinez)[1]. They show that in an arbitrary Cartan gauge general relativity has the form of a scalar-tensor theory – “…we extend the concept of space-time symmetry to the more general case of Riemann-Cartan space-times endowed with scalar torsion.” To Tesla, the General Theory of Relativity was just: GTR has the problem of using the speed of light as a constant, but if you look you are using the speed of light. Einstein ignored everything he couldn’t see, but if you want to derive particles you can’t ignore it. The speed of light is infinite, according to the ancient Greeks and up until Galileo Galilei in early 1560’s. Quantum Theory explains everything except General Relativity and Gravity. But if space-times are endowed with scalar torsion we are down to just Gravity with the New Symmetry of General Relativity. And if Gravity, as explained by Bob McElrath, a former theory postdoc at UC Davis and now at CERN, emerges from neutrinos [2] then, it too is explained by scalar torsion waves, since Prof. Konstantin Meyl PhD has presented the theory that neutrinos are scalar waves moving faster than the speed of light.[3] Properties of Neutrinos In 2002 the Nobel Prize was awarded to Raymond Davis Jr. and Masatoshi Koshiba for the detection of cosmic neutrinos. A neutrino oscillates between the properties of an electron and a positron and the average of the charge of the neutrino is zero and the mass is as well. But the effective value is not zero. Just like AC current: the average of the voltage is zero but the effective value is not zero. The result is that you have a particle with no charge and no mass, but it has energy and it has a pulse and this is the only way to explain the existence of the physical neutrino. Neutrinos – Potential Vortices – Scalar Torsion Waves – Unified Field Theory From Objectivity to a Unified Field Theory Potential vortices, newly discovered properties of the electric field, fundamentally change the picture of the physical world. Prof. Konstantin Meyl PhD., who lectures at Technical University of Berlin, University of Clausthal and at the University of Applied Sciences, Furtwangen has written several books. Some of them have been translated into English. He has designed a fully functional replica of Nicola Tesla’s longitudinal electric wave-potential vortex which propagates scalar-like through space and which phenomenon can now be studied and examined once again. He used James Clerk Maxwell‘s 3rd Equation known as Faraday’s Law, as a hypothetical factor and proves that the electric vortex is a part of it. Because his theory is based on an extension of the Maxwell theory, so classic physical laws remain in force as his theory is a special case scenario that does not affect them. This non-speculative theory enables new interpretations of several principles of electrical engineering and quantum physics. It leads to feasible interpretations of experimental observations which to this day have not been possible to explain via existing theories. For example, quantum particle characteristics can be calculated when interpreted as a vortex. 
Likewise a number of neutrino experimental results can be explained when the neutrinos are regarded as a vortex. Dr. Meyl’s theory describes how field vortices form scalar waves via his extended field theory. Neutrino Power – Alternative Clean, Cheap Energy Neutrino energy (scalar torsion) from black hole Neutrinos blasting from the centre of a Galaxy Radio Galaxy Pictor A Credits: X-ray: NASA/CXC/Univ. of Hertfordshire/M. Hardcastle et al.; Radio: CSIRO/ATNF/ATCA If one goes into resonance with neutrino radiation then one can collect it, but the frequencies are higher than the present day semi-conductors. Nevertheless, neutrino power is available as an inexhaustible form of energy due to a remarkable overunity effect. Significant advances can result in terms of environmental sustainability and regarding today’s electromagnetic pollution, by means of this revised theory. However, the Khazarian Mafia or Hyksos blue bloods who are less than 1% of the population and who own the world and everything in it, cannot get what they want from clean, cheap energy; because what they want is full control over what happens on earth, using their occult knowledge and our taxes and interest on their black magic money scam. They are very proud of being able to trace their bloodline back to Cain, who was not Adam’s son and not fully human, hence their copper-based blue blood. But for regular red blooded people with a heart and the ability to empathize and create, all this means is that newly discovered properties of the electric field are fundamentally changing our view of the physical world. In the enhanced view of potential vortex, the physical comprehension becomes ever more objective, as the Meyl theory explains not only interactions but temperature, which to date is inexplicable via conventional theories. Electric Scalar Waves and Magnetic Scalar Waves Since Maxwell’s four equations of 1861 describing electromagnetic waves, we have understood and used them but not until Heinrich Hertz demonstrated their existence in 1887. These electromagnetic waves were predicted by Maxwell 26 years earlier, and were not demonstrated by Hertz until after Maxwell’s death in 1879, albeit using the truncated version called the Maxwell-Heaviside equation of 1874. Oliver Heaviside was an English self-taught electrical engineer, mathematician, and physicist. It was not known then and still it is not accepted by mainstream electromagnetic engineers, that the electromagnetic waves comprise electric scalar waves and magnetic scalar waves within the recognized electromagnetic waves. This is in part because of the truncation of the equation, and because the 3rd equation describes the magnetic monopoles as being set to zero. These days mobile phones are sending that part of Maxwell’s third equation – Gauss’ Magnetism Law which is set to 0 (and should not be) – by default. Once it is recognized, understood and used intentionally, that the magnetic monoplole is not zero, our technology will take another quantum leap. Meyl derives the extended Maxwell equation from his unified field theory as contained in the soon to be translated book: “From Objectivity to the Theory of Everything” or Unified Field Theory published in German. His intention is write papers on it soon so it can be peer-reviewed. His equation says there are two sides of the coin, the electric and the magnetic, and one is changing to the other one if there is movement e.g. us around sun and/or the sun around the centre of the galaxy etc.. 
If you have an electric field, the field is the influence for the light, and if there is an electric field it will influence the speed of light, and if there is movement we get a magnetic field from the electric field, and vice versa. We cannot measure the aether wind because we are moving with it. Aether is the field, an electric and magnetic field. The cause for the known value of the speed of light is called the aether, but putting the 3nd Maxwell structure forming particle potential which is the magnetic monopole to zero was the error, so physicists couldn’t explain the particles. Paul Dirac was the first who understood that the magnetic monopole has to be there to explain the observations, and the Helmholtz Society found it in 2009. Quantum physicists like Max Planck were experiencing these quantum effects as he called them and tried to explain all the field effects by quantum effects, like for gravitational effects they created gravitons. In other words they postulated what they wanted to explain. Now they are in process of postulating from non-derived postulations such as electrons, positron, protons etc. such postulations as quarks, muons and more. The Big Bang is a Big BluffKonstantin Meyl How Matter is Produced If the Schrödinger equation is derived from the extended field theory of Meyl then all particles which the Schrödinger equation is describing have to be vortex structures, according to his objectivity theory. The fields of the vortex balls run around one point which is the centre of the ball which is still, i.e. the speed of light is at zero, and the field lines all go to the centre which means it is infinite at the centre. Such balls become stable matter. Electromagnetic waves don’t have this property, but all matter is vortex balls, according to Meyl’s theory. Scalar torsion Energy Emanates from a Singularity at the Centers of Galaxies Wandering Black Hole Wandering Black Hole found by NASA’s Chandra X-ray Observatory and ESA’s XMM-Newton X-ray observatory. Image credit: Photo : Flickr/Creative Commons/NASA The universe is a web of singularities (“black holes” and “white holes”) from which faster-than-light scalar torsion energy flows. This energy flows throughout the universe, from the galactic scale to the subatomic, spiraling and branching (fractaling) as it goes, following toroidal (doughnut-shaped) flows.[5] Other names for scalar torsion fields are: chi, prana, tachyon energy, zero point energy, life force, torsion fields/waves, longitudinal waves, neutrino power and more. hexagonal scalar torsion-wave structured frozen water crystals Dr Emoto Masaru’s book cover showing the hexagonal structure which torsion waves create in water. At Sound Energy Research scientists created torsion field imprints in distilled water using scalar torsion wave technologies. The result is structured water called scalar wave–structured water™. They provided samples to Dr. Masaru Emoto who froze them and studied the crystals, which formed hexagonal structures like those created by human consciousness. The scalar torsion technology creates the same effects as mental intent that has been captured and frozen by Dr. Emoto. The inference to be drawn is that scalar torsion waves, albeit bereft of any electromagnetic properties or mass, are “carrier waves” of consciousness via the scalar torsion field.[6] [4] Watch Nassim Haramein – Crossing the event horizon DVD
Saturday, February 29, 2020 14 Years BackRe(Action) [Image: Scott McLeod/Flickr] 14 years ago, I was a postdoc in Santa Barbara, in a tiny corner office where the windows wouldn't open, in a building that slightly swayed each time one of the frequent mini-earthquakes shook up California. I had just published my first blogpost. It happened to be about the possibility that the Large Hadron Collider, which was not yet in operation, would produce tiny black holes and inadvertently kill all of us. The topic would soon rise to attention in the media and thereby mark my entry into the world of science communication. I was well prepared: Black holes at the LHC were the topic of my PhD thesis. A few months later, I got married. Later that same year, Lee Smolin's book "The Trouble With Physics" was published, coincidentally at almost the same time I moved to Canada and started my new position at Perimeter Institute. I had read an early version of the manuscript and published one of the first online reviews. Peter Woit's book "Not Even Wrong" appeared at almost the same time and kicked off what later became known as "The String Wars", though I've always found the rather militant term somewhat inappropriate. Time marched on and I kept writing, through my move to Sweden, my first pregnancy and the following miscarriage, the second pregnancy, the twin's birth, parental leave, my suffering through 5 years of a 3000 km commute while trying to raise two kids, and, in late 2015, my move back to Germany. Then, in 2018, the publication of my first book. The loyal readers of this blog will have noticed that in the past year I have shifted weight from Blogger to YouTube. The reason is that the way search engine algorithms and the blogosphere have evolved, it has become basically impossible to attract new audiences to a blog. Here on Blogger, I feel rather stuck on the topics I have originally written about, mostly quantum gravity and particle physics, while meanwhile my interests have drifted more towards astrophysics, quantum foundations, and the philosophy of physics. YouTube's algorithm is certainly not perfect, but it serves content to users that may be interested in the topic of a video, regardless of whether they've previously heard of me. I have to admit that personally I still prefer writing over videos. Not only because it's less time-consuming, but also because I don't particularly like either my voice or my face. But then, the average number of people who watch my videos has quickly surpassed the number of those who typically read my blog, so I guess I am doing okay. On this occasion I want to thank all of you for spending some time with me, for your feedback and comments and encouragement. I am especially grateful to those of you who have on occasion sent a donation my way. I am not entirely sure where this blog will be going in the future, but stay around and you will find out. I promise it won't be boring. Friday, February 28, 2020 Quantum Gravity in the Lab? The Hype Is On. Quanta Magazine has an article by Phillip Ball titled “Wormholes Reveal a Way to Manipulate Black Hole Information in the Lab”. It’s about using quantum simulations to study the behavior of black holes in Anti De-Sitter space, that is a space with a negative cosmological constant. A quantum simulation is a collection of particles with specifically designed interactions that can mimic the behavior of another system. To briefly remind you, we do not live in Anti De-Sitter space. 
For all we know, the cosmological constant in our universe is positive. And no, the two cases are not remotely similar. It’s an interesting topic in principle, but unfortunately the article by Ball is full of statements that gloss over this not very subtle fact that we do not live in Anti De-Sitter space. We can read there for example: “In principle, researchers could construct systems entirely equivalent to wormhole-connected black holes by entangling quantum circuits in the right way and teleporting qubits between them.” The correct statement would be: “Researchers could construct systems whose governing equations are in certain limits equivalent to those governing black holes in a universe we do not inhabit.” Further, needless to say, a collection of ions in the laboratory is not “entirely equivalent” to a black hole. For starters that is because the ions are made of other particles which are yet again made of other particles, none of which has any correspondence in the black hole analogy. Also, in case you’ve forgotten, we do not live in Anti De-Sitter space. Why do physicists even study black holes in Anti-De Sitter space? To make a long story short: Because they can. They can, both because they have an idea how the math works and because they can get paid for it. Now, there is nothing wrong with using methods obtained by the AdS/CFT correspondence to calculate the behavior of many particle systems. Indeed, I think that’s a neat idea. However, it is patently false to raise the impression that this tells us anything about quantum gravity, where by “quantum gravity” I mean the theory that resolves the inconsistency between the Standard Model of particle physics and General Relativity in our universe. Ie, a theory that actually describes nature. We have no reason whatsoever to think that the AdS/CFT correspondence tells us something about quantum gravity in our universe. As I explained in this earlier post, it is highly implausible that the results from AdS carry over to flat space or to space with a positive cosmological constant because the limit is not continuous. You can of course simply take the limit ignoring its convergence properties, but then the theory you get has no obvious relation to General Relativity. Let us have a look at the paper behind the article. We can read there in the introduction: “In the quest to understand the quantum nature of spacetime and gravity, a key difficulty is the lack of contact with experiment. Since gravity is so weak, directly probing quantum gravity means going to experimentally infeasible energy scales.” This is wrong and it demonstrates that the authors are not familiar with the phenomenology of quantum gravity. Large deviations from the semi-classical limit can occur at small energy scales. The reason is, rather trivially, that large masses in quantum superpositions should have gravitational fields in quantum superpositions. No large energies necessary for that. If you could, for example, put a billiard ball into a superposition of location you should be able to measure what happens to its gravitational field. This is unfeasible, but not because it involves high energies. It’s infeasible because decoherence kicks in too quickly to measure anything. Here is the rest of the first paragraph of the paper. 
I have in bold face added corrections that any reviewer should have insisted on: “However, a consequence of the holographic principle [3, 4] and its concrete realization in the AdS/CFT correspondence [5–7] (see also [8]) is that non-gravitational systems with sufficient entanglement may exhibit phenomena characteristic of quantum gravity in a space with a negative cosmological constant. This suggests that we may be able to use table-top physics experiments to indirectly probe quantum gravity in universes that we do not inhabit. Indeed, the technology for the control of complex quantum many-body systems is advancing rapidly, and we appear to be at the dawn of a new era in physics—the study of quantum gravity in the lab, except that, by the methods described in this paper, we cannot actually test quantum gravity in our universe. For this, other experiments are needed, which we will however not even mention. The purpose of this paper is to discuss one way in which quantum gravity can make contact with experiment, if you, like us, insist on studying quantum gravity in fictional universes that for all we know do not exist.” I pointed out that these black holes that string theorists deal with have nothing to do with real black holes in an article I wrote for Quanta Magazine last year. It was also the last article I wrote for them. Thursday, February 20, 2020 The 10 Most Important Physics Effects Today I have a count-down of the 10 most important effects in physics that you should all know about. 10. The Doppler Effect The Doppler effect is the change in frequency of a wave when the source moves relative to the receiver. If the source is approaching, the wavelength appears shorter and the frequency higher. If the source is moving away, the wavelength appears longer and the frequency lower. The most common example of the Doppler effect is that of an approaching ambulance, where the pitch of the signal is higher when it moves towards you than when it moves away from you. But the Doppler effect does not only happen for sound waves; it also happens to light which is why it’s enormously important in astrophysics. For light, the frequency is the color, so the color of an approaching object is shifted to the blue and that of an object moving away from you is shifted to the red. Because of this, we can for example calculate our velocity relative to the cosmic microwave background. The Doppler effect is named after the Austrian physicist Christian Doppler and has nothing to do with the German word Doppelgänger. 9. The Butterfly Effect Even a tiny change, like the flap of a butterfly’s wings, can making a big difference for the weather next Sunday. This is the butterfly effect as you have probably heard of it. But Edward Lorenz actually meant something much more radical when he spoke of the butterfly effect. He meant that for some non-linear systems you can only make predictions for a limited amount of time, even if you can measure the tiniest perturbations to arbitrary accuracy. I explained this in more detail in my earlier video. 8. The Meissner-Ochsenfeld Effect The Meissner-Ochsenfeld effect is the impossibility of making a magnetic field enter a superconductor. It was discovered by Walther Meissner and his postdoc Robert Ochsenfeld in 1933. Thanks to this effect, if you try to place a superconductor on a magnet, it will hover above the magnet because the magnetic field lines cannot enter the superconductor. I assure you that this has absolutely nothing to do with Yogic flying. 7. 
The Aharonov–Bohm Effect Okay, I admit this is not a particularly well-known effect, but it should be. The Aharonov-Bohm effect says that the wave-function of a charged particle in an electromagnetic field obtains a phase shift from the potential of the background field. I know this sounds abstract, but the relevant point is that it's the potential that causes the phase, not the field. In electrodynamics, the potential itself is normally not observable. But this phase shift in the Aharonov-Bohm Effect can and has been observed in interference patterns. And this tells us that the potential is not merely a mathematical tool. Before the Aharonov–Bohm effect one could reasonably question the physical reality of the potential because it was not observable. 6. The Tennis Racket Effect If you throw any three-dimensional object with a spin, then the spin around the shortest and longest axes will be stable, but the spin around the intermediate third axis will not be. The typical example of such a spinning object is a tennis racket, hence the name. It's also known as the intermediate axis theorem or the Dzhanibekov effect. You see a beautiful illustration of the instability in this little clip from the International Space Station. 5. The Hall Effect If you bring a conducting plate into a magnetic field, then the magnetic field will affect the motion of the electrons in the plate. In particular, if a current flows through a plate that is orthogonal to the magnetic field lines, a voltage builds up between opposing edges of the plate, and this voltage can be measured to determine the strength of the magnetic field. This effect is named after Edwin Hall. If the plate is very thin, the temperature very low, and the magnetic field very strong, you can also observe that the conductivity makes discrete jumps, which is known as the quantum Hall effect. 4. The Hawking Effect Stephen Hawking showed in the early 1970s that black holes emit thermal radiation with a temperature that is inversely proportional to the black hole's mass. This Hawking effect is a consequence of the relativity of the particle number. An observer falling into a black hole would not measure any particles and think the black hole is surrounded by vacuum. But an observer far away from the black hole would think the horizon is surrounded by particles. This can happen because in general relativity, what we mean by a particle depends on the motion of the observer, much like the passage of time does. A closely related effect is the Unruh effect, named after Bill Unruh, which says that an accelerated observer in flat space will measure a thermal distribution of particles with a temperature that depends on the acceleration. Again that can happen because the accelerated observer's particles are not the same as the particles of an observer at rest. 3. The Photoelectric Effect When light falls on a plate of metal, it can kick out electrons from their orbits around atomic nuclei. This is called the "photoelectric effect". The surprising thing about this is that the frequency of the light needs to be above a certain threshold. Just what the threshold is depends on the material, but if the frequency is below the threshold, it does not matter how intense the light is, it will not kick out electrons. The photoelectric effect was explained in 1905 by Albert Einstein, who correctly concluded that it means light must be made of quanta whose energy is proportional to the frequency of the light. 2.
The Casimir Effect Everybody knows that two metal plates will attract each other if one plate is positively charged and the other one negatively charged. But did you know the plates also attract each other if they are uncharged? Yes, they do! This is the Casimir effect, named after Hendrik Casimir. It is created by quantum fluctuations that create a pressure even in vacuum. This pressure is lower between the plates than outside of them, so that the two plates are pushed towards each other. However, the force from the Casimir effect is very weak and can be measured only at very short distances. 1. The Tunnel Effect Definitely my most favorite effect. Quantum effects allow a particle that is trapped in a potential to escape. This would not be possible without quantum effects because the particle just does not have enough energy to escape. However, in quantum mechanics the wave-function of the particle can leak out of the potential and this means that there is a small, but nonzero, probability that a quantum particle can do the seemingly impossible. Saturday, February 15, 2020 The Reproducibility Crisis: An Interview with Prof. Dorothy Bishop Monday, February 10, 2020 Guest Post: “Undecidability, Uncomputability and the Unity of Physics. Part 2.” by Tim Palmer [This is the second part of Tim’s guest contribution. The first part is here.] In this second part of my guest post, I want to discuss how the concepts of undecidability and uncomputability can lead to a novel interpretation of Bell’s famous theorem. This theorem states that under seemingly reasonable conditions, a deterministic theory of quantum physics – something Einstein believed in passionately – must satisfy a certain inequality which experiment shows is violated. These reasonable conditions, broadly speaking, describe the concepts of causality and freedom to choose experimental parameters. The issue I want to discuss is whether the way these conditions are formulated mathematically in Bell’s Theorem actually captures the physics that supposedly underpins them. The discussion here and in the previous post summarises the essay I recently submitted to the FQXi essay competition on undecidability and uncomputability. For many, the notion that we have some freedom in our actions and decisions seems irrefutable. But how would we explain this to an alien, or indeed a computer, for whom free will is a meaningless concept? Perhaps we might say that we are free because we could have done otherwise. This invokes the notion of a counterfactual world: even though we in fact did X, we could have done Y. Counterfactuals also play an important role in describing the notion of causality. Imagine throwing a stone at a glass window. Was the smashed glass caused by my throwing the stone? Yes, I might say, because if I hadn’t thrown the stone, the window wouldn’t have broken. However, there is an alternative way to describe these notions of free will and causality without invoking counterfactual worlds. I can just as well say that free will denotes an absence of constraints that would otherwise prevent me from doing what I want to do. Or I can use Newton’s laws of motion to determine that a stone with a certain mass, projected at a certain velocity, will hit the window with a momentum guaranteed to shatter the glass. These latter descriptions make no reference to counterfactuals at all; instead the descriptions are based on processes occurring in space-time (e.g. associated with the neurons of my brain or projectiles in physical space). 
What has all this got to do with Bell’s Theorem? I mentioned above the need for a given theory to satisfy “certain conditions” in order for it to be constrained by Bell’s inequality (and hence be inconsistent with experiment). One of these conditions, the one linked to free will, is called Statistical Independence. Theories which violate this condition are called Superdeterministic. Superdeterministic theories are typically excoriated by quantum foundations experts, not least because the Statistical Independence condition appears to underpin scientific methodology in general. For example, consider a source of particles emitting 1000 spin-1/2 particles. Suppose you measure the spin of 500 of them along one direction and 500 of them along a different direction. Statistical Independence guarantees that the measurement statistics (e.g. the frequency of spin-up measurements) will not depend on the particular way in which the experimenter chooses to partition the full ensemble of 1000 particles into the two sub-ensembles of 500 particles. If you violate Statistical Independence, the experts say, you are effectively invoking some conspiratorial prescient daemon who could, unknown to the experimenter, preselect particles for the particular measurements the experimenter choses to make - or even worse perhaps, could subvert the mind of the experimenter when deciding which type of measurement to perform on a given particle. Effectively, violating Statistical Independence turns experimenters into mindless zombies! No wonder experimentalists hate Superdeterministic theories of quantum physics!! However, the experts miss a subtle but crucial point here: whilst imposing Statistical Independence guarantees that real-world sub-ensembles are statistically equivalent, violating Statistical Independence does not guarantee that real-world sub-ensembles are not statistically equivalent. In particular it is possible to violate Statistical Independence in such a way that it is only sub-ensembles of particles subject to certain counterfactual measurements that may be statistically inequivalent to the corresponding sub-ensembles with real-world measurements. In the example above, a sub-ensemble of particles subject to a counterfactual measurement would be associated with the first sub-ensemble of 500 particles subject to the measurement direction applied to the second sub-ensemble of 500 particles. It is possible to violate Statistical Independence when comparing this counterfactual sub-ensemble with the real-world equivalent, without violating the statistical equivalence of the two corresponding sub-ensembles measured along their real-world directions. However, for this idea to make any theoretical sense at all, there has to be some mathematical basis for asserting that sub-ensembles with real-world measurements can be different to sub-ensembles with counterfactual-world measurements. This is where uncomputable fractal attractors play a key role. It is worth keeping an example of a fractal attractor in mind here. The Lorenz fractal attractor, discussed in my first post, is a geometric representation in state space of fluid motion in Newtonian space-time. The Lorenz attractor. [Image Credits: Markus Fritzsch.] 
As I explained in my first post, the attractor is uncomputable in the sense that there is no algorithm which can decide whether a given point in state space lies on the attractor (in exactly the same sense that, as Turing discovered, there is no algorithm for deciding whether a given computer program will halt for given input data). However, as I lay out in my essay, the differential equations for the fluid motion in space-time associated with the Lorenz attractor are themselves solvable by algorithm to arbitrary accuracy and hence are computable. This dichotomy (between state space and space-time) is extremely important to bear in mind below. With this in mind, suppose the universe itself evolves on some uncomputable fractal subset of state space, such that the corresponding evolution equations for physics in space-time are computable. In such a model, Statistical Independence will be violated for sub-ensembles if the corresponding counterfactual measurements take states of the universe off the fractal subset (since such counterfactual states have probability of occurrence equal to zero by definition). In the model I have developed this always occurs when considering counterfactual measurements such as those in Bell’s Theorem. (This is a nontrivial result and is the consequence of number-theoretic properties of trigonometric functions.) Importantly, in this theory, Statistical Independence is never violated when comparing two sub-ensembles subject to real-world measurements such as occurs in analysing Bell’s Theorem. This is all a bit mind numbing, I do admit. However, the bottom line is that I believe that the mathematical definitions of free choice and causality used to understand quantum entanglement are much too general – in particular they admit counterfactual worlds as physical in a completely unconstrained way. I have proposed alternative definitions of free choice and causality which strongly constrain counterfactual states (essentially they must lie on the fractal subset in state space), whilst leaving untouched descriptions of free choice and causality based only on space-time processes. (For the experts, in the classical limit of this theory, Statistical Independence is not violated for any counterfactual states.) With these alternative definitions, it is possible to violate Bell’s inequality in a deterministic theory which respects free choice and local causality, in exactly the way it is violated in quantum mechanics. Einstein may have been right after all! If we can explain entanglement deterministically and causally, then synthesising quantum and gravitational physics may have become easier. Indeed, it is through such synthesis that experimental tests of my model may eventually come. In conclusion, I believe that the uncomputable fractal attractors of chaotic systems may provide a key geometric ingredient needed to unify our theories of physics. My thanks to Sabine for allowing me the space on her blog to express these points of view. Saturday, February 08, 2020 Philosophers should talk more about climate change. Yes, philosophers. I never cease to be shocked – shocked! – how many scientists don’t know how science works and, worse, don’t seem to care about it. Most of those I have to deal with still think Popper was right when he claimed falsifiability is both necessary and sufficient to make a theory scientific, even though this position has logical consequences they’d strongly object to. 
Trouble is, if falsifiability was all it took, then arbitrary statements about the future would be scientific. I should, for example, be able to publish a paper predicting that tomorrow the sky will be pink and next Wednesday my cat will speak French. That’s totally falsifiable, yet I hope we all agree that if we’d let such nonsense pass as scientific, science would be entirely useless. I don’t even have a cat. As the contemporary philosopher Larry Laudan politely put it, Popper’s idea of telling science from non-science by falsifiability “has the untoward consequence of countenancing as `scientific’ every crank claim which makes ascertainably false assertions.” Which is why the world’s cranks love Popper. But you are not a crank, oh no, not you. And so you surely know that almost all of today’s philosophers of science agree that falsification is not a sufficient criterion of demarcation (though they disagree on whether it is necessary). Luckily, you don’t need to know anything about these philosophers to understand today’s post because I will not attempt to solve the demarcation problem (which, for the record, I don’t think is a philosophical question). I merely want to clarify just when it is scientifically justified to amend a theory whose predictions ran into tension with new data. And the only thing you need to know to understand this is that science cannot work without Occam’s razor. Occam’s razor tells you that among two theories that describe nature equally well you should take the simpler one. Roughly speaking it means you must discard superfluous assumptions. Occam’s razor is important because without it we were allowed to add all kinds of unnecessary clutter to a theory just because we like it. We would be permitted, for example, to add the assumption “all particles were made by god” to the standard model of particle physics. You see right away how this isn’t going well for science. Now, the phrase that two theories “describe nature equally well” and you should “take the simpler one” are somewhat vague. To make this prescription operationally useful you’d have to quantify just what it means by suitable statistical measures. We can then quibble about just which statistical measure is the best, but that’s somewhat beside the point here, so let me instead come back to the relevance of Occam’s razor. We just saw that it’s unscientific to make assumptions which are unnecessary to explain observation and don’t make a theory any simpler. But physicists get this wrong all the time and some have made a business out of it getting it wrong. They invent particles which make theories more complicated and are of no help to explain existing data. They claim this is science because these theories are falsifiable. But the new particles were unnecessary in the first place, so their ideas are dead on arrival, killed by Occam’s razor. If you still have trouble seeing why adding unnecessary details to established theories is unsound scientific methodology, imagine that scientists of other disciplines would proceed the way that particle physicists do. We’d have biologists writing papers about flying pigs and then hold conferences debating how flying pigs poop because, who knows, we might discover flying pigs tomorrow. Sounds ridiculous? Well, it is ridiculous. But that’s the same “scientific methodology” which has become common in the foundations of physics. The only difference between elaborating on flying pigs and supersymmetric particles is the amount of mathematics. 
And math certainly comes in handy for particle physicists because it prevents mere mortals from understanding just what the physicists are up to. But I am not telling you this to bitch about supersymmetry; that would be beating a dead horse. I am telling you this because I have recently had to deal with a lot of climate change deniers (thanks so much, Tim). And many of these deniers, believe that or not, think I must be a denier too because, drums please, I am an outspoken critic of inventing superfluous particles. Huh, you say. I hear you. It took me a while to figure out what’s with these people, but I believe I now understand where they’re coming from. You have probably heard the common deniers’ complaint that climate scientists adapt models when new data comes in. That is supposedly unscientific because, here it comes, it’s exactly the same thing that all these physicists do each time their hypothetical particles are not observed! They just fiddle with the parameters of the theory to evade experimental constraints and to keep their pet theories alive. But Popper already said you shouldn’t do that. Then someone yells “Epicycles!” And so, the deniers conclude, climate scientists are as wrong as particle physicists and clearly one shouldn’t listen to either. But the deniers’ argument merely demonstrates they know even less about scientific methodology than particle physicists. Revising a hypothesis when new data comes in is perfectly fine. In fact, it is what you expect good scientists to do. The more and the better data you have, the higher the demands on your theory. Sometimes this means you actually need a new theory. Sometimes you have to adjust one or the other parameter. Sometimes you find an actual mistake and have to correct it. But more often than not it just means you neglected something that better measurements are sensitive to and you must add details to your theory. And this is perfectly fine as long as adding details results in a model that explains the data better than before, and does so not just because you now have more parameters. Again, there are statistical measures to quantify in which cases adding parameters actually makes a better fit to data. Indeed, adding epicycles to make the geocentric model of the solar system fit with observations was entirely proper scientific methodology. It was correcting a hypothesis that ran into conflict with increasingly better observations. Astronomers of the time could have proceeded this way until they’d have noticed there is a simpler way to calculate the same curves, which is by using elliptic motions around the sun rather than cycles around cycles around the Earth. Of course this is not what historically happened, but epicycles in and by themselves are not unscientific, they’re merely parametrically clumsy. What scientists should not do, however, is to adjust details of a theory that were unnecessary in the first place. Kepler for example also thought that the planets play melodies on their orbits around the sun, an idea that was rightfully abandoned because it explains nothing. To name another example, adding dark matter and dark energy to the cosmological standard model in order to explain observations is sound scientific practice. These are both simple explanations that vastly improve the fit of the theory to observation. 
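The "statistical measures" mentioned above can be made concrete. One common choice (there are others, and the text does not single out any particular one) is the Akaike information criterion (AIC), which rewards a better fit but charges a penalty for every extra parameter. The following Python sketch is a toy example written purely for illustration: it fits polynomials of increasing degree to noisy straight-line data and reports which degree the criterion prefers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a straight line plus noise.
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)

def aic_for_degree(deg):
    """AIC for a least-squares polynomial fit: n*ln(RSS/n) + 2k (additive constants dropped)."""
    coeffs = np.polyfit(x, y, deg)
    residuals = y - np.polyval(coeffs, x)
    rss = np.sum(residuals**2)
    k = deg + 1          # number of fitted parameters
    n = x.size
    return n * np.log(rss / n) + 2 * k

for deg in range(1, 6):
    print(f"degree {deg}: AIC = {aic_for_degree(deg):.1f}")
# Lower AIC is better; extra parameters must earn their keep by improving
# the fit more than the 2k penalty they cost.
```

For data that really is a straight line plus noise, the degree-1 fit typically wins: higher-degree fits reduce the residuals a little, but not enough to pay for their extra parameters.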
What is not sound scientific methodology is then making these theories more complicated than they need to be, e.g. by replacing dark energy with complicated scalar fields even though there is no observation that calls for it, or by inventing details about particles that make up dark matter even though these details are irrelevant to fit existing data. But let me come back to the climate change deniers. You may call me naïve, and I'll take that, but I believe most of these people are genuinely confused about how science works. It's of little use to throw evidence at people who don't understand how scientists make evidence-based predictions. When it comes to climate change, therefore, I think we would all benefit if philosophers of science were given more airtime. Thursday, February 06, 2020 Ivory Tower [I've been singing again] I caught a cold and didn't come around to record a new physics video this week. Instead I finished a song that I wrote some weeks ago. Enjoy! Monday, February 03, 2020 Guest Post: "Undecidability, Uncomputability and the Unity of Physics. Part 1." by Tim Palmer [Tim Palmer is a Royal Society Research Professor in Climate Physics at the University of Oxford, UK. He is only half as crazy as it seems.] [Screenshot from Tim's public lecture at Perimeter Institute] Our three great theories of 20th Century physics – general relativity theory, quantum theory and chaos theory – seem incompatible with each other. The difficulty of combining general relativity and quantum theory into a common theory of "quantum gravity" is legendary; some of our greatest minds have despaired – and still despair – over it. Superficially, the links between quantum theory and chaos appear to be a little stronger, since both are characterised by unpredictability (in measurement and prediction outcomes respectively). However, the Schrödinger equation is linear and the dynamical equations of chaos are nonlinear. Moreover, in the common interpretation of Bell's inequality, a chaotic model of quantum physics, since it is deterministic, would be incompatible with Einstein's notion of relativistic causality. Finally, although the dynamics of general relativity and chaos theory are both nonlinear and deterministic, it is difficult to even make sense of chaos in the space-time of general relativity. This is because the usual definition of chaos is based on the notion that nearby initial states can diverge exponentially in time. However, speaking of an exponential divergence in time depends on a choice of time-coordinate. If we logarithmically rescale the time coordinate, the defining feature of chaos disappears. Trouble is, in general relativity, the underlying physics must not depend on the space-time coordinates. So, do we simply have to accept that, "What God hath put asunder, let no man join together"? I don't think so. A few weeks ago, the Foundational Questions Institute put out a call for essays on the topic of "Undecidability, Uncomputability and Unpredictability". I have submitted an essay in which I argue that undecidability and uncomputability may provide a new framework for unifying these theories of 20th Century physics. I want to summarize my argument in this and a follow-on guest post. To start, I need to say what undecidability and uncomputability are in the first place. The concepts go back to the work of Alan Turing, who in 1936 showed that no algorithm exists that will take as input a computer program (and its input data), and output 0 if the program halts and 1 if the program does not halt.
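Turing's argument for why no such algorithm can exist is short enough to sketch in code. In the Python fragment below, `halts` stands for the hypothetical decider; it does not and cannot exist, and the body shown is only a placeholder. The point is that assuming it exists leads to a contradiction (we also gloss over the detail of encoding programs as input data).

```python
def halts(program, data):
    """Hypothetical decider: returns True if program(data) halts, False otherwise.
    No such function can exist; we only pretend it does to derive the contradiction."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the decider predicts for program run on itself.
    if halts(program, program):
        while True:      # predicted to halt -> loop forever
            pass
    else:
        return           # predicted to loop forever -> halt immediately

# Now ask: does paradox(paradox) halt?
# If halts(paradox, paradox) is True, then paradox(paradox) loops forever -> the decider was wrong.
# If it is False, then paradox(paradox) halts immediately -> the decider was wrong again.
# Either way the assumed decider fails, so no general halting algorithm exists.
```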
This “Halting Problem” is therefore undecidable by algorithm. So, a key way to know whether a problem is algorithmically undecidable – or equivalently uncomputable – is to see if the problem is equivalent to the Halting Problem. Let’s return to thinking about chaotic systems. As mentioned, these are deterministic systems whose evolution is effectively unpredictable (because the evolution is sensitive to the starting conditions). However, what is relevant here is not so much this property of unpredictability, but the fact that no matter what initial condition you start from, there is a class of chaotic system where eventually (technically after an infinite time) the state evolves on a fractal subset of state space, sometimes known as a fractal attractor. One defining characteristic of a fractal is that its dimension is not a simple integer (like that of a one-dimensional line or the two-dimensional surface of a sphere). Now, the key result I need is a theorem that there is no algorithm that will take as input some point x in state space, and halt if that point belongs to a set with fractional dimension. This implies that the fractal attractor A of a chaotic system is uncomputable and the proposition “x belongs to A” is algorithmically undecidable. How does this help unify physics? Firstly defining chaos in terms of the geometry of its fractal attractor (e.g. through the fractional dimension of the attractor) is a coordinate independent and hence more relativistic way to characterise chaos, than defining it in terms of exponential divergence of nearby trajectories. Hence the uncomputable fractal attractor provides a way to unify general relativity and chaos theory. That was easy! The rest is not so easy which is why I need two guest posts and not one! When it comes to combining chaos theory with quantum mechanics, the first step is to realize that the linearity of the Schrödinger equation is not at all incompatible with the nonlinearity of chaos. To understand this, consider an ensemble of integrations of a particular chaotic model based on the Lorenz equations – see Fig 1. These Lorenz equations describe fluid dynamical motion, but the details need not concern us here. The fractal Lorenz attractor is shown in the background in Fig 1. These ensembles can be thought of as describing the evolution of probability – something of practical value when we don’t know the initial conditions precisely (as is the case in weather forecasting). Fig 1: Evolution of a contour of probability, based on ensembles of integrations of the Lorenz equations, is shown evolving in state space for different initial conditions, with the Lorenz attractor as background.  In the first panel in Fig 1, small uncertainties do not grow much and we can therefore be confident in the predicted evolution. In the third panel, small uncertainties grow explosively, meaning we can have little confidence in any specific prediction. The second panel is somewhere in between. Now it turns out that the equation which describes the evolution of probability in such chaotic systems, known as the Liouville equation, is itself a linear equation. The linearity of the Liouville equation ensures that probabilities are conserved in time. 
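A small numerical experiment makes the ensemble picture of Fig 1 concrete. The Python sketch below is written for this illustration (it is not taken from the post, and it uses the standard textbook Lorenz parameters, not necessarily those behind Fig 1): it integrates the Lorenz equations for a cloud of slightly perturbed initial conditions and prints how the spread of the ensemble grows, while each individual trajectory remains perfectly deterministic.

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0   # standard Lorenz parameters

def lorenz(v):
    x, y, z = v
    return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])

def rk4_step(v, dt):
    # One fourth-order Runge-Kutta step of the deterministic dynamics.
    k1 = lorenz(v)
    k2 = lorenz(v + 0.5 * dt * k1)
    k3 = lorenz(v + 0.5 * dt * k2)
    k4 = lorenz(v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(1)
dt, n_steps = 0.01, 1500
# An ensemble of 100 initial conditions, tightly clustered around one point.
ensemble = np.array([1.0, 1.0, 1.0]) + 1e-3 * rng.normal(size=(100, 3))

for step in range(n_steps + 1):
    if step % 500 == 0:
        spread = ensemble.std(axis=0).max()
        print(f"t = {step * dt:5.1f}  ensemble spread = {spread:.3e}")
    ensemble = np.array([rk4_step(v, dt) for v in ensemble])
# The growing spread is a statement about the probability distribution;
# every single member of the ensemble still follows the deterministic equations.
```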
Hence, for example, if there is an 80% chance that the actual state of the fluid (as described by the Lorenz equation state) lies within a certain contour of probability at initial time, then there is an 80% chance that the actual state of the fluid lies within the evolved contour of probability at the forecast time. The remarkable thing is that the Liouville equation is formally very similar to the so-called von-Neumann form of the Schrödinger equation – too much, in my view, for this to be a coincidence. So, just as the linearity of the Liouville equation says nothing about the nonlinearity of the underlying deterministic dynamics which generate such probability, so too the linearity of the Schrödinger equation need say nothing about the nonlinearity of some underlying dynamics which generates quantum probabilities. However, as I wrote above, in order to satisfy Bell’s theorem, it would appear that, being deterministic, a chaotic model will have to violate relativistic causality, seemingly thwarting the aim of trying to unify our theories of physics. At least, that’s the usual conclusion. However, the undecidable uncomputable properties of fractal attractors provide a novel route to allow us to reassess this conclusion. I will explain how this works in the second part of this post. Sunday, February 02, 2020 Does nature have a minimal length? Molecules are made of atoms. Atomic nuclei are made of neutrons and protons. And the neutrons and protons are made of quarks and gluons. Many physicists think that this is not the end of the story, but that quarks and gluons are made of even smaller things, for example the tiny vibrating strings that string theory is all about. But then what? Are strings made of smaller things again? Or is there a smallest scale beyond which nature just does not have any further structure? Does nature have a minimal length? This is what we will talk about today. When physicists talk about a minimal length, they usually mean the Planck length, which is about 10-35 meters. The Planck length is named after Max Planck, who introduced it in 1899. 10-35 meters sounds tiny and indeed it is damned tiny. To give you an idea, think of the tunnel of the Large Hadron Collider. It’s a ring with a diameter of about 10 kilometers. The Planck length compares to the diameter of a proton as the radius of a proton to the diameter of the Large Hadron Collider. Currently, the smallest structures that we can study are about ten to the minus nineteen meters. That’s what we can do with the energies produced at the Large Hadron Collider and that is still sixteen orders of magnitude larger than the Planck length. What’s so special about the Planck length? The Planck length seems to be setting a limit to how small a structure can be so that we can still measure it. That’s because to measure small structures, we need to compress more energy into small volumes of space. That’s basically what we do with particle accelerators. Higher energy allows us to find out what happens on shorter distances. But if you stuff too much energy into a small volume, you will make a black hole. More concretely, if you have an energy E, that will in the best case allow you to resolve a distance of about ℏc/E. I will call that distance Δx. Here, c is the speed of light and ℏ is a constant of nature, called Planck’s constant. Yes, that’s the same Planck! This relation comes from the uncertainty principle of quantum mechanics. So, higher energies let you resolve smaller structures. 
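As a rough numerical check of that relation (a back-of-the-envelope sketch added here; the 13 TeV figure is just the nominal LHC collision energy), one can ask what distance a collision at LHC energies can resolve:

```python
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
eV = 1.602176634e-19     # J

E = 13e12 * eV           # roughly the LHC collision energy, ~13 TeV
delta_x = hbar * c / E   # resolvable distance, Delta x = hbar*c / E

print(f"Delta x ~ {delta_x:.1e} m")   # ~1.5e-20 m
```

Since the collision energy is shared among the proton's constituents, the practically resolvable scale is somewhat larger, consistent with the roughly 10⁻¹⁹ meters quoted above.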
Now you can ask, if I turn up the energy and the size I can resolve gets smaller, when do I get a black hole? Well, that happens if the Schwarzschild radius associated with the energy is similar to the distance you are trying to measure. That’s not difficult to calculate. So let’s do it. The Schwarzschild radius is approximately M times G/c², where G is Newton’s constant and M is the mass. We are asking, when is that radius similar to the distance Δx. As you almost certainly know, the mass associated with the energy is E = Mc². And, as we previously saw, that energy is just ℏc/Δx. You can then solve this equation for Δx. And this is what we call the Planck length. It is associated with an energy called the Planck energy. If you go to higher energies than that, you will just make larger black holes. So the Planck length is the shortest distance you can measure. Now, this is a neat estimate and it’s not entirely wrong, but it’s not a rigorous derivation. If you start thinking about it, it’s a little handwavy, so let me assure you there are much more rigorous ways to do this calculation, and the conclusion remains basically the same. If you combine quantum mechanics with gravity, then the Planck length seems to set a limit to the resolution of structures. That’s why physicists think nature may have a fundamentally minimal length. Max Planck, by the way, did not come up with the Planck length because he thought it was a minimal length. He came up with it simply because it’s the only unit with the dimension of length that you can create from the fundamental constants: c, the speed of light, G, Newton’s constant, and ℏ. He thought that was interesting because, as he wrote in his 1899 paper, these would be natural units that also aliens would use. The idea that the Planck length is a minimal length only came up after the development of general relativity, when physicists started thinking about how to quantize gravity. Today, this idea is supported by attempts to develop a theory of quantum gravity, which I told you about in an earlier video. In string theory, for example, if you squeeze too much energy into a string it will start spreading out. In Loop Quantum Gravity, the loops themselves have a finite size, given by the Planck length. In Asymptotically Safe Gravity, the gravitational force becomes weaker at high energies, so beyond a certain point you can no longer improve your resolution. When I speak about a minimal length, a lot of people seem to have a particular image in mind, which is that the minimal length works like a kind of discretization, a pixelation of a photo or something like that. But that is most definitely the wrong image. The minimal length that we are talking about here is more like an unavoidable blur on an image, some kind of fundamental fuzziness that nature has. It may, but does not necessarily, come with a discretization. What does this all mean? Well, it means that we might be close to finding a final theory, one that describes nature at its most fundamental level, with nothing more beyond that. That is possible. But remember that the arguments for the existence of a minimal length rest on extrapolating sixteen orders of magnitude below the distances that we have tested so far. That’s a lot. That extrapolation might just be wrong. Even though we do not currently have any reason to think that there should be something new at distances even shorter than the Planck length, that situation might change in the future.
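The same estimate can be run with numbers. The sketch below (again an added illustration, using standard values of the constants) sets the Schwarzschild radius GE/c⁴ equal to the resolvable distance ℏc/E and solves for the crossover scale:

```python
import math

hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
eV = 1.602176634e-19     # J

# Setting the Schwarzschild radius G*E/c^4 equal to the resolvable
# distance hbar*c/E and solving gives the Planck scale:
E_planck = math.sqrt(hbar * c**5 / G)    # Planck energy
l_planck = math.sqrt(hbar * G / c**3)    # Planck length

print(f"Planck length ~ {l_planck:.2e} m")                  # ~1.6e-35 m
print(f"Planck energy ~ {E_planck / (1e9 * eV):.2e} GeV")   # ~1.2e19 GeV
```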
Still, I find it intriguing that for all we currently know, it is not necessary to think about distances shorter than the Planck length.
Wednesday, October 30, 2019 The crisis in physics is not only about physics downward spiral In the foundations of physics, we have not seen progress since the mid 1970s when the standard model of particle physics was completed. Ever since then, the theories we use to describe observations have remained unchanged. Sure, some aspects of these theories have only been experimentally confirmed later. The last to-be-confirmed particle was the Higgs-boson, predicted in the 1960s, measured in 2012. But all shortcomings of these theories – the lacking quantization of gravity, dark matter, the quantum measurement problem, and more – have been known for more than 80 years. And they are as unsolved today as they were then. The major cause of this stagnation is that physics has changed, but physicists have not changed their methods. As physics has progressed, the foundations have become increasingly harder to probe by experiment. Technological advances have not kept size and expenses manageable. This is why, in physics today we have collaborations of thousands of people operating machines that cost billions of dollars. With fewer experiments, serendipitous discoveries become increasingly unlikely. And lacking those discoveries, the technological progress that would be needed to keep experiments economically viable never materializes. It’s a vicious cycle: Costly experiments result in lack of progress. Lack of progress increases the costs of further experiment. This cycle must eventually lead into a dead end when experiments become simply too expensive to remain affordable. A $40 billion particle collider is such a dead end. The only way to avoid being sucked into this vicious cycle is to choose carefully which hypothesis to put to the test. But physicists still operate by the “just look” idea like this was the 19th century. They do not think about which hypotheses are promising because their education has not taught them to do so. Such self-reflection would require knowledge of the philosophy and sociology of science, and those are subjects physicists merely make dismissive jokes about. They believe they are too intelligent to have to think about what they are doing. The consequence has been that experiments in the foundations of physics past the 1970s have only confirmed the already existing theories. None found evidence of anything beyond what we already know. But theoretical physicists did not learn the lesson and still ignore the philosophy and sociology of science. I encounter this dismissive behavior personally pretty much every time I try to explain to a cosmologist or particle physicists that we need smarter ways to share information and make decisions in large, like-minded communities. If they react at all, they are insulted if I point out that social reinforcement – aka group-think – befalls us all, unless we actively take measures to prevent it. Instead of examining the way that they propose hypotheses and revising their methods, theoretical physicists have developed a habit of putting forward entirely baseless speculations. Over and over again I have heard them justifying their mindless production of mathematical fiction as “healthy speculation” – entirely ignoring that this type of speculation has demonstrably not worked for decades and continues to not work. There is nothing healthy about this. It’s sick science. And, embarrassingly enough, that’s plain to see for everyone who does not work in the field. 
This behavior is based on the hopelessly naïve, not to mention ill-informed, belief that science always progresses somehow, and that sooner or later certainly someone will stumble over something interesting. But even if that happened – even if someone found a piece of the puzzle – at this point we wouldn’t notice, because today any drop of genuine theoretical progress would drown in an ocean of “healthy speculation”. And so, what we have here in the foundation of physics is a plain failure of the scientific method. All these wrong predictions should have taught physicists that just because they can write down equations for something does not mean this math is a scientifically promising hypothesis. String theory, supersymmetry, multiverses. There’s math for it, alright. Pretty math, even. But that doesn’t mean this math describes reality. Physicists need new methods. Better methods. Methods that are appropriate to the present century. And please spare me the complaints that I supposedly do not have anything better to suggest, because that is a false accusation. I have said many times that looking at the history of physics teaches us that resolving inconsistencies has been a reliable path to breakthroughs, so that’s what we should focus on. I may be on the wrong track with this, of course. But for all I can tell at this moment in history I am the only physicist who has at least come up with an idea for what to do. Why don’t physicists have a hard look at their history and learn from their failure? Because the existing scientific system does not encourage learning. Physicists today can happily make career by writing papers about things no one has ever observed, and never will observe. This continues to go on because there is nothing and no one that can stop it. You may want to put this down as a minor worry because – $40 billion dollar collider aside – who really cares about the foundations of physics? Maybe all these string theorists have been wasting tax-money for decades, alright, but in the large scheme of things it’s not all that much money. I grant you that much. Theorists are not expensive. But even if you don’t care what’s up with strings and multiverses, you should worry about what is happening here. The foundations of physics are the canary in the coal mine. It’s an old discipline and the first to run into this problem. But the same problem will sooner or later surface in other disciplines if experiments become increasingly expensive and recruit large fractions of the scientific community. Indeed, we see this beginning to happen in medicine and in ecology, too. Small-scale drug trials have pretty much run their course. These are good only to find in-your-face correlations that are universal across most people. Medicine, therefore, will increasingly have to rely on data collected from large groups over long periods of time to find increasingly personalized diagnoses and prescriptions. The studies which are necessary for this are extremely costly. They must be chosen carefully for not many of them can be made. The study of ecosystems faces a similar challenge, where small, isolated investigations are about to reach their limits. How physicists handle their crisis will give an example to other disciplines. So watch this space. Tuesday, October 22, 2019 What is the quantum measurement problem? Today, I want to explain just what the problem is with making measurements according to quantum theory. Quantum mechanics tells us that matter is not made of particles. 
It is made of elementary constituents that are often called particles, but are really described by wave-functions. A wave-function is a mathematical object which is neither a particle nor a wave, but it can have properties of both. The curious thing about the wave-function is that it does not itself correspond to something which we can observe. Instead, it is only a tool by help of which we calculate what we do observe. To make such a calculation, quantum theory uses the following postulates. First, as long as you do not measure the wave-function, it changes according to the Schrödinger equation. The Schrödinger equation is different for different particles. But its most important properties are independent of the particle. One of the important properties of the Schrödinger equation is that it guarantees that the probabilities computed from the wave-function will always add up to one, as they should. Another important property is that the change in time which one gets from the Schrödinger equation is reversible. But for our purposes the most important property of the Schrödinger equation is that it is linear. This means if you have two solutions to this equation, then any sum of the two solutions, with arbitrary pre-factors, will also be a solution. The second postulate of quantum mechanics tells you how you calculate from the wave-function what is the probability of getting a specific measurement outcome. This is called the “Born rule,” named after Max Born who came up with it. The Born rule says that the probability of a measurement is the absolute square of that part of the wave-function which describes a certain measurement outcome. To do this calculation, you also need to know how to describe what you are observing – say, the momentum of a particle. For this, you need further postulates, but these do not need to concern us today. And third, there is the measurement postulate, sometimes called the “update” or “collapse” of the wave-function. This postulate says that after you have made a measurement, the probability of what you have measured suddenly changes to 1. This, I have to emphasize, is a necessary requirement to describe what we observe. I cannot stress this enough because a lot of physicists seem to find it hard to comprehend. If you do not update the wave-function after measurement, then the wave-function does not describe what we observe. We do not, ever, observe a particle that is 50% measured. The problem with the quantum measurement is now that the update of the wave-function is incompatible with the Schrödinger equation. The Schrödinger equation, as I already said, is linear. That means if you have two different states of a system, both of which are allowed according to the Schrödinger equation, then the sum of the two states is also an allowed solution. The best known example of this is Schrödinger’s cat, which is a state that is a sum of both dead and alive. Such a sum is what physicists call a superposition. We do, however, only observe cats that are either dead or alive. This is why we need the measurement postulate. Without it, quantum mechanics would not be compatible with observation. The measurement problem, I have to emphasize, is not solved by decoherence, even though many physicists seem to believe this to be so. Decoherence is a process that happens if a quantum superposition interacts with its environment. The environment may simply be air or, even in vacuum, you still have the radiation of the cosmic microwave background. There is always some environment. 
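To see the linearity and the Born rule in the simplest possible setting, here is a small sketch for a two-state system (an added illustration with an arbitrarily chosen Hamiltonian, not anything specific from the text): Schrödinger evolution is just multiplication by a unitary matrix, sums of solutions are again solutions, and the squared amplitudes always add up to one.

```python
import numpy as np

# Hamiltonian of a two-state system (hbar set to 1 for simplicity).
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def evolve(psi, t):
    """Solve i d(psi)/dt = H psi, i.e. psi(t) = exp(-i H t) psi(0)."""
    eigvals, eigvecs = np.linalg.eigh(H)
    U = eigvecs @ np.diag(np.exp(-1j * eigvals * t)) @ eigvecs.conj().T
    return U @ psi

up = np.array([1.0, 0.0], dtype=complex)
down = np.array([0.0, 1.0], dtype=complex)

t = 0.7
# Linearity: evolving a superposition equals the superposition of the
# evolved states.
superposition = (up + down) / np.sqrt(2)
lhs = evolve(superposition, t)
rhs = (evolve(up, t) + evolve(down, t)) / np.sqrt(2)
print("linear:", np.allclose(lhs, rhs))                 # True

# Born rule: probabilities are the squared magnitudes of the amplitudes,
# and they sum to one under Schroedinger evolution.
probs = np.abs(lhs) ** 2
print("P(up), P(down):", probs, "sum:", probs.sum())    # sum: 1.0
```

What the code cannot tell you is which outcome you actually find in a measurement; that update has to be put in by hand, which is exactly the measurement postulate described above. None of this yet involves an environment; decoherence is about what happens when such a superposition couples to the surrounding degrees of freedom just mentioned.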
This interaction with the environment eventually destroys the ability of quantum states to display typical quantum behavior, like the ability of particles to create interference patterns. The larger the object, the more quickly its quantum behavior gets destroyed. Decoherence tells you that if you average over the states of the environment, because you do not know exactly what they do, then you no longer have a quantum superposition. Instead, you have a distribution of probabilities. This is what physicists call a “mixed state”. This does not solve the measurement problem because after measurement, you still have to update the probability of what you have observed to 100%. Decoherence does not tell you to do that. Why is the measurement postulate problematic? The trouble with the measurement postulate is that the behavior of a large thing, like a detector, should follow from the behavior of the small things that it is made up of. But that is not the case. So that’s the issue. The measurement postulate is incompatible with reductionism. It makes it necessary that the formulation of quantum mechanics explicitly refers to macroscopic objects like detectors, when really what these large things are doing should follow from the theory. A lot of people seem to think that you can solve this problem by way of re-interpreting the wave-function as merely encoding the knowledge that an observer has about the state of the system. This is what is called a Copenhagen or “neo-Copenhagen” interpretation. (And let me warn you that this is not the same as a Psi-epistemic interpretation, in case you have heard that word.) Now, if you believe that the wave-function merely describes the knowledge an observer has then you may say, of course it needs to be updated if the observer makes a measurement. Yes, that’s very reasonable. But of course this also refers to macroscopic concepts like observers and their knowledge. And if you want to use such concepts in the postulates of your theory, you are implicitly assuming that the behavior of observers or detectors is incompatible with the behavior of the particles that make up the observers or detectors. This requires that you explain when and how this distinction is to be made and none of the existing neo-Copenhagen approaches explain this. I already told you in an earlier blogpost why the many worlds interpretation does not solve the measurement problem. To briefly summarize it, it’s because in the many worlds interpretation one also has to use a postulate about what a detector does. What does it take to actually solve the measurement problem? I will get to this, so stay tuned. Wednesday, October 16, 2019 Dark matter nightmare: What if we are just using the wrong equations? Dark matter filaments. Computer simulation. [Image: John Dubinski (U of Toronto)] Einstein’s theory of general relativity is an extremely well-confirmed theory. Countless experiments have shown that its predictions for our solar system agree with observation to utmost accuracy. But when we point our telescopes at larger distances, something is amiss. Galaxies rotate faster than expected. Galaxies in clusters move faster than they should. The expansion of the universe is speeding up. General relativity does not tell us what is going on. Physicists have attributed these puzzling observations to two newly postulated substances: Dark matter and dark energy. 
These two names are merely placeholders in Einstein’s original equations; their sole purpose is to remove the mismatch between prediction and observation. This is not a new story. We have had evidence for dark matter since the 1930s, and dark energy was on the radar already in the 1990. Both have since occupied thousands of physicists with attempts to explain just what we are dealing with: Is dark matter a particle, and if so what type, and how can we measure it? If it is not a particle, then what do we change about general relativity to fix the discrepancy with measurements? Is dark energy maybe a new type of field? Is it, too, made of some particle? Does dark matter have something to do with dark energy or are the two unrelated? To answer these questions, hundreds of hypotheses have been proposed, conferences have been held, careers have been made – but here we are, in 2019, and we still don’t know. Bad enough, you may say, but the thing that really keeps me up at night is this: Maybe all these thousands of physicists are simply using the wrong equations. I don’t mean that general relativity needs to be modified. I mean that we incorrectly use the equations of general relativity to begin with. The issue is this. General relativity relates the curvature of space and time to the sources of matter and energy. Put in a distribution of matter and energy at any one moment of time, and the equations tell you what space and time do in response, and how the matter must move according to this response. But general relativity is a non-linear theory. This means, loosely speaking, that gravity gravitates. More concretely, it means that if you have two solutions to the equations and you take their sum, this sum will not also be a solution. Now, what we do when we want to explain what a galaxy does, or a galaxy cluster, or even the whole universe, is not to plug the matter and energy of every single planet and star into the equations. This would be computationally unfeasible. Instead, we use an average of matter and energy, and use that as the source for gravity. Needless to say, taking an average on one side of the equation requires that you also take an average on the other side. But since the gravitational part is non-linear, this will not give you the same equations that we use for the solar system: The average of a function of a variable is not the same as the function of the average of the variable. We know it’s not. But whenever we use general relativity on large scales, we assume that this is the case. So, we know that strictly speaking the equations we use are wrong. The big question is, then, just how wrong are they? Nosy students who ask this question are usually told these equations are not very wrong and are good to use. The argument goes that the difference between the equation we use and the equation we should use is negligible because gravity is weak in all these cases. But if you look at the literature somewhat closer, then this argument has been questioned. And these questions have been questioned. And the questioning questions have been questioned. And the debate has remained unsettled until today. That it is difficult to average non-linear equations is of course not a problem specific to cosmology. It’s a difficulty that condensed matter physicists have to deal with all the time, and it’s a major headache also for climate scientists. 
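The core of the issue fits in a few lines. Here is a toy illustration (added here, with a made-up distribution that has nothing to do with the actual matter distribution or Einstein's equations): for a nonlinear function, the average of the function is not the function of the average.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy "matter distribution": values scattered around a mean of 1.
x = 1.0 + 0.5 * rng.standard_normal(100_000)

def f(v):
    """Any nonlinear function will do; here f(v) = v^2."""
    return v**2

print("f(<x>) =", f(x.mean()))                      # ~1.00
print("<f(x)> =", f(x).mean())                      # ~1.25, not the same
print("difference =", f(x).mean() - f(x.mean()))    # ~ the variance of x, 0.25
```

For a linear function the two numbers would agree, which is why taking averages is harmless there. Whether the analogous correction terms are negligible for the full, nonlinear Einstein equations is exactly the unsettled question, and it is the same kind of averaging problem that condensed matter physicists and climate scientists wrestle with.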
These scientists have a variety of techniques to derive the correct equations, but unfortunately the known methods do not easily carry over to general relativity because they do not respect the symmetries of Einstein’s theory. It’s admittedly an unsexy research topic. It’s technical and tedious and most physicists ignore it. And so, while there are thousands of physicists who simply assume that the correction-terms from averaging are negligible, there are merely two dozen or so people trying to make sure that this assumption is actually correct. Given how much brain-power physicists have spent on trying to figure out what dark matter and dark energy is, I think it would be a good idea to definitely settle the question whether it is anything at all. At the very least, I would sleep better. Further reading: Does the growth of structure affect our dynamical models of the universe? The averaging, backreaction and fitting problems in cosmology, by Chris Clarkson, George Ellis, Julien Larena, and Obinna Umeh. Rept. Prog. Phys. 74 (2011) 112901, arXiv:1109.2314 [astro-ph.CO]. Monday, October 07, 2019 What does the future hold for particle physics? In my new video, I talk about the reason why the Large Hadron Collider, LHC for short, has not found fundamentally new particles besides the Higgs boson, and what this means for the future of particle physics. Below you find a transcript with references. Before the LHC turned on, particle physicists had high hopes it would find something new besides the Higgs boson, something that would go beyond the standard model of particle physics. There was a lot of talk about the particles that supposedly make up dark matter, which the collider might produce. Many physicists also expected it to find the first of a class of entirely new particles that were predicted based on a hypothesis known as supersymmetry. Others talked about dark energy, additional dimensions of space, string balls, black holes, time travel, making contact to parallel universes or “unparticles”. That’s particles which aren’t particles. So, clearly, some wild ideas were in the air. To illustrate the situation before the LHC began taking data, let me quote a few articles from back then. Here is Valerie Jamieson writing for New Scientist in 2008: “The Higgs and supersymmetry are on firm theoretical footing. Some theorists speculate about more outlandish scenarios for the LHC, including the production of extra dimensions, mini black holes, new forces, and particles smaller than quarks and electrons. A test for time travel has also been proposed.” Or, here is Ian Sample for the Guardian, also in 2008: “Scientists have some pretty good hunches about what the machine might find, from creating never-seen-before particles to discovering hidden dimensions and dark matter, the mysterious substance that makes up 25% of the universe.” Paul Langacker in 2010, writing for the APS: “Theorists have predicted that spectacular signals of supersymmetry should be visible at the LHC.” Michael Dine for Physics Today in 2007: The Telegraph in 2010: “[The LHC] could answer the question of what causes mass, or even surprise its creators by revealing the existence of a fifth, sixth or seventh secret dimension of time and space.” A final one. Here is Steve Giddings writing in 2010 for “LHC collisions might produce dark-matter particles... The collider might also shed light on the more predominant “dark energy,”... the LHC may reveal extra dimensions of space... 
if these extra dimensions are configured in certain ways, the LHC could produce microscopic black holes... Supersymmetry could be discovered by the LHC...” The Large Hadron collider has been running since 2010. It has found the Higgs boson. But why didn’t it find any of the other things? This question is surprisingly easy to answer. There was never a good reason to expect any of these things in the first place. The more difficult question is why did so many particle physicists think those were reasonable expectations, and why has not a single one of them told us what they have learned from their failed predictions? To see what happened here, it is useful to look at the difference between the prediction of the Higgs-boson and the other speculations. The standard model without the Higgs does not work properly. It becomes mathematically inconsistent at energies that the LHC is able to reach. Concretely, without the Higgs, the standard model predicts probabilities larger than one, which makes no sense. We therefore knew, before the LHC turned on, that something new had to happen. It could have been something else besides the Higgs. The Higgs was one way to fix the problem with the standard model, but there are other ways. However, the Higgs turned out to be right. All other proposed ideas, extra dimensions, supersymmetry, time-travel, and so on, are unnecessary. These theories have been constructed so that they are compatible with all existing observations. But they are not necessary to solve any problem with the standard model. They are basically wishful thinking. The reason that many particle physicists believed in these speculations is that they mistakenly thought the standard model has another problem which the existence of the Higgs would not fix. I am afraid that many of them still believe this. This supposed problem is that the standard model is not “technically natural”. This means the standard model contains one number that is small, but there is no explanation for why it is small. This number is the mass of the Higgs-boson divided by the Planck mass, which happens to be about 10-15. The standard model works just fine with that number and it fits the data. But a small number like this, without explanation, is ugly and particle physicists didn’t want to believe nature could be that ugly. Well, now they know that nature doesn’t care what physicists want it to be like. What does this mean for the future of particle physics? This argument from “technical naturalness” was the only reason that physicists had to think that the standard model is incomplete and something to complete it must appear at LHC energies. Now that it is clear this argument did not work, there is no reason why a next larger collider should see anything new either. The standard model runs into mathematical trouble again at energies about a billion times higher than what a next larger collider could test. At the moment, therefore, we have no good reason to build a larger particle collider. But particle physics is not only collider physics. And so, it seems likely to me, that research will shift to other areas of physics. A shift that has been going on for two decades already, and will probably become more pronounced now, is the move to astrophysics, in particular the study of dark matter and dark energy and also, to some extent, the early universe. 
The other shift that we are likely to see is a move away from high energy particle physics and move towards high precision measurements at lower energies, or to table top experiments probing the quantum behavior of many particle systems, where we still have much to learn. Wednesday, October 02, 2019 Has Reductionism Run its Course? For more than 2000 years, ever since Democritus’ first musings about atoms, reductionism has driven scientific inquiry. The idea is simple enough: Things are made of smaller things, and if you know what the small things do, you learn what the large things do. Simple – and stunningly successful. After 2000 years of taking things apart into smaller things, we have learned that all matter is made of molecules, and that molecules are made of atoms. Democritus originally coined the word “atom” to refer to indivisible, elementary units of matter. But what we have come to call “atoms”, we now know, is made of even smaller particles. And those smaller particles are yet again made of even smaller particles. © Sabine Hossenfelder The smallest constituents of matter, for all we currently know, are the 25 particles which physicists collect in the standard model of particle physics. Are these particles made up of yet another set of smaller particles, strings, or other things? It is certainly possible that the particles of the standard model are not the ultimate constituents of matter. But we presently have no particular reason to think they have a substructure. And this raises the question whether attempting to look even closer into the structure of matter is a promising research direction – right here, right now. It is a question that every researcher in the foundations of physics will be asking themselves, now that the Large Hadron Collider has confirmed the standard model, but found nothing beyond that. 20 years ago, it seemed clear to me that probing physical processes at ever shorter distances is the most reliable way to better understand how the universe works. And since it takes high energies to resolve short distances, this means that slamming particles together at high energies is the route forward. In other words, if you want to know more, you build bigger particle colliders. This is also, unsurprisingly, what most particle physicists are convinced of. Going to higher energies, so their story goes, is the most reliable way to search for something fundamentally new. This is, in a nutshell, particle physicists’ major argument in favor of building a new particle collider, one even larger than the presently operating Large Hadron Collider. But this simple story is too simple. The idea that reductionism means things are made of smaller things is what philosophers more specifically call “methodological reductionism”. It’s a statement about the properties of stuff. But there is another type of reductionism, “theory reductionism”, which instead refers to the relation between theories. One theory can be “reduced” to another one, if the former can be derived from the latter. Now, the examples of reductionism that particle physicists like to put forward are the cases where both types of reductionism coincide: Atomic physics explains chemistry. Statistical mechanics explains the laws of thermodynamics. The quark model explains regularities in proton collisions. And so on. But not all cases of successful theory reduction have also been cases of methodological reduction. Take Maxwell’s unification of the electric and magnetic force. 
From Maxwell’s theory you can derive a whole bunch of equations, such as the Coulomb law and Faraday’s law, that people used before Maxwell explained where they come from. Electromagnetism is therefore clearly a case of theory reduction, but it did not come with a methodological reduction. Another well-known exception is Einstein’s theory of General Relativity. General Relativity can be used in more situations than Newton’s theory of gravity. But it is not the physics on short distances that reveals the differences between the two theories. Instead, it is the behavior of bodies at high relative speed and strong gravitational fields that Newtonian gravity cannot cope with. Another example that belongs on this list is quantum mechanics. Quantum mechanics reproduces classical mechanics in suitable approximations. It is not, however, a theory about small constituents of larger things. Yes, quantum mechanics is often portrayed as a theory for microscopic scales, but, no, this is not correct. Quantum mechanics is really a theory for all scales, large to small. We have observed quantum effects over distances exceeding 100 km and for objects weighing as “much” as a nanogram, composed of more than 10¹³ atoms. It’s just that quantum effects on large scales are difficult to create and observe. Finally, I would like to mention Noether’s theorem, according to which symmetries give rise to conservation laws. This example is different from the previous ones in that Noether’s theorem was not applied to any theory in particular. But it has resulted in a more fundamental understanding of natural law, and therefore I think it deserves a place on the list. In summary, history does not support particle physicists’ belief that a deeper understanding of natural law will most likely come from studying shorter distances. On the very contrary, I have begun to worry that physicists’ confidence in methodological reductionism stands in the way of progress. That’s because it suggests we ask certain questions instead of others. And those may just be the wrong questions to ask. If you believe in methodological reductionism, for example, you may ask what dark energy is made of. But maybe dark energy is not made of anything. Instead, dark energy may be an artifact of our difficulty averaging non-linear equations. It’s similar with dark matter. The methodological reductionist will ask for a microscopic theory and look for a particle that dark matter is made of. Yet, maybe dark matter is really a phenomenon associated with our misunderstanding of space-time on long distances. Maybe the biggest problem that methodological reductionism causes lies in the area of quantum gravity, that is, our attempt to resolve the inconsistency between quantum theory and general relativity. Pretty much all existing approaches – string theory, loop quantum gravity, causal dynamical triangulation (check out my video for more) – assume that methodological reductionism is the answer. Therefore, they rely on new hypotheses for short-distance physics. But maybe that’s the wrong way to tackle the problem. The root of our problem may instead be that quantum theory itself must be replaced by a more fundamental theory, one that explains how quantization works in the first place. Approaches based on methodological reductionism – like grand unified forces, supersymmetry, string theory, preon models, or technicolor – have failed for the past 30 years. This does not mean that there is nothing more to find at short distances.
But it does strongly suggest that the next step forward will be a case of theory reduction that does not rely on taking things apart into smaller things.
It is generally known that magnets attract some particular substances (when these are held in a certain region around the magnet) and that they also attract or repel each other, due to their magnetic field. I have a couple of conceptual questions: Why is a magnetic field produced at all? What is inside magnetic substances that creates the magnetic field? Why is a magnetic material magnetic? 1. All electrons have a magnetic moment associated with them; that is just a law of nature that we have to accept. Even though I detail below, in basic terms, the mathematical relationship between spin and magnetic moment, which I have taken from the Wikipedia article on the magnetic moment, you will still need to accept the notion of magnetic fields as an aspect of nature that currently has no deeper explanation. Although some of the other answers deal far better than I could with explaining electrons in an electric field as a source of magnetism, I wanted to try and answer your question in light of the importance, imo, of magnetic moments and domains in describing magnetism. The spin magnetic moment is intrinsic to an electron. It is: $${\displaystyle {\boldsymbol {\mu }}_{\text{s}}=-g_{\text{s}}\mu _{\text{B}}{\frac {\mathbf {S} }{\hbar }}.}$$ Here $\mathbf{S}$ is the electron spin angular momentum. The spin g-factor is approximately two: $g_s \approx 2$. The magnetic moment of an electron is approximately twice what it should be in classical mechanics. The factor of two implies that the electron appears to be twice as effective in producing a magnetic moment as the corresponding classical charged body. The spin magnetic dipole moment is approximately one $\mu_B$ because $g_s \approx 2$ and the electron is a spin one-half particle: $S = \hbar/2$. $${\displaystyle \mu _{\text{S}}\approx 2{\frac {e\hbar }{2m_{\text{e}}}}{\frac {\frac {\hbar }{2}}{\hbar }}=\mu _{\text{B}}.} $$ The z-component of the electron magnetic moment is: $${\displaystyle ({\boldsymbol {\mu }}_{\text{s}})_{z}=-g_{\text{s}}\mu _{\text{B}}m_{\text{s}}}$$ where $m_s$ is the spin quantum number. Note that $\mu$ is a negative constant multiplied by the spin, so the magnetic moment is antiparallel to the spin angular momentum. The spin g-factor $g_s = 2$ comes from the Dirac equation, a fundamental equation connecting the electron's spin with its electromagnetic properties. Reduction of the Dirac equation for an electron in a magnetic field to its non-relativistic limit yields the Schrödinger equation with a correction term which takes account of the interaction of the electron's intrinsic magnetic moment with the magnetic field, giving the correct energy. For the electron spin, the most accurate value of the spin g-factor has been experimentally determined to be $2.00231930419922 \pm (1.5 \times 10^{-12})$. Note that it is only about two thousandths larger than the value from the Dirac equation. The small correction is known as the anomalous magnetic dipole moment of the electron; it arises from the electron's interaction with virtual photons in quantum electrodynamics. In fact, one famous triumph of quantum electrodynamics is the accurate prediction of the electron g-factor. The most accurate value for the electron magnetic moment is $(-928.476377 \pm 0.000023)\times 10^{-26}\ \mathrm{J\cdot T^{-1}}$. [Figure from the Wikipedia article on magnetic domains: moving domain walls in a grain of silicon steel, caused by an increasing external magnetic field in the "downward" direction, observed in a Kerr microscope. White areas are domains with magnetization directed up, dark areas are domains with magnetization directed down.]
2. Some materials, such as iron, allow these electrons to clump into organised domains rather than being oriented randomly; iron's molecular structure acts to enhance the magnetic effect, so all these magnetic moments are aligned in the same direction. 3. The domains have to occupy a relatively large volume of the material, depending on their strength. Aluminium is an example of a metal in which domains exist but are not as organized or as widespread as in iron. 4. Another example of how important the magnetic domains are is the fact that almost all magnetic materials are solids. In fact, melting a solid will normally disrupt its magnetic field, again because the domains are disrupted; and again iron is special, as the liquid core of the Earth still makes a compass needle move. Electrons are mostly responsible for a magnetic field. Electrons' magnetic dipole moment and unpaired electrons: In an atom, at most two electrons can share the same quantum state, and then only if they differ in their intrinsic magnetic dipole moment. These moments have to be oriented in opposite directions, and the opposite orientations make the atom more or less magnetically neutral. But there exist atoms with unpaired electrons, and these atoms have an appreciable magnetic moment. (To draw the full picture: if atoms are influenced strongly enough, the magnetic dipole moments of all the subatomic particles become observable.) Alignment of the electrons' magnetic fields: From an appreciable magnetic dipole moment at the atomic level it cannot be concluded that a magnetic field exists at the macroscopic level. There are some natural minerals in which the magnetic dipole moments of the involved electrons are aligned, and these therefore act as magnets. To make a stronger magnet, a powder of magnetic material is pressed and at the same time subjected to an external magnetic field, which aligns the powder's magnetic moments into a common field. As you can see, what is responsible for the macroscopic phenomenon of a magnetic field is the intrinsic (always present) magnetic dipole moment of the involved particles. Magnetic force is created by moving electric charges. In the case of electric charge flowing along a wire, the magnetic field surrounds the wire and is perpendicular to the direction of electric flow along the wire. In the case of ferromagnetic materials (the magnets you may have played with), the magnetic field arises from the spins of electrons aligning inside the molecules of the material. The north and south poles of each molecule align with those of other molecules in the material, creating a magnetic field which acts on electric charges outside the magnetized material. To put it very simply, a magnetic field is produced when charged particles move. This is why we can make electromagnets: the moving electrons generate a magnetic field. So what makes a magnet magnetic even without a current? Well, the electrons in the material are moving in the atom, and that generates a magnetic field. Furthermore, electrons have an intrinsic magnetic moment which we call spin. These effects contribute to the magnetic field generated by an atom. But, as you will notice, all materials contain these moving electrons, yet not all are magnetic. The reason why this is the case is more complicated than what I am presenting, but it is a helpful picture to have in mind. In atoms, electrons fill the atomic orbitals in opposite pairs, so in some cases the intrinsic magnetic moments from the spins cancel out.
Further, the movement of the electrons is described by their orbital angular momentum, which in some cases cancels out due to how the orbitals are filled (see Hund's rules). The description thus far is based on atoms, but in solids these atoms have neighbours, and these play a role in the amount of angular momentum that contributes to the magnetic field. The contribution from the electrons' movement (their orbital angular momentum) can be quenched by the non-radial electric potential of the surroundings, and this diminishes the magnetic properties. I would add that the atomic description works well for solid rare-earth metals (lanthanides), as the 4f shells (which have unpaired electrons) are shielded from the effects of neighboring atoms by their 5s & 5p shells, so the 4f electrons experience a radial electric field. But all these effects are not enough to describe the iron or steel permanent magnets we know. What happens in permanent magnets (made of ferromagnetic materials) is quite special: ferromagnetic materials have domains, which are regions where all the atoms have their magnetic moments pointing in a particular direction. In unmagnetised iron, for example, these domains are randomly oriented, so the effects cancel out and the material has no net magnetic property. But if you put it in a magnetic field, these domains line up and thus add to the applied magnetic field, making the field stronger. If you stroke it with a permanent magnet, you can make these domains point more or less in the same direction, and the material becomes a permanent magnet. We have all seen or played with a magnet. It is generally known that a magnet attracts particular substances kept in a certain region, and this region is the magnetic field. Now the question teasing my mind is: why is a magnetic field produced? What is inside magnetic substances (I mean, is there a kind of core which vibrates to produce the magnetic field, or maybe something else)? A magnetic field is produced by moving charged particles, such as a current in a wire. Permanent magnets consist of materials in which the electrons moving around the atomic nuclei combine to create a magnetic field without a current. Finally, the question I am asking is: why is a magnetic field produced, and what is so special about magnetic substances? I don't know the answer to this part of your question, but the magnetic substances have a property called "ferromagnetism", where "ferro" refers to iron. There is an explanation on Wikipedia, but it seems pretty advanced. As I understand it, there are a few good ferromagnetic materials, like cobalt, nickel and iron. In those materials there are a few unpaired electrons in the outer shell, in contrast to most other elements, in which the electrons' fields cancel each other out. Because those ferromagnetic elements have domains – groups of atoms with their unpaired electrons – they align very well when influenced by another magnet. In steel, for example, there are also carbon atoms that prevent those domains from turning back into a disordered arrangement. So steel, once magnetised, makes a good permanent magnet.
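As a quick numerical cross-check of the values quoted in the first answer above (a small added sketch using CODATA-style values for the constants), one can compute the Bohr magneton and compare $g_s/2$ times it with the measured electron magnetic moment:

```python
e = 1.602176634e-19      # C, elementary charge
hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # kg, electron mass

mu_B = e * hbar / (2 * m_e)        # Bohr magneton
g_s = 2.00231930419922             # electron spin g-factor (value quoted above)

mu_spin = g_s / 2 * mu_B           # magnitude of the spin magnetic moment

print(f"mu_B   = {mu_B:.6e} J/T")      # ~9.274e-24 J/T
print(f"|mu_s| = {mu_spin:.6e} J/T")   # ~9.285e-24 J/T, matching the
                                       # measured value quoted above
```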
MAD-TH-05-5 SISSA-62/2005/EP hep-th/0508229 Warped Reheating in Multi-Throat Brane Inflation Diego Chialva, Gary Shiu, Bret Underwood [.3in] International School for Advanced Studies (SISSA) Via Beirut 2-4, I-34013 Trieste, Italy We investigate in some quantitative details the viability of reheating in multi-throat brane inflationary scenarios by estimating and comparing the time scales for the various processes involved. We also calculate within perturbative string theory the decay rate of excited closed strings into KK modes and compare with that of their decay into gravitons; we find that in the inflationary throat the former is preferred. We also find that over a small but reasonable range of parameters of the background geometry, these KK modes will preferably tunnel to another throat (possibly containing the Standard Model) instead of decaying to gravitons due largely to their suppressed coupling to the bulk gravitons. Once tunneled, the same suppressed coupling to the gravitons again allows them to reheat the Standard Model efficiently. We also consider the effects of adding more throats to the system and find that for extra throats with small warping, reheating still seems viable. Email addresses: , , . 1 Introduction Inflationary models from string theory have become more refined in the last few years [1, 2, 3, 4, 5, 7], due largely to the observation that branes and fluxes can play an important role in early universe cosmology. A key element in many of these explicit constructions is the idea of brane inflation [1], where the interaction between branes provides a microscopic origin of the inflaton potential. To embed this idea in a full-fledged string theoretical model, however, there are more hurdles to clear; the most important of which is the problem of moduli stabilization. Moreover, the mechanism which stabilizes moduli may add new constraints to this scenario and could a priori ruin the successes of brane inflation. This and related issues were addressed in a concrete Type IIB string theory setting in [5], utilizing the fact that all geometric moduli can be stabilized by background fluxes [8, 9, 10, 11, 12, 13, 14, 15] and non-perturbative effects [6]. Thus, a rather rich framework for inflation has emerged from our current, albeit still limited, understanding of flux compactification in string theory. A by-product of stabilizing moduli by fluxes is that the background geometry comes naturally equipped with a warp factor. In particular, strongly warped regions, or “warped throats”, with exponential warp factors can arise when there are fluxes supported on cycles localized in small regions of the compactified space. A prototypical example of such a strongly warped throat is the warped deformed conifold solution of Klebanov and Strassler [16, 17]. Indeed, warping is ubiquitous in many string inflationary models, e.g., inflation [5], tachyonic inflation [7], and DBI inflation [4]. The reason is that in addition to generating the hierarchy between the weak scale and the Planck scale as in the Randall-Sundrum scenario [18], warping can also help in flattening the inflaton potential and/or slowing down the D-branes in order to obtain a sufficient number of e-foldings of inflation. Even with warping, however, it appears that general contributions from stabilizing the Kähler moduli generate a large mass for the inflaton, the -problem, although some investigations of this issue have already begun [32, 33, 34]. 
Given that we now have a wide variety of string inflationary models with warped throats, a natural next step in this program is to investigate the viability of reheating in these models (see [35] for a review of reheating). However, to examine the reheating problem in a concrete setting, we need to understand better how the Standard Model can arise in flux compactification. Fortunately, much progress has been made in the past year in constructing Standard Model-like flux vacua [19, 20, 21, 22]. Thus, the time is ripe to visit, in more quantitative terms, the issue of warped reheating in the brane inflationary scenario [23, 24, 25]. A detailed study of reheating in single throat brane inflation has recently been undertaken in [24]. While reheating seems viable in models with a single throat, there are several challenges that one needs to overcome in building fully realistic models: (i) First of all, there is not much room in generating more than one hierarchy with a single throat111An exception is when there are throats within throats so that the warp factor is discontinuous in the radial direction, see, e.g., [26].. Unless additional mechanisms of generating hierarchies (e.g., dynamical effects such as supersymmetry breaking) are invoked, it seems difficult to generate simultaneously the weak scale hierarchy and a correct level of density perturbation , at least within the context of single field inflation. (ii) In a generic KKLT-type model, at least its simplest version, the scale of supersymmetry breaking is intimately tied to the inflationary scale [27]. Although there are ways out of this conundrum [27], such considerations impose rather strong constraints on model building. (iii) On a technical side, since the Standard Model branes and the pair which drives inflation reside in the same throat, the energy resulting from the annihilation can partition among both the open and closed string degrees of freedom. Unlike annihilation, the process of tachyon condensation in the presence of leftover D-branes, though considered [40] is certainly less understood. In view of these challenges, multi-throat brane inflation seems more appealing as one can generate several hierarchies of energy scales by the warp factors associated with different throats. There is also no need for additional branes to reside in the throat where the tachyon condenses. Hence the aforementioned problems with single throat inflation can be solved in one stroke. However, one should be cautious in applying this “modular” approach in model building, i.e., by introducing separate throats to generate widely different scales. As recently pointed out in [25], if the warping differs significantly between throats, string and Kaluza-Klein (KK) effects can dramatically alter the usual 4D effective field theory description of inflation and reheating. Nonetheless, irrespective of the warped scales they generate during inflation, having more throats can certainly give us more leeway in building realistic models. Most importantly, since the cosmic strings formed at the end of brane inflation are spatially separated from the Standard Model branes in multi-throat scenarios, they are not susceptible to breakage [29] and could survive cosmologically as a network of cosmic F- and D-strings and give rise to interesting signatures of string theory [28, 29, 30, 31]. Thus, there are strong theoretical reasons to evaluate the viability of this scenario. 
At the face of it, multi-throat brane inflation appears to fail on the front of reheating because there are no direct couplings between the inflationary and Standard Model branes. The two throats communicate only weakly through gravity and given the energy barrier produced by the warping of the bulk separating the throats, it seems to be a major challenge to successfully channel the energy from the annihillation into standard model degrees of freedom. Some preliminary studies of reheating in such multi-throat systems have been initiated in [23, 24, 25]. In particular, it has been suggested in [23] that the KK modes of the graviton can efficiently reheat the Standard Model, since the wavefunctions of the KK modes are sharply peaked at the tip of the throats. However, reheating is a dynamical process with several time scales involved. It is not a priori obvious if this interesting observation will be enough to guarantee efficient reheating when the full dynamics are taken into account. In particular, the issue of tunneling seems to be the most serious obstacle [23] for multi-throat reheating because the KK modes could instead decay to gravitons in the inflationary throat. Therefore, whether the Standard Model is reheated in a conventional way [23, 24] or via more exotic stringy phenomena [25], we need to first establish that the post-inflation energy can be effectively channeled to the Standard Model throat. The purpose of this paper is to investigate in some quantitative details the viability of reheating in multi-throat brane inflationary scenarios, building on some observations made in Refs [23, 24, 25]. In particular, we investigate the decay of the excited closed string end products of annihilation into KK modes and gravitons; we find that in flat space, the decay to gravitons is favored: the decaying closed strings prefer to change their oscillator level by a small amount, and since the KK modes have non-zero momentum in the radial direction of the throat, they have a suppression factor due to their reduced phase space. However, in a warped throat the couplings of KK modes to closed strings is enhanced by two powers of the warp factor, and even for small warping this enhancement is enough that the KK modes are favored decay products. These KK modes can then either tunnel from one throat to the other or decay into gravitons; surprisingly, we find that over a small but reasonable range of parameters of the background geometry the KK modes prefer to tunnel into another throat. This is largely due to the suppressed coupling between the KK modes and the graviton deep in the throat. These modes, now localized in the Standard Model throat, will then decay to Standard Model degrees of freedom, reheating the Universe, and we find that the reheating temperature can be in an acceptable range. It was suggested in [24] that adding additional throats may ruin reheating in this scenario, due mainly to the late-time decay of KK modes in the other throats; we find, however, that for mildly warped throats the decay from the KK modes in other throats reheats the Standard Model at an acceptable temperature. We should emphasize that while our analysis of warped reheating has KKLMMT-like constructions in mind, the results are applicable to various other warped inflationary models such as the scenarios of [4] and [7]. This paper is organized as follows. For completeness and to set up our notation, we review in Section 2 the setup of throat geometry in Calabi-Yau compactifications. 
In Section 3 we discuss the chain process in which the brane-antibrane pair annihilates, the resulting closed strings decay into KK modes and gravitons within the throat, and these modes tunnel out of the inflationary throat. In Section 4 we examine the decay of KK modes localized in the Standard Model throat into gravitons or Standard Model degrees of freedom, and estimate the reheating temperature. In Section 5 we discuss how additional throats will change our analysis. We conclude in Section 6. Some details are relegated to the appendix. Figure 1: We will consider a KKLMMT setup with two throats, as in a.); as a simplification, we will consider the throats to be , and glue the two throats together at a “Planck” brane, as in b.).
2 Setup
The setup is a Type IIB compactification on a Calabi-Yau (CY) 3-fold with NS-NS and R-R fluxes turned on along the internal compact dimensions. As in [16, 17], by turning on fluxes on the cycles associated with a conifold one can generate a strongly warped “throat” which is glued to the bulk CY compact space. The fluxes are quantized by: where and are the cycles on which the fluxes are supported. The throat is a warped deformed conifold where the deformation replaces the conifold singularity with an “cap”. Far from the tip of the throat the geometry looks like an exact conifold with 6-dimensional metric . The 5-dimensional space is some Einstein-Sasaki manifold (e.g., ) whose details we will not be concerned with here. Far from the deformation the throat can be described by the metric [24]: where the warping is, and are the radii of the at the top and bottom of the throat, respectively, and are given in terms of the flux quantizations as: The throat geometry can be further simplified as where is the curvature scale and the hierarchy between and sets the amount of warping in the throat. Using the coordinate transformation () one can put the equation of motion for the metric fluctuation (where are the coordinates associated with the ) in the form [24] (, where is the polarization): where is the 4-D mass of the KK mode and is the quantized angular momentum on the whose structure we will leave unspecified. The solution of this equation is given by combinations of Bessel functions times an exponential: where . (The bulk graviton zero mode is given as the limit as .) The quantization of the KK masses can be seen by imposing orbifold boundary conditions at the tip of the throat: implies that , so we find that the masses are quantized in units of the zeros of the Bessel function, for mode numbers (more precisely, [24] find that KK modes localized near the bottom of the throat are quantized in units of the radius of the cap ). Imposing continuity and jump boundary conditions across the Planck brane as in [41], we find (note that the numerical coefficient is slightly different from that used in [45, 18], where the boundary condition at the Planck brane was that the derivative of the wavefunction vanish, while in our case we merely require that the derivatives on either side of the Planck brane be equal and opposite (see Appendix); imposing different boundary conditions at the Planck brane can possibly give different behaviors of the wavefunction at the tip of the throat, but it is unclear whether these boundary conditions will lead to tunneling as in [41]; we would like to thank Hooman Davoudiasl for clarifying this point). The string scale is related to the 4-D Planck mass by , where is the volume of the Calabi-Yau compactification, which we take to be slightly larger than , and .
Because where is various factors of from integrating out the extra dimensions in the conifold, we will take . In the paper we will be interested in staying within the SUGRA description where we can ignore trans-stringy excitations, thus we wish to have . We will parameterize the excess as , where and we wish to keep track of the explicit dependence on . For deep enough throats almost all of the flux in the quantization conditions in Eq.(2.1) lies in the throats, so one can have independent curvature scales for different throats; but for simplicity we will consider both throats to have the same scale, so the product must be the same in both throats. (One can see that allowing the scales to differ gives us extra parameters to vary.) We then have three parameters: explained above, the warp factor of the I throat, and the warp factor of the SM throat (we will take as fixed by requiring that ). These parameters can be further related by noting that the warp factor gives the local string scale in the throat; because is related to through the definition of the Planck scale, and have implicit dependence as well. Making this dependence explicit, we will fix by requiring that the correct level of density perturbations is generated by the brane-antibrane potential (which is set by the local string scale in the inflationary throat). However, we will not commit ourselves to fixing the local string scale at the tip of the Standard Model throat to be TeV scale during inflation. This will give us more flexibility in satisfying the phenomenology of reheating, and presumably other mechanisms can be used in conjunction with warping to generate the required hierarchy. Another possibility is that the Standard Model throat relaxes to its ground state after inflation [25], because generic arguments suggest that stringy corrections will tend to set during the inflationary era, so a throat with a local string scale less than will generically become shorter during inflation after these corrections are taken into account.
3 Chain Processes
The reheating of the Standard Model is the last step of a sequence of processes that starts in the inflationary throat through brane-antibrane annihilation and, through a chain of decays and tunneling, ends when the inflaton energy is transferred to the Standard Model degrees of freedom. The steps in the chain decay are:
• the transfer of energy from the tachyon to closed strings at the end of inflation (Section 3.1),
• the subsequent decay of closed strings into massless string states (Kaluza-Klein modes and gravitons) (Section 3.2),
• the interactions among these lower energy degrees of freedom (Section 3.3),
• the transfer of energy into the other throat(s) (via tunneling) (Section 3.4),
• the decay of these modes into Standard Model degrees of freedom and gravitons (Section 4).
Throughout these chain processes, one must consider various cosmological constraints in order to determine if a viable model of reheating can be constructed in a self-consistent way. For example, an overproduction of gravitons or the presence of relics could interfere with nucleosynthesis or overclose the universe, and must be avoided. In the following we will analyze these issues systematically.
3.1 Brane-antibrane annihilation
The multi-throat scenario allows us to have a simple inflationary system in one throat and the Standard Model in a separate throat, sidestepping the problems of single throat inflation discussed in the introduction. Thus, the process of reheating is more clear-cut than in the single-throat model.
The end of inflation is triggered by the condensation of the open string tachyon, which develops when the distance between the brane-antibrane pair is sub-stringy, leading to what is known as the tachyon matter [37]. The latter can be thought of as a coarse-grained description of the actual physical state, a distribution of closed strings [36, 37, 38, 39]. By contrast, when Standard Model D-branes are present in the same throat, the tachyon can couple directly to the open strings on the remaining (Standard Model) D-branes instead of the usual process of decaying into closed strings. The reheating process is then more complicated and less understood (see, however, [24] for comments about the suppression of the coupling to the open string sector). The properties of the annihilation process into closed strings (the spectrum, the energy density and its distribution) can presumably be captured by the decay of unstable D-branes: the tachyon matter is an (asymptotically) pressureless gas with energy momentum components only in the directions orthogonal to the D-branes [36, 37, 38, 39]. The total average number density and the total average energy density emitted were found in [38]: where is the volume of the branes, is the amplitude for producing a state , is its energy and is the oscillator level (note that we are taking ). For large , and the probability for every single state to be produced is . Notice that it depends only on the level number and not on the partition of among the various oscillators forming the state, so every state at a given level is equally produced. Moreover, for fixed the probability is peaked on the states that have , i.e. the closed strings produced will predominantly have no Kaluza-Klein modes or winding modes in the Calabi-Yau space. Furthermore, for the energy density, where is the degeneracy of states at level and is independent of , we see that the set of states at every level number receives the same amount of energy density. In other words, energy is equipartitioned among the oscillator levels labeled by . If the maximum energy available for a single state is , then the maximum allowed level number is if .
3.2 Decay of Closed Strings into KK modes
The next step in the energy conversion process is the decay of the closed strings into lighter degrees of freedom, namely the massless graviton and KK modes (see [24] for additional discussion). The Kaluza-Klein modes are localized deep inside the throat due to the warp factor, whereas the graviton has a non-zero wavefunction all over the Calabi-Yau. After proper normalization one finds that the KK mode wavefunctions are peaked deep in the throat by a factor of relative to the graviton, so decay to the former is enhanced by a factor . However, decay to KK modes will be suppressed by phase space since the KK mass can be comparable to the local string scale, so it is not immediately clear whether KK modes or gravitons are preferred decay products. We can estimate the decay rates into gravitons and into Kaluza-Klein modes by an explicit (flat space) worldsheet computation. For a flat space computation to be valid the geometry must vary sufficiently slowly with respect to the string scale, i.e. for an AdS throat the string length must be less than the AdS length. However, since we require in order to use a SUGRA approximation, , where is the AdS length, this condition is satisfied. The geometry induces a potential that localizes the string, and so the string effectively lives inside a box [30].
It must be stressed that since our calculation is for flat space it will not capture all of the features of decay in curved space; however, as an estimate we can compute the decay rates in flat space and convolve the results with the squared wavefunctions of the corresponding decay products, i.e., the graviton and KK modes respectively, in order to take into account the enhancement of the KK mode wavefunction in the AdS space. Let us compute the average decay rate for closed strings fixing: (i) the mass of the initial state (), (ii) one of the final states (the graviton or the Kaluza-Klein mode), and (iii) the mass of the other final state (). We do not fix, however, the specific initial state (over which we average) or final state, other than specifying its mass level. The decay rate is given by where is the number of spatial dimensions, is defined as the vertex operator we need is , with is the integral over the phase-space and is the number of states at level . In particular, as we saw in Section 3.1, the closed strings produced when the system decays preferentially have momentum, Kaluza-Klein and winding charges equal to zero. The result of the computation (see, e.g., [42]) can be written in terms of and : where , is the angular integral in three dimensions, and is the Kaluza-Klein momentum of the -th KK mode. We have implicitly assumed that the KK mode has momentum only in the radial direction of the throat, although it is straightforward to include momenta in other directions of the Calabi-Yau as well. Note that in order to get these results we have approximated the distribution of states by its limit for large , but actually this approximation is already very good for moderate values of . Figure 2: The numerical value of the decay rate (in units of the local string scale) of a closed string into a.) a graviton and b.) a KK mode, for all allowed processes, is plotted for finite oscillator level (black dotted line), which is relevant for the decay of the closed string end products from annihilation, and in the limit (red solid line), which corresponds to the field theory limit of [42], where (note that we have not included any warp factor enhancement yet). The jumps in b.) are from threshold effects for the production of KK modes; see the discussion below Eq.(3.5). To obtain the total decay rate into gravitons (or similarly KK modes) for any given initial state , we sum over all the allowed processes labeled by where to . Unlike [42], we have not taken , so the total decay rate into gravitons is valid for an initial state of finite mass. Recall that in the limit, we can turn the sum over into an integral through the Jacobian , where , where . This corresponds to the field theory limit since as , the mass of the initial state or equivalently . As can be seen from Figure 2, taking finite gives a smaller value for than would be expected from field theory; however, the parametric dependence of as discussed in [42] is unchanged. The difference between the two curves in Figure 2 is due to the stringy corrections to the decay rate, namely that decays must occur in discrete steps of energy. One can obtain similar looking expressions for the KK modes, where once again the parametric dependence on is , where without the warp factor. The finite solution shows some qualitatively different features from the limiting case: the jumps are produced when, for fixed , becomes large enough that a KK mode cannot be produced, i.e. .
In particular, the jump at is for (where and , which as we will see below are reasonable values): above one cannot produce a KK mode with , so we drop below threshold. There are multiple jumps because we are summing over all , so we pick up multiple threshold effects. Notice how this is different from the limit, where we only obtain the enveloping behavior. The sum over the mode number of the KK modes of the square root in Eq.(3.4) is the phase space available to the decay when the KK mode has a non-zero 4D mass, and the sum is over all kinematically allowed KK modes. Notice that for small the energy released by the decay is also small, so it is difficult to produce KK modes, and the phase space factor reduces the rate of decay into KK modes. For large , i.e. , more KK modes can be produced; however, the decay rate then depends exponentially on and is highly suppressed. The phase space factor and the exponential suppression of decay with large (this exponential suppression is due to the sum over the oscillator part of the string states; since it depends only on the local properties of the compactification, it should be present both in flat and curved spaces) compete to maximize the decay rate into KK modes for fixed for , where one can numerically determine , so we see that only low lying KK modes () are produced (note that this also means that angular KK states, which are quantized at a higher level, are not produced in copious amounts, so the problem discussed in [24] may not be present). Similarly, one can determine that gravitons are produced most often for , . Figure 3: The numerical value of the decay rate of a (red) and (black) closed string into a KK mode for different decay processes . Notice that the decay rate is peaked at small and falls off quickly, but increasing increases the at which the rate is maximum. The large initial jump when is due to the decay process crossing the threshold for production of a KK mode. One can show that . Without any enhancement from the warp factor we find that over to , and so we see that a warp factor enhancement may help in making KK modes a preferred decay product; however, the order of magnitude of the KK decay rate is strongly dependent on the size of the cap, because this determines how easily a KK mode can be produced. Changing the value by a small amount leads to large variations in the decay rate, e.g. changing by a factor of 5 can modify the KK decay rate by orders of magnitude ()! What we are really interested in, however, is the ratio of the total amount of energy deposited into KK modes versus gravitons. This can be estimated through the following argument. The typical time for an initial state to decay into either a graviton or KK mode is: where is the typical step size. We will take if the KK modes have a larger decay rate and otherwise. The total number of species produced by a closed string is then, where we have approximated and . The total number of species produced by the closed strings is then the weighted sum over all oscillator levels; similarly, the fraction of energy carried by species is . We see then that the fraction of energy carried by KK modes versus gravitons is , so even for mildly warped throats we see that the KK modes appear to be preferred decay products. One can estimate the total time for the decay of the closed strings into KK modes and gravitons by considering the decay time for the most massive closed string.
In this case, the total time for decay is the sum of each of the steps, For the case where gravitons dominate, , we have , while for the case where the KK modes are dominant decay products due to the warp factor enhancement, .
3.3 KK mode interactions
Because the KK modes carry conserved quantum numbers they cannot decay into lighter degrees of freedom unless they collide with another KK mode with opposite quantum number in the throat. The time scale of such collisions is found by estimating when the average number of collisions (the number density of the particle times the column depth times the cross-section of interaction) is approximately one, i.e. , where is the particle number density, is the cross-section, and is the velocity. The KK modes as initial decay products of excited closed strings are relativistic. This can be seen as follows. The average energy per KK mode is approximated by the average energy per closed string and their mass is therefore and so we can assume (taking will only increase the annihilation time). Moreover, as we will see, the KK modes thermalize quickly so it is reasonable to take in estimating the time scales for the processes involved in the inflationary throat. The KK states can annihilate and pair produce gravitons or other (kinematically allowed) KK modes. The cross-section for KK modes to annihilate into bulk gravitons is set by the Planck scale . We will approximate the number density after inflation by the total energy density after inflation divided by the average energy per KK mode; the energy density after inflation is, where is the tension of a . The average energy per KK mode is, as we said above, (3.11). The number density after inflation then is given by . Plugging these values in we can obtain a rough estimate for the timescale for KK modes to decay to gravitons: As mentioned above, KK modes can interact among themselves and thermalize, coming to a thermal temperature (see also [24]). The timescale for thermalization is similar to the timescale for KK modes to interact and decay to gravitons, except that the cross-section is now set by the local scale, . We immediately see, then, that the timescale for thermalization of the KK modes is much smaller than the timescale for KK modes to decay into gravitons: We can estimate the thermal temperature of the relativistic KK modes by setting the thermal energy density equal to the initial energy density of the system: The number of KK degrees of freedom is approximately equal to the number of relativistic KK modes, as we will argue below. The temperature of the KK modes, then, is: Since only KK states with are relativistic, we see that only states with mode numbers are relativistic; since the number density of non-relativistic modes has an suppression, most KK modes will occupy relativistic degrees of freedom, thus . Note that the actual low-lying zeros of the Bessel functions of order 2 do not follow the simple relation , but instead are at a higher mass level than this relation would suggest. As we will see below we expect to be small, thus will also be small, so only the very lowest lying KK states will be excited. Since angular states, as discussed in [24], are zeros of Bessel functions of a larger order where is a conserved quantum number of the internal space, the lowest lying angular KK modes will be above the small thermal temperature and thus we do not expect that they will be present in significant numbers after thermalization.
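To make the order-of-magnitude logic of this section concrete, here is a minimal numerical sketch (not taken from the paper; all numbers are placeholders) of the estimate used above: a process with number density n, cross-section sigma and relative velocity v becomes efficient on a timescale of order 1/(n sigma v). It only illustrates why a cross-section set by the local warped string scale leads to much faster thermalization than a Planck-suppressed annihilation into gravitons.

```python
# Toy comparison of interaction timescales, tau ~ 1/(n * sigma * v).
# All numbers are hypothetical placeholders in local string units;
# only the ratio of the two cross-sections matters for the point made here.

def timescale(n, sigma, v=1.0):
    """Time at which the average number of collisions n*sigma*v*t reaches one."""
    return 1.0 / (n * sigma * v)

n_kk = 1e-3                  # KK-mode number density after inflation (placeholder)
sigma_local = 1.0            # cross-section set by the local (warped) string scale
sigma_planck = 1e-10         # cross-section suppressed by the 4-D Planck scale (placeholder)

t_thermalize = timescale(n_kk, sigma_local)
t_gravitons = timescale(n_kk, sigma_planck)
print(t_thermalize, t_gravitons, t_gravitons / t_thermalize)
# The thermalization timescale is shorter than the graviton timescale by the
# ratio of the two cross-sections, which is the hierarchy invoked in the text.
```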
3.4 KK tunneling
We would now like to estimate the time required for modes localized in the I throat to tunnel into the SM throat. One can consider a toy model of the two throat system by gluing two finite RS spaces together at the Planck brane as in [41, 23] (see Figure 4). Under the coordinate transformation , the metric can also be written in the form, where as before is the curvature scale. We will place the IR brane for the inflationary throat at and the IR brane for the SM throat at . The utility of expressing the metric in these coordinates is that the equation of motion for the KK modes of the graviton, becomes a simple Schrödinger equation for (suppressing the indices): where the potential felt by the KK mode is proportional to the square of the warp factor (we have ignored the extra part of the potential felt by the angular states). Figure 4: The warping creates a potential barrier for Kaluza-Klein states of the graviton in the Schrödinger coordinate system. States initially localized in the inflationary (left) throat must tunnel through this barrier to communicate with the Standard Model (right) throat. The Schrödinger energy of a state is . The general solution to this equation of motion can be expressed in terms of Bessel functions, We impose boundary conditions at the IR branes (which gives us our mass gap as discussed in Section 2) and the appropriate Israel jump conditions at the Planck brane. We would like to consider tunneling of an incoming state from the left hand throat into a state on the right hand throat, for which [41] finds a tunneling probability for (certainly the case after thermalization for mode number ) of In the calculation of [41] the reflection of the KK mode at the wall of the finite SM throat was not included, and one may worry that this reflection may significantly alter the tunneling results. However, one can see that the reflection will only increase the amplitude in the SM throat by a factor of 2 (i.e. constructive interference), and when it tunnels back through the barrier it will contribute a vanishingly small amount to the I throat. Performing the calculation explicitly by including the reflected wave confirms this conclusion (see Appendix A), namely that the probability Eq.(3.22) is effectively unchanged. The tunneling rate from throat I to throat SM is found by multiplying the tunneling probability Eq.(3.22) by the flux, the inverse of the potential wall length , so the tunneling rate is: Note that when a KK mode tunnels from throat I to throat SM the mass of the mode stays (approximately) the same, and since the lengths are the same, the tunneling probability back to the I throat is the same (making the lengths different will make the tunneling probability more asymmetrical, suppressing the tunneling more in one direction than the other). The tunneling rate, however, is inversely proportional to the (conformal) length of the throat, thus modes in a long throat take longer to tunnel out. Note that the depth of the two wells is approximately the same and does not cause much difference in the tunneling rates. This can be understood semi-classically as follows. The average time required for a particle to escape a potential well is the average number of collisions required to escape multiplied by the time between collisions. The average number of collisions required is the inverse of the tunneling probability, . The time between collisions is twice the length of the well divided by the speed of the particle.
In conformal coordinates, Eq.(3.18), the KK modes behave like massless Klein-Gordon particles, and the length of the potential well is measured in terms of the conformal length, thus the average time required for a KK mode to escape is , which gives the tunneling rate Eq.(3.23) (up to a factor of two). The tunneling rate can also be represented as the mixing between two weakly coupled potential wells, as in [24]: is the Hamiltonian for the coupled system, , and is the coupling of the systems. Note that is the difference in mass between states in the two throats, not the mass gap within the throats. To leading order in one finds, The coupling of the two systems should be proportional to some power of the tunneling probability; [24] suggests that , which seems to imply ; in order to get Eq.(3.23) we must have . But this says that the difference in mass between the states in different wells is proportional to the mass level, which seems unlikely in a general setup. Instead, one can show that for a symmetric double well, the expression for should be (note that the flux is still ). This makes sense, since the coupling between two weakly coupled potential wells should be proportional to the splitting of their adjacent masses. A crucial aspect (as noted in [23]) of the viability of warped reheating is the ability to channel energy into Standard Model degrees of freedom and not to lose energy to bulk gravitons, which can ruin BBN and other cosmological observations. Naively one would expect the perturbative decay of KK modes to gravitons to dominate over the non-perturbative tunneling effects555We thank D. Chung for emphasizing this point to us., thus reheating would fail to channel energy onto the Standard Model branes effectively. However, several effects seem to render this intuition invalid. First, as noted above, the warping induces small couplings between the gravitons and KK modes. Second, KK to graviton decay must occur via pair annihilation in order to conserve extra dimensional quantum numbers, so the decay can only happen when a KK mode finds its partner with opposite quantum numbers. Because of these effects, it is possible that there exists a range of parameters for the model for with . Since thermalization drops the KK states down to the lowest mass levels, we will take , which is the longest timescale for tunneling. We find, We find that for, one can suppress the decay to gravitons. For and this gives a lower limit , and keeps , which is what we wanted in order to trust the SUGRA approximation anyway. However, this sets , so the “throat” is not very strongly warped, which also helps the modes tunnel through. An important question is whether the tunneling takes place within the Hubble time after inflation, for otherwise the tunneling does not take place until the Hubble rate falls below the tunneling rate (which has implications for KK interactions in the SM throat). The Hubble time after inflation is found from , and so the ratio of the tunneling time to the Hubble time is, Thus, for we must consider Hubble expansion for tunneling. Note that while this gives us a small range of , this small range already implicitly existed since and we want for a warped throat, thus too large of ruins the warping of the throat. One can also check that the KK modes thermalize within a Hubble time for and so for within the values (), corresponding to (which are within our approximation ), we have the hierarchy of timescales: . 
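The tunneling step can be illustrated with a toy quantum-mechanics calculation. The paper's treatment uses the Bessel-function wavefunctions of the warped background; the sketch below is only a rough stand-in under simplifying assumptions: it models the barrier between the two throats as a rectangular potential in the conformal (Schrödinger) coordinate, uses the textbook transmission coefficient, and combines it with the semi-classical escape-rate argument quoted above (attempt frequency times transmission probability). All parameter values are hypothetical.

```python
import numpy as np

# Toy model: a KK mode with "Schrodinger energy" E hits a rectangular barrier of
# height V0 and width a, standing in for the potential hump produced by the warped
# bulk between the two throats.  Units are chosen so that hbar^2/(2m) = 1,
# hence kappa = sqrt(V0 - E) inside the barrier (valid for E < V0).

def transmission(E, V0, a):
    """Exact transmission coefficient through a rectangular barrier for E < V0."""
    kappa = np.sqrt(V0 - E)
    return 1.0 / (1.0 + (V0**2 * np.sinh(kappa * a)**2) / (4.0 * E * (V0 - E)))

def tunneling_rate(E, V0, a, well_length, v=1.0):
    """Semi-classical escape rate: (v / 2L) attempts per unit time, each succeeding
    with the transmission probability, as argued in the text.  Massless modes move
    at unit speed in the conformal coordinate, hence the default v = 1."""
    return transmission(E, V0, a) * v / (2.0 * well_length)

# The rate is exponentially sensitive to the barrier width, i.e. to the warping.
for a in (1.0, 2.0, 4.0):
    print(a, transmission(0.1, 1.0, a), tunneling_rate(0.1, 1.0, a, well_length=10.0))
```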
4 Standard Model Throat
After the KK modes from the I throat tunnel into KK modes in the SM throat, these KK modes can do several things in the SM throat, each with its own timescale: 1) decay to gravitons , 2) decay to Standard Model degrees of freedom , 3) tunnel out of the SM throat and 4) thermalize and drop down to relativistic degrees of freedom at the bottom of the throat , all of which can happen relative to 5) the Hubble timescale . Before we determine any of these timescales, we must determine the approximate number density of the KK modes after tunneling, which will be important in our calculation of the thermalization and graviton decay timescales. Since the number density falls off as , where is the scale factor, the number density after tunneling is, We will estimate the energy density for the modes to be non-relativistic, which falls off as , and since the tunneling will happen after the Hubble rate drops below the tunneling rate, i.e. we have: Putting Eqs.(4.1) and (4.2) together we find We can now use this to estimate the thermalization and graviton decay time scales for the KK modes in the SM throat as in Section 3.
4.1 KK graviton decay
We again estimate the decay of KK modes to gravitons as , where we will take and as before. For the number density we will use the estimate Eq.(4.3) after tunneling, and our estimate for the graviton decay timescale is then Similarly, we estimate the timescale for thermalization of the KK modes, where we note that now the cross-section is suppressed only by the local string scale in the SM throat, , so that the thermalization time scale is again much faster than the decay to gravitons, and is given by:
4.2 Decay to Standard Model degrees of freedom
The decay rate of KK particles to Standard Model degrees of freedom on a brane located at in the throat can be estimated by considering the interaction to be of the form [43, 44, 45] (where we are considering only the part of the throat here): where is the 5-D Planck mass, related to the 4-D Planck mass by , and is the energy-momentum tensor of the Standard Model fields. The metric fluctuation can be expanded as: The 5-D part of the wavefunction is given by the general formula Eq.(3.21) and is normalized as . Using the normalization, the expansion of the KK modes Eq.(4.7), and the definition of the 5-D Planck mass we have then, where is set by the local string scale at the tip of the Standard Model throat. The decay rate to Standard Model fields is then approximately [44]: A more precise determination of the decay rate, including loop corrections, can be found in [43, 44, 45, 46], where we have ignored the extra contributions which depend on the ratio of the mass of the Standard Model particles and the mass of the KK modes, . Note that this estimate is similar to that of [24], where they consider decay of KK modes into the scalar transverse excitations of the brane [46] which then quickly decay into Standard Model degrees of freedom: Here is the effective mass of the () induced by the background fluxes (SUSY breaking). We see that for we can neglect the square root in Eq.(4.10) and this decay rate is similar to the decay to Standard Model particles. However, since and a will have from the fluxes, the decay rate to the transverse fluctuations becomes effectively zero. Decays directly to Standard Model degrees of freedom thus have more phase space and we will use Eq.(4.9) to determine the decay time to Standard Model degrees of freedom.
Using we find,
4.3 Reheating
We would like the decay of KK modes into Standard Model particles Eq.(4.11) to be quicker than the decay into gravitons Eq.(4.4), We find that for , this ratio is less than one for the range of quoted at the end of Section 3, thus the decay to Standard Model degrees of freedom is favored. However, for (i.e. TeV if one wants to generate the weak scale hierarchy during inflation) we must bring down to such small values in order to have this process be favored that it does not appear we can simultaneously ensure that the KK modes in the inflationary throat tunnel and the KK modes in the SM throat reheat the Standard Model degrees of freedom. This can be seen by noting that less massive modes couple more weakly to Standard Model degrees of freedom, Eq.(4.9). A deeper throat means that the lightest KK modes in the throat are less massive, and thus couple more weakly to the Standard Model. Meanwhile is insensitive to the details of the throat construction, so eventually KK to graviton decay is favored. The ratio of the time to decay to Standard Model particles and the Hubble time is, This ratio is less than one for as we took above and for all in the range quoted above at the end of Section 3. Thus, we expect that the reheating of the Standard Model from KK modes decaying in the Standard Model throat will take place immediately after those modes tunnel. Also note that
Leaps and bounds in quantum technologies
The Fraunhofer-Gesellschaft and IBM are bringing the first quantum computer to Germany – with other quantum technologies in the pipeline.
Quanta can assume different states... Marta Gilaberte Basset from the Fraunhofer Institute for Applied Optics and Precision Engineering IOF is working on new technologies for applied quantum imaging. © Sven Döring
... at the same time. Dr. Erik Beckert from the Fraunhofer Institute for Applied Optics and Precision Engineering IOF holding an entangled photon source. © Sven Döring
That is not all they have in common with those multitasking quantum engineers at Fraunhofer. Prof. Andreas Tünnermann, director of Fraunhofer IOF, is one of the leading lights in the field of applied quantum technology. © Sven Döring
Quantum technologies: The future is now
Web special of the cover story of Fraunhofer magazine 4.2019
Quantum mechanics is slippery, defying intuitive grasp. Albert Einstein called it “spooky.” He was born in 1879. Yet here it is in the here and now, a ghost knocking at the door of everyday reality. In a rare consensus, experts agree that quantum technology is poised to change the world. Take the field of medicine: Quantum sensors could shed new light on brain functions. Quantum imaging could revolutionize diagnostic procedures. Quantum computers could yield new insights into molecular chemistry to fast-track drug discovery and slash their production costs. The mind boggles at the potential of applied quantum mechanics. Who knows where this technology will take us? Will we be able to better protect the climate with the means to measure and predict climate change far more accurately? What marvelous products might emerge once we are able to develop and test materials faster and at lower cost? Will gridlocks be but a bygone annoyance once quantum computers send every traveler down the best path? And will quantum technology help create a secure and sovereign digital infrastructure for business and private citizens? Expectations are high; investments are soaring. The German federal government has earmarked 650 million euros for research into quantum technologies until 2021. And the EU’s Quantum Flagship initiative has a budget of one billion euros for European research over the coming ten years. Fraunhofer and IBM will be bringing the world’s first commercial quantum computer to Europe to serve as an open research platform. They aim to ramp it up in Germany by late 2021. Prof. Reimund Neugebauer, president of the Fraunhofer-Gesellschaft, expects it to give a “decisive boost for German research and companies of all sizes” with “full data sovereignty based on European law.”
Current news
“FUTURAS IN RES” conference: The Quantum Breakthrough, November 23–25, 2021 in Berlin
In 2021 the Fraunhofer-Gesellschaft is organizing the third international science and technology conference in its series “FUTURAS IN RES” on this year's topic “Quantum Technologies”.
more info Video: The Quantum World Quantum computing On September 10, 2019, IBM and the Fraunhofer-Gesellschaft, Europe’s leading applied research organisation, announced a partnership agreement set to deliver major advances in quantum computing research in Germany. The objective is to boost the development of skills and strategies relating to commercial and application-oriented quantum computing. Quantum communication Digital sovereignty and data security are essential for a well-functioning digital society. Quantum communication takes security to a whole new level. Researchers are currently developing quantum-based cryptographic procedures which will in future make it impossible to eavesdrop on transmitted data. Quantum imaging Quantum imaging will have far-reaching effects in all areas of optics. It will eliminate existing blind spots in fields as diverse as medical imaging and diagnostics, security technology and autonomous mobility. Quantum AI The race to develop quantum AI Quantum computing is expected to be the springboard for a huge leap forward in artificial intelligence (AI). A new interdisciplinary research field called quantum machine learning (QML) has emerged at the intersection of these two key technologies. Quantum sensors Quantum sensors: new opportunities for medical technology Photons are one, but not the only, way of taking measure of the quantum world. Electrons are another. A research team at Fraunhofer IAF is using these tiny particles to develop ultra-precise quantum sensors. Interview with Professor Tünnermann, Fraunhofer IOF “Germany is at a very good vantage point” One of the leading minds in the field of quantum technologies at Fraunhofer, Prof. Andreas Tünnermann heads up the Fraunhofer Institute for Applied Optics and Precision Engineering IOF at Jena. Fraunhofer lighthouse projects and initiatives on quantum technology Fraunhofer lighthouse project QUILT – Quantum Methods for Advanced Imaging Solutions In recent years, quantum researchers have achieved a series of scientific breakthroughs. They are now able to create and manipulate specific exotic quantum states at will and have demonstrated their disruptive practical potential in ground-breaking experiments. There is talk of a ‘second quantum revolution’, with quantum technology set to become a key technology for our modern information society. In the quantum imaging field in particular, through its institutes and scientific and commercial partnerships, the Fraunhofer-Gesellschaft is in an excellent position to proactively shape this revolution. In this context, QUILT is developing entirely novel imaging and detection methods.   Fraunhofer lighthouse project QMag – Quantum magnetometry Fraunhofer’s IAF, IPM and IWM Institutes, all based in Freiburg, are aiming to bring quantum magnetometry out of the research lab to deliver real commercial applications. In close collaboration with the Fraunhofer IMM and IISB Institutes and the Fraunhofer Center for Applied Photonics (CAP), the research team is developing highly integrated imaging quantum magnetometers with extremely high spatial resolution and optimum sensitivity. QuNET – Interception-proof quantum communication As part of the BMBF-funded QuNET initiative, the Fraunhofer-Gesellschaft, Max Planck Society and German Aerospace Center will set up a pilot quantum communication network in Germany. This network will allow eavesdropping-proof, tamper-proof data transfer. 
Key Strategic Initiative quantum technology
Welcome to the quantum world
Quantum physics is not something most of us encounter in our daily lives. Everything we can experience first hand – the big stuff of our macroscopic world – obeys the laws of conventional physics. Sub-atomic particles defy these familiar principles. The laws of quantum physics rule on the atomic scale, where strange things happen. This is a world where elementary particles, atoms or even molecules can behave like particles or like waves. They can even exist in several states at once. Two particles can become entangled, so that one always possesses the complementary information on its twin, irrespective of the latter’s location. This uncertainty about a particle’s actual state goes to the heart of quantum physics. Rather than being in one state or changing between a variety of states, particles exist across several possible states at the same time in what is called a superposition. Nothing is fixed and anything is possible, so we are dealing with probabilities here – or, more precisely, with probability waves. We cannot know the exact position or state of a particle until we observe or measure it, which destroys the quantum state.
Wave-particle duality
Elementary particles such as photons or electrons, and even atoms and molecules, sometimes behave like a wave, at other times like a particle. A conventional particle can only occupy one position, but a wave propagates in space and can overlap other waves.
The quantum tunnel effect
The wave-like properties of particles allow them to move through energy barriers as if passing through walls. Humans consist of particles, so the theoretical probability of each particle in the human body possessing the ability to cross the rectangular potential barriers of a wall is greater than zero. Be warned, though: Attempts to demonstrate this ability could prove painful.
Schrödinger’s cat
Perhaps the most famous thought experiment for explaining quantum physics involves a cat and a flask of poison gas. The two are placed in a box containing a radioactive source and a mechanism that breaks the flask when it detects a radioactive particle. At any given moment, there is a certain probability that the poison gas will be released. The process of radioactive decay is an ideal randomizer for determining this moment in time. In other words, without any interaction with the outside world, Schrödinger’s “quantum” cat is in a state of superposition, entangled with the state of a radioactive particle. The cat is therefore both dead and alive, its state remaining indeterminate until someone opens the lid to take a look inside the box.
Quantum entanglement
Einstein once described this effect as “spooky action at a distance.” The properties of entangled particles are always complementary. And, although they may be light-years apart, they are inseparably linked to one another. If, for example, a state of vertical polarization is observed in one of a pair of photons, then the other must be horizontally polarized. And this despite the fact that its state has not been previously ascertained and no signal has been exchanged between the two particles.
Timeline: The history of quantum physics
The dawn of quantum physics – Max Planck postulates quantum theory: light consists of tiny, discrete packets of energy known as quanta
Niels Bohr first formulates a quantized model of the atom
Albert Einstein postulates the general theory of relativity and posits the existence of photons as particles
Louis-Victor de Broglie postulates wave-particle duality
Erwin Schrödinger describes matter waves as probability waves and postulates the Schrödinger equation, one of the fundamental equations of quantum mechanics
Werner Heisenberg formulates the uncertainty principle: the position and momentum of an electron cannot be determined simultaneously
The thought experiment of Schrödinger’s cat
Otto Hahn discovers nuclear fission; the first atomic bomb soon follows
European countries establish CERN (Conseil Européen pour la Recherche Nucléaire) to investigate subatomic particles
From the 1950s: Scientists put macroscopic quantum systems to practical use, ushering in the first quantum revolution
The first microwave
The first microchip
The first laser
John Bell formulates Bell’s theorem: there are no local parameters determining the behavior of a quantum system
The double-slit experiment with single electrons demonstrates the theory of wave-particle duality
Experiments by Alain Aspect prove the hypothesis of quantum entanglement
From the 1990s: Scientists manipulate individual quanta, sparking the second quantum revolution
Prof. Anton Zeilinger of Innsbruck University demonstrates quantum teleportation
In the 1990s: The first experimental quantum computers with 3, 5 and 7 qubits emerge
Error-free data transfer via teleportation establishes the basis for a quantum Internet
China launches Micius, the first quantum communications satellite, for research purposes
China builds the world’s first quantum communications link
The EU Quantum Flagship provides one billion euros for research. The Fraunhofer lighthouse project QUILT gets underway
IBM unveils Q System One, the world’s first commercial quantum computer. Fraunhofer, Max Planck and DLR launch the QuNet initiative. Fraunhofer kicks off the QMAG lighthouse project. Fraunhofer and IBM announce plans to bring the first quantum computer to Europe
Five sectors destined to be changed by quantum technology
Medicine and healthcare: Deeper insights into biological processes, more efficient drug discovery, more powerful diagnostics
Logistics and transport: Optimized route planning, more informed decisions about the best locations for logistics centers and the like, better power grid management
Finance: Enhanced portfolio management, risk analysis and fraud detection and prediction; secure communications and data transfer for online banking and the like
Material sciences: Simulation of unprecedented materials at the molecular level, faster development of materials, more precise testing methods
IT and security: More powerful computers for previously unsolvable problems, encrypted communications networks
The Azimuth Project Blog - time inversion symmetry breaking
This page is a blog article in progress, written by Zoltán Zimborás. To see discussions of this article while it was being written, visit the Azimuth Forum.
Directing Quantum Motion: the Art of Time-Reversal Symmetry Breaking
guest post by Zoltán Zimborás
Jacob’s comments: • figures larger: 400 pixels • don’t worry about this part: I’ll write it later
If you are a follower of the network theory series on Azimuth, you must feel pretty well-informed in comparing the differences and similarities between stochastic and quantum mechanics. However, we still hope to be able to surprise you by presenting a situation where these two worlds differ in a particularly subtle way: the case of directed (or biased) walks. One surprising difference is that for quantum walks, biasing a direction A → B (compared to the reversed B → A direction) is only possible if the topological structure of the underlying graph is apt for it. This is related to certain obstructions on time-reversal symmetry breaking in quantum evolutions, which we will discuss today. Sounds like fun math! But before we jump into the details, let’s mention that these considerations are not only mathematical amusements. The discussed topological effects appear in solid state physics and, more intriguingly, they may even be used by plants! Light-harvesting complexes of plants and bacteria, as was mentioned in the previous post on Azimuth, offer one of the main motivations for studying quantum evolutions on graphs with complicated topologies. These complexes are believed to be evolutionarily optimized for the (quantum) transport of energy, see a discussion here. Interestingly, recent theoretical and experimental investigations indicate that quantum directional biasing, the theme of today’s blog post, might also be present in this type of energy transport. It is an especially good time to consider this topic since we just published a paper in Phys. Rev. A on the subject together with experimentalists from the Institute for Quantum Computing: D. Lu et al, Chiral Quantum Walks. This was based on our previous theory paper in Scientific Reports that we wrote together with Zoltán Kádár, James Whitfield and Ben Lanyon. We discussed the theoretical aspects and also showed by numerics that directional biasing (or time-reversal symmetry breaking) can in principle considerably speed up transport in light-harvesting complexes and in other complex quantum networks. In this post, we will only concentrate on the basic features. Since some parts of our papers are rather technical, here we will make the exposition more comprehensible with the help of some old friends of quantum mechanics: kets… cats, I mean. Many of you reading this will know that the use of cats in quantum physics is ever so common. The application of cats is credited to Schrödinger; they are nearly always threatened but rarely harmed. In this tradition, today we’ll appeal to these fuzzy quantum felines to conduct our experiments, and illustrate the concepts of the paper.
Catwalk on a ladder
Imagine our Gedanken-cat sitting on a rung of a horizontal ladder. Being a bit restless, she sometimes jumps to one of the neighboring rungs - with equal probabilities to the left or to the right.
(Figure: quantum ladder without dog)
This type of continuous-time random (cat)walk is described by a master equation - as was discussed in parts 16 and 20 of the network theory series, and in the previous post on Azimuth. The main ingredient in this description is the adjacency matrix, which characterizes the topology of the possible elementary jumps. For the depicted six-rung ladder, the actual topology is the following: from the first and the last rung the cat can jump only to one other rung, while from any of the four middle rungs there are two jumping possibilities (to the left and to the right). In matrix form this neighborhood structure can be encoded as

A = \left( \begin{matrix} 0 & 1 & 0 & 0 & 0 & 0\\ 1 & 0 & 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0\end{matrix} \right).

The adjacency matrix A defines a Laplacian through the formula L = D - A, where the entries of the diagonal degree-matrix D are defined by D_{ii} = \sum_{j=1}^6 A_{ij}, with the result being:

L = D - A = \left( \begin{matrix} 1 & -1 & 0 & 0 & 0 & 0\\ -1 & 2 & -1 & 0 & 0 & 0\\ 0 & -1 & 2 & -1 & 0 & 0\\ 0 & 0 & -1 & 2 & -1 & 0\\ 0 & 0 & 0 & -1 & 2 & -1 \\ 0 & 0 & 0 & 0 & -1 & 1\end{matrix} \right).

The probability vector \psi = (\psi_1, \psi_2, \ldots, \psi_6), where the entry \psi_k gives the probability that the cat is on the k-th rung, evolves according to the master equation generated by -L:

\frac{d}{d t} \psi(t) = -L \psi(t).

Suppose that while our cat sits on the ladder in the autumn sun, she is approached by the neighbor’s dog from the left.
(Figure: quantum ladder with dog)
As the two species have a different picture of reality, unavoidable conflicts pop up. Hence, as an educated guess, we could assume that the cat’s motion would in this situation be biased towards the right, away from the dog. A biased stochastic motion can be characterized by a weighted and directed adjacency matrix, i.e., with an A_d matrix that can have any real nonnegative entries (not only 0s and 1s) and that is not symmetric. In the present example, it could be

A_d = \left( \begin{matrix} 0 & 1+p & 0 & 0 & 0 & 0\\ 1-p & 0 & 1+p & 0 & 0 & 0\\ 0 & 1-p & 0 & 1+p & 0 & 0\\ 0 & 0 & 1-p & 0 & 1+p & 0\\ 0 & 0 & 0 & 1-p & 0 & 1+p \\ 0 & 0 & 0 & 0 & 1-p & 0\end{matrix} \right),

where p is between 0 and 1. By increasing p, the stochastic walk generated by the Laplacian L_d = D_d - A_d would be more and more biased towards the right.
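For readers who like to experiment, here is a small numerical sketch (not part of the original draft) of the cat’s master-equation dynamics, using numpy and scipy. For the biased walk the generator is built directly from the jump rates, 1+p to the right and 1-p to the left, with columns summing to zero in the infinitesimal-stochastic convention of the network theory series; this is one concrete way to realize the biased walk described above.

```python
import numpy as np
from scipy.linalg import expm

# The six-rung ladder: adjacency matrix A, degree matrix D and Laplacian L = D - A.
A = np.zeros((6, 6))
for k in range(5):
    A[k, k + 1] = A[k + 1, k] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Unbiased cat: solve d/dt psi = -L psi, starting from the leftmost rung.
psi0 = np.array([1.0, 0, 0, 0, 0, 0])
print(expm(-L * 5.0) @ psi0)          # probability spreads out over the rungs

# Biased cat (dog on the left): generator built from the jump rates,
# rate 1+p for a jump to the right and 1-p for a jump to the left.
def biased_generator(p):
    Q = np.zeros((6, 6))
    for k in range(5):
        Q[k + 1, k] += 1.0 + p        # jump k -> k+1 (to the right)
        Q[k, k + 1] += 1.0 - p        # jump k+1 -> k (to the left)
    return Q - np.diag(Q.sum(axis=0)) # columns sum to zero: probability is conserved

print(expm(biased_generator(0.8) * 5.0) @ psi0)   # probability piles up on the right
```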
The limiting probability distribution, for example, would be more and more skewed towards the right - with p = 1 being the situation where the cat goes strictly to the right, so that in the limiting probability distribution she sits with probability 1 on the rightmost rung.
The unbearable unbiasedness of quantum being
How would the previous two scenarios look if our kitten behaved quantum mechanically? The quantum analogue of the undirected walk, i.e., when the Laplacian is symmetric, has been extensively treated in part 16 of our network series. In this case, we could get an analogous quantum walk by the Schrödinger equation:

\frac{d}{d t} \psi(t) = - i H \psi(t)

where the quantum Hamiltonian H is simply the negative Laplacian

H = -L = \left( \begin{matrix} -1 & 1 & 0 & 0 & 0 & 0\\ 1 & -2 & 1 & 0 & 0 & 0\\ 0 & 1 & -2 & 1 & 0 & 0\\ 0 & 0 & 1 & -2 & 1 & 0\\ 0 & 0 & 0 & 1 & -2 & 1 \\ 0 & 0 & 0 & 0 & 1 & -1\end{matrix} \right).

We expect that the quantum walk defined this way would not be biased. But what does this mean in exact mathematical terms? Let P_{A→B}(T) denote the probability that we find our cat on site B at time t = T, supposing that she started the walk from site A at time t = 0. Similarly, let P_{B→A}(T) be defined by the reverse situation, i.e., the probability of finding the kitten at A when she started from B. We call a quantum walk directionally unbiased if

P_{A \to B}(T) = P_{B \to A}(T)

holds for all times T and all pairs of sites (A,B). In the next section, we will prove that this property holds not only for our particular walk, but for any quantum walk that is generated by a real quantum Hamiltonian. One would expect that such a general result is related to some type of symmetry. This is indeed the case: what we are seeking is time-reversal symmetry.
If only I could turn back time
Our classical intuition tells us already that time-reversal symmetry and directional motion are naturally related:
Jacob’s comment: Example starts by explaining a video played forwards or backwards. I can get the artist to draw the cat’s motion on what looks like film.
Now let’s take a look at this relation in the quantum world! In this very simple model of one-particle… one-cat quantum mechanics, the operation implementing time-reversal is complex conjugation, which we will denote by C. You may recall from our earlier posts that in the quantum case the vector \psi = (\psi_1, \psi_2, \ldots) has complex entries, and the action of C on such a \psi is simply

C \psi = \overline{\psi},

where \overline{\psi} = (\psi_1^{*}, \psi_2^{*}, \ldots), i.e., the action is componentwise complex conjugation. C is a so-called anti-linear operator, and is also an involution (C^{-1} = C). But how does C implement time-reversal symmetry? (To be continued…)
Walk the line - with phases
Okay, so H = -L, being a symmetric matrix, will not yield directional biasing, and we cannot use -L_d as a quantum Hamiltonian either since it is not self-adjoint.
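As a quick numerical sanity check (added here, not in the original draft): for the real symmetric Hamiltonian H = -L above, the whole matrix of site-to-site probabilities is symmetric at every time T, which is exactly the directional unbiasedness just defined.

```python
import numpy as np
from scipy.linalg import expm

# Quantum cat on the ladder with H = -L (real and symmetric).
A = np.zeros((6, 6))
for k in range(5):
    A[k, k + 1] = A[k + 1, k] = 1.0
H = -(np.diag(A.sum(axis=1)) - A)

def site_to_site(H, T):
    """Matrix whose (B, A) entry is P_{A -> B}(T) = |<B| exp(-i H T) |A>|^2."""
    U = expm(-1j * H * T)
    return np.abs(U) ** 2

for T in (0.7, 2.3, 10.0):
    P = site_to_site(H, T)
    print(T, np.max(np.abs(P - P.T)))   # ~1e-16: P_{A->B}(T) = P_{B->A}(T) for all pairs
```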
Then purr, purr… purrhaps we should consider using a self-adjoint, but complex Hamiltonian, i.e., setting complex phases into H:

H_d = \left( \begin{matrix} -1 & e^{i \alpha} & 0 & 0 & 0 & 0\\ e^{-i \alpha} & -2 & e^{i \alpha} & 0 & 0 & 0\\ 0 & e^{-i \alpha} & -2 & e^{i \alpha} & 0 & 0\\ 0 & 0 & e^{-i \alpha} & -2 & e^{i \alpha} & 0\\ 0 & 0 & 0 & e^{-i \alpha} & -2 & e^{i \alpha} \\ 0 & 0 & 0 & 0 & e^{-i \alpha} & -1\end{matrix} \right).

(To be continued…)
Circle the cat
Achiral cat
Chiral cat
Light-harvesting Complexes
Blog ends here
Here the draft for the blog post ends. Below you may find the material that we may use for this blog article.
Background material from the paper:
In the standard literature on continuous time quantum walks [FG98, CCDFGS03, MB11, Kempe03, Kendon06], the time-independent walk Hamiltonian is defined by a real weighted adjacency matrix J of an underlying undirected graph,

H = \sum^{sites}_{n,m} J_{nm}(|n\rangle\langle m| + |m\rangle\langle n|)

The condition that the hopping weights J_{nm} are real numbers implies that the induced transitions between two sites are symmetric under time inversion. We can break this symmetry while maintaining the hermitian property of the operator by appending a complex phase to an edge, J_{nm} → J_{nm} e^{iθ_{nm}}, resulting in a continuous time chiral quantum walk (CQW) governed by

H = \sum^{sites}_{n,m} J_{nm} e^{i\theta_{nm}}|n\rangle\langle m| + J_{nm} e^{-i\theta_{nm}}|m\rangle\langle n|

When acting on the single exciton subspace the CQW Hamiltonian above can be expressed in terms of the spin-half Pauli matrices:

\begin{aligned} H_{CQW} = & \sum_{n,m} J_{nm}\cos(\theta_{nm})(\sigma^x_{n}\sigma^x_{m} +\sigma^y_{n}\sigma^y_{m}) \\ & +\sum_{n,m} J_{nm}\sin(\theta_{nm})(\sigma_{n}^x\sigma^y_{m} -\sigma^y_{n}\sigma^x_{m}) \end{aligned}

which arises in a variety of physical systems when magnetic fields are considered. We explore a proof-of-concept experimental demonstration of this effect in Supplementary Information, Section S2. In the CQW framework, we investigate coherent quantum dynamics and incoherent dynamics within the Markov approximation. Both types of evolution are included in the Lindblad equation [Kossakowski72, Lindblad76, Breuer02, Whitfield10]:

\begin{aligned} \frac{d}{dt}\rho(t) = & \mathcal{L}\{\rho\} = -i[H_{CQW},\rho]\\ &+\sum_k L_k \rho L_k^\dagger-\frac{1}{2}\left(L_k^\dagger L_k\rho+\rho L_k^\dagger L_k\right) \end{aligned}

where \rho(t) is the density operator describing the state of the system at time t and the L_k are Lindblad operators inducing stochastic jumps between quantum states. For example, using the usual terminology of Markovian processes, we call site t a trap if it is coupled to site s by the Lindblad jump operators L_k = |t\rangle\langle s|. The site-to-site transfer probability, P_{n→m}(t) = \langle m|\rho(t)|m\rangle, gives the occupancy probability of site m at time t with initial condition \rho(0) = |n\rangle\langle n|.
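To see the directional biasing that such phases can produce, here is a small numerical sketch (an illustration added here, not code from the paper): a chiral walk on a 3-site loop. On a loop the phase cannot in general be gauged away, so the forward and backward site-to-site probabilities can differ, in contrast with the ladder, which is a tree.

```python
import numpy as np
from scipy.linalg import expm

# Chiral quantum walk on a 3-site loop with a phase theta on one edge.
def triangle_hamiltonian(theta):
    H = np.zeros((3, 3), dtype=complex)
    H[0, 1] = H[1, 2] = 1.0
    H[2, 0] = np.exp(1j * theta)        # the "control" edge carrying the phase
    return H + H.conj().T

def transfer(H, start, end, t):
    """Site-to-site transfer probability P_{start -> end}(t) for a pure site state."""
    return np.abs(expm(-1j * H * t)[end, start]) ** 2

for theta in (0.0, np.pi / 2):
    H = triangle_hamiltonian(theta)
    print(theta, transfer(H, 0, 1, 1.0), transfer(H, 1, 0, 1.0))
# theta = 0    : the two probabilities agree (achiral, time-reversal symmetric walk)
# theta = pi/2 : the two probabilities differ (maximal directional biasing)
```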
Note that the present study, while utilizing open system dynamics, is not related to the enhancement of transport due to quantum noise [SMPE12,MRLA08] which has been well studied in the context of photosynthesis [MRLA08,lloyd2011]. Here the emphasis is instead on the effect that breaking the time-reversal symmetry of the Hamiltonian dynamics can have on transport. To quantify the transport properties of quantum walks, we use the half-arrival time, τ 1/2\tau_{1/2}, defined as the earliest time when the occupancy probability of the target site is one half. We will also make use of the transport speed, ν 1/2\nu_{1/2}, defined as the reciprocal of τ 1/2\tau_{1/2}. The probability for a quantum walker to start from a node SS and reach the node EE at time tt is: P SE(t)=Tr(e iHtρ Se iHtρ E) P_{S\to E} (t) = \text{Tr}(e^{-iHt}\rho_S e^{iHt}\rho_E) In this setting, time inversion is given by the complex conjugation operation in the natural vertex basis: T vVα v|v= vVα v *|v T \sum_{v \in V} \alpha_v | v \rangle = \sum_{v \in V} \alpha^*_v | v \rangle The time-reversal of a Hamiltonian HH is given as THT dag=THTTHT^\dag=THT. The HTHTH\to THT action is represented in parameter space by the replacement J nmJ nm *J_{nm}\to J_{nm}^*. Thus exactly the achiral quantum walks (real Hamiltonians) are left invariant by this action. For the time-reversed Hamiltonian, the transfer probability at time tt equals the original transfer probability at time t-t, which in turn equals the reversed-direction probability at time tt; this can be verified using TρT=ρT\rho T=\rho and the cyclicity of the trace as follows: P SE (t) =Tr(e i(THT)tρ Se i(THT)tρ E) =Tr(Te iHtTρ STe iHtTρ E) =Tr(e iHtTρ STe iHtTρ ET) =Tr(e iHtρ Se iHtρ E)=P SE(t) P SE(t) =Tr(e iHtρ Se iHtρ E) =Tr(e iHtρ Ee iHtρ S)=P ES(t) \begin{aligned} P^'_{S\to E}(t) &=\text{Tr} (e^{-i(THT)t}\rho_S\, e^{i(THT)t}\rho_E)\\ &=\text{Tr}(Te^{iHt} T\rho_S\, T e^{-iHt} T\rho_E)\\ &=\text{Tr} (e^{iHt}T\rho_S\, T e^{-iHt} T\rho_E T)\\ &=\text{Tr} (e^{iHt} \rho_S\, e^{-iHt} \rho_E)= P_{S\to E}(-t)\\ P_{S\to E}(-t)&= \text{Tr} (e^{iHt} \rho_S\, e^{-iHt} \rho_E)\\ &=\text{Tr} (e^{-iHt} \rho_E\, e^{iHt} \rho_S)= P_{E\to S}(t) \end{aligned}

Gauge Transformation

A crucial consequence of the above is that in the case of achiral quantum walks, the transition probabilities are the same at time tt and t-t, i.e. P SE(t)=P SE(t)P_{S \to E}(t)=P_{S \to E}(-t), and directional biasing is prohibited P SE(t)=P ES(t)P_{S\to E}(t) = P_{E\to S}(t). However, HTHT dagH\neq THT^\dag does not necessarily imply that transition rates are asymmetric in time. This is because THT dagTHT^\dag might be gauge-equivalent to HH, as will be seen in the next section. We now introduce a quantum switch which enables directed transport and could, in principle, be used to create a logic gate, offering future implementations of transport devices that store and process energy and information. Fig.~[fig:switch] presents an example of this switch. The value of a phase (e iθe^{i \theta }) appended to a single control edge across the junction allows selective biasing of transport through the switch. The maximal biasing occurs at |θ|=π/2|\theta|=\pi/2, and the sign determines the direction. The first maximum of P SE(t)P_{S\rightarrow E}(t) (transfer probability from site S to E) in the unitary dynamics without traps can be enhanced by 134\% or suppressed to 91\% with respect to the non-chiral case. When considering traps in the Lindbladian evolution, the optimal transport efficiency is 81.4\% in the preferred direction. The switch violates TRS as P SE(t)P SE(t)P_{S\to E}(-t)\neq P_{S\to E}(t).
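These statements are easy to check numerically. Below is a short sketch (our own, not from the paper) on a three-site cycle with one phased edge: with θ=0 the two directions agree, while with θ=π/2 the walk is directionally biased, and the weaker relation P_{S→E}(−t)=P_{E→S}(t) still holds for the purely unitary dynamics.

```python
# Check (own toy example) of (i) unbiasedness for a real Hamiltonian and
# (ii) directional biasing on a 3-cycle with a phased edge, where only
# P_{S->E}(-t) = P_{E->S}(t) survives.
import numpy as np
from scipy.linalg import expm

def P(H, s, e, t):
    return abs(expm(-1j * H * t)[e, s]) ** 2

def triangle(theta):
    return np.array([[0, 1, np.exp(1j * theta)],
                     [1, 0, 1],
                     [np.exp(-1j * theta), 1, 0]])

t = 1.0
for theta in (0.0, np.pi / 2):
    H = triangle(theta)
    print(f"theta = {theta:.2f}:",
          "P_S->E(t) =", round(P(H, 0, 2, t), 4),
          "P_E->S(t) =", round(P(H, 2, 0, t), 4),
          "P_S->E(-t) =", round(P(H, 0, 2, -t), 4))
# theta = 0   : the two directions agree (achiral walk).
# theta = pi/2: P_S->E and P_E->S differ, but P_S->E(-t) still equals P_E->S(t).
```

(On the path graph considered in the draft, such phases can be removed by a diagonal gauge transformation, which is presumably why the draft moves on to cycles.)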
By using P SE(t)=P ES(t)P_{S\to E}(-t)=P_{E\to S}(t) and the symmetry of the configuration P ES(t)=P SF(t)P_{E\to S}(t)=P_{S\to F}(t), we conclude that transport is biased towards the opposite pole when running backwards in time, see Fig.~[fig:switch]. Note that the behaviour of the switch is largely independent of the length of the connecting wires. We will now utilize the directional biasing of the triangle to give an example of a speed-up of chiral walks. Using the composition of eight triangular switches depicted in Fig.~[fig:saw+fmo]a and simultaneously varying all phases along the red control edges to the same value, we examine the effect of time-reversal asymmetry on transport. We find that the occupation probability as a function of θ\theta is symmetric about ±π/2\pm \pi/2 with the negative value corresponding to maximal enhancement and the positive value to maximal suppression. Unlike the occupation probability maxima in the switch, here the first peaks are separated in time. When we include trapping, the half-arrival time is reduced from the non-chiral value τ 1/2=38.1\tau_{1/2}=38.1 to 5.2, which represents a 633\% enhancement. To conclude this section we focus on suppression of transport by chiral quantum walks. A good example is the polygon with an even number of sites. In this case complete suppression can be achieved by appending a phase of π\pi to one of the links in the cycle, thereby rendering it impossible for the quantum walker to move to the diametrically opposite site. This is a discrete space version of a known effect in Aharonov-Bohm loops [Datta]. The proof that in this case the site-to-site transfer probability is zero for all times, also in our discrete-space and open-system walks, can be found in the Methods Section. However, note that the discrete even-odd effect, which implies that only loops with an odd number of sites can exhibit transport enhancement and only loops with an even number of sites may exhibit complete suppression, has no known continuous analog. In natural and synthetic excitonic networks such as photosynthetic complexes and solar cells, we are faced with non-unitary quantum evolution due to dissipative and decoherent interaction with the environment. Studies have shown that dissipative quantum evolution surpasses both classical and purely quantum transport (for interesting recent examples see [Whitfield10,SMPE12]). A widely studied process of such dissipative exciton transport is the one occurring in the Fenna-Matthews-Olson complex (FMO), which connects the photosynthetic antenna to a reaction centre in green sulphur bacteria [MRLA08,caruso09,fleming10,ringsmuth2012]. Due to the low light exposure of these bacteria, there is evolutionary pressure to optimize exciton transport. Therefore, the site energies and site-to-site couplings in the system are evolutionarily optimized, yielding highly efficient transport [lloyd2011]. However, it is an open question whether or not time-reversal asymmetric hopping terms occur in these systems, and whether these are optimized. Recent 2D Electronic Spectroscopy results lead to the conclusion that, e.g., in the light harvesting complex LH2 hopping terms with complex phases are indeed present [Engel]. Here we ask whether such TRS breaking interactions may further enhance the efficiency of the light harvesting process. We consider the traditional real-hopping Hamiltonian modeling transport on the FMO, allow for TRS breaking by introducing complex phases, and find that the transport speed can be further increased.
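Before moving on to the FMO model, here is a quick numerical check (our own, not from the paper) of the even-cycle suppression just described: on a 4-site cycle, a phase of π on a single link makes the occupancy of the diametrically opposite site vanish identically.

```python
# Even-cycle suppression on a 4-site loop: a factor e^{i pi} = -1 on one link
# kills all transport from site 0 to the opposite site 2, at every time t.
import numpy as np
from scipy.linalg import expm

n = 4
H = np.zeros((n, n), dtype=complex)
for i in range(n):
    H[i, (i + 1) % n] = H[(i + 1) % n, i] = 1.0
H[n - 1, 0] = H[0, n - 1] = -1.0           # phase pi on the single link (3,0)

for t in np.linspace(0.0, 20.0, 9):
    p_opposite = abs(expm(-1j * H * t)[2, 0]) ** 2
    print(f"t = {t:5.1f}   P_0->2 = {p_opposite:.2e}")
# All values vanish to machine precision: the clockwise and counterclockwise
# amplitudes pick up opposite signs from the flagged link and cancel exactly.
```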
We study the seven site model of the FMO using an open system description that includes the thermal bath, trapping at the reaction centre, and recombination of the exciton [MRLA08,caruso09,plenio08]. By performing a standard optimization procedure (as outlined in the Supplementary Information, Section S3) that varies the phase on a subset of seven edges, we found a combination of phases where the transport speed, ν 1/2\nu_{1/2}, is enhanced by 7.68\%. In Fig.~[fig:saw+fmo]b, the enhancement of the time dependent occupation probability is shown for the chiral quantum walk. We note that optimization over only three edges already changes the transport speed by 5.92\%, see Supplementary Information, Section S3. Complex network theory has been used in abstract studies of quantum information science; see for example [Acin, Acin2]. Here we turn to the theory of complex networks to determine if optimization procedures limited to small subsets of edges will generally lead to improved transport in larger and possibly randomly generated networks. We found a positive answer when testing the site-to-site transport between oppositely aligned nodes in the Watts-Strogatz model~[WS98]. This family of small-world networks continuously connects a class of regular cyclic graphs to that of completely random networks (Erd\H{o}s-R\'enyi models [ER60]) by changing the value of the rewiring probability. We numerically investigated graphs with 32 nodes and average degree four, ranging over the rewiring probability pp and considering 200 different graph realizations for each value of pp. An example with p=0.2p=0.2 is depicted in Fig.~[fig:WS]a. Here the occupancy of a sink connected to site EE is compared between the chiral walk and its achiral counterpart. The particle begins at site SS and we perform the optimization of the phases only on edges connected to site EE. In the case of the chiral quantum walk, the sink reaches half-occupancy in 54.8\% less time on average. A quantum walk is a widely used tool to study or simulate a variety of different quantum systems. A quantum walk gives us insight into the transport properties of the system and into the dynamics that characterize its evolution. A quantum walk is defined as the one-particle subspace of a quantum system described by some Hamiltonian.

System size scaling
Experiment: graph
Experiment: probabilities
Experiment: frequencies
FMO complex with phases
examples of time symmetry breaking
even-odd cycles
The switch

Chiral switch

The quantum switch. (a) Directional biasing: enhanced transport in the preferred direction. (b) The plot shows the occupancy probability P SEP_{S\to E} of site EE with the particle initially starting from site SS with and without sink (dashed and solid lines, respectively). This evolution is time-reversal asymmetric as replacing tt with t-t results in the particle moving from site SS towards site FF. When starting at site EE, the particle evolves towards site FF. By replacing tt with t-t, a particle initially at site EE evolves towards the initial configuration (b). Recovering time-reversal symmetric transition probabilities in the evolution (b) requires that one also perform the antiunitary operation [W31] on the Hamiltonian, mapping θ\theta to θ-\theta. This has the same effect as reflecting the configuration horizontally across the page while leaving the site labels intact.

The saw-tooth

Triangle chain and FMO complex: (a) Triangle chain and (b) the FMO complex.
(a) The phase e iθe^{i \theta} is applied to the red edges simultaneously in the triangle chain. The plot illustrates the occupancy probability at the end site EE as a function of time for different values of the phase θ\theta with and without trapping (dashed and solid lines, respectively). (b) shows the occupancy difference with respect to the time-reversal symmetric Hamiltonian of the FMO complex. We use an optimization procedure to enhance the transport. While holding the magnitude of the couplings constant, we optimize two sets of phases, A 1A_1 and A 2A_2, which correspond to seven and three edges, with an enhancement at τ 1/2\tau_{1/2} of 3.25\% and 2.25\%, respectively.

complex networks

Complex topology

Transport enhancement of the chiral quantum walk is robust across randomly generated Watts-Strogatz networks. An example of this small-world network, with rewiring probability p=0.2p=0.2, is depicted in (a). The transfer probability PP from site SS to the sink connected to site EE is plotted for a realization of the network. (b) shows the average enhancement of the half-arrival time (Δτ 1/2\Delta\tau_{1/2}) for different values of pp.

A list of papers that we might use when discussing the effect of stochastic noise on quantum transport (TODO: we have to select from these later and see which fit into our story):
Studying the crossover between stochastic and quantum transport (Verstraete):
The above work was based on this (Prosen):
An important and interesting feature in this respect is “negative differential conductivity” (Prosen):
Dephasing enhanced transport (Clark):
From ballistic to diffusive behavior (in heat transport) (Clark):
Combined effect of disorder, noise and interaction (Plenio):
When the initial state has a momentum (Eisfeld):
Noise assisted transport - mostly in photosynthetic complexes (Plenio):

Here are the pictures by Federica:
Wall bounce
Wall hall
Cat interference
Chiral cat
Achiral cat
quantum ladder
quantum ladder with dog
quantum ladder without dog
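As a postscript to the draft material: the phase-optimization step used above for the FMO complex and the Watts-Strogatz networks can be illustrated in miniature (our own toy, with an arbitrary readout time, not the paper's actual procedure) by scanning the single control phase of the three-site switch and picking the value that maximizes the transfer probability.

```python
# Toy stand-in for the phase-optimization procedure: scan the control phase of
# the 3-site switch and keep the value maximizing P_{S->E} at a fixed time.
import numpy as np
from scipy.linalg import expm

def transfer(theta, t=1.0):
    """P_{S->E}(t) on the 3-site switch with control phase theta (S = 0, E = 2)."""
    H = np.array([[0, 1, np.exp(1j * theta)],
                  [1, 0, 1],
                  [np.exp(-1j * theta), 1, 0]])
    return abs(expm(-1j * H * t)[2, 0]) ** 2

thetas = np.linspace(-np.pi, np.pi, 721)
probs = np.array([transfer(th) for th in thetas])
best = thetas[probs.argmax()]
print(f"best phase ~ {best:+.3f} rad,  P_S->E = {probs.max():.4f}")
print(f"achiral reference (theta = 0): P_S->E = {transfer(0.0):.4f}")
# A crude grid scan stands in for the optimization; in this toy case the optimum
# sits near |theta| = pi/2, in line with the maximal-biasing statement above.
```

For larger edge sets (as in the FMO or Watts-Strogatz examples) one would replace the grid scan with a standard multivariate optimizer.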
Quantum Mirror Map for Del Pezzo Geometries Tomohiro Furukawa ***,  Sanefumi Moriyama ,  Yuji Sugimoto  Nambu Yoichiro Institute of Theoretical and Experimental Physics (NITEP), Osaka City University Advanced Mathematical Institute (OCAMI), Sumiyoshi-ku, Osaka 558-8585, Japan [6pt] Interdisciplinary Center for Theoretical Study, University of Science and Technology of China, Hefei, Anhui 230026, China Mirror maps play an important role in studying supersymmetric gauge theories. In these theories the dynamics is often encoded in an algebraic curve where two sets of periods enjoy the symplectic structure. The A-periods contribute to redefinitions of chemical potentials called mirror maps. Using the quantization of the del Pezzo geometry, which enjoys the symmetry of the Weyl group, we are able to identify clearly the group-theoretical structure and the multi-covering structure for the mirror map. After the identification, we have interesting observations: The representations appearing in the quantum mirror map are the same as those appearing in the BPS indices except for the trivial case of degree 1 and the coefficients are all integers. 1 Introduction Mirror map is one of the most fascinating subjects in recent studies of supersymmetric gauge theories. Classically [1] it was known that the Yukawa couplings in the complex structure parameters are not renormalized by instanton effects while those in the Kähler class parameters are, where two sides are related by mirror maps for the mirror pair of Calabi-Yau manifolds. Later it turns out that similar structures are omnipresent in various examples, including matrix models or supersymmetric gauge theories. Typically in supersymmetric gauge theories the dynamics is encoded in a classical algebraic curve where two sets of period integrals (called A-periods and B-periods) obtained by integrating the meromorphic one-form along two sets of cycles (called A-cycles and B-cycles correspondingly) enjoy the symplectic structure. Depending on the physical system, the two sets of periods give respectively variables such as gluino condensations or chemical potentials and derivatives of the quantity characteristic of the system such as prepotential or free energy [2, 3, 4, 5, 6]. Since the derivatives of free energy characterizing the dynamics are obtained from the B-periods***Usually, the B-periods are expressed in terms of the complex structure moduli and it is only after we reexpress them by the Kähler parameters using the A-periods that we encounter the BPS indices. Hence in this sense, it is also common to regard the BPS indices as obtained not only from the B-periods but the A-periods as well., the B-periods were studied extensively and it was found that they are determined by positive integers called BPS indices (generalizing the Gopakumar-Vafa invariants [7, 8]). Depending on degree and spins of the spacetime the total BPS indices were computed in [9] for several geometries. After assigning various Kähler parameters for the geometry, the total BPS indices are decomposed as by various combinations of degrees (with ) corresponding to the Kähler parameters [10]. It was known that, for curves with group-theoretical structures such as curves of genus one known as del Pezzo geometries, the BPS indices inherits the group-theoretical structures of the curves. Namely, the BPS indices form representations of the group and are specified by multiplicities of the irreducible representations [11]. 
Furthermore, for a certain fixed background geometry determined by the Kähler parameters, the BPS indices are split as the decompositions of representations into an unbroken subgroup preserved by the background [12, 13]. It was also known that the derivative of free energy enjoys a multi-covering structure, where BPS indices of lower degrees can appear in effects of higher degrees. On the other hand, the mirror maps are obtained from the A-periods by integrating along the A-cycles. Although the mirror maps redefining the physical variables serve half of the role in determining the dynamics, compared with the progress in the derivatives of free energy charactering the dynamics directly, the progress in the mirror maps seems immature. For example, although the mirror map is given in terms of the characters in [14] for the case, the symmetry group is not large enough to convince ourselves of the group-theoretical structure. Also it is desirable to confirm the multi-covering structure proposed in [14] with a larger symmetry group. The supersymmetric gauge theories we have in mind for the application of our quantum mirror map are three-dimensional superconformal Chern-Simons gauge theories. The most standard example is the ABJM theory, that is, the superconformal Chern-Simons theory with gauge group UU (the subscripts denoting the Chern-Simons levels) and two pairs of bifundamental matters, describing the worldvolume of coincident M2-branes on the target space [15]. Let us define the partition function of M2-branes on as and move to the grand partition function by regarding the rank of the ABJM theory as a particle number and introducing the dual fugacity . On one hand, we can rewrite the grand partition function into the Fredholm determinant [16] of a quantum-mechanical spectral operator which is reminiscent of the geometry after change of variables. On the other hand, by fully analyzing the partition function, we find that, aside from the perturbative part contributing as the Airy function [17, 18, 19], it can be described by the free energy of topological strings on the same local geometry [20, 21, 22, 23, 24, 14]. Note, however, that to describe these supersymmetric gauge theories the algebraic curve in (1.3) is quantized [5, 25], which means that the canonical variables and describing the curve obey the commutation relation as in quantum mechanics. Correspondingly, the two sets of classical periods obtained by integrating the meromorphic one-form along cycles are promoted to quantum periods which are defined using the wave function for the Hamiltonian [6]. Accordingly, we are naturally led to the concept of the quantum mirror map and the quantum-corrected free energy [5]. Following the idea of the quantum mirror map, in the description of the grand partition function (1.1) in terms of the free energy of topological strings [14], it is natural that we first need to redefine the chemical potential by a quantum-corrected one (often called effective chemical potential) [24]. As we have explained above generally, it was known that the quantum mirror map is encoded in the A-period of the curve while the derivative of the free energy is encoded in the B-period. Although the group-theoretical structure and the multi-covering structure for the B-period were studied carefully previously, not so much was known for the quantum mirror map obtained from the A-period. 
In [14] these structures of the quantum mirror map were proposed and studied, though it is desirable to study them from an example with a larger symmetry group to explore general structures systematically. There are several generalizations for the ABJM theory which share the interpretation as the worldvolume theory of M2-branes and enjoy larger symmetry groups. In this paper we concentrate especially on two superconformal Chern-Simons theories connected [10, 13] by the Hanany-Witten transitions [26]. One of them is the Chern-Simons theory with gauge group UUUU and bifundamental matters connecting subsequent group factor (called model after the powers appearing in the spectral operator [27, 28]), while the other is that with gauge group UUUU and bifundamental matters (called model after [29]). Unlike the ABJM case where the symmetry group is only , this time the symmetry group is and it is possible to study the quantum mirror map more systematically. Here we study the group-theoretical structure and the multi-covering structure for the quantum mirror map carefully. After that by specifying the parameters of the curve correctly, we can reproduce the effective chemical potential for the model and the model. In the next section, we first recapitulate the setup for the quantum mirror maps especially focusing on the case of the quantum curve. Then in section 3 we head for the study of the quantum mirror map for the curve and observe clearly the group-theoretical structure and the multi-covering structure. After that in section 4 we turn to the analysis of gauge theories and find that our quantum mirror map reproduces the effective chemical potential correctly. Finally, we conclude with discussions on further directions in section 5. 2 Quantum mirror map To explain the symmetry breaking pattern for rank deformations of the model and the model [28, 10, 11], quantum curves were introduced and the quantum curve was studied carefully in [12, 13]. In this section we recapitulate the setup for the quantum curves following [12] and the expression for the quantum mirror maps following [6, 14]. We define quantum curves to be Hamiltonians given by the canonical variables , satisfying . We then parameterize the quantum curve by where the parameters are subject to the constraint By choosing the parameters suitably, we are able to express the Hamiltonians appearing in the Fredholm determinant (1.2) for the grand partition functions of the model, the model and their rank deformations [10, 13] connecting the two models. Quantum-mechanically, Hamiltonians are defined up to similarity transformations which do not change the spectrum. Combining two degrees of freedom from similarity transformations , with two degrees of freedom in parameterizing 8 asymptotic points with 10 parameters , we can fix the gauge by setting where the last equation comes from the constraint (2.2). Then the Weyl group action can be given unambiguously by where we have defined , with . See figure 1 for our numbering of the simple root for the algebra. Dynkin diagram of the Figure 1: Dynkin diagram of the algebra. In order to study the Fredholm determinant (1.2), it is important to extract information from the Hamiltonian . The standard process for it is to consider the Schrödinger equation for the wave function Although our supersymmetric gauge theories correspond to special choices of the parameters , we leave them arbitrarily to investigate the group-theoretical structure. 
Using the parametrization (2.1) with the gauge fixing condition (2.3), we can express the Schrödinger equation asSimilar Schrödinger equations and their generalizations appear in [30, 31] in the context of Painlevé equations. with defined in (2.3). Here we have used where in the last equalities we denote and consider to be a function of which is denoted by . We can further introduce the ratio of the wave functions by mimicking the action of the momentum operator in (2.7) and rewrite the equation as Then we can solve this equation order by order in the large expansion and find Classically, periods (A and B) are respectively defined by integrating the meromorphic one-form along the corresponding cycles. The definition can be generalized into quantum periods using the wave function or (2.8) introduced above. Namely, since the classical A-period is given by using the exponential canonical variables and , it is natural to define the quantum A-period as by picking up the residue at . Using the quantum A-period, the quantum mirror map is given by We shall adopt these setups to study the quantum mirror map for the quantum curve in the next section. 3 Quantum mirror map in characters In the previous section we have recapitulated the ideas of quantum curves and quantum mirror maps. After fully identifying the Weyl group action in the quantum curve, it is natural to expect that the Weyl group action is helpful for studying the quantum mirror map. At the same time, for the consistency of the definition of the A-period, the mirror map has to be symmetric under the Weyl group action. In this section, we explicitly perform the analysis and confirm these consistencies. 3.1 Group-theoretical structure In this subsection we embark on the study of the quantum mirror map for the curve. We can perform the large expansion for the ratio of the wave functions (2.10) and the A-period (2.12) order by order. Here we make several observations for the consistency of the results with the Weyl group action. Our first observation from the first few orders in the large expansion is that the quantum A-period is given by the general expression where is a function of the parameters , and . Namely, if we expand the coefficients of the A-period for each order in , a higher order term contains lower orders in the nest as in Note that although for our current case is vanishing, we introduce it from the consistency with other terms. If we redefine by the quantum mirror map (2.13) is reexpressed by The explicit forms of and , after fixing the gauge and solving the constraint (2.3), are with defined by . As a second observation, we note that contains terms while , after factoring out , contains 16 terms which are reminiscent of the characters of the representations and or . Obviously this is not quite correct, since the characters for real representations such as should be symmetric under reversing the powers. Nevertheless, it is not difficult to observe that the combinations and are invariant under the Weyl group (2.4) and should be expressed in terms of the characters after suitable modifications. To identify the results as characters correctly, we need to reconsider the role of the parameter or . In parameterizing the quantum Hamiltonian in (2.1), the parameter is simply an overall factor. On one hand, in identifying the Weyl group action it turns out that this parameter transforms non-trivially as in (2.4). 
On the other hand, since the transformations (2.4) on the remaining parameters already generate the Weyl group, the parameter is redundant in the group action. This means that it should be possible for us to construct a combination from the parameters which transforms exactly in the same manner as the parameter . It turns out that the combination transforming as the parameter can be constructed explicitly and we identify them by As in section 5 the physical meaning of this identification is not very clear to us and needs further clarifications. To match with the standard characters, we need to switch the fundamental weights identified for the quantum curve in [12] into those in the standard orthonormal basis Namely, as in [13], we express the powers by the fundamental weights of [12], , switch them into those in the orthonormal basis, , and identify the result as . This is solved reversely by With this parameterization is now a function of and . We then find that which are nothing but the characters and . Hence, in this subsection we conclude that the quantum A-period is described by the group-theoretical language of characters if we correctly identify the overall parameter by (3.7). 3.2 Multi-covering structure In the previous subsection, we have found that the results of the quantum A-period are given by the characters. Exactly the same structure works for the B-period and this further enables us to count the contribution for various representations and summarize the results by multiplicities of representations [11]. We hope to perform the same analysis for the A-period. If we proceed to higher orders in (3.5), we find, however, fractions. In this subsection, we explain how to take care of the fractions by identifying the multi-covering structure correctly and derive the multiplicities of the characters. Table 1: Components appearing in a tentative multi-covering structure (3.14) for the mirror map of degree in terms of the character . In [24] it was known that the inverse quantum mirror map is cleaner than the original quantum mirror map. For this purpose we solve the quantum mirror map (3.5) inversely where is given in terms of by
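The inverse-map step described here can be made concrete with a small symbolic computation. The sketch below is our own illustration with placeholder coefficients c1, c2 (they are not the actual A-period data of the curve): it inverts a generic relation of the form mu_eff = mu + c1 e^{-2 mu} + c2 e^{-4 mu} order by order in the exponentially small corrections, which is the same order-by-order reversion used for the quantum mirror map.

```python
# Symbolic sketch (own illustration; c1, c2 are placeholders): invert
#   mu_eff = mu + c1*exp(-2*mu) + c2*exp(-4*mu)
# order by order, writing mu = mu_eff + d1*t + d2*t^2 with t = exp(-2*mu_eff).
import sympy as sp

t, c1, c2, d1, d2 = sp.symbols('t c1 c2 d1 d2')   # t stands for exp(-2*mu_eff)

delta = d1 * t + d2 * t**2                 # ansatz for mu - mu_eff
exp_m2mu = t * sp.exp(-2 * delta)          # exp(-2*mu) = exp(-2*mu_eff) * exp(-2*delta)
exp_m4mu = t**2 * sp.exp(-4 * delta)

# the direct map minus mu_eff must vanish identically, order by order in t
residual = delta + c1 * exp_m2mu + c2 * exp_m4mu
poly = sp.expand(sp.series(residual, t, 0, 3).removeO())

solution = {}
for order, unknown in ((1, d1), (2, d2)):
    eq = poly.coeff(t, order).subs(solution)
    solution[unknown] = sp.solve(sp.Eq(eq, 0), unknown)[0]
print(solution)    # {d1: -c1, d2: -2*c1**2 - c2}
```

Higher orders work the same way, with lower-order coefficients feeding into the higher ones.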
David Mumford
Archive for Reprints, Notes, Talks, and Blog
Professor Emeritus, Brown and Harvard Universities

The Shape of Rogue Waves
June 27, 2021

Back in 2020, I got an unexpected email from Al Osborne, a physics Professor at the University of Torino and researcher at the Office of Naval Research in the US. I discovered that he is one of the preeminent world experts on rogue waves, the 50-100 foot monsters that can arise even in moderate sea conditions and sink ships. Here's an excerpt from a BBC documentary on rogue waves. He turned out to be a fan of my work, years ago, on theta functions, as they produce soliton-type solutions of the non-linear Schrödinger equation which are a possible model for such waves. I was doubly fascinated because a) this was something that my student Emma Previato had worked out for her thesis (cf. her paper: Duke Math. J., 1985) and b) I have done a fair bit of ocean sailing and am most curious about such waves. And after struggling with the literature, it dawned on me that this also fits in with my work on the infinite dimensional manifold of simple closed plane curves and the idea of shape spaces. Let me explain.

I. Nonlinear gravity waves

Like almost all physics, one begins by simplifying the problem! Water is incompressible, ok, so its velocity vector field has no divergence. But the theory gets truly messy and complicated by vorticity, the curl of that vector field. Well, don't forget that vorticity is preserved along streamlines in the absence of any external force. And when water truly settles down, as it does from time to time, even in mid-ocean (I have seen this and swam in deep ocean water as flat as a pancake), then its velocity vector field is zero! So mostly ocean water can be modeled by curl free divergence free vector fields. Sure, the wind is an external force and shelving bottoms create external forces near shores but in deep water and ignoring the topmost layers being blown around, it is irresistible to assume the curl is zero too. Aha, harmonic functions now make their appearance. Let's do the math. First a domain: assume \( z \) is the vertical dimension and we wish equations for the time varying surface of an ocean \( \Omega^{(t)} \) of infinite depth. Denote the ocean's surface by \(\Gamma^{(t)}\) and its equation by \(z=\eta(x,y,t)\) (excluding breaking waves whose tops outrun the troughs). Let \( \vec v(x,y,z,t) \) be the velocity vector of the water. The motion of the surface is given by a normal vector field on \(\Gamma^{(t)}\) which must equal the normal component of \(\vec v\): $$\frac{\partial\Gamma^{(t)}}{\partial t}(P,t) = \vec v(P,t) \cdot \vec N_{\Gamma^{(t)}}(P), \text{ or } \frac{\partial\eta}{\partial t}(P,t)=\vec v(P,t) \cdot \left(-\frac{\partial\eta}{\partial x},-\frac{\partial \eta}{\partial y},1\right)$$ Next, there is a potential \( \phi(x,y,z,t) \) on \( \bigcup_t \Omega^{(t)} \), harmonic in \((x,y,z)\), such that \(\vec v = \nabla \phi\). Euler's equation now becomes the definition of the pressure: $$ \frac{\partial \phi}{\partial t} + \frac12\| \nabla \phi \|^2 = -p - gz$$ where we take the density of water to be 1, and g to be the force of gravity on earth's surface. However, on the surface, p must equal the atmospheric pressure, which we can absorb into the normalization of z, hence set p at the surface to zero. We assume that, at the bottom of the ocean, \(\phi\text{ and }\nabla\phi \rightarrow 0, z \rightarrow -\infty, p \rightarrow +\infty\).
Finally, for every simply connected domain, one has the Poisson kernel \(\mathcal{P}_\Omega\) that computes every harmonic function on the domain from its boundary values. For flat seas, for instance, the domain is the lower half space and the kernel is \(-z/2\pi(x^2+y^2+z^2)^{3/2}\). Thus we complete the set of equations for the evolution of gravity waves using: $$\frac{\partial \phi}{\partial t} = -\mathcal{P}_{\Omega^{(t)}} \ast \left(gz+\tfrac12\|\nabla\phi\|^2\right)\big |_{\Gamma^{(t)}}$$ The majority of work on gravity waves deals with "wave trains", waves which are independent of one of the horizontal coordinates, e.g. y, leaving (x,z). In this case, \(\Omega^{(t)}\) can be taken as a plane domain and harmonic functions are the real parts of complex analytic functions of x+iz. Their real and imaginary parts are conjugate harmonic functions that determine each other by an integral transform generalizing the Hilbert transform. But very few people use these equations. Instead, they start with the ansatz: $$\eta(x,t) = \text{Re}\left(A(x,z,t).e^{i(kx-\omega t)}\right)$$ where A is a slowly varying "complex wave envelope". Then, by discarding judiciously terms thought to be small, one derives the result that A satisfies the non-linear Schrödinger equation with coefficients expressed in terms of k,ω. The beauty of this is that one has explicit solutions of the non-linear Schrödinger equation arising from theta functions on Jacobians of algebraic curves that appear to produce "rogue waves", (cf. Osborne's book Nonlinear Ocean Waves and the Inverse Scattering Transform, 2010). But wouldn't it be more fun to avoid the ansatz? II. Shape spaces Starting from completely different questions and motivations, Peter Michor and I had been studying, since the early 2000s, the infinite dimensional manifolds formed by the totality of a large variety of geometric structures. For example, if you fix an ambient manifold and look at all its submanifolds of some type, then the totality of such submanifolds is itself a manifold, albeit a pretty big one. Following algebro-geometric traditions, we called these the differentiable Chow manifolds. Riemann himself had noted the existence of such manifolds in his famous Habilitation lecture. There are many other examples but to fix ideas, the prime example, the one that has given rise to the most work, is this: take the ambient space to be simply the plane and consider in it all simple closed plane curves, making this a manifold in its own right. What continues to amaze me is the huge diversity of the geometric properties of this one space in the many natural metrics that it carries. A caution: I have on purpose not said how smooth or how jagged the curves are that define points in this space. Because of this, we don't have literally one space. It's exactly like the linear situation for function spaces: for each metric, there are distinct completions and these nest in each other in complex ways. OK, we have the same in the nonlinear realm: many instantiations, all being completions in different metrics of the core set of \(C^\infty\) curves. And there are finite-dimensional "approximations" like the space of non-intersecting n-gons. I'll give three examples of Riemannian metrics on this space. Let's denote this core space by 𝒮 and its members by Γ with interior Ω. Then, as above, for all Γ, let \(T_\Gamma\text{ and }N_\Gamma\) be their tangent and normal bundles in the plane. 
A section of the normal bundle \(a:s \mapsto a(s).\vec N_\Gamma(s)\) represents a tangent vector to 𝒮 at the point representing Γ. A Riemannian metric on 𝒮 is then defined by a quadratic norm on every such section. The simplest possible one is just the \(L^2\) metric \( \|a\|^2 = \int_\Gamma a(s)^2ds\), where s is arc-length. The resulting Riemannian manifold is a strange bird indeed: the infimum of path lengths between any two points of 𝒮 is zero! Geometrically, what's happening is that the sectional curvatures are all non-negative and, at any point, unbounded, so that conjugate points are dense on geodesics. Visually, the intermediate curves can grow spikes that shorten the above distance along any path as much as you want. To get metrics that behave more normally, the standard way is to use Sobolev-type metrics. The best way to do this is by viewing 𝒮 as a quotient of the diffeomorphism group of the plane, Diff(R2), by the subgroup of diffeomorphisms that map the unit circle to itself. The Lie algebra of this group is the vector space of smooth vector fields on the plane and one can put Sobolev norms on them component-wise: $$ \|\vec v\|^2_{Sob-n} = \int_{R^2} ((I-\Delta)^n \vec v \cdot \vec v)dxdy.$$ If one extends this norm to be one-sided invariant and takes cosets on the same side, one gets a quotient metric on 𝒮 for which the map from Diff to 𝒮 is a submersion: the tangent bundle "upstairs" splits into a vertical part tangent to the cosets and a horizontal part that is the pull back of the tangent bundle "downstairs". This is an isometry between the quotient metric on 𝒮 and the restriction of the one-sided invariant metric on Diff. All geodesics on 𝒮 for this metric lift to horizontal geodesics on Diff. A simple way to understand this definition is: $$ \| a \|^2_{Sob-n} = \inf\left\{ \|\vec v\|^2_{Sob-n}, \vec v \text{ on } R^2 \big| \vec v\cdot \vec N_\Gamma (s) = a(s) \right\}$$ In the land of pseudo-differential operators, there is such an Ln for which \( \|a\|^2_{Sob-n} = \int_\Gamma L_n(a)\,a\,ds\). Here n need not be an integer but, in all cases, Ln has degree 2n-1. So long as n > 1, these manifolds behave well, having geodesics and curvature etc., just like finite dimensional manifolds. Michael Miller's group at Johns Hopkins has used the 3D version of these metrics extensively to analyze medical scans. What makes these Sobolev metrics really great is that, because they arise from a one-sided invariant metric on a group, the lifted geodesics conserve their "momentum": it is transported by the diffeomorphisms in the lifted geodesic, leading to very simple geodesic equations. The cotangent space to 𝒮 at Γ can be thought of as the space of 1-forms ω on R2, but given only along Γ, that kill the tangent space of Γ. The inverse \(L_{Sob-n}^{-1}\) defines a norm here which has degree 1-2n, i.e. it's given by an integral kernel. Upstairs, the kernel is just convolution with a modified Bessel function, namely \(\|\vec x\|^{n-1}K_{n-1}(\|\vec x\|) \) times a constant. As Darryl Holm pointed out to me, if n > 1, this is a continuous function at 0 so the completion of the cotangent bundle contains δ functions. This means we can set the momentum to a sum of delta functions on Γ and get ODEs for the resulting geodesics which may be thought of as a kind of soliton. Note that the metric on the cotangent bundle is always weaker than that on the tangent bundle. The final example is given by the Weil-Petersson metric in a suitable model of the universal Teichmüller space.
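Before turning to the Weil-Petersson example, a quick numerical aside (our own toy, on the circle rather than on the plane): the Sobolev-type norms above are easiest to see in Fourier coefficients, where the degree-s pairing weights the n-th mode by (1+n^2)^s and the dual norm on momenta carries the inverse weight.

```python
# Circle-based toy for Sobolev-type norms: for a periodic field
# a(theta) = sum_n ahat(n) e^{i n theta}, the degree-s analogue of the
# (I - Delta)^s pairing is sum_n (1 + n^2)^s |ahat(n)|^2, and the dual norm
# on momenta carries the inverse weight (1 + n^2)^{-s}.
import numpy as np

N = 256
theta = 2 * np.pi * np.arange(N) / N
a = np.exp(np.cos(theta)) * np.sin(3 * theta)     # an arbitrary smooth test field

a_hat = np.fft.fft(a) / N                         # Fourier coefficients ahat(n)
n = np.fft.fftfreq(N, d=1.0 / N)                  # integer frequencies

for s in (0, 1, 2):
    primal = np.sum((1 + n**2) ** s * np.abs(a_hat) ** 2)
    dual = np.sum((1 + n**2) ** (-s) * np.abs(a_hat) ** 2)
    print(f"s = {s}:   ||a||^2 (tangent) = {primal:12.4f}   ||a||^2 (dual) = {dual:8.4f}")
```

Raising the degree penalizes high frequencies in tangent vectors and correspondingly smooths the dual kernel, which is the mechanism behind the delta-function (soliton-like) momenta mentioned above.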
One starts with the Riemann mapping from a) the inside of the unit disk to the inside of Γ, call this φint, and b) from the outside to the outside, called φext. The latter can be normalized by asking that infinity is mapped to infinity and that the derivative there is positive real. Then \(\psi = \phi_\text{ext}^{-1} \circ \phi_\text{int}\big|_{S^1}\) is a diffeomorphism of the circle called the welding map and is unique up to composition on the right by a conformal self-map of the unit disk, i.e. a Möbius map. It can be shown that this map \(\mathcal{S} \mapsto \text{Diff}(S^1)\) creates an isomorphism between 𝒮 mod translations and scaling and the group of smooth diffeomorphisms of S1 modulo right multiplication by the three-dimensional Möbius subgroup of Diff. Once again we have a one-sided invariant metric on Diff(S1) via the formula: $$\| a(\partial/\partial\theta)\|^2 = \int_{S^1}H (a''' - a').a.d\theta = \sum_{n>0}(n^3-n)|\hat{a}(n)|^2$$ where prime is the θ derivative and H is the Hilbert transform for periodic functions. This defines a homogeneous norm of Sobolev degree 3/2 on 𝒮. The dual metric is given by a simple explicit continuous kernel. This is the famous Weil-Petersson metric. It turns out to be a Kähler-Einstein metric with all negative sectional curvatures. The Einstein property says that its sectional curvatures must be small enough to make the Ricci trace finite, so in some sense, I think it is nearly flat. I think it's a gem of a space. Essentially all the material in this section is available on my website, especially the notes from some Pisa lectures.

III. Zakharov's Hamiltonian

Returning to the notation of the first part, the clue to linking these ideas on shape spaces to gravity waves is to consider the kinetic energy \(\int_\Omega\tfrac12\|\nabla \phi\|^2\) as a metric on 𝒮. OK, not exactly 𝒮 but now curves z = η(x) which are suitably tame at infinity (near the real axis) bounding a 2D slice of oceanic domains with infinite depth below them. Call this 𝒮Z. We assume the domain has fixed volume, meaning the mean of η is zero. A tangent vector to 𝒮Z at Γ is a normal vector field \(a(s)\vec N_\Gamma(s)\) to Γ such that \(\int_\Gamma a(s)ds = 0\). The Neumann boundary problem then defines a unique harmonic function in the interior with a as its normal derivative along Γ and that also goes to 0 at -∞. If \(K_\text{Neu}\) is the corresponding Neumann kernel for the domain Ω, the metric is: $$\|a\|^2_Z = \iint_\Omega \tfrac12 \|\nabla \phi \|^2 dx dz, \quad \phi = K_\text{Neu}*a.$$ Note that because φ is harmonic, the integral can be rewritten: $$\iint_\Omega \|\nabla \phi \|^2 = \iint_\Omega \text{div}(\phi.\nabla\phi) = \int_\Gamma \phi.\frac{\partial \phi}{\partial n} = \int_\Gamma \phi.a$$ Thus we can interpret φ/2 as being the dual 1-form of the tangent vector a. In the simplest case η ≡ 0, \(K_\text{Neu}(s,x+iz) = \tfrac{1}{\pi} \log|s-(x+iz)| \), hence \( \|a\|^2_Z = \tfrac{1}{2\pi}\iint \! a(s).a(t) \log|s-t| dsdt\). This is exactly the Sobolev \(H^{-1/2}\) norm because its Fourier transform is \( \int |\hat{a}(\xi)|^2 d\xi/|\xi|\). So we are doing the opposite to what we did strengthening the L2 norm via derivatives. Here we have a weaker norm on the tangent bundle whose dual is stronger than it is. On the other hand, to regularize the situation, we have potential energy as well as kinetic energy. This means the gravity wave equation is not a simple geodesic flow but a Hamiltonian flow where the potential is added to the norm squared. This is V.E.
Zakharov's beautiful discovery in his 1968 paper Stability of Periodic Waves on the Surface of a Deep Fluid in the Zhurnal Prikladnoi Mekhaniki. The idea is to identify the cotangent space T *𝒮 with pairs (Γ,φ), φ harmonic on Ω and going to zero at -∞, taking Γ and φ as canonical dual variables. The Hamiltonian now is \(H(\Gamma, \phi) = \iint_\Omega \left( \tfrac12 \|\nabla \phi \|^2 + g.z\right)dxdz\) where the z term, after subtracting an infinite constant, should be interpreted as \(\int_\Gamma (g\eta(x)^2/2)dx\). One then checks that, if we write δΓ = a, then $$\delta H = \iint_\Omega \langle \nabla \phi, \nabla \delta \phi \rangle + \int_\Gamma \left( \tfrac12\|\nabla \phi\|^2 + g\eta\right) \delta\Gamma.ds$$ Rewriting the first term the way we did above for the metric, we find \( \iint_\Omega \langle \nabla \phi, \nabla \delta \phi \rangle = \int_\Gamma a.\delta\phi \), and we see that the Hamiltonian equations are the same as the equations for gravity waves: \(\frac{\delta H}{\delta \phi} = a = \frac{\partial \Gamma}{\partial t}\) and \(-\frac{\delta H}{\delta \Gamma} = -(\tfrac12\scriptsize{\|\nabla \phi\|^2 + g\eta})\big|_\Gamma = \frac{\partial \phi}{\partial t}\big|_\Gamma\). Can we compute with such a system of equations? A key point is that the Hamiltonian is conformally invariant, hence one can shift everything to the unit disk using the time varying conformal map from the unit disk to Ω. This has been worked out by Dyachenko et al: see A.I.Dyachenko, E.A.Kuznetsov, M.D.Spector and V.E.Zakharov, Analytical Description of the Free Surface Dynamics of an Ideal Fluid, Physics Letters A, vol. 221, 1996 and V.E.Zakharov, A.I.Dyachenko and O.E.Vasilyev, New Method of Numerical Simulation of a Non-stationary Potential Flow of Incompressible Fluid with a Free Surface, European J. of Mechanics B - Fluids, vol.21, 2002. I would suggest however that an easy way to do numerical experiments is to replace the infinitely deep 2D ocean with the interior of a simple closed curve, close to the unit disk, as in the shape section above, while making gravity into a central force field based at the origin. Then Fourier series can be used and simulations without changing coordinates might be possible. Finding the rogue wave solutions by this route is a fascinating challenge and might even be of use to the study of genuine ocean rogue waves.
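As a final illustration (our own toy, not part of the post): rather than the full free-boundary Hamiltonian system, one can already see the rogue-wave mechanism in the envelope model of section I. The focusing non-linear Schrödinger equation i A_t + A_xx + 2|A|^2 A = 0, integrated by a standard split-step Fourier scheme, turns a weakly perturbed wave train into short-lived peaks a few times the background height.

```python
# Split-step Fourier integration of the focusing NLS  i A_t + A_xx + 2|A|^2 A = 0:
# a plane wave with a tiny long-wave ripple develops modulational instability and
# throws up short-lived peaks well above the background (the rogue-wave mechanism
# in its simplest, envelope-level form).
import numpy as np

L, N, dt, steps = 80.0, 512, 2e-3, 15000
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

A = 1.0 + 0.01 * np.cos(4 * np.pi * x / L)       # background + small periodic ripple
half_linear = np.exp(-1j * k**2 * dt / 2)        # exact half-step of A_t = i A_xx in Fourier space

peak = 0.0
for _ in range(steps):
    A = np.fft.ifft(half_linear * np.fft.fft(A))     # half linear step
    A = A * np.exp(2j * np.abs(A) ** 2 * dt)         # full nonlinear step (|A| is constant here)
    A = np.fft.ifft(half_linear * np.fft.fft(A))     # half linear step
    peak = max(peak, np.abs(A).max())
print(f"background amplitude ~ 1, largest envelope amplitude seen: {peak:.2f}")
```

Whether such envelope-level peaks survive in the full free-boundary evolution is exactly the kind of question the numerical experiments suggested above could address.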
subtilis 3610 for which MSgg agar and drop were added without further supplementation; (ii) wild-type: B. subtilis 3610 for which MSgg agar and drop were supplemented with 100 μM L-NAME; and (iii) B. subtilis 3610 Δnos for which MSgg agar and drop were added without further supplementation. Acknowledgements We thank Bernhard Fuchs (MPI Bremen) for help with flow cytometry and Pelin Yilmaz (MPI Bremen) for help during initial check details stages of swarming experiments. This study was supported by the Max Planck Society. Electronic supplementary material Additional file 1: Figure S1. Theoretical formation of NO from the NO donor Noc-18. The figure shows the calculated formation of NO over time for different starting concentrations of Noc-18. Figure S2. Theoretical formation of NO from the NO donor SNAP. The figure shows the calculated formation of NO over time for different starting concentrations of SNAP. (PDF 160 KB) References 1. Bredt DS, Snyder SH: Nitric-Oxide – a Physiological Messenger Molecule. Annu Rev Biochem 1994, 63:175–195.PubMedCrossRef 2. Alderton WK, Cooper CE, Knowles RG: Nitric oxide synthases: structure, function and inhibition. Biochem J 2001, 357:593–615.PubMedCrossRef 3. Stamler JS, Lamas S, Fang FC: Nitrosylation: The prototypic redox-based signaling mechanism. Cell 2001, 106:675–683.PubMedCrossRef 4. Sudhamsu J, Crane BR: Bacterial nitric oxide synthases: what are they good for? Trends Microbiol 2009, 17:212–218.PubMedCrossRef 5. Adak S, Aulak KS, Stuehr DJ: Direct evidence for nitric oxide production by a nitric-oxide synthase-like protein from Bacillus subtilis. J Biol Chem 2002, 277:16167–16171.PubMedCrossRef 6. Gusarov I, Nudler E: NO-mediated cytoprotection: Instant Luminespib cost adaptation to oxidative stress Carteolol HCl in bacteria. Proc Natl Acad Sci USA 2005, 102:13855–13860.PubMedCrossRef 7. Gusarov I, Shatalin K, Starodubtseva M, Nudler E: Endogenous Nitric Oxide Protects Bacteria Against a Wide Spectrum of Antibiotics. Science 2009, 325:1380–1384.PubMedCrossRef 8. Kers JA, Wach MJ, Krasnoff SB, Widom J, Cameron KD, Bukhalid RA, Gibson DM, Crane BR, Loria R: Nitration of a peptide phytotoxin by bacterial nitric oxide synthase. Nature 2004, 429:79–82.PubMedCrossRef 9. Spiro S: Regulators of bacterial responses to nitric oxide. Fems Microbiol Rev 2007, 31:193–211.PubMedCrossRef 10. Zumft WG: Nitric oxide reductases of prokaryotes with emphasis on the respiratory, heme-copper oxidase type. J Inorg Biochem 2005, 99:194–215.PubMedCrossRef 11. Aguilar C, Vlamakis H, Losick R, Kolter R: Thinking about Bacillus subtilis as a multicellular organism. Similar observations were made for the total score of these quest Similar observations were made for the total score of these questionnaires (Fig. 3). Doramapimod ic50 TH-302 price Patients with a fracture on the right side had significantly higher scores immediately after the fracture for the IOF physical function domain [right vs left, median (interquartile range, IQR): 89 (75, 96) vs 71 (61, 86), P = 0.002]. A fracture on the dominant side was associated with higher scores than a fracture on the non-dominant side with regard to physical function [89 (75, 96) vs 70 (59, 82), P < 0.001] and overall score [67 (54, 79) vs 56 (47, 67), P = 0.016]. The latter is shown in Fig. 4. Patients undergoing surgical treatment had lower scores of Qualeffo-41, indicating better quality of life, on general health (P = 0.013) and mental health selleck compound (P = 0.004) than patients with non-surgical treatment. 
Patients using analgesics had a higher scores of the IOF-wrist fracture questionnaire on pain (P = 0.009), on physical function (P = 0.001) and a higher overall score (P = 0.002) than patients not using analgesics. Table 5 Comparison of IOF-wrist domain and EQ-5D scores over time   IOF-wrist EQ-5D Pain Upper limb symptoms Physical function General health Overall score Overall score Baseline 50 (25, 50) 25 (8, 42) 75 (61, 93) 75 (50, 75) 60 (50, 73) 0.59 (0.26, 0.72) 104 104 105 92 105 104 6 weeks 25 (25, 50) 29 (8,42) 57 (36, 79) 50 (25, 75) 48 (31, 65) 0.66 (0.59, 0.78) 0.002 0.688 <0.001 0.001 <0.001 <0.001 17-DMAG (Alvespimycin) HCl 98 98 98 95 98 97 3 months 25 (25, 50) 25 (8, 42) 25 (11, 46) 25 (0, 50) 25 (13, 46) 0.76 (0.66, 0.88) <0.001 0.007 <0.001 <0.001 <0.001 <0.001 89 89 89 88 89 85 6 months 25 (0, 50) 17 (8, 33) 14 (0, 33) 25 (0, 50) 15 (4, 34) 0.78 (0.69, 1.00) <0.001 <0.001 <0.001 <0.001 <0.001 <0.001 87 87 87 87 87 86 12 months 0 (0, 25) 8 (0, 25) 4 (0, 29) 0 (0, 25) 8 (2, 27) 0.80 (0.69, 1.00) <0.001 <0.001 <0.001 <0.001 <0.001 <0.001 87 87 87 86 87 85 Data presented as: median score (IQR) p value for difference between time point score and baseline score No. of subjects Fig. 2 IOF-wrist fracture median domain scores by time point Fig. 3 IOF-wrist fracture and Qualeffo-41 (spine) median overall scores by time point Fig. 4 IOF-wrist fracture median overall score by side of fracture and by time point Utility data could be calculated from the EQ-5D results. Immediately after the fracture, the utility was 0.59, increasing to 0.76 after 3 months and to 0.80 after 1 year. Assuming that the quality of life and the utility after 1 year are similar to that before the fracture, the utility loss due to the distal radius fracture is more than 0.20 in the first weeks. Most of the utility loss was regained after 3 months. Discussion The results from this study show that the IOF-wrist fracture questionnaire has an adequate repeatability, since the kappa statistic was moderate to good for most questions and quite similar to data obtained with Qualeffo-41 [10]. The concentration of RNA was adjusted to 100 ng/μl, and the sampl The concentration of RNA was adjusted to 100 ng/μl, and the samples were stored at −70°C. cDNA templates were synthesized from 50 ng RNA with PrimeScript™ 1st strand cDNA Synthesis Kit (TaKaRa) and gene-specific primers at 42°C for 15 m, 85°C for 5 s. Real-time PCR was performed with the cDNA and SYBR Premix Ex Taq (TaKaRa) using a StepOne Real-Time PCR System (Applied Biosystems). The quantity of cDNA measured by real-time PCR was normalised to the abundance of 16S cDNA. Real-time RT-PCR was repeated three times in triplicate parallel experiments. Statistical analysis The paired t test was used for statistical comparisons between groups. The level of statistical significance was set at a P value of ≤ 0.05. Results AI-2 inhibits biofilm formation FRAX597 order in a concentration-dependent manner under static conditions Previous studies showed that biofilm formation was influenced by the LuxS/AI-2 system both in Gram-positive and Gram-negative bacteria [32, 34]. The genome of S. aureus encodes a typical luxS gene, which plays a role in the regulation of capsular polysaccharide synthesis and virulence [43]. In this study, to investigate whether LuxS/AI-2 system regulates Protein Tyrosine Kinase inhibitor biofilm formation in S. aureus, we monitored the biofilm formation of S. 
aureus WT NCT-501 concentration strain RN6390B and the isogenic derivative ΔluxS strain using a microtitre plate assay. As shown in Figure 1A, the WT strain formed almost no biofilm after 4 h incubation at 37°C. However, the ΔluxS strain formed strong biofilms as measured by quantitative spectrophotometric analysis based on OD560 after crystal violet staining (Figure 1A). This discrepancy could be complemented by introducing a plasmid that contains the luxS gene (Figure 1B). Figure 1 Biofilm formation under static conditions and chemical complementation by DPD of different concentrations. Biofilm growth of S. aureus WT (RN6390B), ΔluxS and ΔluxS complemented with different concentrations of chemically synthesized DPD in 24-well plates for 4 h under aerobic conditions (A1: 0.39 nM, A2: 3.9 nM, A3: 39 nM, A4: 390 nM). The cells that adhered to the plate after staining with crystal violet were measured by OD560 . The effects of LuxS could be attributed to its central metabolic function or the AI-2-mediated next QS regulation, which has been reported to influence biofilm formation in some strains [32–34]. To determine if AI-2, as a QS signal, regulates biofilm formation in S. aureus, the chemically synthesized pre-AI-2 molecule DPD at concentrations from 0.39 nM to 390 nM was used to complement the ΔluxS strain. The resulting data suggested that exogenous AI-2 could decrease biofilm formation of the ΔluxS strain and the effective concentration for complementation was from 3.9 nM to 39 nM DPD (Figure 1A). As expected, these concentrations were within the range that has been reported [51]. The phenomenon that the higher concentration of AI-2 does not take effect on biofilm formation is very interesting, which has also been found in other species [51].
Feynman Paths post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-04-17T06:32:28.000Z · LW · GW · Legacy · 30 comments

Previously in series: The Quantum Arena

At this point I would like to introduce another key idea in quantum mechanics.  Unfortunately, this idea was introduced so well in chapter 2 of QED: The Strange Theory of Light and Matter by Richard Feynman, that my mind goes blank when trying to imagine how to introduce it any other way.  As a compromise with just stealing his entire book, I stole one diagram—a diagram of how a mirror really works.

In elementary school, you learn that the angle of incidence equals the angle of reflection.  But actually, saith Feynman, each part of the mirror reflects at all angles.

So why is it that, way up at the human level, the mirror seems to reflect with the angle of incidence equal to the angle of reflection?

Because in quantum mechanics, amplitude that flows to identical configurations (particles of the same species in the same places) is added together, regardless of how the amplitude got there.

To find the amplitude for a photon to go from S to P, you've got to add up the amplitudes for all the different ways the photon could get there—by bouncing off the mirror at A, bouncing off the mirror at B...

The rule of the Feynman "path integral" is that each of the paths from S to P contributes an amplitude of constant magnitude but varying phase, and the phase varies with the total time along the path.  It's as if the photon is a tiny spinning clock—the hand of the clock stays the same length, but it turns around at a constant rate for each unit of time.

Feynman graphs the time for the photon to go from S to P via A, B, C, ...  Observe: the total time changes less between "the path via F" and "the path via G", than the total time changes between "the path via A" and "the path via B".  So the phase of the complex amplitude changes less, too.

And when you add up all the ways the photon can go from S to P, you find that most of the amplitude comes from the middle part of the mirror—the contributions from other parts of the mirror tend to mostly cancel each other out, as shown at the bottom of Feynman's figure.

There is no answer to the question "Which part of the mirror did the photon really come from?"  Amplitude is flowing from all of these configurations.  But if we were to ignore all the parts of the mirror except the middle, we would calculate essentially the same amount of total amplitude.

This means that a photon, which can get from S to P by striking any part of the mirror, will behave pretty much as if only a tiny part of the mirror exists—the part where the photon's angle of incidence equals the angle of reflection.

Unless you start playing clever tricks using your knowledge of quantum physics.

For example, you can scrape away parts of the mirror at regular intervals, deleting some little arrows and leaving others.  Keep A and its little arrow; scrape away B so that it has no little arrow (at least no little arrow going to P).  Then a distant part of the mirror can contribute amplitudes that add up with each other to a big final amplitude, because you've removed the amplitudes that were out of phase.

In which case you can make a mirror that reflects with the angle of incidence not equal to the angle of reflection.  It's called a diffraction grating.
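To make the arrow-summing concrete, here is a small numerical sketch (an editorial illustration, not taken from the post or from Feynman's book; the geometry, wavelength and cutoffs are made-up toy values). It adds one unit-length phasor per bounce point on a flat mirror and shows that the central strip alone reproduces essentially the whole sum, and that "scraping away" the out-of-phase arrows lets a distant strip of the mirror contribute a large amplitude, which is the diffraction-grating trick:

```python
import numpy as np

# Toy setup: a flat mirror along the x-axis, source S and detector P above it.
wavelength = 0.05
S = (-1.0, 1.0)
P = ( 1.0, 1.0)

x = np.linspace(-3.0, 3.0, 20001)                          # possible bounce points on the mirror
path_len = np.hypot(x - S[0], S[1]) + np.hypot(P[0] - x, P[1])  # length of S -> (x, 0) -> P
arrows = np.exp(2j * np.pi * path_len / wavelength)        # one "little arrow" (unit phasor) per path

total  = arrows.sum()
middle = arrows[np.abs(x) < 0.3].sum()                     # keep only the central strip
print(abs(middle) / abs(total))                            # close to 1: the middle dominates

# A crude "diffraction grating": on a strip far from the centre, scrape away the
# out-of-phase arrows and keep only those pointing into the right half-plane.
distant = (x > 1.0) & (x < 2.0)
plain   = abs(arrows[distant].sum())
grating = abs(arrows[distant & (arrows.real > 0)].sum())
print(plain, grating)                                      # the grating sum is far larger
```

The second printout is the grating effect: the unscraped distant strip nearly cancels itself, while the scraped one adds up coherently and so reflects strongly at a non-specular angle.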
But it reflects different wavelengths of light at different angles, so a diffraction grating is not quite a "mirror" in the sense you might imagine; it produces little rainbows of color, like a droplet of oil on the surface of water. How fast does the little arrow rotate?  As fast as the photon's wavelength—that's what a photon's wavelength is.  The wavelength of yellow light is ~570 nanometers:  If yellow light travels an extra 570 nanometers, its little arrow will turn all the way around and end up back where it started. So either Feynman's picture is of a very tiny mirror, or he is talking about some very big photons, when you look at how fast the little arrows seem to be rotating.  Relative to the wavelength of visible light, a human being is a lot bigger than the level at which you can see quantum effects. You'll recall that the first key to recovering the classical hallucination from the reality of quantum physics, was the possibility of approximate independence in the amplitude distribution.  (Where the distribution roughly factorizes, it can look like a subsystem of particles is evolving on its own, without being entangled with every other particle in the universe.) The second key to re-deriving the classical hallucination, is the kind of behavior that we see in this mirror.  Most of the possible paths cancel each other out, and only a small group of neighboring paths add up.  Most of the amplitude comes from a small neighborhood of histories—the sort of history where, for example, the photon's angle of incidence is equal to its angle of reflection.  And so too with many other things you are pleased to regard as "normal". My first posts on QM showed amplitude flowing in crude chunks from discrete situation to discrete situation.  In real life there are continuous amplitude flows between continuous configurations, like we saw with Feynman's mirror.  But by the time you climb all the way up from a few hundred nanometers to the size scale of human beings, most of the amplitude contributions have canceled out except for a narrow neighborhood around one path through history. Mind you, this is not the reason why a photon only seems to be in one place at a time.  That's a different story, which we won't get to today. The more massive things are—actually the more energetic they are, mass being a form of energy—the faster the little arrows rotate. Shorter wavelengths of light having more energy is a special case of this.  Compound objects, like a neutron made of three quarks, can be treated as having a collective amplitude that is the multiplicative product of the component amplitudes—at least to the extent that the amplitude distribution factorizes, so that you can treat the neutron as an individual. Thus the relation between energy and wavelength holds for more than photons and electrons; atoms, molecules, and human beings can be regarded as having a wavelength. But by the time you move up to a human being—or even a single biological cell—the mass-energy is really, really large relative to a yellow photon.  So the clock is rotating really, really fast.  The wavelength is really, really short.  Which means that the neighborhood of paths where things don't cancel out is really, really narrow. By and large, a human experiences what seems like a single path through configuration space—the classical hallucination. This is not how Schrödinger's Cat works, but it is how a regular cat works. Just remember that this business of single paths through time is not fundamentally true.  
It's merely a good approximation for modeling a sofa.  The classical hallucination breaks down completely by the time you get to the atomic level.  It can't handle quantum computers at all.  It would fail you even if you wanted a sufficiently precise prediction of a brick.  A billiard ball taking a single path through time is not how the universe really, really works—it is just what human beings have evolved to easily visualize, for the sake of throwing rocks. (PS:  I'm given to understand that the Feynman path integral may be more fundamental than the Schrödinger equation: that is, you can derive Schrödinger from Feynman.  But as far as I can tell from examining the equations, Feynman is still differentiating the amplitude distribution, and so reality doesn't yet break down into point amplitude flows between point configurations.  Some physicist please correct me if I'm wrong about this, because it is a matter on which I am quite curious.) Part of The Quantum Physics Sequence Next post: "No Individual Particles" Previous post: "The Quantum Arena" Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27). comment by JessRiedel · 2008-04-17T07:18:41.000Z · LW(p) · GW(p) The Feynman path integral (PI) and Schrödinger's equation (SE) are completely equivalent formulations of QM in the sense that they give the same time evolution of an initial state. They have exactly the same information content. It's true that you can derive SE from the PI, while the reverse derivation isn't very natural. On the other hand, the PI is mathematically completely non-rigorous (roughly, the space of paths is too large) while SE evolution can be made precise. Practically, the PI cannot be used to solve almost anything except the harmonic oscillator. This is a serious handicap in QM, since SE can be used to solve many problems exactly. But in quantum field theory, all the calculations are perturbations around harmonic oscillators, so the PI can be very useful. Many physicists would agree that the PI is more "fundamental" because it's gives insight into QFT and theoretical physics. But the distinction is largely a matter of taste. comment by Philippe2 · 2008-04-17T12:56:46.000Z · LW(p) · GW(p) The way Feynman expresses the flow of amplitude to a certain point given a prior configuration is as a weighted sum over space of sums over path weights. The sum over space is simply weighted by the amplitude distribution of the given configuration and the weight of each path is but itself a sum over time of a quantity called Lagrangian (more precisely the complex exponential of this quantity but whatever) along said path. Since this quantity is the difference between kinetic and potential energy, it normally should only depends on the position and time derivatives along the path. In that sense the path integral formalism for a finite number of particles is independent of the derivative of the amplitude distribution itself and thus of Schrödinger equation. If one now goes to a situation with an infinite number of degrees of freedom, that is a field, and tries to implement there also a path integral formalism, then the equation changes slightly. Amplitude doesn't flow from one point to the other but rather between field configurations. In that case the second sum is not over all possible paths in between two points but over all possible field configurations in between two field configurations. 
Doing so, the quantity used to weight configurations now depends on the amplitude and space derivatives of the field everywhere. And if one fancies a Schrödinger equation for a quantum field, then in the interacting and non-relativistic case this equation turns out to be nonlocal and nonlinear. comment by Psy-Kosh · 2008-04-17T13:02:27.000Z · LW(p) · GW(p) Feynman paths would basically correspond to points in spacetime configuration space, ie, histories, rather than points on plain ole position configuration space, wouldn't it? (Actually, summing over histories is basically one way of explaining WHY in GR things follow geodesics. Think of metric as being analogous to refractive index, affecting the "optical" path length, so you end up getting the same idea as principle of least action.) comment by Stirling_Westrup · 2008-04-17T15:00:48.000Z · LW(p) · GW(p) Ever since I read QED a few years ago, I've wanted to write a Quantum Ray-Tracing package that would use a discrete version of this summation over arrows to render scenes composed of a 3D grid of particles. It would have the advantage that certain classical ray-tracing problems having to do with questions of what, exactly, is a surface and its normal would go away. It would also correctly render diffraction gratings, butterfly wings and oil slicks, just given their physical arrangements. On the negative side, it would require some serious R&D into rendering algorithms to get the computation times down to acceptable levels. Alas, I've never had the leisure to spend that kind of time on the problem. comment by Roland2 · 2008-04-17T15:25:32.000Z · LW(p) · GW(p) And when you add up all the ways the photon can go from S to P, you find that most of the amplitude comes from the middle part of the mirror - the contributions from other parts of the mirror tend to mostly cancel each other out, as shown at the bottom of Feynman's figure. Eliezer, one thing that is confusing me is that you are trying to show that the billiard ball and the "particles have identities" analogy is wrong. At the same time you keep speaking from "the photon". In the quotation the impression I get is that "the photon" splits up into the different paths it travels. Why does it split up in the first place? Again the "splitting up" assumes that there is a particle(alias small billiard ball) but your writing seems to imply this. Btw, I have posted a question to your last entry "The quantum arena" which unfortunately wasn't answered and has to do with this confusion. Thanks, Roland PS: I'm no physicist and from reading the other comments I have the impression that most who are following this are physicists or at least have quite an advanced knowledge of QM. Please don't subestimate the inferential distance for those of us who don't have all that knowledge. Replies from: rosyatrandom comment by rosyatrandom · 2011-04-16T18:45:39.707Z · LW(p) · GW(p) Very late response: I think that the splitting of the photon's path is pretty much entirely a human construction - the smaller the components it is split into, the more accurate the calculation, and each partition is itself an approximation that can be refined by splitting it up further in exactly the same manner. Essentially, it's a shortcut to doing a path integral over the entire range down to the planck level. Maybe... I'm not sure! comment by Scott_Aaronson2 · 2008-04-17T17:47:17.000Z · LW(p) · GW(p) As Jess says, Schrödinger and Feynman are formally equivalent: either can be derived from the other. 
So if the question of which is more "fundamental" can be answered at all, it will have to be from other considerations. My own favorite way to think about the difference between the two pictures is in terms of computational complexity. The Schrödinger equation can be seen as telling us that quantum computers can be simulated by classical computers in exponential time: just write out the whole amplitude vector to reasonable precision, which takes exponentially many floating-point numbers, then update it step by step. The Feynman path integral can be seen as telling us that quantum computers can be simulated by classical computers in polynomial space: just add up the amplitudes of all paths leading to the quantum computer accepting, reusing the same memory from one path to another. Since polynomial space is contained in exponential time, the Feynman picture yields the better simulation -- and on that basis, one could argue that it's the more "fundamental" of the two representations. comment by komponisto2 · 2008-04-17T18:01:13.000Z · LW(p) · GW(p) Since Scott Aaronson has chimed in, it is worth pointing to this discussion on his blog in which Greg Kuperberg explains the Hilbert space issues from the previous thread. comment by LazyDave · 2008-04-17T19:13:06.000Z · LW(p) · GW(p) What Roland's PS said :) comment by Doug_S. · 2008-04-17T19:25:06.000Z · LW(p) · GW(p) On the subject of 3D rendering: Treating light as a classical wave can also produce pretty good experimental results on the scale of everyday life. Ray tracing algorithms ignore the properties light shares with classical waves, such as diffraction. I suspect that you don't need "quantum amplitude tracing" algorithm for more accurate 3D rendering, just a "classical wave tracing" algorithm. (Ordinary ray tracing is already rather computationally expensive anyway...) comment by DonGeddis · 2008-04-19T21:49:45.000Z · LW(p) · GW(p) You know, I enjoyed this post when I first read it, but now upon further thought it doesn't make any sense at all. We're talking about the fundamental nature of reality, right? Photons are a fundamental thing? Taking all paths from S to P is fundamental? Little rotating arrows corresponding to wavelength is fundamental? OK, fine. But what the heck is this "mirror" thing you then introduced? I'm supposed to assume that a mirror is a fundamental component of reality too? No, obviously a mirror is just made up of atoms, which is just a pile of subatomic particles, also. You don't explain how a photon interacts with even a single other particle, but we're supposed to know how it interacts with a mirror? Especially a "flat" mirror, when we know that real physical mirrors must be very bumpy at a subatomic level. And a lot of them are silver; what's so special about silver atoms? Why is a mirror different from a (not very) flat rock? See, here's the problem: you spend all this time telling us how our macroscopic intuitions are wrong, and we can't trust them, and that QM is the reality of how the universe works. And then, in the explanation of QM, you slip in a "mirror", and rely on our naive pre-QM common-sense understanding of mirrors to complete the example. But you've just told us that those intuitions are false! I think you need to at least give some QM explanation of what a mirror "is", before using it in this example. comment by Yoron · 2008-05-30T06:14:33.000Z · LW(p) · GW(p) Awh, what are you trying to do here :) Giving me a headache? 
If Feynman paths is correct then it invalidates most of what we experience at a every day level. For example there are no photons traveling from the sun to us, and every photon takes every possible path which in fact means that one photon fills up our universe. And how do we get away from that? By saying that they are 'probabilities' :) huh. Which to my eyes comes very near 'parallel universes' as well as 'many paths universes'. It all seems reasonable but the implications gives me a headache. And if to that add the idea of time as being just 'spacetime intervals' that is not a flow but distinct static 'frames'? Nah.. However much I like Feynman I wonder what kind of universe his ideas leaves us with. Replies from: wizzwizz4 comment by wizzwizz4 · 2020-05-30T12:52:06.164Z · LW(p) · GW(p) there are no photons traveling from the sun to us Woah, where did this assertion come from? which in fact means that one photon fills up our universe. This doesn't follow. And how do we get away from that? By saying that they were 'probabilities' :) Who's saying that? This post is talking about amplitudes. (And so on for the next paragraph.) comment by mattpc · 2008-07-20T18:09:37.000Z · LW(p) · GW(p) I'm not a physicist so my question may be really old hat, but whatever. I can think of two situations in which one ends up with a diffusion equation but in which the underlying physics is quite different. First, the flow of heat in a solid. Here there is a continuous 'heat flows down a temperature gradient' picture that is mathematically equivalent to a picture in which individual particles follow Brownian motions. Physically, the former is just a sort of averaged version of the latter - some accounting short cuts - while the latter is some way closer to reality; the particles are really diffusing. Second, the flow of water in an aquifer. Here the Darcian flow is proportional to the pressure gradient. For the sake of argument, imagine a medium that is a perfectly regular and homogenous 3D network of tiny tubes or something. In this case, there is no 'diffusion' of the fluid particles; they flow in a completely deterministic (indeed reversible) way through the network. But of course, one could presumably 'solve' the aquifer equation with a Monte Carlo similation of a diffusion process, if one really wanted to or if that was handy. So to my question: Does the Feynman path integral purport to represent what's actually going on in any sense? Or is it more in the nature of a device for solving the problem? Or is this one of those things that is not answerable? comment by mattpc · 2008-07-20T18:53:32.000Z · LW(p) · GW(p) Sorry I asked that wrong. I don't mean heat flow in the first case, there are no diffusing particles there. Say concentration of tracer in fluid suspension or something. comment by AnthonyC · 2011-04-06T15:45:42.168Z · LW(p) · GW(p) "How fast does the little arrow rotate? As fast as the photon's wavelength - that's what a photon's wavelength is. The wavelength of yellow light is ~570 nanometers: If yellow light travels an extra 570 nanometers, its little arrow will turn all the way around and end up back where it started." Which would seem to make it a ruler as well as a clock. But then, since general relativity made time an axis like space, I have sometimes wondered why we don't measure time in meters or distance in seconds. Replies from: wnoise comment by wnoise · 2011-04-06T19:01:07.640Z · LW(p) · GW(p) We do. 
Replies from: Sniffnoy comment by Sniffnoy · 2011-04-06T22:02:43.886Z · LW(p) · GW(p) To expand on that point, we also measure energy in hertz, and temperatures in Joules, and ultimately everything in pure numbers. :) Replies from: Dojan comment by Dojan · 2011-12-13T02:53:41.255Z · LW(p) · GW(p) We do. The speed of light is used to define not only the lightyear, but also the common metre. comment by MinibearRex · 2011-06-11T22:46:04.656Z · LW(p) · GW(p) I know this is an old post, but I'm hoping someone will see this. I read this a long time ago and have been thinking about QM questions (non-professionally) for a while. Recently, I started to wonder about a specific question regarding this post. Specifically, I'm thinking about the idea that we are summing paths leading to "identical configurations". While the various paths the photon takes in this problem do appear to lead to the same configuration, it seems to me that this is only true if you are just looking at the configuration of the photon and the mirror. The path A takes much more time to be completed than the path G, and it seems to me that during that time, the configuration of the rest of the universe would change as well, so the two configurations aren't the same. I think this understanding is probably wrong, but I have about twenty guesses as to mistakes I could be making, and no clue which ones are genuine. Can anyone who has studied QM more help me out? Replies from: Luke_A_Somers comment by Luke_A_Somers · 2011-10-21T13:30:08.822Z · LW(p) · GW(p) You don't need to add them ALL up at the same time, just notice that as you get further and further from the middle, each part begins canceling with nearer and nearer neighbors. To be more concrete: at some point, you start sending your pulse. The shortest path/specular reflection gets the signal there first; other paths begin contributing later. After a short time, the time offset to get to the destination is large enough that the beginning of the pulse from one angle is cancelling with the middle of the pulse from a neighboring angle. Beyond that point, unless the packet had some special structure, there's not much in the way of reflection. To be perfectly frank, the mirror isn't necessary for this problem to work - all it really needs to do is justify Huygens' principle. This also goes a way towards addressing DonGeddis's question - pretend the mirror isn't there, and reflect the upward rays down. The mirror no longer exists, and this now becomes the question of why light doesn't spontaneously turn angles for no reason at all. Is that better? Replies from: MinibearRex comment by MinibearRex · 2011-10-21T14:31:29.317Z · LW(p) · GW(p) That's a pretty good way of explaining it. I actually read QED last summer, after posting this, and (I believe in chapter 3) Feynman covers this topic briefly. EY just didn't describe it. Thanks for posting the clarification! comment by Insert_Idionym_Here · 2011-12-16T23:51:01.230Z · LW(p) · GW(p) Okay, so where did those arrows come from? I see how the graph second from the top corresponds to the amount of time a particle, were particles to exist, would take if it bounced, if it could bounce, because it's not actually a particle, off of a specific point on the mirror. But how does one pull the arrows out of that graph? Replies from: arundelo comment by arundelo · 2011-12-17T01:30:01.120Z · LW(p) · GW(p) Feynman talks about this between 59:33 and 60:32 of part one of his 1979 Douglas Robb lectures. 
Between 29:41 and 36:27 of part two, he draws the "arrows" diagram on the chalkboard. If you find this topic interesting, you'll enjoy all four parts of the lecture series. See also 63:26 to 63:35 of part one, which is relevant to your other question. Edit: To explicitly answer your question, the angle of each arrow is proportional to the height of the graph above that arrow. Note that different heights on the graph can correspond to identical angles, since (for example) 0 radians, 2pi radians, and 4pi radians are all the same angle. Replies from: Insert_Idionym_Here comment by Insert_Idionym_Here · 2011-12-17T05:44:49.697Z · LW(p) · GW(p) Thank you very much. comment by alex2718 · 2012-03-24T17:08:33.090Z · LW(p) · GW(p) When we sum over all paths some paths are longer than others. The argument says that the phase arrow will move further round because the time is longer. If the time is longer the the path won't end at the destination at the right time to coincide with the other paths. So how can this work? Replies from: Tyrrell_McAllister comment by Tyrrell_McAllister · 2012-04-06T20:45:50.622Z · LW(p) · GW(p) The amplitudes don't coincide at the end. In fact some are pointing oppositely to each other and so cancel out. The final amplitude for a photon at P is the sum of the configurations coming into P. The amplitudes don't equal each other, but they can be added together to yield the amplitude for a photon at P. comment by Oscar_Cunningham · 2015-09-17T14:14:40.070Z · LW(p) · GW(p) Feynman really does give you the amplitude for going from one point distribution to another point distribution. The formula for the path integral doesn't involve any derivatives of the amplitude distribution. But your fundamental point is still correct. Nature can't be viewed as classical just by thinking only in terms of point distributions. This is because the point distribution evolves into a non-point distribution. So even if you start out thinking in terms of point distributions you are immediately forced to consider other distributions. (You might be worried that the point distribution has infinite second derivative, and so can't be evolved using the Schrodinger equation. But if you turn down your rigour dial you can find the solution: phi = exp[i x^2 / (4t) ]/sqrt[4 pi i t] (This is the solution for a free particle in one dimension where I've picked the mass hbar/2 for convenience.) One can sort of see how this becomes a point distribution as t tends to zero. The amplitude becomes very oscillatory everywhere except zero, and at zero all those oscillations cancel out. Meanwhile the magnitude increases like 1/sqrt(t) as t tends to zero, so at zero it has the correct value of sqrt(infintiy).) comment by jason martin (jason-martin) · 2019-10-19T22:41:25.787Z · LW(p) · GW(p) what does the path integral actually look like? Replies from: Mitchell_Porter comment by Mitchell_Porter · 2019-10-18T02:02:42.994Z · LW(p) · GW(p) Do you understand ordinary integration?
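As a quick sanity check of the free-particle solution quoted in Oscar_Cunningham's comment (an editorial aside, not part of the original thread): with hbar = 1 and mass hbar/2 the free Schrödinger equation reads i ∂φ/∂t = −∂²φ/∂x², and the quoted φ can be verified symbolically, for example with sympy:

```python
import sympy as sp

x = sp.symbols('x', real=True)
t = sp.symbols('t', positive=True)

# The solution quoted in the comment: phi = exp(i x^2 / (4t)) / sqrt(4 pi i t)
phi = sp.exp(sp.I * x**2 / (4 * t)) / sp.sqrt(4 * sp.pi * sp.I * t)

lhs = sp.I * sp.diff(phi, t)          # i dphi/dt
rhs = -sp.diff(phi, x, 2)             # -d^2 phi / dx^2   (hbar = 1, m = hbar/2)
print(sp.simplify((lhs - rhs) / phi)) # prints 0: phi solves the free Schrödinger equation
```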
American Journal of Applied Mathematics
Volume 3, Issue 3, June 2015, Pages: 106-111

Estimation of Boron Ground State Energy by Monte Carlo Simulation

K. M. Ariful Kabir1, Amal Halder2
1Department of Mathematics, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh
2Department of Mathematics, University of Dhaka, Dhaka, Bangladesh
Email address: (K. M. A. Kabir)

To cite this article: K. M. Ariful Kabir, Amal Halder. Estimation of Boron Ground State Energy by Monte Carlo Simulation. American Journal of Applied Mathematics. Vol. 3, No. 3, 2015, pp. 106-111. doi: 10.11648/j.ajam.20150303.15

Abstract: The Quantum Monte Carlo (QMC) method is a powerful computational tool for finding accurate approximate solutions of the quantum many-body stationary Schrödinger equation for atoms, molecules, solids and a variety of model systems. Using the Variational Monte Carlo method we have calculated the ground state energy of the Boron atom. Our calculations are based on a modified five-parameter trial wave function, which leads to good results compared with the fewer-parameter trial wave functions presented before. Using random numbers we generate a large sample of electron locations to estimate the ground state energy of Boron. Based on comparisons, the energy obtained in our simulation is in excellent agreement with experimental and other well-established values.

Keywords: Monte Carlo Simulation, Boron, Ground State Energy, Schrödinger Equation

1. Introduction

The Variational Monte Carlo (VMC) method has become a powerful tool in quantum chemistry calculations [1-3]. In most of its current applications the VMC method is valuable because it can handle a wide variety of wave function forms for which analytical integration would be impossible. The major advantage of this method is the possibility to freely choose the analytical form of the trial wave function, which may contain highly sophisticated terms in such a way that electron correlation is explicitly taken into account [4-5]. This is an important feature of quantum Monte Carlo methods, which are therefore extremely useful for studying physical cases where the electron correlation plays a crucial role. For two-electron systems, which are considered the simplest few-body systems, the VMC method provides accurate estimates of the ground and excited state energies and properties of atomic and molecular systems [6]. Boron is a chemical element with symbol B and atomic number 5. The word boron was coined from borax, the mineral from which it was isolated, by analogy with carbon, which it resembles chemically. Boron is concentrated on Earth by the water-solubility of its more common naturally occurring compounds, the borate minerals. These are mined industrially as evaporites, such as borax and kernite. The largest proven boron deposits are in Turkey, which is also the largest producer of boron minerals. The Hylleraas method is a variational method which introduces the correlation effects by including explicitly the inter-electronic distances in the wave function. In recent years efforts have been made to develop an approach for constructing trial wave functions in order to calculate the ground state energy of the Boron atom and achieve a high level of accuracy. In 2006, the VMC method was used to study the ground state energy of the atoms Li through Kr using explicitly correlated wave functions consisting of the product of a Jastrow correlation factor and a model function with 17 variational parameters [7].
The obtained results were an improvement over all the previous results. Recently, calculations on the ground state of the boron atom have been made using single-term and 150-term wave functions constructed with Slater orbitals by M. B. Ruiz [8]. Despite the fact that there is no shortage of extremely accurate wave functions for the Boron atom and some five-electron atomic ions, most of these studies search for wave functions that are accurate but nevertheless compact. Along these lines, [9-13] proposed a simple compact trial function for the ground state of the Boron atom which provided a very accurate energy, so that it could be considered the most accurate among existing few-parameter trial functions. In the present paper we shall use the VMC method to study a five-electron system (the Boron atom), which may be studied using different methods and different forms of trial wave functions. Accordingly, we shall use a compact five-parameter wave function to obtain the energy of the Boron ground state. The calculation will be done in the framework of the VMC method.

2. Many-Body Stationary Schrödinger Equation

Let us consider a system of quantum particles, such as electrons and ions, interacting via Coulomb potentials. Since the masses of nuclei and electrons differ by three orders of magnitude or more, the nuclei may be treated as fixed, and the Hamiltonian is given by

$$H = -\frac{1}{2}\sum_i \nabla_i^2 + \sum_{i<j}\frac{1}{r_{ij}} - \sum_{i,I}\frac{Z_I}{r_{iI}}, \qquad (1)$$

where i and j label the electrons and I runs over the ions with charges $Z_I$. Throughout the review, we employ atomic units, $\hbar = m_e = e = 4\pi\varepsilon_0 = 1$, where $m_e$ is the electron mass, $e$ is the electron charge and $\varepsilon_0$ is the permittivity of a vacuum. The Schrödinger equation is

$$-\frac{1}{2}\sum_i \nabla_i^2\,\psi(\mathbf{R}) + V(\mathbf{R})\,\psi(\mathbf{R}) = E\,\psi(\mathbf{R}). \qquad (2)$$

Here, $\nabla_i^2$ is the Laplacian operator, $\psi(\mathbf{R})$ is a function of the particle positions, and $E$ is the molecule's ground state energy. The electrostatic potential $V(\mathbf{R})$ in the molecule is given by

$$V(\mathbf{R}) = -\sum_{i,I}\frac{Z_I}{|\mathbf{r}_i - \mathbf{R}_I|} + \sum_{i<j}\frac{1}{|\mathbf{r}_i - \mathbf{r}_j|}. \qquad (3)$$

Here, the first summation is over all electrons and nuclei, where $Z_I$ and $\mathbf{R}_I$ are the nuclear charges and locations, respectively. The second summation is over all pairs of electrons.

3. Variational Monte Carlo for the Boron Atom

Variational Monte Carlo (VMC) is based on a direct application of Monte Carlo integration to explicitly correlated many-body wave functions. The variational principle of quantum mechanics states that the energy of a trial wave function will be greater than or equal to the energy of the exact wave function. Optimized forms for many-body wave functions enable the accurate determination of expectation values. Variational Monte Carlo is a method of computing the total energy

$$E_V = \frac{\int \Psi_T(\mathbf{R})\, H\, \Psi_T(\mathbf{R})\, d\mathbf{R}}{\int \Psi_T^2(\mathbf{R})\, d\mathbf{R}} \approx \frac{1}{N}\sum_{k=1}^{N} \frac{H\Psi_T(\mathbf{R}_k)}{\Psi_T(\mathbf{R}_k)} \qquad (4)$$

and its variance (statistical error)

$$\sigma^2 \approx \frac{1}{N}\sum_{k=1}^{N}\left(\frac{H\Psi_T(\mathbf{R}_k)}{\Psi_T(\mathbf{R}_k)} - E_c\right)^2 . \qquad (5)$$

Here, H is the Hamiltonian and $\Psi_T(\mathbf{R}_k)$ is the value of the trial wave function at the Monte Carlo integration point $\mathbf{R}_k$. The constant $E_c$ is fixed at a value close to the desired state in order to start the optimization in the proper region. The exact wave function is known to give both the lowest value of $E_V$ and a zero variance. If the adjustable parameters in the trial wave function are optimized so as to minimize the energy, instability often occurs. This happens when a set of parameters causes $E_V$ to be estimated a few sigmas too low. Although such parameters will produce a large variance, they are favored by the minimization. This problem can be avoided only by using a very large number of configurations during the optimization of the wave function, so as to distinguish between those wave functions for which $E_V$ is truly low and those which are merely estimated to be low. In contrast, variance minimization favors those wave functions which have a constant local energy.
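As a concrete illustration of why the exact wave function has zero variance while an approximate one does not, here is a toy one-electron example (an editorial sketch, unrelated to the five-parameter Boron wave function of this paper): for a hydrogen atom with trial function ψ = e^(−αr), the local energy is E_L(r) = −α²/2 + (α − 1)/r, which is identically −1/2 when α = 1 and fluctuates otherwise.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_radii(alpha, n=200000):
    # |psi|^2 r^2 dr  is a Gamma(shape=3, scale=1/(2*alpha)) distribution in r
    return rng.gamma(shape=3.0, scale=1.0 / (2.0 * alpha), size=n)

def local_energy(alpha, r):
    # Hydrogen atom in atomic units: H = -1/2 grad^2 - 1/r, trial psi = exp(-alpha*r)
    return -0.5 * alpha**2 + (alpha - 1.0) / r

for alpha in (0.8, 1.0, 1.2):
    r = sample_radii(alpha)
    e = local_energy(alpha, r)
    print(alpha, e.mean(), e.var())   # mean >= -0.5 always; variance -> 0 at alpha = 1
```

The printed means stay at or above −0.5 (the variational bound), and the variance collapses to zero only at α = 1; this is exactly the constant-local-energy property that variance minimization rewards.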
Parameter values which do not produce this constant-local-energy property will be eliminated by the optimization process. As a result, only a small fixed set of configurations is needed to accurately determine the variance. Previous studies have shown that the rate of convergence of a variational calculation can be tremendously accelerated by using basis functions which satisfy the two-electron cusp condition and which have the correct asymptotic behavior [12]. Unfortunately, the integrals of such functions can rarely be evaluated analytically. Because our method uses Monte Carlo integration, we can easily build into the trial wave function many features which will accelerate convergence. Although, in principle, this flexibility leads to an enormous number of possible forms, in practice, the ideal trial wave-function form must have a low variance, must add adjustable parameters in a straightforward manner, and must be easy to optimize.

4. The Trial Wave Function

The calculations for the Boron atom were obtained previously using trial wave functions of the form given in [14], built from hydrogen-like wave functions in the nl-states with an effective charge; the angular variables of each electron enter through these orbitals. This wave function was used to calculate the energy of the Beryllium ground state with quite accurate results. In the present paper we introduce some modifications to this trial wave function in order to get more accurate results. Firstly, we will consider the correlation between each pair of electrons. In order to include the electron-electron correlation we modify the form of this trial wave function by multiplying it by a factor $f(r_{ij})$ which expresses the correlation between the two electrons i and j due to their Coulomb repulsion. That is, we expect f to be small when $r_{ij}$ is small and to approach a large constant value as the electrons become well separated. Then, for the ground state of Boron, Eq. (6) reduces to a form in which the indices i, j run over the number of the electrons. The variational parameters will be determined for each value of Z by minimizing the energy. The function of equation (6) is constant for the ground state of Beryllium.

5. Estimate of the Smallest Eigenvalue

To estimate the smallest eigenvalue, let us rewrite equation (2) more compactly as

$$H\,\psi = E\,\psi, \qquad (9)$$

where H represents the sum of both operators on the left-hand side of equation (2). The variational principle tells us [17] that

$$\frac{\int \psi\, H\, \psi\, d\mathbf{R}}{\int \psi^2\, d\mathbf{R}} \;\ge\; E. \qquad (10)$$

The integration is over the three coordinates of each of the five electrons, altogether a 15-dimensional problem, and $\psi$ is any trial solution to equation (2). The limit of equality holds only for the exact solution, but for approximate solutions, called variational estimates of the ground-state energy, the left-hand side of equation (10) is usually quite close to E. The main problem is how to evaluate the two 15-dimensional integrals; this is impossible to do analytically and not feasible even numerically. But Monte Carlo is a process that can easily simulate 15-dimensional integrals. Let us first define the local energy by

$$E_L(\mathbf{R}) = \frac{H\,\psi(\mathbf{R})}{\psi(\mathbf{R})}. \qquad (11)$$

The left-hand side of (10) can now be written as

$$\frac{\int \psi^2(\mathbf{R})\, E_L(\mathbf{R})\, d\mathbf{R}}{\int \psi^2(\mathbf{R})\, d\mathbf{R}}, \qquad (12)$$

i.e. a weighted average of the local energy with weight proportional to $\psi^2$. Next, we randomly generate a large sample of 10000 configurations of 15 variables, denoted collectively as $\mathbf{R}$, and compute the corresponding $\psi$, D and $E_L$. By averaging the 10000 values of $E_L$, we get an estimate of E. Unfortunately, this estimate will be very inaccurate since our random sample of configurations bears, at this point, no relationship to the weight $\psi^2$ of equation (12).
To fix this, we move each configuration to a new location obtained by adding to the old one a drift term, given by the drift function D evaluated at the old location, plus a random displacement N whose 15 independent components are drawn from the normal distribution (with mean 0 and standard deviation 1), both scaled by an extra parameter called the step size, which controls the speed of this motion. This will bring us a step closer to the desired distribution of configurations, whose probability density function is proportional to $\psi^2$, but it will take dozens of such moves to reach it. Monitoring the consecutive sample averages of $E_L$, we find no systematic change but only random fluctuations after reaching a so-called equilibration. Once in equilibrium, we continue advancing our configuration for as many steps (called iterations) as feasible, to reduce the statistical error of the final estimate. This is computed by combining all the individual sample averages into one. There is only one little snag: the result will still have an error proportional to the step size. To correct for this, we would have to make the step size impractically small, and equilibration would take forever. Fortunately, there is another way, called Metropolis sampling [19]: for each proposed move we compute a scalar quantity T built from the values of $\psi$ and D at the new and old locations (denoted by the subscripts n and o, respectively). The move is then accepted with a probability equal to T. When T > 1, the move is accepted automatically. When a move is rejected, the configuration simply remains at its old location. The step size should be adjusted to yield a reasonable proportion of rejections, say between 10% and 30%. Rejecting configurations in this manner creates one last small problem: in our original random sample there is usually a handful of configurations which, because they have landed at "wrong" locations, just would not move. To fix this, we have to monitor, for each configuration, the number of consecutive times a move has been rejected, and let it move, regardless of T, when this number exceeds a certain value. After this is done and the sample equilibrates, the problem automatically disappears, and no configuration is ever refused its move more than six consecutive times. To get an accurate estimate of E, we repeat the simulation with substantially more iterations, and then compute the grand mean of the values. In our case, this yields our variational estimate in atomic units, with an average acceptance rate of about 90%. The easiest way to find the corresponding statistical error is to execute the same program, independently, 5 to 10 times, and then to combine the individual results. This improves the estimate, with a standard error of ±0.05326 atomic units. The obvious discrepancy, well beyond the statistical error, between our estimate and the exact value is due to our use of a rather primitive trial function. In accordance with the variational principle, our estimate remains higher than the exact value.

Figure 1. 1000 iterations of VMC to reach equilibrium.

6. Monte Carlo Estimation

Replacing one factor of $\psi$ by the exact ground-state wave function $\psi_0$ in equation (12), the expression then yields "nearly" the exact value of E, subject only to a small nodal error [16]. So, all we need to do is to modify our simulation program accordingly, to get a sample from a distribution whose probability density function is proportional to $\psi_0\psi$ instead of $\psi^2$.
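For orientation, here is a minimal sketch of the variational sampling loop described in Section 5, applied to the same hydrogen-like toy trial function as above rather than to the Boron wave function of this paper; it also uses a plain symmetric Metropolis step instead of the drifted move, which is simpler but still samples |ψ|². All names and parameter values here are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

alpha, step, n_cfg, n_iter = 1.2, 0.5, 500, 400   # toy settings

def log_psi2(R):                       # R has shape (n_cfg, 3); psi = exp(-alpha*r)
    return -2.0 * alpha * np.linalg.norm(R, axis=1)

def local_energy(R):
    r = np.linalg.norm(R, axis=1)
    return -0.5 * alpha**2 + (alpha - 1.0) / r

R = rng.normal(size=(n_cfg, 3))        # initial random configurations
energies = []
for it in range(n_iter):
    R_new = R + step * rng.normal(size=R.shape)           # symmetric Gaussian proposal
    log_ratio = log_psi2(R_new) - log_psi2(R)
    accept = rng.random(n_cfg) < np.exp(np.minimum(0.0, log_ratio))
    R[accept] = R_new[accept]          # rejected configurations stay where they are
    if it > 100:                       # discard the equilibration phase
        energies.append(local_energy(R).mean())

print(np.mean(energies))               # close to alpha**2/2 - alpha = -0.48 for alpha = 1.2
```

With α = 1.2 the printed average settles near the analytic variational energy α²/2 − α = −0.48; the reweighting described next in Section 6 modifies such a loop so that the sample approximates the mixed distribution ψ₀ψ instead of ψ².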
6. Monte Carlo Estimation
Replacing  by  in equation (12), the expression then yields "nearly" the exact value of , subject only to a small nodal error [16]. So, all we need to do is to modify our simulation program accordingly, to obtain a sample from a distribution whose probability density function is proportional to  instead of . This can be achieved by assuming that each configuration carries a different weight, computed from where  is the local energy of the configuration as computed  iterations ago, the summation is over all past iterations, and  is a rough estimate of  (the variational result will do). The sum in (10) "depreciates" the past  values at a rate that should resemble the decay of the serial correlation of the  sequence, which can easily be monitored during the variational simulation. The new estimates of  are then the correspondingly weighted averages, computed at each step and then combined in the usual grand-mean fashion. There are two slight problems with this algorithm, but both are easily alleviated. Firstly, when an electron moves too close to a nucleus,  may acquire an unusually low value, making the corresponding weight  rather large, sometimes larger than all the remaining weights combined. We must eliminate such "outliers" outside the  range; it is best to do this in a symmetric way, by truncating the value to the nearest boundary of the interval. Secondly, the final (grand-mean) estimate may have a small bias proportional to . This can be removed only by repeating the simulation, preferably more than once, at several (say 3 to 5) distinct values of , and obtaining an unbiased estimate of  by a simple polynomial regression. It is the intercept of the resulting regression line (corresponding to ) that yields the final answer.
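The two corrections just described, symmetric truncation of outlier weights and extrapolation of the step-size bias to zero step size, are simple to implement. The sketch below shows one possible form; the truncation interval, the regression degree, and all numbers are illustrative placeholders, not the values used in this work.

```python
import numpy as np

def truncate_weights(weights, w_lo=0.1, w_hi=10.0):
    # Symmetric truncation: clip unusually small and unusually large weights
    # to the nearest boundary of the allowed interval.
    return np.clip(weights, w_lo, w_hi)

def extrapolate_to_zero_step(step_sizes, energies, degree=1):
    # Fit E(step size) with a low-order polynomial; the constant term is the
    # intercept at zero step size, i.e. the bias-corrected estimate.
    coeffs = np.polyfit(step_sizes, energies, degree)
    return coeffs[-1]

# Illustrative use with made-up grand-mean energies at three step sizes.
dts = np.array([0.01, 0.02, 0.03])
energies = np.array([-24.546, -24.538, -24.530])
print("extrapolated energy:", extrapolate_to_zero_step(dts, energies))
```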
7. Result and Discussion
In this paper we have calculated the ground state energy of the Boron atom using a modified wave function. Moreover, we have succeeded in using this trial wave function to obtain the energy of the Boron ground state with the VMC method. The iteration averages of  show that equilibration now takes many more steps (about 1000, when ) than in the case of the variational simulation. We have thus decided to discard the first 1000 results and partition the remaining 6000 into six blocks of 1000. We can produce six such values with  and . It is now easy to find the resulting intercept.
Table 1. Monte Carlo result (estimate, standard error, p-value).
This yields the value of  for the corresponding intercept. This is in reasonable agreement, in view of the nodal error, with the Hylleraas reference value of -24.541246 atomic units. The regression fit is visualized in the accompanying figure.
Table 2. Energy values (Hylleraas, experiment, Monte Carlo method).
Table 1 shows the final results of the energy using a very large number of MC points, together with the standard deviation given by equations (4) and (5). It is clear from Table 2 that the obtained result is in good agreement with the recent Hylleraas value. Also, the associated standard deviations are very small; this is due to the large number of MC points. The electron-electron correlation factor, which has been included in the trial wave function, played a crucial role in improving the results. In fact, the ground state energy of the Boron atom obtained previously using the same trial wave function, but without this factor, was . It is clear from Table 1 that our result for the Boron atom (Z = 5) is , which is very accurate.
Figure 2. The energy level as a function of the set of Monte Carlo points.
Table 3. Comparison of the energies for the boron atom calculated with different methods and the nonrelativistic energy (all values in a.u.) [8].
Reference | Year | Method | Energy
Clementi | 1989 | HF | -24.495670
Clementi, Roetti | 1974 | HF | -24.498369
Ruiz | 2004 | Ref. Hy | -24.498369
Clementi, Raimondi | 1963 | HF | -24.498370
Huzinaga | 1977 | HF | -24.528709
Froese-Fischer et al. | 1993 | Numerical HF | -24.529036
Mayer | 2004 | Full-CI | -24.530874
Ruiz | 2004 | Hylleraas | -24.541246
Froese-Fischer et al. | 1994 | Multiref. CI | -24.560354
Estimated nonrelativistic | 1993 | Full-CI | -24.65391
Ruiz [8] | 2004 | Hydrogen-like orbitals | -24.501187
Present work | | Monte Carlo method | -24.546707
Finally, in Table 3, we compare our energy results with those of other methods. The ground state energy obtained with the variational Monte Carlo method was -24.546707 a.u. This energy lies 0.056% above the Froese-Fischer et al. result and is comparable in accuracy to the Ruiz result (Table 3). Only the energy obtained by Ruiz [8] lies closer to our result, and the deviation from it is only 0.02%. By comparing with the values of Clementi, Raimondi, Froese-Fischer et al., Mayer, and Huzinaga (Table 3), we see that our result is an improvement over these. A follow-up article shows how the Monte Carlo procedure can be extended to estimate atomic properties, including geometry, polarizability, etc., and how to optimize the parameters of a trial function, making the Monte Carlo method more "self-sufficient". This means that the VMC method provides a very good description of the Beryllium ground state using a trial wave function expressed in hydrogen-like orbitals with a rather simple factor describing the correlations between the electrons.
Acknowledgments
I would like to express my very great appreciation to my teacher and colleague for their willingness to support me. I would also like to acknowledge the financial, academic, and technical support of the Bangladesh University of Engineering and Technology and its staff.
References
1. B. L. Hammond, W. A. Lester Jr. and P. J. Reynolds, Monte Carlo Methods in Ab Initio Quantum Chemistry, pp. 1-10 (1994).
2. E. Buendía, F. J. Gálvez, A. Sarsa, Explicitly correlated energies for neutral atoms and cations, Chem. Phys. Lett. 465, pp. 190-192 (2008).
3. S. A. Alexander, R. L. Coldwell, Atomic wave function forms, Int. J. Quantum Chem. 63, pp. 1001-1022 (1997).
4. K. E. Riley, J. B. Anderson, A new variational Monte Carlo trial wave function with directional Jastrow functions, Chem. Phys. Lett. 366, pp. 153-156 (2002).
5. S. A. Alexander, R. L. Coldwell, Ro-vibrationally averaged properties of H2 using Monte Carlo methods, Int. J. Quantum Chem. 107, pp. 345-352 (2007).
6. S. Datta, S. A. Alexander, R. L. Coldwell, The lowest order relativistic corrections for the hydrogen molecule, Int. J. Quantum Chem., pp. 731-739 (2012).
7. E. Buendía, F. J. Gálvez, A. Sarsa, Correlated wave functions for the ground state of the atoms Li through Kr, Chem. Phys. Lett. 428, pp. 241-244 (2006).
8. M. B. Ruiz and M. Rojas, Variational calculations on the 2P1/2 ground state of the boron atom using hydrogenlike orbitals, Computational Methods in Science and Technology 9(1-2), pp. 101-112 (2003).
9. A. M. Frolov, Atomic excitations during the nuclear beta-decay in light atoms, Eur. Phys. J. D 61, pp. 571-577 (2011).
10. N. L. Guevara, F. E. Harris, A. Turbiner, An accurate few-parameter ground state wave function for the lithium atom, Int. J. Quantum Chem. 109, pp. 3036-3040 (2009).
11. S. B. Doma and F. El-Gamal, Monte Carlo variational method and the ground state of helium, pp. 78-83.
12. C. Schwarz, Many-Body Methods in Quantum Chemistry: Proceedings of the Symposium (1989).
13. W. Kutzelnigg and J. D. Morgan III, Explicitly Correlated Wave Functions in Chemistry and Physics, pp. 96-104 (1992).
14. D. Ruenn Su, Theory of a Wave Function within a Band and a New Process for Calculation of the Band Energy, pp. 17-28 (1976).
15. P. J. Reynolds, D. M. Ceperley, B. J. Alder, and W. A. Lester, Jr., The Journal of Chemical Physics 77(12), pp. 5593-5603 (1982). doi:10.1063/1.443766.
16. J. B. Anderson, The Journal of Chemical Physics 73(8), pp. 3897-3899 (1980). doi:10.1063/1.440575.
17. D. M. Ceperley and B. J. Alder, "Quantum Monte Carlo," Science 231(4738), pp. 555-560 (1986). doi:10.1126/science.231.4738.555.
Surface waves in water showing water ripples.
Example of biological waves expanding over the brain cortex: spreading depolarizations.[1]
In physics, mathematics, and related fields, a wave is a propagating dynamic disturbance (change from equilibrium) of one or more quantities, sometimes described by a wave equation. In physical waves, at least two field quantities in the wave medium are involved. Waves can be periodic, in which case those quantities oscillate repeatedly about an equilibrium (resting) value at some frequency. When the entire waveform moves in one direction it is said to be a traveling wave; by contrast, a pair of superimposed periodic waves traveling in opposite directions makes a standing wave. In a standing wave, the amplitude of vibration has nulls at some positions where the wave amplitude appears smaller or even zero. The types of waves most commonly studied in classical physics are mechanical and electromagnetic. In a mechanical wave, stress and strain fields oscillate about a mechanical equilibrium. A mechanical wave is a local deformation (strain) in some physical medium that propagates from particle to particle by creating local stresses that cause strain in neighboring particles too. For example, sound waves are variations of the local pressure and particle motion that propagate through the medium. Other examples of mechanical waves are seismic waves, gravity waves, surface waves, string vibrations (standing waves), and vortices. In an electromagnetic wave (such as light), coupling between the electric and magnetic fields sustains the propagation of a wave involving these fields according to Maxwell's equations. Electromagnetic waves can travel through a vacuum and through some dielectric media (at wavelengths where they are considered transparent). Electromagnetic waves, according to their frequencies (or wavelengths), have more specific designations including radio waves, infrared radiation, terahertz waves, visible light, ultraviolet radiation, X-rays and gamma rays. Other types of waves include gravitational waves, which are disturbances in spacetime that propagate according to general relativity; heat diffusion waves; plasma waves that combine mechanical deformations and electromagnetic fields; reaction-diffusion waves, such as in the Belousov-Zhabotinsky reaction; and many more. Mechanical and electromagnetic waves transfer energy,[2] momentum, and information, but they do not transfer particles in the medium. In mathematics and electronics waves are studied as signals.[3] On the other hand, some waves have envelopes which do not move at all, such as standing waves (which are fundamental to music) and hydraulic jumps. Some, like the probability waves of quantum mechanics, may be completely static. A physical wave is almost always confined to some finite region of space, called its domain. For example, the seismic waves generated by earthquakes are significant only in the interior and surface of the planet, so they can be ignored outside it. However, waves with an infinite domain, which extend over the whole space, are commonly studied in mathematics and are very valuable tools for understanding physical waves in finite domains.
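As a small numerical illustration of the statement above that a pair of identical periodic waves traveling in opposite directions superposes to a standing wave, the following sketch checks the identity sin(kx - wt) + sin(kx + wt) = 2 sin(kx) cos(wt) on a grid of points; the wavelength, period, and sample times are arbitrary choices made only for the example.

```python
import numpy as np

wavelength, period = 1.0, 0.5                     # arbitrary example values
k, w = 2 * np.pi / wavelength, 2 * np.pi / period
x = np.linspace(0.0, 2.0, 201)

for t in (0.0, 0.1, 0.2, 0.3):
    travelling_right = np.sin(k * x - w * t)
    travelling_left = np.sin(k * x + w * t)
    standing = 2 * np.sin(k * x) * np.cos(w * t)
    # The superposition equals the standing-wave form at every x and t,
    # and the nodes (points where sin(kx) = 0) never move.
    assert np.allclose(travelling_right + travelling_left, standing)

print("two counter-propagating sine waves superpose to a standing wave")
```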
A plane wave is an important mathematical idealization where the disturbance is identical along any (infinite) plane normal to a specific direction of travel. Mathematically, the simplest wave is a sinusoidal plane wave in which at any point the field experiences simple harmonic motion at one frequency. In linear media, complicated waves can generally be decomposed as the sum of many sinusoidal plane waves having different directions of propagation and/or different frequencies. A plane wave is classified as a transverse wave if the field disturbance at each point is described by a vector perpendicular to the direction of propagation (also the direction of energy transfer); or longitudinal if those vectors are exactly in the propagation direction. Mechanical waves include both transverse and longitudinal waves; on the other hand electromagnetic plane waves are strictly transverse while sound waves in fluids (such as air) can only be longitudinal. That physical direction of an oscillating field relative to the propagation direction is also referred to as the wave's polarization which can be an important attribute for waves having more than one single possible polarization. Mathematical description Single waves A wave can be described just like a field, namely as a function where is a position and is a time. The value of is a point of space, specifically in the region where the wave is defined. In mathematical terms, it is usually a vector in the Cartesian three-dimensional space . However, in many cases one can ignore one dimension, and let be a point of the Cartesian plane . This is the case, for example, when studying vibrations of a drum skin. One may even restrict to a point of the Cartesian line -- that is, the set of real numbers. This is the case, for example, when studying vibrations in a violin string or recorder. The time , on the other hand, is always assumed to be a scalar; that is, a real number. The value of can be any physical quantity of interest assigned to the point that may vary with time. For example, if represents the vibrations inside an elastic solid, the value of is usually a vector that gives the current displacement from of the material particles that would be at the point in the absence of vibration. For an electromagnetic wave, the value of can be the electric field vector , or the magnetic field vector , or any related quantity, such as the Poynting vector . In fluid dynamics, the value of could be the velocity vector of the fluid at the point , or any scalar property like pressure, temperature, or density. In a chemical reaction, could be the concentration of some substance in the neighborhood of point of the reaction medium. For any dimension (1, 2, or 3), the wave's domain is then a subset of , such that the function value is defined for any point in . For example, when describing the motion of a drum skin, one can consider to be a disk (circle) on the plane with center at the origin , and let be the vertical displacement of the skin at the point of and at time . Wave families Sometimes one is interested in a single specific wave. More often, however, one needs to understand large set of possible waves; like all the ways that a drum skin can vibrate after being struck once with a drum stick, or all the possible radar echos one could get from an airplane that may be approaching an airport. In some of those situations, one may describe such a family of waves by a function that depends on certain parameters , besides and . 
Then one can obtain different waves -- that is, different functions of and -- by choosing different values for those parameters. Sound pressure standing wave in a half-open pipe playing the 7th harmonic of the fundamental (n = 4) For example, the sound pressure inside a recorder that is playing a "pure" note is typically a standing wave, that can be written as The parameter defines the amplitude of the wave (that is, the maximum sound pressure in the bore, which is related to the loudness of the note); is the speed of sound; is the length of the bore; and is a positive integer (1,2,3,...) that specifies the number of nodes in the standing wave. (The position should be measured from the mouthpiece, and the time from any moment at which the pressure at the mouthpiece is maximum. The quantity is the wavelength of the emitted note, and is its frequency.) Many general properties of these waves can be inferred from this general equation, without choosing specific values for the parameters. As another example, it may be that the vibrations of a drum skin after a single strike depend only on the distance from the center of the skin to the strike point, and on the strength of the strike. Then the vibration for all possible strikes can be described by a function . Sometimes the family of waves of interest has infinitely many parameters. For example, one may want to describe what happens to the temperature in a metal bar when it is initially heated at various temperatures at different points along its length, and then allowed to cool by itself in vacuum. In that case, instead of a scalar or vector, the parameter would have to be a function such that is the initial temperature at each point of the bar. Then the temperatures at later times can be expressed by a function that depends on the function (that is, a functional operator), so that the temperature at a later time is Differential wave equations Another way to describe and study a family of waves is to give a mathematical equation that, instead of explicitly giving the value of , only constrains how those values can change with time. Then the family of waves in question consists of all functions that satisfy those constraints -- that is, all solutions of the equation. This approach is extremely important in physics, because the constraints usually are a consequence of the physical processes that cause the wave to evolve. For example, if is the temperature inside a block of some homogeneous and isotropic solid material, its evolution is constrained by the partial differential equation where is the heat that is being generated per unit of volume and time in the neighborhood of at time (for example, by chemical reactions happening there); are the Cartesian coordinates of the point ; is the (first) derivative of with respect to ; and is the second derivative of relative to . (The symbol "" is meant to signify that, in the derivative with respect to some variable, all other variables must be considered fixed.) This equation can be derived from the laws of physics that govern the diffusion of heat in solid media. For that reason, it is called the heat equation in mathematics, even though it applies to many other physical quantities besides temperatures. For another example, we can describe all possible sounds echoing within a container of gas by a function that gives the pressure at a point and time within that container. 
If the gas was initially at uniform temperature and composition, the evolution of is constrained by the formula Here is some extra compression force that is being applied to the gas near by some external process, such as a loudspeaker or piston right next to . This same differential equation describes the behavior of mechanical vibrations and electromagnetic fields in a homogeneous isotropic non-conducting solid. Note that this equation differs from that of heat flow only in that the left-hand side is , the second derivative of with respect to time, rather than the first derivative . Yet this small change makes a huge difference on the set of solutions . This differential equation is called "the" wave equation in mathematics, even though it describes only one very special kind of waves. Wave in elastic medium Wavelength ?, can be measured between any two corresponding points on a waveform • in the direction in space. For example, let the positive direction be to the right, and the negative direction be to the left. • with constant amplitude • with constant velocity , where is • with constant waveform, or shape This wave can then be described by the two-dimensional functions (waveform traveling to the right) (waveform traveling to the left) or, more generally, by d'Alembert's formula:[6] representing two component waveforms and traveling through the medium in opposite directions. A generalized representation of this wave can be obtained[7] as the partial differential equation General solutions are based upon Duhamel's principle.[8] Wave forms Sine, square, triangle and sawtooth waveforms. The form or shape of F in d'Alembert's formula involves the argument x - vt. Constant values of this argument correspond to constant values of F, and these constant values occur if x increases at the same rate that vt increases. That is, the wave shaped like the function F will move in the positive x-direction at velocity v (and G will propagate at the same speed in the negative x-direction).[9] In the case of a periodic function F with period ?, that is, F(x + ? - vt) = F(x - vt), the periodicity of F in space means that a snapshot of the wave at a given time t finds the wave varying periodically in space with period ? (the wavelength of the wave). In a similar fashion, this periodicity of F implies a periodicity in time as well: F(x - v(t + T)) = F(x - vt) provided vT = ?, so an observation of the wave at a fixed location x finds the wave undulating periodically in time with period T = ?/v.[10] Amplitude and modulation Amplitude modulation can be achieved through f(x,t) = 1.00×sin(2?/0.10×(x-1.00×t)) and g(x,t) = 1.00×sin(2?/0.11×(x-1.00×t))only the resultant is visible to improve clarity of waveform. Phase velocity and group velocity A wave with the group and phase velocities going in different directions Group velocity is a property of waves that have a defined envelope, measuring propagation through space (that is, phase velocity) of the overall shape of the waves' amplitudes - modulation or envelope of the wave. Sine waves Sinusoidal waves correspond to simple harmonic motion. • is the space coordinate • is the time coordinate • is the wavenumber • is the angular frequency • is the phase constant. The units of the amplitude depend on the type of wave. 
Transverse mechanical waves (for example, a wave on a string) have an amplitude expressed as a distance (for example, meters), longitudinal mechanical waves (for example, sound waves) use units of pressure (for example, pascals), and electromagnetic waves (a form of transverse vacuum wave) express the amplitude in terms of its electric field (for example, volts/meter). The wavelength of a sinusoidal waveform traveling at constant speed is given by:[16] Although arbitrary wave shapes will propagate unchanged in lossless linear time-invariant systems, in the presence of dispersion the sine wave is the unique shape that will propagate unchanged but for phase and amplitude, making it easy to analyze.[18] Due to the Kramers-Kronig relations, a linear medium with dispersion also exhibits loss, so the sine wave propagating in a dispersive medium is attenuated in certain frequency ranges that depend upon the medium.[19] The sine function is periodic, so the sine wave or sinusoid has a wavelength in space and a period in time.[20][21] The sinusoid is defined for all times and distances, whereas in physical situations we usually deal with waves that exist for a limited span in space and duration in time. An arbitrary wave shape can be decomposed into an infinite set of sinusoidal waves by the use of Fourier analysis. As a result, the simple case of a single sinusoidal wave can be applied to more general cases.[22][23] In particular, many media are linear, or nearly so, so the calculation of arbitrary wave behavior can be found by adding up responses to individual sinusoidal waves using the superposition principle to find the solution for a general waveform.[24] When a medium is nonlinear, then the response to complex waves cannot be determined from a sine-wave decomposition. Plane waves A plane wave is a kind of wave whose value varies only in one spatial direction. That is, its value is constant on a plane that is perpendicular to that direction. Plane waves can be specified by a vector of unit length indicating the direction that the wave varies in, and a wave profile describing how the wave varies as a function of the displacement along that direction () and time (). Since the wave profile only depends on the position in the combination , any displacement in directions perpendicular to cannot affect the value of the field. Plane waves are often used to model electromagnetic waves far from a source. For electromagnetic plane waves, the electric and magnetic fields themselves are transverse to the direction of propagation, and also perpendicular to each other. Standing waves Standing wave. The red dots represent the wave nodes A standing wave, also known as a stationary wave, is a wave whose envelope remains in a constant position. This phenomenon arises as a result of interference between two waves traveling in opposite directions. Physical properties Waves exhibit common behaviors under a number of standard situations, for example: Transmission and media Waves normally move in a straight line (that is, rectilinearly) through a transmission medium. Such media can be classified into one or more of the following categories: Waves are usually defined in media which allow most or all of a wave's energy to propagate without loss. However materials may be characterized as "lossy" if they remove energy from a wave, usually converting it into heat. This is termed "absorption." 
A material which absorbs a wave's energy, either in transmission or reflection, is characterized by a refractive index which is complex. The amount of absorption will generally depend on the frequency (wavelength) of the wave, which, for instance, explains why objects may appear colored. Identical waves from two sources undergoing interference. Observed at the bottom one sees 5 positions where the waves add in phase, but in between which they are out of phase and cancel. When waves in a linear medium (the usual case) cross each other in a region of space, they do not actually interact with each other, but continue on as if the other one weren't present. However at any point in that region the field quantities describing those waves add according to the superposition principle. If the waves are of the same frequency in a fixed phase relationship, then there will generally be positions at which the two waves are in phase and their amplitudes add, and other positions where they are out of phase and their amplitudes (partially or fully) cancel. This is called an interference pattern. Circular.Polarization.Circularly.Polarized.Light Circular.Polarizer Creating.Left.Handed.Helix.View.svg A wave undergoes dispersion when either the phase velocity or the group velocity depends on the wave frequency. Dispersion is most easily seen by letting white light pass through a prism, the result of which is to produce the spectrum of colors of the rainbow. Isaac Newton performed experiments with light and prisms, presenting his findings in the Opticks (1704) that white light consists of several colors and that these colors cannot be decomposed any further.[25] Mechanical waves Waves on strings where the linear density ? is the mass per unit length of the string. Acoustic waves Acoustic or sound waves travel at speed given by Water waves Shallow water wave.gif • Sound - a mechanical wave that propagates through gases, liquids, solids and plasmas; Seismic waves Doppler effect The Doppler effect (or the Doppler shift) is the change in frequency of a wave in relation to an observer who is moving relative to the wave source.[26] It is named after the Austrian physicist Christian Doppler, who described the phenomenon in 1842. Shock waves Formation of a shock wave by a plane. Electromagnetic waves Onde electromagnétique.png Quantum mechanical waves Schrödinger equation Dirac equation de Broglie waves Louis de Broglie postulated that all particles with momentum have a wavelength where the wavelength is determined by the wave vector k as: and the momentum by: In representing the wave function of a localized particle, the wave packet is often taken to have a Gaussian shape and is called a Gaussian wave packet.[31] Gaussian wave packets also are used to analyze water waves.[32] For example, a Gaussian wavefunction ? might take the form:[33] at some initial time t = 0, where the central wavelength is related to the central wave vector k0 as ?0 = 2? / k0. It is well known from the theory of Fourier analysis,[34] or from the Heisenberg uncertainty principle (in the case of quantum mechanics) that a narrow range of wavelengths is necessary to produce a localized wave packet, and the more localized the envelope, the larger the spread in required wavelengths. The Fourier transform of a Gaussian is itself a Gaussian.[35] Given the Gaussian: the Fourier transform is: The Gaussian in space therefore is made up of waves: that is, a number of waves of wavelengths ? such that k? = 2 ?. The parameter ? 
decides the spatial spread of the Gaussian along the x-axis, while the Fourier transform shows a spread in wave vector k determined by 1/?. That is, the smaller the extent in space, the larger the extent in k, and hence in ? = 2?/k. Gravity waves Gravitational waves Gravitational waves also travel through space. The first observation of gravitational waves was announced on 11 February 2016.[36] Gravitational waves are disturbances in the curvature of spacetime, predicted by Einstein's theory of general relativity. See also Waves in general Electromagnetic waves In fluids In quantum mechanics In relativity Other specific types of waves Related topics 1. ^ Santos, Edgar; Schöll, Michael; Sánchez-Porras, Renán; Dahlem, Markus A.; Silos, Humberto; Unterberg, Andreas; Dickhaus, Hartmut; Sakowitz, Oliver W. (2014-10-01). "Radial, spiral and reverberating waves of spreading depolarization occur in the gyrencephalic brain". NeuroImage. 99: 244-255. doi:10.1016/j.neuroimage.2014.05.021. ISSN 1095-9572. PMID 24852458. S2CID 1347927. 2. ^ (Hall 1982, p. 8) 3. ^ Pragnan Chakravorty, "What Is a Signal? [Lecture Notes]," IEEE Signal Processing Magazine, vol. 35, no. 5, pp. 175-177, Sept. 2018. doi:10.1109/MSP.2018.2832195 4. ^ Michael A. Slawinski (2003). "Wave equations". Seismic waves and rays in elastic media. Elsevier. pp. 131 ff. ISBN 978-0-08-043930-3. 5. ^ Lev A. Ostrovsky & Alexander I. Potapov (2001). Modulated waves: theory and application. Johns Hopkins University Press. ISBN 978-0-8018-7325-6. 6. ^ Karl F Graaf (1991). Wave motion in elastic solids (Reprint of Oxford 1975 ed.). Dover. pp. 13-14. ISBN 978-0-486-66745-4. 8. ^ Jalal M. Ihsan Shatah; Michael Struwe (2000). "The linear wave equation". Geometric wave equations. American Mathematical Society Bookstore. pp. 37ff. ISBN 978-0-8218-2749-9. 9. ^ Louis Lyons (1998). All you wanted to know about mathematics but were afraid to ask. Cambridge University Press. pp. 128 ff. ISBN 978-0-521-43601-4. 11. ^ Christian Jirauschek (2005). FEW-cycle Laser Dynamics and Carrier-envelope Phase Detection. Cuvillier Verlag. p. 9. ISBN 978-3-86537-419-6. 12. ^ Fritz Kurt Kneubühl (1997). Oscillations and waves. Springer. p. 365. ISBN 978-3-540-62001-3. 13. ^ Mark Lundstrom (2000). Fundamentals of carrier transport. Cambridge University Press. p. 33. ISBN 978-0-521-63134-1. 14. ^ a b Chin-Lin Chen (2006). "§13.7.3 Pulse envelope in nondispersive media". Foundations for guided-wave optics. Wiley. p. 363. ISBN 978-0-471-75687-3. 15. ^ Stefano Longhi; Davide Janner (2008). "Localization and Wannier wave packets in photonic crystals". In Hugo E. Hernández-Figueroa; Michel Zamboni-Rached; Erasmo Recami (eds.). Localized Waves. Wiley-Interscience. p. 329. ISBN 978-0-470-10885-7. 16. ^ David C. Cassidy; Gerald James Holton; Floyd James Rutherford (2002). Understanding physics. Birkhäuser. pp. 339ff. ISBN 978-0-387-98756-9. 19. ^ See Eq. 5.10 and discussion in A.G.G.M. Tielens (2005). The physics and chemistry of the interstellar medium. Cambridge University Press. pp. 119 ff. ISBN 978-0-521-82634-1.; Eq. 6.36 and associated discussion in Otfried Madelung (1996). Introduction to solid-state theory (3rd ed.). Springer. pp. 261 ff. ISBN 978-3-540-60443-3.; and Eq. 3.5 in F Mainardi (1996). "Transient waves in linear viscoelastic media". In Ardéshir Guran; A. Bostrom; Herbert Überall; O. Leroy (eds.). Acoustic Interactions with Submerged Elastic Structures: Nondestructive testing, acoustic wave propagation and scattering. World Scientific. p. 134. 
ISBN 978-981-02-4271-8. 20. ^ Aleksandr Tikhonovich Filippov (2000). The versatile soliton. Springer. p. 106. ISBN 978-0-8176-3635-7. 21. ^ Seth Stein, Michael E. Wysession (2003). An introduction to seismology, earthquakes, and earth structure. Wiley-Blackwell. p. 31. ISBN 978-0-86542-078-6. 22. ^ Seth Stein, Michael E. Wysession (2003). op. cit.. p. 32. ISBN 978-0-86542-078-6. 23. ^ Kimball A. Milton; Julian Seymour Schwinger (2006). Electromagnetic Radiation: Variational Methods, Waveguides and Accelerators. Springer. p. 16. ISBN 978-3-540-29304-0. Thus, an arbitrary function f(r, t) can be synthesized by a proper superposition of the functions exp[i (k·r-?t)]... 24. ^ Raymond A. Serway & John W. Jewett (2005). "§14.1 The Principle of Superposition". Principles of physics (4th ed.). Cengage Learning. p. 433. ISBN 978-0-534-49143-7. 26. ^ Giordano, Nicholas (2009). College Physics: Reasoning and Relationships. Cengage Learning. pp. 421-424. ISBN 978-0534424718. 27. ^ Anderson, John D. Jr. (January 2001) [1984], Fundamentals of Aerodynamics (3rd ed.), McGraw-Hill Science/Engineering/Math, ISBN 978-0-07-237335-6 28. ^ M.J. Lighthill; G.B. Whitham (1955). "On kinematic waves. II. A theory of traffic flow on long crowded roads". Proceedings of the Royal Society of London. Series A. 229 (1178): 281-345. Bibcode:1955RSPSA.229..281L. CiteSeerX doi:10.1098/rspa.1955.0088. S2CID 18301080. And: P.I. Richards (1956). "Shockwaves on the highway". Operations Research. 4 (1): 42-51. doi:10.1287/opre.4.1.42. 29. ^ A.T. Fromhold (1991). "Wave packet solutions". Quantum Mechanics for Applied Physics and Engineering (Reprint of Academic Press 1981 ed.). Courier Dover Publications. pp. 59 ff. ISBN 978-0-486-66741-6. (p. 61) ...the individual waves move more slowly than the packet and therefore pass back through the packet as it advances 30. ^ Ming Chiang Li (1980). "Electron Interference". In L. Marton; Claire Marton (eds.). Advances in Electronics and Electron Physics. 53. Academic Press. p. 271. ISBN 978-0-12-014653-6. 31. ^ See for example Walter Greiner; D. Allan Bromley (2007). Quantum Mechanics (2 ed.). Springer. p. 60. ISBN 978-3-540-67458-0. and John Joseph Gilman (2003). Electronic basis of the strength of materials. Cambridge University Press. p. 57. ISBN 978-0-521-62005-5.,Donald D. Fitts (1999). Principles of quantum mechanics. Cambridge University Press. p. 17. ISBN 978-0-521-65841-6.. 32. ^ Chiang C. Mei (1989). The applied dynamics of ocean surface waves (2nd ed.). World Scientific. p. 47. ISBN 978-9971-5-0789-3. 34. ^ Siegmund Brandt; Hans Dieter Dahmen (2001). The picture book of quantum mechanics (3rd ed.). Springer. p. 23. ISBN 978-0-387-95141-6. 35. ^ Cyrus D. Cantrell (2000). Modern mathematical methods for physicists and engineers. Cambridge University Press. p. 677. ISBN 978-0-521-59827-9. • Fleisch, D.; Kinnaman, L. (2015). A student's guide to waves. Cambridge: Cambridge University Press. Bibcode:2015sgw..book.....F. ISBN 978-1107643260. • French, A.P. (1971). Vibrations and Waves (M.I.T. Introductory physics series). Nelson Thornes. ISBN 978-0-393-09936-2. OCLC 163810889. • Hall, D.E. (1980). Musical Acoustics: An Introduction. Belmont, CA: Wadsworth Publishing Company. ISBN 978-0-534-00758-4.. • Ostrovsky, L.A.; Potapov, A.S. (1999). Modulated Waves, Theory and Applications. Baltimore: The Johns Hopkins University Press. ISBN 978-0-8018-5870-3.. • Griffiths, G.; Schiesser, W.E. (2010). 
Traveling Wave Analysis of Partial Differential Equations: Numerical and Analytical Methods with Matlab and Maple. Academic Press. ISBN 9780123846532.
The heavy baryons in the nonperturbative string approach I.M.Narodetskii, M.A.Trusov Institute of Theoretical and Experimental Physics, Moscow, Russia We present some piloting calculations of the short-range correlation coefficients for the light and heavy baryons and masses of the doubly heavy baryons and () in the framework of the simple approximation within the nonperturbative QCD approach. 1. Introduction The observation of meson by the CDF collaboration [1] opens a new direction in the physics of hadrons containing two heavy quarks. Presently at the LHC, -factories, and the Tevatron with high luminosity, several experiments have been proposed, in which there is a possibility to identify and study hadrons containing two heavy quarks, like doubly-charm baryons or baryons with charm and beauty111 Here, and throughout this paper, denotes a light quark or .. In the more distant future the next generation experiments with high bottom quark production rate will provide excellent possibilities for the study bottom baryons and their decays. In view of this project, it is important to have safe theoretical predictions for heavy baryon masses as a guide to the experimental search of these hadrons. A number of authors [2]-[12] have already considered baryons containing two heavy quarks in anticipation of future experiments which may discover these particles. In most of these works, however, theoretical predictions are somewhat biased by the introduction of the additional dynamical assumptions and supplementary dynamical parameters like constituent quark masses in addition to the only one parameter really pertinent to QCD – the overall scale of the theory . The purpose of this paper is to calculate the masses of the heavy baryons in a simple approximation within the nonperturbative QCD, developed in [13]-[16]. This method has been already applied to study baryon Regge trajectories [15] and, very recently, for computation of magnetic moments of light baryons [17]. The essential point of this paper is that it is very reasonable that the same method should also hold for hadrons containing heavy quarks. In this work we will concentrate on the masses of doubly heavy baryons. As in [17] we take as the universal parameter the QCD string tension , fixed in experiment by the meson and baryon Regge slopes. We also include the perturbative Coulomb interaction with the frozen coupling . The basic feature of the considered approach is the dynamical calculation of the quark constituent masses in terms of the quark current masses . This is done using the einbein (auxiliary fields) formalism, which is proven to be rather accurate in various calculations for relativistic systems. The einbeins are treated as the variational parameters which are to be found form the condition of the minimum of baryon eigen energies [18]. 2. Formalism The starting point of the approach is the Feynman–Schwinger representation of the Green’s function, where the role of ”time” parameter along a quark path is played by the Fock–Schwinger proper time. The final step is the derivation of the c.m. Effective Hamiltonian (EH) containing the dynamical quark masses as parameters. For many details see the original papers [13]-[16]. Consider a baryon consisting of three quarks with arbitrary masses , . In what follows we confine ourselves to consideration of the ground state baryons without radial and orbital excitations in which case tensor and spin-orbit forces do not contribute perturbatively. 
Then only the spin-spin interaction survives in the perturbative approximation. The EH has the following form: where are the current quark masses and are the dynamical quark masses to be found from the minimum condition (see Eq. (2) below). Since for light quarks, but for heavy quarks, each light quark contributes to the baryon mass an additional mass (not as in the ordinary nonrelativistic quark model), whereas each heavy quark contributes an additional mass . The dynamical quark masses are evaluated from the equations defining the stationary points of the baryon mass as function of Let be the quark coordinates. The kinetic momentum operator in Eq. (1) acquires the familiar form is the sum of the perturbative Coulomb-like one gluon exchange potential and the string potential. The Coulomb-like potential is where the factor is the value of the quadratic Casimir operator for the group . The string potential has been calculated in [15] as the static energy of the three heavy quarks where is the sum of the three distances from the string junction point, which for simplicity is chosen as coinciding with the centre-of-mass coordinate . 3. Hyper Radial Approximation We use the hyperspherical formalism approach (for detail see original papers [19]). In the hyperradial approximation (HRA) corresponding to the truncation of the wave function by the component with grand orbital momentum the three-quark wave function depends only on the hyperradius , where and are the three-body Jacobi variables222for their definition see Appendix, and does not depend on angular variables. The confining potential (5) has a specific three-body character. However, this potential as well as the Coulomb potential in Eq. (4) is smooth in the sense that the HRA (where only the part of the potential which is invariant under rotation in the six-dimensional space spanned by the Jacobi coordinates is taken into account) is already an excellent approximation. The HRA neglects the mixed symmetry components of the three-quark wave function, which appear in the higher approximations of the hyperspherical formalism [19]. Introducing the reduced function333In what follows we omit the value of to avoid subscripts. Note that the radially symmetric component with is the dominant one in the three-quark wave function. and averaging over the six-dimensional sphere one obtains the Schrödinger equation where is an arbitrary parameter with the dimension of mass which drops off in the final expressions. The last term in (6) represents the three-body centrifugal barrier and is the average of the three-quark potentials over the six-dimensional sphere: The mass depending constants and are defined by Eqs. (A.2) and (A.13) in the Appendix. It is convenient to introduce a new variable , to eliminate an artificial dependence of Eq. (6) on , then the equation (6) becomes Since , (see Eqs. (A.2), (A.13)), the eigenvalue in (6) does not depend on . 4. The quark dynamical masses Equation (9) applied to the nucleon yields the dynamical mass of the light quark, and applied to the strange hyperons gives the strange quark mass . In the same manner application of this equation to the charm and beauty baryons yields the constituent masses of - and -quarks. In our calculations we use the same parameters as in [22], namely  GeV, , GeV, GeV, GeV, and GeV. We solve Eq. (9) using both the quasiclassical and variational solutions. 
The first approach is based on the well-known fact that interplay between the centrifugal term and the confining potential produces a minimum of the effective potential specific for the three-body problem. The numerical solution of (9) for the ground state eigen energy may be reproduced on a per cent level of accuracy by using the parabolic approximation for the effective potential [20], [21]. This approximation provides an analytical expression for the eigen energy. The potential has the minimum at a point , which is defined by the condition , i.e.: Expanding in the vicinity of one obtains: i.e. the potential of the harmonic oscillator with the frequency . Therefore the energy eigenvalue is In Table 1 we show the dynamical masses and the ground state eigenvalues for various baryons calculated using the procedure described above. Our values of light quark mass qualitatively agree with the results of [22] obtained from the analysis of the heavy-light ground meson states, but MeV higher than those of [15], [17]. This difference is due to the different treatment of the Coulomb and spin-spin interactions. In [15] both interactions have not been included and the light quark mass has been calculated from the fit of the mass of where the Coulomb-like potential and the spin-spin interaction seem to balance each other. In [17] the smeared spin-spin interaction for the light quarks has been included into Eq. (2) defining the dynamical mass of the light quark. In our calculation as in [22] we include the Coulomb-like term, but neglect the spin-spin interaction. There is no good theoretical reason why dynamical quark masses need to be the same in different mesons and baryons. From the results of Table 1 we conclude that the dynamical masses of the light quarks (, , or ) are increased by when going from the light to heavy baryons. For the heavy quarks ( and ) the variation in the values of their dynamical masses is marginal. In Table 2 we compare the quark masses in and baryons with those calculated in [22] in and mesons. One observes that the masses of the light quarks in baryons are slightly smaller than those in the mesons. The small variations in the values of and are within the accuracy of our calculations. 5. Correlation functions for the baryons For many applications the quantities are needed. To estimate effects related to the baryon wave function we solve Eq. (9) by the variational method. We introduce a simple variational ansätz for where is the variational parameter, and the numerical factor is chosen so that . The trial three-quark Hamiltonian admits explicit solutions for the energy, the wave function, and the density matrix: The density matrix (the correlation function) in a baryon {ijk} is defined as: so that For the trial function (13) are evaluated explicitly: where is the reduced mass of the quarks and , and is to be found from the condition The expectation values depend on the third or ‘spectator’ quark through the three-quark wave function. Let us define the quantities The corresponding quantity for a meson is denoted as . The results of the variational calculations are given in Table 3 where for each baryon we show the variational parameters , the quantities (in units of GeV), and the average distances (in units of fm). The variational estimations of and quark dynamical masses do not differ from those shown in Table 1. 
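Before turning to the comparison of the correlation functions, it may help to see the bare bones of a one-parameter variational minimization of the kind used above. The sketch below is a purely illustrative one-dimensional analogue: a single particle in a linear (string-like) potential with a Gaussian trial function whose width parameter b is scanned for the minimum of the energy expectation value. The potential, the constants, and the grid are stand-ins and are not the three-quark hyperradial problem of Eq. (9).

```python
import numpy as np

sigma = 1.0                              # string-tension analogue, illustrative only
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

def energy(b):
    # Energy expectation for the Gaussian trial function exp(-b x^2 / 2)
    # in the linear potential sigma*|x|, with hbar = m = 1.
    psi = np.exp(-0.5 * b * x**2)
    dpsi = np.gradient(psi, dx)
    kinetic = 0.5 * np.trapz(dpsi**2, x)           # <T> = (1/2) * integral of (psi')^2
    potential = np.trapz(sigma * np.abs(x) * psi**2, x)
    norm = np.trapz(psi**2, x)
    return (kinetic + potential) / norm

bs = np.linspace(0.2, 3.0, 281)
Es = np.array([energy(b) for b in bs])
best = Es.argmin()
print(f"optimal b = {bs[best]:.3f}, variational energy = {Es[best]:.4f}")
```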
Comparing the results of Table 3 with those of [22] we obtain (see Table 4)444 Inequalities (21) and (22) were first suggested in [23] from the observed mass splitting in mesons and baryons. Note, however, that if are the light quarks, and the quarks and are the heavy, then (e.g., ) in agreement with the limit of the heavy quark effective theory. Our estimations for the ratios agree with the results obtained using the nonrelativistic quark model or the bag model [24]-[26] or QCD sum rules [27] which are typically in the range . On the other hand, our result for disagrees with the one by Rosner [28] who estimated the heavy-light diquark density at zero separation in from the ratio of hyperfine splittings between and baryons and and mesons and found , if the baryon splitting is taken to be , or even to , if the surprisingly small and not confirmed yet DELPHI result is used. From the results of Table 3 it follows that the correlation between two quarks depends on the third one. Note also that the wave function calculated in HRA shows the marginal diquark clustering in the doubly heavy baryons. This is principally kinematic effect related to the fact that in the HRA the difference between the various in a baryon is due to the factor which varies between for and for . In Table 5 we compare the short-range correlation coefficients in the doubly heavy baryons with those calculated in [7] using the pair-wise quark interaction with the power-law potential, and in [9] using the non-relativistic model with the Buchmüller–Tye potential. 6. Masses of doubly heavy baryons To calculate hadron masses we, as in [15], first renormalise the string potential where the constants take into account the residual self-energy (RSE) of quarks. In principle, these constants can be expressed in terms of the two scalar functions entering covariant expansion of the bilocal cumulants of gluonic fields in the QCD vacuum [14, 15]. In the present work we treat them phenomenologically. To find in (23) we assume, first, that the spin splittings of hadrons with a given quark content arise from the colour-magnetic interaction in QCD. Indeed, for the ground state hadrons the hadron wave functions have no orbital angular momentum, so tensor and spin-orbit forces do not contribute. The second assumption is that the colour-magnetic interaction can be treated perturbatively [29, 30]: Because the colour-magnetic interaction between two quarks goes inversely as the product of their masses, the perturbative approximation improves as the quark mass increases. However, this approximation may not be good for the baryons containing light quarks555Note that dependence in Eq. (24), if treated literally in the EH method, results in a collapse both in the pseudoscalar channel and the proton. That may be a signal of the Nambu–Goldstone phenomenon.. In what follows we adjust the RSE constants to reproduce the centre-of-gravity for baryons with a given flavour. To this end we consider the spin-averaged masses, such as: and analogous combinations for and states. Then we obtain We keep these parameters fixed to calculate the masses given in Table 6, namely the spin-averaged masses (computed without the spin-spin term) of the lowest doubly heavy baryons. Our results are very similar to those obtained in [7] using the pair-wise power-law potential. 7. 
Conclusions In this paper we employ the general formalism for the baryons, which is based on nonperturbative QCD and where the only inputs are the string tension , the strong coupling constant , and two additive constants, and , the residual self-energies of the light quarks. We present some piloting calculations of the dynamical quark masses for various baryons (see Table 1). The latters are computed solely in terms of and and depend on a baryon. The second important point of our investigation is the calculation of the correlation functions for baryons. They are given, among the other things, in Table 3. We have also performed the calculations of the spin-averaged masses of baryons with two heavy quarks. One can see from Table 6 that our predictions are especially close to those obtained in [7] using a variant of the power-law potential adjusted to fit ground state baryons. Evaluation of the spin-spin interactions requires inclusion of the hyperspherical components and/or more sophisticated treatment of the colour-magnetic interaction. We shall consider these calculations in the next publication. We thank Yu.S.Kalashnikova and Yu. A. Simonov for useful discussions. We also thank K.A.Ter-Martirosian for his interest in this work. This work was supported in part by RFBR grants ## 00-02-16363 and 00-15-96786. Consider three quarks with arbitrary masses , , and coordinates . The problem is conveniently treated using Jacobi coordinates and : Here and are the reduced masses Altogether with the centre-of-mass coordinate Jacobi coordinates determine completely the position of the system. The Jacobian of the transformation for the differential volume elements is 1, i.e., The inverse transformations for the relative coordinates and are The hyperradius is defined as and does not depend on the order of the quark numbering: Written in terms of Eq. (A.6) reads: In the centre-of-mass frame the invariant kinetic energy operator (3) is written in terms the Jacobi coordinates (A.1) as where is angular momentum operator whose eigen functions (the hyperspherical harmonics) are with being the grand orbital momentum. In terms of the wave function can be written in a symbolical shorthand as In the HRA and . Note that the centrifugal potential in the Schrödinger equation for the radial function with a given is not zero even for . For the reduced function one obtains after averaging the interaction over the six-dimensional sphere Eq. (6) with One can easily see that the definition of does not depend on the order of the quark numeration. In terms of the Jacobi coordinates the Coulomb and string potentials read: Using the relations [20] valid for any pair (ij), one obtains Eqs. (8). Table 1. The constituent quark masses and the ground state eigen energies (in units of GeV) for the various baryon states. (The results obtained from the quasiclassical solution and from the variational one practically coincide.) 0.446 0.446 0.446 1.438 0.451 0.451 0.485 1.414 0.457 0.490 0.490 1.392 0.495 0.495 0.495 1.370 0.519 0.519 1.502 1.176 0.522 0.555 1.505 1.157 0.589 0.589 1.507 1.138 0.564 0.564 4.836 1.057 0.567 0.601 4.837 1.038 0.604 0.604 4.838 1.019 0.569 1.555 1.555 0.926 0.604 1.557 1.557 0.908 0.606 1.616 4.866 0.783 0.642 1.618 4.867 0.765 0.636 4.931 4.931 0.582 0.673 4.931 4.931 0.565 Table 2. The dynamical quark masses for the ground state , , , mesons [22] and for the corresponding ground state baryons. 
0.529 1.497 0.569 1.501 0.519 1.502 0.522 0.555 1.505 0.619 4.84 0.658 4.842 0.564 4.836 0.567 0.601 4.838 Table 3.   in units of GeV and in units of fm. (The results are obtained from the trial functions (13) with the variational parameters given in units of GeV in the first column. The results for light baryons are presented for completeness.) 0.472 0.00564 0.00564 0.00564 0.777 0.777 0.777 0.470 0.00567 0.00598 0.00598 0.775 0.762 0.762 0.469 0.00600 0.00633 0.00600 0.760 0.747 0.760 0.467 0.00636 0.00636 0.00636 0.746 0.746 0.746 0.454 0.00626 0.0113 0.0113 0.750 0.615 0.615 0.452 0.00656 0.0121 0.0113 0.738 0.601 0.615 0.451 0.00688 0.0121 0.0121 0.727 0.602 0.602 0.447 0.00681 0.0163 0.0163 0.729 0.545 0.545 0.446 0.00711 0.0176 0.0163 0.719 0.531 0.545 0.445 0.00742 0.0176 0.0176 0.708 0.531 0.531 0.439 0.0116 0.0296 0.0116 0.611 0.447 0.611 0.438 0.0123 0.0294 0.0123 0.599 0.448 0.599 0.436 0.0123 0.0562 0.0166 0.599 0.361 0.541 0.435 0.0130 0.0559 0.0178 0.587 0.361 0.529 0.438 0.0181 0.165 0.0181 0.527 0.252 0.527 0.437 0.0194 0.165 0.0194 0.515 0.252 0.515 Table 4.  The ratios of the squares of the wave functions determining the probability to find a light quark at the location of the heavy quark inside the heavy baryon and the corresponding meson. (The meson wave functions are taken from [22].) 0.436 0.405 0.373 0.340 Table 5. Short-range correlation coefficients . In the parentheses are shown the corresponding quantities calculated using the power-law potential [7]. In the square brackets are shown correlation coefficients calculated using non-relativistic model with Buchmüller-Tye potential. 0.030 (0.039) [0.022] 0.012 (0.009) 0.012 (0.009) 0.030 (0.042) [0.022] 0.012 (0.019) 0.012 (0.019) 0.165 (0.152) [0.144] 0.018 (0.012) 0.018 (0.012) 0.165 (0.162) [0.144] 0.019 (0.028) 0.019 (0.028) 0.056 (0.065) [0.042] 0.012 (0.010) 0.017 (0.011) 0.056 (0.071) [0.042] 0.013 (0.021) 0.018 (0.025) Table 6. Masses of baryons containing two heavy quarks State present [7] [8] [10] [11] [12] 3.69 3.70 3.71 3.66 3.61 3.48 3.86 3.80 3.76 3.74 3.71 3.58 6.96 6.99 6.95 7.04 6.82 7.13 7.07 7.05 7.09 6.92 10.16 10.24 10.23 10.24 10.09 10.34 10.30 10.32 10.37 10.19 The additive nonrelativistic quark model with the power-law potential. Relativistic quasipotential quark model. The Feynman-Hellmann theorem. Approximation of doubly heavy diquark. For everything else, email us at [email protected].
Saturday, February 29, 2020 14 Years BackRe(Action) [Image: Scott McLeod/Flickr] 14 years ago, I was a postdoc in Santa Barbara, in a tiny corner office where the windows wouldn't open, in a building that slightly swayed each time one of the frequent mini-earthquakes shook up California. I had just published my first blogpost. It happened to be about the possibility that the Large Hadron Collider, which was not yet in operation, would produce tiny black holes and inadvertently kill all of us. The topic would soon rise to attention in the media and thereby mark my entry into the world of science communication. I was well prepared: Black holes at the LHC were the topic of my PhD thesis. A few months later, I got married. Later that same year, Lee Smolin's book "The Trouble With Physics" was published, coincidentally at almost the same time I moved to Canada and started my new position at Perimeter Institute. I had read an early version of the manuscript and published one of the first online reviews. Peter Woit's book "Not Even Wrong" appeared at almost the same time and kicked off what later became known as "The String Wars", though I've always found the rather militant term somewhat inappropriate. Time marched on and I kept writing, through my move to Sweden, my first pregnancy and the following miscarriage, the second pregnancy, the twin's birth, parental leave, my suffering through 5 years of a 3000 km commute while trying to raise two kids, and, in late 2015, my move back to Germany. Then, in 2018, the publication of my first book. The loyal readers of this blog will have noticed that in the past year I have shifted weight from Blogger to YouTube. The reason is that the way search engine algorithms and the blogosphere have evolved, it has become basically impossible to attract new audiences to a blog. Here on Blogger, I feel rather stuck on the topics I have originally written about, mostly quantum gravity and particle physics, while meanwhile my interests have drifted more towards astrophysics, quantum foundations, and the philosophy of physics. YouTube's algorithm is certainly not perfect, but it serves content to users that may be interested in the topic of a video, regardless of whether they've previously heard of me. I have to admit that personally I still prefer writing over videos. Not only because it's less time-consuming, but also because I don't particularly like either my voice or my face. But then, the average number of people who watch my videos has quickly surpassed the number of those who typically read my blog, so I guess I am doing okay. On this occasion I want to thank all of you for spending some time with me, for your feedback and comments and encouragement. I am especially grateful to those of you who have on occasion sent a donation my way. I am not entirely sure where this blog will be going in the future, but stay around and you will find out. I promise it won't be boring. Friday, February 28, 2020 Quantum Gravity in the Lab? The Hype Is On. Quanta Magazine has an article by Phillip Ball titled “Wormholes Reveal a Way to Manipulate Black Hole Information in the Lab”. It’s about using quantum simulations to study the behavior of black holes in Anti De-Sitter space, that is a space with a negative cosmological constant. A quantum simulation is a collection of particles with specifically designed interactions that can mimic the behavior of another system. To briefly remind you, we do not live in Anti De-Sitter space. 
For all we know, the cosmological constant in our universe is positive. And no, the two cases are not remotely similar. It's an interesting topic in principle, but unfortunately the article by Ball is full of statements that gloss over this not very subtle fact that we do not live in Anti De-Sitter space. We can read there for example: “In principle, researchers could construct systems entirely equivalent to wormhole-connected black holes by entangling quantum circuits in the right way and teleporting qubits between them.” The correct statement would be: “Researchers could construct systems whose governing equations are in certain limits equivalent to those governing black holes in a universe we do not inhabit.” Further, needless to say, a collection of ions in the laboratory is not “entirely equivalent” to a black hole. For starters, that is because the ions are made of other particles which are yet again made of other particles, none of which has any correspondence in the black hole analogy. Also, in case you've forgotten, we do not live in Anti De-Sitter space. Why do physicists even study black holes in Anti De-Sitter space? To make a long story short: Because they can. They can, both because they have an idea how the math works and because they can get paid for it. Now, there is nothing wrong with using methods obtained by the AdS/CFT correspondence to calculate the behavior of many particle systems. Indeed, I think that's a neat idea. However, it is patently false to raise the impression that this tells us anything about quantum gravity, where by “quantum gravity” I mean the theory that resolves the inconsistency between the Standard Model of particle physics and General Relativity in our universe. I.e., a theory that actually describes nature. We have no reason whatsoever to think that the AdS/CFT correspondence tells us something about quantum gravity in our universe. As I explained in this earlier post, it is highly implausible that the results from AdS carry over to flat space or to space with a positive cosmological constant because the limit is not continuous. You can of course simply take the limit ignoring its convergence properties, but then the theory you get has no obvious relation to General Relativity. Let us have a look at the paper behind the article. We can read there in the introduction: “In the quest to understand the quantum nature of spacetime and gravity, a key difficulty is the lack of contact with experiment. Since gravity is so weak, directly probing quantum gravity means going to experimentally infeasible energy scales.” This is wrong and it demonstrates that the authors are not familiar with the phenomenology of quantum gravity. Large deviations from the semi-classical limit can occur at small energy scales. The reason is, rather trivially, that large masses in quantum superpositions should have gravitational fields in quantum superpositions. No large energies necessary for that. If you could, for example, put a billiard ball into a superposition of location, you should be able to measure what happens to its gravitational field. This is unfeasible, but not because it involves high energies. It's infeasible because decoherence kicks in too quickly to measure anything. Here is the rest of the first paragraph of the paper.
I have in bold face added corrections that any reviewer should have insisted on: “However, a consequence of the holographic principle [3, 4] and its concrete realization in the AdS/CFT correspondence [5–7] (see also [8]) is that non-gravitational systems with sufficient entanglement may exhibit phenomena characteristic of quantum gravity **in a space with a negative cosmological constant**. This suggests that we may be able to use table-top physics experiments to **indirectly** probe quantum gravity **in universes that we do not inhabit**. Indeed, the technology for the control of complex quantum many-body systems is advancing rapidly, and we appear to be at the dawn of a new era in physics—the study of quantum gravity in the lab, **except that, by the methods described in this paper, we cannot actually test quantum gravity in our universe. For this, other experiments are needed, which we will however not even mention**. The purpose of this paper is to discuss one way in which quantum gravity can make contact with experiment, **if you, like us, insist on studying quantum gravity in fictional universes that for all we know do not exist**.” I pointed out that these black holes that string theorists deal with have nothing to do with real black holes in an article I wrote for Quanta Magazine last year. It was also the last article I wrote for them.

Thursday, February 20, 2020 The 10 Most Important Physics Effects Today I have a count-down of the 10 most important effects in physics that you should all know about. 10. The Doppler Effect The Doppler effect is the change in frequency of a wave when the source moves relative to the receiver. If the source is approaching, the wavelength appears shorter and the frequency higher. If the source is moving away, the wavelength appears longer and the frequency lower. The most common example of the Doppler effect is that of an approaching ambulance, where the pitch of the signal is higher when it moves towards you than when it moves away from you. But the Doppler effect does not only happen for sound waves; it also happens to light, which is why it's enormously important in astrophysics. For light, the frequency is the color, so the color of an approaching object is shifted to the blue and that of an object moving away from you is shifted to the red. Because of this, we can for example calculate our velocity relative to the cosmic microwave background. The Doppler effect is named after the Austrian physicist Christian Doppler and has nothing to do with the German word Doppelgänger. 9. The Butterfly Effect Even a tiny change, like the flap of a butterfly's wings, can make a big difference for the weather next Sunday. This is the butterfly effect as you have probably heard of it. But Edward Lorenz actually meant something much more radical when he spoke of the butterfly effect. He meant that for some non-linear systems you can only make predictions for a limited amount of time, even if you can measure the tiniest perturbations to arbitrary accuracy. I explained this in more detail in my earlier video. 8. The Meissner-Ochsenfeld Effect The Meissner-Ochsenfeld effect is the impossibility of making a magnetic field enter a superconductor. It was discovered by Walther Meissner and his postdoc Robert Ochsenfeld in 1933. Thanks to this effect, if you try to place a superconductor on a magnet, it will hover above the magnet because the magnetic field lines cannot enter the superconductor. I assure you that this has absolutely nothing to do with Yogic flying. 7.
The Aharonov-Bohm Effect Okay, I admit this is not a particularly well-known effect, but it should be. The Aharonov-Bohm effect says that the wave-function of a charged particle in an electromagnetic field obtains a phase shift from the potential of the background field. I know this sounds abstract, but the relevant point is that it's the potential that causes the phase, not the field. In electrodynamics, the potential itself is normally not observable. But this phase shift in the Aharonov-Bohm Effect can and has been observed in interference patterns. And this tells us that the potential is not merely a mathematical tool. Before the Aharonov-Bohm effect one could reasonably question the physical reality of the potential because it was not observable. 6. The Tennis Racket Effect If you throw any three-dimensional object with a spin, then the spin around the shortest and longest axes will be stable, but that around the intermediate third axis will not be. The typical example of such a spinning object is a tennis racket, hence the name. It's also known as the intermediate axis theorem or the Dzhanibekov effect. You see a beautiful illustration of the instability in this little clip from the International Space Station. 5. The Hall Effect If you bring a conducting plate into a magnetic field, then the magnetic field will affect the motion of the electrons in the plate. In particular, if the plate is orthogonal to the magnetic field lines, a voltage appears between opposing ends of the plate, and this voltage can be measured to determine the strength of the magnetic field. This effect is named after Edwin Hall. If the plate is very thin, the temperature very low, and the magnetic field very strong, you can also observe that the conductivity makes discrete jumps, which is known as the quantum Hall effect. 4. The Hawking Effect Stephen Hawking showed in the early 1970s that black holes emit thermal radiation with a temperature inversely proportional to the black hole's mass. This Hawking effect is a consequence of the relativity of the particle number. An observer falling into a black hole would not measure any particles and would think the black hole is surrounded by vacuum. But an observer far away from the black hole would think the horizon is surrounded by particles. This can happen because in general relativity, what we mean by a particle depends on the motion of the observer, just like the passage of time does. A closely related effect is the Unruh effect named after Bill Unruh, which says that an accelerated observer in flat space will measure a thermal distribution of particles with a temperature that depends on the acceleration. Again, that can happen because the accelerated observer's particles are not the same as the particles of an observer at rest. 3. The Photoelectric Effect When light falls on a plate of metal, it can kick out electrons from their orbits around atomic nuclei. This is called the “photoelectric effect”. The surprising thing about this is that the frequency of the light needs to be above a certain threshold. Just what the threshold is depends on the material, but if the frequency is below the threshold, it does not matter how intense the light is, it will not kick out electrons. The photoelectric effect was explained in 1905 by Albert Einstein who correctly concluded that it means the light must be made of quanta whose energy is proportional to the frequency of the light. 2.
The Casimir Effect Everybody knows that two metal plates will attract each other if one plate is positively charged and the other one negatively charged. But did you know the plates also attract each other if they are uncharged? Yes, they do! This is the Casimir effect, named after Hendrik Casimir. It is created by quantum fluctuations that create a pressure even in vacuum. This pressure is lower between the plates than outside of them, so that the two plates are pushed towards each other. However, the force from the Casimir effect is very weak and can be measured only at very short distances. 1. The Tunnel Effect Definitely my most favorite effect. Quantum effects allow a particle that is trapped in a potential to escape. This would not be possible without quantum effects because the particle just does not have enough energy to escape. However, in quantum mechanics the wave-function of the particle can leak out of the potential and this means that there is a small, but nonzero, probability that a quantum particle can do the seemingly impossible. Saturday, February 15, 2020 The Reproducibility Crisis: An Interview with Prof. Dorothy Bishop Monday, February 10, 2020 Guest Post: “Undecidability, Uncomputability and the Unity of Physics. Part 2.” by Tim Palmer [This is the second part of Tim’s guest contribution. The first part is here.] In this second part of my guest post, I want to discuss how the concepts of undecidability and uncomputability can lead to a novel interpretation of Bell’s famous theorem. This theorem states that under seemingly reasonable conditions, a deterministic theory of quantum physics – something Einstein believed in passionately – must satisfy a certain inequality which experiment shows is violated. These reasonable conditions, broadly speaking, describe the concepts of causality and freedom to choose experimental parameters. The issue I want to discuss is whether the way these conditions are formulated mathematically in Bell’s Theorem actually captures the physics that supposedly underpins them. The discussion here and in the previous post summarises the essay I recently submitted to the FQXi essay competition on undecidability and uncomputability. For many, the notion that we have some freedom in our actions and decisions seems irrefutable. But how would we explain this to an alien, or indeed a computer, for whom free will is a meaningless concept? Perhaps we might say that we are free because we could have done otherwise. This invokes the notion of a counterfactual world: even though we in fact did X, we could have done Y. Counterfactuals also play an important role in describing the notion of causality. Imagine throwing a stone at a glass window. Was the smashed glass caused by my throwing the stone? Yes, I might say, because if I hadn’t thrown the stone, the window wouldn’t have broken. However, there is an alternative way to describe these notions of free will and causality without invoking counterfactual worlds. I can just as well say that free will denotes an absence of constraints that would otherwise prevent me from doing what I want to do. Or I can use Newton’s laws of motion to determine that a stone with a certain mass, projected at a certain velocity, will hit the window with a momentum guaranteed to shatter the glass. These latter descriptions make no reference to counterfactuals at all; instead the descriptions are based on processes occurring in space-time (e.g. associated with the neurons of my brain or projectiles in physical space). 
What has all this got to do with Bell’s Theorem? I mentioned above the need for a given theory to satisfy “certain conditions” in order for it to be constrained by Bell’s inequality (and hence be inconsistent with experiment). One of these conditions, the one linked to free will, is called Statistical Independence. Theories which violate this condition are called Superdeterministic. Superdeterministic theories are typically excoriated by quantum foundations experts, not least because the Statistical Independence condition appears to underpin scientific methodology in general. For example, consider a source of particles emitting 1000 spin-1/2 particles. Suppose you measure the spin of 500 of them along one direction and 500 of them along a different direction. Statistical Independence guarantees that the measurement statistics (e.g. the frequency of spin-up measurements) will not depend on the particular way in which the experimenter chooses to partition the full ensemble of 1000 particles into the two sub-ensembles of 500 particles. If you violate Statistical Independence, the experts say, you are effectively invoking some conspiratorial prescient daemon who could, unknown to the experimenter, preselect particles for the particular measurements the experimenter choses to make - or even worse perhaps, could subvert the mind of the experimenter when deciding which type of measurement to perform on a given particle. Effectively, violating Statistical Independence turns experimenters into mindless zombies! No wonder experimentalists hate Superdeterministic theories of quantum physics!! However, the experts miss a subtle but crucial point here: whilst imposing Statistical Independence guarantees that real-world sub-ensembles are statistically equivalent, violating Statistical Independence does not guarantee that real-world sub-ensembles are not statistically equivalent. In particular it is possible to violate Statistical Independence in such a way that it is only sub-ensembles of particles subject to certain counterfactual measurements that may be statistically inequivalent to the corresponding sub-ensembles with real-world measurements. In the example above, a sub-ensemble of particles subject to a counterfactual measurement would be associated with the first sub-ensemble of 500 particles subject to the measurement direction applied to the second sub-ensemble of 500 particles. It is possible to violate Statistical Independence when comparing this counterfactual sub-ensemble with the real-world equivalent, without violating the statistical equivalence of the two corresponding sub-ensembles measured along their real-world directions. However, for this idea to make any theoretical sense at all, there has to be some mathematical basis for asserting that sub-ensembles with real-world measurements can be different to sub-ensembles with counterfactual-world measurements. This is where uncomputable fractal attractors play a key role. It is worth keeping an example of a fractal attractor in mind here. The Lorenz fractal attractor, discussed in my first post, is a geometric representation in state space of fluid motion in Newtonian space-time. The Lorenz attractor. [Image Credits: Markus Fritzsch.] 
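For readers who want a feel for this object, here is a minimal sketch that integrates the Lorenz equations and shows the sensitivity to initial conditions referred to above. The parameter values are the standard Lorenz '63 choices and are an assumption made for illustration; this is not Palmer's actual model.

```python
# Minimal sketch: integrate the Lorenz '63 equations for two nearby initial
# conditions. Both trajectories settle onto the butterfly-shaped attractor,
# but their separation grows rapidly (sensitive dependence on initial data).
# Assumption: standard parameters sigma=10, rho=28, beta=8/3.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_span = (0.0, 40.0)
t_eval = np.linspace(*t_span, 8000)

# Two initial conditions differing by one part in a million in x.
sol_a = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9)
sol_b = solve_ivp(lorenz, t_span, [1.000001, 1.0, 1.0], t_eval=t_eval, rtol=1e-9)

# Separation between the trajectories: grows roughly exponentially at first,
# then saturates at the size of the attractor.
distance = np.linalg.norm(sol_a.y - sol_b.y, axis=0)
print("separation at t = 1 :", distance[t_eval.searchsorted(1.0)])
print("separation at t = 20:", distance[t_eval.searchsorted(20.0)])
```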
As I explained in my first post, the attractor is uncomputable in the sense that there is no algorithm which can decide whether a given point in state space lies on the attractor (in exactly the same sense that, as Turing discovered, there is no algorithm for deciding whether a given computer program will halt for given input data). However, as I lay out in my essay, the differential equations for the fluid motion in space-time associated with the Lorenz attractor are themselves solvable by algorithm to arbitrary accuracy and hence are computable. This dichotomy (between state space and space-time) is extremely important to bear in mind below. With this in mind, suppose the universe itself evolves on some uncomputable fractal subset of state space, such that the corresponding evolution equations for physics in space-time are computable. In such a model, Statistical Independence will be violated for sub-ensembles if the corresponding counterfactual measurements take states of the universe off the fractal subset (since such counterfactual states have probability of occurrence equal to zero by definition). In the model I have developed this always occurs when considering counterfactual measurements such as those in Bell’s Theorem. (This is a nontrivial result and is the consequence of number-theoretic properties of trigonometric functions.) Importantly, in this theory, Statistical Independence is never violated when comparing two sub-ensembles subject to real-world measurements such as occurs in analysing Bell’s Theorem. This is all a bit mind numbing, I do admit. However, the bottom line is that I believe that the mathematical definitions of free choice and causality used to understand quantum entanglement are much too general – in particular they admit counterfactual worlds as physical in a completely unconstrained way. I have proposed alternative definitions of free choice and causality which strongly constrain counterfactual states (essentially they must lie on the fractal subset in state space), whilst leaving untouched descriptions of free choice and causality based only on space-time processes. (For the experts, in the classical limit of this theory, Statistical Independence is not violated for any counterfactual states.) With these alternative definitions, it is possible to violate Bell’s inequality in a deterministic theory which respects free choice and local causality, in exactly the way it is violated in quantum mechanics. Einstein may have been right after all! If we can explain entanglement deterministically and causally, then synthesising quantum and gravitational physics may have become easier. Indeed, it is through such synthesis that experimental tests of my model may eventually come. In conclusion, I believe that the uncomputable fractal attractors of chaotic systems may provide a key geometric ingredient needed to unify our theories of physics. My thanks to Sabine for allowing me the space on her blog to express these points of view. Saturday, February 08, 2020 Philosophers should talk more about climate change. Yes, philosophers. I never cease to be shocked – shocked! – how many scientists don’t know how science works and, worse, don’t seem to care about it. Most of those I have to deal with still think Popper was right when he claimed falsifiability is both necessary and sufficient to make a theory scientific, even though this position has logical consequences they’d strongly object to. 
Trouble is, if falsifiability were all it took, then arbitrary statements about the future would be scientific. I should, for example, be able to publish a paper predicting that tomorrow the sky will be pink and next Wednesday my cat will speak French. That's totally falsifiable, yet I hope we all agree that if we'd let such nonsense pass as scientific, science would be entirely useless. I don't even have a cat. As the contemporary philosopher Larry Laudan politely put it, Popper's idea of telling science from non-science by falsifiability “has the untoward consequence of countenancing as `scientific' every crank claim which makes ascertainably false assertions.” Which is why the world's cranks love Popper. But you are not a crank, oh no, not you. And so you surely know that almost all of today's philosophers of science agree that falsification is not a sufficient criterion of demarcation (though they disagree on whether it is necessary). Luckily, you don't need to know anything about these philosophers to understand today's post because I will not attempt to solve the demarcation problem (which, for the record, I don't think is a philosophical question). I merely want to clarify just when it is scientifically justified to amend a theory whose predictions ran into tension with new data. And the only thing you need to know to understand this is that science cannot work without Occam's razor. Occam's razor tells you that of two theories that describe nature equally well, you should take the simpler one. Roughly speaking it means you must discard superfluous assumptions. Occam's razor is important because without it we would be allowed to add all kinds of unnecessary clutter to a theory just because we like it. We would be permitted, for example, to add the assumption “all particles were made by god” to the standard model of particle physics. You see right away how this isn't going well for science. Now, the phrases “describe nature equally well” and “take the simpler one” are somewhat vague. To make this prescription operationally useful you'd have to quantify just what they mean by suitable statistical measures. We can then quibble about just which statistical measure is the best, but that's somewhat beside the point here, so let me instead come back to the relevance of Occam's razor. We just saw that it's unscientific to make assumptions which are unnecessary to explain observation and don't make a theory any simpler. But physicists get this wrong all the time and some have made a business out of getting it wrong. They invent particles which make theories more complicated and are of no help to explain existing data. They claim this is science because these theories are falsifiable. But the new particles were unnecessary in the first place, so their ideas are dead on arrival, killed by Occam's razor. If you still have trouble seeing why adding unnecessary details to established theories is unsound scientific methodology, imagine that scientists of other disciplines proceeded the way that particle physicists do. We'd have biologists writing papers about flying pigs and then holding conferences debating how flying pigs poop because, who knows, we might discover flying pigs tomorrow. Sounds ridiculous? Well, it is ridiculous. But that's the same “scientific methodology” which has become common in the foundations of physics. The only difference between elaborating on flying pigs and supersymmetric particles is the amount of mathematics.
And math certainly comes in handy for particle physicists because it prevents mere mortals from understanding just what the physicists are up to. But I am not telling you this to bitch about supersymmetry; that would be beating a dead horse. I am telling you this because I have recently had to deal with a lot of climate change deniers (thanks so much, Tim). And many of these deniers, believe that or not, think I must be a denier too because, drums please, I am an outspoken critic of inventing superfluous particles. Huh, you say. I hear you. It took me a while to figure out what’s with these people, but I believe I now understand where they’re coming from. You have probably heard the common deniers’ complaint that climate scientists adapt models when new data comes in. That is supposedly unscientific because, here it comes, it’s exactly the same thing that all these physicists do each time their hypothetical particles are not observed! They just fiddle with the parameters of the theory to evade experimental constraints and to keep their pet theories alive. But Popper already said you shouldn’t do that. Then someone yells “Epicycles!” And so, the deniers conclude, climate scientists are as wrong as particle physicists and clearly one shouldn’t listen to either. But the deniers’ argument merely demonstrates they know even less about scientific methodology than particle physicists. Revising a hypothesis when new data comes in is perfectly fine. In fact, it is what you expect good scientists to do. The more and the better data you have, the higher the demands on your theory. Sometimes this means you actually need a new theory. Sometimes you have to adjust one or the other parameter. Sometimes you find an actual mistake and have to correct it. But more often than not it just means you neglected something that better measurements are sensitive to and you must add details to your theory. And this is perfectly fine as long as adding details results in a model that explains the data better than before, and does so not just because you now have more parameters. Again, there are statistical measures to quantify in which cases adding parameters actually makes a better fit to data. Indeed, adding epicycles to make the geocentric model of the solar system fit with observations was entirely proper scientific methodology. It was correcting a hypothesis that ran into conflict with increasingly better observations. Astronomers of the time could have proceeded this way until they’d have noticed there is a simpler way to calculate the same curves, which is by using elliptic motions around the sun rather than cycles around cycles around the Earth. Of course this is not what historically happened, but epicycles in and by themselves are not unscientific, they’re merely parametrically clumsy. What scientists should not do, however, is to adjust details of a theory that were unnecessary in the first place. Kepler for example also thought that the planets play melodies on their orbits around the sun, an idea that was rightfully abandoned because it explains nothing. To name another example, adding dark matter and dark energy to the cosmological standard model in order to explain observations is sound scientific practice. These are both simple explanations that vastly improve the fit of the theory to observation. 
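As an aside on the "statistical measures" mentioned above, here is a minimal sketch of one such criterion, the Akaike Information Criterion, which penalizes extra parameters so that a more complicated model only wins if it improves the fit enough. The toy data and the two candidate models are assumptions made purely for illustration.

```python
# Minimal sketch: compare a 1-parameter-ish and a 5-parameter polynomial fit
# to data generated from a straight line with Gaussian noise, using AIC.
# AIC for Gaussian errors (up to an additive constant): n*log(RSS/n) + 2k.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)  # the "truth" is a line

def aic(y, y_fit, k):
    n = y.size
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + 2 * k

for degree in (1, 5):
    coeffs = np.polyfit(x, y, degree)
    y_fit = np.polyval(coeffs, x)
    print(f"degree {degree}: AIC = {aic(y, y_fit, degree + 1):.1f}")

# Typically the degree-5 fit has a slightly smaller residual but a larger
# AIC: its extra parameters are not earning their keep.
```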
What is not sound scientific methodology is then making these theories more complicated than needs to be, eg by replacing dark energy with complicated scalar fields even though there is no observation that calls for it, or by inventing details about particles that make up dark matter even though these details are irrelevant to fit existing data. But let me come back to the climate change deniers. You may call me naïve, and I’ll take that, but I believe most of these people are genuinely confused about how science works. It’s of little use to throw evidence at people who don’t understand how scientists make evidence-based predictions. When it comes to climate change, therefore, I think we would all benefit if philosophers of science were given more airtime. Thursday, February 06, 2020 Ivory Tower [I've been singing again] I caught a cold and didn't come around to record a new physics video this week. Instead I finished a song that I wrote some weeks ago. Enjoy! Monday, February 03, 2020 Guest Post: “Undecidability, Uncomputability and the Unity of Physics. Part 1.” by Tim Palmer [Tim Palmer is a Royal Society Research Professor in Climate Physics at the University of Oxford, UK. He is only half as crazy as it seems.] [Screenshot from Tim’s public lecture at Perimeter Institute] Our three great theories of 20th Century physics – general relativity theory, quantum theory and chaos theory – seem incompatible with each other. The difficulty combining general relativity and quantum theory to a common theory of “quantum gravity” is legendary; some of our greatest minds have despaired – and still despair – over it. Superficially, the links between quantum theory and chaos appear to be a little stronger, since both are characterised by unpredictability (in measurement and prediction outcomes respectively). However, the Schrödinger equation is linear and the dynamical equations of chaos are nonlinear. Moreover, in the common interpretation of Bell’s inequality, a chaotic model of quantum physics, since it is deterministic, would be incompatible with Einstein’s notion of relativistic causality. Finally, although the dynamics of general relativity and chaos theory are both nonlinear and deterministic, it is difficult to even make sense of chaos in the space-time of general relativity. This is because the usual definition of chaos is based on the notion that nearby initial states can diverge exponentially in time. However, speaking of an exponential divergence in time depends on a choice of time-coordinate. If we logarithmically rescale the time coordinate, the defining feature of chaos disappears. Trouble is, in general relativity, the underlying physics must not depend on the space-time coordinates. So, do we simply have to accept that, “What God hath put asunder, let no man join together”? I don’t think so. A few weeks ago, the Foundational Questions Institute put out a call for essays on the topic of “Undecidability, Uncomputability and Unpredictability”. I have submitted an essay in which I argue that undecidability and uncomputability may provide a new framework for unifying these theories of 20th Century physics. I want to summarize my argument in this and a follow-on guest post. To start, I need to say what undecidability and uncomputability are in the first place. The concepts go back to the work of Alan Turing who in 1936 showed that no algorithm exists that will take as input a computer program (and its input data), and output 0 if the program halts and 1 if the program does not halt. 
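The flavour of Turing's argument can be written down in a few lines. The sketch below is schematic Python; the function halts is hypothetical and, by the argument itself, cannot actually be implemented.

```python
# Minimal sketch of Turing's diagonal argument.
# Assumption: `halts(program, data)` is a hypothetical decider that returns
# True if program(data) halts and False otherwise. No such total function
# can exist; the point of the sketch is to show why.

def halts(program, data):
    raise NotImplementedError("assumed to exist for the sake of argument")

def troublemaker(program):
    # Ask the supposed decider about a program run on its own source,
    # then do the opposite of what it predicts.
    if halts(program, program):
        while True:      # predicted to halt -> loop forever
            pass
    else:
        return           # predicted to loop -> halt immediately

# Feeding `troublemaker` to itself yields a contradiction either way:
# if halts(troublemaker, troublemaker) is True, troublemaker loops forever;
# if it is False, troublemaker halts. Hence no total `halts` can exist.
```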
This “Halting Problem” is therefore undecidable by algorithm. So, a key way to know whether a problem is algorithmically undecidable – or equivalently uncomputable – is to see if the problem is equivalent to the Halting Problem. Let’s return to thinking about chaotic systems. As mentioned, these are deterministic systems whose evolution is effectively unpredictable (because the evolution is sensitive to the starting conditions). However, what is relevant here is not so much this property of unpredictability, but the fact that no matter what initial condition you start from, there is a class of chaotic system where eventually (technically after an infinite time) the state evolves on a fractal subset of state space, sometimes known as a fractal attractor. One defining characteristic of a fractal is that its dimension is not a simple integer (like that of a one-dimensional line or the two-dimensional surface of a sphere). Now, the key result I need is a theorem that there is no algorithm that will take as input some point x in state space, and halt if that point belongs to a set with fractional dimension. This implies that the fractal attractor A of a chaotic system is uncomputable and the proposition “x belongs to A” is algorithmically undecidable. How does this help unify physics? Firstly defining chaos in terms of the geometry of its fractal attractor (e.g. through the fractional dimension of the attractor) is a coordinate independent and hence more relativistic way to characterise chaos, than defining it in terms of exponential divergence of nearby trajectories. Hence the uncomputable fractal attractor provides a way to unify general relativity and chaos theory. That was easy! The rest is not so easy which is why I need two guest posts and not one! When it comes to combining chaos theory with quantum mechanics, the first step is to realize that the linearity of the Schrödinger equation is not at all incompatible with the nonlinearity of chaos. To understand this, consider an ensemble of integrations of a particular chaotic model based on the Lorenz equations – see Fig 1. These Lorenz equations describe fluid dynamical motion, but the details need not concern us here. The fractal Lorenz attractor is shown in the background in Fig 1. These ensembles can be thought of as describing the evolution of probability – something of practical value when we don’t know the initial conditions precisely (as is the case in weather forecasting). Fig 1: Evolution of a contour of probability, based on ensembles of integrations of the Lorenz equations, is shown evolving in state space for different initial conditions, with the Lorenz attractor as background.  In the first panel in Fig 1, small uncertainties do not grow much and we can therefore be confident in the predicted evolution. In the third panel, small uncertainties grow explosively, meaning we can have little confidence in any specific prediction. The second panel is somewhere in between. Now it turns out that the equation which describes the evolution of probability in such chaotic systems, known as the Liouville equation, is itself a linear equation. The linearity of the Liouville equation ensures that probabilities are conserved in time. 
Hence, for example, if there is an 80% chance that the actual state of the fluid (as described by the Lorenz equation state) lies within a certain contour of probability at initial time, then there is an 80% chance that the actual state of the fluid lies within the evolved contour of probability at the forecast time. The remarkable thing is that the Liouville equation is formally very similar to the so-called von-Neumann form of the Schrödinger equation – too much, in my view, for this to be a coincidence. So, just as the linearity of the Liouville equation says nothing about the nonlinearity of the underlying deterministic dynamics which generate such probability, so too the linearity of the Schrödinger equation need say nothing about the nonlinearity of some underlying dynamics which generates quantum probabilities. However, as I wrote above, in order to satisfy Bell’s theorem, it would appear that, being deterministic, a chaotic model will have to violate relativistic causality, seemingly thwarting the aim of trying to unify our theories of physics. At least, that’s the usual conclusion. However, the undecidable uncomputable properties of fractal attractors provide a novel route to allow us to reassess this conclusion. I will explain how this works in the second part of this post. Sunday, February 02, 2020 Does nature have a minimal length? Molecules are made of atoms. Atomic nuclei are made of neutrons and protons. And the neutrons and protons are made of quarks and gluons. Many physicists think that this is not the end of the story, but that quarks and gluons are made of even smaller things, for example the tiny vibrating strings that string theory is all about. But then what? Are strings made of smaller things again? Or is there a smallest scale beyond which nature just does not have any further structure? Does nature have a minimal length? This is what we will talk about today. When physicists talk about a minimal length, they usually mean the Planck length, which is about 10-35 meters. The Planck length is named after Max Planck, who introduced it in 1899. 10-35 meters sounds tiny and indeed it is damned tiny. To give you an idea, think of the tunnel of the Large Hadron Collider. It’s a ring with a diameter of about 10 kilometers. The Planck length compares to the diameter of a proton as the radius of a proton to the diameter of the Large Hadron Collider. Currently, the smallest structures that we can study are about ten to the minus nineteen meters. That’s what we can do with the energies produced at the Large Hadron Collider and that is still sixteen orders of magnitude larger than the Planck length. What’s so special about the Planck length? The Planck length seems to be setting a limit to how small a structure can be so that we can still measure it. That’s because to measure small structures, we need to compress more energy into small volumes of space. That’s basically what we do with particle accelerators. Higher energy allows us to find out what happens on shorter distances. But if you stuff too much energy into a small volume, you will make a black hole. More concretely, if you have an energy E, that will in the best case allow you to resolve a distance of about ℏc/E. I will call that distance Δx. Here, c is the speed of light and ℏ is a constant of nature, called Planck’s constant. Yes, that’s the same Planck! This relation comes from the uncertainty principle of quantum mechanics. So, higher energies let you resolve smaller structures. 
Now you can ask, if I turn up the energy and the size I can resolve gets smaller, when do I get a black hole? Well, that happens if the Schwarzschild radius associated with the energy is similar to the distance you are trying to measure. That's not difficult to calculate. So let's do it. The Schwarzschild radius is approximately M times G/c² where G is Newton's constant and M is the mass. We are asking, when is that radius similar to the distance Δx. As you almost certainly know, the mass associated with the energy is E = Mc². And, as we previously saw, that energy is just ℏc/Δx. You can then solve this equation for Δx. And this is what we call the Planck length. It is associated with an energy called the Planck energy. If you go to higher energies than that, you will just make larger black holes. So the Planck length is the shortest distance you can measure. Now, this is a neat estimate and it's not entirely wrong, but it's not a rigorous derivation. If you start thinking about it, it's a little handwavy, so let me assure you there are much more rigorous ways to do this calculation, and the conclusion remains basically the same. If you combine quantum mechanics with gravity, then the Planck length seems to set a limit to the resolution of structures. That's why physicists think nature may have a fundamentally minimal length. Max Planck by the way did not come up with the Planck length because he thought it was a minimal length. He came up with that simply because it's the only unit of dimension length you can create from the fundamental constants, c, the speed of light, G, Newton's constant, and ℏ. He thought that was interesting because, as he wrote in his 1899 paper, these would be natural units that even aliens would use. The idea that the Planck length is a minimal length only came up after the development of general relativity when physicists started thinking about how to quantize gravity. Today, this idea is supported by attempts to develop a theory of quantum gravity, which I told you about in an earlier video. In string theory, for example, if you squeeze too much energy into a string it will start spreading out. In Loop Quantum Gravity, the loops themselves have a finite size, given by the Planck length. In Asymptotically Safe Gravity, the gravitational force becomes weaker at high energies, so beyond a certain point you can no longer improve your resolution. When I speak about a minimal length, a lot of people seem to have a particular image in mind, which is that the minimal length works like a kind of discretization, a pixelation of a photo or something like that. But that is most definitely the wrong image. The minimal length that we are talking about here is more like an unavoidable blur on an image, some kind of fundamental fuzziness that nature has. It may, but does not necessarily, come with a discretization. What does this all mean? Well, it means that we might be close to finding a final theory, one that describes nature at its most fundamental level and there is nothing more beyond that. That is possible. But remember that the arguments for the existence of a minimal length rest on extrapolating 16 orders of magnitude below the distances we have tested so far. That's a lot. That extrapolation might just be wrong. Even though we do not currently have any reason to think that there should be something new on distances even shorter than the Planck length, that situation might change in the future.
Still, I find it intriguing that for all we currently know, it is not necessary to think about distances shorter than the Planck length.
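For reference, the back-of-the-envelope estimate described above can be summarized in one line; this is the heuristic argument only, not a rigorous derivation:

$$
\Delta x \sim \frac{\hbar c}{E},
\qquad
R_s \sim \frac{G M}{c^2} = \frac{G E}{c^4}
\quad\Rightarrow\quad
\Delta x \sim R_s
\ \text{at}\
E_{\mathrm{Pl}} = \sqrt{\frac{\hbar c^5}{G}},
\qquad
\ell_{\mathrm{Pl}} = \frac{\hbar c}{E_{\mathrm{Pl}}} = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6\times 10^{-35}\ \mathrm{m}.
$$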
BSPS-Forum Event: Parallel Universes
Fay Dowker, Eleanor Knox and Simon Saunders

BSPS Video Lecture: Heather Dyke (LSE), “Experience of Passage in a Static World”
Abstract: The view that experience seems to tell us directly that time flows has long been accepted by both A-theorists and B-theorists in the philosophy of time. A-theorists take it as a powerful endorsement of their position, sometimes using it explicitly in an argument for their view, and other times more implicitly, as a kind of non-negotiable, experiential given. B-theorists have tended to accept that we have this experience, and have sought alternative explanations for it, consistent with the B-theory. The so-called argument from temporal experience has received a lot of attention in recent years, and B-theory responses to it have begun to coalesce around two distinct positions. Illusionists adopt the traditional B-theoretic response of accepting that we do seem to experience temporal passage, and offering a B-theoretic explanation of it, thereby arguing that the experience is illusory. Veridicalists, on the face of it, take a more radical stance. They deny that we seem to experience temporal passage at all. I argue that there is something right in each of these responses, and by combining these features, we may be able to forge a third alternative, namely, that our temporal phenomenology is exactly what we should expect if the B-theory is true, and given our physical and psychological makeup. I discuss some results from psychology and cognitive science to support my view. In particular, I develop an explanation for the phenomenon that we sometimes seem to experience time speeding up or slowing down, a feature of temporal experience which has been largely neglected in the philosophical literature.

BSPS Video Lecture: Karim Thébault (Bristol), “Cosmic Singularity Resolution via Quantum Evolution”
Abstract: Classical models of the universe generically feature a big bang singularity. That is, when we consider progressively earlier and earlier times, physical quantities stop behaving in a reasonable way. A particular problem is that physical quantities related to the curvature of spacetime become divergent. A long standing hope is that a theory of quantum gravity would “resolve” the big bang singularity by providing quantum models of the early universe in which all physical quantities are always finite. Unfortunately, not only does the conventional Wheeler-DeWitt approach to quantum gravity fail to resolve the big bang singularity in this sense (without the addition of loop variables or exotic matter), but it also renders the universe fundamentally timeless. We offer a new proposal for singularity resolution in quantum cosmology based upon quantum evolution. In particular, we advocate a new approach to quantum cosmology based upon a Schrödinger equation for the universe. For simple models with a massless scalar field and positive cosmological constant we show that: i) well-behaved quantum observables can be constructed; ii) generic solutions to the universal Schrödinger equation are singularity-free; and iii) specific solutions display novel phenomenology including a cosmic bounce.
So you think YOU'RE confused about quantum mechanics?
Quantum physicists appear to be as confused about quantum mechanics as the average man in the street (Image: Shutterstock)
An invitation-only conference held back in 2011 on the topic "Quantum Physics and the Nature of Reality" (QPNR) saw top physicists, mathematicians, and philosophers of science specializing in the meaning and interpretation of quantum mechanics wrangling over an array of fundamental issues. An interesting aspect of the gathering was that when informally polled on the main issues and open problems in the foundations of quantum mechanics, the results showed that the scientific community still has no clear consensus concerning the basic nature of quantum physics. Quantum mechanics (QM), together with its extensions into quantum electrodynamics and quantum field theory, is our most successful scientific theory, with many results agreeing to better than a part in a billion with experiment. However, at its roots QM is ghost-like – when you try to pin down just what it means, it tends to slip between the fingers. It is full of apparent paradoxes, incompatible dualities, and "spooky actions." Simply put, although QM works amazingly well, why and how it works remains elusive. While it's unlikely that many physicists lose much sleep over the meaning of quantum mechanics, the advent of quantum information physics (quantum cryptography, quantum computing, etc.) has directly confronted them with many fundamental questions about QM. Quantum mechanics works regardless of interpretation, but our intuition seems to be very weak when applied to situations that bring out the stranger aspects of QM. As a result, the amount of effort applied to clarifying the foundations of QM has increased considerably over the past three decades. What, then, does the QPNR poll tell us about the state of our knowledge of quantum mechanics? While it is impractical to poke into every nook and cranny of the poll, the answers to a few of the questions merit our attention. (Note that people were allowed to vote for more than one answer, so the percentages in the source sometimes do not add up to 100 percent. I have taken the liberty of normalizing the results so they do equal 100 percent, and in some cases I have simplified the issues to more clearly state the options.)
Introduction to QM
We'll start with a QPNR poll question about the quantum measurement problem, as this will provide the opportunity to introduce some of the main concepts in QM. In QM, the wavefunction of an object describes all measurable properties of that object. It is a complete description of what is called the quantum state of that object. The wavefunction is governed by the Schrödinger equation, which tells the wavefunction how to change in response to external conditions. The mathematical details are not important right now, save for one – the Schrödinger equation is a linear equation. If you add together several different solutions to a linear equation, that sum is also a solution. This is called the principle of superposition, and is not a physical result, but rather a property of the basic mathematical structure of QM.
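As a concrete illustration of what linearity means here, the following minimal numerical check evolves two states and their sum with the same propagator; the 2x2 Hamiltonian is an arbitrary toy choice, not anything specific to the article.

```python
# Minimal numerical check of the superposition principle for the
# time-dependent Schrodinger equation i d(psi)/dt = H psi (with hbar = 1).
# Assumption: a toy 2x2 Hermitian Hamiltonian chosen arbitrarily.
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.3], [0.3, -0.5]])   # toy Hermitian Hamiltonian
U = expm(-1j * H * 0.7)                   # exact propagator for t = 0.7

psi1 = np.array([1.0, 0.0], dtype=complex)
psi2 = np.array([0.0, 1.0], dtype=complex)

lhs = U @ (psi1 + psi2)          # evolve the sum of the two states...
rhs = U @ psi1 + U @ psi2        # ...or sum the two evolved states
print(np.allclose(lhs, rhs))     # True: a sum of solutions is a solution
```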
The implication is that there exists a class of wavefunctions, called quantum superpositions, which simultaneously describe multiple quantum states of an object. Let's put an object into a superposition, measure it, and see what results are found according to standard quantum mechanics. Begin with a red QM ball and a blue QM ball that are otherwise identical. Set them each rotating with two quanta (one quantum is considered half a unit) of angular momentum (which we will call spin) so that the red ball has its spin up, while the blue ball has its spin down. The quantum state of the two balls before they interact is red-up + blue-down. If you measure the spin of the two balls, you will find the red ball always has a spin of +1, and the blue ball always has a spin of -1, making the total spin of the pair equal to zero. This is important because the total spin of a system is constant in QM. Now knock the balls together. If their surfaces have some property analogous to friction, the two balls can pass spin from one to the other. The most likely results are no change (red-up + blue-down, which we'll call [1 -1]); spin exchange (red-down + blue-up, or [-1 1]); and spin cancellation (red-0 + blue-0, or [0 0]). As any of the three can happen, before either of the balls is measured they are in a state of entangled superposition. Their quantum state after colliding and before measuring is [1 -1] + [-1 1] + [0 0]. [For the quantum skeptics: If we measure the spin of the red and blue balls along different directions, Bell's theorem tells us that the correlations between the measurement results will be stronger than is possible for classical or predetermined systems. This theoretical result is also what is observed experimentally, providing experimental evidence that the spins of the balls following the collision have no definite value until they are measured.] After the collision, measure the spin of the red ball. If you measure a spin of 1, the quantum state of the two balls after the measurement is the [1 -1] state – the other two superposed states have vanished, as they are not consistent with the measurement. Similarly, if the measurement is -1 or zero, after those measurements the quantum state of the two balls is [-1 1] and [0 0], respectively. Any states inconsistent with the measurement result disappear, even though those states existed in the original superposition.
The quantum measurement problem
So what happens if we decide to really believe quantum mechanics? Quantum mechanics is supposed to describe all measurable phenomena, after all. The instrument that measures spin is a rather complex quantum system, and the person operating it is a more complex quantum system. If I can get three different results out of a spin measurement, why don't I go into a superposition of having measured each of the three possible results? As far as we know, no human has ever noticed being in a superposed state – even though we don't really know what that would feel like. The result of a measurement such as that described above is, in our experience, a single definite number. To make QM treat observers as our experience suggests, standard QM assumes that measuring devices and observers are classical in their behavior. No superpositions of classical measuring devices and observers can exist, so measurements give a single unambiguous answer, just as we expect. This was originally thought to be a reasonable assumption, but has caused many arguments and sleepless nights among quantum physicists.
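Before turning to why that assumption is problematic, here is the explicit bookkeeping for the colliding-balls example above, written out in the standard formalism. The equal amplitudes for the three outcomes are an assumption made purely for illustration.

```python
# Standard-QM bookkeeping for the entangled state [1 -1] + [-1 1] + [0 0].
# Assumption: equal amplitudes for the three terms (the actual amplitudes
# would depend on the details of the collision).
import numpy as np

# Label the three joint outcomes by the red ball's spin: +1, -1, 0.
labels = ["[1 -1]", "[-1 1]", "[0 0]"]
amplitudes = np.array([1.0, 1.0, 1.0], dtype=complex)
amplitudes /= np.linalg.norm(amplitudes)          # normalize the state

probabilities = np.abs(amplitudes) ** 2           # Born rule
for label, p in zip(labels, probabilities):
    print(f"P(outcome {label}) = {p:.3f}")

# "Measurement" of red spin = +1: keep only the consistent term and
# renormalize; the other two branches of the superposition are gone.
post = np.zeros_like(amplitudes)
post[0] = amplitudes[0]
post /= np.linalg.norm(post)
print("state after measuring red spin = +1:", post)   # only [1 -1] survives
```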
The problem is that there is every reason to believe that measuring devices and observers are not truly classical in behavior. Rather, their QM wavefunction combined with the Schrödinger equation provides a complete description of the possible behaviors of the object. The nonclassical behavior of large measuring devices has been proven within standard QM by the insolubility theorem. If the structure of QM does hold for all systems, then at the end of a measurement process the observer, the measuring apparatus, and the object being measured exist in a quantum superposition of all states consistent with the wavefunction of the object being measured. Given this, the quantum measurement problem can be summarized thusly: Why do measurements taken by complex, large-scale quantum devices (including ourselves) appear to have a single, definite result? If some aspect of QM interactions does cause the measurement process to narrow to a specific result, what is it? Does it exist within properties of quantum systems having many degrees of freedom, or does QM need to be extended?
• The original notions of collapsing wavefunctions and classical observers were an attempt to answer this question, but the insolubility theorem shows this is inadequate for the purpose.
• Some have proposed that the Schrödinger equation should be altered to include some nonlinear terms that will produce pure states under measurement. These attempts have their own problems, primarily because standard quantum mechanics works so well – it is difficult to change its fundamental equation without spoiling the good parts.
• In Everett-type many-worlds theories, carrying out a measurement with multiple results causes the formation of a set of alternate universes – one for each possible result. This avoids the measurement problem – the observer splits with the measuring device, and so doesn't notice the multiplicity. But you have to be able to believe that bouncing a photon off an atom creates new universes...
• Decoherence, which results from the interaction of a quantum system with its surroundings, can render the superposed states of the wavefunction incapable of interfering with each other, at which point their probabilities become independent. Some believe this takes the place of wavefunction collapse, but others believe it has no bearing at all on the measurement problem, as all that is accomplished is to make a superposition with the entangled environment.
So what did the QPNR poll say about the quantum measurement problem?
• Pseudoproblem (will go away with additional work) – 20%
• Solution through decoherence – 11%
• Solution in some other manner – 30%
• Seriously threatens QM – 18%
• None of the above – 20%
These results are nearly indistinguishable from random choices.
Schrödinger's cat and macroscopic superpositions
(Image: Dhatfield via Wikimedia Commons)
The plight of Schrödinger's Cat is known to many readers. A cat, a conscious, complex quantum system, is placed in a box. Also in the box is a radiation-triggered hammer positioned to smash a glass bottle containing cyanide when radiation is detected. Finally, a very weak radiation source that on average emits one particle per hour is placed in the box, and the box is soundproof, opaque, and sealed. You are sitting outside the box. An hour later, is the cat dead, alive, neither, or both? The structure of the experiment amplifies an issue accurately described by QM (has a particular radioactive atom decayed?)
into what appears to be a classical issue (is the cat alive or dead?). We want to see at what step in the experiment the result stops being quantum mechanical and becomes a definite classical yes or no. One direction of argument holds that until the box is opened, the cat is in a quantum superposition of dead cat and live cat. On the other hand, if the cat qualifies as an observer, it at least knows if it is alive. (For the cat to know it is dead would depend on the physical existence of an afterlife – not a standard assumption in QM.) Discussions can become heated, as there are many possible answers.
Schrödinger's Cat in a many-worlds quantum mechanical world (Image: Dhatfield via Wikimedia Commons)
In the many-worlds theories, the fate of the cat is a bit different. When the box is opened, the universe splits into two – one containing a live cat, the other containing a dead cat. Schrödinger's Cat led to a specific quantum mechanics question on the QPNR poll: Are superpositions of macroscopically distinct states (such as a dead/alive cat) possible in principle, possible in a laboratory, or impossible in principle?
• Macroscopic superpositions are possible in principle – 55%
• Macroscopic superpositions can be formed in a lab – 30%
• Macroscopic superpositions are impossible in principle – 15%
This issue is significant, as it can be tested experimentally.
(Photo: Aaron O'Connell via Wikimedia Commons)
The largest system that has been successfully put into quantum superposition is a quantum microphone weighing about a nanogram (ten trillion atoms) with a volume around 450 cubic microns. This isn't very large, but is far beyond sizes associated with the usual atomic and subatomic interactions which we usually associate with quantum mechanics. The rapid evolution of the field in creating quantum superpositions of larger and larger objects is probably part of the reason that the QPNR poll was rather positive about macroscopic superpositions. This will be a theme – if you can test an idea, consensus forms over time.
Reality or description?
One issue at the foundations of QM involves the physical reality of quantum states. The QPNR poll asked if quantum states only describe reality (are epistemic), or if quantum states are as real as an electric field whose strength can easily be measured (are ontic).
• Epistemic – 27%
• Ontic – 24%
• Both – 33%
• Purely statistical – 3%
• Other – 13%
The answers to this very crucial question are consistent with random responses – the collective confusion appears very large.
Randomness in QM
Another fundamental issue in quantum mechanics involves the randomness of individual quantum events, such as the decay of a radioactive atom. Quantum mechanics predicts behavior that is consistent with random decays having a characteristic half-life for a given decay mode. But is the decay process actually random, or does it just seem that way? The QPNR poll offers four options: Hidden determinism; only appears to be random; irreducible randomness; and randomness is a fundamental concept in nature. Hidden determinism is Einstein's view – there is a hidden clockwork underlying what we perceive as quantum reality. Phenomena are really classical and mechanistic, but we can't see that at present. The universe only appears to be random in Everett-like many-world interpretations, in which the perception of randomness is an artifact of finding yourself in only one of the new branches of the universe.
The tricky part is deciding on the difference between irreducible randomness and randomness as a fundamental concept in nature. The meaning of the latter is particularly fuzzy. Roughly speaking, irreducible randomness describes a universe in which measured phenomena yield unpredictable results, while fundamental randomness describes a universe whose innermost workings are random. Fundamental randomness is not hidden determinism, saying rather that if there are sublevels of reality, they are also random. The QPNR answers are:

• Hidden determinism – 0%
• Apparent randomness – 7%
• Irreducible randomness – 40%
• Fundamental randomness – 53%

The lack of support for hidden determinism is probably related to the many experimental tests of Bell's Theorem, which strongly suggest the inapplicability of hidden-variable theories to our universe. Apparent randomness received fewer than half the votes received by Everett-like interpretations, suggesting that not all Everett supporters agree that the randomness observed therein is apparent. Irreducible randomness received 40 percent of the votes, while fundamental randomness received 53 percent. It would appear that confusion between these two positions is not limited to your scribe, as all fundamentally random systems are also irreducibly random, but the voting went the other way.

Science or personal prejudice?

To sum up the state of the field of QM interpretations, one particular QPNR poll question is quite revealing. The question is simply: How much is the choice of interpretation a matter of personal philosophical prejudice?

• A lot – 58%
• A little – 27%
• None at all – 15%

Eighty-five percent of those polled believe that the choice of QM interpretation depends to some extent on one's personal philosophical leanings. More than any of the other questions and answers, this shows that the interpretation of quantum mechanics, at present, is not science, little as I like to admit it. A sign that a description of nature is not fundamental is when it provides little if any explanation of why it works. However, this is also a sign of a new fundamental description of nature lacking the correct language to make clear how and why it works. Personally, I believe the key to answering Feynman's question "But how can it be like that?" is linguistic – we lack a viewpoint and language from which understanding can flow. But the question, entertaining as it is, is really beyond my pay grade.

Source: Interpretations of Quantum Mechanics: a critical survey

"In Everett-type many-worlds theories ... you have to be able to believe that bouncing a photon off an atom creates new universes..." Not necessarily. You could try taking a five dimensional view of the situation. We know from relativity that time is a fourth dimension as real as the space dimensions and connected up with them into a four dimensional whole. This refutes the view that only the present moment is real with the past vanishing into non-existence and the future not yet formed. In the relativistic view, all moments in time, past, present and future are equally real and it is an illusion that only the present is real. We can extend this idea to include a fifth dimension of alternative possibilities. They are not suddenly and mysteriously created. They have "always" existed.

With respect to quantum randomness, that can be interpreted in the context of my previous comment presenting a five dimensional view of the Many Worlds idea.
Think of reality as a solid five dimensional block of space-time plus a possibility dimension hosting the many worlds alternatives. Discrete observable events can be viewed as tiny particle-like dots embedded in this greater reality. Some of these can be linked by directed lines, from past to future, in the time dimension, which represent causality. From the dot in the past we could draw many such lines fanning out through the fifth dimension to the alternative causality-linked results. Lines might also be drawn sideways in the time dimension representing not causality but quantum entanglement, suggesting an underlying unity between causality and entanglement. Perhaps some other dots could exist without causal links into the past. These are events that are simply there. They happened for no particular reason. They still have to conform to fundamental physical principles, especially the conservation laws, but they are otherwise random. If such dots exist, then there is fundamental randomness, but if not then not. The Many Worlds interpretation is, in this five dimensional view, agnostic on the question.

Jamie Sheerin

Whenever quantum mechanics threatens to overwhelm me, I think of the philosophical quagmire surrounding the physical interpretation of the mathematical results and thank my lucky stars it's not examinable :)

I do believe most probably the select few whose opinions on QM are being taken seriously have simply already put too many limits on what they deem accurate or inaccurate. For example: who's to say superposition exactly has any true real science to the principles of QM? Like seriously, the understanding of QM cannot be defined with such theory or experiments of a cat being dead and alive, or multiple beings of the same body in different dimensions. If one has yet to realize, QM is for certain much more simplified than all previous works of science. It's the Salt & Pepper. Not the Cake. Using simple logic as QM or QM thinking... When I listen to these guys say there's a cat dead and alive etc etc... My quantum instinct says, superposition in QM is more like the ability to recycle a plastic bottle into a plastic plate... The possibilities for the plastic bottle to make up any and every other object that is defined of its unique atomic properties is infinite. So to say in the ideas of QM... If I have a plastic bottle... I also have a plastic plate, pen, clip or anything else that is formed out of the same nature in which my plastic bottle is... QM truly defines such principles that one with a value of energy can become any and all properties of mass in the universe with the right ingredients of energy.

Ken Brody

@Adam_Smith. Agree with your extra-dimensional interpretation in parameter space, but we ought to look at CONVERGENCE of events in 5-space as well as an annealing of the quantum phenomena that cause divergence. As larger quantum objects, the experimenters and their apparatus are possibly subject to interactions that cause the diverging Everett worlds to come together again, so we do not experience superposed states on the macro level. I propose no mechanism for this. Julian Barbour's work has great bearing on this issue. He sees a "probability fog" spread along the parameter points denoting the higher probabilities of some positions over others. The tiny quantum splits just vanish, leaving us a perceptible and stable timeline.

Flipider Comm

Lucky for us we didn't need to invent matter & this all was provided to us for our enjoyment.
Because of humanity's backward engineering of everything, I will keep to this perspective and respectfully read this piece backwards. Waves or particles – fluid dynamics? Where is the transition from laminar to turbulent with neutrinos? The bending of space near gravitational bodies is only detectable if you are outside the affected space, and then does your mass affect/effect the measurements? If space is shorter (bent) relative to the assumed constant speed of light (so the math is solvable), is it still constant when it traverses a 'bent' path that appears linear to us? Why would you limit God to physics? (paraphrased very loosely....) Tri-state analogies keep getting in the way – alive or dead? or? The proverbial cat for any variation on unified field theory would be where the density is such that the neutron, proton and electron are all one mass that doesn't have any motion at all – possibly why gamma ray bursts are emitted from black holes as the mass is converted to energy (???)...

QM is such a horribly abused term. It seems that every time I turn around somebody is misusing it to try to bring credibility to whatever pop foolishness they are trying to pass off as "scientific". Typical example: "Since Quantum Science has proven that anything is possible as long as there are an infinite number of variables and we KNOW that the Universe is infinite – therefore, somewhere there is a world where Dorothy still lives in Oz!" Such people also use "reasoning" such as "Ancient Aliens had to exist – because if they didn't, then how did they build everything?" As I watch science, history and politics devolve into un-reality TV shows, I am losing interest over straw-man arguments of all sorts. I think far too much of that which is called "science" is really just "scientism" impersonating science.

Bob Spencer

At my age I love to try to understand things like QM; it chases the cobwebs out of the corner of my brain and replaces them with baffle. I am encouraged to find that real scientists have difficulties also. The word "consensus" has no place in science.
Inverse scattering: applications to nuclear physics

R.S. Mackintosh, Department of Physical Sciences, The Open University, Milton Keynes, MK7 6AA, U.K.

Review commissioned for Scholarpedia

Draft of November 14, 2020

Abstract: In what follows we first set the context for inverse scattering in nuclear physics with a brief account of inverse problems in general. We then turn to inverse scattering, which involves the S-matrix, the quantity that connects the interaction potential between two scattering particles with the measured scattering cross section. The term ‘inverse’ is a reference to the fact that instead of determining the scattering S-matrix from the interaction potential between the scattering particles, we do the inverse. That is to say, we calculate the interaction potential from the S-matrix. This review explains how this can now be done reliably, but the emphasis will be upon reasons why one should wish to do this, with an account of some of the ways this can lead to understanding concerning nuclear interactions.

1 General introduction: what are inverse problems?

The subject of this review is a very specific part of the research field ‘Inverse problems’, a field of vast scope, with dedicated journals including Inverse Problems in Science and Engineering (ISSN 1741-5977, print, and 1741-5985, online) and Inverse Problems (ISSN 0266-5611, print, and 1361-6420, online). Inverse problems can be understood by contrast with the corresponding direct problems, as some examples should make clear:

1. Given a distribution of electrical current within the brain, it is a straightforward direct problem to calculate the magnetic fields outside the head; the inverse problem of calculating the currents from measured magnetic fields is the much more difficult ‘biomagnetic inverse problem’.
2. Determining the size and location of a mass of iron ore from sensitive variations of gravity at the Earth’s surface is much harder than the straightforward direct problem of calculating fluctuations in gravity at the surface due to known mass concentrations.
3. Various forms of tomography that are central to modern medicine can be seen as inverse problems.

Such problems have given rise to an active subdiscipline of applied mathematics centred around integral equations; browsing recent issues of the journals mentioned above will give a flavour of the subject, and hint at its importance in modern pure and applied science. The subject of this article is much more restricted: the application of inverse scattering in nuclear physics. The nuclear inverse problem shares one key property with all of those mentioned here: it is much harder and generally less well-developed than the corresponding direct problem. It differs somewhat in that each of the other inverse problems is widely accepted and plays a key role in the relevant scientific, medical or commercial activity. The usefulness of inverse methods in nuclear physics is less well-known; this article will give examples of where it has been useful, and maybe will inspire some new applications that this author has not considered.

2 Introduction to inverse scattering in nuclear physics

In this article, inverse scattering chiefly refers to the determination of a local scattering potential that yields a given set of S-matrix elements. Although ‘true’ nuclear interactions are understood to be non-local, we discuss only the derivation of local representations of the S-matrix.
There always exists such a representation and, in view of the wide range of possible forms of non-locality, the determination of a non-local potential from a single set of S-matrix elements at a single energy will be under-determined and we do not discuss this. We do remark, however, that inversion can be the most natural way of determining a local equivalent (in the sense of yielding the same S-matrix) of a non-local potential. Inverse scattering can also be extended to include the determination of the interaction potential directly from scattering observables (from this viewpoint, optical model fitting is an elementary form of inverse scattering). The inverse in ‘inverse scattering’ indicates a contrast with the (much easier) direct scattering problem in which the S-matrix, and thus the scattering observables, are calculated from an interaction potential. The physical context in which inverse and direct scattering are discussed in this article is: the scattering of one microscopic body from another in a model where the interaction between the bodies is described by a potential and the natural solution involves using this potential in the Schrödinger equation. In accord with the title of this article, the microscopic bodies that predominantly feature in this article are pairs of atomic nuclei, with one of them often a nucleon. This review sets out to do the following:

1. Define the categories of nuclear inverse scattering cases, specifying those that will be given fuller treatment in this review and giving references for those that will not.
2. Present an overview of the various methods that have been applied to the nuclear inverse scattering problem.
3. Present an account of a particular inversion procedure, the iterative-perturbative (IP) inverse scattering algorithm, that has had a wide range of applications. This will include specific examples of what it can do.
4. Show ‘what inversion can do for you’. This takes the form of a range of examples showing what kind of information or understanding concerning nucleus-nucleus interactions can be obtained using inversion. At the end, we leave it up to the imagination of the reader to extend that range.

This review is specifically not a comprehensive review of the formal theory of inverse scattering.

3 Definitions and notation

The direct scattering problem, which is in the background to this review, is the calculation of the scattering of interacting microscopic particles, typically but not exclusively a nucleon and an atomic nucleus. We assume that there is an interaction potential between the interacting particles that can be substituted into the time-independent Schrödinger equation. (We do not consider wave packets, nor do we consider the justification for the stationary-state treatment of scattering.) This can be solved for the radial wave function $u_l(r)$ for specific values of the orbital angular momentum $l$ (we generalize shortly to the case where the particle has spin), having asymptotic form

$$u_l(r) \propto I_l(kr) - S_l\, O_l(kr), \qquad r \to \infty. \qquad (1)$$

Eqn. 1 defines the S-matrix $S_l$ for orbital angular momentum quantum number $l$ for a spinless projectile; $I_l$ and $O_l$ are the ingoing and outgoing radial solutions, Coulomb wave functions for the case where the projectile and target are both charged. The S-matrix is frequently expressed in terms of phase shifts $\delta_l$: $S_l = e^{2 i \delta_l}$. For complex potentials, $\delta_l$ is complex and $|S_l| \le 1$ to preserve unitarity. For a spin-$\frac{1}{2}$ projectile, we have $S_{lj}$ where $j = l \pm \frac{1}{2}$.
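To make the contrast with the inverse problem concrete, the direct problem can be sketched in a few lines: for a given potential (here spinless, uncharged, and purely illustrative), integrate the radial equation outward and read the phase shift off by matching to free solutions beyond the range of the potential. The grid, the exponential well and the function names below are assumptions made only for this sketch; they are not taken from the review or from the inversion code Imago discussed later. Units with $\hbar^2/2\mu = 1$ are used, so that $k = \sqrt{E}$.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def phase_shift(V, l, E, r_max=20.0, n=4000):
    """delta_l for a local potential V(r), in units with hbar^2/(2*mu) = 1, k = sqrt(E).

    Numerov integration of u'' = [l(l+1)/r^2 + V(r) - E] u outward from the origin,
    then matching to the free solutions F(x) = x j_l(x), G(x) = -x y_l(x)
    at two radii where the potential is negligible.
    """
    k = np.sqrt(E)
    r = np.linspace(1e-6, r_max, n)
    h = r[1] - r[0]
    f = l * (l + 1) / r**2 + V(r) - E            # u'' = f(r) u
    w = 1.0 - h**2 * f / 12.0                    # Numerov weights
    u = np.zeros(n)
    u[0], u[1] = 0.0, h**(l + 1)                 # regular behaviour u ~ r^(l+1) near r = 0
    for i in range(1, n - 1):
        u[i + 1] = ((12.0 - 10.0 * w[i]) * u[i] - w[i - 1] * u[i - 1]) / w[i + 1]
    i1, i2 = n - 200, n - 1                      # two matching radii beyond the potential
    R = u[i1] / u[i2]
    F1, G1 = k * r[i1] * spherical_jn(l, k * r[i1]), -k * r[i1] * spherical_yn(l, k * r[i1])
    F2, G2 = k * r[i2] * spherical_jn(l, k * r[i2]), -k * r[i2] * spherical_yn(l, k * r[i2])
    return np.arctan2(R * F2 - F1, G1 - R * G2)  # from u = cos(d) F + sin(d) G

# Illustrative attractive well; depth and range are made up, not a nuclear potential.
V = lambda r: -4.0 * np.exp(-r / 1.5)
deltas = np.array([phase_shift(V, l, E=2.0) for l in range(6)])
S = np.exp(2j * deltas)                          # S_l = exp(2 i delta_l); |S_l| = 1 for real V
```

Inverse scattering, in the sense of this review, asks the opposite question: given the set $S_l$ (produced here by the final loop), reconstruct $V(r)$.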
The case of spin-1 projectiles, such as deuterons, can also be handled; this involves the inversion of a coupled channel S-matrix: where is the total angular momentum (assuming spin zero target nucleus), , and have the same parity; for there is no coupling. The inverse problem in these cases reverses the situation: given , or , what is the interaction potential? We shall also touch on the inverse problem of establishing , or from measured observables. General references: The book by K. Chadan and P.C. Sabatier [1] presents a comprehensive account of inverse scattering with an emphasis on the formal aspects. We refer to this as CS89. An old but still useful introduction to inversion is Chapter 20 of the book by R.G. Newton [2]. Review article: The review, ‘The application of inversion to nuclear physics’, by Kukulin and Mackintosh [3] reviews inverse scattering particularly as applied to nuclear scattering, up to 2003 and provides a much more comprehensive bibliography than the present one. We refer to it as KM04. Conference proceedings: Many articles discussing the theory and application of inverse scattering, as well as broader aspects of inversion, can be found in the conference proceedings: [4, 5, 6]. 4 Categories of inverse scattering In this review we shall refer to the following categories of inverse scattering: 1. Fixed- inversion Given for all energies at a fixed , determine the potential that gives . This is the classical inversion problem solved by Gel’fand and Levitan and also Marchenko, see CS89. The term ‘fixed- inversion’ is misleading since it has been generalized to include derivation of spin-orbit and tensor terms for specific fixed and parity, see Sections 6.1 and 7.1. 2. Fixed energy inversion Given for all at a specific energy , fixed inversion determines that reproduces at energy . We include under this heading inversion and also the generalization for spin-1 projectiles leading to the determination of a specific tensor interaction from (where ). Scattering from target nuclei with spin can sometimes be treated by determining independent interactions for different values of the channel spin. Applications in which more general non-diagonal S-matrices are inverted to non-diagonal potentials (coupled channel inversion) have been discussed. 3. Mixed case inversion At low energies, there may be too few active partial waves for satisfactory inversion. If there exist sets of at closely spaced energies, they can be inverted together to determine a potential in ‘mixed case’ inversion, having aspects of fixed- and fixed- inversion. It can be viewed as incorporating information from the local energy dependence of . Mixed case inversion is possible with the iterative-perturbative (IP) inversion procedure that is presented in Section 6.2.3. 4. Energy-dependent inversion Nuclear potentials are inherently energy dependent. Given for a wider range of energies than is appropriate for mixed case inversion, energy-dependent inversion determines a potential with an appropriately parameterized energy dependence, and can be considered a generalization of mixed case inversion. 5. Variants of fixed energy, mixed case and energy-dependent inversion Scattering of identical bosons provides for just the even values of . Where there are sufficient active partial waves, this situation can be handled straightforwardly by the IP method and semi-classical (WKB) methods. 
Likewise, it is often straightforward to obtain with IP inversion a parity-dependent potential in cases where exchange processes (for example) require separate potentials for odd-parity and even-parity partial waves (see Remark 4, below). 6. Direct observable-to-potential inversion Using IP inversion, it is possible to combine in one algorithmic procedure inversion together with a determination of from a fit to data. This can be applied with any of types 2 to 5 above. (This is distinct from the two-step inversion mentioned in Remark 2 below.) Remark 1. In principle, type 1 (fixed ) requires for all energies and type 2 (fixed ) requires for all , but practical implementations have been developed. For fixed energy inversion, this allows a potential to be defined out to a specific radius to be determined from a set over an appropriate range of . This effectively puts a lower limit to the energy for which a potential can be determined. Remark 2. In principle all methods are subject to problems of non-uniqueness and errors, though these problems can often be minimized in practice. Important for this is the fact that practical inversion methods may allow the inclusion of a priori information, especially in cases tending to be under-determined. Remark 3. In practical applications, the S-matrix to be inverted generally comes from theory or from fits such as R-matrix fits or effective range fits to measured observables over a range of energies. There have also been applications in which have been determined by fitting observables at a single energy with a direct search. Such S-matrix fitting is also an inverse problem and the technical and formal aspects of observable-to- inversion are to be found elsewhere, e.g. CS89. We do mention some applications of the resulting ‘two-step’ phenomenology, which might be considered as an alternative form of model-independent optical model (OM) fitting, having certain advantages. This is true also of type 6, direct observable-to-potential inversion. Remark 4. Methods for both fixed- and fixed- exist for including the energy of bound states as input information for the inversion. Remark 5. A parity-dependent component (e.g. real or imaginary central, real or imaginary spin-orbit) of a potential may be written and, in the context of parity dependence, we refer in what follows to and as the Wigner and Majorana components. We reserve the term ‘-dependent’ for other forms of partial wave dependence, never for parity dependence. 5 Alternative inversions As an alternative to determining the potential that reproduces the S-matrix, one can determine the potential that reproduces the radial wave function for a given partial wave. The trivially equivalent local potential (TELP) of Franey and Ellis [7] is necessarily -dependent. However -weighted TELPs, applicable for all , can be constructed (as in the CC code FRESCO [8]) and are widely used to represent dynamic polarization potentials, DPPs, (discussed below). As suggested by, for example, the somewhat arbitrary nature of the partial wave weighting, weighted TELPs cannot be expected to give the same potential as inversion and actual comparison [9] confirms that indeed there are substantial differences. The consideration of such alternative ways of defining a local potential can throw light on the physics of local potential models of scattering, as discussed by M. S. Hussein, et al [10]. 
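For a single partial wave the idea behind such trivially equivalent potentials can be made explicit. If $u_l(r)$ is the (generally complex) elastic-channel radial wave function from a coupled channel calculation, the local potential that reproduces that particular wave function follows from solving the radial Schrödinger equation for the potential. In a conventional (illustrative) notation, with reduced mass $\mu$ and centre-of-mass energy $E$, one common way of writing it is

$$V^{\mathrm{TELP}}_{l}(r) \;=\; E \;-\; \frac{\hbar^{2}}{2\mu}\,\frac{l(l+1)}{r^{2}} \;+\; \frac{\hbar^{2}}{2\mu}\,\frac{u_l''(r)}{u_l(r)},$$

which is complex when $u_l$ is complex, is singular at nodes of $u_l$, and differs from partial wave to partial wave; the $l$-weighted averaging used to build a single radial function (as in FRESCO) is a further prescription not written out here.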
As an alternative to the -dependent TELP, one can produce a spatial representation of the potential that reproduces the elastic channel wave function throughout the nucleus; note the vectorial dependence upon . Such a representation illuminates the non-locality induced by channel coupling, showing regions where flux leaves and then returns to the elastic channel. The real and imaginary parts of the ‘-potentials’ are determined, respectively, from the real and imaginary parts of as described in Ref. [11], and for spin- projectiles, Ref. [12] and references therein. Ref. [13] compares the very different wave functions within the nucleus for -dependent and -independent potentials that have the same asymptotic form, i.e. the same . 6 Methods for inversion A number of techniques for inversion have been put forward and here we list the most significant with the emphasis on the historically significant and those that have been widely applied, leading to the understanding of nuclear interactions. In principle, as has been pointed out by Chadan and Sabatier [1], the nuclear scattering inverse problem is under-determined and hence subject to ambiguities. This is more of a problem for formal methods that do not readily permit the inclusion of prior information. In fact, it proves not to be a problem in many practical cases, especially where the sought-for potential is not too far from some known potential, as, for example, when determining a dynamic polarization potential. In other cases, it does matter, and the inclusion of prior information in the overall inversion problem has to be accepted as reasonable. It’s a strength of certain inversion procedures that this is possible with them. A specific example will be given. A possible consequence of under-determination is the occurrence of rapid wiggles on the potentials that are determined. In effect, potentials and , where , a ‘null potential’ [14], is a function in the form of a set of short wavelength oscillations, have exactly, or very nearly the same . This is particularly significant since genuine wavy features do occur, e.g. as a consequence of underlying -dependence. Such genuine waviness must be distinguished from the spurious. The IP method does afford means for such discrimination. 6.1 Fixed- methods The inversion formalism of Gel’fand, Levitan and Marchenko [1] can be made to yield a spin-orbit potential from and also (in coupled-channel form) yield a tensor force when different values contribute to the S-matrix for specific total angular momentum and parity . The problems are: (i) nuclear potentials are typically energy dependent but the method relates to a single potential for the whole energy range, and (ii) a very wide energy range is required to determine the potential. Even for nucleon-nucleon scattering, where the method has been applied, the pion threshold limits the energy range. 6.2 Fixed- methods and extensions Newton [2, 15], starting from the fixed- formalism of Gel’fand and Levitan, devised a restricted fixed- inversion procedure that was further developed by Sabatier [1] and others (see Section 6.2.1) into the Newton-Sabatier (NS) inversion method. This formalism directly derives a potential from S-matrix elements and is formally exact. The related method of Lipperheide and Fiedeldey [16, 17] starts from a specific parameterization of . Fixed- inversion methods for application at higher energies based on the JWKB approximation and other semi-classical approximations have been developed, see Kujawski [18] and others [19]. 
For applications see Section 6.2.2. The most widely applied inversion method is the iterative-perturbative (IP) algorithm [20] based upon the generally linear response of to changes in  [38]. IP inversion has been extended to handle mixed-case inversion, energy-dependent inversion, spin- inversion and some cases of spin-1 inversion leading to a tensor interaction. A number of other approaches to fixed- inversion are referenced in the review [3]. The subject of fixed- inversion is a topic of on-going research, see e.g. [22]. 6.2.1 Newton-Sabatier and related methods The formal Newton-Sabatier (NS) inversion method was developed into a practical applicable method in the important work of Münchow and Scheid [23] (MS). Key aspects were the matching of the radial range to the range of -values for which was provided, and the adoption of an over-determined matrix algorithm. For the later extension of the MS method to spin- see Ref. [3]. Formal inversion methods of this kind simply translate a set of values to a function with the disadvantage that when, for example, suspected ‘noise’ in the input leads to oscillatory features in , it is not possible to adjust the precision required of the inversion to evaluate the physicality of these features. It is also not straightforward to include prior information concerning the potential; such information is useful in difficult applications and for eliminating the effects of the general under-determination of nuclear inverse scattering [1]. Test inversions do appear to exhibit some spurious oscillations that might be difficult to distinguish from genuine waviness, see above, It seems that there have been rather few papers exploiting the MS-NS method to extract information about nuclear scattering. However, the formal developments by Newton and his successors have been of great value, not least for showing that there always is a local potential corresponding to an appropriate radial range for a corresponding range of partial waves. An inversion procedure of a similar kind, that due to Lipperheide and Fiedeldey [16, 17], LF, was actually the first to extract information concerning nuclear interactions by means of inversion: the long range interaction generated by Coulomb excitation of heavy ions, see [24]. With MS-NS or LF inversion, is uniquely determined by , so prior information must sometimes be included in the determination of . An application [25] to the analysis of scattering data illustrates this. The resulting potential is compared by Brandan and Satchler [26] and adjudged less physical than an alternative described in Section 7.2.1. We emphasize that this is not a criticism of LF inversion except insofar as the the inversion procedure [16, 17] requires that must be represented rather precisely in a specific multi-term rational function form. 6.2.2 Semiclassical inversion, WKB methods WKB methods [18, 19] are expected to work well at higher energies. The implementation of these methods is described in the following papers in which inversions exploiting the WKB approximations have been carried out, Refs. [27, 28]. The WKB inversion procedure has also been exploited in an interesting study of the effects of systematic errors in the analysis of nuclear scattering data [29]. 6.2.3 Iterative-perturbative (IP) inversion IP inversion [20, 30] exploits the relatively linear response [14, 38] of to changes in to construct a procedure based on the iterative correction of a ‘starting reference potential’, SRP. The SRP in many cases can be a zero potential. Ref. 
[20] demonstrated the method in a calculation of the dynamic polarization potential for the breakup of Li. IP inversion was independently developed by Kukulin [31]. The extension to spin- was presented in Ref. [32], the introduction of error analysis in Ref. [33] and mixed case inversion in Ref. [34]. More details of IP inversion and its extensions are given in Section 6.3, and applications will be described in Section 7. The extension of IP inversion to data-to-potential direct inversion is described in Section 7.3. 6.2.4 Other fixed- inversion methods There are various other inversion techniques referred to in [3] and [4, 5, 6]. These sources cite many references that are valuable for understanding the formal issues connected with inversion, but few of the other techniques seem to have yielded information concerning nuclear interactions. 6.3 The IP method and its extensions A key feature of IP inversion is that it is not tied to specific analytic properties of Schrödinger’s equation, but simply to the fact that the response of the -matrix to changes in the potential is approximately linear. This near linearity leads to properties which give IP inversion a powerful advantage as a practical tool. These include: (i) It is highly generalizable. Hence, for example, inclusion of spin-orbit and even tensor forces requiring coupled channel extensions, are relatively straightforward and do not compromise the accuracy of the inversion. (ii) Mixed-case and energy-dependent inversion, as defined above, are possible. (iii) Useful information can be obtained when the input data are noisy and incomplete. The iterative procedure can be halted before S-matrix elements are inverted to a greater precision than is warranted by their own precision. (iv) IP inversion can be incorporated into a one-step observable-to-potential inversion algorithm. The IP approach described here determines a local potential corresponding to given -matrices as calculated by Schrödinger’s equation; however it has also been applied [36] to determine a Dirac potential simply by applying the appropriate transformation to the extracted Schrödinger potential. IP inversion is implemented in the Fortran 90 code Imago. An Imago users’s manual [37], and also Ref. [3], give a more general and detailed account of the formalism than is given below, which presents the basic idea. An earlier short review of IP inversion and its applications, including some examples not discussed here, can be found in Ref. [35]. 6.3.1 Basic IP method The IP approach is based on the fact that, in general, the scattering matrix (or phase shift) responds in a remarkably linear way to changes in the scattering potential; explicit examples are given in the appendix of Ref. [38]. This makes possible a step-wise linearization procedure in which the potential corresponding to some given -matrix can be established in a series of iterations starting from a guessed potential. The linear response of the -matrix to changes in , which lies at the heart of the IP method, can be expressed in various equivalent forms. The change, , in the scattering matrix for partial wave and CM energy induced by a small change222When spin-orbit interactions are considered, we add relevant labels, e.g. . in the scattering potential, , is where is a regular solution of Schrödinger’s equation with the asymptotic normalization: and and are the conventional incoming and outgoing Coulomb wave functions. This well known result follows immediately from the Wronskian relationships for and . 
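For orientation, the relation can be written out for the neutral, spinless case. With the regular solution normalized so that $u_l(r) \to \frac{i}{2}\left[e^{-i(kr - l\pi/2)} - S_l\, e^{i(kr - l\pi/2)}\right]$ (equivalently $u_l \to e^{i\delta_l}\sin(kr - l\pi/2 + \delta_l)$), the Wronskian argument gives, to first order in a perturbation $\Delta V(r)$,

$$\Delta S_l \;\simeq\; -\,\frac{4 i \mu}{\hbar^{2} k}\int_{0}^{\infty} \Delta V(r)\, \bigl[u_l(r)\bigr]^{2}\,\mathrm{d}r,$$

where $\mu$ is the reduced mass; for charged particles the exponentials are replaced by the incoming and outgoing Coulomb wave functions. The numerical prefactor depends on the normalization adopted for $u_l$, and the conventions of Refs. [37, 38] may differ by constant factors; as a check, the expression reduces to the familiar first-order (Born) phase shift when the starting potential is zero.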
These functions can be written in terms of the regular and irregular Coulomb solutions and . If the Coulomb interaction is absent, and become spherical Bessel and Neumann functions: and If the target S-matrix to be inverted is , then the aim is to determine the potential for which the S-matrix renders the sum: as close to zero as possible, or at least as close as is reasonable. We add that qualifier since the target S-matrix will have numerical imprecision or errors or superimposed ‘noise’; it is a virtue of the IP method that precise inversion can be avoided where it is inappropriate. Eqn. 6 omits any labels relating to spin. We shall also omit labels on components of the potentials (real, imaginary, real and imaginary spin-orbit etc) which are included in Refs. [37] and [3]. The minimization is carried out iteratively, starting from , the starting reference potential (SRP). In favourable cases, the SRP can be a zero potential. An iteration is carried out as follows: if previous iterations lead to the current potential , the next step is to find where the are amplitudes to be determined and the are members of the inversion basis of dimension . For the inversion basis, the inversion code Imago [37] offers a choice including: displaced gaussian functions, zeroth order Bessel functions, spline functions and others. To determine amplitudes such that the current S-matrix may become the target S-matrix (or, at least, closer to ), we identify with the change expressed in Eqn. 4 resulting from the perturbation , to get where is the regular solution involving the current potential . The next step involves matrix algebra to determine the best set for the (in general) over-determined system Eqn. 8; note that , the number of values included in the inversion. In the original IP work [20], and in some subsequent work, we followed Ref. [23] in their use of the standard matrix method. Using natural matrix notation, Eqn. 8 can be written leading to: with the hermitian adjoint of . From these values of , the new current potential can be calculated from Eqn. 7. Further iterations can be carried out until a suitable low level of is reached. Before describing the practical implementation, which involves sequences of iterations rather than a single sequence, we note that an alternative to the direct inversion of Eqn. 9 has proven superior: singular value decomposition, SVD, see, e.g. Ref. [39]. SVD makes convergent iteration possible in cases where the direct matrix method fails. The first step is to re-write in the following product form: where is square, and . Matrix will not be square since we are, in general, dealing with an over-determined system. Matrix is diagonal with elements for . We can then write where is diagonal with elements . In general, vary over many orders of magnitude. The smallest are the least accurately determined, and so can be eliminated. The program Imago does this by setting a tolerance limit, with any elements that are below that limit being set to zero. This limit can be lowered in successive sequences of iterations as we now describe. The iterative inversion is not carried out in a single sequence, but in a series of discrete sequences of iterations. This allows divergences or oscillatory potentials, following too large an inversion basis or too small an SVD tolerance, to be avoided. 
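The linear-algebra step just described is compact in code. The sketch below is illustrative only: the variable names and the assumption of a single real potential component are choices made here, and this is not the Imago implementation, which handles several potential components and interactive sequences of iterations. It solves the over-determined linearized system for the basis amplitudes, discarding singular values below a relative tolerance.

```python
import numpy as np

def ip_amplitudes(A, b, rel_tol=1e-3):
    """One linearized IP step: real amplitudes a minimizing |A a - b|.

    A : complex array, shape (n_S, n_basis); A[i, j] is the first-order change of the
        i-th S-matrix element produced by unit amplitude of basis function j (Eqn. 4).
    b : complex array, shape (n_S,); mismatch S_target - S_current.
    Singular values below rel_tol times the largest one are discarded (SVD truncation).
    """
    # Real amplitudes must account for both real and imaginary parts of the mismatch,
    # so stack them into a single real least-squares problem.
    A_r = np.vstack([A.real, A.imag])            # shape (2*n_S, n_basis)
    b_r = np.concatenate([b.real, b.imag])
    U, s, Vt = np.linalg.svd(A_r, full_matrices=False)
    s_inv = np.where(s > rel_tol * s[0], 1.0 / s, 0.0)   # truncate small singular values
    return Vt.T @ (s_inv * (U.T @ b_r))          # truncated pseudo-inverse applied to b_r
```

A full iteration would update the potential, $V \to V + \sum_j a_j v_j(r)$, recompute $S_l$ and the response matrix for the updated potential, and repeat until $\sum_l |S_l^{\mathrm{target}} - S_l|^2$ stops improving; lowering the tolerance (or enlarging the basis) between sequences of iterations mirrors the strategy described in the text.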
A sequence of iterations leading to a modest reduction in without divergence or spurious oscillations can be followed by another sequence that has a basis with a larger or wider radial range, and/or a smaller SVD tolerance. This will then, typically, converge to a lower value of . After any sequence of iterations, it is possible to backtrack if the chosen inversion parameters lead to divergence or to a potential with oscillatory features that might be spurious. In practice, depending on the case, one or several sequences of iterations will lead to a potential that gives a very close fit to the S-matrix without spurious oscillations. As implemented in Imago, the fit to both the S-matrix and the observables can be seen interactively, on-screen, after each sequence of iterations. Interestingly, can often be reduced by an order of magnitude by further iterations even after a visually nearly perfect fit to has been achieved over almost the entire -range over which is appreciable. Furthermore, for cases with many partial waves, a perfect fit to the observables at far backward angles requires an exceptionally low , sometimes much lower than required for a good visual fit to . It is a matter of good practice to test the uniqueness of the inversion by verifying that the same result is obtained with a different SRP or inversion basis. Following a converged inversion procedure, one can be assured that a potential has indeed been found that reproduces the input S-matrix to a precision quantified by and verified by visual fits to both the S-matrix and the observables (as mentioned, the latter often being more sensitive). In addition, the degree of uniqueness can be tested in the way just described. In the case of input that is noisy or less well determined, the iterative process can be stopped at a larger value of and useful information extracted, perhaps from a range of alternative solutions. This can happen in cases of heavy ion scattering at low energies where there is little useful information in for low values of where is typically very small. 6.3.2 Spin- inversion The generalization of IP inversion to the spin- case [40], , is straightforward and its implementation in the code Imago is described if Ref. [37]. For spin- scattering the potential becomes, With suitable indexing, it is straightforward to expand the matrix system to accommodate the new terms. An independent inversion basis for the spin-orbit term can be defined and may differ from that for the central term in two respects: (i) it does not extend quite to the origin, since only the radial wave function is non-zero there and is zero for , and, (ii) it need not extend to such a great radius, since, in fact, spin-orbit terms tend to be small at a radius where central terms are still finite. 6.3.3 Mixed case and energy-dependent inversion Mixed case inversion determines the potential that reproduces for a few discrete energies . This is particularly useful at low energies and with light target nuclei when only a few partial waves are active — too few to define the potential for a single energy. It is quite straightforward to expand the matrix system to include multiple energies and multiple sets of target . Because the nuclear optical potential is intrinsically energy dependent, mixed case inversion is useful only over relatively narrow ranges of energy, unlike energy dependent inversion. 
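In the linearized step sketched earlier, including several energies amounts to stacking rows. Continuing that illustrative sketch (again, the names and shapes are assumptions of the sketch, not of Imago, and the solver is the ip_amplitudes function defined above):

```python
# Mixed-case inversion: one potential, S-matrix sets at several nearby energies.
# A_list[i], b_list[i] are the response matrix and mismatch vector at energy E_i,
# each built as before but with u_l and k evaluated at that energy.
A_mixed = np.vstack(A_list)           # partial waves of every energy become extra rows
b_mixed = np.concatenate(b_list)
a = ip_amplitudes(A_mixed, b_mixed)   # same SVD-regularized solve as before
```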
Energy dependent IP inversion allows the parameters of postulated energy dependent functions to be optimized in an expansion of the matrix system described above. Ref. [41] describes the formulation as originally applied and as is implemented in Imago. A significant feature is that different parameterized forms must be used for the real and imaginary components. This is obviously important in the case discussed in Ref. [41] where the energy range involved crossed the inelastic threshold for p + He scattering. 6.3.4 Parity dependence and identical bosons Parity-dependent inversion determines, in effect, separate interactions for the even-parity and odd-parity partial waves. Parity-dependent terms can also be straightforwardly generated by including a Marjorana inversion basis so that any term x, where x could refer to real central, imaginary central, real spin-orbit or imaginary spin-orbit, can have an added Majorana term of the form . Inversion for the scattering of identical bosons involves inversion for which exists only for even . The only issue for IP inversion is whether there are enough partial waves to define the potential with sufficient accuracy. Section 7.2.1 describes several cases of inversion involving identical bosons at fairly high energies for which there were plenty of partial waves for satisfactory inversion. For low energy identical bosons involving calculated from theory, it might be possible to interpolate to odd . 6.3.5 Including bound state energies It is possible to include the energies of bound states as input to the inversion process. This will be useful at low energies where, even with an S-matrix for multiple energies in energy dependent inversion, there is a paucity of information with which to define the potential. The method was introduced by Cooper and fully described in [81]. In that reference, a fit to the energy of an bound state in Be was included in an inversion of low energy for He scattering on He, leading to a parity-dependent potential. 6.3.6 Spin-1 inversion yielding tensor interaction The extension of the IP algorithm to spin-1 inversion is not as straightforward as the extensions described above. This follows from the fact that the scattering of spin-1 projectiles from a spin-zero target requires, in general, a coupled channel calculation. For example for total angular momentum and positive parity, the and channels may be coupled by a tensor force interaction. The general S-matrix for spin-1 projectiles scattering from a spin-zero target may be written where . Channels with will, in general, be coupled. The possible tensor interactions were classified by Satchler [42, 43] and labeled , and . The third of these is believed to be small, and the interaction appears to be hard to distinguish phenomenologically from the first, , interaction. The interaction might be important, but involves gradient operators for which an inversion procedure has not been devised. We therefore assume that the inter-nuclear interaction may contain a tensor force component of the form: To invert an S-matrix of the form to determine a potential including a interaction, requires coupled channel inversion in which a non-diagonal S-matrix yields a non-diagonal potential. A specification of the formalism together with an account of tests of its performance can be found in Ref. [44]. This reference also presents the results of inverting generated with a interaction leading to a interaction. The tests in Ref. 
[44] were for a single energy case with no parity dependence, but did include an extension to direct data-to-potential inversion, see Section 7.3. Spin-1 inversion leading to a interaction is straightforwardly extended to include parity dependence, Section 6.3.4 in all components as well as energy dependence, Section 6.3.3. An application of this very general inversion to deuteron scattering from He at low energies is presented in Ref. [45]. 6.3.7 Further extensions? Spin-1 inversion, just discussed, is a restricted form of coupled channel inversion in which a non-diagonal S-matrix is inverted to give a non-diagonal potential of a very specific form. Just how far the IP concept can be pushed to provide a more general form of coupled channel inversion is a challenge for the future. The difficulty of providing experimental data of sufficient breadth and precision might restrict the application of coupled channel inversion to non-diagonal S-matrices that have been calculated from theory. 7 Applications of inversion in nuclear scattering There are a variety of good reasons for determining local potentials that reproduce given sets of S-matrices or phase shifts, and some of these will emerge from the account given here of the various applications of inversion. We cannot give an exhaustive list of possible applications, and readers may well be inspired to find new ones. 7.1 Nucleon-nucleon and similar interactions Unlike nucleon-nucleus potentials, nucleon-nucleon interactions are not generally considered to be explicitly energy-dependent and are therefore a candidate for the application of fixed- phase-shift-to-potential inversion. This would exploit the comprehensive phase-shift analyses covering a wide energy range. In fact ‘fixed-’ is a misleading term since the tensor interaction mixes channels for given conserved total angular momentum and parity. The -dependence of the potential and the small number of active partial waves effectively rule out fixed- inversion. The challenge of applying Gel’fand-Levitan-Marchenko methods, generalized to allow for coupled channels inversion to determine the non-diagonal tensor interaction, was successfully taken up by von Geramb and collaborators, see their articles [5]. 7.2 ‘Two-step’ nuclear elastic scattering phenomenology. It is often desirable to fit elastic scattering observables with potential models that have as few as possible a priori assumptions (or prejudices) concerning their nature. To achieve this, a number of ‘model independent’ fitting algorithms to determine optical model (OM) potentials, typically based on sums of gaussian or spline functions, have been developed. These often allow point-by-point uncertainties to be assigned to the potential. The possibility of inversion affords an alternative approach and arguments in support of this approach have been made [14, 46, 47, 48, 49]. The idea is to first determine or by fitting the elastic scattering observables (angular distributions (ADs) analyzing powers (APs) etc). These or can then be inverted in a subsequent step. We refer to this overall procedure as ‘two-step nuclear (elastic scattering) phenomenology’. We here distinguish two classes of two-step phenomenology: (1) determination of the S-matrix at one or a few discrete energies, and, (2) the fitting (usually for few-nucleon target nuclei) of functional forms or , over a fairly continuous range of energies, by means of an R-matrix or effective range procedure. 
We discuss these two cases separately, and show that they unambiguously reveal the parity dependence of many light-ion nucleus-nucleus interactions. 7.2.1 Discrete-energy OM fitting by inversion When elastic scattering data, particularly data of typical precision and angular range, are fitted by searching on S-matrix elements, various profoundly different solutions may be found. Each S-matrix solution will lead, by inversion, to a different potential. In fact, the experience of carrying out such searches is very revealing about the under-determination of the potential by the data. Even so-called ‘model-independent’ OM fits often implicitly embody prior information that appears to ameliorate, but not completely remove, this problem. S-matrix searches at single energies, therefore, should be constrained with prior information; this is possible. We note a study of the effects of systematic errors in the analysis of nuclear scattering data [29]. There are various ways of incorporating prior information. One can start with an analytic form for (e.g. McIntyre, Wang and Becker [50] (MWB)) and search on the parameters [46, 47], or one can start from an MWB or similar analytic form and search on an additive component in a way that preserves unitarity. Generally, it is important to regulate or constrain the search in appropriate ways. Useful, physically motivated, starting functions for the S-matrix search are the S-matrices calculated by a Glauber model or by an existing phenomenological fit, see [48]. In Ref. [46], the elastic scattering of O on C at 608 MeV was analyzed by first determining by fitting the elastic scattering data by means of an additive correction to the of the McIntyre, Wang and Becker (MWB) [50] parameterized form. The correction was a searchable spline function of . The angular distribution was first approximately fitted with a five parameter MWB form ; the subsequent fitted spline function addition led to a threefold reduction in . The resulting corrected MWB-derived potential revealed a very different degree of surface transparency compared to the uncorrected MWB potential. At this energy the IP inversion is precise and stable, leading to effectively identical potentials independently of whether the iterative inversion of started from a zero potential, , or a Woods-Saxon potential in the neighbourhood of the expected result. Ref. [47] discusses in depth the application of two-step phenomenology in an account of its application to C + C elastic scattering at 9 energies from 140 to 2400 MeV. This paper, in effect, presents a critique of standard OM phenomenology, with a discussion of the advantages (including computational efficiency) of two-step phenomenology and the means of implementing it. The strategy for avoiding spurious solutions was discussed, including the application of continuity-with-energy to the solutions, which, in this case, revealed apparent serious shortcomings of the data at one specific energy. This study also revealed weaknesses in the conventional Woods-Saxon phenomenology across the whole energy range. At these energies, there is no problem in inverting for just even values of , as arises with the scattering of identical bosons. An analysis of O + O elastic scattering was carried out [48], fitting wide angular range and precise data at a single energy, 350 MeV. 
Alternative solutions for starting from the S-matrices calculated by a Glauber model or an existing phenomenological fit, led to very precise fits for spanning nearly 5 orders of magnitude, leading to very similar potentials for fm. There is little sensitivity at radii less than that, contradicting claims for repulsive effects at a small radius. Brandan and Satchler [26], their Section 9.2, evaluate the resulting potentials [48] in comparison with potentials obtained using alternative procedures. We note in passing that the simplified Glauber model gave , but not , in close agreement with values from the final fitted . In Ref. [49] two-step inversion is applied to the elastic scattering of Li from Si (at 319 MeV) and from C (at 637 MeV). This work revealed the profound ambiguities in , and thus in , that occur when fitting data of limited range and precision. These ambiguities are more extreme than those that are found with conventional OM fitting. Good fits to the data were easily (too easily?) obtained in which the large- tail was surprisingly extended, although this possibly results from contamination of the forward angle elastic AD data with inelastic scattering. Realizing the full potential of the two-step method for analyzing the elastic scattering of halo nuclei awaits the advent of sufficiently high quality data. Inversion can, of course, be applied to an independently fitted published S-matrix. In Ref. [36] IP inversion of for p + He elastic scattering was carried out for that had been fitted to angular distribution data at a single energy, 64.9 MeV. The resulting potential was not of Woods-Saxon-like form, but exhibited long-wavelength oscillations. The reason for these became evident later with subsequent multi-energy inversions, see Section 7.2.2. Ref. [36] also presented real and imaginary, scalar and vector, Dirac equivalent to the Schrödinger potentials. This demonstrated that, with certain limitations, Dirac equation inversion is possible by way of Schrödinger equation equivalence. 7.2.2 Inverting S-matrices from R-matrix and effective range fits Mixed case and energy-dependent inversion make possible an alternative form of two-step inversion. The first step now takes the form of an R-matrix or effective range fit to elastic scattering data over a possibly quite wide energy range, which might include shape resonances. The result is an analytic form of S-matrix in which the (typically) small number of active partial waves is compensated for by the fact that exists for a substantial range of energy . Satisfactory inversion becomes possible even in cases where the energy range is much less than is required for fixed- inversion and, also, the number of partial waves is much fewer than what is required for fixed- inversion. Inversion of this kind becomes particularly interesting when the results can be compared with the potentials derived from the inversion of from RGM or similar theories for the same few-nucleon systems; we discuss the specific case of nucleon scattering from He in Section 7.4.2. In Section 7.2.1 it was mentioned that single-energy two-step inversion for p + He led to wavy potentials. Multi-energy inversion provides an explanation and points to a significant property of nucleus-nucleus interactions between few-nucleon nuclei. Ref. [34] applied mixed case inversion to for p + He that had been fitted, using R-matrix and effective range expansions, to experimental data at several discrete energies. 
As a result, the following alternative emerged: either (i) the potential is wavy (as it was for 64.9 MeV), or, (ii) there is a smooth but parity-dependent potential. The parity dependence is such that the odd-parity potential has a substantially greater range and volume integral than the even-parity potential. We shall see in Section 7.4.2 that exactly this form of parity dependence emerges from the inversion derived from theories that include exchange processes (excluding knock-on, Fock term, exchange). Ref. [34] found that the p + He and n + He nuclear interactions were essentially identical in the odd-parity channels (as required by charge symmetry) but differed somewhat for for the even-parity channels. In Ref. [34], only energies below the inelastic threshold were involved so the potentials were real, unlike those of Ref. [36] or those found in Ref. [41] in which energy dependent IP inversion was introduced. Potentials for p + He scattering were found by inverting from various R-matrix and effective range fits to experimental data from zero energy to about 65 MeV. The resulting potentials fitted the shape resonances at energies below the inelastic threshold, and became complex above the threshold, reproducing the data reasonably well up to 65 MeV. The opening of a relatively small number of specific channels at various energies above the threshold revealed the limits of potential models in which the potential varies smoothly with energy. Such a model evidently requires either the ‘many open channel’ condition of the standard optical model, or, the ‘zero open channel’ situation that holds below threshold; neither is true between the threshold and 65 MeV for p + He. Refs. [34, 41] together demonstrated, on an empirical basis, that the real (and imaginary above threshold) central as well as spin-orbit components of the nucleon-He potential are parity dependent; they also established the practicality of energy-dependent inversion. The key finding: over the whole energy range considered, the parity dependence of the real central part is such that the odd-parity component has both a longer range and a greater volume integral than the even-parity term. Thus, potentials that have a factor multiplying a single radial form (as have been applied in optical model fits) are too restrictive. Parity dependence extends beyond 5-nucleon systems: IP inversion of that had been fitted [51] to He + C elastic scattering data yielded [52] a strongly parity-dependent potential that reproduced the scattering data very well, including the shape resonances. The potential differed in the surface region from previous phenomenological potentials that had been found in a conventional way; this is of possible astrophysical significance. The He + He interaction, which is also of astrophysical importance, was shown to be strongly parity-dependent by Cooper in Ref. [81] in which both empirical and theoretical were inverted. The bound state energy was also included as input to an IP inversion for the first time, contributing to the determination of the interaction. 7.3 Data-to-potential direct inversion It is possible to combine the determination and the IP inversion of the S-matrix into one algorithm, see Refs [53, 54, 55, 44, 45]. Scattering data that has been measured for multiple energies can be included, in this way implementing energy-dependent, direct data-to-potential inversion. In Ref. 
[54], for protons scattering from O at 7 energies from 27.3 MeV to 46.1 MeV, very precise wide angular range AD and AP data were fitted using energy-dependent direct data-to-potential inversion. Parity-dependent real and imaginary central potentials and complex Wigner spin-orbit potentials were determined. The even-parity and odd-parity central potentials were smooth and behaved in a regular way with energy. The odd-parity (Majorana) terms were very like those found from the inversion, described elsewhere, of derived from RGM calculations of protons scattering from O. Fits of equal quality lacking parity dependence would certainly require wavy potentials. This work by Cooper might reasonably be described as state-of-the-art nucleon scattering phenomenology for a single nucleon-nucleus pair; it conclusively establishes the parity dependence of the interaction between a nucleon and O. As a result, we conclude that the omission of parity dependence from tests of folding model theories, as applied to nuclei as light as O, will lead to misleading results. In Ref. [55] a parity-dependent potential, including spin-orbit terms, that gave a fair simultaneous fit to ADs and APs for Li + He scattering at 19.6, 27.7 and 37.5 MeV, was found. The Majorana terms were essential for the fit in this few-nucleon system. This appears to be something one must presume to be required in all few-nucleon system inversions, see Section 7.4.2. In Ref. [44], which introduced coupled-channel IP inversion for the scattering of spin-1 projectiles leading to a tensor interaction, data-to-potential inversion was carried out fitting multiple energy data including angular distributions, vector analyzing powers and the three tensor analyzing powers for deuterons scattering from He. The energy range was from 8 to 13 MeV for 6 discrete energies; 4000 data were fitted with a potential that included parity-dependent central and tensor terms. A subsequent study [45] fitted a wider range of energies, including the three D-state resonances. Strong, complex parity-dependent tensor interactions were revealed. Direct data-to- inversion of data covering a substantial range of energies, leading to a tensor interaction, with all components parity-dependent (where required by the data) and energy dependent, represents the most complete implementation of the IP inversion procedure. 7.4 Determination of potentials from theory The inversion of S-matrices calculated from a theory is not subject to the problems that may arise, e.g. from noise and experimental uncertainties, when the S-matrix determined from scattering data is inverted. Applications of the inversion of calculated S-matrices include: (i) determining in a natural and efficient way, the local potential that is S-matrix equivalent to non-local or -dependent potentials; (ii) deriving potentials that represent the scattering for those theories (see e.g. Section 7.4.2 and Section 7.4.3) that calculate scattering directly without the intermediary of potentials; (iii) providing arguably the best method of calculating the dynamic polarization potential (DPP) due to the coupling of inelastic or reaction channels to the elastic channel. 7.4.1 Dynamic polarization potentials (DPP) by CC-plus-inversion The contribution of inelastic processes to nucleus-nucleus interactions is represented by the dynamic polarization potential (DPP) [26, 43] the non-local and -dependent form of which (for inelastic scattering, at least) was derived by Feshbach [57]. 
This has been calculated with various approximations, see for example Refs. [58, 59], and leads to a highly non-local and -dependent expression. Moreover, the inclusion of coupling to transfer channels with full finite range coupling and the inclusion of non-orthogonality terms has never been achieved in such calculations. Finally, the results are not easily related to phenomenology since it is necessary to establish local and -independent equivalent potentials from such calculations and this requires the calculation of from non-local and -dependent potentials. There is an alternative procedure for calculating DPPs that can handle reaction channels, coupling of all orders and (in principle) exchange processes: ‘coupled channel plus inversion’. In this method, a potential is first found by inverting the elastic channel S-matrix from a coupled channel calculation. When the bare potential of the CC calculation is subtracted from this inverted potential, the resulting difference potential is a local and -independent representation of the DPP arising from the channel coupling. A full discussion of this procedure, its advantages and limitations, and a comparison with other methods, is given in Ref. [60]. The method used in earlier attempts to extract the contribution of channel coupling to the optical potential (see Ref. [61] and references cited there) was to refit the CC angular distributions, but this is subject to the many limitations of optical model fitting. Many processes that contribute to the DPP can be studied with CC-plus-inversion: coupling to inelastic channels, reaction channels, particle or cluster exchange and projectile breakup. IP inversion was introduced in a study [20] of the contribution of projectile breakup to the Li-nucleus interaction. Many other studies of the DPP due to breakup of Li have been made; Ref. [62] described generic properties of the DPP that were common to the breakup of deuterons and Li. A recent paper on Li breakup, Ref. [9] which has many references, compared DPPs derived using -matrix inversion and those from an -weighted TELP, see Section 5, and found marked differences. The DPP due to the breakup of the halo nucleus He has also been studied [63, 64, 65]: both the real and imaginary parts have remarkably long tails attributable to Coulomb breakup. The tail on the imaginary part is absorptive for fm but emissive for smaller ; the real part is attractive at large radii, but with a sharp change to repulsion at about 15 fm. Potentials are presented out to 60 fm. In Ref. [66] DPPs are compared for the breakup of Li, Be and B scattering from Ni. Breakup in which the Coulomb interaction plays a much smaller role was evaluated for protons scattering from He in Ref. [67]. A stimulus to the development of IP inversion was the discovery [68, 69, 70] that the coupling to deuteron channels appeared to have a large effect on proton scattering. This raised the question of what contribution this coupling makes to the nucleon OMP. An early application [71] of spin- [40] IP inversion presented the effects of finite range pd coupling on the real and imaginary, central and spin-orbit terms of the p + Ca OMP at 30.3 MeV. The effect of finite-range coupling (not included in the earlier work) on the DPP was shown and the individual and total contributions of lumped +, + and + pickup states was presented. Later studies of the contribution to the proton OMP of pd coupling, including non-orthogonality terms previously omitted, are referenced in Ref. 
[60]; these include cases of proton scattering from halo nuclei, most recently [72, 73]. In Ref. [74] it was found that the coupling mass-three channels had a major effect of deuteron elastic scattering; inversion revealed large repulsive DPPs. Subsequent development of CRC codes permit the inclusion of non-orthogonality corrections and finite range interactions, not included in Ref. [74], and, at the same time, spin-1 inversion leading to the had been developed. These advances were exploited in Ref. [75] which determined the real and imaginary, central, spin-orbit and tensor DPPs generated by coupling to mass-3 pickup channels for 52 MeV deuterons scattering from Ca. The volume integral of the real, central DPP is much smaller than before [74], but the magnitude point-by-point is not small, reflecting the wavy character of both real and imaginary components. This is indicative of -dependence, Sect. 7.4.4, and could not have been picked up by refitting the elastic scattering angular distributions from CC calculations, as in Ref. [61]. J-weighted inversion. Inversion has not been developed for spin and Ref.[75] introduced and tested a means of achieving inversion leading to a meaningful central potential for projectiles with large spin. The idea is to define a J-weighted S-matrix: that could be inverted in the usual way. It was found that for the spin-1 case studied in Ref.[75], the imaginary part of the J-weighted DPP was close to the imaginary part of the central DPP from the complete inversion. The real part was qualitatively reproduced. The J-weighted procedure was later applied to the breakup of Li, Be and B scattering from Ni mentioned above, Ref. [66]. 7.4.2 RGM, GCM and other few-body cases; exchange contributions Inversion can play a particular role in supporting our understanding of the scattering of few-nucleon systems. Even a scattering system as simple as a nucleon plus He becomes very complicated if all reaction channels, realistic NN interactions and a full account of exchange processes are to be included. At least until recently, the standard methods of calculating scattering observables would be the application of resonating group methods (RGM) or generator coordinate methods (GCM) [76, 77]. Even for systems of 4 or 5 nucleons, greatly simplified nucleon-nucleon interactions, generally omitting tensor terms, are employed. How are such theories to be tested in view of the inevitably approximate fits to data? Inversion provides a partial solution. In Section 7.2.2 we described the inversion of S-Matrices that had been fitted to experimental observables and noted that definite qualitative features of the potentials emerged, e.g. parity-dependent potentials with specific differences between the strength and range of the even-parity and odd-parity terms. Such features are a consequence of the various exchange terms that are precisely those aspects of the scattering process that RGM and GCM calculations treat correctly. The inversions to be described support very well the general conclusions of RGM and GCM calculations. In particular, it becomes possible to study the contributions of the many different exchange processes, apart from knock-on exchange (c.f. Section 7.4.4), that are included in RGM-GCM calculations, including those that are responsible for parity dependence. LeMere et al [78, 79] discussed in qualitative terms the nature of the contributions specific exchange terms would make. 
Moreover, Baye [80] has made predictions concerning the way the effect of exchange processes, including those leading to parity dependence, depends upon the masses of the interacting nuclei. It is important to check such things, particularly since, as we have already seen, parity dependence is substantial for neutrons or protons scattering on He and apparently important for p + O. Is it still important for n + Ca? Baye’s work would suggest not, but we need to know since its presence or absence would make a difference in precise checks of folding model theory for n + Ca. The p + He case is of particular interest for two reasons: it is less intractable than most, and clear parity dependence has been found from the analysis of experimental data, as reported in Section 7.2.2. RGM calculations of for p + He scattering, in some of which d + He configurations were included, were inverted using mixed case inversion. The calculations were below threshold so the potentials were real. The inversion confirmed the key result from the inversion of empirical R-matrix phase shifts, that the potential shows strong parity dependence such that the odd-parity potential is of considerably greater range and volume integral than the even-parity term. More elaborate p + He RGM calculations extending above the inelastic threshold with coupling to the d + He channels were presented in Ref. [82]. The contribution of the breakup of the deuteron in the d + He channels was studied. It was found that the reaction channels increased the Majorana term in the proton potential. In this work, the contribution of the p + He channel to the d + He interaction was determined for and channel spins. A challenge for energy-dependent IP inversion was the inversion [83] of from 0 to 25 MeV for p + He, calculated [84] using RGM. No reaction channels were included, but parity dependence was allowed for in the inversion. Spin-orbit terms were also included. Two specific questions were posed: (i) is the energy dependence of the real, central Wigner term consistent with the energy dependence of the global optical potential? and, (ii) is there a Majorana term that is less than that for p + He in a way that is consistent with the predictions of Baye [80]? Inversion yielded a single parity-dependent potential with an energy-dependent form that fitted over essentially the whole energy range, and verified both points (i) and (ii). Point (i) is interesting since it shows that the global OM energy dependence extends down to mass 6. Since there were no reaction channels in the RGM, it suggests that, as widely assumed but sometimes doubted, the bulk of the energy dependence of the nucleon-nucleus interaction is, indeed, a product of knock-on exchange. In Ref. [85], RGM S-matrices for the following cases were inverted and the potentials evaluated: p + , n + , p + He, n + H, n + Li, n + O and n + Ca. For the cases where the channel spin exceeded , potentials (without spin-orbit components) were determined separately for each value of the channel spin. The Majorana terms were generally large, and in cases where there were two values of channel spin the Majorana term was quite different for each value, usually in a way for which plausible physical reasons exist, in line with what had been suggested [78, 79]. This work verified the parity dependence of the p + O interaction, and also found very little parity dependence for n + Ca, as long as the partial wave was excluded. 
Apart from this last proviso, this is in close accord with the predictions of Baye [80]. The contributions, together and separately, of specific exchange terms for He + O, He + He and H + He, were studied by inversion in Ref. [86]. Of the various conclusions that were drawn, we mention just one: when a parity-independent purely phenomenological imaginary potential is included in the RGM calculation, in order to enable a more reasonable comparison with experiment, the imaginary part of the inverted potential is parity dependent. This might be due to the fact that the real potentials for each parity had differing degrees of non-locality and hence differing Perey effects. Perey effects are discussed in Section 7.4.4 below where the effect of non-locality on the inverted imaginary potential is noted. S-matrices, , from RGM calculations [87] for O + O elastic scattering, for seven energies from 30 MeV to 500 MeV, were inverted [88] using IP inversion. Since the direct (non-exchange) potential was fixed, this provided an energy-dependent local equivalent of the interaction generated by the exchange term. The authors [87] had determined from the the RGM using -weighted WKB inversion, so the IP inversions constituted a test of -weighted WKB inversion; this appeared to work quite well for energies of 150 to 300 MeV, but was not accurate at the lower energies studied, 30 - 59 MeV. At 150 MeV and below, the exchange terms are quite strongly repulsive at the nuclear surface but attractive at the nuclear centre. These effects are much smaller at the higher energies, there being no surface repulsion at 500 MeV. 7.4.3 Potential representation of KMT and comparable theories Other theoretical formulations of nuclear scattering also produce an S-matrix directly, without the intermediate stage of generating a potential. Such methods do not as yet give perfect fits to observables, so it is of interest to compare the local potential that would reproduce the of the formalism with the well-established phenomenological OMP. An example of such a theory is that due to Kerman, McManus and Thaler [89], KMT. More recent developments of KMT theory have included multiple scattering terms and Pauli blocking effects. In Ref. [12] first and second order KMT potentials for nucleon-O scattering are calculated at 100 and 200 MeV by applying IP inversion to the corresponding KMT S-matrix. The volume integrals and rms radii of the four components of the inverted potentials are given, facilitating a comparison with established global phenomenology as well as with the results of alternative theories based on local density approximation nuclear matter theory. Ref. [12] gives references to earlier KMT calculations by the first three authors and others. 7.4.4 Inverting from non-local and explicitly -dependent potentials Non-locality: IP inversion, whether energy dependent or not, is a convenient means of determining the local equivalent of a non-local potential. The non-locality in the nucleon-nucleus interaction that is due to inelastic processes is not well established, but the non-locality due to knock-on exchange (Fock term) is well known, and is the major source of the energy dependence of the local OMP. The energy independent Perey-Buck [90] non-local potential, which fits nucleon elastic scattering over a wide energy range, is thought to represent the exchange non-locality in a simple parameterized way. 
If the for a non-local potential can be calculated (this is straightforward), then IP inversion immediately yields the local-equivalent potential. If is calculated over a wide range of energies, then energy-dependent IP inversion immediately yields the energy dependence of the local equivalent potential. This has been done [91] for the Perey-Buck potential, leading to a calculated energy dependence that well matches the energy dependence of the empirical OMP. IP inversion was also applied to that had been calculated from a potential in which the real part was of Perey-Buck non-local form but in which the imaginary part was local. The resulting local potential had an imaginary term that was reduced compared to that included with the non-local real potential; the reduction factor was just the Perey [92] reduction factor of the wave function within the nucleus that is due to the non-locality of the real potential. The S-matrix for a microscopic non-local calculation of neutron-O scattering was inverted in Ref. [93]. It was found that the damping of the wave function within the nucleus, following from an exact microscopic treatment of exchange, matched very closely the damping associated with the phenomenological Perey-Buck potential: the original Perey effect [92]. This leaves surprisingly little room for damping from reaction and inelastic processes suggesting, maybe in line with Austern’s picture [94], that the non-locality arising from channel coupling redistributes flux, but this occurs without a global reduction in the magnitude of the wave function. -dependence: IP inversion also provides a means of finding the -independent equivalent to an explicitly -dependent potential; this must exist. Parity dependence is not the only form of -dependence that has been proposed, on various grounds, as a property of phenomenological OMPs (the Feshbach theoretical potential is -dependent and also see Refs. [95, 96, 97]). As we have emphasized, the S-matrix that is calculated from an -dependent potential can be inverted to yield an -independent equivalent. There is a motivation for doing so: if model-independent OM phenomenology produced a local potential of a form that was recognizably equivalent to an -dependent potential, that might be regarded as evidence that there is ‘really’ -dependence of that form. As an example, in Ref [13], from -dependent potentials that had been fitted to O + O scattering for energies from 30 MeV to 150 MeV were inverted. The real and imaginary -dependencies were of quite different forms according to quite different physical motivations. The inverted potentials varied with energy in a systematic way. However, the imaginary part in particular was quite unlike any found with standard optical model phenomenology, casting some doubt on the particular -dependence of the imaginary potential introduced by Chatwin et al [98]. In Ref. [99], local and -independent DPPs arising from pickup coupling were found that had a significant emissive region at the nuclear centre. The possibility that this points to an -dependent underlying DPP is supported by earlier [100] model calculations. In these model calculations, calculated for an explicitly -dependent local potential were inverted leading to an imaginary term with a strong emissive region at the nuclear centre. Note that such an emissive region does not imply unitarity breaking: an -dependent potential can be devised for which for all , and for which the -independent equivalent potential nevertheless has local emissive regions. 
Note that the emissive region reported in Ref. [99] was in the DPP, not the full potential. In Section 7.2.2, it was reported that nucleon-He scattering presented the following alternative: the potential exhibited either waviness or parity dependence. RGM calculations clearly imply a preference for parity dependence. Exchange processes that occur with much heavier scattering pairs of nuclei are also believed to lead to parity dependence. Michel and Reidemeister [101] presented strong evidence that the He - Ne interaction at 54.1 MeV contained a Majorana (i.e. ) term. Cooper and Mackintosh [102] applied IP inversion to the derived from the potential of Ref. [101] and found a parity-independent potential giving the same . It had a substantial oscillatory feature, suggesting that the ‘waviness or parity-dependence’ alternative is a general feature. In this case, a quite small Majorana term led to quite a considerably wavy -independent equivalent. This is not an argument against parity dependence, but, apart from showing the power of IP inversion, it also suggests that wavy potentials found in model independent OM fitting should be considered seriously as a clue to underlying parity dependence. Therefore, seeking perfect fits to elastic scattering data, even when wavy potentials result, should not be dismissed as ‘fitting elephants’ (see page 223 of Ref. [26]); all the information content of the experimental elastic scattering data can be given meaning — that is surely desirable. 8 Summary and outlook For a wide range of energies and projectile-target combinations, inversion, and also observable inversion, are straightforward, and would be routine if the possible applications were more widely appreciated. This review has concentrated on giving some account of the information concerning nuclear interactions that has been obtained, and left to other reviews the task of an exposition of the mathematical basis of inverse scattering and the attendant formal problems. We note here some present limitations to the application of inversion, with the hope that others might rise to the challenge of solving them: Limitation 1: Spin. At present successful inversions are routine for the following cases: spin-zero on spin-zero, spin- on spin zero; spin-1 on spin-zero. Spin-1 on spin-zero is currently limited to determining the interaction, the best established tensor interaction. This limitation on spin gets in the way of desired calculations mostly for the inversion of S-matrix elements from theory rather than for S-matrices extracted from experiments. For example, the calculation of the DPP for Li or Be breakup cannot exploit the full S-matrix from a CC calculation in which the spin of the projectile was treated properly, even for a spin-zero target. The multiplicity of possible tensor forms is discouraging, to say the least. However, a suitable weighted S-matrix, such as that defined in Eqn. 14, can determine at least the central potential implicit in the S-matrix output from a coupled channel calculation with particles of spin in the elastic channel. This approach was used for deuterons in Ref. [75] and for Li (1+), Be () and B (2) in Ref. [66], but no spin-orbit potentials could be extracted. Useful inversions can be performed in cases where both projectiles have spin: in Ref. 
[85] separate potentials (without spin-orbit interaction) were derived for each of the two values of channel-spin for p + He and n + Li scattering (in both cases, the Majorana term was very different for each value of channel spin.) However, full inversion for higher spin remains a challenge. Limitation 2: Coupled channel inversion. The full and practical solution to the problem remains elusive, though there have been proposed extensions of the NS method. The one fully successful coupled channel inversion procedure is the IP extension described above leading to the non-diagonal interaction for spin-1 projectiles. This suggests that further extensions are possible, although the profusion of possible non-diagonal potentials is challenging. Coupled channel inversion might make it possible to answer questions such as: how do coupled reaction channels [103] modify the deformation parameters that emerge in the analysis of inelastic scattering from deformed nuclei? Finally, a personal viewpoint: understanding the nucleon-nucleus interaction potential is of fundamental importance in nuclear physics. Much progress has been made in achieving a unified perspective for positive and negative energies. Nevertheless, some form of local density approximation is implicit in most studies and the extracted potentials are local and -independent. The non-locality due to knock-on exchange is included in an approximate way, but the role of other forms of non-locality, which are known to be present, is obscure at best. Moreover, there is little understanding of how the interaction depends upon the particular last-occupied orbitals or collectivity of individual nuclei. Coupled channel-plus-inversion offers scope for understanding the processes which occur when a nucleon, and particles in channels coupled to the nucleon channels, interact with the curved nuclear surface, with its density gradients.
Friday, April 20, 2012

Schrödinger has never met Newton

Sabine Hossenfelder believes that Schrödinger meets Newton. But is the story about the two physicists' encounter true? Yes, these were just jokes. I don't think that Sabine Hossenfelder misunderstands the history in this way. Instead, what she completely misunderstands is the physics, especially quantum physics. She is in good company. Aside from the authors of some nonsensical papers she mentions, e.g. van Meter, Giulini and Großardt, Harrison and Moroz with Tod, Diósi, and Carlip with Salzman, similar basic misconceptions about elementary quantum mechanics have been promoted by Penrose and Hameroff. Hameroff is a physician who, along with Penrose, prescribed supernatural abilities to the gravitational field. It's responsible for the gravitationally induced "collapse of the wave function" which also gives us consciousness and may even be blamed for Penrose's (not to mention Hameroff's) complete inability to understand rudimentary quantum mechanics, among many other wonderful things; I am sure that many of you have read the Penrose-Hameroff crackpottery and a large percentage of those readers even fail to see why it is a crackpottery, a problem I will try to fix (and judging by the 85-year-long experience, I will fail). It's really Penrose who should be blamed for the concept known as the Schrödinger-Newton equations.

So what are the equations? Sabine Hossenfelder reproduces them completely mindlessly and uncritically. They're supposed to be the symbiosis of quantum mechanics combined with the Newtonian limit of general relativity. They say:\[\begin{aligned} i\hbar \pfrac {}{t} \Psi(t,\vec x) &= \zav{-\frac{\hbar^2}{2m} \Delta + m\Phi(t,\vec x)} \Psi(t,\vec x) \\ \Delta \Phi(t,\vec x) &= 4\pi G m \abs{\Psi(t,\vec x)}^2 \end{aligned}\] Don't get misled by the beautiful form they take in \(\rm\LaTeX\) implemented by MathJax; the superficial beauty of the letters doesn't guarantee their validity. Sabine Hossenfelder and others immediately talk about mechanically inserting numbers into these equations, and so on, but they never ask a basic question: Are these equations actually right? Can we prove that they are wrong? And if they are right, can they be responsible for anything important that shapes our observations?

Of course, the second one is completely wrong; it fundamentally misunderstands the basic concepts in physics. And even setting aside the reasons why the second equation is completely wrong, these equations couldn't be responsible for anything important we observe – e.g. for well-defined perceptions after we measure something – because of the immense weakness of gravity (and because of other reasons).

Analyzing the equations one by one

So let us look at the equations, what they say, and whether they are the right equations describing the particular physical problems. We begin with the first one,\[ i\hbar \pfrac {}{t} \Psi(t,\vec x) = \zav{-\frac{\hbar^2}{2m} \Delta + m\Phi(t,\vec x)} \Psi(t,\vec x) \] Is it right? Yes, it is a conventional time-dependent Schrödinger equation for a single particle that includes the gravitational potential. When the gravitational potential matters, it's important to include it in the Hamiltonian as well. The gravitational potential energy is of course as good a part of the energy (the Hamiltonian) as the kinetic energy, given by the spatial Laplacian term, and it should be included in the equations. In reality, we may of course neglect the gravitational potential in practice.
When we study the motion of a few elementary particles, their mutual gravitational attraction is negligible. For two electrons, the gravitational force is more than \(10^{40}\) times weaker than the electrostatic force. Clearly, we can't measure the transitions in a Hydrogen atom with a relative precision of \(10^{-40}\). The "gravitational Bohr radius" of an atom that is only held gravitationally would be comparably large to the visible Universe because the particles are very weakly bound, indeed. Of course, it makes no practical sense to talk about energy eigenstates that occupy similarly huge regions because well before the first revolution (a time scale), something will hit the particles so that they will never be in the hypothetical "weakly bound state" for a whole period.

But even if you consider the gravity between a microscopic particle (which must be there for our equation to be relevant) such as a proton and the whole Earth, it's pretty much negligible. For example, the protons are running around the LHC collider and the Earth's gravitational pull is dragging them down, with the usual acceleration of \(g=9.8\,\,{\rm m}/{\rm s}^2\). However, there are so many forces that accelerate the protons much more strongly in various directions that the gravitational pull exerted by the Earth can't be measured. But yes, it's true that the LHC magnets and electric fields are also preventing the protons from "falling down". The protons circulate for minutes if not hours and as skydivers know, one may fall pretty far down during such a time.

Exceptional experiments in which the Earth's gravity has a detectable impact on the quantum behavior of particles are the neutron interference experiments, those that may be used to prove that gravity cannot be an entropic force. To describe similar experiments, one really has to study the neutron's Schrödinger equation together with the kinetic term and the gravitational potential created by the Earth. Needless to say, much of the behavior is obvious. If you shoot neutrons through a pair of slits, of course they will accelerate towards the Earth much like everything else so the interference pattern may be found again; it's just shifted down by the expected distance. People have also studied neutrons that are jumping on a trampoline. There is an infinite potential energy beneath the trampoline which shoots the neutrons up. And there's also the Earth's gravity that attracts them down. Moreover, neutrons are described by quantum mechanics which makes their energy eigenstates quantized. It's an interesting experiment that makes one sure that quantum mechanics does apply in all situations, even if the Earth's gravity plays a role as well, and that's where the Schrödinger equation with the gravitational potential may be verified.

I want to say that while the one-particle Schrödinger equation written above is the right description for situations similar to the neutron interference experiments, it already betrays some misconceptions by the "Schrödinger meets Newton" folks. The fact that they write a one-particle equation is suspicious. The corresponding right description of many particles wouldn't contain wave functions that depend on the spacetime, \(\Psi(t,\vec x)\). Instead, the multi-particle wave function has to depend on positions of all the particles, e.g. \(\Psi(t,\vec x_1,\vec x_2)\).
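The magnitudes quoted above are easy to check. Here is a back-of-the-envelope script (an editorial addition, not part of the original post) that evaluates the gravity-to-electrostatics ratio for two electrons, the "gravitational Bohr radius" of a gravitationally bound proton-electron pair, and the energy scale of the bouncing-neutron experiments; the Airy-zero formula for the bouncer levels is the standard textbook result for a linear potential above a hard wall, which I am assuming is the relevant idealization here.

```python
# Back-of-the-envelope check of the magnitudes quoted above (SI units, rounded constants).
G     = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
hbar  = 1.055e-34      # reduced Planck constant [J s]
k_e   = 8.988e9        # Coulomb constant [N m^2 C^-2]
e     = 1.602e-19      # elementary charge [C]
m_e   = 9.109e-31      # electron mass [kg]
m_p   = 1.673e-27      # proton mass [kg]
m_n   = 1.675e-27      # neutron mass [kg]
g     = 9.81           # Earth's surface gravity [m s^-2]

# 1. Gravity vs. electrostatics for two electrons (the distance cancels out):
ratio = (G * m_e**2) / (k_e * e**2)
print(f"F_grav / F_Coulomb for two electrons ~ {ratio:.1e}")       # ~ 2e-43, i.e. "more than 10^40 times weaker"

# 2. 'Gravitational Bohr radius' of a (p, e) pair bound only by gravity:
#    replace k_e e^2 -> G m_p m_e in the usual Bohr radius hbar^2 / (m_e k_e e^2).
a_grav = hbar**2 / (m_e * G * m_p * m_e)
print(f"gravitational Bohr radius ~ {a_grav:.1e} m")               # ~ 1e29 m, larger than the visible Universe

# 3. Energy scale of a neutron bouncing on a mirror in Earth's gravity
#    (linear potential + hard wall): E_n = (hbar^2 m g^2 / 2)^(1/3) * |a_n|,
#    with a_1 ~ -2.338 the first zero of the Airy function Ai.
airy_zero_1 = 2.338
E_1 = (hbar**2 * m_n * g**2 / 2.0)**(1.0 / 3.0) * airy_zero_1
print(f"lowest bouncing-neutron level ~ {E_1:.2e} J ~ {E_1 / e * 1e12:.2f} peV")   # ~ 1.4 peV
```

The point of the numbers is only to set the scale: gravity is detectable for neutrons in the Earth's field, but it is hopelessly weak between the microscopic particles themselves.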
However, the Schrödinger equation above already suggests that the "Schrödinger meets Newton" folks want to treat the wave function as an object analogous to the gravitational potential, a classical field. This totally invalid interpretation of the objects becomes lethal in the second equation.

Confusing observables with their expectation values, mixing up probability waves with classical fields

The actual problem with the Schrödinger-Newton system of equations is the second equation, Poisson's equation for the gravitational potential,\[ \Delta \Phi(t,\vec x) = 4\pi G m \abs{\Psi(t,\vec x)}^2 \] Is this equation right under some circumstances? No, it is never right. It is a completely nonsensical equation which is nonlinear in the wave function \(\Psi\) – a fatal inconsistency – and which mixes apples with oranges. I will spend some time explaining these points.

First, let me start with the full quantum gravity. Quantum gravity contains some complicated enough quantum observables that may only be described by the full-fledged string/M-theory but in the low-energy approximation of an "effective field theory", it contains quantum fields including the metric tensor \(\hat g_{\mu\nu}\). I added a hat to emphasize that each component of the tensor field at each point is a linear operator (well, operator distribution) acting on the Hilbert space. I have already discussed the one-particle Schrödinger equation that dictates how the gravitational field influences the particles, at least in the non-relativistic, low-energy approximation. But we also want to know how the particles influence the gravitational field. That's given by Einstein's equations,\[ \hat{R}_{\mu\nu} - \frac{1}{2} \hat{R} \hat{g}_{\mu\nu} = 8\pi G \,\hat{T}_{\mu\nu} \] In the quantum version, Einstein's equations become a form of the Heisenberg equations in the Heisenberg picture (Schrödinger's picture looks very complicated for gravity or other field theories) and these equations simply add hats above the metric tensor, Ricci tensor, Ricci scalar, as well as the stress-energy tensor. All these objects have to be operators. For example, the stress-energy tensor is constructed out of other operators, including the operators for the intensity of electromagnetic and other fields and/or positions of particles, so it must be an operator. If an equation relates it to something else, this something else has to be an operator as well.

Think about Schrödinger's cat – or any other macroscopic physical system, for that matter. To make the thought experiment more spectacular, attach the whole Earth to the cat so if the cat dies, the whole Earth explodes and its gravitational field changes. It's clear that the values of microscopic quantities such as the decay stage of a radioactive nucleus may imprint themselves on the gravitational field around the Earth – something that may influence the Moon etc. (We may subjectively feel that we have already perceived one particular answer but a more perfect physicist has to evolve us into linear superpositions as well, in order to allow our wave function to interfere with itself and to negate the result of our perceptions. This more perfect and larger physicist will rightfully deny that in a precise calculation, it's possible to treat the wave function as a "collapsed one" at the moment right after we "feel an outcome".)
Because the radioactive nucleus may be found in a linear superposition of distinct states and because this state is imprinted onto the cat and the Earth, it's obvious that even the gravitational field around the (former?) Earth is generally found in a probabilistic linear superposition of different states. Consequently, the values of the metric tensors at various points have to be operators whose values may only be predicted probabilistically, much like the values of any observable in any quantum theory.

Let's now take the non-relativistic, weak-gravitational-field, low-energy limit of Einstein's equations written above. In this non-relativistic limit, \(\hat g_{00}\) is the only important component of the metric tensor (the gravitational redshift) and it gets translated to the gravitational potential \(\hat \Phi\) which is clearly an operator-valued field, too. We get\[ \Delta \hat\Phi(t,\vec x) = 4\pi G \hat\rho(t,\vec x). \] It looks like the Hossenfelder version of Poisson's equation except that the gravitational potential on the left hand side has a hat; and the source \(\hat\rho\), i.e. the mass density, has replaced her \(m \abs{\Psi(t,\vec x)}^2\). Fine. There are some differences. But can I make special choices that will produce her equation out of the correct equation above?

What is the mass density operator \(\hat\rho\) equal to in the case of the electron? Well, it's easy to answer this question. The mass density coming from an electron blows up at the point where the electron is located; it's zero everywhere else. Clearly, the mass density is a three-dimensional delta-function:\[ \hat\rho(t,\vec x) = m \delta^{(3)}(\hat{\vec X} - \vec x) \] Just to be sure, the arguments of the field operators such as \(\hat\rho\) – the arguments that the fields depend on – are ordinary coordinates \(\vec x\) which have no hats because they're not operators. In quantum field theories, whether they're relativistic or not, they're independent variables just like the time \(t\); after all, \((t,x,y,z)\) are mixed with each other by the relativistic Lorentz transformations which are manifest symmetries in relativistic quantum field theories. However, the equation above says that the mass density at the point \(\vec x\) blows up iff the eigenvalue of the electron's position \(\hat X\), an eigenvalue of an observable, is equal to this \(\vec x\). The equation above is an operator equation. And yes, it's possible to compute functions (including the delta-function) out of operator-valued arguments. Semiclassical gravity isn't necessarily too self-consistent an approximation.

Clearly, the operator \(\delta^{(3)}(\hat X - \vec x)\) is something different from Hossenfelder's \(\abs{\Psi(t,\vec x)}^2\) – which isn't an operator at all – so her equation isn't right. Can we obtain the squared wave function in some way? Well, you could try to take the expectation value of the last displayed equation:\[ \bra\Psi \Delta \hat\Phi(t,\vec x)\ket\Psi = 4\pi G m \abs{\Psi(t,\vec x)}^2 \] Indeed, if you compute the expectation value of the operator \(\delta^{(3)}(\hat X - \vec x)\) in the state \(\ket\Psi\), you will obtain \(\abs{\Psi(t,\vec x)}^2\). However, note that the equation above still differs from the Hossenfelder-Poisson equation: our right equation properly sandwiches the gravitational potential, which is an operator-valued field, in between the two copies of the wave functions.
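To make the last statement concrete, here is a tiny numerical check (an editorial addition, not from the post) that the expectation value of \(\delta(\hat X - x)\) in a state \(\ket\Psi\) is indeed \(\abs{\Psi(x)}^2\); the delta function is approximated by a narrow Gaussian, and the sample wave function is an arbitrary choice.

```python
# Numerical illustration that <psi| delta(X_hat - x0) |psi> = |psi(x0)|^2 (1D toy case).
import numpy as np

y = np.linspace(-10, 10, 200001)
psi = np.pi**-0.25 * np.exp(-y**2 / 2)        # sample normalized wave function

def smeared_delta(y, x0, eps=1e-2):
    # narrow normalized Gaussian standing in for delta(y - x0)
    return np.exp(-(y - x0)**2 / (2 * eps**2)) / (np.sqrt(2 * np.pi) * eps)

x0 = 0.7
expectation = np.trapz(np.abs(psi)**2 * smeared_delta(y, x0), y)   # <psi| delta(X - x0) |psi>
exact = np.abs(np.pi**-0.25 * np.exp(-x0**2 / 2))**2               # |psi(x0)|^2
print(expectation, exact)                                          # both ~ 0.346
```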
Can't you just introduce a new symbol \(\Delta\Phi\), one without any hats, for the expectation value entering the left hand side of the last equation? You may, but it's just an expectation value, a number that depends on the state. The proper Schrödinger equation with the gravitational potential that we started with contains the operator \(\hat\Phi(t,\vec x)\) that is manifestly independent of the wave function (either because it is an external classical field – if we want to treat it as a deterministically evolving background field – or because it is a particular operator acting on the Hilbert space). So they're different things. At any rate, the original pair of equations is wrong.

Nonlinearity in the wave function is lethal

Those deluded people are obsessed with expectation values because they don't want to accept quantum mechanics. The expectation value of an operator "looks like" a classical quantity and classical quantities are the only physical quantities they have really accepted – and 19th century classical physics is the newest framework for physics that they have swallowed – so they try to deform and distort everything so that it resembles classical physics. An arbitrarily silly caricature of the reality is always preferred by them over the right equations as long as it looks more classical. But Nature obeys quantum mechanics. The observables we can see – all of them – are indeed linear operators acting on the Hilbert space. If something may be measured and seen to be equal to something or something else (this includes Yes/No questions we may answer by an experiment), then "something" is always associated with a linear operator on the Hilbert space (Yes/No questions are associated with Hermitian projection operators). If you are using a set of concepts that violate this universal postulate, then you contradict basic rules of quantum mechanics and what you say is just demonstrably wrong. This basic rule doesn't depend on any dynamical details of your would-be quantum theory and it admits no loopholes.

Two pieces of the wave function don't attract each other at all

You could say that one may talk about the expectation values in some contexts because they may give a fair approximation to quantum mechanics. The behavior of some systems may be close to the classical one, anyway, so why wouldn't we talk about the expectation values only? However, this approximation is only meaningful if the variations of the physical observables (encoded in the spread of the wave function) are much smaller than their characteristic values such as the (mean) distances between the particles which we want to treat as classical numbers, e.g.\[ \abs{\Delta \vec x} \ll O(\abs{\vec x_1-\vec x_2}) \] However, the very motivation that makes those confused people study the Schrödinger-Newton system of equations is that this condition isn't satisfied at all. What they typically want to achieve is to "collapse" the wave function packets. They're composed of several distant enough pieces, otherwise they wouldn't feel the need to collapse them. In their system of equations, two distant portions of the wave function attract each other in the same way as two celestial bodies do – because \(m \abs{\Psi}^2\) enters Poisson's equation for the gravitational potential as the classical mass density. They write many papers studying whether this self-attraction of "parts of the electron" or another object may be enough to "keep the wave function compact enough". Of course, it is not enough.
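How hopeless this is can be quantified with a rough estimate (again an editorial addition; the one-micron lobe separation is an arbitrary assumption) comparing the would-be gravitational self-attraction of a split electron wave function with the ordinary quantum kinetic-energy scale of a packet of the same size:

```python
# Rough scale estimate: gravitational 'self-attraction' of two lobes of an electron
# wave function a distance d apart, vs. the kinetic-energy scale of a packet of size d.
G    = 6.674e-11     # m^3 kg^-1 s^-2
hbar = 1.055e-34     # J s
m_e  = 9.109e-31     # kg

d = 1e-6             # assumed lobe separation: 1 micron

E_self_gravity = G * m_e**2 / d                 # gravitational energy of the two 'lobes'
E_kinetic      = hbar**2 / (2 * m_e * d**2)     # kinetic scale of a packet of size d

print(f"self-gravity scale : {E_self_gravity:.1e} J")              # ~ 5.5e-65 J
print(f"kinetic scale      : {E_kinetic:.1e} J")                   # ~ 6.1e-27 J
print(f"ratio              : {E_self_gravity / E_kinetic:.1e}")    # ~ 9e-39
```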
The gravitational force is extremely weak and cannot play such an essential role in the experiments with elementary particles. In Ghirardi-Rimini-Weber: collapsed pseudoscience, I have described somewhat more sophisticated "collapse theories" that are trying to achieve a similar outcome: to misinterpret the wave function as a "classical object" and to prevent it from spreading. Of course, these theories cannot work, either. To keep these wave functions compact enough, they have to introduce kicks that are so large that we are sure that they don't exist. You simply cannot find any classical model that agrees with observations in which the wave function is a classical object – simply because the wave function isn't a classical object and this fact is really an experimentally proven one as you know if you think a little bit. But what the people studying the Schrödinger-Newton system of equations do is even much more stupid than what the GRW folks attempted. It is internally inconsistent already at the mathematical level. You don't have to think about some sophisticated experiments to verify whether these equations are viable. They can be safely ruled out by pure thought because they predict things that are manifestly wrong. I have already said that the Hossenfelder-Poisson equation for the gravitational potential treats the squared wave function as if it were a mass density. If your wave function is composed of two major pieces in two regions, they will behave as two clouds of interplanetary gas and these two clouds will attract because each of them influences the gravitational potential that influences the motion of the other cloud, too. However, this attraction between two "pieces" of a wave function definitely doesn't exist, in a sharp contrast with the immensely dumb opinion held by pretty much every "alternative" kibitzer about quantum mechanics i.e. everyone who has ever offered any musings that something is fundamentally wrong with the proper Copenhagen quantum mechanics. There would only be an attraction if the matter (electron) existed at both places because the attraction is proportional to \(M_1 M_2\). However, one may easily show that the counterpart of \(M_1M_2\) is zero: the matter is never at both places at the same time. Imagine that the wave function has the form\[ \ket\psi = 0.6\ket \phi+ 0.8 i \ket \chi \] where the states \(\ket\phi\) and \(\ket\chi\) are supported by very distant regions. As you know, this state vector implies that the particle has 36% odds to be in the "phi" region and 64% odds to be in the "chi" region. I chose probabilities that are nicely rational, exploiting the famous 3-4-5 Pythagorean triangle, but there's another reason why I didn't pick the odds to be 50% and 50%: there is absolutely nothing special about wave functions that predict exactly the same odds for two different outcomes. The number 50 is just a random number in between \(0\) and \(100\) and it only becomes special if there is an exact symmetry between \(p\) and \((1-p)\) which is usually not the case. Much of the self-delusion by the "many worlds" proponents is based on the misconception that predictions with equal odds for various outcomes are special or "canonical". They're not. Fine. So if we have the wave function \(\ket\psi\) above, do the two parts of the wave function attract each other? The answer is a resounding No. The basic fact about quantum mechanics that all these Schrödinger-Newton and many-worlds and other pseudoscientists misunderstand is the following point. 
The wave function above doesn't mean that there is 36% of an object here AND 64% of an object there. (WRONG.) Note that there is "AND" in the sentence above, indicating the existence of two objects. Instead, the right interpretation is that the particle is here (36% odds) OR there (64% odds). (RIGHT.) The correct word is "OR", not "AND"! However, unlike in classical physics, you're not allowed to assume that one of the possibilities is "objectively true" in the classical sense even if the position isn't measured. On the other hand, even in quantum mechanics, it's still possible to strictly prove that the particle isn't found at both places simultaneously; the state vector is an eigenstate of the "both places" projection operator (product of two projection operators) with the eigenvalue zero. (The same comments apply to two slits in a double-slit experiment.) The mutually orthogonal terms contributing to the wave function or density matrix aren't multiple objects that simultaneously exist, as the word "AND" would indicate. You would need (tensor) products of Hilbert spaces and/or wave functions, not sums, to describe multiple objects! Instead, they are mutually excluding alternatives for what may exist, alternative properties that one physical system (e.g. one electron) may have. And mutually excluding alternatives simply cannot interact with each other, gravitationally or otherwise. Imagine you throw dice. The result may be "1" or "2" or "3" or "4" or "5" or "6". But you know that only one answer is right. There can't be any interaction that would say that because both "1" and "6" may occur, they attract each other which is why you probably get "3" or "4" in the middle. It's nonsense because "1" and "6" are never objects that simultaneously exist. If they don't simultaneously exist, they can't attract each other, whatever the rules are. They can't interact with one another at all! While the expectation value of the electron's position may be "somewhere in between" the regions "phi" and "chi", we may use the wave function to prove with absolute certainty that the electron isn't in between. The proponents of the "many-worlds interpretation" often commit the same trivial mistake. They are imagining that two copies of you co-exist at the same moment – in some larger "multiverse". That's why they often talk about one copy's thinking how the other copy is feeling in another part of a multiverse. But the other copy can't be feeling anything at all because it doesn't exist if you do! You and your copy are mutually excluding. If you wanted to describe two people, you would need a larger Hilbert space (a tensor product of two copies of the space for one person) and if you produced two people out of one, the evolution of the wave function would be quadratic i.e. nonlinear which would conflict with quantum mechanics (and its no-xerox theorem), too. These many-worlds apologists, including Brian Greene, often like to say (see e.g. The Hidden Reality) that the proper Copenhagen interpretation doesn't allow us to treat macroscopic objects by the very same rules of quantum mechanics with which the microscopic objects are treated and that's why they promote the many worlds. This proposition is what I call chutzpah. In reality, the claim that right after the measurement by one person, there suddenly exist several people is in a striking contradiction with facts that may be easily extracted from quantum mechanics applied to a system of people. 
The quantum mechanical laws – laws meticulously followed by the Copenhagen school, regardless of the size and context – still imply that the total mass is conserved, at least at a 1-kilogram precision, so it is simply impossible for one person to evolve into two. It's impossible because of the very same laws of quantum mechanics that, among many other things, protect Nature against the violation of charge conservation in nuclear processes. It's them, the many-worlds apologists, who are totally denying the validity of the laws of quantum mechanics for the macroscopic objects. In reality, quantum mechanics holds for all systems and for macroscopic objects, one may prove that classical physics is often a valid approximation, as the founding fathers of quantum mechanics knew and explicitly said. The validity of this approximation, as they also knew, is also a necessary condition for us to be able to make any "strict valid statements" of the classical type. The condition is hugely violated by interfering quantum microscopic (but, in principle, also large) objects before they are measured so one can't talk about the state of the system before the measurement in any classical language. In Nature, all observables (as well as the S-matrix and other evolution operators) are expressed by linear operators acting on the Hilbert space and Schrödinger's equation describing the evolution of any physical system has to be linear, too. Even if you use the density matrix, it evolves according to the "mixed Schrödinger equation" which is also linear:\[ i\hbar \ddfrac{}{t}\hat\rho = [\hat H(t),\hat \rho(t)]. \] It's extremely important that the density matrix \(\hat \rho\) enters linearly because \(\hat \rho\) is the quantum mechanical representation of the probability distribution, even the initial one. And the probabilities of final states are always linear combinations of the probabilities of the initial states. This claim follows from pure logic and will hold in any physical system, regardless of its laws. Why? Classically, the probabilities of final states \(P({\rm final}_j)\) are always given by\[ P({\rm final}_j) = \sum_{i=1}^N P({\rm initial}_i) P({\rm evolution}_{i\to j}) \] whose right hand side is linear in the probabilities of the initial states and the left hand side is linear in the probabilities of the final states. Regardless of the system, these dependences are simply linear. Quantum mechanics generalizes the probability distributions to the density matrices which admit states arising from superpositions (by having off-diagonal elements) and which are compatible with the non-zero commutators between generic observables. However, whenever your knowledge about a system may be described classically, the equation above strictly holds. It is pure maths; it is as questionable or unquestionable (make your guess) as \(2+2=4\). There isn't any "alternative probability calculus" in which the final probabilities would depend on the initial probabilities nonlinearly. If you carefully study the possible consistent algorithms to calculate the probabilities of various final outcomes or observations, you will find out that it is indeed the case that the quantum mechanical evolution still has to be linear in the density matrix. The Hossenfelder-Poisson equation fails to obey this condition so it violates totally basic rules of the probability calculus. 
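Both points above – that orthogonal pieces of a wave function are mutually exclusive alternatives rather than coexisting objects, and that the evolution of the density matrix must be linear – can be illustrated in a few lines. The following sketch (an editorial addition, not part of the original post) uses the state \(0.6\ket\phi+0.8i\ket\chi\) discussed earlier and a randomly chosen Hermitian Hamiltonian:

```python
# (i) For |psi> = 0.6|phi> + 0.8i|chi> with orthogonal |phi>, |chi>, the odds are
#     36% OR 64%, and the 'both places' projector annihilates the state.
# (ii) Unitary evolution of the density matrix, rho -> U rho U^dagger, is linear in rho.
import numpy as np
from scipy.linalg import expm

phi = np.array([1.0, 0.0])            # 'particle in the phi region'
chi = np.array([0.0, 1.0])            # 'particle in the chi region'
psi = 0.6 * phi + 0.8j * chi

P_phi = np.outer(phi, phi.conj())     # projector onto the phi region
P_chi = np.outer(chi, chi.conj())     # projector onto the chi region

print(np.vdot(psi, P_phi @ psi).real)         # 0.36
print(np.vdot(psi, P_chi @ psi).real)         # 0.64
print(np.linalg.norm(P_phi @ P_chi @ psi))    # 0.0 -> never 'at both places'

# Linearity check: evolving a mixture gives the mixture of the evolutions.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (A + A.conj().T) / 2                      # random Hermitian Hamiltonian (hbar = 1)
U = expm(-1j * H * 0.3)                       # evolution operator for t = 0.3

rho1 = np.outer(psi, psi.conj())
rho2 = np.outer(phi, phi.conj())
mix  = 0.25 * rho1 + 0.75 * rho2

evolve = lambda rho: U @ rho @ U.conj().T
print(np.allclose(evolve(mix), 0.25 * evolve(rho1) + 0.75 * evolve(rho2)))   # True
```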
Just to connect the density matrix discussion with a more widespread formalism, let us mention that quantum mechanics allows you to decompose any density matrix into a sum of terms arising from pure states,\[ \hat\rho = \sum_{k=1}^M p_k \ket{\psi_k}\bra{\psi_k} \] and one may study the individual terms, pure states, independently of others. When we do so, and we often do, we find out that the evolution of \(\ket\psi\), the pure states, has to be linear as well. The linear maps \(\ket\psi\to \hat U\ket\psi\) produce \(\hat\rho\to \hat U\hat\rho \hat U^\dagger\) for \(\hat\rho=\ket\psi\bra\psi\) which is still linear in the density matrix, as required. If you had a more general, nonlinear evolution – or if you represented observables by non-linear operators etc. – then these nonlinear rules for the wave function would get translated to nonlinear rules for the density matrix as well. And nonlinear rules for the density matrix would contradict some completely basic "linear" rules for probabilities that are completely independent of any properties of the laws of physics, such as\[ P(A\text{ or }B) = P(A)+P(B) - P(A\text{ and }B). \] So the linearity of the evolution equations in the density matrix (and, consequently, also the linearity in the state vector which is a precursor of the density matrix) is totally necessary for the internal consistency of a theory that predicts probabilities, whatever the internal rules that yield these probabilistic predictions are!

That's why two pieces of the wave function (or the density matrix) can never attract each other or otherwise interact with each other. As long as they're orthogonal, they're mutually exclusive possibilities of what may happen. They can never be interpreted as objects that simultaneously exist at the same moment. The product of their probabilities (and anything that depends on its being nontrivial) is zero because at least one of them equals zero. And the wave functions and density matrix cannot be interpreted as classical objects because it's been proven, by the most rudimentary experiments, that these objects are probability distributions or their precursors rather than observables. These statements depend on no open questions at the cutting edge of modern physics research; they're parts of the elementary undergraduate material that has been understood by active physicists since the mid 1920s. It now trivially follows that all the people who study Schrödinger-Newton equations are profoundly deluded, moronic crackpots. And that's the memo.

Single mom: totally off-topic

Totally off-topic. I had to click somewhere, not sure where (correction: e-mail tip from Tudor C.), and I was led to this "news article"; click to zoom in. Single mom Amy Livingston of Plzeň, 87, is making $14,000 a month. That's not bad. First of all, not every girl manages to become a mom at the age of 87. Second of all, it is impressive for a mom with such a name – who probably doesn't speak Czech at all – to survive in my hometown at all. Her having 12 times the average salary makes her achievements even more impressive. ;-)

1. Lubos, Your points are well taken: the Schrodinger-Newton equation is fundamentally flawed. Expanding on these issues, I'd like to know your views on the validity of: 1) the WKB approximation, 2) semiclassical gravity, 3) quantum chaos and quantization of classically chaotic dynamical systems?

2. Dear Ervin, thanks for listening.
All the entries on your list are obviously legitimate and interesting approximations (1,2) or topics that may be studied (3). That doesn't mean that all people say correct things about them and use them properly, of course. ;-) The WKB approximation is just the "leading correction coming from quantum mechanics" to classical physics. Various simplified Ansaetze may be written down in various contexts. Semiclassical gravity either refers to general relativity with the first (one-loop) quantum corrections; or it represents the co-existence of quantized matter fields with non-quantized gravitational fields. This is only legitimate if the gravitational fields aren't affected by the matter fields - if the spacetime geometry solves the classical Einstein equations with sources that don't depend on the microscopic details of the matter fields and particles which are studied in the quantum framework. The matter fields propagate on a fixed classical background in this approximation but they don't affect the background by their detailed microstates. Indeed, if the dependence of the gravitational fields on the properties of the matter fields is substantial or important, there's no way to use the semiclassical approximation. Some people would evolve the gravitational fields according to the expectation values of the stress-energy tensor but that's the same mistake as discussed in this article in the context of the Poisson-Hossenfelder equation. Classical systems may be chaotic - showing unpredictable behavior very sensitive to initial conditions. Quantum chaos is the study of the complicated wave functions etc. in systems that are the quantized ("hatted") analogues of classically chaotic systems. 3. Thanks Lubos. I also take classical approximations with a grain of salt. For instance, mixing classical gravity with quantum behavior is almost always questionable one way or another. Here is a follow-up question. What would you say if experiments on carefully prepared quantum systems could be carried out in highly accelerated frames of reference? Could this be a reliable way of falsifying predictions of semiclassical gravity, for example?
Suppose I have a wave function $\Psi$ (which is not an eigenfunction) and a time independent Hamiltonian $\hat{\mathcal{H}}$. Now, if I take the classical limit by taking $\hbar \to 0$ what will happen to the expectation value $\langle\Psi |\hat{\mathcal{H}}|\Psi\rangle$? Will it remain the same (as $\hbar = 1.0$) or will it be different as $\hbar\to 0$? According to the correspondence principle this should be equal to the classical energy in the classical limit. What do you think about this? Your answers will be highly appreciated. For a particle in the ground state of a box the expectation value of the Hamiltonian is $\frac{\pi^2\hbar^2}{2mL^2}$ which tends to zero when $\hbar \rightarrow 0$. –  richard Oct 31 '13 at 17:34 Related: physics.stackexchange.com/q/17651/2451 and links therein. –  Qmechanic Nov 1 '13 at 14:54 4 Answers [accepted answer] The above posters seem to have missed the fact that $\Psi$ is not an eigenfunction, but an arbitrary wavefunction. The types of wavefunctions we normally see when we calculate things are usually expressed in terms of eigenfunctions of things like energy or momentum operators, and have little to do, if anything, with classical behaviour (e.g. look at the probability density of the energy eigenstates for the quantum harmonic oscillator and try to imagine it as describing a mass connected to a spring). What you might want to do is construct coherent states which are states where position and momentum are treated democratically (uncertainty is shared equally between position and momentum). Then, the quantum number that labels your state might be thought of as the level of excitation of the state. For the harmonic oscillator, this is roughly the magnitude of the amount of energy in the state in that $E = \langle n \rangle \hbar\omega = |\alpha|^2 \hbar\omega$. If you naively take $\hbar \to 0$ then everything vanishes. But if you keep, say, the energy finite, while taking $\hbar \to 0$, then you can recover meaningful, classical answers (that don't depend on $\alpha$ or $\hbar$). (A short numerical sketch of this limit appears after this thread.) First of all I would like to thank all of you for sharing your valuable ideas. I have asked this question in connection with the coherent state. If you have pure classical phase-space variables (p,q) then you can find a coherent state with $\alpha=\frac{1}{2}(p+i q)$. Classical energy is given by the classical hamiltonian $H(p,q)$ and the quantum energy can be computed from $\langle\alpha|H|\alpha \rangle$. These classical and quantum energy values will be different due to zero-point energy. In this case as $\hbar \to 0$ it seems that quantum energy decreases. –  Sijo Joseph Nov 8 '13 at 1:31 For the case of a particle in a potential, $\hat{\mathcal H} = \frac{\hat{p}^2}{2m}+V({\mathbf x})$, let an arbitrary wavefunction be written in the form $$\Psi({\mathbf{x}},t) = \sqrt{\rho({\mathbf x},t)}\exp\left(i\frac{S({\mathbf x},t)}{\hbar}\right)\text{,}$$ where $\rho \geq 0$. Then it becomes a simple calculus exercise to derive: $$\Psi^*\hat{\mathcal{H}}\Psi = \rho\left[\frac{1}{2m}\left|\nabla S({\mathbf x},t)\right|^2 + V(\mathbf{x})\right] + \mathcal{O}(\hbar)\text{,}$$ where I'm omitting terms that have at least one power of $\hbar$. Since $\langle\Psi|\hat{\mathcal H}|\Psi\rangle$ is the spatial integral of this quantity, integrating this instead is what we want for an $\hbar\to 0$ limit of energy. [Edit] As @Ruslan says, the wavefunction would have to oscillate faster to have a kinetic term.
In the above, keeping $S$ independent of $\hbar$ means increasing the phase in the same proportion as $\hbar$ is lowered. Additionally, substituting this form for $\Psi$ into the Schrödinger equation gives, after similarly dropping $\mathcal{O}(\hbar)$ terms, $$\underbrace{\frac{1}{2m}\left|\nabla S({\mathbf x},t)\right|^2 + V(\mathbf{x})}_{{\mathcal H}_\text{classical}} + \frac{\partial S({\mathbf x},t)}{\partial t} = 0\text{,}$$ which is the classical Hamilton-Jacobi equation with $S$ taking the role of Hamilton's principal function. That's the right answer. UpVote. –  Felix Marin Nov 2 '13 at 9:48 I like this, thank you. I partially agree with you. Normally the quantum energy of a coherent state will be higher than the energy of the corresponding classical counterpart due to zero-point energy. As $\hbar\to0$ the zero-point energy vanishes, hence the quantum energy should depend on the $\hbar$ value. That gives me a contradiction. –  Sijo Joseph Nov 8 '13 at 1:51 A normal time-independent Hamiltonian looks like $\hat H=\hat T+\hat V$, where $\hat T=-\frac{\hbar^2}{2m}\nabla^2$ is the kinetic energy operator and $\hat V=V(\hat x)$ is the potential energy operator. As seen from these expressions, only the kinetic energy operator changes with $\hbar$. Now we can see that 1. The quantum mechanical expectation value of the particle's total energy is the sum of the expectation values of the kinetic and potential energies: $$\langle\Psi\left|\hat H\right|\Psi\rangle=\langle\Psi\left|\hat T\right|\Psi\rangle+\langle\Psi\left|\hat V\right|\Psi\rangle$$ 2. Taking $\hbar\to0$, we get $\hat T\to \hat 0\equiv0$. Now the expectation value of the particle's total energy becomes equal to the expectation value of its potential energy: $$\langle\Psi\left|\hat H_{\hbar=0}\right|\Psi\rangle=\langle\Psi\left|\hat V\right|\Psi\rangle$$ From this follows the immediate answer: no, the expectation value will not remain the same. An interesting result is that for any smooth wavefunction the expectation value of the kinetic energy is zero when $\hbar$ is zero. This implies that for the classical limit the wavefunction must oscillate infinitely fast (i.e. have zero wavelength) to remain at the same total energy. As you make $\hbar$ smaller, the state with a given total energy gets a larger quantum number - i.e. becomes more excited. Yes, this can be answered using a classical perspective. We all know the electromagnetic or optical equation: $$ E =\nu h = \omega \hbar \longrightarrow 0 = \omega 0 $$ As Richard has indicated, the answer to this can be produced from a visit to wiki, "the Hamiltonian is commonly expressed as the sum of operators corresponding to the kinetic and potential energies" $$ \mathcal{ \hat H } =\hat T + \hat V = \frac{\hat p^2}{2 m}+V = V - \frac{ \hbar^2 \nabla^2 }{2m} $$ For this case: $$\mathcal{ \hat H } \rightarrow \hat V = V=0$$ "V" is just the potential the system is placed at, and for our universe we can assume V=0. $$\Psi=\Psi( \vec{r} ) \quad\text{and thus:}\quad \mathcal{ \hat H } \mid \Psi \rangle = i \hbar \frac{\partial \Psi }{\partial \vec{r}}$$ $$ \langle \Psi \mid \mathcal{ \hat H } \mid \Psi \rangle =\int \Psi^* \mathcal{ \hat H } ( \Psi ) \, d \vec{r} = \int \Psi^* i \hbar ( \Psi ' ) \, d \vec{r} $$ So it does not matter what Psi is or what the derivative of Psi over some dimension is or what dimensions Psi is existent in, or what the complex conjugate of Psi is or what limits we integrate over.
The solution is a multiple of h.
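To put some numbers behind the accepted answer, here is a small sketch assuming only the textbook coherent-state result ⟨H⟩ = ℏω(|α|² + 1/2) for a harmonic oscillator (the values of ω, E and the fixed α are made up): holding α fixed sends the energy to zero with ℏ, while holding the energy E fixed forces |α|² ≈ E/(ℏω) to grow and leaves only the vanishing zero-point offset.

```python
import numpy as np

omega = 1.0       # oscillator frequency (illustrative units)
E = 1.0           # classical energy we want to keep fixed

for hbar in (1.0, 0.1, 0.01, 0.001):
    # (a) hold alpha fixed: the quantum energy simply scales away with hbar
    alpha_fixed = 2.0
    E_fixed_alpha = hbar * omega * (abs(alpha_fixed) ** 2 + 0.5)

    # (b) hold the energy fixed: |alpha|^2 = E/(hbar*omega) grows as hbar -> 0
    alpha_sq = E / (hbar * omega)
    E_coherent = hbar * omega * (alpha_sq + 0.5)   # = E + hbar*omega/2 -> E

    print(f"hbar={hbar:7.3f}  fixed-alpha energy={E_fixed_alpha:8.4f}  "
          f"fixed-E energy={E_coherent:8.4f}")
```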
Disclaimer: I do not know a whole terrible lot about the intricacies of either chaos theory or quantum mechanics, let alone the combination of the two; this is more a philosophical thing than a scientific one, and I know I get a lot of things wrong (on both sides). Further disclaimer (thanks to ariels for the information): The 'snapshot' mentioned above is a well defined object in dynamics (its mathematical form containing firm proofs and a specific ontology). However, I think the point below still stands. Though the 'snapshot' as defined mathematically may be vastly different than the 'snapshot' the lay person is familiar with, I think Feyerabend would still argue that the very choosing of the term 'snapshot' is a metaphorical/rhetorical one, that cannot be encompassed by an easy rationality... Applying Philosophy of Science Feyerabendian and Lakatosian analyses of Quantum Chaos I will be discussing an article entitled “Chaos on the Quantum Scale” by Mason A. Porter and Richard L. Liboff from the November-December 2001 issue of American Scientist. The article discusses recent advances in attempts to model systems that behave chaotically on the quantum (sub-atomic) scale. It will be helpful to briefly summarize the main points of the article: The first few introductory paragraphs relate quantum mechanics and chaos theory by placing emphasis on their respective uses of uncertainty. From this common point of uncertainty, the authors state that because scientists seem to ‘find’ chaotic phenomena at all scales, they cannot rule out the possibility of chaos at the sub-atomic level. The next section of the article is a brief history of chaos theory that describes the early work of Henri Poincaré and mentions the later work in the 1960’s by meteorologist Edward Lorenz. They then explain that chaos has been found in so many disparate disciplines of science, and once again reiterate that they cannot rule it out at the quantum level. Here they also mention possible applications of such quantum level chaos in nanotechnology. From here they move into the largest section of the article, the billiard-themed thought experiment/model. They move from a simple two-dimensional billiard table to increasingly more chaotic and quantum-like billiard tables. There is a two-dimensional table with a circular rail, a spherical ‘table’, a spherical table with wave-particles as ‘balls’, and finally, a spherical table with an oscillating boundary and with wave-particles of different frequencies. Within this section, they also explain the more technical aspects of their attempt to model quantum chaos. They explain their plotting methods (the Poincaré section) as well as their mathematical methods (the Schrödinger equation and Hamiltonians). With the final few examples they show us that they cannot as yet model true quantum chaos, but only semi-quantum chaos (which requires mathematics from the realm of classical physics as well as quantum mechanics). After this admission, they go on to describe in detail future applications that successful quantum chaotic modeling will have in nanotechnology, from superconducting quantum-interference devices (SQUIDs) to carbon nanotubes. The final sentence of the article sums up the general attitude of the authors: “As we have shown… this theory possesses beautiful mathematical structure and the potential to aid progress in several areas of physics both in theory and in practice” (Porter 537).
I shall now attempt to analyze the article in light of two very different ‘theories’ (though one can certainly not firmly be called a ‘theory’): namely, those of Paul Feyerabend and Imre Lakatos. I will begin my discussion with Feyerabend’s thought, and then move on to Lakatos. After these analyses, I will engage both authors with each other, and attempt to bring out certain problems in each of their ‘theories’ that I see myself. Paul Feyerabend introduces the Chinese Edition to his book Against Method by stating his thesis that: the events, procedures and results that constitute the sciences have no common structure; there are no elements that occur in every scientific investigation but are missing elsewhere. Concrete developments… have distinct features and we can often explain why and how these features led to success. But not every discovery can be accounted for in the same manner, and procedures that paid off in the past may create havoc when imposed on the future. Successful research does not obey general standards; it relies now on one trick, now on another…(AM 1). So, we can (and do) explain why certain scientific developments/revolutions do occur, but we should not expect these explanations to bud into theories, and we should definitely not expect that our explanations should apply in all cases. This inability for universally applicable theories to be universally applied is not a result of our inability to hit upon the correct theory, but is a result of the non-uniform character of what we call ‘science’. Science is not a homogenous enterprise. It comprises everything from sociology to quantum mechanics. Before we can expect to have an absolute theory (which Feyerabend thinks is neither possible nor desirable) we would have to have an absolute definition of what ‘science’ is. (Here we can see the influence of Wittgenstein’s idea of language games on Feyerabend’s thought). Perhaps science isn’t something we can have a theory about. So, it being understood that Feyerabend believes that ‘science’ is not homogenous, and that we can only explain individual cases with individual criteria, what processes would he think applicable in the article at hand? Obviously this is a difficult question to answer. I think a fruitful way of approaching the task is through a very un-Feyerabendian process. By seeing what he has done in the past (e.g. in his previous analyses of scientific ‘developments’) we may be able to surmise what he would be likely to note in our particular example. In Feyerabend’s analysis of Galileo (specifically in chapter 7 of Against Method) he emphasizes the role of rhetoric, and ‘propaganda’ in scientific change. He states that: Galileo replaces one natural interpretation by a very different and as yet (1630) at least partly unnatural interpretation. How does he proceed? How does he manage to introduce absurd and counterinductive assertions, such as the assertion that the earth moves, and yet get them a just and attentive hearing? One anticipates that arguments will not suffice - an interesting and highly important limitation of rationalism – and Galileo’s utterances are indeed arguments in appearance only. For Galileo uses propaganda (AM 67). So it seems that an analysis of non-argumentative (rhetorical) uses of language aided Feyerabend in his discussion of Galileo. Thus, one possibly fruitful method of analysis may be to search out similar uses of language in our article. Which is precisely what I will do.
Here is a good example of the use of non-rational, non-argumentative means of convincing someone of your point: The trail of evidence towards a commingling of quantum mechanics and chaos started late in the 19th century, when … Henri Poincaré started working on equations to predict the positions of the planets as they rotated around the sun (Porter 532). Here we are led to believe by Porter/Liboff that Poincaré’s work is part of a ‘trail of evidence’ that provides support for their work (‘the commingling of quantum mechanics and chaos’). By the appeal to an accepted authority (it is generally accepted in the chaos community that Poincaré is the ‘father of chaos theory’) we are supposed to lend further credence to their own work (though, as we are told in the last portion of the article, this work has not provided a true connection between the two theories). But, is there, in Poincaré’s work any evidence of this commingling of chaos and quantum mechanics? Hardly. The ‘evidence’ they refer to is simply the birth of chaos theory. If we accept their claim, one might analogously state that my birth contains ‘evidence’ for whom I will marry in the future. (Putting aside genetic predisposition toward certain possible mates, this is absurd.) We cannot (rationally) justify the claim that the birth of chaos theory provides evidence for the future ‘commingling’ of that theory with quantum mechanics. It does, however, provide a nice segue for the authors into a historical summary of the birth of chaos theory. Rather than an argument, it is a literary device (like exaggeration, alliteration, etc.) that aids both the achievement of the authors’ goal (describing quantum chaos) and making the text itself more fluid. Staunch rationalists would argue (Feyerabend might say) that this example mistakes a literary device for a scientific argument, and that if we simply separated the two, the problem would dissolve. Feyerabend’s position, however, is that we are unable to separate the two. He states in Against Method: That interests, forces, propaganda and brainwashing techniques play a much greater role than is commonly believed in …the growth of science, can also be seen from an analysis of the relation between idea and action. It is often taken for granted that a clear and distinct understanding of new ideas precedes, and should precede, their formulation and institutional expression. (An investigation starts with a problem, says Popper.) First, we have an idea, or a problem, then we act, i.e. either speak, or build, or destroy. Yet this is certainly not the way in which small children develop. They use words … they play with them, until they grasp a meaning that has so far been beyond their reach… There is no reason why this mechanism should cease to function in the adult. We must expect, for example, that the idea of liberty could be made clear only by means of the very same actions, which were supposed to create liberty (AM 17). Putting aside the theory of language acquisition proposed here, we see that Feyerabend believes that the form of our investigation is just as important as the content or result of it. Thus, we cannot understand an argument separately from the language it is phrased in, language that often contains suggestive (propagandistic) phrases. In other words, what you say is often inseparable from how you say it. Analogies to real-world objects are also used by Porter/Liboff.
For example: “A buckyball has a soccer-ball shape…” (Porter 536); “Nanotubes can also vibrate like a plucked guitar string…” (Porter 537); and, “Such a plot represents a series of snapshots of the system under investigation” (Porter 534). These analogies appear to be used simply to enhance the more abstract qualities of the quantum-chaotic world the authors are describing, and make them more understandable. But, it seems there is more going on here. If we view the article in the Feyerabendian sense that I have been developing above, the choice of metaphor can also affect the readers’ conception of the ‘ideas’ that the authors are attempting to put across. In particular, the ‘snapshot’ analogy seems suggestive to me. What the authors describe as ‘snapshots’ are Poincaré sections taken from higher-than-three dimensional systems. In effect, two-dimensional plots that are, by a mathematical process, abstracted from ‘multi-dimensional masses.’ These are possibly some of the most theoretical objects ever created, yet the authors describe them as ‘snapshots’. Obviously there are qualities of the Poincaré section that lend it to the comparison: both a snapshot and a Poincaré section are thought to be reports of a particular time and space. But, other aspects of the comparison may (hopefully, for Porter/Liboff) lead the reader into accepting highly theoretical concepts as real objects, more so than they would have without the analogy. Obviously the creation of a photographic snapshot is itself based on theory, but it is one that we use (and accept) in everyday life, one that we accept without reservations. Not only that, but the real-life snapshot (as opposed to the Poincaré section snapshot) represents things which we already accept as existing in the real world. In comparing the Poincaré section to a snapshot, the authors attempt to further solidify the reality of the objects that the section represents. Rather than seeing the n-dimensional objects of the Poincaré section as abstract objects, we are now more inclined to picture them as objects like our vacation slides, or wedding photos. Imre Lakatos’ great contribution to the history and philosophy of science (and the historiography of science) is the concept of the research programme. As a general illustration of the role of a research programme, the following quote may be helpful: the great scientific achievements are research programmes which can be evaluated in terms of progressive and degenerating problemshifts; and scientific revolutions consist of one research programme superseding (overtaking in progress) another (Lakatos 115). How can we apply such a methodology to the emergence of quantum-chaos? Well, to start with, we might ask just what research programme, or programmes we are working with. Are quantum mechanics, chaos theory and quantum-chaos all individual research programmes, and, if so, how do we explain the emergence of quantum-chaos (a theory that contains elements of both quantum mechanics and chaos theory) in relation to the other two? I shall attempt to answer these two questions in order. To answer the first, we should define more firmly what Lakatos means by the term ‘research programme’.
He states that: The basic unit of appraisal must be not an isolated theory or conjunction of theories but rather a ‘research programme’, with a conventionally accepted (and thus by provisional decision ‘irrefutable’) ‘hard core’ and with a ‘positive heuristic’ which defines problems, outlines the construction of a belt of auxiliary hypotheses, foresees anomalies and turns them victoriously into examples, all according to a preconceived plan. The scientist lists anomalies, but as long as his research programme sustains its momentum, he may freely put them aside. It is primarily the positive heuristic of his programme, not the anomalies, which dictate the choice of his problems (Lakatos 116). So, in order to determine whether or not our three ‘categories’ can be aptly described as research programmes they must have a ‘hard core’ (which I take to mean principles or examples that one has to accept in order to work within the research programme), and also a ‘positive heuristic’ that determines what problems will be addressed (and how to address them). For brevity’s sake I shall limit my discussion to the ‘hard core’ and the problem-determining function of the positive heuristic while ignoring the role of anomalies in negative determination of problems (a role that Lakatos, unlike Popper, believes is secondary to that of the positive heuristic). Quantum mechanics definitely seems to have a ‘hard core’ that its adherents agree is irrefutable, and essential to its elaboration. Historical examples of such an irrefutable core can be found in papers (from the late 19th century to the first quarter of the 20th) by Planck, Bohr, Einstein and others. These papers contain principles that form the unshakeable core of quantum mechanics even now. Here is just one example, which should suffice to illustrate the point: Today we know that no approach which is founded on classical mechanics and electrodynamics can yield a useful radiation formula. … Planck in his fundamental investigation based his radiation formula…on the assumption of discrete portions of energy quanta from which quantum theory developed rapidly (Einstein 63). So, quantum mechanical theory develops directly from Planck’s assumption of quanta. Although this is an oversimplification, it does illustrate that there are basic assumptions which quantum theorists are unwilling to sacrifice. We have our ‘hard core’, now the question is: does quantum mechanics have its own ‘positive heuristic’? I think the easiest way to answer this is to rephrase the question slightly: has quantum mechanics generally determined its own problems positively (i.e. set out to solve them) before they are negatively determined by emergent anomalies? Obviously searching out the ‘general’ answer to this question is well beyond the scope of this essay, but finding a few examples can at least allow us to provisionally classify quantum mechanics as a research programme. One example is the full, and accurate, derivation of Planck’s law. Planck proposed the idea of quanta (discrete units of energy) in 1900 and the perfection of a law describing this idea was worked on until 1926. The idea of quanta was proposed as a basic tenet of quantum mechanics (it was ‘anomalous’ only for the then degenerating research programme of classical mechanics), though it could not be perfectly derived. So, setting it up as a problem, quantum mechanics attempted to ‘solve’ it (and eventually did). 
The problem of splitting the atom, though it may have been motivated by outside political factors, was internally posed to quantum mechanics as well, and consequently solved as ‘predicted’ by theory. Undoubtedly, then, Lakatos would define quantum mechanics as a research programme, and not merely a theory contained within a larger research programme. Can the same be said of chaos theory? Well, chaos theory seems to have its own ‘hard core’. This much we can see from Porter/Liboff. The theory’s basic assumption is that some phenomena… depend intimately on a system’s initial conditions, so that an imperceptible change in the beginning value of a variable can make the outcome of a process impossible to predict (Porter 532). All applications of chaos theory work outward from this core principle, which is also historically situated (in the article) through the work of Poincaré: Poincaré started working on equations to predict the positions of the planets as they rotated around the sun… Note the starting positions and velocities, feed them into a set of equations based on Newton’s laws of motion, and the results should predict future positions. But the outcome turned Poincaré’s expectations upside down. With only two planets under consideration, he found that even tiny differences in the initial conditions… elicited substantial changes in future positions (Porter 532). So, like quantum mechanics, the hard core of chaos is situated historically in a few irrefutable examples and principles. For quantum mechanics some examples of the core principles are the Heisenberg uncertainty principle and Planck’s assumption of discrete quanta. The individuals most often recognized historically as exemplars of quantum mechanical theory are Einstein, Bohr, Born, Ehrenfest, to name a few. These examples are constantly cited and referred to both pedagogically, and in scientists’ description of the birth of their field. Chaos theory’s core principle is that we cannot accurately predict the future state of a dynamical (i.e. chaotic) system. This principle is exemplified in the early work of Poincaré (which is generally seen as proto-chaotic) and the later meteorological studies of Lorenz (who is also mentioned by Porter/Liboff). Now we move on to the question of whether or not chaos theory has a positive heuristic, which determines the problems to be solved. It seems, at least prima facie (which is as far as such a limited study can go) that, unlike quantum mechanics (whose scope is internally limited to the ‘quantum realm’), chaos theory has the potential to be applied to any system. In this respect, can it be considered a research programme? If it has historically been applied only within other research programmes (meteorology, electrodynamics, planetary motion, to name only a few mentioned in the article itself) it does not seem plausible that it can define its own problems and attempt to solve them in seclusion from other research programmes. Rather than a research programme, I propose that chaos theory is a self-contained theory (a modeling or mathematical tool) that functions within a variety of established and independent research programmes. On this view, it would appear that quantum-chaos, far from being an independent research programme, is the result of a development that is internal to the progressive research programme of quantum mechanics. Quantum-chaos is not an entirely new system of ideas, but a growth of new ideas within the boundaries of the quantum realm.
That is, without quantum mechanics, there would be no realm in which to create quantum-chaos, and no ‘rules’ with which to describe it. Critique of Feyerabend and Lakatos Now that we have seen a few of the ideas of Feyerabend and Lakatos in application (albeit forcefully) I shall move on to a critical engagement of the two, playing off their views (as well as my own) against one another. I will start with Lakatos. It seems that though the research programme is a valuable historiographical lens with which to view scientific history, it has obvious limitations. Although it enables the historian of science to encompass more examples than something like (what Lakatos calls) a ‘conventionalist’ historiography, it is by no means all encompassing. The main problem that I see with his methodology is one that Lakatos states himself. The methodology of research programmes – like any other theory of scientific rationality – must be supplemented by empirical-external history. No rationality theory will ever solve the problems like why Mendelian genetics disappeared in Soviet Russia in the 1950’s, or why certain schools of research into genetic racial differences or into the economics of foreign aid came into disrepute in the Anglo-Saxon countries in the 1960’s… (Lakatos 119). So, like most other rationalist reconstructions of the history of science, his attempt must be supplemented by psychological, sociological and other explanations. The difference between a falsificationist like Popper and someone like Lakatos is that Lakatos at least admits that there are other factors in the history of science than rational ones. But, for a rationalist project, whose aim is to explain all scientific change, this fundamental problem simply cannot be overcome. The problem is that the human agents in science (who, despite any talk of a ‘third world’ are key agents in scientific change) are never fully, or exclusively, rational. If we are bound by a purely rational reconstruction of the history of science, then the irrational in science (which Lakatos admits exists) will always elude our methodological understanding. Lakatos denies that any theory of scientific rationality can succeed in this task. The problem of irrationality in science is one that I believe Feyerabend can overcome more easily. To him it seems that if a completely rational reconstruction (based on the rigorous application of a specific ‘system’) is bound to fail, then should we not look at the possibility of an irrational, even non-systematic explanation of the history of science? Obviously such an explanation could not be termed a ‘methodology’ but through something like it, we could attempt to explain any historical stage of science. Such an irrational, anti-methodological approach is precisely what Paul Feyerabend calls for. Feyerabend’s explanations do not rely on the constancy of a specific method or concept, but fluctuate based on the particular situation they are attempting to ‘explain’. When talking about a series of lectures he had given at the London School of Economics, Feyerabend sketches out for us his intent: My aim in the lectures was to show that some very simple and plausible rules and standards which both philosophers and scientists regarded as essential parts of rationality were violated in the course of episodes (Copernican Revolution; triumph of the kinetic theory; rise of quantum theory; and so on) they regarded as equally essential. 
More specifically I tried to show (a) that the rules (standards) were actually violated and that the more perceptive scientists were aware of the violations; and (b) that they had to be violated. Insistence on the rules would not have improved matters, it would have arrested progress (SFS 13). Feyerabend suggests here that not only are rules not always fruitful in science, but that strict adherence to those rules sometimes hinders its progress. The same can be said about historiography of science. If we insist on strict adherence to specific rules in all cases then not only are we going to get it ‘wrong’, but we may make it harder to get it ‘right’ (i.e. more useful, less problematic historical descriptions). So, we have discussed a specific problem with Lakatos’ methodology of research programmes and ended up at the seeming inadequacy of all methodologies. But neither I nor Feyerabend believes that there are never times when rules can be applied fruitfully to historical analyses. Indeed, Lakatos’ concept of the research programme seems to provide criteria that are more widely applicable than many others proposed before it. It does not fall prey to the rash assumption that science is strictly rational, though it admits that science’s rationality is all that it can explain. This is precisely what Feyerabend wants the rationalists (and particularly the other LSE rationalists) to admit: that we cannot always fit history into the box of rationality (regardless of whether the box is that of falsificationism or the methodology of research programmes). So, on the one hand, Lakatosian research programmes explain more than any other rationalist reconstruction can, but on the other hand, Lakatos admits that (unlike Feyerabend) he cannot explain irrationality in science. How can I criticize Feyerabend? If I accused him of incoherence, or self-contradiction, he would take it as a compliment. If one can accept any standard at any time, depending upon the circumstances, then of course one can seem to be contradictory, he would say. I tend to agree with Feyerabend that no rules can be applied absolutely, for all time. But, one might criticize him in his specific historical analyses. For instance, his emphasis on the rhetorical (non-rational) use of language and irrational ‘methods’ of Galileo and Copernicus may ignore some of the important rational features in their work. Though this problem may be inherent to an attack on rationalist reconstructions of science, I think that Feyerabend often ignores salient features of history simply because they are instances of rationality. That being said, I believe that Feyerabend’s philosophy of science provides us with the mindset to build a number of very unique perspectives on the history of science. He tells us that no method can work absolutely, but some methods can work sometimes. Our task is to think for ourselves and create our own interpretations of science, and not to rely on the grandiose systems of our predecessors. Bennett, Jesse. The Cosmic Perspective (1st edition), Addison Wesley Longman, 1999, New York. “Chaos on the Quantum Scale”, Mason A. Porter and Richard L. Liboff, pp. 532–537 in American Scientist, Volume 89, No. 6, November-December 2001. “Chaos Theory and Fractals”, Jonathan Mendelson and Elana Blumenthal, 2000-2001, URL: http://www.mathjmendl.org/chaos/index.html “Early Quantum Mechanics”, J J O'Connor and E F Robertson, 1996, URL: http://www-history.mcs.st-andrews.ac.uk/history/HistTopics/The_Quantum_age_begins.html Einstein, Albert.
“On the Quantum Theory of Radiation”, pp. 63–77 in Sources of Quantum Mechanics. Ed. B.L. Van der Waerden, 1968, Dover Publications, New York. Feyerabend, Paul. Against Method, Verso, 1988 [1975], New York. (Referred to in the text as AM) Feyerabend, Paul. Science in a Free Society, New Left Books, 1978, London (referred to in the text as SFS). Lakatos, Imre. “History of Science and its Rational Reconstructions”, pp. 107–127 in Scientific Revolutions, ed. Ian Hacking, 1981, Oxford University Press, New York. Wittgenstein, Ludwig. Philosophical Investigations. Translated by G.E.M. Anscombe (No publishing information provided).
Davisson–Germer experiment The Davisson–Germer experiment was a physics experiment conducted by American physicists Clinton Davisson and Lester Germer in the years 1923–1927,[1] which confirmed the de Broglie hypothesis. This hypothesis advanced by Louis de Broglie in 1924 says that particles of matter such as electrons have wave-like properties. The experiment not only played a major role in verifying the de Broglie hypothesis and demonstrated the wave–particle duality, but also was an important historical development in the establishment of quantum mechanics and of the Schrödinger equation. History and overview According to Maxwell's equations in the late 19th century, light was thought to consist of waves of electromagnetic fields and matter was thought to consist of localized particles. However, this was challenged in Albert Einstein's 1905 paper on the photoelectric effect, which described light as discrete and localized quanta of energy (now called photons), which won him the Nobel Prize in Physics in 1921. In 1924 Louis de Broglie presented his thesis concerning the wave–particle duality theory, which proposed the idea that all matter displays the wave–particle duality of photons.[2] According to de Broglie, for all matter and for radiation alike, the energy of the particle was related to the frequency of its associated wave by the Planck relation: E = hν. And the momentum of the particle was related to its wavelength by what is now known as the de Broglie relation: λ = h/p, where h is Planck's constant. An important contribution to the Davisson–Germer experiment was made by Walter M. Elsasser in Göttingen in the 1920s, who remarked that the wave-like nature of matter might be investigated by electron scattering experiments on crystalline solids, just as the wave-like nature of X-rays had been confirmed through X-ray scattering experiments on crystalline solids.[2][3] This suggestion of Elsasser was then communicated by his senior colleague (and later Nobel Prize recipient) Max Born to physicists in England. When the Davisson and Germer experiment was performed, the results of the experiment were explained by Elsasser's proposition. However the initial intention of the Davisson and Germer experiment was not to confirm the de Broglie hypothesis, but rather to study the surface of nickel. An American Physical Society plaque in Manhattan commemorates the experiment. In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target. The angular dependence of the reflected electron intensity was measured and was determined to have the same diffraction pattern as those predicted by Bragg for X-rays. At the same time George Paget Thomson independently demonstrated the same effect firing electrons through metal films to produce a diffraction pattern, and Davisson and Thomson shared the Nobel Prize in Physics in 1937.[2][4] The Davisson–Germer experiment confirmed the de Broglie hypothesis that matter has wave-like behavior. This, in combination with the Compton effect discovered by Arthur Compton (who won the Nobel Prize for Physics in 1927),[5] established the wave–particle duality hypothesis which was a fundamental step in quantum theory. Early experiments Davisson began work in 1921 to study electron bombardment and secondary electron emissions. A series of experiments continued through 1925.
Experimental setup Davisson and Germer's actual objective was to study the surface of a piece of nickel by directing a beam of electrons at the surface and observing how many electrons bounced off at various angles. They expected that because of the small size of electrons, even the smoothest crystal surface would be too rough and thus the electron beam would experience diffuse reflection.[6] The experiment consisted of firing an electron beam from an electron gun directed to a piece of nickel crystal at normal incidence (i.e. perpendicular to the surface of the crystal). The experiment included an electron gun consisting of a heated filament that released thermally excited electrons, which were then accelerated through a potential difference giving them a certain amount of kinetic energy, towards the nickel crystal. To avoid collisions of the electrons with other molecules on their way towards the surface, the experiment was conducted in a vacuum chamber. To measure the number of electrons that were scattered at different angles, a Faraday cup electron detector that could be moved on an arc path about the crystal was used. The detector was designed to accept only elastically scattered electrons. During the experiment an accident occurred and air entered the chamber, producing an oxide film on the nickel surface. To remove the oxide, Davisson and Germer heated the specimen in a high temperature oven, not knowing that this affected the formerly polycrystalline structure of the nickel to form large single crystal areas with crystal planes continuous over the width of the electron beam.[6] When they started the experiment again and the electrons hit the surface, they were scattered by atoms which originated from crystal planes inside the nickel crystal. In 1925, they generated a diffraction pattern with unexpected peaks. A breakthrough On a break, Davisson attended the Oxford meeting of the British Association for the Advancement of Science in the summer of 1926. At this meeting, he learned of the recent advances in quantum mechanics. To Davisson's surprise, Max Born gave a lecture that used diffraction curves from Davisson's 1923 research which he had published in Science that year, using the data as confirmation of the de Broglie hypothesis.[7] He learned that in prior years, other scientists, Walter Elsasser, E. G. Dymond, and Blackett, James Chadwick, and Charles Ellis had attempted similar diffraction experiments, but were unable to generate low enough vacuums or detect the low intensity beams needed.[7] Returning to the United States, Davisson made modifications to the tube design and detector mounting, adding azimuth in addition to colatitude. Following experiments generated a strong signal peak at 65 V and an angle θ = 45°. He published a note to Nature titled, "The Scattering of Electrons by a Single Crystal of Nickel". Questions still needed to be answered and experimentation continued through 1927. By varying the applied voltage to the electron gun, the maximum intensity of electrons diffracted by the atomic surface was found at different angles. The highest intensity was observed at an angle θ = 50° with a voltage of 54 V, giving the electrons a kinetic energy of 54 eV.[2] As Max von Laue proved in 1912, the periodic crystal structure serves as a type of three-dimensional diffraction grating.
The angles of maximum reflection are given by Bragg's condition for constructive interference from an array, Bragg's law nλ = 2d sin(90° − θ/2), with n = 1, θ = 50°, and with the spacing of the crystalline planes of nickel (d = 0.091 nm) obtained from previous X-ray scattering experiments on crystalline nickel.[2] According to the de Broglie relation, electrons with a kinetic energy of 54 eV have a wavelength of 0.167 nm. The experimental outcome was 0.165 nm via Bragg's law, which closely matched the predictions. Davisson and Germer's accidental discovery of the diffraction of electrons was the first direct evidence confirming de Broglie's hypothesis that particles can have wave properties as well. Davisson's attention to detail, his resources for conducting basic research, the expertise of colleagues, and luck all contributed to the experimental success. 1. ^ Davisson, C. J.; Germer, L. H. (1 April 1928). "Reflection of Electrons by a Crystal of Nickel". Proceedings of the National Academy of Sciences of the United States of America. 14 (4): 317–322. Bibcode:1928PNAS...14..317D. doi:10.1073/pnas.14.4.317. ISSN 0027-8424. PMC 1085484 (free to read). PMID 16587341. 2. ^ a b c d e R. Eisberg, R. Resnick (1985). "Chapter 3 – de Broglie's Postulate—Wavelike Properties of Particles". Quantum Physics: of Atoms, Molecules, Solids, Nuclei, and Particles (2nd ed.). John Wiley & Sons. ISBN 0-471-87373-X. 3. ^ H. Rubin (1995). "Walter M. Elsasser". Biographical Memoirs. 68. National Academy Press. ISBN 0-309-05239-4. 4. ^ The Nobel Foundation (Clinton Joseph Davisson and George Paget Thomson) (1937). "Clinton Joseph Davisson and George Paget Thomson for their experimental discovery of the diffraction of electrons by crystals". The Nobel Foundation 1937. 5. ^ The Nobel Foundation (Arthur Holly Compton and Charles Thomson Rees Wilson) (1937). "Arthur Holly Compton for his discovery of the effect named after him and Charles Thomson Rees Wilson for his method of making the paths of electrically charged particles visible by condensation of vapour". The Nobel Foundation 1927. 6. ^ a b Hugh D. Young, Roger A. Freedman: University Physics, Ed. 11. Pearson Education, Addison Wesley, San Francisco 2004, ISBN 0-321-20469-7, pp. 1493–1494. 7. ^ a b Gehrenbeck, Richard K. (1978). "Electron diffraction: fifty years ago". Physics Today. American Institute of Physics: 34–41.
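As a quick numerical check of the figures quoted in the article above (54 eV electrons, plane spacing d = 0.091 nm, detector angle θ = 50°), the following sketch reproduces the ~0.167 nm de Broglie wavelength and the ~0.165 nm Bragg value; the physical constants are standard rounded values.

```python
import math

h   = 6.626e-34        # Planck constant, J*s
m_e = 9.109e-31        # electron mass, kg
eV  = 1.602e-19        # J per electronvolt

E = 54.0 * eV                          # kinetic energy of the electrons
p = math.sqrt(2.0 * m_e * E)           # non-relativistic momentum
lambda_de_broglie = h / p              # de Broglie wavelength

d     = 0.091e-9                       # nickel plane spacing from X-ray data, m
theta = math.radians(50.0)             # detector angle of maximum intensity
# Bragg condition for this geometry: n*lambda = 2*d*sin(90 deg - theta/2), with n = 1
lambda_bragg = 2.0 * d * math.sin(math.radians(90.0) - theta / 2.0)

print(f"de Broglie wavelength: {lambda_de_broglie * 1e9:.4f} nm")   # ~0.167 nm
print(f"Bragg wavelength:      {lambda_bragg * 1e9:.4f} nm")        # ~0.165 nm
```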
Slowness in a quantum world Some days have gone by since my last post, but for a very good reason. I was very busy writing down a new paper of mine. Meanwhile, frantic activity was around in the blogosphere due to the EPS Conference in Grenoble. No Higgs yet but we do not despair. Being away from these distractions, I was able to analyze a main problem that has been around in quantum mechanics since a nice paper by Karl-Peter Marzlin and Barry Sanders appeared in Physical Review Letters in 2004. My effort is in this paper. But what is it all about? We are aware of the concept of adiabatic variation from thermodynamics and mechanics. We know that there exist physical systems such that, if we take care to vary their parameters really slowly in time, the state of the system does not change too much. Let me state this with an example taken from mechanics. Consider a table with a hole at the center. Through the hole there is a wire with a little ball suspended. An electric motor is keeping the wire on the table side and the ball is performing small oscillations. If the wire is kept fixed, the period of the small oscillations of this pendulum is 2π times the square root of the ratio between the length of the wire below the hole and the gravitational acceleration, T = 2π√(ℓ/g). Now, we can turn on the motor and vary the length of the wire with some time-varying law. This will imply a variation of the frequency of the oscillations of the ball. Now, if we assume a slow variation of the length, a nice thing happens: the system displays a conserved quantity, a so-called adiabatic invariant: the energy of the system varies in proportion to the frequency. This is just an approximate conserved quantity but it is a characteristic of a slow variation of a parameter of this system. In some way, the initial state of the system, properly evolved, is maintained as time evolves as the phase space occupied by the system keeps its form. This is true provided the rate of change of the length of the wire is much smaller than the frequency of the pendulum. This is a quite general result in classical mechanics. In 1928, Max Born and Vladimir Fock asked themselves if something similar is also true in a quantum world. In a classical paper, they were able to show that it is indeed so. Given a Schrödinger equation with a time-varying Hamiltonian, under a given condition, a system keeps on staying in the same state properly evolved in time, multiplied by some phases. The validity condition is the critical point. This condition is expressed through the ratio between the rate of change of the Hamiltonian itself and the gap between instantaneous eigenvalues, as both the eigenvalues and the states evolve in time. The gap condition is a fundamental one and was put forward in 1950 by Tosio Kato. This is quite reminiscent of the case we gave from classical mechanics, where the rate of change of the length of the wire, entering into the Hamiltonian, should be kept smaller than the frequency of oscillation, and so this condition appears surely reasonable. But things are not that simple. You can consider an atom under the effect of monochromatic radiation inside a cavity. This is generally well-described by a two-level system and people observe this system oscillating between the two states.
Well, if the intensity of the field inside the cavity is small enough, one can see these oscillations, dubbed Rabi flopping. This phenomenon is ubiquitous wherever a monochromatic field interacts with an atom, but the presence of a continuum of states changes this coherent effect into a decaying one, as observed in everyday life. If we apply the condition for the adiabatic approximation as devised by Born and Fock we get an inconsistency. The approximation seems to hold, but Rabi flopping is there to tell us that the state is changing in time. The system does not stay at all in the same state as time evolves, even though our condition seems to say so. This is exactly what Marzlin and Sanders pointed out with an exactly solvable example. The condition found by Born and Fock for quantum slowness does not appear to be sufficient to guarantee adiabatic behavior for a quantum system. This is bad news as, in quantum computation, some promising technological applications are in view with the adiabatic approximation and we must be certain that our system behaves the way we expect. This opened up a hot debate that is not over yet. But, what is going on here? The explanation of this inconsistency can be traced back to a couple of papers that Ali Mostafazadeh and I wrote in the nineties (see here and here). What Born and Fock really found is the leading order of a perturbation expansion of the solution of the time-dependent Schrödinger equation. This has a deep implication for the validity condition of the adiabatic approximation, which represents just the leading order of this series. Indeed, let us consider a system under the effect of a perturbation. There are situations when some corrections grow without bound as time increases. These terms are unphysical, as an unbounded solution violates cherished principles of physics such as energy conservation, unitarity and so on. These are called secularities in the literature, due to their timescales, as they were first discovered in the perturbation series of computations in astronomy. So, if we stop at the leading order of such a series and blindly apply the condition as devised by Born and Fock and claim applicability, we can be just wrong. This is exactly what Marzlin, Sanders and others have shown unequivocally. So, if you want to apply the adiabatic approximation and be sure it works, you have to do a more involved task. You have to compute the next-to-leading order correction at least and, accounting eventually for resonant behavior, identify unbounded terms. Techniques exist to resum them. After you have done this, you will be able to identify the right condition for the adiabatic approximation to work. So, for the two-level system discussed above you will see that only when a very strong field is applied and Rabi flopping cannot happen will you get consistent adiabatic behavior (a minimal numerical illustration of slow versus fast driving appears after the references below). What is the lesson to be learned here? The simplest one is that looking into some older literature can often help to solve a problem. Anyhow, deep in an old dusty corner of quantum mechanics a fundamental result is hidden: strong perturbations can be managed in quantum mechanics exactly like the weak ones. An entirely new world opens up from this, one that our founding fathers could not have been aware of. Karl-Peter Marzlin, & Barry C. Sanders (2004). Inconsistency in the application of the adiabatic theorem. Phys. Rev. Lett. 93, 160408 (2004). arXiv: quant-ph/0404022v6. Marco Frasca (2011).
Consistency of the adiabatic theorem and perturbation theory. arXiv: 1107.4971v1. Born, M., & Fock, V. (1928). Beweis des Adiabatensatzes. Zeitschrift für Physik, 51 (3–4), 165–180. DOI: 10.1007/BF01343193. Marco Frasca (1998). Duality in Perturbation Theory and the Quantum Adiabatic Approximation. Phys. Rev. A 58 (1998) 3439. arXiv: hep-th/9801069v3. Ali Mostafazadeh (1996). The Quantum Adiabatic Approximation and the Geometric Phase. Phys. Rev. A 55 (1997) 1653–1664. arXiv: hep-th/9606053v1.
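To make the notion of "quantum slowness" concrete, here is a minimal numerical sketch of the adiabatic theorem for a two-level Landau–Zener sweep. It illustrates only the generic slow-versus-fast distinction discussed in the post, not the Marzlin–Sanders resonant counterexample itself, and all parameter values are made up.

```python
import numpy as np

# Landau-Zener-type sweep H(t) = [[-v*t, delta], [delta, v*t]] with hbar = 1.
# A slow sweep (small v) keeps the system in the instantaneous ground state;
# a fast sweep does not.
delta = 1.0

def propagate_step(H, psi, dt):
    w, V = np.linalg.eigh(H)                       # exact step for piecewise-constant H
    return V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))

def ground_state_survival(v, t_max=20.0, n_steps=20000):
    ts = np.linspace(-t_max, t_max, n_steps)
    dt = ts[1] - ts[0]
    H = lambda t: np.array([[-v * t, delta], [delta, v * t]])
    _, vecs = np.linalg.eigh(H(ts[0]))
    psi = vecs[:, 0].astype(complex)               # start in the initial ground state
    for t in ts:
        psi = propagate_step(H(t), psi, dt)
    _, vecs = np.linalg.eigh(H(ts[-1]))            # instantaneous ground state at the end
    return abs(np.vdot(vecs[:, 0], psi)) ** 2

for v in (0.05, 0.5, 5.0):
    print(f"sweep rate v = {v:5.2f}:  P(stay in instantaneous ground state) = "
          f"{ground_state_survival(v):.4f}")
```

For the slowest sweep the survival probability in the instantaneous ground state stays essentially at 1, while for fast sweeps it drops, in line with the gap condition discussed above.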
I want to ask how we actually measure the probability amplitude that appears in the Schrödinger equation. From what I read in quantum mechanics textbooks, it appears that after the measurement, the system "collapses" to its eigenstate. And we know this because the probabilities of events changed. For a typical event involving two states, previously we only know its probability is $|A_{1}+A_{2}|^{2}$, but now it is $|A_{1}|^{2}+|A_{2}|^{2}$, etc. My question is, how do we know that our measurement of the probability amplitude is accurate? How do we know that there is no uncertainty principle that makes $$ \Delta A_{1}*\Delta A_{2}\ge k $$ for example? Since the probability amplitude is one of the intrinsic quantum properties of the system, it seems to me any measurement should disturb it to some extent. For example, if the above hypothetical "uncertainty relation" holds, then we cannot say in principle what $|A_1+A_2|^{2}$ is (since it must be disturbed by our measurement), but only what $E|A_1+A_2|^{2}$ is based on experiment. But if $A_1, A_{2}$ are in a certain range, we may not be able to distinguish $|A_1+A_2|^{2}$ and $|A_1|^{2}+|A_2|^{2}$ anymore unless we perform a huge number of experiments. To elaborate, my rough conception of the way people measure it is this: We repeat the experiment in identical situations $N$ times, and we assume via the strong law of large numbers that the average frequency must approach the mean value. However, this assumption does not exclude the possibility that every time we measure the probability of event $A_1$, the accuracy of measuring event $A_2$ may be somehow influenced. Therefore in actual measurement, the probability we get is close to $E|A_1+A_2|^{2}$, but not really the real value if by measurement we caused a huge variance in the data. Let us for simplicity consider an even simpler case with only one event $A$. $A$'s probability amplitude is given by the complex number $a$. Suppose in actual measurement, we found that if we take $N$ measurements at the $i$th time, then the sample average is about $a+(-1)^{i}b$. Then we can propose either that $A$'s probability amplitude changes with time (like in a two-state system), or that our measurement somehow influenced $a$'s sample value. Suppose we are in the second case; how do we know what the true probability amplitude $a$ is? If we only do $N$ experiments, we would only get a biased value. And if we do more, the chance of the bias accumulating is small but still not negligible if $b$ is really large. You're asking a couple of different questions here. In the last paragraph you're just asking how you can tell that the measurements done on an ensemble of presumably identically prepared systems are uncorrelated. In practice you would vary the time interval at which you do the measurements to check for time correlation. –  DanielSank May 26 at 5:04
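One side of the question — how well N repeated, independent runs pin down a Born-rule probability — is ordinary statistics: the estimate fluctuates around the true value with a standard error that shrinks like 1/√N. A minimal Monte Carlo sketch follows (the value p = |A|² = 0.3 is made up, and the runs are assumed independent, which is exactly the assumption the question worries about).

```python
import numpy as np

rng = np.random.default_rng(42)
p_true = 0.3                      # hypothetical Born-rule probability |A|^2

for N in (100, 10_000, 1_000_000):
    outcomes = rng.random(N) < p_true            # N independent yes/no measurements
    p_hat = outcomes.mean()                      # estimated probability
    std_err = np.sqrt(p_hat * (1 - p_hat) / N)   # binomial standard error ~ 1/sqrt(N)
    print(f"N={N:>9}:  estimate={p_hat:.4f}  expected statistical error ~ {std_err:.4f}")
```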
db9b6b4cb3838c0e
Science - Written by on February 1, 2011 The Flight of the Electrons Tags: , Purdue researchers break petaflop barrier with study of electrons in computer chips A team led by Gerhard Klimeck of Purdue University has broken the petascale barrier while addressing a relatively old problem in the very young field of computer chip design. Using Oak Ridge National Laboratory’s Jaguar supercomputer, Klimeck and Purdue colleague Mathieu Luisier reached more than a thousand trillion calculations a second (1 petaflop) modeling the journey of electrons as they travel through electronic devices at the smallest possible scale. Klimeck, leader of Purdue’s Nanoelectronic Modeling Group, and Luisier, a member of the university’s research faculty, used more than 220,000 of Jaguar’s 224,000 processing cores to reach 1.03 petaflops. The team is working to help manufacturers transcend Moore’s Law, which since 1965 has anticipated the astonishing pace of technology advances. It was in that year that Intel cofounder Gordon Moore suggested in Electronics magazine that the density of transistors on a computer chip—and, therefore, the speed of that chip—would double about every 2 years. And indeed they have, with Moore’s company just this year releasing chips that hold more than 2 billion transistors on a piece of silicon just over an inch square. An unavoidable reality But technology makers are running across an unavoidable physical reality: There is, in fact, a limit to how many transistors you can fit onto a sliver of silicon. Within the next few years transistors will be as small as they can get. “We’re at the stage where these transistors—the on-off switches—in these chips are as small as 20 to 30 nanometers in some of their critical widths,” noted Klimeck. “That’s along the lines of 100 to 150 atoms. We’re beginning to reach a limit where making it another factor of two smaller is going to be more and more difficult.” Twenty nanometers is indeed very small. By comparison the shortest wavelength of visible light—belonging to violet—is 20 times wider at 400 nanometers. An especially fine strand of hair is 1,000 times wider at 20 micrometers. The power of Jaguar has given Klimeck and Luisier the ability to pursue this task with unprecedented realism. “What we do is build models that try to represent how electrons move through transistor structures,” Klimeck explained. “Can we come up with geometries on materials or on combinations of materials—or physical effects at the nanometer scale—that might be different than on a traditional device, and can we use them to make a transistor that is less power hungry or doesn’t generate as much heat or runs faster? “You’re reaching the stage where you can’t think of your material being like a continuum. We experience that this material is not continuous, but discrete. And the placement of these atoms is important.” At left, schematic view of a nanowire transistor with an atomistic resolution of the semiconductor channel. At right, illustration of electron-phonon scattering in nanowire transistor. The current as function of position (horizontal) and energy (vertical) is plotted. Electrons (filled blue circle) lose energy by emitting phonons or crystal vibrations (green stars) as they move from the source to the drain of the transistor. The team is pursuing this work on Jaguar with two applications, known as Nanoelectric Modeling (NEMO) 3D and OMEN (a more recent effort whose name is an anagram of NEMO). 
NEMO 3D is an evolution of the earlier NEMO 1D, which was developed at Texas Instruments in the mid-1990s to model resonant tunneling diodes—devices that used the quantum mechanical ability of electrons to tunnel through a tiny barrier and appear on the other side. NEMO 3D expanded the application to three dimensions and million-atom systems, but at a cost: The systems were static, with no electron flow. “We were able to say where the electrons would be sitting in this geometry, but we couldn’t afford computationally to model electrons injected at one end and pulled out at the other end, which we did do in NEMO 1D but in a one-dimensional space or representation. “Around 2004 compute powers became strong enough on massively parallel machines that we could dream of combining the NEMO 1D and NEMO 3D capabilities and actually represent the device in three dimensions one atom at a time and actually inject electrons at one end and pull them out at the other end,” said Klimeck. “Having machines like Jaguar made these calculations possible. The theory of how to do that was understood with NEMO 1D, but it was computationally prohibitively expensive. OMEN is the next-generation prototype that runs on Jaguar now.” These applications combine techniques to get the greatest possible scientific discovery out of modern supercomputers. They calculate the most important particles in the system—valence electrons located on atoms’ outermost shells—from their fundamental properties. These are the electrons that flow in and out of the system. On the other hand, the applications approximate the behavior of less critical particles—the atomic nuclei and electrons on the inner shells. The applications solve the venerable Schrödinger equation, which describes the quantum mechanical properties of a system, although Klimeck’s team has modified the equation to allow for electrons moving into and out of the system. Using OMEN Luisier has been able to model about 140,000 atoms. Without taking into account the flow of electrons, NEMO 3D is typically used to simulate 5 million to 10 million atoms. At this scale the quantum mechanical oddities of electrons—such as the tunneling behavior studied in NEMO 1D—become increasingly important. “Our understanding of electron flow in these structures is different,” Klimeck explained. “As you make things very small, you expose the quantum mechanical nature of the electrons. They can go around corners. They can tunnel. They can do all kinds of crazy stuff. So electrons are quantum mechanical particles at that scale, and the atoms are discrete.” Bringing insights to the real world The team is working with two experimental groups to bring this research to the real world. One is led by Jesus Del Alamo at the Massachusetts Institute of Technology, the other by Alan Seabaugh at Notre Dame. With Del Alamo’s group the team is looking at making the electrons move through a semiconductor faster by building it from a material called indium arsenide rather than silicon. “If they can move faster, maybe they will be less likely to cause losses, and you can operate them at a higher speed,” Klimeck noted. “It’s basically an on-off switch based on high-electron-mobility materials. If you have low mobility, you’re losing your energy to something else.” With Seabaugh’s group the modeling team is working on band-to-band-tunneling transistors. These transistors bear some promise in lower-voltage operation, which could dramatically reduce the energy consumption in traditional field-effect transistors. 
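To make the "open boundary" idea concrete, here is a toy sketch in Python (a generic textbook construction, not the NEMO or OMEN codes): a one-dimensional tight-binding chain in which the semi-infinite contacts enter the Schrödinger problem as complex self-energies, electrons are injected at one end and extracted at the other, and the Landauer transmission follows from the retarded Green's function of the device region.

```python
import numpy as np

t = 1.0                        # nearest-neighbour hopping energy
N = 20                         # atoms in the device region
onsite = np.zeros(N)
onsite[N // 2] = 2.0           # a single on-site barrier, just to have something to scatter off

def transmission(E, eta=1e-9):
    # analytic surface self-energy of a semi-infinite 1D tight-binding lead (valid for |E| < 2t)
    sigma = 0.5 * (E - 1j * np.sqrt(4 * t**2 - E**2 + 0j))
    H = np.diag(onsite).astype(complex) - t * (np.eye(N, k=1) + np.eye(N, k=-1))
    H[0, 0] += sigma           # left contact
    H[-1, -1] += sigma         # right contact
    G = np.linalg.inv((E + 1j * eta) * np.eye(N) - H)   # retarded Green's function
    gamma = -2.0 * sigma.imag                            # level broadening from each lead
    return gamma**2 * abs(G[0, N - 1])**2                # Landauer transmission

for E in (-1.5, -0.5, 0.5, 1.5):                         # energies inside the band
    print(f"T({E:+.1f}) = {transmission(E):.3f}")
```

For a clean chain the transmission is exactly 1 at every energy inside the band; the barrier added in the middle scatters electrons and pushes it below 1, which is the kind of question the full atomistic codes answer for realistic device geometries.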
While there may be limited opportunities to decrease the size of a transistor, there are plenty to make devices smaller and more efficient. One problem that has not been overcome is the energy lost as heat, which leads to challenges in both powering devices and cooling them. “Atoms like to vibrate in certain ways,” Klimeck explained, “and the vibrations are losses that come out of the device as heat. If we understand how the electrons interact with the crystal at the nanometer scale, how heat is being generated, maybe we can avoid the heat and make the transistors less power hungry. On the flip side we may be able to improve devices that convert thermal energy into electricity.” As he notes, minimizing the power requirements and heat generation of electronics as well as harvesting energy are critically important as our lives become more and more automated. “This is important anyplace where you don’t want to lug 40 pounds of batteries with you, for instance, if you’d like your iPhone to be lighter and run all kinds of image processing on it,” he said. “This is true for any mobile or computing technology—anything where you don’t want to waste energy, which these days is anywhere.”—by Leo Williams
be3b7b67357be562
The butterfly and the remarkable Professor Hofstadter This spring, there was excitement in the world of physics as a long-predicted butterfly was proved to exist. But, being a creature of physics, this butterfly wasn’t an insect, nor anything that would even occur without human minds to construct it. This was Hofstadter’s butterfly, a remarkable spectrum of electron energy levels. It was first described in 1976 by Douglas Hofstadter, who was then with the Physics Department, University of Oregon, USA. He was looking at the allowed energy levels of electrons restricted to a two-dimensional plane, with a periodic potential energy and a changing magnetic field. As Hofstadter put it in a summary of his work, “The resultant Schrödinger equation becomes a finite-difference equation whose eigenvalues can be computed by a matrix method.” To which you might respond, “Aha, but of course it does!” Or even, “Huh?” — in which case, you might simply appreciate that when he plotted a graph of the spectrum, Hofstadter made a remarkable pattern that looked somewhat like a butterfly. And this pattern was recursive, so if you look at a small part of the pattern you see the same butterfly shape, which is repeated at larger and larger scales. The paper was published just one year after the term “fractal” had been coined, and Hofstadter had discovered one of the very few fractals known in physics. Quest for the elusive butterfly Physicists have since searched for experimental proof of the butterfly, yet until recently it proved elusive. This is largely as it results from quantum effects, and when atoms in the two-dimensional plane are very close together observing the butterfly would require unfeasibly strong magnetic fields, while if they are widely spaced disorder ruins the pattern. Graphene, a quirky form of carbon, has been the key to finding the butterfly. It is a one-atom thick layer of carbon atoms arranged in hexagonal patterns – somewhat like chicken wire. A layer of this was placed on atomically flat boron nitride substrate, which likewise has a honeycomb atomic lattice structure, but with slightly longer bonds between atoms. This combination resulted in the electrons experiencing a periodic potential, akin to a marble rolling over a surface shaped like the tray of an egg carton. City College of New York Assistant Professor of Physics Cory Dean developed the material. He was a member of an international group that published its findings in May. Separate groups at the University of Manchester (UK) and Massachusetts Institute of Technology simultaneously reported similar results. According to a City College press release, the light and dark sections of the butterfly pattern correspond to “gaps” in energy levels that electrons cannot cross and dark areas where they can move freely. While efficient conductors like copper have no gaps, and there are very large gaps in insulators, Dean believes the very complicated structure of the Hofstadter spectrum suggests as yet unknown electrical properties. “We are now standing at the edge of an entirely new frontier in terms of exploring properties of a system that have never before been realized,” he said. “The ability to generate this effect could possibly be exploited to design new electronic and optoelectronic devices.” Graphene the wonder material, and father n son physicists Graphene planes had already shown promise as a new wonder material. They were first isolated in 2004, and have a thickness almost a millionth of a human hair. 
Graphene is stronger than steel and more conductive than copper, and can help make ultrafast optical switches for applications including communications, as well as lead to more efficient solar cells, enhanced printed circuits, unbreakable touchscreens and microscale Lithium-ion batteries. It may even prove to be the ideal material for 3D printing. Rather as graphene may have multiple uses, the man who described the butterfly spectrum has proven multi-talented. Douglas Hofstadter was the son of Stanford University physicist Robert Hofstadter, who in 1961 was the joint winner of the Nobel Prize for Physics, “for his pioneering studies of electron scattering in atomic nuclei and for his consequent discoveries concerning the structure of nucleons." Like father, like son, you might think, as Douglas also became a physicist. Yet he did not remain so for long. The year after his paper on the spectrum was published, Hofstadter joined Indiana University's Computer Science Department faculty, and launched a research program in computer modeling of mental processes, which he then called "artificial intelligence research", though he now prefers "cognitive science research". Miracles, mirages and butterfly dreaming Hofstadter pondered the question of what is a self, and how can one come out of stuff that is as selfless as a stone or a puddle? In an attempt to provide an answer, he wrote a book, Gödel, Escher, Bach: an Eternal Golden Braid. This interwove several narratives, and featured word play, puzzles, and recursion and self-reference, with objects and ideas referring to themselves. The book was a success, winning the Pulitzer Prize for general non-fiction. Yet in an interview with Wired, Hofstadter later expressed disappointment that most people found its point was simply to have fun, albeit noting that hundreds of people had written to him, saying it launched them on a path of studying computer science or cognitive science or philosophy. Some of these people might have been startled when Hofstadter, by then professor of cognitive science at Indiana University, USA, later told the New York Times, “I have no interest in computers,” adding, “People who claim that computer programs can understand short stories, or compose great pieces of music — I find that stuff ridiculously overblown.” The NY Times interview accompanied the publication of a straighter book on questions of consciousness and soul, I Am a Strange Loop. Within this, Hofstadter wrote, “In the end, we are self-perceiving, self-inventing, locked-in mirages that are little miracles of self-reference.” Here, Hofstadter seems to echo the butterfly pattern he discovered, with its multitude of versions of itself. But there’s far more to consciousness, which he believes derives from a self-model. Over 2300 years ago, a butterfly featured in an anecdote by another thinker. The Chinese philosopher Zhuangzi wrote of dreaming he was a butterfly, and awaking to wonder if he was a man who dreamt of being a butterfly, or a butterfly dreaming of being a man. After Hofstadter, you might wonder if either of these is but a dream within a dream. Which may remind you of a movie, which you may find within another column, which is currently a hazy looping form within the mirage that’s this writer … but will not feature butterflies. Martin Williams
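For readers who want to see the "matrix method" in action, the calculation Hofstadter describes can be sketched in a few lines (this is the standard Harper tight-binding model at a fixed Bloch phase, a simplification of the full calculation): for each rational flux p/q through a lattice cell, the Schrödinger equation reduces to a q-by-q matrix whose eigenvalues, plotted against p/q, trace out the butterfly.

```python
import numpy as np

def harper_levels(p, q, phase=0.0):
    """Eigenvalues of the q x q Harper matrix at flux alpha = p/q (fixed Bloch phase)."""
    alpha = p / q
    n = np.arange(q)
    H = np.diag(2.0 * np.cos(2.0 * np.pi * alpha * n + phase))
    H += np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)
    H[0, q - 1] += 1.0      # periodic closure of the magnetic unit cell
    H[q - 1, 0] += 1.0
    return np.linalg.eigvalsh(H)

# Collect (flux, energy) points; scatter-plotting them reveals the recursive butterfly.
points = []
for q in range(3, 50):
    for p in range(1, q):
        if np.gcd(p, q) == 1:
            for E in harper_levels(p, q):
                points.append((p / q, E))

energies = [E for _, E in points]
print(len(points), "levels collected; energy range",
      round(min(energies), 3), "to", round(max(energies), 3))
```

Plotting the collected points with energy on one axis and flux on the other reproduces the self-similar pattern Hofstadter published in 1976.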
444da06018beadc1
It has been treated as a question of geometry. = More specifically, the Lorentz transformation is a hyperbolic rotation ∇ Measurement is an integral part of Physics like any other scientific subject. I feel that human reaction time is self-explanatory. We obtain the value of an unknown mass in terms of a known value of mass. How to use: The tape is attached to an object and the state of motion of the object can be deduced from the dots on the tape. In general, a signal is part of communication between parties and places. natural units He combined all the laws then known relating to those two phenomenon into four equations. This lab will involve making several measurements of the fundamental units of length, mass, and time. So measurement is necessary for physics. It has been asserted that time is an implicit consequence of chaos (i.e. Converting units of time. The tabulation of the equinoxes, the sandglass, and the water clock became more and more accurate, and finally reliable. In our epoch, during which electromagnetic waves can propagate without being disturbed by conductors or charges, we can see the stars, at great distances from us, in the night sky. − In contrast to the views of Newton, of Einstein, and of quantum physics, which offer a symmetric view of time (as discussed above), Prigogine points out that statistical and thermodynamic physics can explain irreversible phenomena,[39] as well as the arrow of time and the Big Bang. In the International System of Units (SI), the unit of time is the second (symbol: $${\displaystyle \mathrm {s} }$$). ⁡ In physics, time plays a significant role in measuring motion and forces. To do physics, one has to make measurements. … Also, in physics, we often use functional relationship to understand how one quantity varies as a function of another. We need to measure the physical quantities to obtain physically meaningful results to understand physics. Ticker-tape timer Measures short time intervals of 0.02 s. Using the same reasoning as above, Z represent the tape from an object that is decelerating. • Uncertainty may be written explicitly, e.g., height of a table = 72.3±0.1 cm If the clock at B synchronizes with the clock at A, the clock at A synchronizes with the clock at B. For example, if you apply Converting units of time review (seconds, minutes, & hours) CCSS.Math: 4.MD.A.1, 5.MD.A.1. The vertical lasers push the caesium ball through a microwave cavity. [35] In 2001 the clock uncertainty for NIST-F1 was 0.1 nanoseconds/day. Physics - Measurement Units - The following table illustrates the major measuring units in physics − D. M. Meekhof, S. R. Jefferts, M. Stepanovíc, and T. E. Parker (2001) "Accuracy Evaluation of a Cesium Fountain Primary Frequency Standard at NIST". See: Time ball, an early form of Time signal. The standard for unit of time, the second (s), is the exact duration of 9,192,631,770 cycles of the radiation associated with the transition between the two hyperfine levels of the ground state of cesium-133 atom. If you spot any errors or want to suggest improvements, please contact us. Galileo, Newton, and most people up until the 20th century thought that time was the same for everyone everywhere. In particular, Stephen Hawking identifies three arrows of time:[23], Entropy is maximum in an isolated thermodynamic system, and increases. 
The measurement of time is overseen by BIPM (Bureau International des Poids et Mesures), located in Sèvres, France, which ensures uniformity of measurements and their traceability to the International System of Units (SI) worldwide. ϕ He was disputed by Fred Hoyle (1915–2001), who invented the term 'Big Bang' to disparage it. In physics, the treatment of time is a central issue. physicists use different instruments, such as This book is about Aristotle’s account of time in Physics IV.10-14. For X, the dots are evenly spaced. In physics this will usually mean a quantity of time in seconds, such as 35 s. Language : “What time is it?” One example might be a yellow ribbon tied to a tree, or the ringing of a church bell. You can see the philosophy of measurement in the little kids who don't even know what math is. His simple and elegant theory shows that time is relative to an inertial frame. Time in the former sense is determined with a wristwatch or clock. Measurement has been an essential in human life since the dawn of civilization. General Physics. In this chapter we shall consider some aspects of the concepts of time and distance.It has been emphasized earlier that physics, as do all the sciences, depends on observation.One might also say that the development of the physical sciences to their present form has depended to a large extent on the emphasis which has been placed on the making of quantitative observations. t [1] In classical, non-relativistic physics, it is a scalar quantity (often denoted by the symbol Because Newton's fluents treat a linear flow of time (what he called mathematical time), time could be considered to be a linearly varying parameter, an abstraction of the march of the hours on the face of a clock. In physics, the definition of time is simple— time is change, or the interval over which change occurs. There are four types of speed: uniform speed, variable speed, average speed, and instantaneous speed. In 1875, Hendrik Lorentz (1853–1928) discovered Lorentz transformations, which left Maxwell's equations unchanged, allowing Michelson and Morley's negative result to be explained. Then, try some practice problems. The procedure to deducing the state of motion from the resulting tape is best explained using an example. How to use: The zero-end of the rule is first aligned flat with one end of the object and the reading is taken where the other end of the object meets the rule. Kids try to compare their height, size of candy, size of dolls and amount of toys they have. Corresponding commutator relations also hold for momentum p and position q, which are conjugate variables of each other, along with a corresponding uncertainty principle in momentum and position, similar to the energy and time relation above. Physical time is measured only by observing changes in some property such as the change in location of the hands of a clock. Thus from the above example, time, speed and distance are Physical Quantities. 0.05km written in the standard form is a. Aristotle claims that time is not a kind of change, but that it is something dependent on change. We will get an increasing speed as the distance between the dots increases. − In physics, most measurements have units, such as meters or seconds. Time can be combined mathematically with other physical quantities to derive other concepts such as motion, kinetic energy and time-dependent fields. In physics, most measurements have units, such as meters or seconds. 
How to use: As the time event occurs, the stopwatch is started at the same time. Some precise stopwatches are connected electronically to the time event and hence, more accurate. {\displaystyle t} We have so far defined only an "A time" and a "B time.". We define a physical quantity either by specifying how it is measured or by stating how it is calculated from other measurements. He defines it as a kind of ‘number of change’ with respect to the before and after. In contrast, Erwin Schrödinger (1887–1961) pointed out that life depends on a "negative entropy flow". ϕ Time in the latter sense is measured with a stopwatch or an interval timer. 1.1 Length & Time. There is a considerable literature on classical notions of time, such as transit time and arrival time, in a quantum mechanical context. This has sprung a whole culture around the concept of time travel leading to numerous science fiction stories, and movies. Well these notes are great! In this view time is a coordinate. I believe that you meant human reaction time instead of human error? In particular, the railroad car description can be found in Science and Hypothesis,[29] which was published before Einstein's articles of 1905. This is the basis for timelines, where time is a parameter. It is a scalar quantity. Before the introduction of metric system in India there were various measurements units for length, mass and time. θ But the Schrödinger picture shown above is equivalent to the Heisenberg picture, which enjoys a similarity to the Poisson brackets of classical mechanics. Time can be combined mathematically with other physical quantities to derive o As the ball is cooled, the caesium population cools to its ground state and emits light at its natural frequency, stated in the definition of second above. Mini Physics is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to We as observers can still signal different parties and places as long as we live within their past light cone. But we cannot receive signals from those parties and places outside our past light cone. In physics, sometimes units of measurement in which c = 1 are used to simplify equations. Definition  Time is fundamental aspect of physical world  The standard unit of time is second  1 second is the duration to cover 9,192,631,770 vibrations of the radiation relative to the transition between two levels of ground state of Cesium- 133 atom 3. Clocks based on these techniques have been developed, but are not yet in use as primary reference standards. ) For a review see , Google Scholar Crossref J. G. Mugaand C. R. Leavens, “ Arrival time in quantum mechanics,” Phys. Keep it up! Pendulum clocks were widely used in the 18th and 19th century. The standard for unit of time, the second (s), is the exact duration of 9,192,631,770 cycles of the radiation associated with the transition between the two hyperfine levels of the ground state of cesium-133 atom. In particular, the astronomical observatories maintained for religious purposes became accurate enough to ascertain the regular motions of the stars, and even some of the planets. ) are known as Maxwell's equations for electromagnetism. Accuracy: ± 0.1 s. (Allowance made to human reaction time limits the accuracy of the stopwatch to 0.1 – 0.4 s for laboratory experiments. Measurement of mass. 
If the clock at A synchronizes with the clock at B and also with the clock at C, the clocks at B and C also synchronize with each other. Digital stopwatch: measures short time intervals (in minutes and seconds) to an accuracy of ±0.01 s. Analogue stopwatch: measures short time intervals (in minutes and seconds) to an accuracy of ±0.1 s. By 1798, Benjamin Thompson (1753–1814) had discovered that work could be transformed to heat without limit, a precursor of the conservation of energy. Henri Poincaré (1854–1912) noted the importance of Lorentz's transformation and popularized it. The scientist will measure the time between each movement using the fundamental unit of seconds. Soon afterward, the Belousov–Zhabotinsky reactions[25] were reported, which demonstrate oscillating colors in a chemical solution. The wave is formed by an electric field and a magnetic field oscillating together, perpendicular to each other and to the direction of propagation. Because the wings beat so fast, the scientist will probably need to measure in milliseconds, or $10^{-3}$ seconds. \[\begin{pmatrix}x'\\y'\end{pmatrix}=\begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}\] This dramatic result has raised issues: what happened between the singularity of the Big Bang and the Planck time, which, after all, is the smallest observable time. These equations allow for solutions in the form of electromagnetic waves. For example: take a book and use a ruler (scale) to find its length. In order to minimize the uncertainty in the period, we measured the time for the pendulum to make \(20\) oscillations, and divided that time by \(20\). Used to measure the time between alternating power cycles. All these happen even before they know math. International Bureau of Weights and Measures. The difference provides the required time interval. The equations of general relativity predict a non-static universe. The speed of light c can be seen as just a conversion factor needed because we measure the dimensions of spacetime in different units; since the metre is currently defined in terms of the second, it has the exact value of 299 792 458 m/s. Suppose the length was 20 cm. The unknown mass of a body is compared with a known value of mass. It was expected that there was one absolute reference frame, that of the luminiferous aether, in which Maxwell's equations held unmodified in the known form. Measurement of Time, Temperature and Approximation. Physical measurements. Einstein's equations predict that time should be altered by the presence of gravitational fields (see the Schwarzschild metric). Or one could use a simpler approximation: that is, the stronger the gravitational field (and, thus, the larger the acceleration), the more slowly time runs. Smaller time units have no use in physics as we understand it today. We need a clock to measure time. This phenomenon is also referred to as the principle of maximal aging, and was described by Taylor and Wheeler.[31]
It may be a number on a digital clock, a heartbeat, or the position of the Sun in the sky. Used to measure very short time intervals of about $10^{-10}$ seconds, Used to measure short time intervals of minutes and seconds to an accuracy of $\pm 0.01 \text{ s}$, Used to measure short time intervals of minutes and seconds to an accuracy of $\pm 0.1 \text{ s}$, Used to measure short time intervals of 0.02 s, Used to measure longer time intervals of hours, minutes and seconds, Used to measure LONG time intervals of years to thousands of years. In fact, because we can measure time more accurately than length, even the SI measurement of the metre is defined in terms of the distance travelled by light in 0.000000003335640952 seconds. Psychological arrow of time - our perception of an inexorable flow. It is the change in the position of an object with respect to time. This lab will involve making several measurements of the fundamental units of length, mass, and time. Table \(\PageIndex{1}\) lists the base quantities and the symbols used for their dimension. It is argued that this means that time is a kind of order (not, as is commonly supposed, that it is a kind of measure). The sun was the arbiter of the flow of time, but time was known only to the hour for millennia; hence, the use of the gnomon was known across most of the world, especially Eurasia, and at least as far southward as the jungles of Southeast Asia.[16]
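One concrete point mentioned a little earlier is the pendulum timing trick: timing 20 oscillations in one go and dividing by 20 shrinks the effect of the stopwatch reaction-time error on the inferred period by a factor of 20. A quick numerical check (all numbers invented for illustration):

```python
import numpy as np

true_period = 1.9          # s, hypothetical pendulum
n_osc = 20                 # oscillations timed in a single run
reaction_sigma = 0.3       # s, spread added by starting/stopping the stopwatch by hand

rng = np.random.default_rng(1)
runs = n_osc * true_period + rng.normal(0.0, reaction_sigma, size=1000)  # simulated timings
periods = runs / n_osc
print(f"mean period = {periods.mean():.4f} s")
print(f"spread      = {periods.std():.4f} s  (roughly {reaction_sigma}/{n_osc} = {reaction_sigma/n_osc:.3f} s)")
```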
For Z, the spacing between the dots decreases as time passes. The TU (for Time Unit) is a unit of time defined as 1024 µs for use in engineering. One of the most basic measurements in physics is the measurement of length, which is where we will start this lab. When might have time separated out from the spacetime foam;[38] there are only hints based on broken symmetries (see Spontaneous symmetry breaking, Timeline of the Big Bang, and the articles in Category:Physical cosmology). LENGTH AND TIME Making measurements is very important in physics. For reference, the table gives you the primary units of measurement in the MKS system. In free space (that is, space not containing electric charges), the equations take the form (using SI units):[28]. Physics 207 - Lab 1 - Measurements Introduction Any physical science requires measurement. The GPS satellites must account for the effects of gravitation and other relativistic factors in their circuitry. The figure above consists of 3 tapes, X, Y and Z, with a length of 1 m from the first dot to the last dot. i The Schrödinger equation[32] is. Consequences of this include relativity of simultaneity. x cosh Physics in particular often requires extreme levels of precision in time measurement, which has led to the requirement that time be considered an infinitely divisible linear continuum, and not quantized (i.e. mean any natural or man-made phenomenon that is repetitive and regular in nature In 1583, Galileo Galilei (1564–1642) discovered that a pendulum's harmonic motion has a constant period, which he learned by timing the motion of a swaying lamp in harmonic motion at mass at the cathedral of Pisa, with his pulse. In day to day living, time is a cultural construct whereby an event can be associated with a series of numbers. The primary time standard in the U.S. is currently NIST-F1, a laser-cooled Cs fountain,[34] the latest in a series of time and frequency standards, from the ammonia-based atomic clock (1949) to the caesium-based NBS-1 (1952) to NIST-7 (1993). ′ Cosmological arrow of time - distinguished by the expansion of the universe. The results are in agreement with the basic tenets of quantum field theory and reveal differences in the rates at which the quantum states of the B 0 meson transform into one another. ICSE Class 6 Revise. Using relativity and quantum theory we have been able to roughly reconstruct the history of the universe. At first, timekeeping was done by hand by priests, and then for commerce, with watchmen to note time as part of their duties. Periodic motion or happenings such as sunrise and sunset were formerly used for measuring time. According to the prevailing cosmological model of the Big Bang theory, time itself began as part of the entire Universe about 13.8 billion years ago. In 1824 Sadi Carnot (1796–1832) scientifically analyzed the steam engine with his Carnot cycle, an abstract engine. When making measurements. In physics, time plays a major role in the measurement of motion and forces. ⁡ They are appropriate for standards and scientific use. The caesium atomic clock became practical after 1950, when advances in electronics enabled reliable measurement of the microwave frequencies it generates. 
Albert Einstein's 1905 special relativity challenged the notion of absolute time, and could only formulate a definition of synchronization for clocks that mark a linear flow of time: If at the point A of space there is a clock, an observer at A can determine the time values of events in the immediate proximity of A by finding the positions of the hands which are simultaneous with these events. Like units, dimensions obey the rules of algebra. [19], In his Two New Sciences (1638), Galileo used a water clock to measure the time taken for a bronze ball to roll a known distance down an inclined plane; this clock was. Development of increasingly accurate frequency standards is underway. Gamow's prediction was a 5–10-kelvin black-body radiation temperature for the universe, after it cooled during the expansion. For example, a measurement of length is said to have dimension L or L 1, a measurement of mass has dimension M or M 1, and a measurement of time has dimension T or T 1. Then, try some practice problems. Some of the more common clocks and watches can be found in the table below: Equipment: It can be read in hours, minutes and seconds. Measurement of the period. Modern clocks make use of a piezoelectric crystal called quartz, you must have read quartz written on a wall clock while checking the time, well this crystal vibrates at a specific frequency when voltage is applied to it, so it is designed in such a way that a frequency of one hertz is obtained from it, this is used to measure time. Converting units of time. This is known as time dilation. Measurement is the process of comparing an unknown quantity with another quantity of its kind (called the unit of measurement) to find out how many times the first includes the second, Key elements of measurement process are the physical quantity, measuring tools and units of measurement. nonlinearity/irreversibility): the characteristic time, or rate of information entropy production, of a system. The Global Positioning System must also adjust signals to account for this effect. Measurement Uncertainities Motivation (15 minutes): Discuss the role of measurement and experimentation in physics; Illustrate issues surrounding measurement through measurement activities involving pairs (e.g. ). Measurement is a process of detecting an unknown physical quantity by using standard quantity. In 1945 Rabi then suggested that this technique be the basis of a clock[33] using the resonant frequency of an atomic beam. [2]) and, like length, mass, and charge, is usually described as a fundamental quantity. Measurement of time: using pendulum, clock or stopwatch; Period: time taken for 1 complete oscillation ; frequency: number of complete oscillations per second; the period increases with the length of the pendulum; MCQ Questions 1. Over time, instruments of great accuracy have been devolved to help scientist make better measurements. All other units of time measurement are now derived from the second. Richard of Wallingford (1292–1336), abbot of St. Alban's abbey, famously built a mechanical clock as an astronomical orrery about 1330.[17][18]. For reference, the table gives you the primary units of measurement in the MKS system. Measurement of mass is most commonly done by a Balance. Also, in physics, we often use functional relationship to understand how one quantity varies as a function of another. James Jespersen and Jane Fitz-Randolph (1999). By a clock, we mean any natural or man-made phenomenon that is repetitive and regular in nature. 
Also a casual term for a short period of time trijiffy (electronics) 1/20 s to 1/10 s: Used to measure the time between alternating power cycles. Every measurement of time involves measuring a change in some physical quantity. Mass, length, and time Mass, length, and time Time describes the flow of the universe from the past through the present into the future. The Poisson brackets are superseded by a nonzero commutator, say [H,A] for observable A, and Hamiltonian H: This equation denotes an uncertainty relation in quantum physics. Unit Length, Duration and Size Notes Other xentojiffy (physics)-3 × 10⁵³sThe amount of time it hypothetically takes light to hypothetically travel one Fermi (hypothetically the size of a nucleon) in a vacuum. This is so much better than results obtained by astronomical observations alone, so in 1967, the second was redefined as the time interval occupied by cycles of a specified energy change in the cesium atom. In physics, the definition of time is simple— time is change, or the interval over which change occurs. Time is an abstract concept at the best of times but these dimensions are so tiny that the classical laws of physics no longer count. ... but in fact it appears to be one possible take away from an exotic new quantum physics experiment. This page was last edited on 15 December 2020, at 05:19. e From this, a series of technical issues have emerged; see Category:Synchronization. The Michelson–Morley experiment failed to detect any difference in the relative speed of light due to the motion of the Earth relative to the luminiferous aether, suggesting that Maxwell's equations did, in fact, hold in all frames. ( Another important physical quantity that is often measured is mass, which you will also be measuring in this lab. The relative accuracy of such a time standard is currently on the order of 10−15[14] (corresponding to 1 second in approximately 30 million years). [24] Ilya Prigogine (1917–2003) stated that other thermodynamic systems which, like life, are also far from equilibrium, can also exhibit stable spatio-temporal structures. − θ These vector calculus equations which use the del operator ( The unit of measurement of time: the second, Thermodynamics and the paradox of irreversibility. Since the length is 1 m, the spacing between each dots is 0.2 m. We can calculate the speed of the object using $\text{Speed} = \frac{\text{Distance}}{\text{Time}} = \frac{0.2}{0.1} = 2 \, m/s$ Hence, X represent the tape from an object that is moving at constant speed. which is a change of coordinates in the four-dimensional Minkowski space, a dimension of which is ct. (In Euclidean space an ordinary rotation
9b2aa22f0b9bc2bc
Orbital hybridisation From Citizendium In chemistry, hybridisation or hybridization (see also spelling differences) is the concept of mixing atomic orbitals to form new hybrid orbitals suitable for the qualitative description of atomic bonding properties. Hybridised orbitals are very useful in the explanation of the shape of molecular orbitals for molecules. It is an integral part of valence bond theory and the valence shell electron-pair repulsion (VSEPR) theory [1] [2]. Historical development The hybridisation theory was promoted by chemist Linus Pauling[3] in order to explain the structure of molecules such as methane (CH4). Historically, this concept was developed for such simple chemical systems but the approach was later applied more widely, and today it is considered an effective heuristic for rationalizing the structures of organic compounds. Hybridisation theory is, however, considered less useful and less informative than Molecular Orbital Theory. Problems with hybridization are especially notable when the d orbitals are involved in bonding, as in coordination chemistry and organometallic chemistry. Although hybridisation schemes in transition metal chemistry can be used, they are not accurate and have little predictive power. It is important to note that orbitals are a model representation of the behavior of electrons within molecules. In the case of simple hybridisation, this approximation is based on the atomic orbitals of hydrogen. Hybridised orbitals are assumed to be mixtures of these atomic orbitals, superimposed on each other in various proportions. Hydrogen orbitals are used as a basis for simple schemes of hybridisation because it is one of the few examples of orbitals for which an exact analytic solution to its Schrödinger equation is known. These orbitals are then assumed to be slightly, but not significantly, distorted in heavier atoms, like carbon, nitrogen, and oxygen. Under these assumptions, the theory of hybridisation is most applicable. It must be noted that one does not need hybridisation to describe molecules, but for molecules made up from carbon, nitrogen and oxygen (and to a lesser extent, sulphur and phosphorus) the hybridisation theory/model makes the description much easier. The hybridisation theory finds its use mainly in organic chemistry, and mostly concerns C, N and O (and to a lesser extent P and S). Its explanation starts with the way bonding is organized in methane. sp3 hybrids Hybridisation describes the bonding atoms from an atom's point of view. That is, for a tetrahedrally coordinated carbon (e.g. methane, CH4), the carbon should have 4 orbitals with the correct symmetry to bond to the 4 hydrogen atoms. The problem with the existence of methane is now this: Carbon's ground-state configuration is 1s² 2s² 2px¹ 2py¹. (Note: The 1s orbital is lower in energy than the 2s orbital, and the 2s orbital is lower in energy than the 2p orbitals) The valence bond theory would predict, based on the existence of two half-filled p-type orbitals (the designations px py or pz are meaningless at this point, as they do not fill in any particular order), that C forms two covalent bonds, giving CH2. However, methylene is a very reactive molecule (see also: carbene) and cannot exist outside of a molecular system. Therefore, this theory alone cannot explain the existence of CH4.
Furthermore, ground state orbitals cannot be used for bonding in CH4. While exciting a 2s electron into a 2p orbital would theoretically allow for four bonds, according to the valence bond theory which has been proved experimentally correct for systems like O2 this would imply that the various bonds of CH4 would have differing energies due to differing levels of orbital overlap. Once again, this has been experimentally disproved: any hydrogen can be removed from a carbon with equal ease. To summarise, to explain the existence of CH4 (and many other molecules) a method by which as many as 12 bonds (for transition metals) of equal strength (and therefore equal length) can be created was required. The first step in hybridisation is the excitation of one (or more) electrons (we will have a look on the carbon atom in methane, for simplicity of the discussion): The proton that forms the nucleus of a hydrogen atom attracts one of the valence electrons on carbon. This causes an excitation, moving a 2s electron into a 2p orbital. This, however, increases the influence of the carbon nucleus on the valence electrons by increasing the effective core potential (the amount of charge the nucleus exerts on a given electron = Charge of Core - Charge of all electrons closer to the nucleus). The combination of these forces creates new mathematical functions known as hybridised orbitals. In the case of carbon attempting to bond with four hydrogens, four orbitals are required. Therefore, the 2s orbital (core orbitals are almost never involved in bonding) mixes with the three 2p orbitals to form four sp3 hybrids (read as s-p-three). See graphical summary below. In CH4, four sp³ hybridised orbitals are overlapped by hydrogen's 1s orbital, yielding four sigma (σ) bonds. The four bonds are of the same length and strength. This theory fits our requirements. A schematic presentation of hybrid orbitals overlapping hydrogens' s orbitals translates into Methane's tetrahedral shape An alternative view is: View the carbon as the C4- anion. In this case all the orbitals on the carbon are filled: If we now recombine these orbitals with the empty s-orbitals of 4 hydrogens (4 protons, H+) and allow maximum separation between the 4 hydrogens (i.e. tetrahedral surrounding of the carbon), we see that at any orientation of the p-orbitals, a single hydrogen has an overlap of 25% with the s-orbital of the C, and a total of 75% of overlap with the 3 p-orbitals (see that the relative percentages are the same as the character of the respective orbital in an sp3-hybridisation model, 25% s- and 75% p-character). According to the orbital hybridization theory the valence electrons in methane should be equal in energy but its photoelectron spectrum [4] shows two bands, one at 12.7 eV (one electron pair) and one at 23 eV (three electron pairs). This apparent inconsistency can be explained when one considers additional orbital mixing taking place when the sp3 orbitals mix with the 4 hydrogen orbitals. sp2 hybrids Other carbon based compounds and other molecules may be explained in a similar way as methane, take for example ethene (C2H4). Ethene has a double bond between the carbons. The Lewis structure looks like this: Carbon will sp2 hybridise, because hybrid orbitals will form only sigma bonds and one pi bond is required for the double bond between the carbons. The hydrogen-carbon bonds are all of equal strength and length, which agrees with experimental data. 
In sp2 hybridization the 2s orbital is mixed with only two of the three available 2p orbitals: forming a total of 3 sp2 orbitals with one p-orbital remaining. In ethene the two carbon atoms form a sigma bond by overlap of two sp2 orbitals and each carbon atoms forms two covalent bonds with hydrogen by s - sp3 overlap all with 120° angles. the pi-bond between the carbon atoms perpendicular to the molecular plane is formed by 2p-2p overlap. The amount of p-character is not restricted to integer values, i.e. hybridisations like sp2.5 are also readily described. In this case the geometries are somewhat distorted from the ideally hybridised picture. For example, as stated in Bent's rule, a bond tends to have higher p-character when directed toward a more electronegative substituent. sp hybrids The chemical bonding in compounds such as alkynes with triple bonds is explained by sp hybridization. In this model the 2s orbital mixes with only one of the three p-orbitals resulting in two sp orbitals and two remaining unchanged p orbitals. The chemical bonding in acetylene (C2H2) consists of sp - sp overlap between the two carbon atoms forming a sigma bond and two additional pi bonds form by p - p overlap. Each carbon also bonds to hydrogen in a sigma s - sp overlap at 180° angles. Hybridisation and molecule shape Using hybridisation, along with the VSEPR theory, helps to explain molecule shape: • AX1 (eg, LiH): no hybridisation; trivially linear shape • AX2 (eg, BeCl2): sp hybridisation; linear or diagonal shape; bond angles are cos-1(-1) = 180° • AX3 (eg, BCl3): sp² hybridisation; trigonal planar shape; bond angles are cos-1(-1/2) = 120° • AX4 (eg, CCl4): sp³ hybridisation; tetrahedral shape; bond angles are cos-1(-1/3) ≈ 109.5° • AX5 (eg, PCl5): sp³d hybridisation; trigonal bipyramidal shape • AX6 (eg, SF6): sp³d² hybridisation; octahedral (or square bipyramidal) shape This holds if there are no lone electron pairs on the central atom. If there are, they should be counted in the Xi number, but bond angles become smaller due to increased repulsion. For example, in water (H2O), the oxygen atom has two bonds with H and two lone electron pairs (as can be seen with the valence bond theory as well from the electronic configuration of oxygen), which means there are four such 'elements' on O. The model molecule is, then, AX4: sp³ hybridization is utilized, and the electron arrangement of H2O is tetrahedral. This agrees with the shape, we know water has a non-linear, bent structure, with an angle of 104.5 degrees (the two lone-pairs are not visible). Hybridization theory has been superseded by MO theory Although the language and pictures arising from Hybridization Theory, more widely known as Valence Bond Theory, remain widespread in synthetic organic chemistry, this qualitative analysis of bonding has been largely superseded by Molecular Orbital Theory. For example, inorganic chemistry texts have all but abandoned instruction of hybridization, except as a historical footnote.[5][6] One specific problem with hybridization is that is incorrectly predicts the photoelectron spectra of many molecules, including such fundamental species such as methane and water. From a pedogical perspective, hybridization approach tends to over-emphasize localization of bonding electrons and does not effectively embrace molecular symmetry as does MO Theory. 1. Clayden, Greeves, Warren, Wothers. Organic Chemistry. Oxford University Press (2001), ISBN 0-19-850346-6. 2. Organic chemistry John McMurry 2nd Ed. ISBN 0534079687 3. L. 
Pauling, J. Am. Chem. Soc. 53 (1931), 1367 4. Photoelectron spectrum of methane (external links 1 and 2)
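A quick numerical check of the sp3 geometry described above (a toy calculation using the standard textbook coefficients, independent of any quantum chemistry package): the four hybrids h_i = (s ± px ± py ± pz)/2, with an even number of minus signs, come out orthonormal, each carries 25% s-character, and their p-parts point 109.47° apart.

```python
import numpy as np

# Rows are the hybrids h1..h4; columns are the coefficients of (s, px, py, pz).
coeffs = np.array([
    [1,  1,  1,  1],
    [1,  1, -1, -1],
    [1, -1,  1, -1],
    [1, -1, -1,  1],
]) / 2.0

print("overlap matrix:\n", coeffs @ coeffs.T)        # identity matrix -> orthonormal hybrids
print("s-character of each hybrid:", coeffs[:, 0] ** 2)  # 0.25 each

p_parts = coeffs[:, 1:]                               # direction of each hybrid in (x, y, z)
for i in range(4):
    for j in range(i + 1, 4):
        cosang = np.dot(p_parts[i], p_parts[j]) / (
            np.linalg.norm(p_parts[i]) * np.linalg.norm(p_parts[j]))
        print(f"angle h{i+1}-h{j+1}: {np.degrees(np.arccos(cosang)):.2f} deg")
```

The angles all evaluate to cos⁻¹(−1/3) ≈ 109.47°, the tetrahedral bond angle quoted in the VSEPR list above.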
83f69c5b7fa471eb
A proposal for teaching introductory quantum physics in the footsteps of Einstein
Marco Di Mauro 1,*, Salvatore Esposito 2, Adele Naddeo 2 (1: University of Salerno; 2: INFN Sezione di Napoli). DOI: 10.3390/ECU2021-09320 (registering DOI)
A timely challenge in current physics education is to develop educational tracks aimed at introducing advanced high school students to the main concepts of quantum theory. While standard tracks are historical in nature, going from Planck's hypothesis to the Schrödinger equation, several points of this history tend to be left out. However, the richness of the history of quantum physics makes much more material available, part of which could potentially be adopted to enhance students' understanding of basic quantum physics. In this respect, the pivotal work of Einstein stands out, because of its clarity and readability, even for modern readers, and especially because many of the characteristic features of quantum physics can in fact be traced to one of his papers on the quantum theory of radiation. This is the case for light quanta (introduced in 1905, and then applied to the photoelectric effect), wave-particle duality for light (1909) and probability (1916). These concepts were all introduced using clear and compelling statistical arguments, which however are not part of usual high school curricula. We were led to think that high school students can fruitfully be exposed to the above material, and therefore we developed a didactic track which introduces some characteristic concepts of quantum physics in a way that follows Einstein's original arguments. This can be done in a way that requires nothing more than elementary integral calculus and statistics, plus elements of classical physics which are part of the standard curriculum of advanced high school students. Such a track can then usefully complement the usual historically oriented curricula, while giving the students a grasp of subtle quantum concepts, which can also help them when they come to more advanced topics such as matter waves, the Schrödinger equation and Born's rule. Keywords: Quantum physics; physics teaching; history of physics.
5069dc8c044a2aac
Chemistry LibreTexts Chapter 2: Waves and Particles
Quantum mechanics is the theoretical framework which describes the behavior of matter on the atomic scale. It is the most successful quantitative theory in the history of science, having withstood thousands of experimental tests without a single verifiable exception. It has correctly predicted or explained phenomena in fields as diverse as chemistry, elementary-particle physics, solid-state electronics, molecular biology and cosmology. A host of modern technological marvels, including transistors, lasers, computers and nuclear reactors, are offspring of the quantum theory. Possibly 30% of the US gross national product involves technology which is based on quantum mechanics. For all its relevance, the quantum world differs quite dramatically from the world of everyday experience. To understand the modern theory of matter, conceptual hurdles of both psychological and mathematical variety must be overcome. A paradox which stimulated the early development of the quantum theory concerned the indeterminate nature of light. Light usually behaves as a wave phenomenon but occasionally it betrays a particle-like aspect, a schizoid tendency known as the wave-particle duality. We consider first the wave theory of light.
The Double-Slit Experiment
Figure \(\PageIndex{1}\) shows a modernized version of the famous double-slit diffraction experiment first performed by Thomas Young in 1801. Light from a monochromatic (single wavelength) source passes through two narrow slits and is projected onto a screen. Each slit by itself would allow just a narrow band of light to illuminate the screen. But with both slits open, a beautiful interference pattern of alternating light and dark bands appears, with maximum intensity in the center. To understand what is happening, we review some key results about electromagnetic waves. Figure \(\PageIndex{1}\): Modern version of Young's interference experiment using a laser gun. Single slit (left) produces an intense band of light. Double slit (right) gives a diffraction pattern. Maxwell's theory of electromagnetism was an elegant unification of the diverse phenomena of electricity, magnetism and radiation, including light. Electromagnetic radiation is carried by transverse waves of electric and magnetic fields, propagating in vacuum at a speed \(c \approx 3\times 10^{8}\, m/sec\), known as the "speed of light." As shown in Figure 2, the E and B fields oscillate sinusoidally, in synchrony with one another. The magnitudes of E and B are proportional (\(B=E/c\) in SI units). The distance between successive maxima (or minima) at a given instant of time is called the wavelength \(\lambda\). At every point in space, the fields also oscillate sinusoidally as functions of time. The number of oscillations per unit time is called the frequency \(\nu\). Since the field moves one wavelength in the time \(\lambda/c\), the wavelength, frequency and speed for any wave phenomenon are related by \[ \lambda\nu=c\label{1}\] Figure \(\PageIndex{2}\): Schematic representation of electromagnetic wave. In electromagnetic theory, the intensity of radiation, energy flux incident on a unit area per unit time, is represented by the Poynting vector \[\vec{S}=\frac{1}{\mu_{0}}\vec{E}\times \vec{B}\label{2}\] The energy density contained in an electromagnetic field, even a static one, is given by \[\rho_{E}=\frac{\epsilon_{0}}{2}E^{2}+\frac{1}{2\mu_{0}}B^{2}\label{3}\] Note that both of the above energy quantities depend quadratically on the fields E and B.
To discuss the diffraction experiments described above, it is useful to define the amplitude of an electromagnetic wave at each point in space and time \(\vec{r}, t\) by a function \(\Psi(\vec{r},t)\), which can represent either the E or the B field, such that the intensity is given by \[\rho\left(\vec{r},t\right)=\left[\Psi\left(\vec{r},t\right)\right]^{2}\label{4}\] The function \(\Psi(\vec{r},t)\) will, in some later applications, have complex values. In such cases we generalize the definition of intensity to \[\rho\left(\vec{r},t\right)=\left|\Psi\left(\vec{r},t\right)\right|^{2}=\Psi\left(\vec{r},t\right)^\ast\,\Psi\left(\vec{r},t\right)\label{5}\] where \(\Psi\left(\vec{r},t\right)^\ast\) represents the complex conjugate of \(\Psi\left(\vec{r},t\right)\). In quantum mechanical applications, the function \(\Psi\) is known as the wavefunction. Figure \(\PageIndex{3}\): Interference of two equal sinusoidal waves. Top: constructive interference. Bottom: destructive interference. Center: intermediate case. The resulting intensity \(\rho=\Psi^2\) is shown on the right. The electric and magnetic fields, hence the amplitude \(\Psi\), can have either positive or negative values at different points in space. In fact, constructive and destructive interference arises from the superposition of waves, as illustrated in Figure \(\PageIndex{3}\). By Equation \(\ref{5}\), the intensity \(\rho\ge0\) everywhere. The light and dark bands on the screen are explained by constructive and destructive interference, respectively. The wavelike nature of light is convincingly demonstrated by the fact that the intensity with both slits open is not the sum of the individual intensities, i.e., \(\rho\neq\rho_{1}+\rho_{2}\). Rather it is the wave amplitudes which add: \[\Psi=\Psi_{1}+\Psi_{2}\label{6}\] with the intensity given by the square of the amplitude: \[\rho=\Psi^{2}=\Psi_{1}^{2}+\Psi_{2}^{2}+2\Psi_{1}\Psi_{2}=\rho_{1}+\rho_{2}+2\Psi_{1}\Psi_{2}\label{7}\] The cross term \(2\Psi_{1}\Psi_{2}\) is responsible for the constructive and destructive interference. Where \(\Psi_{1}\) and \(\Psi_{2}\) have the same sign, constructive interference makes the total intensity greater than the sum of \(\rho_{1}\) and \(\rho_{2}\). Where \(\Psi_{1}\) and \(\Psi_{2}\) have opposite signs, there is destructive interference. If, in fact, \(\Psi_{1} = -\Psi_{2}\), then the two waves cancel exactly, giving a dark fringe on the screen.
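The distinction between adding intensities and adding amplitudes is easy to see numerically. The following sketch (with an arbitrarily assumed relative phase between the two slit amplitudes) confirms that the double-slit intensity differs from \(\rho_1+\rho_2\) exactly by the cross term \(2\Psi_1\Psi_2\):

```python
import numpy as np

# Superposition of two illustrative slit amplitudes with an assumed phase offset.
x = np.linspace(0, 4 * np.pi, 9)
delta = np.pi / 3                    # assumed relative phase between the two slits
psi1 = np.cos(x)
psi2 = np.cos(x + delta)

rho_separate = psi1**2 + psi2**2     # sum of individual intensities (no interference)
rho_wave = (psi1 + psi2)**2          # intensity of the summed amplitudes (Equation 7)
cross_term = 2 * psi1 * psi2         # the interference term

print(np.allclose(rho_wave, rho_separate + cross_term))   # True
```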
Wave-Particle Duality

The interference phenomena demonstrated by the work of Young, Fresnel and others in the early 19th Century, apparently settled the matter that light was a wave phenomenon, contrary to the views of Newton a century earlier--case closed! But nearly a century later, phenomena were discovered which could not be satisfactorily accounted for by the wave theory, specifically blackbody radiation and the photoelectric effect. Deviating from the historical development, we will illustrate these effects by a modification of the double slit experiment. Let us equip the laser source with a dimmer switch capable of reducing the light intensity by several orders of magnitude, as shown in Figure \(\PageIndex{4}\). With each successive filter the diffraction pattern becomes dimmer and dimmer. Eventually we will begin to see localized scintillations at random positions on an otherwise dark screen. It is an almost inescapable conclusion that these scintillations are caused by photons, the bundles of light postulated by Planck and Einstein to explain blackbody radiation and the photoelectric effect. Figure \(\PageIndex{4}\): Scintillations observed after dimming laser intensity by several orders of magnitude. These are evidently caused by individual photons! But wonders do not cease even here. Even though the individual scintillations appear at random positions on the screen, their statistical behavior reproduces the original high-intensity diffraction pattern. Evidently the statistical behavior of the photons follows a predictable pattern, even though the behavior of individual photons is unpredictable. This implies that each individual photon, even though it behaves mostly like a particle, somehow carries with it a "knowledge" of the entire wavelike diffraction pattern. In some sense, a single photon must be able to go through both slits at the same time. This is what is known as the wave-particle duality for light: under appropriate circumstances light can behave as a wave or as a particle. Planck's resolution of the problem of blackbody radiation and Einstein's explanation of the photoelectric effect can be summarized by a relation between the energy of a photon and its frequency: \[E=h \nu \label{8b}\] where \(h = 6.626\times 10^{-34}\, J\, sec\), known as Planck's constant. Much later, the Compton effect was discovered, wherein an x-ray or gamma ray photon ejects an electron from an atom, as shown in Figure \(\PageIndex{5}\). Assuming conservation of momentum in a photon-electron collision, the photon is found to carry a momentum p, given by \[p=\dfrac{h}{\lambda} \label{9}\] Equations \(\ref{8b}\) and \(\ref{9}\) constitute quantitative realizations of the wave-particle duality, each relating a particle-like property--energy or momentum--to a wavelike property--frequency or wavelength. Figure \(\PageIndex{5}\): Compton effect. The momentum and energy carried by the incident x-ray photon are transferred to the ejected electron and the scattered photon. According to the special theory of relativity, the last two formulas are actually different facets of the same fundamental relationship. By Einstein's famous formula, the equivalence of mass and energy is given by \[E=mc^{2}\label{10}\] The photon's rest mass is zero, but in travelling at speed c, it acquires a finite mass. Equating Equations \(\ref{8b}\) and \(\ref{10}\) for the photon energy and taking the photon momentum to be \(p = mc\), we obtain \[p = \dfrac{E}{c} = \dfrac{h\nu}{c} = \dfrac{h}{\lambda} \label{11}\] Thus, the wavelength-frequency relation (Equation \(\ref{1}\)) implies the Compton-effect formula (Equation \(\ref{9}\)).
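As a quick numerical illustration of Equations \(\ref{8b}\), \(\ref{9}\) and \(\ref{11}\) (for an assumed wavelength of 500 nm), the photon energy and momentum can be evaluated directly:

```python
# Photon energy and momentum, E = h*nu and p = h/lambda, for an assumed
# green wavelength of 500 nm.
h = 6.626e-34        # Planck's constant, J s
c = 2.998e8          # speed of light, m/s
wavelength = 500e-9  # m (assumed example)

nu = c / wavelength          # frequency, Hz
E = h * nu                   # photon energy, J
p = h / wavelength           # photon momentum, kg m/s

print(f"nu = {nu:.3e} Hz, E = {E:.3e} J ({E/1.602e-19:.2f} eV), p = {p:.3e} kg m/s")
# E/c reproduces p, as in Equation (11):
print(abs(E / c - p) / p < 1e-12)   # True
```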
The best we can do is to describe the phenomena constituting the wave-particle duality. There is no widely accepted explanation in terms of everyday experience and common sense. Feynman referred to the "experiment with two holes" as the "central mystery of quantum mechanics." It should be mentioned that a number of models have been proposed over the years to rationalize these quantum mysteries. Bohm proposed that there might exist hidden variables which would make the behavior of each photon deterministic, i.e., particle-like. Everett and Wheeler proposed the "many worlds interpretation of quantum mechanics" in which each random event causes the splitting of the entire universe into disconnected parallel universes in which each possibility becomes the reality. Needless to say, not many people are willing to accept such a metaphysically unwieldy view of reality. Most scientists are content to apply the highly successful computational mechanisms of quantum theory to their work, without worrying unduly about its philosophical underpinnings. Sort of like people who enjoy eating roast beef but would rather not think about where it comes from. There was never any drawn-out controversy about whether electrons or any other constituents of matter were other than particle-like. Yet a variant of the double-slit experiment using electrons instead of light proves otherwise. The experiment is technically difficult but has been done. An electron gun, instead of a light source, produces a beam of electrons at a selected velocity, which is focused and guided by electric and magnetic fields. Then, everything that happens for photons has its analog for electrons. Individual electrons produce scintillations on a phosphor screen--this is how TV works. But electrons also exhibit diffraction effects, which indicates that they too have wavelike attributes. Diffraction experiments have been more recently carried out for particles as large as atoms and molecules, even for the C60 fullerene molecule. De Broglie in 1924 first conjectured that matter might also exhibit a wave-particle duality. A wavelike aspect of the electron might, for example, be responsible for the discrete nature of Bohr orbits in the hydrogen atom. According to de Broglie's hypothesis, the "matter waves" associated with a particle have a wavelength given by \[\lambda=\dfrac{h}{p}\label{12}\] which is identical in form to Compton's result (Equation \ref{9}) (which, in fact, was discovered later). The correctness of de Broglie's conjecture was most dramatically confirmed by the observations of Davisson and Germer in 1927 of diffraction of monoenergetic beams of electrons by metal crystals, much like the diffraction of x-rays. And measurements showed that de Broglie's formula (Equation \(\ref{12}\)) did indeed give the correct wavelength (see Figure \(\PageIndex{6}\)). Figure \(\PageIndex{6}\): Intensity of electron scattered at a fixed angle off a nickel crystal, as function of incident electron energy. From C. J. Davisson "Are Electrons Waves?" Franklin Institute Journal 205, 597 (1928).
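A short calculation shows the scale of de Broglie wavelengths probed by Davisson and Germer. The sketch below assumes an electron accelerated through about 54 V, roughly the energy used in their experiment, and applies Equation \(\ref{12}\):

```python
import math

# De Broglie wavelength for an electron accelerated through an assumed 54 V,
# comparable to the Davisson-Germer experiment.
h = 6.626e-34       # Planck's constant, J s
m_e = 9.109e-31     # electron mass, kg
e = 1.602e-19       # elementary charge, C

V = 54.0                        # accelerating voltage, volts (assumed example)
p = math.sqrt(2 * m_e * e * V)  # nonrelativistic momentum from E = p^2 / 2m
lam = h / p                     # de Broglie wavelength (Equation 12)

print(f"lambda = {lam*1e9:.3f} nm")   # ~0.167 nm, comparable to atomic spacings in a crystal
```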
The Schrödinger Equation

Schrödinger in 1926 first proposed an equation for de Broglie's matter waves. This equation cannot be derived from some other principle since it constitutes a fundamental law of nature. Its correctness can be judged only by its subsequent agreement with observed phenomena (a posteriori proof). Nonetheless, we will attempt a heuristic argument to make the result at least plausible. In classical electromagnetic theory, it follows from Maxwell's equations that each component of the electric and magnetic fields in vacuum is a solution of the wave equation \[\nabla^2\Psi-\dfrac{1}{c^2}\dfrac{\partial ^2\Psi}{\partial t^2}=0\label{13}\] where the Laplacian or "del-squared" operator is defined by \[\nabla^2=\dfrac{\partial^2}{\partial x^2}+\dfrac{\partial^2}{\partial y^2}+\dfrac{\partial^2}{\partial z^2}\label{14}\] We will attempt now to create an analogous equation for de Broglie's matter waves. Accordingly, let us consider a very general instance of wave motion propagating in the x-direction. At a given instant of time, the form of a wave might be represented by a function such as \[\psi(x)=f\left(\dfrac {2\pi x}{ \lambda}\right)\label{15}\] where \(f\) represents a periodic function such as \(\sin\theta\), \(\cos\theta\), or the complex exponential given by Euler's formula \[e^{i\theta}=\cos\theta + i \sin\theta\label{16}\] Each of the above is a periodic function, its value repeating every time its argument increases by \(2\pi\). This happens whenever x increases by one wavelength \(\lambda\). At a fixed point in space, the time-dependence of the wave has an analogous structure: \[T(t)=f(2\pi\nu t)\label{17}\] where \(\nu\) gives the number of cycles of the wave per unit time. Taking into account both x- and t-dependence, we consider a wavefunction of the form \[\Psi(x,t)=\exp\left[2\pi i\left(\dfrac{x}{\lambda}-\nu t\right)\right]\label{18}\] representing waves travelling from left to right. Now we make use of the Planck and de Broglie formulas (Equations \(\ref{8b}\) and \(\ref{12}\), respectively) to replace \(\nu\) and \(\lambda\) by their particle analogs. This gives \[\Psi(x,t)=\exp\left[\dfrac{i(px-Et)}{\hbar}\right]\label{19}\] where \[\hbar\equiv\dfrac{h}{2\pi}\label{20}\] Since Planck's constant occurs in most formulas with the denominator \(2\pi\), this abbreviated symbol was introduced by Dirac. Now Equation \(\ref{19}\) represents in some way the wavelike nature of a particle with energy E and momentum p. The time derivative of Equation \ref{19} gives \[\dfrac{\partial\Psi}{\partial t} = -(iE/\hbar)\times \exp \left[\dfrac{i(px-Et)}{\hbar} \right]\label{21}\] Thus \[i\hbar\dfrac{\partial\Psi}{\partial t} = E\Psi\label{22}\] Analogously, taking the derivative of Equation \ref{19} with respect to x, we find \[-i\hbar\dfrac{\partial\Psi}{\partial x} = p\Psi\label{23}\] The energy and momentum for a nonrelativistic free particle are related by \[E=\dfrac{p^2}{2m}\label{24}\] Thus \(\Psi(x,t)\) satisfies the partial differential equation \[i\hbar\dfrac{\partial\Psi}{\partial t}=-\dfrac{\hbar^2}{2m}\dfrac{\partial^2\Psi}{\partial x^2}\label{25}\] For a particle with a potential energy \(V(x)\), the energy becomes \[E=\dfrac{p^2}{2m}+V(x)\label{27}\] and we postulate that the equation for matter waves generalizes to \[i\hbar\dfrac{\partial\Psi}{\partial t}=\left[-\dfrac{\hbar^2}{2m}\dfrac{\partial^2}{\partial x^2}+V(x)\right]\Psi(x,t)\label{28}\] For waves in three dimensions we should then have \[i\hbar\dfrac{\partial\Psi}{\partial t}=\left[-\dfrac{\hbar^2}{2m}\nabla^2+V(\vec{r})\right]\Psi(\vec{r},t)\label{29}\] Here the potential energy and the wavefunction depend on the three space coordinates x, y, z, which we write for brevity as \(\vec{r}\). This is the time-dependent Schrödinger equation for the amplitude \(\Psi(\vec{r}, t)\) of the matter waves associated with the particle. Its formulation in 1926 represents the starting point of modern quantum mechanics. (Heisenberg in 1925 proposed another version known as matrix mechanics.) For conservative systems, in which the energy is a constant, we can separate out the time-dependent factor from (\ref{19}) and write \[\Psi(\vec{r},t)=\psi(\vec{r})\,e^{-iEt/\hbar}\label{30}\] where \(\psi(\vec{r})\) is a wavefunction dependent only on space coordinates. Putting Equation \ref{30} into Equation \ref{29} and cancelling the exponential factors, we obtain the time-independent Schrödinger equation: \[\left[-\dfrac{\hbar^2}{2m}\nabla^2+V(\vec{r})\right]\psi(\vec{r})=E\,\psi(\vec{r})\label{31}\] Most of our applications of quantum mechanics to chemistry will be based on this equation. The bracketed object in Equation \(\ref{31}\) is called an operator. An operator is a generalization of the concept of a function. Whereas a function is a rule for turning one number into another, an operator is a rule for turning one function into another. The Laplacian (\(\nabla^2\)) is an example of an operator. We usually indicate that an object is an operator by placing a `hat' over it, e.g., \(\hat{A}\). The action of an operator that turns the function f into the function g is represented by \[\hat{A}f=g\label{32}\] Equation \(\ref{23}\) implies that the operator for the x-component of momentum can be written \[\hat{p}_{x}=-i\hbar\dfrac{\partial}{\partial x}\label{33}\] and by analogy, we must have \[\hat{p}_{y}=-i\hbar\dfrac{\partial}{\partial y}\] \[\hat{p}_{z}=-i\hbar\dfrac{\partial}{\partial z}\label{34}\] The energy, as in Equation \(\ref{27}\), expressed as a function of position and momentum is known in classical mechanics as the Hamiltonian. Generalizing to three dimensions, \[H=\dfrac{1}{2m}\left(p_x^{2}+p_y^{2}+p_z^{2}\right)+V(x,y,z)\label{35}\] We construct thus the corresponding quantum-mechanical operator \[\hat{H}=-\dfrac{\hbar^2}{2m}\left(\dfrac{\partial^2}{\partial x^2}+\dfrac{\partial^2}{\partial y^2}+\dfrac{\partial^2}{\partial z^2}\right)+V(x,y,z)=-\dfrac{\hbar^2}{2m}\nabla^2+V(\vec{r})\label{36}\] The time-independent Schrödinger equation (Equation \(\ref{31}\)) can then be written symbolically as \[\hat{H}\psi=E\psi\label{37}\] This form is actually applicable more generally, to any quantum-mechanical problem, given the appropriate Hamiltonian and wavefunction.
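A minimal numerical illustration of Equation \(\ref{37}\) (a sketch with assumed parameters, not part of the original chapter): discretizing the Hamiltonian for an electron in a one-dimensional box of width 1 nm, with V = 0 inside, and diagonalizing it yields a discrete set of allowed energies.

```python
import numpy as np

# Finite-difference Hamiltonian for a particle in a box (Dirichlet walls).
# The lowest eigenvalues come out close to the exact E_n = n^2 h^2 / (8 m L^2).
hbar = 1.055e-34
m = 9.109e-31            # electron mass, kg
L = 1e-9                 # assumed box width, m
N = 500                  # interior grid points
dx = L / (N + 1)

# Kinetic operator -(hbar^2 / 2m) d^2/dx^2 as a tridiagonal matrix
main = np.full(N, 2.0)
off = np.full(N - 1, -1.0)
H = (hbar**2 / (2 * m * dx**2)) * (np.diag(main) + np.diag(off, 1) + np.diag(off, -1))

E = np.linalg.eigvalsh(H)[:3] / 1.602e-19     # lowest three eigenvalues, in eV
print(E)       # approximately [0.38, 1.50, 3.38] eV, scaling like n^2
```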
Most applications to chemistry involve systems containing many particles--electrons and nuclei. An operator equation of the form \[\hat{A}\psi=\text{const}\,\psi\label{38}\] is called an eigenvalue equation. Recall that, in general, an operator acting on a function gives another function (e.g., Equation \(\ref{32}\)). The special case (Equation \(\ref{38}\)) occurs when the second function is a multiple of the first. In this case, \(\psi\) is known as an eigenfunction and the constant is called an eigenvalue. (These terms are German-English hybrids, the purely English equivalents being `characteristic function' and `characteristic value.') To every dynamical variable \(A\) in quantum mechanics, there corresponds an eigenvalue equation, usually written \[\hat{A}\psi=a\psi \label{39}\] The eigenvalues \(a\) represent the possible measured values of the variable \(A\). The Schrödinger Equation (\(\ref{37}\)) is the best known instance of an eigenvalue equation, with its eigenvalues corresponding to the allowed energy levels of the quantum system.

The Wavefunction

For a single-particle system, the wavefunction \(\Psi(\vec{r},t)\), or \(\psi(\vec{r})\) for the time-independent case, represents the amplitude of the still vaguely defined matter waves. The relationship between amplitude and intensity of electromagnetic waves developed in Equation \(\ref{5}\) can be extended to matter waves. The most commonly accepted interpretation of the wavefunction is due to Max Born (1926), according to which \(\rho(\vec{r})=\left|\psi(\vec{r})\right|^{2}\), the square of the absolute value of \(\psi(\vec{r})\), is proportional to the probability density (probability per unit volume) that the particle will be found at the position \(\vec{r}\). Probability density is the three-dimensional analog of the diffraction pattern that appears on the two-dimensional screen in the double-slit diffraction experiment for electrons described in the preceding Section. In the latter case we had the relative probability that a scintillation would appear at a given point on the screen. The function \(\rho(\vec{r})\) becomes equal, rather than just proportional to, the probability density when the wavefunction is normalized, that is, \[\int\left|\psi(\vec{r})\right|^{2}\,d\tau=1\label{40}\] This simply accounts for the fact that the total probability of finding the particle somewhere adds up to unity. The integration in Equation \(\ref{40}\) extends over all space and the symbol \(d\tau\) designates the appropriate volume element. For example, the volume differential in Cartesian coordinates, \(d\tau=dx\,dy\,dz\), is changed in spherical coordinates to \(d\tau=r^2\sin\theta\, dr \,d\theta \, d\phi\). The physical significance of the wavefunction makes certain demands on its mathematical behavior. The wavefunction must be a single-valued function of all its coordinates, since the probability density ought to be uniquely determined at each point in space. Moreover, the wavefunction should be finite and continuous everywhere, since a physically-meaningful probability density must have the same attributes. The conditions that the wavefunction be single-valued, finite and continuous--in short, "well behaved"--lead to restrictions on solutions of the Schrödinger equation such that only certain values of the energy and other dynamical variables are allowed. This is called quantization and is the feature that gives quantum mechanics its name.
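The normalization condition of Equation \(\ref{40}\) and the Born interpretation can be illustrated with a simple one-dimensional example. The sketch below normalizes an assumed Gaussian wavefunction on a grid and then evaluates the probability of finding the particle in a finite interval; the width parameter is an arbitrary choice.

```python
import numpy as np

# Normalize an assumed Gaussian wavefunction psi(x) ~ exp(-x^2 / 2a^2) on a grid,
# then compute the probability of finding the particle within |x| < a.
a = 1.0                                    # assumed width parameter
x = np.linspace(-10 * a, 10 * a, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (2 * a**2))           # unnormalized wavefunction

norm = np.sqrt(np.sum(psi**2) * dx)        # normalization constant
psi = psi / norm

rho = psi**2                               # probability density |psi|^2
print(np.sum(rho) * dx)                    # ~1.0, as required by Equation (40)
inside = (x > -a) & (x < a)
print(np.sum(rho[inside]) * dx)            # ~0.84: probability of finding the particle in |x| < a
```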
Authors and titles for Jul 2018
[1] arXiv:1807.00010 [pdf, ps, other] Title: Global dimension function on stability conditions and Gepner equations Authors: Yu Qiu Comments: Fix a proof and many typos and change the structure
[2] arXiv:1807.00017 [pdf, ps, other] Title: $μ$-constant deformations of functions on ICIS Subjects: Algebraic Geometry (math.AG)
[3] arXiv:1807.00022 [pdf, other] Title: On Solving Ambiguity Resolution with Robust Chinese Remainder Theorem for Multiple Numbers Subjects: Information Theory (cs.IT)
[4] arXiv:1807.00025 [pdf, ps, other] Title: Models of curves over DVRs Authors: Tim Dokchitser Comments: 39 pages, minor changes
[5] arXiv:1807.00026 [pdf, other] Title: Null controllability of parabolic equations with interior degeneracy and one-sided control
[6] arXiv:1807.00027 [pdf, ps, other] Title: Bounds on the Poincaré constant for convolution measures Comments: comments welcome Subjects: Probability (math.PR); Information Theory (cs.IT); Functional Analysis (math.FA)
[7] arXiv:1807.00032 [pdf, ps, other] Title: A degree condition for diameter two orientability of graphs Subjects: Combinatorics (math.CO)
[8] arXiv:1807.00034 [pdf, other] Title: Behavior of zeros of $X_{1}$-Jacobi and $X_{1}$-Laguerre exceptional polynomials Authors: Yen Chi Lun Subjects: Classical Analysis and ODEs (math.CA)
[9] arXiv:1807.00041 [pdf, other] Title: Improved Generalized Periods Estimates Over Curves on Riemannian Surfaces with Nonpositive Curvature Comments: 26 pages, 3 figures, 2 corollaries on weak $L^2$ convergence added. These results should be compared with arXiv:1711.09864 by the second author on the closed geodesic case Subjects: Analysis of PDEs (math.AP); Differential Geometry (math.DG); Number Theory (math.NT)
[10] arXiv:1807.00045 [pdf, other] Title: A note on incremental POD algorithms for continuous time data Journal-ref: Appl. Numer. Math. 144 (2019), 223-233 Subjects: Numerical Analysis (math.NA)
[11] arXiv:1807.00047 [pdf, ps, other] Title: On Spectral Properties of some Class of Non-selfadjoint Operators Authors: M.V.Kukushkin Comments: The report devoted to the results of this work took place 06.12.2017 at the seminar of the Department of mathematical physics St. Petersburg state University, Saint Petersburg branch of V.A.Steklov Mathematical Institute of the Russian Academy of science, Russia, Saint Petersburg. arXiv admin note: text overlap with arXiv:1804.10840 Subjects: Functional Analysis (math.FA)
[12] arXiv:1807.00056 [pdf, other] Title: Fundamental Limits of Decentralized Data Shuffling Comments: 22 pages, 5 figures, to appear in IEEE Transactions on Information Theory Subjects: Information Theory (cs.IT)
[13] arXiv:1807.00057 [pdf, ps, other] Title: On nonassociative graded-simple algebras over the field of real numbers Comments: 24 pages Subjects: Rings and Algebras (math.RA)
[14] arXiv:1807.00059 [pdf, ps, other] Title: Hermitian Curvature flow on unimodular Lie groups and static invariant metrics Comments: 25 pages. Revised version. To appear in TAMS Journal-ref: Trans. Amer. Math. Soc., Volume 373 (6), 2020, 3967-3993 Subjects: Differential Geometry (math.DG)
[15] arXiv:1807.00070 [pdf, ps, other] Title: Quasi Markov Chain Monte Carlo Methods Subjects: Statistics Theory (math.ST); Probability (math.PR)
[16] arXiv:1807.00079 [pdf, ps, other] Title: Pushforwards of Measures on Real Varieties under Maps with Rational Singularities Authors: Andrew Reiser Comments: 39 pages, no figures. arXiv admin note: text overlap with arXiv:1307.0371 by other authors Subjects: Algebraic Geometry (math.AG)
[17] arXiv:1807.00081 [pdf, ps, other] Title: On a rationality problem for fields of cross-ratios Comments: 7 pages Subjects: Algebraic Geometry (math.AG); Group Theory (math.GR); Rings and Algebras (math.RA)
[18] arXiv:1807.00085 [pdf, ps, other] Title: Hurwitz numbers and integrable hierarchy of Volterra type Comments: 12 pages, no figure; (v2) typos in eqs. (14), (15) etc. are corrected; (v3) typos are corrected, final version for publication Journal-ref: J. Phys. A: Math. Theor. 51 (2018), 43LT01 (9 pages)
[19] arXiv:1807.00086 [pdf, other] Title: Hybridized discontinuous Galerkin methods for wave propagation
[20] arXiv:1807.00087 [pdf, ps, other] Title: Whitehead products in moment-angle complexes Comments: 16 pages, comments welcome Subjects: Algebraic Topology (math.AT); Combinatorics (math.CO)
[21] arXiv:1807.00088 [pdf, other] Title: Asymptotics of recurrence coefficients for the Laguerre weight with a singularity at the edge Authors: Xiao-Bo Wu Comments: 15 pages, 3 figures Subjects: Classical Analysis and ODEs (math.CA)
[22] arXiv:1807.00091 [pdf, other] Title: A linearized and conservative Fourier pseudo-spectral method for the damped nonlinear Schrödinger equation in three dimensions Comments: 29 pages, 2 figures Subjects: Numerical Analysis (math.NA)
[23] arXiv:1807.00098 [pdf, ps, other] Title: Global Well-Posedness and Exponential Stability for Heterogeneous Anisotropic Maxwell's Equations under a Nonlinear Boundary Feedback with Delay Comments: updated and improved version Subjects: Analysis of PDEs (math.AP)
[24] arXiv:1807.00104 [pdf, ps, other] Title: On the K-theory of division algebras over local fields Subjects: K-Theory and Homology (math.KT)
[25] arXiv:1807.00105 [pdf, ps, other] Title: $h^*$-Polynomials With Roots on the Unit Circle Comments: minor clarifications added to version 2 Subjects: Combinatorics (math.CO); Number Theory (math.NT)
[92][93] Developing volume production using this method is the key to enabling faster carbon reduction by using hydrogen in industrial processes,[94] fuel cell electric heavy truck transportation,[95][96][97][98] and in gas turbine electric power generation. Additional hydrogen can be recovered from the steam by use of carbon monoxide through the water gas shift reaction, especially with an iron oxide catalyst. [88] By contrast, the positive hydrogen molecular ion (H+2) is a rare molecule in the universe. [121] The energy density per unit volume of both liquid hydrogen and compressed hydrogen gas at any practicable pressure is significantly less than that of traditional fuel sources, although the energy density per unit fuel mass is higher. The major application is the production of margarine. Ignition of leaking hydrogen is widely assumed to be the cause, but later investigations pointed to the ignition of the aluminized fabric coating by static electricity. This ion has also been observed in the upper atmosphere of the planet Jupiter. As a plasma, hydrogen's electron and proton are not bound together, resulting in very high electrical conductivity and high emissivity (producing the light from the Sun and other stars). [65] In 1671, Robert Boyle discovered and described the reaction between iron filings and dilute acids, which results in the production of hydrogen gas. One of the many complications to this highly optimized technology is the formation of coke or carbon: consequently, steam reforming typically employs an excess of H2O. This function is particularly common in group 13 elements, especially in boranes (boron hydrides) and aluminium complexes, as well as in clustered carboranes.[44] [24] In the orthohydrogen form, the spins of the two nuclei are parallel and form a triplet state with a molecular spin quantum number of 1 (1/2 + 1/2); in the parahydrogen form the spins are antiparallel and form a singlet with a molecular spin quantum number of 0 (1/2 − 1/2). When the helium is vaporized, the atomic hydrogen would be released and combine back to molecular hydrogen.

Hydrogen production using natural gas methane pyrolysis is a recent "no greenhouse gas" process. [44] Hydrogen gas is produced by some bacteria and algae and is a natural component of flatus, as is methane, itself a hydrogen source of increasing importance.[86] Hydrogen can form compounds with elements that are more electronegative, such as halogens (F, Cl, Br, I), or oxygen; in these compounds hydrogen takes on a partial positive charge. [151] Hydrogen dissolves in many metals and in addition to leaking out, may have adverse effects on them, such as hydrogen embrittlement,[152] leading to cracks and explosions. Many physical and chemical properties of hydrogen depend on the parahydrogen/orthohydrogen ratio (it often takes days or weeks at a given temperature to reach the equilibrium ratio, for which the data is usually given). Therefore, H2 was used in the Hindenburg airship, which was destroyed in a midair fire over New Jersey on 6 May 1937. H2 is produced in chemistry and biology laboratories, often as a by-product of other reactions; in industry for the hydrogenation of unsaturated substrates; and in nature as a means of expelling reducing equivalents in biochemical reactions. [83] Throughout the universe, hydrogen is mostly found in the atomic and plasma states, with properties quite distinct from those of molecular hydrogen. However, the atomic electron and proton are held together by electromagnetic force, while planets and celestial objects are held by gravity. [144] Parts per million (ppm) of H2 occurs in the breath of healthy humans. Molecular H2 exists as two spin isomers, i.e. orthohydrogen and parahydrogen. As the only neutral atom for which the Schrödinger equation can be solved analytically,[8] study of the energetics and bonding of the hydrogen atom has played a key role in the development of quantum mechanics. It consists of an antiproton with a positron. It is similarly the source of hydrogen in the manufacture of hydrochloric acid. Hydrogen is the chemical element with the symbol H and atomic number 1. [139] Tritium (hydrogen-3), produced in nuclear reactors, is used in the production of hydrogen bombs,[140] as an isotopic label in the biosciences,[60] and as a radiation source in luminous paints. Under anaerobic conditions, ferrous hydroxide (Fe(OH)2) is oxidized by the protons of water to form magnetite and H2. [37] By some definitions, "organic" compounds are only required to contain carbon. Hydrogen is often produced using natural gas, which involves the removal of hydrogen from hydrocarbons at very high temperatures, with 48% of hydrogen production coming from steam reforming. Key consumers of H2 include hydrodealkylation, hydrodesulfurization, and hydrocracking.

Diatomic gases composed of heavier atoms do not have such widely spaced levels and do not exhibit the same effect. [136] Pure or mixed with nitrogen (sometimes called forming gas), hydrogen is a tracer gas for detection of minute leaks. [35] The study of their properties is known as organic chemistry[36] and their study in the context of living organisms is known as biochemistry. [141] The triple point temperature of equilibrium hydrogen is a defining fixed point on the ITS-90 temperature scale at 13.8033 Kelvin. This reaction is the basis of the Kipp's apparatus, which once was used as a laboratory gas source: in the absence of acid, the evolution of H2 is slower. In the quantum mechanical treatment, the electron in a ground state hydrogen atom has no angular momentum at all—illustrating how the "planetary orbit" differs from electron motion. The liquid and gas phase thermal properties of pure parahydrogen differ significantly from those of the normal form because of differences in rotational heat capacities, as discussed more fully in spin isomers of hydrogen. [5] Regularly scheduled flights started in 1910 and by the outbreak of World War I in August 1914, they had carried 35,000 passengers without a serious incident. [121] The Sun's energy comes from nuclear fusion of hydrogen, but this process is difficult to achieve controllably on Earth. Regular passenger service resumed in the 1920s and the discovery of helium reserves in the United States promised increased safety, but the U.S. government refused to sell the gas for this purpose. Although hydrides can be formed with almost all main-group elements, the number and combination of possible compounds varies widely; for example, more than 100 binary borane hydrides are known, but only one binary aluminium hydride. [5] Deuterium was discovered in December 1931 by Harold Urey, and tritium was prepared in 1934 by Ernest Rutherford, Mark Oliphant, and Paul Harteck. [5] François Isaac de Rivaz built the first de Rivaz engine, an internal combustion engine powered by a mixture of hydrogen and oxygen, in 1806. Under the Brønsted–Lowry acid–base theory, acids are proton donors, while bases are proton acceptors. Some such organisms, including the alga Chlamydomonas reinhardtii and cyanobacteria, have evolved a second step in the dark reactions in which protons and electrons are reduced to form H2 gas by specialized hydrogenases in the chloroplast. The symbols D and T (instead of 2H and 3H) are sometimes used for deuterium and tritium, but the corresponding symbol for protium, P, is already in use for phosphorus and thus is not available for protium. H2 is relatively unreactive. [79] Antihydrogen (H) is the antimatter counterpart to hydrogen. [72] For example, the ISS,[73] Mars Odyssey[74] and the Mars Global Surveyor[75] are equipped with nickel-hydrogen batteries.

Antihydrogen is the only type of antimatter atom to have been produced as of 2015. It is used as a shielding gas in welding methods such as atomic hydrogen welding. The detection of a burning hydrogen leak may require a flame detector; such leaks can be very dangerous. Hydrogen plays a vital role in powering stars through the proton-proton reaction in the case of stars with very low to approximately 1 mass of the Sun, and the CNO cycle of nuclear fusion in the case of stars more massive than our Sun. [154] Even interpreting the hydrogen data (including safety data) is confounded by a number of phenomena. However, even in this case, such solvated hydrogen cations are more realistically conceived as being organized into clusters that form species closer to H9O+4. [84] Under ordinary conditions on Earth, elemental hydrogen exists as the diatomic gas, H2. An exception in group 2 hydrides is BeH2, which is polymeric. [123] For example, CO2 sequestration followed by carbon capture and storage could be conducted at the point of H2 production from fossil fuels. In the dark part of its orbit, the Hubble Space Telescope is also powered by nickel-hydrogen batteries, which were finally replaced in May 2009,[76] more than 19 years after launch and 13 years beyond their design life.[77] [80][81] Hydrogen, as atomic H, is the most abundant chemical element in the universe, making up 75 percent of normal matter by mass and more than 90 percent by number of atoms. During the early study of radioactivity, various heavy radioactive isotopes were given their own names, but such names are no longer used, except for deuterium and tritium. Hydrogen is the most abundant chemical substance in the universe, constituting roughly 75% of all baryonic mass. The resulting ammonia is used to supply the majority of the protein consumed by humans.

According to quantum theory, this behavior arises from the spacing of the (quantized) rotational energy levels, which are particularly wide-spaced in H2 because of its low mass. Because of its simple atomic structure, consisting only of a proton and an electron, the hydrogen atom, together with the spectrum of light produced from it or absorbed by it, has been central to the development of the theory of atomic structure. [126] Fuel cells can convert hydrogen and oxygen directly to electricity more efficiently than internal combustion engines. But the damage to hydrogen's reputation as a lifting gas was already done and commercial hydrogen airship travel ceased. Hydrogen may be obtained from fossil sources (such as methane), but these sources are unsustainable. [40] These properties may be useful when hydrogen is purified by passage through hot palladium disks, but the gas's high solubility is a metallurgical problem, contributing to the embrittlement of many metals,[11] complicating the design of pipelines and storage tanks.[12] Pure hydrogen-oxygen flames emit ultraviolet light and with high oxygen mix are nearly invisible to the naked eye, as illustrated by the faint plume of the Space Shuttle Main Engine, compared to the highly visible plume of a Space Shuttle Solid Rocket Booster, which uses an ammonium perchlorate composite.
The shapes of the first five atomic orbitals are: 1s, 2s, 2px, 2py, and 2pz. The two colors show the phase or sign of the wave function in each region. Each picture is domain coloring of a ψ(x, y, z) function which depends on the coordinates of one electron. To see the elongated shape of ψ(x, y, z)² functions that show probability density more directly, see pictures of d-orbitals below. In atomic theory and quantum mechanics, an atomic orbital is a mathematical function describing the location and wave-like behavior of an electron in an atom.[1] This function can be used to calculate the probability of finding any electron of an atom in any specific region around the atom's nucleus. The term atomic orbital may also refer to the physical region or space where the electron can be calculated to be present, as predicted by the particular mathematical form of the orbital.[2] Each orbital in an atom is characterized by a set of values of the three quantum numbers n, ℓ, and mℓ, which respectively correspond to the electron's energy, angular momentum, and an angular momentum vector component (the magnetic quantum number). As an alternative to the magnetic quantum number, the orbitals are often labeled by the associated harmonic polynomials (e.g. xy, x²−y²). Each such orbital can be occupied by a maximum of two electrons, each with its own projection of spin, ms. The simple names s orbital, p orbital, d orbital, and f orbital refer to orbitals with angular momentum quantum number ℓ = 0, 1, 2, and 3 respectively. These names, together with the value of n, are used to describe the electron configurations of atoms. They are derived from the description by early spectroscopists of certain series of alkali metal spectroscopic lines as sharp, principal, diffuse, and fundamental. Orbitals for ℓ > 3 continue alphabetically (g, h, i, k, ...),[3] omitting j[4][5] because some languages do not distinguish between the letters "i" and "j".[6] Atomic orbitals are the basic building blocks of the atomic orbital model (alternatively known as the electron cloud or wave mechanics model), a modern framework for visualizing the submicroscopic behavior of electrons in matter. In this model the electron cloud of a multi-electron atom may be seen as being built up (in approximation) in an electron configuration that is a product of simpler hydrogen-like atomic orbitals. The repeating periodicity of the blocks of 2, 6, 10, and 14 elements within sections of the periodic table arises naturally from the total number of electrons that occupy a complete set of s, p, d, and f atomic orbitals, respectively, although for higher values of the quantum number n, particularly when the atom in question bears a positive charge, the energies of certain sub-shells become very similar and so the order in which they are said to be populated by electrons (e.g. Cr = [Ar]4s¹3d⁵ and Cr²⁺ = [Ar]3d⁴) can only be rationalized somewhat arbitrarily. Atomic orbitals of the electron in a hydrogen atom at different energy levels. The probability of finding the electron is given by the color, as shown in the key at upper right. Electron properties With the development of quantum mechanics and experimental findings (such as the two slit diffraction of electrons), it was found that the orbiting electrons around a nucleus could not be fully described as particles, but needed to be explained by the wave-particle duality. In this sense, the electrons have the following properties: Wave-like properties: 1.
The electrons do not orbit the nucleus in the manner of a planet orbiting the sun, but instead exist as standing waves. Thus the lowest possible energy an electron can take is similar to the fundamental frequency of a wave on a string. Higher energy states are similar to harmonics of that fundamental frequency. 2. The electrons are never in a single point location, although the probability of interacting with the electron at a single point can be found from the wave function of the electron. The charge on the electron acts like it is smeared out in space in a continuous distribution, proportional at any point to the squared magnitude of the electron's wave function. Particle-like properties: 1. The number of electrons orbiting the nucleus can only be an integer. 2. Electrons jump between orbitals like particles. For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon. 3. The electrons retain particle-like properties such as: each wave state has the same electrical charge as its electron particle. Each wave state has a single discrete spin (spin up or spin down) depending on its superposition. Thus, electrons cannot be described simply as solid particles. An analogy might be that of a large and often oddly shaped "atmosphere" (the electron), distributed around a relatively tiny planet (the atomic nucleus). Atomic orbitals exactly describe the shape of this "atmosphere" only when a single electron is present in an atom. When more electrons are added to a single atom, the additional electrons tend to more evenly fill in a volume of space around the nucleus so that the resulting collection (sometimes termed the atom's "electron cloud"[7]) tends toward a generally spherical zone of probability describing the electron's location, because of the uncertainty principle. Formal quantum mechanical definition Atomic orbitals may be defined more precisely in formal quantum mechanical language. They are approximate solutions to the Schrodinger equation for the electrons bound to the atom by the electric field of the atom's nucleus. Specifically, in quantum mechanics, the state of an atom, i.e., an eigenstate of the atomic Hamiltonian, is approximated by an expansion (see configuration interaction expansion and basis set) into linear combinations of anti-symmetrized products (Slater determinants) of one-electron functions. The spatial components of these one-electron functions are called atomic orbitals. (When one considers also their spin component, one speaks of atomic spin orbitals.) A state is actually a function of the coordinates of all the electrons, so that their motion is correlated, but this is often approximated by this independent-particle model of products of single electron wave functions.[8] (The London dispersion force, for example, depends on the correlations of the motion of the electrons.) In atomic physics, the atomic spectral lines correspond to transitions (quantum leaps) between quantum states of an atom. These states are labeled by a set of quantum numbers summarized in the term symbol and usually associated with particular electron configurations, i.e., by occupation schemes of atomic orbitals (for example, 1s2 2s2 2p6 for the ground state of neon-term symbol: 1S0). This notation means that the corresponding Slater determinants have a clear higher weight in the configuration interaction expansion. 
The atomic orbital concept is therefore a key concept for visualizing the excitation process associated with a given transition. For example, one can say for a given transition that it corresponds to the excitation of an electron from an occupied orbital to a given unoccupied orbital. Nevertheless, one has to keep in mind that electrons are fermions ruled by the Pauli exclusion principle and cannot be distinguished from each other.[9] Moreover, it sometimes happens that the configuration interaction expansion converges very slowly and that one cannot speak about a simple one-determinant wave function at all. This is the case when electron correlation is large. Fundamentally, an atomic orbital is a one-electron wave function, even though most electrons do not exist in one-electron atoms, and so the one-electron view is an approximation. When thinking about orbitals, we are often given an orbital visualization heavily influenced by the Hartree–Fock approximation, which is one way to reduce the complexities of molecular orbital theory. Types of orbitals 3D views of some hydrogen-like atomic orbitals showing probability density and phase (g orbitals and higher are not shown) Atomic orbitals can be the hydrogen-like "orbitals" which are exact solutions to the Schrödinger equation for a hydrogen-like "atom" (i.e., an atom with one electron). Alternatively, atomic orbitals refer to functions that depend on the coordinates of one electron (i.e., orbitals) but are used as starting points for approximating wave functions that depend on the simultaneous coordinates of all the electrons in an atom or molecule. The coordinate systems chosen for atomic orbitals are usually spherical coordinates (r, θ, φ) in atoms and Cartesian (x, y, z) in polyatomic molecules. The advantage of spherical coordinates (for atoms) is that an orbital wave function is a product of three factors each dependent on a single coordinate: ψ(r, θ, φ) = R(r) Θ(θ) Φ(φ). The angular factors of atomic orbitals Θ(θ) Φ(φ) generate s, p, d, etc. functions as real combinations of spherical harmonics Yℓm(θ, φ) (where ℓ and m are quantum numbers). There are typically three mathematical forms for the radial functions R(r) which can be chosen as a starting point for the calculation of the properties of atoms and molecules with many electrons:
1. The hydrogen-like atomic orbitals are derived from the exact solutions of the Schrödinger equation for one electron and a nucleus, for a hydrogen-like atom. The part of the function that depends on the distance r from the nucleus has radial nodes and decays as e^(−constant × distance).
2. The Slater-type orbital (STO) is a form without radial nodes but decays from the nucleus as does the hydrogen-like orbital.
3. The Gaussian type orbital (Gaussians) has no radial nodes and decays as e^(−constant × distance²).
Although hydrogen-like orbitals are still used as pedagogical tools, the advent of computers has made STOs preferable for atoms and diatomic molecules since combinations of STOs can reproduce the nodes of the hydrogen-like atomic orbitals. Gaussians are typically used in molecules with three or more atoms. Although not as accurate by themselves as STOs, combinations of many Gaussians can attain the accuracy of hydrogen-like orbitals.
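To make the difference in radial behavior concrete, the short sketch below compares an exponential (hydrogen-like or Slater-type) falloff with a single Gaussian at a few radii; the Gaussian exponent is an assumed value chosen only to make the two curves roughly comparable at intermediate distances (atomic units throughout).

```python
import numpy as np

# Illustrative comparison of radial decay: a Slater-type exp(-r) versus a single
# Gaussian exp(-alpha r^2) with an assumed exponent alpha = 0.27.
r = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
slater = np.exp(-r)               # correct long-range tail and cusp behavior at r = 0
gauss = np.exp(-0.27 * r**2)      # no cusp, and it dies off much faster at large r

for ri, s, g in zip(r, slater, gauss):
    print(f"r = {ri:4.1f}   exp(-r) = {s:.4f}   exp(-0.27 r^2) = {g:.6f}")
```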
Main article: Atomic theory The term "orbital" was coined by Robert Mulliken in 1932 as an abbreviation for one-electron orbital wave function.[10] However, the idea that electrons might revolve around a compact nucleus with definite angular momentum was convincingly argued at least 19 years earlier by Niels Bohr,[11] and the Japanese physicist Hantaro Nagaoka published an orbit-based hypothesis for electronic behavior as early as 1904.[12] Explaining the behavior of these electron "orbits" was one of the driving forces behind the development of quantum mechanics.[13] Early models With J. J. Thomson's discovery of the electron in 1897,[14] it became clear that atoms were not the smallest building blocks of nature, but were rather composite particles. The newly discovered structure within atoms tempted many to imagine how the atom's constituent parts might interact with each other. Thomson theorized that multiple electrons revolved in orbit-like rings within a positively charged jelly-like substance,[15] and between the electron's discovery and 1909, this "plum pudding model" was the most widely accepted explanation of atomic structure. Shortly after Thomson's discovery, Hantaro Nagaoka predicted a different model for electronic structure.[12] Unlike the plum pudding model, the positive charge in Nagaoka's "Saturnian Model" was concentrated into a central core, pulling the electrons into circular orbits reminiscent of Saturn's rings. Few people took notice of Nagaoka's work at the time,[16] and Nagaoka himself recognized a fundamental defect in the theory even at its conception, namely that a classical charged object cannot sustain orbital motion because it is accelerating and therefore loses energy due to electromagnetic radiation.[17] Nevertheless, the Saturnian model turned out to have more in common with modern theory than any of its contemporaries. Bohr atom In 1909, Ernest Rutherford discovered that the bulk of the atomic mass was tightly condensed into a nucleus, which was also found to be positively charged. It became clear from his analysis in 1911 that the plum pudding model could not explain atomic structure. In 1913, Rutherford's post-doctoral student, Niels Bohr, proposed a new model of the atom, wherein electrons orbited the nucleus with classical periods, but were only permitted to have discrete values of angular momentum, quantized in units h/2π.[11] This constraint automatically permitted only certain values of electron energies. The Bohr model of the atom fixed the problem of energy loss from radiation from a ground state (by declaring that there was no state below this), and more importantly explained the origin of spectral lines. The Rutherford–Bohr model of the hydrogen atom. The Rutherford–Bohr model of the hydrogen atom. After Bohr's use of Einstein's explanation of the photoelectric effect to relate energy levels in atoms with the wavelength of emitted light, the connection between the structure of electrons in atoms and the emission and absorption spectra of atoms became an increasingly useful tool in the understanding of electrons in atoms. The most prominent feature of emission and absorption spectra (known experimentally since the middle of the 19th century), was that these atomic spectra contained discrete lines. The significance of the Bohr model was that it related the lines in emission and absorption spectra to the energy differences between the orbits that electrons could take around an atom. 
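The arithmetic behind "spectral lines from energy differences" is easy to reproduce. The sketch below evaluates the Bohr-model energies Eₙ = −13.6 eV/n² and the wavelength of the n = 3 → 2 transition, which lands on the red Balmer line near 656 nm; the constants used are standard values.

```python
# Bohr-model energy levels and the wavelength of the n = 3 -> n = 2 transition.
h = 6.626e-34       # Planck's constant, J s
c = 2.998e8         # speed of light, m/s
Ry = 13.606         # Rydberg energy, eV

def bohr_energy(n):
    return -Ry / n**2          # energy of level n, in eV

dE = (bohr_energy(3) - bohr_energy(2)) * 1.602e-19   # energy of the emitted photon, J
wavelength = h * c / dE
print(f"E_2 = {bohr_energy(2):.2f} eV, E_3 = {bohr_energy(3):.2f} eV")
print(f"Balmer-alpha wavelength ~ {wavelength*1e9:.0f} nm")   # ~656 nm
```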
This was, however, not achieved by Bohr through giving the electrons some kind of wave-like properties, since the idea that electrons could behave as matter waves was not suggested until eleven years later. Still, the Bohr model's use of quantized angular momenta and therefore quantized energy levels was a significant step towards the understanding of electrons in atoms, and also a significant step towards the development of quantum mechanics in suggesting that quantized restraints must account for all discontinuous energy levels and spectra in atoms. With de Broglie's suggestion of the existence of electron matter waves in 1924, and for a short time before the full 1926 Schrödinger equation treatment of hydrogen-like atoms, a Bohr electron "wavelength" could be seen to be a function of its momentum, and thus a Bohr orbiting electron was seen to orbit in a circle at a multiple of its half-wavelength.[a] The Bohr model for a short time could be seen as a classical model with an additional constraint provided by the 'wavelength' argument. However, this period was immediately superseded by the full three-dimensional wave mechanics of 1926. In our current understanding of physics, the Bohr model is called a semi-classical model because of its quantization of angular momentum, not primarily because of its relationship with electron wavelength, which appeared in hindsight a dozen years after the Bohr model was proposed. The Bohr model was able to explain the emission and absorption spectra of hydrogen. The energies of electrons in the n = 1, 2, 3, etc. states in the Bohr model match those of current physics. However, this did not explain similarities between different atoms, as expressed by the periodic table, such as the fact that helium (two electrons), neon (10 electrons), and argon (18 electrons) exhibit similar chemical inertness. Modern quantum mechanics explains this in terms of electron shells and subshells which can each hold a number of electrons determined by the Pauli exclusion principle. Thus the n = 1 state can hold one or two electrons, while the n = 2 state can hold up to eight electrons in 2s and 2p subshells. In helium, all n = 1 states are fully occupied; the same is true for n = 1 and n = 2 in neon. In argon, the 3s and 3p subshells are similarly fully occupied by eight electrons; quantum mechanics also allows a 3d subshell but this is at higher energy than the 3s and 3p in argon (contrary to the situation in the hydrogen atom) and remains empty. Modern conceptions and connections to the Heisenberg uncertainty principle Immediately after Heisenberg discovered his uncertainty principle,[18] Bohr noted that the existence of any sort of wave packet implies uncertainty in the wave frequency and wavelength, since a spread of frequencies is needed to create the packet itself.[19] In quantum mechanics, where all particle momenta are associated with waves, it is the formation of such a wave packet which localizes the wave, and thus the particle, in space. In states where a quantum mechanical particle is bound, it must be localized as a wave packet, and the existence of the packet and its minimum size implies a spread and minimal value in particle wavelength, and thus also momentum and energy. In quantum mechanics, as a particle is localized to a smaller region in space, the associated compressed wave packet requires a larger and larger range of momenta, and thus larger kinetic energy. 
Thus the binding energy to contain or trap a particle in a smaller region of space increases without bound as the region of space grows smaller. Particles cannot be restricted to a geometric point in space, since this would require an infinite particle momentum. In chemistry, Schrödinger, Pauling, Mulliken and others noted that the consequence of Heisenberg's relation was that the electron, as a wave packet, could not be considered to have an exact location in its orbital. Max Born suggested that the electron's position needed to be described by a probability distribution which was connected with finding the electron at some point in the wave-function which described its associated wave packet. The new quantum mechanics did not give exact results, but only the probabilities for the occurrence of a variety of possible such results. Heisenberg held that the path of a moving particle has no meaning if we cannot observe it, as we cannot with electrons in an atom. In the quantum picture of Heisenberg, Schrödinger and others, the Bohr atom number n for each orbital became known as an n-sphere in a three-dimensional atom and was pictured as the most probable energy of the probability cloud of the electron's wave packet which surrounded the atom. Orbital names Orbital notation and subshells Orbitals have been given names, which are usually given in the form "X type", where X is the energy level corresponding to the principal quantum number n, and "type" is a lower-case letter denoting the shape or subshell of the orbital, corresponding to the angular momentum quantum number ℓ. For example, the orbital 1s (pronounced as the individual numbers and letters: "'one' 'ess'") is the lowest energy level (n = 1) and has an angular quantum number of ℓ = 0, denoted as s. Orbitals with ℓ = 1, 2 and 3 are denoted as p, d and f respectively. The set of orbitals for a given n and ℓ is called a subshell, denoted "X type^y". The exponent y shows the number of electrons in the subshell. For example, the notation 2p⁴ indicates that the 2p subshell of an atom contains 4 electrons. This subshell has 3 orbitals, each with n = 2 and ℓ = 1. X-ray notation Main article: X-ray notation There is also another, less common system still used in X-ray science known as X-ray notation, which is a continuation of the notations used before orbital theory was well understood. In this system, the principal quantum number is given a letter associated with it. For n = 1, 2, 3, 4, 5, ..., the letters associated with those numbers are K, L, M, N, O, ... respectively. Hydrogen-like orbitals Main article: Hydrogen-like atom The simplest atomic orbitals are those that are calculated for systems with a single electron, such as the hydrogen atom. An atom of any other element ionized down to a single electron is very similar to hydrogen, and the orbitals take the same form. In the Schrödinger equation for this system of one negative and one positive particle, the atomic orbitals are the eigenstates of the Hamiltonian operator for the energy. They can be obtained analytically, meaning that the resulting orbitals are products of a polynomial series, and exponential and trigonometric functions (see hydrogen atom). For atoms with two or more electrons, the governing equations can only be solved with the use of methods of iterative approximation. Orbitals of multi-electron atoms are qualitatively similar to those of hydrogen, and in the simplest models, they are taken to have the same form.
For more rigorous and precise analysis, numerical approximations must be used. A given (hydrogen-like) atomic orbital is identified by unique values of three quantum numbers: n, , and m. The rules restricting the values of the quantum numbers, and their energies (see below), explain the electron configuration of the atoms and the periodic table. The stationary states (quantum states) of the hydrogen-like atoms are its atomic orbitals.[clarification needed] However, in general, an electron's behavior is not fully described by a single orbital. Electron states are best represented by time-depending "mixtures" (linear combinations) of multiple orbitals. See Linear combination of atomic orbitals molecular orbital method. The quantum number n first appeared in the Bohr model where it determines the radius of each circular electron orbit. In modern quantum mechanics however, n determines the mean distance of the electron from the nucleus; all electrons with the same value of n lie at the same average distance. For this reason, orbitals with the same value of n are said to comprise a "shell". Orbitals with the same value of n and also the same value of  are even more closely related, and are said to comprise a "subshell". Quantum numbers Main article: Quantum number Because of the quantum mechanical nature of the electrons around a nucleus, atomic orbitals can be uniquely defined by a set of integers known as quantum numbers. These quantum numbers only occur in certain combinations of values, and their physical interpretation changes depending on whether real or complex versions of the atomic orbitals are employed. Complex orbitals Energetic levels and sublevels of polyelectronic atoms. Energetic levels and sublevels of polyelectronic atoms. In physics, the most common orbital descriptions are based on the solutions to the hydrogen atom, where orbitals are given by the product between a radial function and a pure spherical harmonic. The quantum numbers, together with the rules governing their possible values, are as follows: The principal quantum number n describes the energy of the electron and is always a positive integer. In fact, it can be any positive integer, but for reasons discussed below, large numbers are seldom encountered. Each atom has, in general, many orbitals associated with each value of n; these orbitals together are sometimes called electron shells. The azimuthal quantum number describes the orbital angular momentum of each electron and is a non-negative integer. Within a shell where n is some integer n0, ranges across all (integer) values satisfying the relation . For instance, the n = 1 shell has only orbitals with , and the n = 2 shell has only orbitals with , and . The set of orbitals associated with a particular value of  are sometimes collectively called a subshell. The magnetic quantum number, , describes the magnetic moment of an electron in an arbitrary direction, and is also always an integer. Within a subshell where is some integer , ranges thus: . The above results may be summarized in the following table. Each cell represents a subshell, and lists the values of available in that subshell. Empty cells represent subshells that do not exist. = 0 (s) = 1 (p) = 2 (d) = 3 (f) = 4 (g) ... n = 1 ... n = 2 0 −1, 0, 1 ... n = 3 0 −1, 0, 1 −2, −1, 0, 1, 2 ... n = 4 0 −1, 0, 1 −2, −1, 0, 1, 2 −3, −2, −1, 0, 1, 2, 3 ... n = 5 0 −1, 0, 1 −2, −1, 0, 1, 2 −3, −2, −1, 0, 1, 2, 3 −4, −3, −2, −1, 0, 1, 2, 3, 4 ... Subshells are usually identified by their - and -values. 
n is represented by its numerical value, but ℓ is represented by a letter as follows: 0 is represented by 's', 1 by 'p', 2 by 'd', 3 by 'f', and 4 by 'g'. For instance, one may speak of the subshell with n = 2 and ℓ = 0 as a '2s subshell'. Each electron also has a spin quantum number, s, which describes the spin of each electron (spin up or spin down). The number s can be +1/2 or −1/2. The Pauli exclusion principle states that no two electrons in an atom can have the same values of all four quantum numbers. If there are two electrons in an orbital with given values for three quantum numbers, (n, ℓ, m), these two electrons must differ in their spin. The above conventions imply a preferred axis (for example, the z direction in Cartesian coordinates), and they also imply a preferred direction along this preferred axis. Otherwise there would be no sense in distinguishing m = +1 from m = −1. As such, the model is most useful when applied to physical systems that share these symmetries. The Stern–Gerlach experiment, where an atom is exposed to a magnetic field, provides one such example.[20]

Real orbitals

Animation of continuously varying superpositions between the p1 and the px orbitals. Note that this animation does not utilize the Condon–Shortley phase convention.

In addition to the complex orbitals described above, it is common, especially in the chemistry literature, to utilize real atomic orbitals. These real orbitals arise from simple linear combinations of the complex orbitals. Using the Condon–Shortley phase convention, the real atomic orbitals are related to the complex atomic orbitals in the same way that the real spherical harmonics are related to the complex spherical harmonics. Letting ψn,ℓ,m denote a complex atomic orbital with quantum numbers n, ℓ, and m, the real atomic orbitals are defined by the corresponding real combinations of the +m and −m orbitals.[21] If the radial part of the orbital is factored out, this definition is equivalent to replacing the complex spherical harmonic by the real spherical harmonic built from either the real or the imaginary part of the complex spherical harmonic. Real spherical harmonics are physically relevant when an atom is embedded in a crystalline solid, in which case there are multiple preferred symmetry axes but no single preferred direction[citation needed]. Real atomic orbitals are also more frequently encountered in introductory chemistry textbooks and shown in common orbital visualizations.[22] In the real hydrogen-like orbitals, the quantum numbers n and ℓ have the same interpretation and significance as their complex counterparts, but m is no longer a good quantum number (though its absolute value is). Some real atomic orbitals are given specific names beyond the simple designation described above. With this it is already possible to assign names to the complex orbitals, where the first symbol is the n quantum number, the second symbol is the letter for the ℓ quantum number, and the subscript is the m quantum number. As an example of how the full orbital names are generated for real orbitals, one works through the real combinations obtained from the table of spherical harmonics; in each case a Cartesian label for the orbital is generated by examining, and abbreviating, the polynomial in x, y, and z appearing in the numerator.
We ignore any terms in the polynomial except for the term with the highest exponent. We then use the abbreviated polynomial as a subscript label for the atomic state, using the same nomenclature as above to indicate the n and ℓ quantum numbers. Note that the expressions above all use the Condon–Shortley phase convention, which is favored by quantum physicists.[23][24] Other conventions for the phase of the spherical harmonics exist.[25][26] Under these different conventions the px and py orbitals may appear, for example, as the sum and difference of p+1 and p−1, contrary to what is shown above. Below is a tabulation of these Cartesian polynomial names for the atomic orbitals.[27][28] Note that there does not seem to be a reference in the literature on how to abbreviate the lengthy Cartesian spherical harmonic polynomials of higher order, so there does not seem to be consensus on the naming of such orbitals according to this nomenclature.

[Table of Cartesian polynomial names for the orbitals, with columns m = −3, −2, −1, 0, +1, +2, +3; the table body is not preserved here.]

Shapes of orbitals

Transparent cloud view of a computed 6s (n = 6, ℓ = 0, m = 0) hydrogen atom orbital. The s orbitals, though spherically symmetrical, have radially placed wave-nodes for n > 1. Only s orbitals invariably have a center anti-node; the other types never do.

Simple pictures showing orbital shapes are intended to describe the angular forms of regions in space where the electrons occupying the orbital are likely to be found. The diagrams cannot show the entire region where an electron can be found, since according to quantum mechanics there is a non-zero probability of finding the electron (almost) anywhere in space. Instead the diagrams are approximate representations of boundary or contour surfaces where the probability density | ψ(r, θ, φ) |2 has a constant value, chosen so that there is a certain probability (for example 90%) of finding the electron within the contour. Although | ψ |2 as the square of an absolute value is everywhere non-negative, the sign of the wave function ψ(r, θ, φ) is often indicated in each subregion of the orbital picture. Sometimes the ψ function will be graphed to show its phases, rather than | ψ(r, θ, φ) |2, which shows probability density but has no phases (these have been lost in the process of taking the absolute value, since ψ(r, θ, φ) is a complex number). |ψ(r, θ, φ)|2 orbital graphs tend to have less spherical, thinner lobes than ψ(r, θ, φ) graphs, but have the same number of lobes in the same places, and otherwise are recognizable. This article, in order to show wave function phases, shows mostly ψ(r, θ, φ) graphs.

The lobes can be viewed as standing wave interference patterns between the two counter-rotating, ring-resonant travelling wave "m" and "−m" modes, with the projection of the orbital onto the xy plane having m resonant wavelengths around the circumference. Though rarely depicted, the travelling wave solutions can be viewed as rotating banded tori, with the bands representing phase information. For each m there are two standing wave solutions, |m⟩ + |−m⟩ and |m⟩ − |−m⟩. For the case where m = 0 the orbital is vertical, counter-rotating information is unknown, and the orbital is z-axis symmetric. For the case where ℓ = 0 there are no counter-rotating modes; there are only radial modes and the shape is spherically symmetric.
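These |m⟩ ± |−m⟩ standing-wave combinations are exactly how the real px and py orbitals arise from the complex m = ±1 orbitals. The sketch below uses the explicit ℓ = 1 spherical harmonics with the Condon–Shortley phase (standard closed forms; none of this code comes from the article) to confirm that the combinations are real and have lobes along the x and y axes.

```python
import numpy as np

def Y1(m, theta, phi):
    """Complex l = 1 spherical harmonics with the Condon-Shortley phase (standard closed forms)."""
    if m == +1:
        return -np.sqrt(3.0 / (8.0 * np.pi)) * np.sin(theta) * np.exp(1j * phi)
    if m == -1:
        return  np.sqrt(3.0 / (8.0 * np.pi)) * np.sin(theta) * np.exp(-1j * phi)
    if m == 0:
        return  np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(theta) * np.ones_like(phi)
    raise ValueError("m must be -1, 0 or +1")

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, np.pi, 5)
phi = rng.uniform(0.0, 2.0 * np.pi, 5)

# Standing-wave combinations of the counter-rotating m = +1 and m = -1 orbitals:
p_x = (Y1(-1, theta, phi) - Y1(+1, theta, phi)) / np.sqrt(2.0)        # lobes along x
p_y = 1j * (Y1(-1, theta, phi) + Y1(+1, theta, phi)) / np.sqrt(2.0)   # lobes along y

# They are real and equal to the real spherical harmonics sqrt(3/4pi) sin(theta) {cos, sin}(phi):
assert np.allclose(p_x.imag, 0.0) and np.allclose(p_y.imag, 0.0)
assert np.allclose(p_x.real, np.sqrt(3.0 / (4.0 * np.pi)) * np.sin(theta) * np.cos(phi))
assert np.allclose(p_y.real, np.sqrt(3.0 / (4.0 * np.pi)) * np.sin(theta) * np.sin(phi))
print("p_x and p_y recovered as standing-wave combinations of the m = +1 and m = -1 orbitals")
```

Under a different phase convention the same real orbitals are obtained from sign-flipped or swapped combinations, which is the convention issue mentioned above.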
For any given n, the smaller is, the more radial nodes there are. For any given , the smaller n is, the fewer radial nodes there are (zero for whichever n first has that orbital). Loosely speaking n is energy, is analogous to eccentricity, and m is orientation. In the classical case, a ring resonant travelling wave, for example in a circular transmission line, unless actively forced, will spontaneously decay into a ring resonant standing wave because reflections will build up over time at even the smallest imperfection or discontinuity. Generally speaking, the number n determines the size and energy of the orbital for a given nucleus: as n increases, the size of the orbital increases. When comparing different elements, the higher nuclear charge Z of heavier elements causes their orbitals to contract by comparison to lighter ones, so that the overall size of the whole atom remains very roughly constant, even as the number of electrons in heavier elements (higher Z) increases. Experimentally imaged 1s and 2p core-electron orbitals of Sr, including the effects of atomic thermal vibrations and excitation broadening, retrieved from energy dispersive x-ray spectroscopy (EDX) in scanning transmission electron microscopy (STEM).[29] Also in general terms, determines an orbital's shape, and m its orientation. However, since some orbitals are described by equations in complex numbers, the shape sometimes depends on m also. Together, the whole set of orbitals for a given and n fill space as symmetrically as possible, though with increasingly complex sets of lobes and nodes. The single s-orbitals () are shaped like spheres. For n = 1 it is roughly a solid ball (it is most dense at the center and fades exponentially outwardly), but for n = 2 or more, each single s-orbital is composed of spherically symmetric surfaces which are nested shells (i.e., the "wave-structure" is radial, following a sinusoidal radial component as well). See illustration of a cross-section of these nested shells, at right. The s-orbitals for all n numbers are the only orbitals with an anti-node (a region of high wave function density) at the center of the nucleus. All other orbitals (p, d, f, etc.) have angular momentum, and thus avoid the nucleus (having a wave node at the nucleus). Recently, there has been an effort to experimentally image the 1s and 2p orbitals in a SrTiO3 crystal using scanning transmission electron microscopy with energy dispersive x-ray spectroscopy.[29] Because the imaging was conducted using an electron beam, Coulombic beam-orbital interaction that is often termed as the impact parameter effect is included in the final outcome (see the figure at right). The shapes of p, d and f-orbitals are described verbally here and shown graphically in the Orbitals table below. The three p-orbitals for n = 2 have the form of two ellipsoids with a point of tangency at the nucleus (the two-lobed shape is sometimes referred to as a "dumbbell"—there are two lobes pointing in opposite directions from each other). The three p-orbitals in each shell are oriented at right angles to each other, as determined by their respective linear combination of values of m. The overall result is a lobe pointing along each direction of the primary axes. Four of the five d-orbitals for n = 3 look similar, each with four pear-shaped lobes, each lobe tangent at right angles to two others, and the centers of all four lying in one plane. 
Three of these planes are the xy-, xz-, and yz-planes—the lobes are between the pairs of primary axes—and the fourth has the centre along the x and y axes themselves. The fifth and final d-orbital consists of three regions of high probability density: a torus in between two pear-shaped regions placed symmetrically on its z axis. The overall total of 18 directional lobes point in every primary axis direction and between every pair. There are seven f-orbitals, each with shapes more complex than those of the d-orbitals. Additionally, as is the case with the s orbitals, individual p, d, f and g orbitals with n values higher than the lowest possible value, exhibit an additional radial node structure which is reminiscent of harmonic waves of the same type, as compared with the lowest (or fundamental) mode of the wave. As with s orbitals, this phenomenon provides p, d, f, and g orbitals at the next higher possible value of n (for example, 3p orbitals vs. the fundamental 2p), an additional node in each lobe. Still higher values of n further increase the number of radial nodes, for each type of orbital. The shapes of atomic orbitals in one-electron atom are related to 3-dimensional spherical harmonics. These shapes are not unique, and any linear combination is valid, like a transformation to cubic harmonics, in fact it is possible to generate sets where all the d's are the same shape, just like the px, py, and pz are the same shape.[30][31] The 1s, 2s, and 2p orbitals of a sodium atom. Although individual orbitals are most often shown independent of each other, the orbitals coexist around the nucleus at the same time. Also, in 1927, Albrecht Unsöld proved that if one sums the electron density of all orbitals of a particular azimuthal quantum number of the same shell n (e.g. all three 2p orbitals, or all five 3d orbitals) where each orbital is occupied by an electron or each is occupied by an electron pair, then all angular dependence disappears; that is, the resulting total density of all the atomic orbitals in that subshell (those with the same ) is spherical. This is known as Unsöld's theorem. Orbitals table This table shows all orbital configurations for the real hydrogen-like wave functions up to 7s, and therefore covers the simple electronic configuration for all elements in the periodic table up to radium. "ψ" graphs are shown with and + wave function phases shown in two different colors (arbitrarily red and blue). The pz orbital is the same as the p0 orbital, but the px and py are formed by taking linear combinations of the p+1 and p−1 orbitals (which is why they are listed under the m = ±1 label). Also, the p+1 and p−1 are not the same shape as the p0, since they are pure spherical harmonics. s ( = 0) p ( = 1) d ( = 2) f ( = 3) m = 0 m = 0 m = ±1 m = 0 m = ±1 m = ±2 m = 0 m = ±1 m = ±2 m = ±3 s pz px py dz2 dxz dyz dxy dx2y2 fz3 fxz2 fyz2 fxyz fz(x2y2) fx(x2−3y2) fy(3x2y2) n = 1 n = 2 Px orbital.png Py orbital.png n = 3 Dxz orbital.png Dyz orbital.png Dxy orbital.png Dx2-y2 orbital.png n = 4 Fxz2 orbital.png Fyz2 orbital.png Fxyz orbital.png Fz(x2-y2) orbital.png Fx(x2-3y2) orbital.png Fy(3x2-y2) orbital.png n = 5 n = 6 n = 7 * No elements with this magnetic quantum number have been discovered yet. These are the real-valued orbitals commonly used in chemistry. Only the orbitals where are eigenstates of the orbital angular momentum operator, . The columns with are contain combinations of two eigenstates. 
See comparison in the following picture: Atomic orbitals spdf m-eigenstates and superpositions Atomic orbitals spdf m-eigenstates and superpositions Qualitative understanding of shapes The shapes of atomic orbitals can be qualitatively understood by considering the analogous case of standing waves on a circular drum.[32] To see the analogy, the mean vibrational displacement of each bit of drum membrane from the equilibrium point over many cycles (a measure of average drum membrane velocity and momentum at that point) must be considered relative to that point's distance from the center of the drum head. If this displacement is taken as being analogous to the probability of finding an electron at a given distance from the nucleus, then it will be seen that the many modes of the vibrating disk form patterns that trace the various shapes of atomic orbitals. The basic reason for this correspondence lies in the fact that the distribution of kinetic energy and momentum in a matter-wave is predictive of where the particle associated with the wave will be. That is, the probability of finding an electron at a given place is also a function of the electron's average momentum at that point, since high electron momentum at a given position tends to "localize" the electron in that position, via the properties of electron wave-packets (see the Heisenberg uncertainty principle for details of the mechanism). This relationship means that certain key features can be observed in both drum membrane modes and atomic orbitals. For example, in all of the modes analogous to s orbitals (the top row in the animated illustration below), it can be seen that the very center of the drum membrane vibrates most strongly, corresponding to the antinode in all s orbitals in an atom. This antinode means the electron is most likely to be at the physical position of the nucleus (which it passes straight through without scattering or striking it), since it is moving (on average) most rapidly at that point, giving it maximal momentum. A mental "planetary orbit" picture closest to the behavior of electrons in s orbitals, all of which have no angular momentum, might perhaps be that of a Keplerian orbit with the orbital eccentricity of 1 but a finite major axis, not physically possible (because particles were to collide), but can be imagined as a limit of orbits with equal major axes but increasing eccentricity. Below, a number of drum membrane vibration modes and the respective wave functions of the hydrogen atom are shown. A correspondence can be considered where the wave functions of a vibrating drum head are for a two-coordinate system ψ(r, θ) and the wave functions for a vibrating sphere are three-coordinate ψ(r, θ, φ). None of the other sets of modes in a drum membrane have a central antinode, and in all of them the center of the drum does not move. These correspond to a node at the nucleus for all non-s orbitals in an atom. These orbitals all have some angular momentum, and in the planetary model, they correspond to particles in orbit with eccentricity less than 1.0, so that they do not pass straight through the center of the primary body, but keep somewhat away from it. In addition, the drum modes analogous to p and d modes in an atom show spatial irregularity along the different radial directions from the center of the drum, whereas all of the modes analogous to s modes are perfectly symmetrical in radial direction. 
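The radial structure appearing in this analogy can be quantified: a hydrogen-like radial function R_nℓ has exactly n − ℓ − 1 radial nodes. The check below (a generic sketch, not code from the article) builds the standard analytic radial functions from generalized Laguerre polynomials, in Bohr-radius units with Z = 1, and counts sign changes; the normalization constant is irrelevant for node counting and is omitted.

```python
import numpy as np
from scipy.special import eval_genlaguerre

def radial_nodes(n, l, r_max=200.0, num=200_000):
    """Count radial nodes of the hydrogen-like R_{n,l} by counting sign changes (a0 = 1, Z = 1)."""
    r = np.linspace(1e-6, r_max, num)
    rho = 2.0 * r / n
    # Unnormalized radial function: rho^l * exp(-rho/2) * L_{n-l-1}^{2l+1}(rho)
    R = rho**l * np.exp(-rho / 2.0) * eval_genlaguerre(n - l - 1, 2 * l + 1, rho)
    return int(np.sum(np.sign(R[:-1]) * np.sign(R[1:]) < 0))

for n in range(1, 6):
    for l in range(n):
        nodes = radial_nodes(n, l)
        assert nodes == n - l - 1
        print(f"n={n}, l={l}: {nodes} radial node(s) (expected {n - l - 1})")
```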
The non radial-symmetry properties of non-s orbitals are necessary to localize a particle with angular momentum and a wave nature in an orbital where it must tend to stay away from the central attraction force, since any particle localized at the point of central attraction could have no angular momentum. For these modes, waves in the drum head tend to avoid the central point. Such features again emphasize that the shapes of atomic orbitals are a direct consequence of the wave nature of electrons. Orbital energy Main article: Electron shell In atoms with a single electron (hydrogen-like atoms), the energy of an orbital (and, consequently, of any electrons in the orbital) is determined mainly by . The orbital has the lowest possible energy in the atom. Each successively higher value of has a higher level of energy, but the difference decreases as increases. For high , the level of energy becomes so high that the electron can easily escape from the atom. In single electron atoms, all levels with different within a given are degenerate in the Schrödinger approximation, and have the same energy. This approximation is broken to a slight extent in the solution to the Dirac equation (where the energy depends on n and another quantum number j), and by the effect of the magnetic field of the nucleus and quantum electrodynamics effects. The latter induce tiny binding energy differences especially for s electrons that go nearer the nucleus, since these feel a very slightly different nuclear charge, even in one-electron atoms; see Lamb shift. In atoms with multiple electrons, the energy of an electron depends not only on the intrinsic properties of its orbital, but also on its interactions with the other electrons. These interactions depend on the detail of its spatial probability distribution, and so the energy levels of orbitals depend not only on but also on . Higher values of are associated with higher values of energy; for instance, the 2p state is higher than the 2s state. When , the increase in energy of the orbital becomes so large as to push the energy of orbital above the energy of the s-orbital in the next higher shell; when the energy is pushed into the shell two steps higher. The filling of the 3d orbitals does not occur until the 4s orbitals have been filled. The increase in energy for subshells of increasing angular momentum in larger atoms is due to electron–electron interaction effects, and it is specifically related to the ability of low angular momentum electrons to penetrate more effectively toward the nucleus, where they are subject to less screening from the charge of intervening electrons. Thus, in atoms of higher atomic number, the of electrons becomes more and more of a determining factor in their energy, and the principal quantum numbers of electrons becomes less and less important in their energy placement. The energy sequence of the first 35 subshells (e.g., 1s, 2p, 3d, etc.) is given in the following table. Each cell represents a subshell with and given by its row and column indices, respectively. The number in the cell is the subshell's position in the sequence. For a linear listing of the subshells in terms of increasing energies in multielectron atoms, see the section below. 
 n    s    p    d    f    g    h
 1    1
 2    2    3
 3    4    5    7
 4    6    8   10   13
 5    9   11   14   17   21
 6   12   15   18   22   26   31
 7   16   19   23   27   32   37
 8   20   24   28   33   38   44
 9   25   29   34   39   45   51
10   30   35   40   46   52   59

Note: empty cells indicate non-existent sublevels, while numbers in italics indicate sublevels that could (potentially) exist, but which do not hold electrons in any element currently known.

Electron placement and the periodic table

Electron atomic and molecular orbitals. The chart of orbitals (left) is arranged by increasing energy (see Madelung rule). Note that atomic orbitals are functions of three variables (two angles, and the distance r from the nucleus). These images are faithful to the angular component of the orbital, but not entirely representative of the orbital as a whole.

Atomic orbitals and periodic table construction
Main articles: Electron configuration and Electron shell

Several rules govern the placement of electrons in orbitals (electron configuration). The first dictates that no two electrons in an atom may have the same set of values of quantum numbers (this is the Pauli exclusion principle). These quantum numbers include the three that define orbitals, as well as s, the spin quantum number. Thus, two electrons may occupy a single orbital, so long as they have different values of s. However, only two electrons, because of their spin, can be associated with each orbital. Additionally, an electron always tends to fall to the lowest possible energy state. It is possible for it to occupy any orbital so long as it does not violate the Pauli exclusion principle, but if lower-energy orbitals are available, this condition is unstable. The electron will eventually lose energy (by releasing a photon) and drop into the lower orbital. Thus, electrons fill orbitals in the order specified by the energy sequence given above. This behavior is responsible for the structure of the periodic table. The table may be divided into several rows (called 'periods'), numbered starting with 1 at the top. The presently known elements occupy seven periods. If a certain period has number i, it consists of elements whose outermost electrons fall in the ith shell. Niels Bohr was the first to propose (1923) that the periodicity in the properties of the elements might be explained by the periodic filling of the electron energy levels, resulting in the electronic structure of the atom.[33] The periodic table may also be divided into several numbered rectangular 'blocks'. The elements belonging to a given block have this common feature: their highest-energy electrons all belong to the same ℓ-state (but the n associated with that ℓ-state depends upon the period). For instance, the leftmost two columns constitute the 's-block'. The outermost electrons of Li and Be respectively belong to the 2s subshell, and those of Na and Mg to the 3s subshell. The following is the order for filling the "subshell" orbitals, which also gives the order of the "blocks" in the periodic table: The "periodic" nature of the filling of orbitals, as well as emergence of the s, p, d, and f "blocks", is more obvious if this order of filling is given in matrix form, with increasing principal quantum numbers starting the new rows ("periods") in the matrix. Then, each subshell (composed of the first two quantum numbers) is repeated as many times as required for each pair of electrons it may contain.
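The filling order announced above (its explicit list is not preserved in this copy) is generated by the Madelung rule: subshells are occupied in order of increasing n + ℓ, and for equal n + ℓ in order of increasing n. The short script below, a generic sketch rather than anything from the article, produces both the linear order and the position numbers shown in the table above; to match those position numbers, subshells with higher n (such as 11s) also have to be counted.

```python
# Madelung (n + l, then n) ordering of subshells, reproducing the position numbers
# in the table above.  All subshells up to N_MAX are included in the count, because
# the table's numbering also counts high-n subshells such as 11s and 12s.
N_MAX = 16
L_LETTERS = "spdfghik"            # spectroscopic letters for l = 0 .. 7

subshells = [(n, l) for n in range(1, N_MAX + 1) for l in range(n)]
subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))    # Madelung / Klechkowski rule

position = {nl: i + 1 for i, nl in enumerate(subshells)}

def label(n, l):
    return f"{n}{L_LETTERS[l]}"

# Linear filling order of the subshells occupied in known elements (1s ... 7p):
order_1s_to_7p = [label(n, l) for (n, l) in subshells if position[(n, l)] <= 19]
print(" -> ".join(order_1s_to_7p))

# Spot-check a few position numbers against the table (3d = 7, 4f = 13, 5g = 21, 6h = 31):
for n, l in [(3, 2), (4, 3), (5, 4), (6, 5)]:
    print(label(n, l), "=", position[(n, l)])
```

The first 19 entries, 1s through 7p, are the subshells actually occupied in the ground states of the known elements; as noted below, the real filling order has exceptions (Cr, Cu, and many heavier elements).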
The result is a compressed periodic table, with each entry representing two successive elements: 2s 2p 2p 2p 3s 3p 3p 3p 4s 3d 3d 3d 3d 3d 4p 4p 4p 5s 4d 4d 4d 4d 4d 5p 5p 5p 6s 4f 4f 4f 4f 4f 4f 4f 5d 5d 5d 5d 5d 6p 6p 6p 7s 5f 5f 5f 5f 5f 5f 5f 6d 6d 6d 6d 6d 7p 7p 7p Although this is the general order of orbital filling according to the Madelung rule, there are exceptions, and the actual electronic energies of each element are also dependent upon additional details of the atoms (see Electron configuration § Atoms: Aufbau principle and Madelung rule). The number of electrons in an electrically neutral atom increases with the atomic number. The electrons in the outermost shell, or valence electrons, tend to be responsible for an element's chemical behavior. Elements that contain the same number of valence electrons can be grouped together and display similar chemical properties. Relativistic effects Main article: Relativistic quantum chemistry See also: Extended periodic table For elements with high atomic number Z, the effects of relativity become more pronounced, and especially so for s electrons, which move at relativistic velocities as they penetrate the screening electrons near the core of high-Z atoms. This relativistic increase in momentum for high speed electrons causes a corresponding decrease in wavelength and contraction of 6s orbitals relative to 5d orbitals (by comparison to corresponding s and d electrons in lighter elements in the same column of the periodic table); this results in 6s valence electrons becoming lowered in energy. Examples of significant physical outcomes of this effect include the lowered melting temperature of mercury (which results from 6s electrons not being available for metal bonding) and the golden color of gold and caesium.[34] In the Bohr Model, an n = 1 electron has a velocity given by , where Z is the atomic number, is the fine-structure constant, and c is the speed of light. In non-relativistic quantum mechanics, therefore, any atom with an atomic number greater than 137 would require its 1s electrons to be traveling faster than the speed of light. Even in the Dirac equation, which accounts for relativistic effects, the wave function of the electron for atoms with is oscillatory and unbounded. The significance of element 137, also known as untriseptium, was first pointed out by the physicist Richard Feynman. Element 137 is sometimes informally called feynmanium (symbol Fy).[35] However, Feynman's approximation fails to predict the exact critical value of Z due to the non-point-charge nature of the nucleus and very small orbital radius of inner electrons, resulting in a potential seen by inner electrons which is effectively less than Z. The critical Z value, which makes the atom unstable with regard to high-field breakdown of the vacuum and production of electron-positron pairs, does not occur until Z is about 173. These conditions are not seen except transiently in collisions of very heavy nuclei such as lead or uranium in accelerators, where such electron-positron production from these effects has been claimed to be observed. There are no nodes in relativistic orbital densities, although individual components of the wave function will have nodes.[36] pp hybridisation (conjectured) In late period-8 elements a hybrid of 8p3/2 and 9p1/2 is expected to exist,[37] where "3/2" and "1/2" refer to the total angular momentum quantum number. 
This "pp" hybrid may be responsible for the p-block of the period due to properties similar to p subshells in ordinary valence shells. Energy levels of 8p3/2 and 9p1/2 come close due to relativistic spin–orbit effects; the 9s subshell should also participate, as these elements are expected to be analogous to the respective 5p elements indium through xenon. Transitions between orbitals Main article: Atomic electron transition Bound quantum states have discrete energy levels. When applied to atomic orbitals, this means that the energy differences between states are also discrete. A transition between these states (i.e., an electron absorbing or emitting a photon) can thus only happen if the photon has an energy corresponding with the exact energy difference between said states. Consider two states of the hydrogen atom: 1. State n = 1, = 0, m = 0 and ms = +1/2 2. State n = 2, = 0, m = 0 and ms = −1/2 By quantum theory, state 1 has a fixed energy of E1, and state 2 has a fixed energy of E2. Now, what would happen if an electron in state 1 were to move to state 2? For this to happen, the electron would need to gain an energy of exactly E2E1. If the electron receives energy that is less than or greater than this value, it cannot jump from state 1 to state 2. Now, suppose we irradiate the atom with a broad-spectrum of light. Photons that reach the atom that have an energy of exactly E2E1 will be absorbed by the electron in state 1, and that electron will jump to state 2. However, photons that are greater or lower in energy cannot be absorbed by the electron, because the electron can only jump to one of the orbitals, it cannot jump to a state between orbitals. The result is that only photons of a specific frequency will be absorbed by the atom. This creates a line in the spectrum, known as an absorption line, which corresponds to the energy difference between states 1 and 2. The atomic orbital model thus predicts line spectra, which are observed experimentally. This is one of the main validations of the atomic orbital model. The atomic orbital model is nevertheless an approximation to the full quantum theory, which only recognizes many electron states. The predictions of line spectra are qualitatively useful but are not quantitatively accurate for atoms and ions other than those containing only one electron. See also 1. ^ This physically incorrect Bohr model is still often taught to beginning students.[citation needed] 1. ^ Orchin, Milton; Macomber, Roger S.; Pinhas, Allan; Wilson, R. Marshall (2005). Atomic Orbital Theory (PDF). 2. ^ Daintith, J. (2004). Oxford Dictionary of Chemistry. New York: Oxford University Press. ISBN 978-0-19-860918-6. 3. ^ Griffiths, David (1995). Introduction to Quantum Mechanics. Prentice Hall. pp. 190–191. ISBN 978-0-13-124405-4. 4. ^ Levine, Ira (2000). Quantum Chemistry (5 ed.). Prentice Hall. pp. 144–145. ISBN 978-0-13-685512-5. 5. ^ Laidler, Keith J.; Meiser, John H. (1982). Physical Chemistry. Benjamin/Cummings. p. 488. ISBN 978-0-8053-5682-3. 6. ^ Atkins, Peter; de Paula, Julio; Friedman, Ronald (2009). Quanta, Matter, and Change: A Molecular Approach to Physical Chemistry. Oxford University Press. p. 106. ISBN 978-0-19-920606-3. 7. ^ Feynman, Richard; Leighton, Robert B.; Sands, Matthew (2006). The Feynman Lectures on Physics – The Definitive Edition, Vol 1 lect 6. Pearson PLC, Addison Wesley. p. 11. ISBN 978-0-8053-9046-9. 8. ^ Roger Penrose, The Road to Reality 9. ^ Levine, Ira N. (1991). Quantum Chemistry (4th ed.). Prentice-Hall. p. 262. ISBN 0-205-12770-3. 
Therefore, the wave function of a system of identical interacting particles must not distinguish among the particles. 10. ^ Mulliken, Robert S. (July 1932). "Electronic Structures of Polyatomic Molecules and Valence. II. General Considerations". Physical Review. 41 (1): 49–71. Bibcode:1932PhRv...41...49M. doi:10.1103/PhysRev.41.49. 11. ^ a b Bohr, Niels (1913). "On the Constitution of Atoms and Molecules". Philosophical Magazine. 26 (1): 476. Bibcode:1914Natur..93..268N. doi:10.1038/093268a0. S2CID 3977652. 12. ^ a b Nagaoka, Hantaro (May 1904). "Kinetics of a System of Particles illustrating the Line and the Band Spectrum and the Phenomena of Radioactivity". Philosophical Magazine. 7 (41): 445–455. doi:10.1080/14786440409463141. Archived from the original on 2017-11-27. Retrieved 2009-05-30. 13. ^ Bryson, Bill (2003). A Short History of Nearly Everything. Broadway Books. pp. 141–143. ISBN 978-0-7679-0818-4. 14. ^ Thomson, J. J. (1897). "Cathode rays". Philosophical Magazine. 44 (269): 293. doi:10.1080/14786449708621070. 15. ^ Thomson, J. J. (1904). "On the Structure of the Atom: an Investigation of the Stability and Periods of Oscillation of a number of Corpuscles arranged at equal intervals around the Circumference of a Circle; with Application of the Results to the Theory of Atomic Structure" (extract of paper). Philosophical Magazine. Series 6. 7 (39): 237–265. doi:10.1080/14786440409463107. 16. ^ Rhodes, Richard (1995). The Making of the Atomic Bomb. Simon & Schuster. pp. 50–51. ISBN 978-0-684-81378-3. 17. ^ Nagaoka, Hantaro (May 1904). "Kinetics of a System of Particles illustrating the Line and the Band Spectrum and the Phenomena of Radioactivity". Philosophical Magazine. 7 (41): 446. doi:10.1080/14786440409463141. Archived from the original on 2017-11-27. Retrieved 2009-05-30. 18. ^ Heisenberg, W. (March 1927). "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik". Zeitschrift für Physik A. 43 (3–4): 172–198. Bibcode:1927ZPhy...43..172H. doi:10.1007/BF01397280. S2CID 122763326. 19. ^ Bohr, Niels (April 1928). "The Quantum Postulate and the Recent Development of Atomic Theory". Nature. 121 (3050): 580–590. Bibcode:1928Natur.121..580B. doi:10.1038/121580a0. 20. ^ Gerlach, W.; Stern, O. (1922). "Das magnetische Moment des Silberatoms". Zeitschrift für Physik. 9 (1): 353–355. Bibcode:1922ZPhy....9..353G. doi:10.1007/BF01326984. S2CID 126109346. 21. ^ Thaller, Bernd (2004). Advanced visual quantum mechanics. New York: Springer/TELOS. ISBN 978-0387207773. 22. ^ General chemistry: principles and modern applications. [Place of publication not identified]: Prentice Hall. 2016. ISBN 0133897311. 23. ^ Messiah, Albert (1999). Quantum mechanics : two volumes bound as one (Two vol. bound as one, unabridged reprint ed.). Mineola, NY: Dover. ISBN 978-0-486-40924-5. 24. ^ Claude Cohen-Tannoudji; Bernard Diu; Franck Laloë; et al. (1996). Quantum mechanics. Translated by from the French by Susan Reid Hemley. Wiley-Interscience. ISBN 978-0-471-56952-7. 25. ^ Levine, Ira (2014). Quantum Chemistry (7th ed.). Pearson Education. pp. 141–2. ISBN 978-0-321-80345-0. 26. ^ Blanco, Miguel A.; Flórez, M.; Bermejo, M. (December 1997). "Evaluation of the rotation matrices in the basis of real spherical harmonics". Journal of Molecular Structure: THEOCHEM. 419 (1–3): 19–27. doi:10.1016/S0166-1280(97)00185-1. 28. ^ Friedman (1964). "The shapes of the f orbitals". J. Chem. Educ. 41 (7): 354. 29. ^ a b Jeong, Jong Seok; Odlyzko, Michael L.; Xu, Peng; Jalan, Bharat; Mkhoyan, K. 
Andre (2016-04-26). "Probing core-electron orbitals by scanning transmission electron microscopy and measuring the delocalization of core-level excitations". Physical Review B. 93 (16): 165140. Bibcode:2016PhRvB..93p5140J. doi:10.1103/PhysRevB.93.165140. 30. ^ Powell, Richard E. (1968). "The five equivalent d orbitals". Journal of Chemical Education. 45 (1): 45. Bibcode:1968JChEd..45...45P. doi:10.1021/ed045p45. 31. ^ Kimball, George E. (1940). "Directed Valence". The Journal of Chemical Physics. 8 (2): 188. Bibcode:1940JChPh...8..188K. doi:10.1063/1.1750628. 32. ^ Cazenave, Lions, T., P.; Lions, P. L. (1982). "Orbital stability of standing waves for some nonlinear Schrödinger equations". Communications in Mathematical Physics. 85 (4): 549–561. Bibcode:1982CMaPh..85..549C. doi:10.1007/BF01403504. S2CID 120472894. 33. ^ Bohr, Niels (1923). "Über die Anwendung der Quantumtheorie auf den Atombau. I". Zeitschrift für Physik. 13 (1): 117. Bibcode:1923ZPhy...13..117B. doi:10.1007/BF01328209. 34. ^ Lower, Stephen. "Primer on Quantum Theory of the Atom". 35. ^ Poliakoff, Martyn; Tang, Samantha (9 February 2015). "The periodic table: icon and inspiration". Philosophical Transactions of the Royal Society A. 373 (2037): 20140211. Bibcode:2015RSPTA.37340211P. doi:10.1098/rsta.2014.0211. PMID 25666072. 36. ^ Szabo, Attila (1969). "Contour diagrams for relativistic orbitals". Journal of Chemical Education. 46 (10): 678. Bibcode:1969JChEd..46..678S. doi:10.1021/ed046p678. 37. ^ Fricke, Burkhard (1975). Superheavy elements: a prediction of their chemical and physical properties. Recent Impact of Physics on Inorganic Chemistry. Structure and Bonding. Vol. 21. pp. 89–144. doi:10.1007/BFb0116498. ISBN 978-3-540-07109-9. Retrieved 4 October 2013.
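Returning to the transition example worked through above (states 1 and 2 of hydrogen): the required photon energy E2 − E1 and the corresponding wavelength follow directly from the hydrogen energy levels. The snippet below is a standard textbook evaluation, not part of the article text, and places the absorption line in the ultraviolet (Lyman-alpha).

```python
# Photon energy and wavelength for the hydrogen n = 1 -> n = 2 transition,
# using the Bohr/Schroedinger levels E_n = -13.605693 eV / n^2.
RYDBERG_EV = 13.605693          # hydrogen ground-state binding energy, eV
H_PLANCK   = 6.62607015e-34     # Planck constant, J*s
C_LIGHT    = 2.99792458e8       # speed of light, m/s
EV         = 1.602176634e-19    # J per eV

def level(n):
    """Energy of the hydrogen level n in eV."""
    return -RYDBERG_EV / n**2

dE = level(2) - level(1)                                  # energy the photon must supply, eV
wavelength_nm = H_PLANCK * C_LIGHT / (dE * EV) * 1e9

print(f"E2 - E1 = {dE:.3f} eV")              # ~10.204 eV
print(f"lambda  = {wavelength_nm:.1f} nm")   # ~121.5 nm (Lyman-alpha)
```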
Dimensional Effects on Densities of States and Interactions in Nanostructures

When we suppress motion of particles in certain directions through confining potentials, e.g. in quantum wells or quantum wires, we often model the residual low energy excitations in the system through low-dimensional quantum mechanical systems. Prominent examples of this concern layered heterostructures, and one instance where the number d of spatial dimensions enters in a manner which is of direct relevance to technology is in the density of states. In the standard parabolic band approximation, this takes the form (with two helicity or spin states) These are densities of states per d-dimensional volume and per unit of energy. The corresponding dependence of the relation between the Fermi energy and the density n of electrons on d is Variants of these equations (including summation over subbands) are often used for d = 2 or d = 1 to estimate carrier densities in quasi two-dimensional systems or nanowires, and the density of states plays a crucial role in all transport and optical properties of materials. Indeed, the obvious relevance for electrical conductivity properties in micro and nanotechnology implies that densities of states for d = 1, 2, or 3 are now commonly discussed in engineering textbooks, but there is another reason why I anticipate that variants of Eq. (1) will become ever more prominent in the technical literature. Densities also play a huge role in data storage, but with us still relying on binary logic switching between two stable states (spin up or down, charge or no charge, conductivity or no conductivity), data storage densities are limited by the physical densities of the systems which provide the dual states. We could (and likely will) drive information technology and integration much further if we can find ways to utilize more than just two states of a physical system to store and process information. Then, data storage densities should become proportional to energy integrals of local densities of states. Equation (1) for d = 1 or d = 2 is certainly applicable for particles which have low energies compared to the confinement energy of a nanowire or a quantum well, but how can we effectively model particles which are weakly confined to a nanowire or quantum well, or which are otherwise affected by the presence of a low-dimensional substructure? In these cases, we can devise dimensionally hybrid models [1, 2] which yield e.g. densities of states which interpolate between d = 2 and d = 3 [3, 4]. This construction will be reviewed in Sect. 2. Based on the experience gained with dimensionally hybrid Hamiltonians for massive particles, we can also construct inter-dimensional Hamiltonians for photons which should be applicable to photons in the presence of high-permittivity thin films or interfaces. These models can also be solved in terms of infinite series expansions using image charges, and the merits of this approach can easily be tested. The case of high-permittivity thin films and testing the theory against image charge solutions will be discussed in Sect. 3.

Dimensionally Hybrid Hamiltonians and Green’s Functions for Massive Particles in the Presence of Thin Films or Interfaces

We use the connection between Green’s functions and the density of states to generalize Eq. (1) for massive particles in the presence of a thin film or interface.
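Equations (1) and (2) themselves are not reproduced in this copy. For a parabolic band with two spin or helicity states, the standard d-dimensional expressions they presumably refer to are ρ_d(E) = [2/Γ(d/2)] (m/2πℏ²)^{d/2} E^{d/2−1} for E > 0 and E_F = (2πℏ²/m) [n Γ(d/2 + 1)/2]^{2/d}. The sketch below evaluates these assumed forms and checks the familiar d = 2 and d = 3 limits.

```python
import numpy as np
from scipy.special import gamma

HBAR = 1.054571817e-34    # J*s
M_E  = 9.1093837015e-31   # kg (free-electron mass; replace by an effective mass as needed)
EV   = 1.602176634e-19    # J per eV

def dos(E_joule, d, m=M_E):
    """Parabolic-band density of states per d-dimensional volume and per energy,
    with a factor 2 for spin (assumed standard form; the article's Eq. (1) is not shown here)."""
    return 2.0 / gamma(d / 2.0) * (m / (2.0 * np.pi * HBAR**2))**(d / 2.0) * E_joule**(d / 2.0 - 1.0)

def fermi_energy(n_density, d, m=M_E):
    """Fermi energy (J) at T = 0 for carrier density n per d-dimensional volume, two spin states."""
    return 2.0 * np.pi * HBAR**2 / m * (n_density * gamma(d / 2.0 + 1.0) / 2.0)**(2.0 / d)

E = 0.1 * EV
print("rho_d(0.1 eV) for d = 1, 2, 3:", [dos(E, d) for d in (1, 2, 3)])
# d = 2 is energy independent and equals m/(pi*hbar^2), the familiar 2D result:
print("2D check:", np.isclose(dos(E, 2), M_E / (np.pi * HBAR**2)))
# d = 3 reproduces (1/2pi^2) (2m/hbar^2)^(3/2) sqrt(E):
print("3D check:", np.isclose(dos(E, 3), (2 * M_E / HBAR**2)**1.5 * np.sqrt(E) / (2 * np.pi**2)))
print("E_F for n = 1e16 m^-2 in d = 2:", fermi_energy(1e16, 2) / EV, "eV")
```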
The energy-dependent Green’s function for a Hamiltonian H with spectrum E n and eigenstates |n, ν〉 is Here, ν is a degeneracy index and the notation implies that continuous components in the indices (n, ν) are integrated. The first equation simply states the relation between the resolvent of the Hamiltonian and the Green’s function G(E) which is normalized as lim m→0,E→0 G(E)| d=3 = (4πr)−1. The zero-energy Green’s function G(0) determines e.g. 2-particle correlation functions and electromagnetic interaction potentials, and the energy-dependent Green’s function G(E) determines e.g. scattering amplitudes for particles of energy E. Application for resistivity calculations is therefore another technologically relevant application of Green’s functions. However, in the present section we are interested in this function because it also determines the local density of states in a system with Hamiltonian H through the relation Here, we explicitly included a factor 2 for the number of spin or helicity states, because the summation over degeneracy indices in (3,4) usually only involves orbital indices. For our present investigation, the distinctive feature of the interface is that the particles move in it with an effective mass m *, while their mass in the surrounding bulk is m. We use coordinates parallel to a plane interface, which is located at z = z 0. Bold vector notation is used for quantities parallel to the interface, e.g. and . We assume that the interface has a thickness L. If the wavenumber component orthogonal to the interface is small compared to the inverse width, |k L| 1, i.e. if the de Broglie wavelength and the incidence angle satisfy λ L|cosϑ|, we can approximate the kinetic energy of the particles through a second quantized Hamiltonian where μ = m */L. The corresponding first quantized Hamiltonian is The interesting aspect of the Hamiltonians (5,6) is the linear superposition of two-dimensional and three-dimensional kinetic terms. The formalism presented here could and will certainly be extended to include also kinetic terms which are linear in derivatives, in particular in the interface term. This would be motivated either by a Rashba term arising from perpendicular fields penetrating the interface [511] of from the dispersion relation in Graphene [1215]. However, for the present investigation we will use a parabolic band approximation in the bulk and in the interface. The energy-dependent Green’s function describes scattering effects in the presence of the interface but also applies to scattering off perturbations which are not located on the interface. In an axially symmetric mixed representation the first order approximation to scattering of an orthogonally incoming plane wave off an impurity potential corresponds to Green’s functions for surfaces or interfaces are commonly parametrized in an axially symmetric mixed representation like . In bra-ket notation, this corresponds for the free Green’s function G 0(E), which is also translation invariant in z direction, to We will briefly recall the explicit form of the free Green’s function G 0(E) in the axially symmetric mixed parametrization for later comparison. The equation To study how this is modified in the presence of the interface, we observe that the Hamiltonians (5) or (6) yield a Schrödinger equation The corresponding equation for the Green’s function or 2-point correlation function is The solution of this equation is described in the Appendix. In particular, we find the representation (see Eq. 
(27)) where the definition ℓ≡m/2μ = Lm/2m * was used. The ℓ-independent terms in (10) correspond to the free Green’s function G 0(E) (8). The interface at z 0 breaks translational invariance in z direction, and we have with Eq. (7) We will use the result (10) to calculate the density of states in the interface. Substitution yields and after evaluation of the integral This is a more complicated result than the density (1) for d = 2 or d = 3. However, it reduces to either the two-dimensional or three-dimensional density of states in the appropriate limits, see Fig. 1. For large energies, i.e. if the states only probe length scales smaller than the transition length scale ℓ, we find the two-dimensional density of states properly rescaled by a dimensional factor to reflect that it is a density of states per three-dimensional volume, Figure 1 figure 1 The red line is the two-dimensional limit (12). The blue line is the three-dimensional density of states. The it black line is the inter-dimensional density of states (11) for ℓ = 50 nm For small energies, i.e. if the states probe length scales larger than ℓ, we find the three-dimensional density of states This limiting behavior for interpolation between two and three dimensions is consistent with what is also observed for the zero-energy Green’s function in the interface, see equations (21–22) below. Equation (11) also implies interpolating behavior for the relation between electron density and Fermi energy on the interface. The full relation is This approximates two-dimensional behavior for , and three-dimensional behavior for , It is intuitively understandable that the presence of a layer reduces the available density of states for given energy, or equivalently increases the Fermi energy for a given density of electrons. The presence of a layer generically implies boundary or matching conditions which reduce the number of available states at a given energy. A condition for relevance of the inter-dimensional behavior is a large transition scale compared to the layer thickness, ℓ L, see also Fig. 2. In terms of effective particle mass, this means i.e. the energy band in the interface should be more strongly curved than in the bulk matrix for the transition to two-dimensional behavior to be observable. Figure 2 figure 2 The upper dotted (blue) line is the three-dimensional Green’s function (4πr)−1 in units of ℓ−1, the continuous line is the Green’s function (19) in units of ℓ−1, and the lower dotted (red) line is the two-dimensional logarithmic Green’s function ℓ·G = − (γ + ln(r/2ℓ))/(4π) Electric Fields in the Presence of High-Permittivity Thin Films or Interfaces The zero-energy Green’s function determines electrostatic and exchange interactions through the electrostatic potential . Here, q is an electric charge in a dielectric material of permittivity . The zero-energy Green’s function in d spatial dimensions is given by We cannot infer from the previous section that the zero energy limit of the inter-dimensional Green’s function calculated there also yields a dimensionally hybrid potential, because we were dealing with solutions of Schrödinger’s equation instead of the Gauss law. However, we can rederive the zero energy limit of that Green’s function from the Gauss law for electromagnetic fields in the presence of a high-permittivity interface. 
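For orientation (the corresponding equation is not reproduced above), with the normalization lim G(E)|d=3 = (4πr)−1 quoted earlier, the zero-energy Green’s function in d > 2 spatial dimensions is presumably the standard Coulomb kernel G_d(r) = Γ(d/2 − 1)/(4 π^{d/2} r^{d−2}), which becomes logarithmic in d = 2. A quick numerical check of the d = 3 normalization:

```python
import numpy as np
from scipy.special import gamma

def greens_zero_energy(r, d):
    """Zero-energy Green's function of the negative Laplacian in d > 2 dimensions,
    normalized so that the d = 3 case is 1/(4*pi*r). (Assumed standard form.)"""
    return gamma(d / 2.0 - 1.0) / (4.0 * np.pi**(d / 2.0) * r**(d - 2.0))

r = 1.7   # arbitrary radius in any convenient length unit
print(np.isclose(greens_zero_energy(r, 3), 1.0 / (4.0 * np.pi * r)))   # True
# In d = 2 the same normalization yields a logarithm instead of a power law, which is
# the origin of the logarithmic interaction inside thin films discussed below.
```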
Suppose we have charge carriers of charge q and mass m in the presence of an interface with permittivity and permeability μ*, We continue to denote vectors parallel to the interface in bold face notation, , , etc. If the photon wavelengths and incidence angles satisfy the condition λ L|cosϑ|, we can approximate the system with an action Variation with respect to the electrostatic potential, , yields the Gauss law in the form and the continuity condition E z (z 0 − 0) = E z (z 0 + 0). We solve Eq. (16) in Coulomb gauge, where the Green’s function has to satisfy This equation is the zero energy limit of Eq. (9) with the substitution We can therefore read off the solution from the results of the previous section with E = 0 and now . Equation (10) yields in particular with . Fourier transformation yields The zero-energy Green’s function in the interface is given in terms of a Struve function and a Neumann function1, This yields logarithmic behavior of interaction potentials at small distances r ℓ and 1/r behavior for large separation r ℓ of charges in high-permittivity thin films, see also Fig. 2. For the comparison with image charges, we set z 0 = 0 and recall that the solution for the potential of a charge q at , z = 0 proceeds through the ansatz and symmetric continuation to z < −L/2. This yields electric fields and the junction conditions at z = L/2 yield for n ≥ 0 from the continuity of E r , and from the continuity of D z , These conditions can be solved through In particular, the potential at z = 0 is We have and therefore for The solution from image charges is in very good agreement with the analytic model for distances r L/2, where both the image charge solution and the analytic model show strong deviations from the bulk r −1 behavior. This is illustrated in Fig. 3 by plotting the reduced electrostatic potential for a charge q, in the interface. Figure 3 figure 3 Different reduced electrostatic potentials are plotted for . The upper dotted (green) line is the three-dimensional reduced potential L/(4πr). The central dotted (blue) line is the reduced potential following from the image charge solution (22). The solid (black) line is the potential from the analytic model (19). The lower dotted (red) line is the reduced logarithmic potential. The reduced potentials from our analytic model and from image charges are indistinguishable for , see also Fig. 4 It is also instructive to plot the relative deviation between the dimensionally hybrid potential which follows from (20) and the potential (23) from image charges. Figure 4 shows that for r L/2, the dimensionally hybrid model is a very good approximation to the potential from image charges with accuracy better than 10−2 if . For , the accuracy is still better than 4 × 10−2. Figure 4 figure 4 The relative deviation between the dimensionally hybrid potential from (19) and the potential (22) from image charges for An analysis of models for particles in the presence of a low effective mass interface, and for electromagnetic fields in the presence of a high-permittivity thin film, yields dimensionally hybrid densities of states (11) and electrostatic potentials (17,20) which interpolate between two-dimensional behavior and three-dimensional behavior. The analytic model for the electromagnetic fields is in very good agreement with the infinite series solution already for small distance scales r L/2, where the potential strongly deviates from the standard bulk r −1 potential. 
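Equation (20) is not reproduced above, but the combination of a Struve function and a Neumann (Bessel Y) function quoted there has the same special-function structure as the well-known Rytova–Keldysh thin-film potential, proportional to H0(r/r0) − Y0(r/r0) for a screening length r0 of the order of ℓ; the exact prefactor and length scale are model-dependent and are not assumed here. Whatever the prefactor, the crossover claimed in the text (logarithmic at small separations, 1/r at large separations) can be verified directly for this combination:

```python
import numpy as np
from scipy.special import struve, y0

# f(x) = H0(x) - Y0(x): logarithmic for small x, ~ 2/(pi*x) for large x.
# This is the special-function combination behind Keldysh-type thin-film potentials;
# overall prefactor and screening length are left unspecified in this sketch.
euler_gamma = 0.5772156649015329

x_small = np.array([1e-4, 1e-3, 1e-2])
f_small = struve(0, x_small) - y0(x_small)
log_approx = -(2.0 / np.pi) * (np.log(x_small / 2.0) + euler_gamma)
print(np.allclose(f_small, log_approx, rtol=1e-2))            # True: logarithmic regime

x_large = np.array([50.0, 100.0, 500.0])
f_large = struve(0, x_large) - y0(x_large)
print(np.allclose(f_large, 2.0 / (np.pi * x_large), rtol=1e-2))  # True: 1/x (Coulomb-like) regime
```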
At distance scales smaller than L/2, r −1 behavior seems to dominate again for the electrostatic potential, in agreement with expectations that for distances which are small compared to the lateral extension of a dielectric slab, bulk behavior should be restored. However, note that neither the inter-dimensional analytic model nor the solution from image charges is trustworthy for very small distances, because both models rely on a continuum approximation through the use of effective permittivities, but the continuum approximation should break down at sub-nanometer scales. The most important finding is that interfaces and thin films of width L should exhibit transitions between two-dimensional and three-dimensional distance laws for physical quantities at length scales of order Lm/2m* for massive particles, or of the corresponding combination of layer thickness and permittivities for the electromagnetic problem. Interfaces with strong band curvature or high permittivity should provide good samples for experimental study of the transition between two-dimensional and three-dimensional behavior.

Appendix: Solution of Eq. 9

Substitution of the Fourier transform into Eq. 9 yields This yields with (7) the condition Fourier transformation with respect to z yields This result implies that has the form with the yet to be determined satisfying For the treatment of the integrals, we should be consistent with the calculation of the free retarded Green's function (8), This yields And therefore where the definition ℓ ≡ m/2μ = Lm/2m* was used. Fourier transformation of Eq. (26) with respect to k yields finally The Green's function with only k space variables is found from the Fourier transform of Eq. (25), and the ensuing equations This yields It is easily verified that Fourier transformation yields again the result (26).

(1) Our notations for special functions follow the conventions of Abramowitz and Stegun [16].

1. Dick R: Int. J. Theor. Phys. 2003, 42: 569. doi:10.1023/A:1024446017417
2. Dick R: Nanoscale Res. Lett. 2008, 3: 140. doi:10.1007/s11671-008-9126-4
3. Dick R: Phys. E 2008, 40: 524. doi:10.1016/j.physe.2007.07.025
4. Dick R: Phys. E 2008, 40: 2973. doi:10.1016/j.physe.2008.02.017
5. Bychkov YA, Rashba EI: JETP Lett. 1984, 39: 78.
6. Bychkov YA, Rashba EI: J. Phys. C 1984, 17: 6039. doi:10.1088/0022-3719/17/33/015
7. Cappelluti E, Grimaldi C, Marsiglio F: Phys. Rev. Lett. 2007, 98: 167002. doi:10.1103/PhysRevLett.98.167002
8. Cappelluti E, Grimaldi C, Marsiglio F: Phys. Rev. B 2007, 76: 085334. doi:10.1103/PhysRevB.76.085334
9. Srisongmuang B, Pairor P, Berciu M: Phys. Rev. B 2008, 78: 155317. doi:10.1103/PhysRevB.78.155317
10. Vasilopoulos P, Wang XF: Phys. E 2008, 40: 1729. doi:10.1016/j.physe.2007.10.068
11. Li S-S, Xia J-B: Nanoscale Res. Lett. 2009, 4: 178. doi:10.1007/s11671-008-9222-5
12. Semenoff GW: Phys. Rev. Lett. 1984, 53: 2449. doi:10.1103/PhysRevLett.53.2449
13. Apalkov V, Wang XF, Chakraborty T: Mod. Phys. B.
2007, 21: 1165. doi:10.1142/S0217979207042604
14. Covaci L, Berciu M: Phys. Rev. Lett. 2008, 100: 256405. doi:10.1103/PhysRevLett.100.256405
15. Li T, Zhang Z: Nanoscale Res. Lett. 2010, 5: 169. doi:10.1007/s11671-009-9460-1
16. Abramowitz M, Stegun IA (Eds): Handbook of Mathematical Functions, 9th printing. Dover Publications, New York; 1970.

Acknowledgment: This research was supported by NSERC Canada.
Correspondence to Rainer Dick.
Cite this article: Dick, R. Dimensional Effects on Densities of States and Interactions in Nanostructures. Nanoscale Res Lett 5, 1546 (2010).
Keywords: Density of states; Coulomb and exchange interactions in nanostructures; Dielectric thin films
Open access peer-reviewed chapter Scaling the Benefits of Digital Nonlinear Compensation in High Bit-Rate Optical Meshed Networks Written By Danish Rafique and Andrew D. Ellis Submitted: April 28th, 2012 Reviewed: August 27th, 2012 Published: June 13th, 2013 DOI: 10.5772/52743 Chapter metrics overview 2,063 Chapter Downloads View Full Metrics 1. Introduction The communication traffic volume handled by trunk optical transport networks has been increasing year by year [1]. Meeting the increasing demand not only requires a quantitative increase in total traffic volume, but also ideally requires an increase in the speed of individual clients to maintain the balance between cost and reliability. This is particularly appropriate for shorter links across the network, where the relatively high optical signal-to-noise ratio (OSNR) would allow the use of a higher capacity, but is less appropriate for the longest links, where products are already close to the theoretical limits [2]. In such circumstances, it is necessary to maximize resource utilization and in a static network one approach to achieve this is the deployment of spectrally efficient higher-order modulation formats enabled by digital coherent detection. As attested by the rapid growth in reported constellation size [3,4], the optical hardware for a wide variety of coherently detected modulation formats is identical [5]. This has led to the suggestion that a common transponder may be deployed and the format adjusted on a link by link basis to either maximize the link capacity given the achieved OSNR, or if lower, match the required client interface rate [6] such that the number of wavelength channels allocated to a given route is minimized. It is believed that such dynamic, potentially self-adjusting, networks will enable graceful capacity growth, ready resource re-allocation and cost reductions associated with improved transponder volumes and sparing strategies. However additional trade-offs and challenges associated with such networks are presented to system designers and network planners. One such challenge is associated with the nonlinear transmission impairments which strongly link the achievable channel reach for a given set of modulation formats, symbol-rates [6,7] across a number of channels. Various methods of compensating fiber transmission impairments have been proposed, both in optical and electronic domain. Traditionally, dispersion management was used to suppress the impact of fiber nonlinearities [8,9]. Although dispersion management is appreciably beneficial, the benefit is specific to a limited range of transmission formats and rates and it enforces severe limitations on link design. Similarly, compensation of fiber impairments based on spectral inversion (SI) [10], has been considered attractive because of the removal of in-line dispersion compensation modules (DCM), transparency to modulation formats and compensation of nonlinearity. However, although SI has large bandwidth capabilities, it often necessitates precise positioning and customized link design (e.g., distributed Raman amplification, etc.). Alternatively, with the availability of high speed digital signal processing (DSP), electronic mitigation of transmission impairments has emerged as a promising solution. As linear compensation methods have matured in past few years [11], the research has intensified on compensation of nonlinear impairments. 
In particular, electronic signal processing using digital back-propagation (DBP) with time inversion has been applied to the compensation of channel nonlinearities [12,13]. Back-propagation may be located at the transmitter [14] or receiver [15], places no constraints on the transmission line and is thus compatible with the demands of an optical network comprising multiple routes over a common fiber platform. In principle this approach allows for significant improvements in signal-to-noise ratios until the system performance becomes limited only by non-deterministic effects [16] or the power handling capabilities of individual components. Although the future potential of nonlinear impairment compensation using DBP in a dynamic optical network is unclear due to its significant computational burden, simplification of nonlinear DBP using single-channel processing at the receiver suggests that the additional processing required for intra-channel nonlinearity compensation may be significantly lower than is widely anticipated [17,18]. Studies of the benefits of DBP have largely been verified for systems employing homogeneous network traffic, where all the channels have the same launch power [19]. However, as network upgrades are carried out, it is likely that channels employing different multi-level formats will become operational. In such circumstances, it has been demonstrated that the overall network capacity may be increased if the network traffic becomes inhomogeneous, not only in terms of modulation format, but also in terms of signal launch power [6,7,20]. In particular, if each channel operates at the minimum power required for error-free propagation (after error correction) rather than a global average power or the optimum power for the individual channel, the overall level of cross-phase modulation in the network is reduced [20]. In this chapter we demonstrate the application of electronic compensation schemes in a dynamic optical network, focusing on adjustable signal constellations with non-identical launch powers, and discuss the impact of periodic addition of 28-Gbaud polarization-multiplexed m-ary quadrature amplitude modulation (PM-mQAM) channels on existing traffic. We also discuss the impact of cascaded reconfigurable optical add-drop multiplexers on networks operating close to the maximum permissible capacity in the presence of electronic compensation techniques for a range of higher-order modulation formats and filter shapes.

2. Simulation conditions

Figure 1 illustrates the simulation setup. The optical link comprised nine (unless mentioned otherwise) 28-Gbaud WDM channels, employing PM-mQAM with a channel spacing of 50 GHz. For all the carriers, both polarization states were modulated independently using de-correlated 2^15 and 2^16 pseudo-random bit sequences (PRBS), for x- and y-polarization states, respectively. Each PRBS was de-multiplexed separately into two multi-level output symbol streams which were used to modulate an in-phase and a quadrature-phase carrier. The optical transmitters consisted of continuous wave laser sources, followed by two nested Mach-Zehnder modulator structures for x- and y-polarization states, and the two polarization states were combined using an ideal polarization beam combiner. The simulation conditions ensured 16 samples per symbol with 2^13 total simulated symbols per polarization.
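As a concrete illustration of the transmitter model just described, the following sketch builds one polarization of a Gray-mapped 16QAM symbol stream from an LFSR-generated PRBS. The generator taps, the Gray mapping and the unit-energy normalization are illustrative choices (the chapter specifies de-correlated 2^15/2^16 PRBS per polarization but not the mapping details), and Python is used here purely for illustration.

```python
# Illustrative sketch of one PM-mQAM transmitter branch: PRBS -> Gray-mapped
# 16QAM symbols.  The LFSR taps and the Gray mapping are assumptions, not
# taken from the chapter.
import numpy as np

def prbs(order, length, seed=1):
    """Fibonacci LFSR; the taps below are common maximal-length choices."""
    taps = {15: (14, 13), 16: (15, 14, 12, 3)}[order]
    state = [(seed >> i) & 1 for i in range(order)]
    if not any(state):
        state[0] = 1                       # avoid the all-zero state
    out = np.empty(length, dtype=int)
    for k in range(length):
        fb = 0
        for t in taps:
            fb ^= state[t]
        out[k] = state[-1]
        state = [fb] + state[:-1]
    return out

GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}   # Gray-coded 4-level axis

def bits_to_16qam(bits):
    """Map 4 bits/symbol to a complex 16QAM symbol (2 bits each on I and Q)."""
    bits = bits[: len(bits) - len(bits) % 4].reshape(-1, 4)
    i = np.array([GRAY_PAM4[(b[0], b[1])] for b in bits], dtype=float)
    q = np.array([GRAY_PAM4[(b[2], b[3])] for b in bits], dtype=float)
    return (i + 1j * q) / np.sqrt(10.0)    # unit average symbol energy

symbols_x = bits_to_16qam(prbs(15, 4 * 2**13))   # 2^13 symbols, as in the setup
print(symbols_x[:4], np.mean(np.abs(symbols_x)**2))
```

The second polarization would use the 2^16 sequence with an independent seed, and higher-order formats follow the same pattern with more bits per symbol.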
The signals were propagated over standard single mode fiber (SSMF) transmission link with 80 km spans, no inline dispersion compensation and single-stage erbium doped fiber amplifiers (EDFAs). The fiber had attenuation of 0.2 dB/km, dispersion of 20 ps/nm/km, and a nonlinearity coefficient (γ) of 1.5/W/km(unless mentioned otherwise). Each amplifier stage was modeled with a 4.5 dB noise figure and the total amplification gain was set to be equal to the total loss in each span. Figure 1. Simulation setup for 28-GbaudPM-mQAM (m= 4, 16, 64, 256) transmission system withLwavelengths andMspans per node (total spans is given byN). At the coherent receiver the signals were pre-amplified (to a fixed power of 0 dBm per channel), filtered with a 50 GHz 3rd order Gaussian de-multiplexing filter, coherently-detected and sampled at 2 samples per symbol. Transmission impairments were digitally compensated in two scenarios. Firstly by using electronic dispersion compensation (EDC) alone, employing finite impulse response (FIR) filters (T/2-spaced taps) adapted using a least mean square algorithm. In the second case, electronic compensation was applied via single-channel digital back-propagation (SC-DBP), which was numerically implemented by split-step Fourier method based solution of nonlinear Schrödinger equation. In order to establish the maximum potential benefit of DBP, the signals were up sampled to 16 samples per bit and an upper bound on the step-size was set to be 1 km with the step length chosen adaptively based on the condition that in each step the nonlinear effects must change the phase of the optical field by no more than 0.05 degrees. To determine the practically achievable benefit, in line with recent simplification of DBP algorithms, e.g. [17,18,21], we also employed a simplified DBP algorithm similar to [21], with number of steps varying from 0.5 step/span to 2 steps/span. Following one of these stages (EDC or SC-DBP) polarization de-multiplexing, frequency response compensation and residual dispersion compensation was then performed using FIR filters, followed by carrier phase recovery [22]. Finally, the symbol decisions were made, and the performance assessed by direct error counting (converted into an effective Q-factor (Qeff)). All the numerical simulations were carried out using VPItransmissionMaker®v8.5, and the digital signal processing was performed in MATLAB®v7.10. 3. Analysis of trade-offs in hybrid networks 3.1. Constraints on transmission reach In a dynamic network, there are a large range of options to provide the desired flexibility including symbol rate [23], sub-carrier multiplexing [24], network configuration [25] signal constellation and various combinations of these techniques. In this section we focus on the signal constellation and discuss the impact of periodic addition of PM-mQAM (m= 4, 16, 64, 256) transmission schemes on existing PM-4QAM traffic in a 28-GbaudWDM optical network with a total transparent optical path of 9,600 km. We demonstrate that the periodic addition of traffic at reconfigurable optical add-drop multiplexer (ROADM) sites degrades through traffic, and that this degradation increases with the constellation size of the added traffic. In particular, we demonstrate that undistorted PM-mQAM signals have the greatest impact on the through traffic, despite such signals having lower peak-to-average power ratio (PAPR) than dispersed signals, although the degradation strongly correlated to the total PAPR of the added traffic at the launch point itself. 
Using this observation, we propose the use of linear pre-distortion of the added channels to reduce the impact of the cross-channel impairments [26,27]. Note that the total optical path was fixed to be 9,600 kmand after every Mspans, a ROADM stage was employed and the channels to the left and right of the central channel were dropped and new channels with independent data patterns were added, as shown in Figure 2. in order to analyze the system performance, the dropped channels were coherently-detected after first ROADM and the central channel after the last ROADM link. Figure 2. Network topology for flexible optical network, employing PM-4QAM traffic as a through channel, and PM-mQAM traffic as neighboring channels, getting added/dropped at each ROADM site. Note that in this schematic only right-hand wavelength is shown to be added/dropped, however in the simulations both right and left wavelengths were add/dropped. The total path length was fixed to 9,600 km, and the number of ROADMs was varied. The optimum performance of the central PM-4QAM channel at 9,600 kmoccurred for a launch power of -1 dBm. In this study, the launch power of all the added channels was also fixed at -1 dBm, such that all channels had equal launch powers. Figure 3 illustrates the performance of the central test channel after the last node (solid), along with the performance of co-propagating channel employing various modulation formats after the first ROADM node (open) for a number of ROADM spacing’s, using both single-channel DBP (Figure 3a) and EDC (Figure 3b). It can be seen that single-channel DBP offers a Qeff improvement of ~1.5 dBcompared to EDC based system. This performance improvement is strongly constrained by inter-channel nonlinearities, such that intra-channel effects are not dominant. Moreover, the figure shows that as the number of ROADM nodes are increased, or the distance between ROADMs decreases, the performance of higher-order neighboring channels improves significantly due to the improved OSNR. It can also be seen from Figure 3 that added channels with higher-order formats induce greater degradation of the through channel. In particular if there are 30 ROADM sites (320 kmROADM spacing) allocated to transmit PM-64QAM, whilst this traffic operates with significant margin, the through traffic falls below the BER of 3.8x10-3. This increased penalty is due to the increased nonlinear degradation encountered in the first span after the ROADM node, where higher formats induce greater cross phase modulation(XPM) than PM-4QAM by virtue of their increased PAPR. However, even when the add drop traffic is PM-4QAM, the performance of the through channel degrades slightly as the number of ROADM nodes is increased, despite the reduction in PAPR due to the randomization of the nonlinear crosstalk. The estimated PAPR evolutions for the various formats are shown in Figure 4. Asymptotic values are reached after the first span, and reach a slightly higher value for m ≥ 16. The PAPR is reduced at the ROADM site itself, particularly for PM-4QAM. Figure 4 implies that harmful increases in the instantaneous amplitude of the interfering channels are not the entire cause of the penalty experienced by the through channel; we can therefore only conclude that the additional distortion results from interplay between channel walk off and nonlinear effects. Given that walk-off is known to induce short and medium range correlation in crosstalk between subsequent bits, effectively low pass filtering the crosstalk [28]. 
We thus believe that the penalty experienced by the through channel is not only because of variation in PAPR, but also due to the randomization of the crosstalk by the periodic replacement of the interfering data pattern. Figure 3. Qeff as a function of number of ROADMs (and distance between ROADM nodes) for 28-GbaudPM-mQAM showing performance of central PM-4QAM (solid, after total length), and neighboring PM-mQAM (open, after first node). a) with single-channel DBP, b) with electronic dispersion compensation. Square: 4QAM, circle: 16QAM, up triangle: 64QAM, diamond:256QAM. Up arrows indicate that no errors were detected, implying that the Qeff was likely to be above 12.59dB. Total link length is 9,600 km. Figure 4. Variation in PAPR, for 4QAM (black), 16QAM (red), 64QAM (green) and 256QAM (blue) for a loss-less linear fiber with 20 ps/nm/kmdispersion. Figure 5. Qeff of the PM-4QAM through channel for 28-GbaudPM-mQAM add/drop traffic after 9,600kmas a function of a figure of merit (FOM) defined in the text for various add drop configurations. Solid: with single-channel DBP, open: with EDC. This is confirmed by Figure 5, which plots the Qeff of PM-4QAM after last node, for both EDC and single-channel DBP, in terms of a figure of merit (FOM) related to the increased amplitude modulation experienced by the test channel in the spans immediately following the ROADM node, defined as, wheremrepresents the modulation order, ROADMNrepresents number of add-drop nodes, Imaxand Iallare the maximum and mean intensity of the given modulation format at the ROADM site. A strong correlation between the penalty and change in PAPR is observed. For instance, for a high number of ROADMs the system would be mostly influenced by relatively un-dispersed signals and the difference between peak-to-average fluctuations for multi-order QAM varies significantly. This leads to higher-order modulation formats impinging worse cross-channel effects on existing traffic for shorter routes. Having observed that the nonlinear penalty is determined by the reduction in the correlation of nonlinear phase shift between bits arising from changing bit patterns, and to changes in PAPR arising from undistorted signals, it is possible to design a mitigation strategy to minimize these penalties. Figure 6 illustrates, for both EDC and single-channel DBP systems, that if the co-propagating higher-order QAM channels are linearly pre-dispersed, the performance of the PM-4QAM through traffic can be improved. The figure shows that when positive pre-dispersion is applied, such that the neighboring channel constellation is never, along its entire inter node transmission length, restored to a well-formed shape, the impact of cross-channel impairments on existing traffic is reduced significantly. Figure 6. Qeff of the PM-4QAM through channel with 30 ROADM sites, when the neighboring PM-64QAM channel is linearly pre-dispersed. Solid: with single-channel DBP, open: with EDC. On the other hand, when negative pre-dispersion of less than the node-length (distance per node) is employed, the central test channel is initially degraded further. This behavior can be attributed to the increased impact of the PAPR of the un-dispersed constellation which is restored in the middle of the link. However, if negative pre-dispersion of more than the node-length is employed, the penalty is reduced due to lower PAPR induced XPM, and the performance saturates for higher values of pre-dispersion, similar to the case of positive pre-dispersion. 
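The figure of merit used in Fig. 5 is built from the maximum and mean intensities (I_max, I_all) of the added format at the ROADM site; its exact expression is not reproduced here, but the underlying peak-to-average intensity ratio of an undistorted constellation is easy to tabulate. The values below are for ideal square mQAM constellations per polarization with rectangular pulses and no dispersion, so they differ from the transmitted-waveform PAPR traces of Fig. 4 and should be read as an illustrative assumption.

```python
# Illustrative sketch: peak-to-average intensity ratio of ideal square mQAM
# constellations (one polarization, no pulse shaping, no dispersion).  The
# chapter's FOM combines such intensities with the number of ROADM nodes,
# but its exact expression is not reproduced here.
import numpy as np

def qam_constellation(m):
    k = int(np.sqrt(m))
    levels = np.arange(-(k - 1), k, 2)            # e.g. [-3, -1, 1, 3] for 16QAM
    return (levels[:, None] + 1j * levels[None, :]).ravel()

for m in (4, 16, 64, 256):
    c = qam_constellation(m)
    papr = np.max(np.abs(c)**2) / np.mean(np.abs(c)**2)
    print(f"{m:>3}-QAM   I_max/I_mean = {papr:.2f}   ({10*np.log10(papr):.2f} dB)")
```

The monotonic growth of this ratio with constellation size (roughly 1.0, 1.8, 2.3 and 2.6 for m = 4 to 256) is consistent with the observation that higher-order, undistorted add-drop channels impose the strongest cross-phase modulation on the through traffic in the spans immediately following a ROADM node.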
Note that avoiding well-formed signals along the entire link corresponds to maximizing the path-averaged PAPR of the signals. The benefits of this strategy have subsequently been predicted from a theoretical standpoint [27].

3.2. Constraints on transmitted power

In this section, we demonstrate that independent optimization of the transmitted launch power enhances the performance of higher modulation order add-drop channels but severely degrades the performance of through traffic due to strong inter-channel nonlinearities. However, if an altruistic launch power policy is employed such that the higher-order add-drop traffic still meets the BER of 3.8x10-3, a trade-off can be recognized between the performance of higher-order channels and existing network traffic, enabling higher overall network capacity with minimal crosstalk [19]. As a baseline for this study, we initially consider transmission distances up to 9,600 km with the same 80 km spans, chosen to enable a suitable performance margin (at a bit-error rate of 3.8x10-3) for the network traffic given various modulation schemes at a fixed launch power of -1 dBm (optimum power as determined in the previous section). For a dynamic network with N ROADMs and mth-order PM-QAM, the overall results are summarized in Table 1. The table shows under which conditions the central PM-4QAM channel (right-hand symbol) and the periodically added traffic (left-hand symbol) are simultaneously able to achieve error-free operation after FEC. Two ticks indicate that both types of traffic are operational, whilst a cross indicates that at least one channel produces severely errored signals. As expected, with decreasing ROADM spacing, the operability of higher-order neighboring channels increases due to the improved OSNR. However, it can also be seen that, as a consequence, added channels with higher-order formats induce greater degradation of the through channel through nonlinear crosstalk, as shown in Section 3.1. In particular, if the ROADM spacing is 320 km, allocated to transmit PM-64QAM, whilst this traffic is operable, the through traffic falls below the BER threshold. Conversely, for large ROADM spacing, there is little change in nonlinear crosstalk, since the m-QAM signals are highly dispersed, but the higher-order format traffic has insufficient OSNR for error-free operation. We refer to this approach as "fixed network power". Table 1. Operability of PM-mQAM/4QAM above the BER threshold of 3.8x10-3 for a total transmission distance of 9,600 km, as a function of ROADM spacing. Tick/Cross (left) represents performance of mQAM, Tick/Cross (right) represents corresponding performance of the central 4QAM. Tick: Operational, Cross: Non-operational. Since higher-order modulation formats have higher required OSNR, we expect the optimum launch power for those channels to be different from that used in the fixed network power scenario, which was operated at a launch power of -1 dBm. Thus, for example, for large ROADM spacing, improved performance might be expected if the add-drop traffic operates with increased launch power. Figure 7 illustrates the performance of the through channel and the higher-order add-drop channels as a function of launch power of the add-drop traffic (the through channel operates with a fixed, previously optimized, launch power of -1 dBm). For clarity we report two ROADM spacings, selected to give zero margin (Figure 7a) or ~2 dB margin (Figure 7b) for 256QAM add-drop traffic.
The ROADM spacing for 16 and 64QAM signals were scaled in proportion (approximately) to their required OSNR levels under linear transmission. The exact ROADM spacing is reported in the figure captions. Figure 7 clearly illustrates that the higher-order formats operating over a longer (shorter) reach enable lower (higher) Qeff, but also that the nonlinear effects increase in severity as the modulation order is increased. In particular, the long distance through traffic is strongly degraded before the nonlinear threshold is reached for such formats. Comparing Figure 7a and Figure 7b, we can see that the reduced ROADM spacing in Figure 7b enables improved performance of the add-drop channels; however the degradation of the through channel is increasingly severe. This change in behavior between formats can be attributed to the increased amplitude modulation imposed by un-dispersed signals added at each ROADM site, as discussed previously. Figure 7. Qeff as a function of launch power of two neighboring channels for 28-GbaudPM-mQAM, showing performance of central PM-4QAM (Solid), and neighboring PM-mQAM (Half Solid). Triangle: 16QAM, Circle: 64QAM, Square:256QAM. The launch power per channel for PM-4QAM is fixed to -1dBm. ROADM spacing of, a) 2400, 640, 160km, b) 1200, 320, 80kmfor 16, 64, 256 QAM, respectively. We can use the results of Figure 7 to analyze the impact of various power allocation strategies. Clearly if we allow each transponder to adjust its launch power to optimize its own performance autonomously, a high launch power will be selected and the degradation to the traffic from other transponders increases in severity, and in all six scenarios in Figure 7 the through channel fails if the performance of the add drop traffic is optimized independently. This suggests that launch power should be centrally controlled. Howevercentrally controlled optimization of individual launch powers for each transponder is complex; so a more promising approach would be a fixed launch power irrespective of add-drop format or reach to minimize the complexity of this control. We have already seen (Table 1) that if the launch power is set to favor the performance of PM-4QAM (-1 dBm) the flexibility in transmitted format for the add/drop transponders is low, and to confirm this in Figure 7 four of the scenarios fail. The best performance for these two scenarios is achieved at a fixed launch power of -3 dBm, but we still find that 3 scenarios fail to establish error free connections. However, if the transponders are altruistically operated at the minimum launch power required for the desired connection (not centrally controlled), the majority of the scenarios studied result in successful connections. The one exception is the add-drop of 256QAM channels with a ROADM spacing of 160 km, which is close to the maximum possible reach of the format. Note that shorter through paths would tend to use higher-order formats for all the routes, where nonlinear sensitivity is higher [29], and therefore we expect similar conclusions. 4. Application in meshed networks In the previous section, we identified that optimum performance for a given predetermined modulation format was obtained by using the minimum launch power. However, this arbitrary selection of transmitted format fails to take into account the ability of a given link to operate with different formats, leading to a rich diversity of connections. 
In this section, we focus on the impact of flexibility in the signal constellation, allowing for evolution of the existing ROADM based static networks. We consider a configuration where network capacity is increased by allowing higher-order modulation traffic to be transmitted on according to predetermined rules based on homogenous network transmission performance. In particular we consider a 50 GHz channel grid with coherently-detected 28-GbaudPM-mQAMand 20 wavelength channels. We demonstrate that even if modulation formats are chosen based on knowledge of the maximum transmission reach aftersingle-channel digital back-propagation, for the network studied, the majority of the network connections (75%) are operable with significant optical signal-to-noise ratio margin when operated with electronic dispersion compensation alone. However, 23% of the links require the use of single-channel DBP for error free operation. Furthermore, we demonstrate that in this network higher-order modulation formats are more prone to impairments due to channel nonlinearities and filter crosstalk; however they are less affected by the bandwidth constrictions associated with ROADM cascades due to shorter operating distances. Finally, we show that, for any given modulation order, a minimum filter Gaussian order of ~3 or bandwidth of ~35 GHzenables the performance with approximately less than 1 dBpenalty with respect to ideal rectangular filters [30]. 4.1. Network design To establish a preliminary estimate of maximum potential transmission distance of each available format, we employed the transmission reaches identified in Section 3. These are suitable to enable a BER of 3.8x10-3 at a fixed launch power of -1 dBmassuming the availability of single-channel DBP. These conditions gave maximum reaches of 2,400 kmfor PM-16QAM, 640 kmfor PM-64QAM and 160 kmfor 256 QAM. Note that only single-channel DBP was considered in this study since in a realistic mesh network access to neighboring traffic might be impractical. WDM based DBP solution may be suitable for a point to point submarine link or for a network connection where wavelengths linking the same nodes co-propagate using adjacent wavelengths. Implementation of this condition would require DBP aware routing and wavelength assignment algorithms. This approach could enable significant Qeff improvements or reach increases. For 64QAM, up to 7 dBQeffimprovements were shown in [29], although the benefit depends on the number of processed channels [31]. We then applied this link capacity rule to an 8-node route from a Pan-European network topology (see highlighted link in Figure 8). To generate a representative traffic matrix, for each node, commencing with London, we allocated traffic demand from the node under consideration to all of the subsequent nodes, operating the link at the highest order constellation permissible for the associated transmission distance, and selecting the next wavelength. We note that none of the links in this chosen route were suitable for 256QAM, indeed only the Strasberg to Zurich and Vienna to Prague links are expected to be suitable for this format. Figure 8. node Pan-European network topology. Link 1: London-to-Amsterdam: 7 spans, Link 2: Amsterdam-to-Brussels: 3 spans, Link 3: Brussels-to-Frankfurt: 6 spans. Link 4: Frankfurt-to-Munich: 6 spans, Link 5: Munich-to-Milan: 7 spans, Link 6: Milan-to-Rome: 9 spans, Link 7: Rome-to-Athens: 19 spans. (80km/span). 
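The reach-based format assignment described above can be summarized in a few lines. The sketch below uses the single-channel-DBP reaches quoted earlier (2,400 km for PM-16QAM, 640 km for PM-64QAM, 160 km for PM-256QAM) and treats 9,600 km as the PM-4QAM reach, which is an assumption based on the baseline distance studied in this chapter; it only classifies single-hop demands on the Figure 8 route, whereas the traffic matrix of Table 2 also stacks multi-hop demands wavelength by wavelength.

```python
# Illustrative sketch of the reach-based format selection: each demand gets
# the highest-order PM-mQAM whose assumed reach covers its transparent path.
SPAN_KM = 80
REACH_KM = {256: 160, 64: 640, 16: 2400, 4: 9600}   # PM-mQAM reach after SC-DBP
                                                     # (the 4QAM value is an assumption)

def pick_format(n_spans):
    dist_km = n_spans * SPAN_KM
    for m in (256, 64, 16, 4):                       # try the highest order first
        if dist_km <= REACH_KM[m]:
            return m
    return None                                      # demand would be blocked

# spans per hop on the London-Athens route of Figure 8
hops = {"London-Amsterdam": 7, "Amsterdam-Brussels": 3, "Brussels-Frankfurt": 6,
        "Frankfurt-Munich": 6, "Munich-Milan": 7, "Milan-Rome": 9, "Rome-Athens": 19}
for name, spans in hops.items():
    print(f"{name:<18} {spans*SPAN_KM:>5} km -> PM-{pick_format(spans)}QAM")
```

Consistent with the text, none of the individual hops on this route is short enough for PM-256QAM, the longer hops fall back to PM-16QAM, and a multi-hop demand such as the full 57-span London-to-Athens path would drop to PM-4QAM.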
Once all nodes were connected by a single link, this process was repeated (in the same order), adding additional capacity between nodes where an unblocked route was available until all 20 wavelengths were allocated, and no more traffic could be assigned without blockage. Table 2 illustrates the resultant traffic matrix showing the location where traffic was added and dropped (gray highlighting) and the order of the modulation format (numbers) carried wavelength (horizontal index) on each link (vertical index). For example, emerging from node 6 are nine wavelengths carrying PM-4QAM and 5 wavelengths carrying PM-16QAM whilst on the center wavelength, PM-16QAM data is transmitted from node 1 (London) to node 5 (Munich) where this traffic is dropped and replaced with PM-64QAM traffic destined for node 6 (Milan). This ensured that various nodes were connected by multiple wavelengths. As it can be seen, the adopted procedure allowed for a reasonably meshed optical network (36 connections) with shortest route of 3 spans and longest path of 57 spans, emulating a quasi-real traffic scenario with highly heterogeneous traffic. At each node, add-drop functionality was enabled using a channelized ROADM architecture where all the wavelengths were de-multiplexed and channels were added/dropped, before re-multiplexing the data signals again. We considered Rectangular and Gaussian-shaped filters for ROADM stages, and the order of the Gaussian filters was varied from 1 through 6. -10- 9- 8- 7- 6- 5-4-3-2- 10+ 1+ 2+ 3+ 4+ 5+ 6+ 7+ 8+ 9 Table 2. Traffic matrix (Each element represents the modulation order, Grayed: Traffic dropped and added at nodes highlighted in gray. 4.2. Results and discussions 4.2.1. Nonlinear transmission with ideal ROADMs Figure 9 depicts the required OSNR of each connection as a function of transmission distance, after electronic dispersion compensation. Note that in this case we employed rectangular ROADM filters to isolate the impact of inter-channel nonlinear impairments from filtering crosstalk (no cascade penalties were observed with ideal filters). Numerous conclusions can be ascertained from this figure. First, these results confirm that with mixed-format traffic and active ROADMs, as the transmission distance is increased the required OSNR increases irrespective of the modulation order due to channel nonlinearities. Second, as observed by the greater rate of increase in required OSNR with distance, the higher-order channels are most degraded by channel nonlinearities, even at the shortest distance traversed. Furthermore, even for the shortest distances the offset between the theoretical OSNR for a linear system and the simulated values are greater for higher order formats. These two effectsare attributed to the significantly reduced minimum Euclidian distance which leads to increased sensitivity to nonlinear effects. However, for a system designed according to single-channel DBP propagation limits, as the one studied here, one can observe that majority of the links operate using EDC alone (except the ones highlighted by up-arrows). Note that managing the PAPR for such formats through linear pre-dispersion could further improve the transmission performance, as shown in Section 1.3. 
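For the ROADM stages the chapter sweeps Gaussian filters of order 1 through 6. One common definition of such an n-th order (super-)Gaussian response, assumed here because the chapter does not spell out its exact form, is |H(f)|^2 = exp[-ln2 (2f/B)^(2n)] with 3 dB bandwidth B:

```python
# Illustrative sketch of an n-th order Gaussian (super-Gaussian) ROADM filter.
# The functional form is a common convention and is assumed, not quoted from
# the chapter: |H(f)|^2 = exp(-ln2 * (2 f / B)^(2 n)), with B the 3 dB bandwidth.
import numpy as np

def gaussian_filter(freq_hz, bw_3db_hz, order):
    """Amplitude response of an order-`order` Gaussian filter centered at 0 Hz."""
    return np.exp(-0.5 * np.log(2) * (2.0 * freq_hz / bw_3db_hz) ** (2 * order))

for n in (1, 2, 3, 6):
    h = gaussian_filter(25e9, 35e9, n)               # response 25 GHz off-center
    print(f"order {n}: rejection at +/-25 GHz = {20*np.log10(max(h, 1e-150)):.1f} dB")
```

Higher orders approach an ideal rectangular passband, consistent with the saturation of the cascade penalty beyond roughly third order reported in Section 4.2.2.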
Additionally, in order to examine the available system margin, Figure 9 also shows the received OSNR for various configurations, where it can be seen that majority of the links (except 3) have more than 2 dBavailable margins, and that our numerical results show an excellent match to the theoretical predictions. Figure 9. Nonlinear tolerance of PM-mQAM in a dynamic mesh network after EDC. a) Colored: OSNR at BER of 3.8x10-3 vs. Distance (Links traversed: 1(square), 2(circle), 3(up-tri), 4(down-tri), 5(left-tri), 6(right-tri), 7(diamond), horizontal lines (theoretical required OSNR)), open: intermediate nodes, solid: destination nodes. Black: Received OSNR (black spheres), Line (theoretical received OSNR), Dotted Line (theoretical received OSNR with 5dBmargin). Up arrows indicate failed connections (corresponding to drop nodes). As discussed, the results presented in Figure 9 exclude 9 network connections classified as failed (25% of the total traffic), where the calculated BER was always found to be higher than the 3.8x10-3. In order to address the failed routes, we employed single-channel DBP, as shown in [21], on such channels, as shown in Figure 10 (red: simplified, blue: full-precision 40 steps per span).It can be seen that all but one of the links can be restored by using single-channel DBP, with the Qeffincreasing by an average of ~1 dB, consistent with the improvements observed for heterogeneous traffic in Section 1.3. The link which continues to give a BER even after after single-channel DBP is operated with the highest order modulation format studied, and its two nearest neighbors are both highly dispersed. Note that even though the maximum node lengths are chosen based on nonlinear transmission employing single-channel DBP, most of the network traffic also abide by the EDC constraints (64QAM: ≥ 1 span, 16QAM: ≥ 6 spans, 4QAM ≥ 24 spans). The failed links have one-to-one correlation with violation of these EDC constraints, allowing for prediction of DBP requirements with a quarter of the total network traffic requiring the implementation of single-channel DBP. Also, note that all but two of the links are operable with less than 15 DBP steps for the whole link. Figure 10. Qeff as a function of network nodes for failed routes, shown by up-arrows inFig. 5, for PM-mQAM in a dynamic mesh network. After EDC (black) and single-channel DBP (red: simplified, blue: full-precision 40 steps per span). Table shows the network parameters for each scenario and number of steps for single-channel simplified DBP. These results give some indication of the benefit of flexible formats and DBP. For particular network studied (assuming one of the two failed links works with high precision DBP), if homogeneous traffic, employing 4QAM, is considered, a total network capacity of 4-Tb/scould be achieved. On the other hand, flexible m-ary QAM employing bandwidth allocation based on EDC performance limits only (not shown) enables ~60% increase in transmission capacity (6.8-Tb/s), while designs accounting for SC-DBP add a further 12% increase in capacity (7.7-Tb/s). Note that for traffic calculations based on EDC constraints, we assumed that the routes of Figure 10 would operate satisfactorily for the next format down and that there would be no increase in the nonlinear penalty experienced by any other channel. Further increase in capacity can be attained if pre-dispersion or limited WDM DBP are used, or if more format granularity is introduced (e.g. 8QAM and 32QAM) to exploit the remaining margin. 
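The single-channel DBP used to recover the failed routes is, at its core, a split-step Fourier solution of the nonlinear Schrödinger equation with inverted fiber parameters. The sketch below (written in Python rather than the MATLAB/VPI tool-chain used in the chapter) shows one simplified realization with a fixed number of steps per span and loss handled through an effective length; the adaptive step-size rule (at most 0.05 degrees of nonlinear phase per step) and the exact operator ordering of the chapter's implementation are not reproduced.

```python
# Illustrative, simplified single-channel DBP: the received baseband field is
# passed through a virtual fiber whose dispersion and Kerr nonlinearity are
# inverted.  Fiber numbers follow the simulation section (D = 20 ps/nm/km,
# gamma = 1.5 /W/km, alpha = 0.2 dB/km, 80 km spans); step count, operator
# ordering and power normalization are simplifications.
import numpy as np

c, lam = 299792458.0, 1550e-9
D      = 20e-6                              # s/m^2 (= 20 ps/nm/km)
beta2  = -D * lam**2 / (2 * np.pi * c)      # ~ -25.5 ps^2/km
gamma  = 1.5e-3                             # 1/(W m)
alpha  = 0.2e-3 * np.log(10) / 10.0         # power attenuation, 1/m
span_m = 80e3
fs     = 2 * 28e9                           # 2 samples/symbol at 28 Gbaud

def sc_dbp(field, n_spans, steps_per_span=1):
    """Back-propagate `field` (complex samples in sqrt(W)) over n_spans of SSMF."""
    w = 2 * np.pi * np.fft.fftfreq(field.size, d=1.0 / fs)
    dz = span_m / steps_per_span
    l_eff = (1.0 - np.exp(-alpha * dz)) / alpha     # effective nonlinear length per step
    # forward dispersion operator for one step (sign depends on the Fourier
    # convention; taking its conjugate below undoes it either way)
    cd_fwd = np.exp(1j * 0.5 * beta2 * w**2 * dz)
    for _ in range(n_spans * steps_per_span):
        field = field * np.exp(-1j * gamma * np.abs(field)**2 * l_eff)   # undo Kerr phase
        field = np.fft.ifft(np.fft.fft(field) * np.conj(cd_fwd))         # undo dispersion
    return field
```

Coarser operation, for example one step per two spans, corresponds to the 0.5 step/span setting mentioned earlier, while the full-precision reference curves in Fig. 10 use 40 steps per span.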
In this example, 25% of transponders operating in single-channel DBP mode enable a 12% increase in capacity. One may therefore argue that in order to provide a the same increase in capacity without employing DBP, approximately 12% more channels would be required, consuming 12% more energy (assuming that the energy consumption is dominated by the transponders). In the case studied, since a ¼ of transponders require DBP, breakeven would occur if the energy consumption of a DBP transponder was 50% greater than a conventional transponder. Given that commercial systems allocate approximately 3-5% of their power to the EDC chipset [32], this suggests that the DBP unit used could be up to 16 times the complexity of the EDC chip. The results reported in Figure 10 with simplified DBP fall within this bound and highlight the practicality of simplified DBP algorithms. Figure 11. Qeff as a function of Gaussian filter order (35GHzbandwidth) for a 6dBmargin from theoretical achievable OSNR. a) 4QAM; b) 16QAM; c) 64QAM. (up-arrows indicate that no errors were detected). 4.2.2. Filter order and BW dependence Figure 11 shows the performance of a selection of links with less than 6 dBmargin from the theoretical achievable OSNR (see Figure 9 for links used, we show only the links with the worst required OSNR in the case of 16QAM for clarity), as a function of the Gaussian filter order within each ROADM. As it is well-known, the transmission penalty decreases as filter order increases [33]. However, it can be seen that for higher-order modulation formats, the transmission performance saturates at lower filter orders, compared to lower-order formats. This trend is related to the fact that modulation formats traversing through greater number of nodes are more strongly dependent on the Gaussian order (attributed to known penalties from filter cascades [34,35]). For instance, the performance of 4QAM traffic is severely degraded as a function of Gaussian order, due to the higher number of nodes traversed by such format. 16QAM channels show relatively good tolerance to filter order due to reduced number of hops, however when greater than 3 nodes are employed, the performance again becomes a strong function of filter order. 64QAM is least dependent on filter order since no intermediate ROADMs are traversed. For any given modulation order, a minimum Gaussian order of ~3 enables the optimum performance to be within 1 dBof the performance for an ideal rectangular filter. Figure 12. Qeff as a function of Gaussian filter bandwidth (and filter order) for worst-case OSNR margin seen inFigure 6.8. a) 4QAM; b) 16QAM; c) 64QAM. The simulated Qeff versus 3 dBbandwidth of the ROADM stages and filter order is shown in Figure 12, again for the worst-case required OSNR observed in Figure 9 for each modulation format. For lower bandwidths, the Qeff is degraded due to bandwidth constraints. With the exception of second order filters, bandwidths down to 35 GHzare sufficient for all the formats studied. However, consistent with previous analysis (in Figure 10), the impact of filter order on 64QAM is minimal and lower-order filters seem to have better performance than higher-order ones at 25GHzbandwidth. This is because when the signal bandwidth (28-GHz) exceeds the filter bandwidth, the lower order filters capture more of the signal spectra. However, this effect is visible in the case of 64QAM only since no nodes were traversed in this case, thereby avoiding the penalty from ROADM stages with lower filter orders. 5. 
Summary and future work In this chapter we explored the network aspect of advanced physical layer technologies, including multi-level formats employing varying DSP, and solutions were proposed to enhance the capacity of static transport networks. It was demonstrated that that if the order of QAM is adjusted to maximize the capacity of a given route, there may be a significant degradation in the transmission performance of existing traffic for a given dynamic network architecture. Such degradations were shown to be correlated to the accumulated peak-to-average power ratio of the added traffic along a given path, and that management of this ratio through pre-distortion was proposed to reduce the impact of adjusting the constellation size on through traffic. Apart from distance constraints, we also explored limitations in the operational power range of network traffic. The transponders which autonomously select a modulation order and launch power to optimize their own performance were reported to have a severe impact on co-propagating network traffic. A solution was proposed to operate the transponders altruistically, offering lower penalties than network controlled fixed power approach. In the final part of our analysis, the interplay between different higher-ordermodulation channels and the effect of filter shapes and bandwidth of(de)multiplexers on the transmission performance, in a segment of pan-European optical network was explored. It was verified that if the link capacities are assigned assuming that digital back propagation is available, 25% of the network connections fail using electronic dispersion compensation alone. However, majority of such links can indeed be restored by employing single-channel digital back-propagation. Our results indicated some benefit of flexible formats and DBP in realistic mesh networks. We showed that for particular network studied, if homogeneous traffic, employing 4QAM is considered, a total network capacity of 4 Tb/scan be achieved. On the other hand, flexible m-ary QAM employing bandwidth allocation based on EDC performance limits enable ~60% increase in transmission capacity (6.8 Tb/s), while designs accounting for SC-DBP add a further 12% increase in capacity (7.7 Tb/s). Further enhancement in network capacity may be obtained through the use of intermediate modulation order, dispersion pre-compensation for nonlinearity control and the use of altruistic launch powers. In terms of network evolution, the ultimate goal is to enable software-defined transceivers, where each node would switch itself to just-rightmodulation scheme and associated DSP, based on various physical layer, distance, power, and etc. constraints. Modeling of real-time traffic employing the content covered in this chapter, should motivate and pave the way for high capacity upgrade of currently deployed networks. In addition, modulation/DSP aware routing and wavelength assignment algorithms (e.g. DBP bandwidth aware wavelength allocation) would further enhance the transmission capacity. This work was supported by Science Foundation Ireland under Grant numbers 06/IN/I969 and 08/CE/11523. 1. 1. R.W. Tkach, "Scaling optical communications for the next decade and beyond," Bell Labs Technical Journal 14, 3-9 (2010). 2. 2. P. Winzer, “Beyond 100G Ethernet,” IEEE Communications Magazine 48, 26 (2010). 3. 3. S. Makovejsm, D. S. Millar, V. Mikhailov, G. Gavioli, R. I. Killey, S. J. Savory, and P. 
Bayvel, “Experimental Investigation of PDMQAM16 Transmission at 112 Gbit/s over 2400 km,” OFC/NFOEC, OMJ6 (2010). 4. 4. J. Yu, X. Zhou, Y. Huang, S. Gupta, M. Huang, T. Wang, and P. Magill, “112.8-Gb/s PM-RZ 64QAM Optical Signal Generation and Transmission on a 12.5GHz WDM Grid,” OFC/NFOEC, OThM1 (2010). 5. 5. M. Seimetz, Higher-order modulation for optical fiber transmission. Springer (2009). 6. 6. A. Nag, M. Tornatore, and B. Mukherjee, “Optical network design with mixed line rates and multiple modulation formats,” Journal of Lightwave Technology 28, 466–475 (2010). 7. 7. C. Meusburger, D. A. Schupke, and A. Lord, “Optimizing the migration of channels with higher bitrates,” Journal of Lightwave Technology 28, 608–615 (2010). 8. 8. M. Suzuki, I. Morita, N. Edagawa, S. Yamamoto, H. Taga, and S. Akiba, "Reduction of Gordon-Haus timing jitter by periodic dispersion compensation in soliton transmission," Electronics Letters 31, 2027-2029 (1995). 9. 9. C. Fürst, C. Scheerer, G. Mohs, J-P. Elbers, and C. Glingener, "Influence of the dispersion map on limitations due to cross-phase modulation in WDM multispan transmission systems," Optical Fiber Communication Conference, OFC ’01, MF4 (2001). 10. 10. D.D. Marcenac, D. Nesset, A. E. Kelly, M. Brierley, A. D. Ellis, D. G. Moodie, and C. W. Ford, "40 Gbit/s transmission over 406 km of NDSF using mid-span spectral inversion by four-wave-mixing in a 2 mm long semiconductor optical amplifier," Electronics Letters 33, 879 (1997). 11. 11. M. Kuschnerov, F. N. Hauske, K. Piyawanno, B. Spinnler, M. S. Alfiad, A. Napoli, and B. Lankl, “DSP for coherent single-carrier receivers,” Journal of Lightwave Technology 27, 3614-3622 (2009). 12. 12. X. Li, X. Chen, G. Goldfarb, Eduardo Mateo, I. Kim, F. Yaman, and G. Li, ‘‘Electronic post-compensation of WDM transmission impairments using coherent detection and digital signal processing,” Opt. Express, 16, 880 (2008). 13. 13. D. Rafique, J. Zhao, and A. D. Ellis, "Digital back-propagation for spectrally efficient WDM 112 Gbit/s PM m-ary QAM transmission," Opt. Express 19, 5219-5224 (2011). 14. 14. C. Weber, C.-A. Bunge, and K. Petermann, ‘‘Fiber nonlinearities in systems using electronic predistortion of dispersion at 10 and 40 Gbit/s,” Journal of Lightwave Technology 27, 3654-3661 (2009). 15. 15. G. Goldfarb, M.G. Taylor, and G. Li, ‘‘Experimental demonstration of fiber impairment compensation using the split step infinite impulse response method,” IEEE LEOS, ME3.1 (2008). 16. 16. D. Rafique and A. D. Ellis, "Impact of signal-ASE four-wave mixing on the effectiveness of digital back-propagation in 112 Gb/s PM-QPSK systems," Opt. Express 19, 3449-3454 (2011). 17. 17. L.B. Du, and A. J. Lowery, "Improved single channel backpropagation for intra-channel fiber nonlinearity compensation in long-haul optical communication systems," Opt. Express 18, 17075-17088 (2010). 18. 18. L. Lei, Z. Tao, L. Dou, W. Yan, S. Oda, T. Tanimura, T. Hoshida, and J. C. Rasmussen, "Implementation Efficient Nonlinear Equalizer Based on Correlated Digital Backpropagation," OFC/NFOEC, OWW3 (2011). 19. 19. S. J. Savory, G. Gavioli, E. Torrengo, and P. Poggiolini, "Impact of Interchannel Nonlinearities on a Split-Step Intrachannel Nonlinear Equalizer," Photonics Technology Letters, IEEE 22, 673-675 (2010). 20. 20. D. Rafique and A. D. Ellis, "Nonlinear Penalties in Dynamic Optical Networks Employing Autonomous Transponders," Photonics Technology Letters, IEEE 23, 1213-1215 (2011). 21. 21. D. Rafique, M. Mussolin, M. Forzati, J. Martensson, M.N. 
Chugtai, A.D. Ellis, “Compensation of intra-channel nonlinear fibre impairments using simplified digital backpropagation algorithm”, Optics Express, Opt. Express 19, 9453-9460 (2011). 22. 22. C. S. Fludger, T. Duthel, D. vanden Borne, C. Schulien, E.-D. Schmidt, T. Wuth, J. Geyer, E. DeMan, G.-D. Khoe, and H. de Waardt,, "Coherent Equalization and POLMUX-RZ-DQPSK for Robust 100-GE Transmission," J. Lightwave Technol. 26, 64-72 (2008). 23. 23. T. Wuth, M. W. Chbat, and V. F. Kamalov, “Multi-rate (100G/40G/10G) Transport over deployed optical networks,” OFC/NFOEC, NTuB3 (2008). 24. 24. W. Wei, Z. Lei, and Q. Dayou, “Wavelength-based sub-carrier multiplexing and grooming for optical networks bandwidth virtualization,” OFC/NFOEC, PDP35 (2008) 25. 25. R. Peter, and C. Brandon, “Evolution to colorless and directionless ROADM architectures,” OFC/NFOEC, NWE2 (2008). 26. 26. D. Rafique and A.D. Ellis “Nonlinear penalties in long-haul optical networks employing dynamic transponders,” Optics Express 19, 9044-9049, (2011). 27. 27. S. Turitsyn, M. Sorokina, and S. Derevyanko, "Dispersion-dominated nonlinear fiber-optic channel," OpticsLetters 37, 2931-2933 (2012) . 28. 28. L.E. Nelson, A. H. Gnauck, R. I. Jopson, and A. R. Chraplyvy, “Cross-phase modulation resonances in wavelength-division-multiplexed lightwave transmission,” ECOC, 309–310 (1998). 30. 30. D. Rafique and A.D. Ellis “Nonlinear and ROADM induced penalties in 28 Gbaud dynamic optical mesh networks employing electronic signal processing," Optics Express 19, 16739-16748, (2011). 31. 31. D. Rafique and A. D. Ellis, "Various Nonlinearity Mitigation Techniques Employing Optical and Electronic Approaches," Photonics Technology Letters, IEEE 23, 1838-1840 (2011). 32. 32. K. Roberts, “Digital signal processing for coherent optical communications: current state of the art and future challenges,” SPPCOM, SPWC1 (2011). 33. 33. F. Heismann, “System requirements for WSS filter shape in cascaded ROADM networks,” OFC/NFOEC, OThR1 (2010). 34. 34. T. Otani, N. Antoniades, I. Roudas, and T. E. Stern, “Cascadability of passband-flattened arrayed waveguidegrating filters in WDM optical networks,” Photonics Technology Letters 11, 1414-1416 (1999). 35. 35. M. Filer, and S. Tibuleac, “DWDM transmission at 10Gb/s and 40Gb/s using 25GHz grid and flexible-bandwidth ROADM,” OFC/NFOEC, NThB3 (2011). Written By Danish Rafique and Andrew D. Ellis
Quantum gravity

The current understanding of gravity is based on Albert Einstein's general theory of relativity, which is formulated within the framework of classical physics. On the other hand, the nongravitational forces are described within the framework of quantum mechanics, a radically different formalism for describing physical phenomena based on probability.[2] The necessity of a quantum mechanical description of gravity follows from the fact that one cannot consistently couple a classical system to a quantum one.[3] Although a quantum theory of gravity is needed in order to reconcile general relativity with the principles of quantum mechanics, difficulties arise when one attempts to apply the usual prescriptions of quantum field theory to the force of gravity.[4] From a technical point of view, the problem is that the theory one gets in this way is not renormalizable and therefore cannot be used to make meaningful physical predictions. As a result, theorists have taken up more radical approaches to the problem of quantum gravity, the most popular approaches being string theory and loop quantum gravity.[5] A recent development is the theory of causal fermion systems which gives quantum mechanics, general relativity, and quantum field theory as limiting cases.[6][7][8][9][10][11]

Strictly speaking, the aim of quantum gravity is only to describe the quantum behavior of the gravitational field and should not be confused with the objective of unifying all fundamental interactions into a single mathematical framework. While any substantial improvement into the present understanding of gravity would aid further work towards unification, study of quantum gravity is a field in its own right with various branches having different approaches to unification. Although some quantum gravity theories, such as string theory, try to unify gravity with the other fundamental forces, others, such as loop quantum gravity, make no such attempt; instead, they make an effort to quantize the gravitational field while it is kept separate from the other forces. A theory of quantum gravity that is also a grand unification of all known interactions is sometimes referred to as a theory of everything (TOE).

Figure: Diagram showing where quantum gravity sits in the hierarchy of physics theories.

Much of the difficulty in meshing these theories at all energy scales comes from the different assumptions that these theories make on how the universe works. Quantum field theory depends on particle fields embedded in the flat space-time of special relativity. General relativity models gravity as a curvature within space-time that changes as a gravitational mass moves. Historically, the most obvious way of combining the two (such as treating gravity as simply another particle field) ran quickly into what is known as the renormalization problem. In the old-fashioned understanding of renormalization, gravity particles would attract each other and adding together all of the interactions results in many infinite values which cannot easily be cancelled out mathematically to yield sensible, finite results. This is in contrast with quantum electrodynamics where, given that the series still do not converge, the interactions sometimes evaluate to infinite results, but those are few enough in number to be removable via renormalization.
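For orientation, the energy scale at which these non-renormalizable graviton interactions are expected to become strong is the Planck scale, obtained from the standard combination of ħ, c and G (quoted here for reference; the next section refers to it as the natural cutoff):

$$
E_{\mathrm{P}} = \sqrt{\frac{\hbar c^{5}}{G}} \approx 1.22\times 10^{19}\ \mathrm{GeV},
\qquad
\ell_{\mathrm{P}} = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6\times 10^{-35}\ \mathrm{m},
\qquad
t_{\mathrm{P}} = \frac{\ell_{\mathrm{P}}}{c} \approx 5.4\times 10^{-44}\ \mathrm{s}.
$$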
Effective field theories Quantum gravity can be treated as an effective field theory. Effective quantum field theories come with some high-energy cutoff, beyond which we do not expect that the theory provides a good description of nature. The "infinities" then become large but finite quantities depending on this finite cutoff scale, and correspond to processes that involve very high energies near the fundamental cutoff. These quantities can then be absorbed into an infinite collection of coupling constants, and at energies well below the fundamental cutoff of the theory, to any desired precision; only a finite number of these coupling constants need to be measured in order to make legitimate quantum-mechanical predictions. This same logic works just as well for the highly successful theory of low-energy pions as for quantum gravity. Indeed, the first quantum-mechanical corrections to graviton-scattering and Newton's law of gravitation have been explicitly computed[13] (although they are so infinitesimally small that we may never be able to measure them). In fact, gravity is in many ways a much better quantum field theory than the Standard Model, since it appears to be valid all the way up to its cutoff at the Planck scale. While confirming that quantum mechanics and gravity are indeed consistent at reasonable energies, it is clear that near or above the fundamental cutoff of our effective quantum theory of gravity (the cutoff is generally assumed to be of the order of the Planck scale), a new model of nature will be needed. Specifically, the problem of combining quantum mechanics and gravity becomes an issue only at very high energies, and may well require a totally new kind of model. Quantum gravity theory for the highest energy scales The general approach to deriving a quantum gravity theory that is valid at even the highest energy scales is to assume that such a theory will be simple and elegant and, accordingly, to study symmetries and other clues offered by current theories that might suggest ways to combine them into a comprehensive, unified theory. One problem with this approach is that it is unknown whether quantum gravity will actually conform to a simple and elegant theory, as it should resolve the dual conundrums of special relativity with regard to the uniformity of acceleration and gravity, and general relativity with regard to spacetime curvature. Such a theory is required in order to understand problems involving the combination of very high energy and very small dimensions of space, such as the behavior of black holes, and the origin of the universe. Quantum mechanics and general relativity Gravity Probe B (GP-B) has measured spacetime curvature near Earth to test related models in application of Einstein's general theory of relativity. The graviton At present, one of the deepest problems in theoretical physics is harmonizing the theory of general relativity, which describes gravitation, and applications to large-scale structures (stars, planets, galaxies), with quantum mechanics, which describes the other three fundamental forces acting on the atomic scale. This problem must be put in the proper context, however. 
In particular, contrary to the popular claim that quantum mechanics and general relativity are fundamentally incompatible, one can demonstrate that the structure of general relativity essentially follows inevitably from the quantum mechanics of interacting theoretical spin-2 massless particles (called gravitons).[14][15][16][17][18] While there is no concrete proof of the existence of gravitons, quantized theories of matter may necessitate their existence.[citation needed] Supporting this theory is the observation that all fundamental forces except gravity have one or more known messenger particles, leading researchers to believe that at least one most likely does exist; they have dubbed this hypothetical particle the graviton. The predicted find would result in the classification of the graviton as a "force particle" similar to the photon of the electromagnetic field. Many of the accepted notions of a unified theory of physics since the 1970s assume, and to some degree depend upon, the existence of the graviton. These include string theory, superstring theory, M-theory, and loop quantum gravity. Detection of gravitons is thus vital to the validation of various lines of research to unify quantum mechanics and relativity theory. The dilaton The dilaton made its first appearance in Kaluza–Klein theory, a five-dimensional theory that combined gravitation and electromagnetism. Generally, it appears in string theory. More recently, however, it's become central to the lower-dimensional many-bodied gravity problem[19] based on the field theoretic approach of Roman Jackiw. The impetus arose from the fact that complete analytical solutions for the metric of a covariant N-body system have proven elusive in general relativity. To simplify the problem, the number of dimensions was lowered to (1+1), i.e., one spatial dimension and one temporal dimension. This model problem, known as R=T theory[20] (as opposed to the general G=T theory) was amenable to exact solutions in terms of a generalization of the Lambert W function. It was also found that the field equation governing the dilaton (derived from differential geometry) was the Schrödinger equation and consequently amenable to quantization.[21] Thus, one had a theory which combined gravity, quantization, and even the electromagnetic interaction, promising ingredients of a fundamental physical theory. It is worth noting that this outcome revealed a previously unknown and already existing natural link between general relativity and quantum mechanics. However, this theory lacks generalization to the (2+1) or (3+1) dimensions. In principle, the field equations are amenable to such generalization (as shown with the inclusion of a one-graviton process[22]) and yield the correct Newtonian limit in d dimensions but only if a dilaton is included. Furthermore, it is not yet clear what the fully generalized field equation governing the dilaton in (3+1) dimensions should be. The fact that gravitons can propagate in (3+1) dimensions implies that gravitons and dilatons do exist in the real world. Nonetheless, detection of the dilaton is expected to be even more elusive than the graviton. But since this simplified approach combines gravitational, electromagnetic and quantum effects, their coupling could potentially lead to a means of vindicating the theory, through cosmology and even, perhaps, experimentally. 
Nonrenormalizability of gravity

On the other hand, in quantizing gravity there are infinitely many independent parameters (counterterm coefficients) needed to define the theory. For a given choice of those parameters, one could make sense of the theory, but since it is impossible to conduct infinitely many experiments to fix the values of every parameter, we do not have a meaningful physical theory. Treating quantum gravity as an effective field theory offers one way around this problem (see below). Otherwise, a meaningful theory of quantum gravity (one that makes sense and is predictive at all energy scales) must rest on some deep principle that reduces the infinitely many unknown parameters to a finite number that can then be measured:

• One possibility is that normal perturbation theory is not a reliable guide to the renormalizability of the theory, and that there really is a UV fixed point for gravity. Since this is a question of non-perturbative quantum field theory, it is difficult to find a reliable answer, but some people still pursue this option.

• Another possibility is that there are new, as yet undiscovered symmetry principles that constrain the parameters and reduce them to a finite set. This is the route taken by string theory, where all of the excitations of the string essentially manifest themselves as new symmetries.

QG as an effective field theory

In an effective field theory, all but the first few of the infinite set of parameters in a non-renormalizable theory are suppressed by huge energy scales and hence can be neglected when computing low-energy effects. Thus, at least in the low-energy regime, the model is indeed a predictive quantum field theory.[13] (A very similar situation occurs for the closely analogous effective field theory of low-energy pions.) Furthermore, many theorists agree that even the Standard Model should really be regarded as an effective field theory as well, with "nonrenormalizable" interactions suppressed by large energy scales and whose effects have consequently not been observed experimentally.

Spacetime background dependence

A fundamental lesson of general relativity is that there is no fixed spacetime background, as found in Newtonian mechanics and special relativity; the spacetime geometry is dynamic. While easy to grasp in principle, this is the hardest idea to understand about general relativity, and its consequences are profound and not fully explored, even at the classical level. To a certain extent, general relativity can be seen to be a relational theory,[25] in which the only physically relevant information is the relationship between different events in space-time.

Semi-classical quantum gravity

Phenomena such as the Unruh effect, in which particles exist in certain accelerating frames but not in stationary ones, do not pose any difficulty when quantum field theory is formulated on a fixed (possibly curved) background; the Unruh effect occurs even in a flat Minkowski background. The vacuum state is the state with the least energy (and may or may not contain particles). See Quantum field theory in curved spacetime for a more complete discussion.

Points of tension

There are other points of tension between quantum mechanics and general relativity.

• First, classical general relativity breaks down at singularities, and quantum mechanics becomes inconsistent with general relativity in the neighborhood of singularities (however, no one is certain that classical general relativity applies near singularities in the first place).
• Second, it is not clear how to determine the gravitational field of a particle, since under the Heisenberg uncertainty principle of quantum mechanics its location and velocity cannot be known with certainty. The resolution of these points may come from a better understanding of general relativity.[26]

• Third, there is the problem of time in quantum gravity. Time has a different meaning in quantum mechanics and general relativity and hence there are subtle issues to resolve when trying to formulate a theory which combines the two.[27]

Candidate theories

There are a number of proposed quantum gravity theories.[28] Currently, there is still no complete and consistent quantum theory of gravity, and the candidate models still need to overcome major formal and conceptual problems. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests, although there is hope for this to change as future data from cosmological observations and particle physics experiments becomes available.[29][30]

String theory

One suggested starting point is ordinary quantum field theories which, after all, are successful in describing the other three basic fundamental forces in the context of the standard model of elementary particle physics. However, while this leads to an acceptable effective (quantum) field theory of gravity at low energies,[31] gravity turns out to be much more problematic at higher energies. For ordinary field theories such as quantum electrodynamics, a technique known as renormalization is an integral part of deriving predictions which take into account higher-energy contributions,[32] but gravity turns out to be nonrenormalizable: at high energies, applying the recipes of ordinary quantum field theory yields models that are devoid of all predictive power.[33] One attempt to overcome these limitations is to replace ordinary quantum field theory, which is based on the classical concept of a point particle, with a quantum theory of one-dimensional extended objects: string theory.[34] At the energies reached in current experiments, these strings are indistinguishable from point-like particles, but, crucially, different modes of oscillation of one and the same type of fundamental string appear as particles with different (electric and other) charges. In this way, string theory promises to be a unified description of all particles and interactions.[35] The theory is successful in that one mode will always correspond to a graviton, the messenger particle of gravity; however, the price of this success is unusual features such as six extra dimensions of space in addition to the usual three for space and one for time.[36] In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity[37] form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity.[38][39] As presently understood, however, string theory admits a very large number (10^500 by some estimates) of consistent vacua, comprising the so-called "string landscape". Sorting through this large family of solutions remains a major challenge.
Loop quantum gravity

[Figure: a simple spin network of the type used in loop quantum gravity.]

The theory is based on the reformulation of general relativity known as Ashtekar variables, which represent geometric gravity using mathematical analogues of electric and magnetic fields.[40][41] In the quantum theory, space is represented by a network structure called a spin network, evolving over time in discrete steps.[42][43][44][45]

Other approaches

There are a number of other approaches to quantum gravity. The approaches differ depending on which features of general relativity and quantum theory are accepted unchanged, and which features are modified.[47][48] Examples include:

Weinberg–Witten theorem

In quantum field theory, the Weinberg–Witten theorem places some constraints on theories of composite gravity/emergent gravity. However, recent developments attempt to show that if locality is only approximate and the holographic principle is correct, the Weinberg–Witten theorem would not be valid[citation needed].

Experimental tests

The most widely pursued possibilities for quantum gravity phenomenology include violations of Lorentz invariance, imprints of quantum gravitational effects in the cosmic microwave background (in particular its polarization), and decoherence induced by fluctuations in the space-time foam. The BICEP2 experiment detected what was initially thought to be primordial B-mode polarization caused by gravitational waves in the early universe. If truly primordial, these waves were born as quantum fluctuations in gravity itself. Cosmologist Ken Olum (Tufts University) stated: "I think this is the only observational evidence that we have that actually shows that gravity is quantized....It's probably the only evidence of this that we will ever have."[58]

See also

References

1. Rovelli, Carlo. "Quantum gravity – Scholarpedia". www.scholarpedia.org. Retrieved 2016-01-09.
2. Griffiths, David J. (2004). Introduction to Quantum Mechanics. Pearson Prentice Hall. OCLC 803860989.
3. Wald, Robert M. (1984). General Relativity. University of Chicago Press. p. 382. OCLC 471881415.
4. Zee, Anthony (2010). Quantum Field Theory in a Nutshell (2nd ed.). Princeton University Press. p. 172. OCLC 659549695.
5. Penrose, Roger (2007). The Road to Reality: A Complete Guide to the Laws of the Universe. Vintage. p. 1017. OCLC 716437154.
6. F. Finster, J. Kleiner, Causal fermion systems as a candidate for a unified physical theory, arXiv:1502.03587 [math-ph] (2015).
7. F. Finster, The Principle of the Fermionic Projector, hep-th/0001048, hep-th/0202059, hep-th/0210121, AMS/IP Studies in Advanced Mathematics, vol. 35, American Mathematical Society, Providence, RI, 2006.
8. F. Finster, A formulation of quantum field theory realizing a sea of interacting Dirac particles, arXiv:0911.2102 [hep-th], Lett. Math. Phys. 97 (2011), no. 2, 165–183.
9. F. Finster, An action principle for an interacting fermion system and its analysis in the continuum limit, arXiv:0908.1542 [math-ph] (2009).
10. F. Finster, The continuum limit of a fermion system involving neutrinos: Weak and gravitational interactions, arXiv:1211.3351 [math-ph] (2012).
11. F. Finster, Perturbative quantum field theory in the framework of the fermionic projector, arXiv:1310.4121 [math-ph], J. Math. Phys. 55 (2014), no. 4, 042301.
13. Donoghue (1995). "Introduction to the Effective Field Theory Description of Gravity". arXiv:gr-qc/9512024. (verify against ISBN 9789810229085)
14. Kraichnan, R. H. (1955). "Special-Relativistic Derivation of Generally Covariant Gravitation Theory". Physical Review 98 (4): 1118–1122. Bibcode:1955PhRv...98.1118K. doi:10.1103/PhysRev.98.1118.
15. Gupta, S. N. (1954). "Gravitation and Electromagnetism". Physical Review 96 (6): 1683–1685. Bibcode:1954PhRv...96.1683G. doi:10.1103/PhysRev.96.1683.
17. Gupta, S. N. (1962). "Quantum Theory of Gravitation". Recent Developments in General Relativity. Pergamon Press. pp. 251–258.
18. Deser, S. (1970). "Self-Interaction and Gauge Invariance". General Relativity and Gravitation 1: 9–18. arXiv:gr-qc/0411023. Bibcode:1970GReGr...1....9D. doi:10.1007/BF00759198.
19. Ohta, Tadayuki; Mann, Robert (1996). "Canonical reduction of two-dimensional gravity for particle dynamics". Classical and Quantum Gravity 13 (9): 2585–2602. arXiv:gr-qc/9605004. Bibcode:1996CQGra..13.2585O. doi:10.1088/0264-9381/13/9/022.
20. Sikkema, A. E.; Mann, R. B. (1991). "Gravitation and cosmology in (1+1) dimensions". Classical and Quantum Gravity 8: 219–235. Bibcode:1991CQGra...8..219S. doi:10.1088/0264-9381/8/1/022.
21. Farrugia; Mann; Scott (2007). "N-body Gravity and the Schroedinger Equation". Classical and Quantum Gravity 24 (18): 4647–4659. arXiv:gr-qc/0611144. Bibcode:2007CQGra..24.4647F. doi:10.1088/0264-9381/24/18/006.
22. Mann, R. B.; Ohta, T. (1997). "Exact solution for the metric and the motion of two bodies in (1+1)-dimensional gravity". Physical Review D 55 (8): 4723–4747. arXiv:gr-qc/9611008. Bibcode:1997PhRvD..55.4723M. doi:10.1103/PhysRevD.55.4723.
24. Hamber, H. W. (2009). Quantum Gravitation – The Feynman Path Integral Approach. Springer Publishing. ISBN 978-3-540-85292-6.
25. Smolin, Lee (2001). Three Roads to Quantum Gravity. Basic Books. pp. 20–25. ISBN 0-465-07835-4. Pages 220–226 are annotated references and guide for further reading.
26. Hunter Monroe (2005). "Singularity-Free Collapse through Local Inflation". arXiv:astro-ph/0506506.
27. Edward Anderson (2010). "The Problem of Time in Quantum Gravity". arXiv:1009.2157 [gr-qc]. (also published as chapter 4 of ISBN 9781611229578)
28. A timeline and overview can be found in Rovelli, Carlo (2000). "Notes for a brief history of quantum gravity". arXiv:gr-qc/0006061. (verify against ISBN 9789812777386)
31. Donoghue, John F. (1995). "Introduction to the Effective Field Theory Description of Gravity". In Cornet, Fernando (ed.). Effective Theories: Proceedings of the Advanced School, Almunecar, Spain, 26 June–1 July 1995. Singapore: World Scientific. arXiv:gr-qc/9512024. ISBN 981-02-2908-9.
32. Weinberg, Steven (1996). "Chapters 17–18". The Quantum Theory of Fields II: Modern Applications. Cambridge University Press. ISBN 0-521-55002-5.
33. Goroff, Marc H.; Sagnotti, Augusto (1985). "Quantum gravity at two loops". Physics Letters B 160: 81–86. Bibcode:1985PhLB..160...81G. doi:10.1016/0370-2693(85)91470-4.
34. An accessible introduction at the undergraduate level can be found in Zwiebach, Barton (2004). A First Course in String Theory. Cambridge University Press. ISBN 0-521-83143-1, and more complete overviews in Polchinski, Joseph (1998). String Theory Vol. I: An Introduction to the Bosonic String. Cambridge University Press. ISBN 0-521-63303-6, and Polchinski, Joseph (1998b). String Theory Vol. II: Superstring Theory and Beyond. Cambridge University Press. ISBN 0-521-63304-4.
35. Ibanez, L. E. (2000). "The second string (phenomenology) revolution". Classical & Quantum Gravity 17 (5): 1117–1128. arXiv:hep-ph/9911499. Bibcode:2000CQGra..17.1117I. doi:10.1088/0264-9381/17/5/321.
38. Townsend, Paul K. (1996). Four Lectures on M-Theory. ICTP Series in Theoretical Physics. p. 385. arXiv:hep-th/9612121. Bibcode:1997hepcbconf..385T.
40. Ashtekar, Abhay (1986). "New variables for classical and quantum gravity". Physical Review Letters 57 (18): 2244–2247. Bibcode:1986PhRvL..57.2244A. doi:10.1103/PhysRevLett.57.2244. PMID 10033673.
41. Ashtekar, Abhay (1987). "New Hamiltonian formulation of general relativity". Physical Review D 36 (6): 1587–1602. Bibcode:1987PhRvD..36.1587A. doi:10.1103/PhysRevD.36.1587.
42. Thiemann, Thomas (2006). "Loop Quantum Gravity: An Inside View". Approaches to Fundamental Physics. Lecture Notes in Physics 721: 185. arXiv:hep-th/0608210. Bibcode:2007LNP...721..185T. doi:10.1007/978-3-540-71117-9_10. ISBN 978-3-540-71115-5.
43. Rovelli, Carlo (1998). "Loop Quantum Gravity". Living Reviews in Relativity 1. Retrieved 2008-03-13.
44. Ashtekar, Abhay; Lewandowski, Jerzy (2004). "Background Independent Quantum Gravity: A Status Report". Classical & Quantum Gravity 21 (15): R53–R152. arXiv:gr-qc/0404018. Bibcode:2004CQGra..21R..53A. doi:10.1088/0264-9381/21/15/R01.
45. Thiemann, Thomas (2003). "Lectures on Loop Quantum Gravity". Lecture Notes in Physics 631: 41–135. arXiv:gr-qc/0210094. Bibcode:2003LNP...631...41T. doi:10.1007/978-3-540-45230-0_3. ISBN 978-3-540-40810-9.
46. Rovelli, Carlo (2004). Quantum Gravity. Cambridge University Press. ISBN 0521715962.
47. Isham, Christopher J. (1994). "Prima facie questions in quantum gravity". In Ehlers, Jürgen; Friedrich, Helmut (eds.). Canonical Gravity: From Classical to Quantum. Springer. arXiv:gr-qc/9310031. ISBN 3-540-58339-4.
49. Loll, Renate (1998). "Discrete Approaches to Quantum Gravity in Four Dimensions". Living Reviews in Relativity 1: 13. arXiv:gr-qc/9805049. Bibcode:1998LRR.....1...13L. doi:10.12942/lrr-1998-13. Retrieved 2008-03-09.
50. Sorkin, Rafael D. (2005). "Causal Sets: Discrete Gravity". In Gomberoff, Andres; Marolf, Donald (eds.). Lectures on Quantum Gravity. Springer. arXiv:gr-qc/0309009. ISBN 0-387-23995-2.
51. See Daniele Oriti and references therein.
52. Hawking, Stephen W. (1987). "Quantum cosmology". In Hawking, Stephen W.; Israel, Werner (eds.). 300 Years of Gravitation. Cambridge University Press. pp. 631–651. ISBN 0-521-37976-8.
53. Wen 2006
54. See ch. 33 in Penrose 2004 and references therein.
55. "Quantum Holonomy Theory by J. Aastrup and J. M. Grimstrup" (PDF).
56. Hossenfelder, Sabine (2011). "Experimental Search for Quantum Gravity". In V. R. Frignanni (ed.). Classical and Quantum Gravity: Theory, Analysis and Applications. Chapter 5. Nova Publishers. ISBN 978-1-61122-957-8.
57. Hossenfelder, Sabine (2010-10-17). V. R. Frignanni (ed.). "Experimental Search for Quantum Gravity". Classical and Quantum Gravity: Theory, Analysis and Applications 5 (2011). Nova Publishers. arXiv:1010.3420. Bibcode:2010arXiv1010.3420H.
58. Camille Carlisle. "First Direct Evidence of Big Bang Inflation". SkyandTelescope.com. Retrieved March 18, 2014.

Further reading
Schrödinger, Dirac – Quantum Wave Mechanics

"The fundamental idea of wave mechanics" was the title of Erwin Schrödinger's Nobel Lecture in 1933. The lecture explains his ideas on wave mechanics in an easily read manner and provides an excellent introduction to this subject.(45) The Schrödinger equations provide a sound model for multi-electron atoms that agrees extremely well with measured results. The theory specifies the laws of wave motion that the particles of any microscopic system obey. This is done by specifying, for each system, the equation that controls the behaviour of the wave function, and also by specifying the connection between the behaviour of the wave function and the behaviour of the particle. The theory is an extension of the de Broglie postulate and, furthermore, there is a close relation between it and Newton's theory of the motion of particles in macroscopic systems. A comparison can also be made between the Schrödinger theory and Maxwell's theory of electromagnetism, because electromagnetic waves behave in a manner which is very analogous to the behaviour of the wave functions of the Schrödinger theory.

The Schrödinger equation tells us the form of the wave function ψ(x, t) if we tell it about the force acting on the associated particle (of mass m) by specifying the potential energy corresponding to the force. In other words, the wave function is a solution to the Schrödinger equation for that potential energy. Schrödinger developed his wave equation by using the de Broglie–Einstein postulates λ = h/p and ν = E/h (h = Planck's constant), which connect the wavelength λ of the wave function with the linear momentum p (= mv) of the associated particle, and the frequency ν of the wave function with the total energy E of the particle, for essentially constant p and E. A further postulate of the wave equation relates the total energy E of a particle to its kinetic energy KE and potential energy V, where E = p²/2m + V, or E = KE + V. He used the required consistency with these postulates in his search for an argument designed to make the quantum mechanical wave equation seem very plausible, but it must be emphasised that this plausibility argument did not constitute a derivation. In the final analysis, the quantum mechanical wave equation was obtained by a postulate, whose justification is not that it has been deduced entirely from information already known experimentally, but that it correctly predicts results which can be verified experimentally.

This description of Schrödinger's quantum theory is important to the Theory of Physics in 5 Dimensions because the form of the wave equation used fits more comfortably with the expressions of Physics in 5 Dimensions than the expressions of classical physics. With v4 defined as the velocity vector of a body as viewed by an observer in classical physics, two key aspects of Physics in 5 Dimensions are that:

1. All objects and particles have a constant mass m (not varying with v4) and a constant energy E = m c², and
2. All bodies have three velocity vectors, namely v4, v5 and c (the speed of light), with the scalar relationship c² = v4² + v5².

It follows(46) that all changes of energy can only be an exchange between kinetic energy K = m v4 c and potential energy V = m v5 c, where E² = K² + V². While the Schrödinger equation E = p²/2m + V is non-relativistic, it can be replaced by the equivalent relativistic Physics in 5 Dimensions term K5, where E = K5 + V = m v4²/(1 + v5/c) + V, which is also shown(47) to be consistent with E² = K² + V².
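These relations can be cross-checked directly from the quantities just defined (this is only a consistency sketch using the book's own symbols, not an additional claim). Squaring and adding the two energies gives

$$K^2 + V^2 = m^2 c^2 \left(v_4^2 + v_5^2\right) = m^2 c^2 \cdot c^2 = \left(m c^2\right)^2 = E^2 ,$$

and, for the non-relativistic limit invoked in the next paragraph, when $v_4 \ll c$ one has $v_5 = \sqrt{c^2 - v_4^2} \approx c$, so that

$$K_5 = \frac{m v_4^2}{1 + v_5/c} \;\approx\; \frac{m v_4^2}{2} = \frac{p^2}{2m}.$$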
When v4 << c, then v5 ≈ c and we return to the Schrödinger equation, where E = K5 + V ≈ p²/2m + V.

In 1928 Dirac developed a relativistic theory of quantum mechanics utilising essentially the same postulates as the Schrödinger theory, but incorporating a relativistic element. Dirac's expression included two new variables which give rise to the spin of the electron. Dirac commented in his Nobel Lecture:(48) "The variables also give rise to some unexpected phenomena concerning the motion of the electron. These have been fully worked out by Schrödinger. It is found that an electron which seems to us to be moving slowly, must actually have a very high frequency oscillatory motion of small amplitude superimposed on the regular motion that appears to us. As a result of this oscillatory motion, the velocity of the electron at any time equals the velocity of light. This is a prediction which cannot be directly verified by experiment, since the frequency of the oscillatory motion is so high and its amplitude so small. But one must believe in this consequence of the theory, since other consequences of the theory which are inseparably bound up with this one, such as the law of scattering of light by an electron, are confirmed by experiment."

This comment by Dirac links particles (in this case electrons) moving with the velocity of light c with a quantum theory, and supports the hypothesis of Physics in 5 Dimensions that all matter has a path with the velocity of light.

(45) Schrödinger – Nobel Lecture – The fundamental idea of wave mechanics – December 12, 1933
(46) See pages 42–46 – Physics in 5 Dimensions, ISBN: 978-3-96014-233-1
(47) See pages 47–48 – Physics in 5 Dimensions, ISBN as above
(48) Schrödinger & Dirac – Nobel Lecture – Theory of electrons and positrons – 1933

The book by Alan Clark, Physics in 5 Dimensions, is also available as a PDF file to members of ResearchGate.
Alexander Fufaev – Quantum Mechanics Videos

Here you will find 4 free quantum mechanics video lectures.

Level 2 (without higher mathematics)

Photoelectric Effect and The Einstein Formula Simply Explained

Here you will learn what the photoelectric effect is, how it can be described with the Einstein formula + everything you need to know about it.

Content of the video
1. 00:00 What is the photoelectric effect?
2. 00:07 Experiment setup
3. 00:46 Photons
4. 01:58 Work function and threshold frequency
5. 03:48 Convert electron volts to joules and vice versa
6. 04:19 Kinetic energy of an electron
7. 04:50 Einstein formula
8. 05:36 Stopping (braking) voltage
9. 07:20 Energy frequency graph

Level 3 (with higher mathematics)

Hermitian Operators Simply Explained

In this physics video you will learn what Hermitian operators and matrices are and what important properties they have.

Content of the video
1. 00:00 Motivation: Real mean values
2. 01:16 What are Hermitian operators?
3. 02:16 What is a self-adjoint operator?
4. 02:56 Notation of a Hermitian operator
5. 03:20 Eigenvalues are real
6. 04:27 Eigenvectors are orthogonal
7. 05:56 Eigenvectors form a basis
8. 06:37 What is a Hermite matrix?
9. 07:19 Examples for (non) Hermitian matrices

Level 2 (without higher mathematics)

De Broglie Wavelength And The Wave-Like Matter

Here you will learn about the de Broglie wavelength (matter wavelength), its derivation, an example and how the de Broglie wavelength can be expressed with voltage.

Content of the video
1. [00:00] Wave-particle duality
2. [00:46] Derivation of the de Broglie wavelength
3. [02:40] Example
4. [03:35] De Broglie wavelength using voltage

Level 3 (with higher mathematics)

Schrödinger Equation explained extensively in 50 minutes!

In this quantum mechanics lecture you will learn the Schrödinger equation (1d and 3d, time-independent and time-dependent) within 45 minutes.

Content of the video
1. [00:10] What is a partial second-order DEQ?
2. [01:08] Classical Mechanics vs. Quantum Mechanics
3. [04:38] Applications
4. [05:24] Derivation of the time-independent Schrödinger equation (1d)
5. [17:24] Squared magnitude, probability and normalization
6. [25:37] Wave function in classically allowed and forbidden regions
7. [35:44] Time-independent Schrödinger equation (3d) and Hamilton operator
8. [38:29] Time-dependent Schrödinger equation (1d and 3d)
9. [41:29] Separation of variables and stationary states
I am reading Introduction to Quantum Mechanics by David Griffiths and I am in Ch2 page 59. He starts out writing the time dependent Schrödinger equation and the solution for $\psi(x,t)$ for the free particle, $$\psi(x,t) = A e^{ik(x-(\hbar k/2m)t)} + B e^{-ik(x + (\hbar k/2m)t)}$$ Then he goes and says the following:

"Now, any function of $x$ and $t$ that depends on these variables in the special combination $x \pm vt$ (for some constant $v$) represents a wave of fixed profile, traveling in the $\pm x$-direction, at speed $v$."

What does this sentence mean?

Comments:
– Did you try plotting a representative function like this? (anna v, May 19 '11)
– I'm trying to plot it on maple right now. I don't know what to specify the energies as, cause k = sqrt(2mE)/h_bar. Griffiths goes on and says that this wave function is NOT normalizable! So I'm confused. (QEntanglement, May 19 '11)
– Another question that came shortly after this one explores the same math. (dmckee)

Accepted answer:

It means there are many possible shapes for waves, not just pure sine waves. For example, $$\psi(x,t) = A\textrm{e}^{-k^2(x-vt)^2}$$ is a possible wavefunction. It represents a Gaussian wave packet that travels down the x-axis in the positive direction at speed $v$. The important part is that you can make the substitution $u = x-vt$ into $\psi$ and get a function of a single variable $u$. So, start with any function $f$ of a single variable $u$. Now make the substitution $u = x - vt$. $f$ has now become a wave that travels down the x-axis at speed $v$ with some funky shape. The mathematically-important thing is that such functions can be represented as a superposition of sinusoidals of continuously-varying frequencies all traveling in tandem down the x-axis (by "traveling" I mean "have phase velocity"). The sinusoidals that go with a given $f$ are found through Fourier analysis. This is important because the sinusoidals are the eigenfunctions of the Hamiltonian for a free particle.
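A quick way to see the "fixed profile" statement concretely is to evaluate any profile f at the shifted argument x − vt and watch the peak move. The sketch below (plain numpy, with an arbitrary speed and the Gaussian profile from the answer) is only an illustration, not part of Griffiths' text:

```python
import numpy as np

# Any profile f(u), evaluated at u = x - v*t, is a fixed shape that slides
# to the right with speed v.  Here f is a Gaussian wave packet.
v = 2.0                          # propagation speed (arbitrary choice)
f = lambda u: np.exp(-u**2)      # fixed profile

x = np.linspace(-10.0, 10.0, 2001)
for t in (0.0, 1.0, 3.0):
    peak = x[np.argmax(f(x - v * t))]
    print(f"t = {t}:  peak of the profile is at x = {peak:.2f}  (v*t = {v*t:.2f})")
```

The printed peak positions track v·t exactly, which is all the quoted sentence is claiming: the shape does not change, it is merely translated.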
Short strong hydrogen bonds studied by inelastic neutron scattering and computational methods Date of Award Degree Type Degree Name Doctor of Philosophy (PhD) Bruce S. Hudson Hydrogen bonds, Neutron scattering, Bond energies, Potassium hydrogen bistrifluoroacetate Subject Categories Chemistry | Physical Sciences and Mathematics Hydrogen bonding can be simply defined as any interaction between molecules that involve the participation of hydrogen, or stated in another fashion: "a hydrogen bond exists when a hydrogen atom H is bonded to more than one other atom". Within this simple definition is a diverse range of interactions that is difficult to explain utilizing conventional ionic and covalent bonding. Hydrogen bonds that exhibit large bond energies are termed short, strong hydrogen bonds and possess unique potential surfaces. These potential surfaces are termed low barrier because the energy barrier that separates the potential wells is lower or equal to the zero point level. Little or no energy is required to move the hydrogen from one well to the other. This potential surface requires special attention in calculating quantum mechanical solutions for these systems due to the quartic shape of these short-strong hydrogen bond potential surfaces. In this thesis I have examined several short, strong hydrogen bonded systems using inelastic neutron scattering and have calculated the corresponding neutron vibrational spectra. I have also made a detailed investigation into the potential surface of the strong hydrogen bond in potassium hydrogen bistrifluoroacetate. Results have shown that it is possible to calculate accurate structural coordinates and vibrational spectra that agree with the experimental. The calculations give an incorrect energy minimization resulting in incorrect vibrational band placement in the inelastic neutron spectrum from the use of a harmonic fit of an anharmonic potential surface. The anharmonic potential surface resultant from the barrier between double wells positioned below the zero point level is calculated for the hydrogen bond. This can be correctly modeled by calculating the potential surface using a fixed OO distance, and solving the Schrödinger equation along this potential. This is the first comparison of neutron vibrational spectra and calculated spectra to provide an understanding of the limitations of computational methods to examine strong hydrogen bonds. This is a new and powerful tool to accurately examine the strength and structure of strong hydrogen bonds.
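The last computational step described above – fixing the O–O distance and solving the Schrödinger equation along the resulting one-dimensional potential – can be sketched numerically with a simple finite-difference diagonalization. The snippet below is only an illustration of that procedure for a generic low-barrier double well; the mass is that of a proton, but the well separation and barrier scale are invented for the example and are not the fitted surface for potassium hydrogen bistrifluoroacetate.

```python
import numpy as np

# Illustrative 1D model: a proton in a low-barrier double-well potential,
# solved by diagonalizing a finite-difference Hamiltonian on a grid.
hbar = 1.0545718e-34        # J s
m    = 1.6726219e-27        # proton mass, kg

a  = 0.3e-10                # half-distance between the two minima, m (assumed)
V0 = 1.0e-20                # barrier height at x = 0, J (assumed)
V  = lambda x: V0 * ((x / a)**2 - 1.0)**2   # quartic double well

N  = 1500
x  = np.linspace(-4 * a, 4 * a, N)
dx = x[1] - x[0]

# Kinetic term from the standard three-point second-derivative stencil
diag = hbar**2 / (m * dx**2) + V(x)
off  = -hbar**2 / (2 * m * dx**2) * np.ones(N - 1)
H    = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E0, E1 = np.linalg.eigvalsh(H)[:2]
print("barrier height      :", V0, "J")
print("two lowest levels   :", E0, E1, "J")
print("tunneling splitting :", E1 - E0, "J")
```

With parameters like these the zero-point energy is comparable to the barrier, which is exactly the low-barrier regime discussed in the abstract: the vibrational levels cannot be obtained from a harmonic fit, and the anharmonic potential has to be treated explicitly.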
An instanton[1] (or pseudoparticle[2][3]) is a notion appearing in theoretical and mathematical physics. An instanton is a classical solution to equations of motion[note 1] with a finite, non-zero action, either in quantum mechanics or in quantum field theory. More precisely, it is a solution to the equations of motion of the classical field theory on a Euclidean spacetime.

Quantum theory

In such quantum theories, solutions to the equations of motion may be thought of as critical points of the action. The critical points of the action may be local maxima of the action, local minima, or saddle points. Instantons are important in quantum field theory because:

• they appear in the path integral as the leading quantum corrections to the classical behavior of a system, and
• they can be used to study the tunneling behavior in various systems such as a Yang–Mills theory.

Mathematically, a Yang–Mills instanton is a self-dual or anti-self-dual connection in a principal bundle over a four-dimensional Riemannian manifold that plays the role of physical space-time in non-abelian gauge theory. Instantons are topologically nontrivial solutions of Yang–Mills equations that absolutely minimize the energy functional within their topological type. The first such solutions were discovered in the case of four-dimensional Euclidean space compactified to the four-dimensional sphere, and turned out to be localized in space-time, prompting the names pseudoparticle and instanton. Yang–Mills instantons have been explicitly constructed in many cases by means of twistor theory, which relates them to algebraic vector bundles on algebraic surfaces, and via the ADHM construction, or hyperkähler reduction (see hyperkähler manifold), a sophisticated linear algebra procedure. The groundbreaking work of Simon Donaldson, for which he was later awarded the Fields medal, used the moduli space of instantons over a given four-dimensional differentiable manifold as a new invariant of the manifold that depends on its differentiable structure and applied it to the construction of homeomorphic but not diffeomorphic four-manifolds. Many methods developed in studying instantons have also been applied to monopoles.

Quantum mechanics

An instanton can be used to calculate the transition probability for a quantum mechanical particle tunneling through a potential barrier. One of the simplest examples of a system with an instanton effect is a particle in a double-well potential. In contrast to a classical particle, there is a non-vanishing probability that it crosses a region of potential energy higher than its own energy. One way to calculate this probability is by means of the semi-classical WKB approximation, which requires the value of $\hbar$ to be small. The Schrödinger equation for the particle reads

$$-\frac{\hbar^2}{2m}\,\frac{d^2\psi}{dx^2}(x) + V(x)\,\psi(x) = E\,\psi(x).$$

If the potential were constant, the solution would (up to proportionality) be a plane wave,

$$\psi = \exp(-\mathrm{i}kx), \qquad k = \frac{\sqrt{2m(E-V)}}{\hbar}.$$

This means that if the energy of the particle is smaller than the potential energy, $k$ becomes imaginary and one obtains an exponentially decreasing function. The associated tunneling amplitude is proportional to

$$e^{-\frac{1}{\hbar}\int_a^b\sqrt{2m(V(x)-E)} \, dx},$$

where a and b are the beginning and endpoint of the tunneling trajectory. Alternatively, the use of path integrals allows an instanton interpretation and the same result can be obtained with this approach.
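The tunneling-amplitude formula just quoted is easy to evaluate numerically. The sketch below does this for an illustrative quartic double-well barrier in dimensionless units (ħ = m = 1); both the barrier shape and the chosen energy are assumptions made purely for the example, before the same physics is rederived from the Euclidean path integral in the next paragraphs.

```python
import numpy as np

# WKB exponent (1/hbar) * Integral sqrt(2 m (V - E)) dx between the inner
# turning points of a double-well barrier, in dimensionless units.
hbar, m = 1.0, 1.0
V = lambda x: (x**2 - 1.0)**2        # wells at x = +-1, barrier of height 1 at x = 0
E = 0.2                              # particle energy, below the barrier top

xt = np.sqrt(1.0 - np.sqrt(E))       # inner turning points, where V(x) = E
x  = np.linspace(-xt, xt, 20001)
dx = x[1] - x[0]
integrand = np.sqrt(np.maximum(2.0 * m * (V(x) - E), 0.0))
exponent  = np.sum(integrand) * dx / hbar

print("WKB exponent     :", exponent)
print("tunneling factor ~", np.exp(-exponent))
```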
In path integral formulation, the transition amplitude can be expressed as

$$K(a,b;t)=\langle x=a|e^{-\frac{i\mathbb{H}t}{\hbar}}|x=b\rangle =\int d[x(t)]\,e^{\frac{iS[x(t)]}{\hbar}}.$$

Following the process of Wick rotation (analytic continuation) to Euclidean spacetime ($it\rightarrow \tau$), one gets

$$K_E(a,b;\tau)=\langle x=a|e^{-\frac{\mathbb{H}\tau}{\hbar}}|x=b\rangle =\int d[x(\tau)]\,e^{-\frac{S_E[x(\tau)]}{\hbar}},$$

with the Euclidean action

$$S_E=\int_{\tau_a}^{\tau_b}\left(\frac{1}{2}m\left(\frac{dx}{d\tau}\right)^2+V(x)\right) d\tau.$$

The potential energy changes sign $V(x) \rightarrow - V(x)$ under the Wick rotation and the minima transform into maxima, thereby $V(x)$ exhibits two "hills" of maximal energy. Results obtained from the mathematically well-defined Euclidean path integral may be Wick-rotated back and give the same physical results as would be obtained by appropriate treatment of the (potentially divergent) Minkowskian path integral. As can be seen from this example, calculating the transition probability for the particle to tunnel through a classically forbidden region ($V(x)$) with the Minkowskian path integral corresponds to calculating the transition probability to tunnel through a classically allowed region (with potential $-V(x)$) in the Euclidean path integral (pictorially speaking – in the Euclidean picture – this transition corresponds to a particle rolling from one hill of a double-well potential standing on its head to the other hill). This classical solution of the Euclidean equations of motion is often named "kink solution" and is an example of an instanton. In this example, the two "vacua" of the double-well potential turn into hills in the Euclideanized version of the problem. Thus, the instanton field solution of the (1 + 1)-dimensional field theory (first quantized quantum mechanical system) can be interpreted as a tunneling effect between the two vacua of the physical Minkowskian system. Note that a naive perturbation theory around one of those two vacua would never show this non-perturbative tunneling effect, dramatically changing the picture of the vacuum structure of this quantum mechanical system.

Quantum field theory

[Figure: stereographic projection of the hypersphere $S^3$ – parallels (red), meridians (blue) and hypermeridians (green).[note 2]]

In studying Quantum Field Theory (QFT), the vacuum structure of a theory may draw attention to instantons. Just as a double-well quantum mechanical system illustrates, a naive vacuum may not be the true vacuum of a field theory. Moreover, the true vacuum of a field theory may be an "overlap" of several topologically inequivalent sectors, so called "topological vacua". A well understood and illustrative example of an instanton and its interpretation can be found in the context of a QFT with a non-abelian gauge group,[note 3] a Yang–Mills theory. For a Yang–Mills theory these inequivalent sectors can be (in an appropriate gauge) classified by the third homotopy group of SU(2) (whose group manifold is the 3-sphere $S^3$). A certain topological vacuum (a "sector" of the true vacuum) is labelled by a topological invariant, the Pontryagin index. As the third homotopy group of $S^3$ has been found to be the set of integers, there are infinitely many topologically inequivalent vacua, denoted by $|N\rangle$, where N is their corresponding Pontryagin index.
An instanton is a field configuration fulfilling the classical equations of motion in Euclidean spacetime, which is interpreted as a tunneling effect between these different topological vacua. It is again labelled by an integer number, its Pontryagin index, Q. One can imagine an instanton with index Q to quantify tunneling between topological vacua $|N\rangle$ and $|N+Q\rangle$. If Q = 1, the configuration is named BPST instanton after its discoverers Alexander Belavin, Alexander Polyakov, Albert S. Schwartz and Yu. S. Tyupkin. The true vacuum of the theory is labelled by an "angle" theta and is an overlap of the topological sectors:

$$|\theta\rangle =\sum_{N=-\infty}^{N=+\infty}e^{i \theta N}|N\rangle.$$

Gerard 't Hooft first performed the field theoretic computation of the effects of the BPST instanton in a theory coupled to fermions in [1]. He showed that zero modes of the Dirac equation in the instanton background lead to a non-perturbative multi-fermion interaction in the low energy effective action.

Yang–Mills theory

The classical Yang–Mills action on a principal bundle with structure group G, base M, connection A, and curvature (Yang–Mills field tensor) F is

$$S_{YM} = \int_M \left|F\right|^2 d\mathrm{vol}_M,$$

where $d\mathrm{vol}_M$ is the volume form on M. If the inner product on $\mathfrak{g}$, the Lie algebra of G in which F takes values, is given by the Killing form on $\mathfrak{g}$, then this may be denoted as

$$\int_M \mathrm{Tr}(F \wedge *F),$$

since

$$F \wedge *F = \langle F, F \rangle d\mathrm{vol}_M.$$

For example, in the case of the gauge group U(1), F will be the electromagnetic field tensor. From the principle of stationary action, the Yang–Mills equations follow. They are

$$\mathrm{d}F = 0, \quad \mathrm{d}{*F} = 0.$$

The first of these is an identity, because $\mathrm{d}F = \mathrm{d}^2 A = 0$, but the second is a second-order partial differential equation for the connection A, and if the Minkowski current vector does not vanish, the zero on the right-hand side of the second equation is replaced by $\mathbf{J}$. But notice how similar these equations are; they differ by a Hodge star. Thus a solution to the simpler first order (non-linear) equation

$${*F} = \pm F$$

is automatically also a solution of the Yang–Mills equation. Such solutions usually exist, although their precise character depends on the dimension and topology of the base space M, the principal bundle P, and the gauge group G. In nonabelian Yang–Mills theories, $DF=0$ and $D{*F}=0$, where D is the exterior covariant derivative. Furthermore, the Bianchi identity

$$DF=dF+A\wedge F-F\wedge A=d(dA+A\wedge A)+A\wedge (dA+A\wedge A)-(dA + A\wedge A)\wedge A=0$$

is satisfied. In quantum field theory, an instanton is a topologically nontrivial field configuration in four-dimensional Euclidean space (considered as the Wick rotation of Minkowski spacetime). Specifically, it refers to a Yang–Mills gauge field A which approaches pure gauge at spatial infinity. This means the field strength vanishes at infinity. The name instanton derives from the fact that these fields are localized in space and (Euclidean) time – in other words, at a specific instant. The case of instantons on the two-dimensional space may be easier to visualise because it admits the simplest case of the gauge group, namely U(1), that is an abelian group. In this case the field A can be visualised as simply a vector field. An instanton is a configuration where, for example, the arrows point away from a central point (i.e., a "hedgehog" state). In four dimensions abelian instantons are impossible.
The field configuration of an instanton is very different from that of the vacuum. Because of this instantons cannot be studied by using Feynman diagrams, which only include perturbative effects. Instantons are fundamentally non-perturbative. The Yang–Mills energy is given by

$$\frac{1}{2}\int_{\mathbb{R}^4} \operatorname{Tr}[{*\mathbf{F}}\wedge \mathbf{F}],$$

where ∗ is the Hodge dual. If we insist that the solutions to the Yang–Mills equations have finite energy, then the curvature of the solution at infinity (taken as a limit) has to be zero. This means that the Chern–Simons invariant can be defined at the 3-space boundary. This is equivalent, via Stokes' theorem, to taking the integral

$$\int_{\mathbb{R}^4}\operatorname{Tr}[\mathbf{F}\wedge\mathbf{F}].$$

This is a homotopy invariant and it tells us which homotopy class the instanton belongs to. Since the integral of a nonnegative integrand is always nonnegative,

$$0 \;\leq\; \frac{1}{2}\int_{\mathbb{R}^4}\operatorname{Tr}\left[\left(\mathbf{F}+e^{i\theta}\,{*\mathbf{F}}\right)\wedge\left({*\mathbf{F}}+e^{-i\theta}\,\mathbf{F}\right)\right] \;=\;\int_{\mathbb{R}^4}\operatorname{Tr}[{*\mathbf{F}}\wedge\mathbf{F}+\cos\theta\, \mathbf{F}\wedge\mathbf{F}]$$

for all real θ. So, this means

$$\frac{1}{2}\int_{\mathbb{R}^4}\operatorname{Tr}[{*\mathbf{F}}\wedge\mathbf{F}] \;\geq\; \frac{1}{2}\left|\int_{\mathbb{R}^4}\operatorname{Tr}[\mathbf{F}\wedge\mathbf{F}]\right|.$$

If this bound is saturated, then the solution is a BPS state. For such states, either ∗F = F or ∗F = − F depending on the sign of the homotopy invariant. Instanton effects are important in understanding the formation of condensates in the vacuum of quantum chromodynamics (QCD) and in explaining the mass of the so-called 'eta-prime particle', a Goldstone boson[note 4] which has acquired mass through the axial current anomaly of QCD. Note that there is sometimes also a corresponding soliton in a theory with one additional space dimension. Recent research on instantons links them to topics such as D-branes and black holes and, of course, the vacuum structure of QCD. For example, in oriented string theories, a Dp-brane is a gauge theory instanton in the (p + 5)-dimensional U(N) worldvolume gauge theory on a stack of N D(p + 4)-branes.

Various numbers of dimensions

Instantons play a central role in the nonperturbative dynamics of gauge theories. The kind of physical excitation that yields an instanton depends on the number of dimensions of the spacetime, but, surprisingly, the formalism for dealing with these instantons is relatively dimension-independent. In 4-dimensional gauge theories, as described in the previous section, instantons are gauge bundles with a nontrivial four-form characteristic class. If the gauge symmetry is a unitary group or special unitary group then this characteristic class is the second Chern class, which vanishes in the case of the gauge group U(1). If the gauge symmetry is an orthogonal group then this class is the first Pontrjagin class. In 3-dimensional gauge theories with Higgs fields, 't Hooft–Polyakov monopoles play the role of instantons. In his 1977 paper Quark Confinement and Topology of Gauge Groups, Alexander Polyakov demonstrated that instanton effects in 3-dimensional QED coupled to a scalar field lead to a mass for the photon. In 2-dimensional abelian gauge theories worldsheet instantons are magnetic vortices. They are responsible for many nonperturbative effects in string theory, playing a central role in mirror symmetry. In 1-dimensional quantum mechanics, instantons describe tunneling, which is invisible in perturbation theory.

4d supersymmetric gauge theories

Supersymmetric gauge theories often obey nonrenormalization theorems, which restrict the kinds of quantum corrections which are allowed.
Many of these theorems only apply to corrections calculable in perturbation theory and so instantons, which are not seen in perturbation theory, provide the only corrections to these quantities. Field theoretic techniques for instanton calculations in supersymmetric theories were extensively studied in the 1980s by multiple authors. Because supersymmetry guarantees the cancellation of fermionic vs. bosonic non-zero modes in the instanton background, the involved 't Hooft computation of the instanton saddle point reduces to an integration over zero modes. In N = 1 supersymmetric gauge theories instantons can modify the superpotential, sometimes lifting all of the vacua. In 1984 Ian Affleck, Michael Dine and Nathan Seiberg calculated the instanton corrections to the superpotential in their paper Dynamical Supersymmetry Breaking in Supersymmetric QCD. More precisely, they were only able to perform the calculation when the theory contains one less flavor of chiral matter than the number of colors in the special unitary gauge group, because in the presence of fewer flavors an unbroken nonabelian gauge symmetry leads to an infrared divergence and in the case of more flavors the contribution is equal to zero. For this special choice of chiral matter, the vacuum expectation values of the matter scalar fields can be chosen to completely break the gauge symmetry at weak coupling, allowing a reliable semi-classical saddle point calculation to proceed. By then considering perturbations by various mass terms they were able to calculate the superpotential in the presence of arbitrary numbers of colors and flavors, valid even when the theory is no longer weakly coupled. In N = 2 supersymmetric gauge theories the superpotential receives no quantum corrections. However the correction to the metric of the moduli space of vacua from instantons was calculated in a series of papers. First, the one instanton correction was calculated by Nathan Seiberg in Supersymmetry and Nonperturbative beta Functions. The full set of corrections for SU(2) Yang–Mills theory was calculated by Nathan Seiberg and Edward Witten in Electric – magnetic duality, monopole condensation, and confinement in N=2 supersymmetric Yang–Mills theory, in the process creating a subject that is today known as Seiberg–Witten theory. They extended their calculation to SU(2) gauge theories with fundamental matter in Monopoles, duality and chiral symmetry breaking in N=2 supersymmetric QCD. These results were later extended for various gauge groups and matter contents, and the direct gauge theory derivation was also obtained in most cases. For gauge theories with gauge group U(N) the Seiberg-Witten geometry has been derived from gauge theory using Nekrasov partition functions in 2003 by Nikita Nekrasov and Andrei Okounkov and independently by Hiraku Nakajima and Kota Yoshioka. In N = 4 supersymmetric gauge theories the instantons do not lead to quantum corrections for the metric on the moduli space of vacua. See also[edit] References and notes[edit] 1. ^ Equations of motion are grouped under three main types of motion: translations, rotations, oscillations (or any combinations of these). 3. ^ See also: Non-abelian gauge theory 4. ^ See also:Pseudo-Goldstone boson 1. ^ Instantons in Gauge Theories. Edited by Mikhail A. Shifman. World Scientific, 1994. 2. ^ Interactions Between Charged Particles in a Magnetic Field. By Hrachya Nersisyan, Christian Toepffer, Günter Zwicknagel. Springer, Apr 19, 2007. Pg 23 3. 
^ Large-Order Behaviour of Perturbation Theory. Edited by J.C. Le Guillou, J. Zinn-Justin. Elsevier, Dec 2, 2012. Pg. 170. • Instantons in Gauge Theories, a compilation of articles on instantons, edited by Mikhail A. Shifman • Solitons and Instantons, R. Rajaraman (Amsterdam: North Holland, 1987), ISBN 0-444-87047-4 • The Uses of Instantons, by Sidney Coleman in Proc. Int. School of Subnuclear Physics, (Erice, 1977); and in Aspects of Symmetry p. 265, Sidney Coleman, Cambridge University Press, 1985, ISBN 0-521-31827-0; and in Instantons in Gauge Theories • Solitons, Instantons and Twistors. M. Dunajski, Oxford University Press. ISBN 978-0-19-857063-9. • The Geometry of Four-Manifolds, S.K. Donaldson, P.B. Kronheimer, Oxford University Press, 1990, ISBN 0-19-853553-8.
Schrödinger has never met Newton

Sabine Hossenfelder believes that Schrödinger meets Newton. But is the story about the two physicists' encounter true? Yes, these were just jokes. I don't think that Sabine Hossenfelder misunderstands the history in this way. Instead, what she completely misunderstands is the physics, especially quantum physics. She is in good company. Aside from the authors of some nonsensical papers she mentions, e.g. van Meter, Giulini and Großardt, Harrison and Moroz with Tod, Diósi, and Carlip with Salzman, similar basic misconceptions about elementary quantum mechanics have been promoted by Penrose and Hameroff. Hameroff is a physician who, along with Penrose, ascribed supernatural abilities to the gravitational field. It's responsible for the gravitationally induced "collapse of the wave function" which also gives us consciousness and may even be blamed for Penrose's (not to mention Hameroff's) complete inability to understand rudimentary quantum mechanics, among many other wonderful things; I am sure that many of you have read the Penrose-Hameroff crackpottery and a large percentage of those readers even fail to see why it is a crackpottery, a problem I will try to fix (and judging by the 85-year-long experience, I will fail). It's really Penrose who should be blamed for the concept known as the Schrödinger-Newton equations.

So what are the equations?

Sabine Hossenfelder reproduces them completely mindlessly and uncritically. They're supposed to be the symbiosis of quantum mechanics combined with the Newtonian limit of general relativity. They say:

\[ i\hbar \frac{\partial}{\partial t} \Psi(t,\vec x) = \left(-\frac{\hbar^2}{2m} \Delta + m\Phi(t,\vec x)\right) \Psi(t,\vec x) \]

\[ \Delta \Phi(t,\vec x) = 4\pi G m \,|\Psi(t,\vec x)|^2 \]

Don't get misled by the beautiful form they take in \(\rm\LaTeX\) implemented by MathJax; superficial beauty of the letters doesn't guarantee the validity. Sabine Hossenfelder and others immediately talk about mechanically inserting numbers into these equations, and so on, but they never ask a basic question: Are these equations actually right? Can we prove that they are wrong? And if they are right, can they be responsible for anything important that shapes our observations? Of course, the second one is completely wrong; it fundamentally misunderstands the basic concepts in physics. And even if you forgot the reasons why the second equation is completely wrong, they couldn't be responsible for anything important we observe – e.g. for well-defined perceptions after we measure something – because of the immense weakness of gravity (and because of other reasons).

Analyzing the equations one by one

So let us look at the equations, what they say, and whether they are the right equations describing the particular physical problems. We begin with the first one,

\[ i\hbar \frac{\partial}{\partial t} \Psi(t,\vec x) = \left(-\frac{\hbar^2}{2m} \Delta + m\Phi(t,\vec x)\right) \Psi(t,\vec x) \]

Is it right? Yes, it is a conventional time-dependent Schrödinger equation for a single particle that includes the gravitational potential. When the gravitational potential matters, it's important to include it in the Hamiltonian as well. The gravitational potential energy is of course as good a part of the energy (the Hamiltonian) as the kinetic energy, given by the spatial Laplacian term, and it should be included in the equations. In reality, we may of course neglect the gravitational potential in practice.
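Before looking at the relevant orders of magnitude in the next paragraph, here is a quick numerical check of just how weak gravity is between elementary particles. This is a standalone sketch using standard values of the constants; it is not taken from Hossenfelder's post or from the papers criticized here.

```python
# Ratio of the gravitational to the electrostatic force between two electrons.
# Both forces fall off as 1/r^2, so the ratio is independent of the separation.
G   = 6.674e-11        # m^3 kg^-1 s^-2
k   = 8.988e9          # Coulomb constant, N m^2 C^-2
m_e = 9.109e-31        # electron mass, kg
e   = 1.602e-19        # elementary charge, C

ratio = (G * m_e**2) / (k * e**2)
print(f"F_gravity / F_Coulomb = {ratio:.2e}")   # roughly 2.4e-43
```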
When we study the motion of a few elementary particles, their mutual gravitational attraction is negligible. For two electrons, the gravitational force is more than \(10^{40}\) times weaker than the electrostatic force. Clearly, we can't measure the transitions in a Hydrogen atom with the relative precision of \(10^{-40}\). The "gravitational Bohr radius" of an atom that is only held gravitationally would be comparably large to the visible Universe because the particles are very weakly bound, indeed. Of course, it makes no practical sense to talk about energy eigenstates that occupy similarly huge regions because well before the first revolution (a time scale), something will hit the particles so that they will never be in the hypothetical "weakly bound state" for a whole period. But even if you consider the gravity between a microscopic particle (which must be there for our equation to be relevant) such as a proton and the whole Earth, it's pretty much negligible. For example, the protons are running around the LHC collider and the Earth's gravitational pull is dragging them down, with the usual acceleration of \(g=9.8\,\,{\rm m}/{\rm s}^2\). However, there are so many forces that accelerate the protons much more strongly in various directions that the gravitational pull exerted by the Earth can't be measured. But yes, it's true that the LHC magnets and electric fields are also preventing the protons from "falling down". The protons circulate for minutes if not hours and as skydivers know, one may fall pretty far down during such a time. An exceptional experiment in which the Earth's gravity has a detectable impact on the quantum behavior of particles are the neutron interference experiments, those that may be used to prove that gravity cannot be an entropic force. To describe similar experiments, one really has to study the neutron's Schrödinger equation together with the kinetic term and the gravitational potential created by the Earth. Needless to say, much of the behavior is obvious. If you shoot neutrons through a pair of slits, of course that they will accelerate towards the Earth much like everything else so the interference pattern may be found again; it's just shifted down by the expected distance. People have also studied neutrons that are jumping on a trampoline. There is an infinite potential energy beneath the trampoline which shoots the neutrons up. And there's also the Earth's gravity that attracts them down. Moreover, neutrons are described by quantum mechanics which makes their energy eigenstates quantized. It's an interesting experiment that makes one sure that quantum mechanics does apply in all situations, even if the Earth's gravity plays a role as well, and that's where the Schrödinger equation with the gravitational potential may be verified. I want to say that while the one-particle Schrödinger equation written above is the right description for situations similar to the neutron interference experiments, it already betrays some misconceptions by the "Schrödinger meets Newton" folks. The fact that they write a one-particle equation is suspicious. The corresponding right description of many particles wouldn't contain wave functions that depend on the spacetime, \(\Psi(t,\vec x)\). Instead, the multi-particle wave function has to depend on positions of all the particles, e.g. \(\Psi(t,\vec x_1,\vec x_2)\). 
However, the Schrödinger equation above already suggests that the "Schrödinger meets Newton" folks want to treat the wave function as an object analogous to the gravitational potential, a classical field. This totally invalid interpretation of the objects becomes lethal in the second equation.

Confusing observables with their expectation values, mixing up probability waves with classical fields

The actual problem with the Schrödinger-Newton system of equations is the second equation, Poisson's equation for the gravitational potential,\[ \Delta \Phi(t,\vec x) = 4\pi G m \abs{\Psi(t,\vec x)}^2 \] Is this equation right under some circumstances? No, it is never right. It is a completely nonsensical equation which is nonlinear in the wave function \(\Psi\) – a fatal inconsistency – and which mixes apples with oranges. I will spend some time explaining these points. First, let me start with full quantum gravity. Quantum gravity contains some complicated enough quantum observables that may only be described by the full-fledged string/M-theory but in the low-energy approximation of an "effective field theory", it contains quantum fields including the metric tensor \(\hat g_{\mu\nu}\). I added a hat to emphasize that each component of the tensor field at each point is a linear operator (well, operator distribution) acting on the Hilbert space. I have already discussed the one-particle Schrödinger equation that dictates how the gravitational field influences the particles, at least in the non-relativistic, low-energy approximation. But we also want to know how the particles influence the gravitational field. That's given by Einstein's equations,\[ \hat{R}_{\mu\nu} - \frac{1}{2} \hat{R} \hat{g}_{\mu\nu} = 8\pi G \,\hat{T}_{\mu\nu} \] In the quantum version, Einstein's equations become a form of the Heisenberg equations in the Heisenberg picture (Schrödinger's picture looks very complicated for gravity or other field theories) and these equations simply add hats above the metric tensor, Ricci tensor, Ricci scalar, as well as the stress-energy tensor. All these objects have to be operators. For example, the stress-energy tensor is constructed out of other operators, including the operators for the intensity of electromagnetic and other fields and/or positions of particles, so it must be an operator. If an equation relates it to something else, this something else has to be an operator as well.

Think about Schrödinger's cat – or any other macroscopic physical system, for that matter. To make the thought experiment more spectacular, attach the whole Earth to the cat so if the cat dies, the whole Earth explodes and its gravitational field changes. It's clear that the values of microscopic quantities such as the decay stage of a radioactive nucleus may imprint themselves onto the gravitational field around the Earth – something that may influence the Moon etc. (We may subjectively feel that we have already perceived one particular answer but a more perfect physicist has to evolve us into linear superpositions as well, in order to allow our wave function to interfere with itself and to negate the result of our perceptions. This more perfect and larger physicist will rightfully deny that in a precise calculation, it's possible to treat the wave function as a "collapsed one" at the moment right after we "feel an outcome".)
Because the radioactive nucleus may be found in a linear superposition of distinct states and because this state is imprinted onto the cat and the Earth, it's obvious that even the gravitational field around the (former?) Earth is generally found in a probabilistic linear superposition of different states. Consequently, the values of the metric tensors at various points have to be operators whose values may only be predicted probabilistically, much like the values of any observable in any quantum theory. Let's now take the non-relativistic, weak-gravitational-field, low-energy limit of Einstein's equations written above. In this non-relativistic limit, \(\hat g_{00}\) is the only important component of the metric tensor (the gravitational redshift) and it gets translated to the gravitational potential \(\hat \Phi\) which is clearly an operator-valued field, too. We get\[ \Delta \hat\Phi(t,\vec x) = 4\pi G \hat\rho(t,\vec x). \] It looks like the Hossenfelder version of Poisson's equation except that the gravitational potential on the left hand side has a hat; and the source \(\hat\rho\), i.e. the mass density, has replaced her \(m \abs{\Psi(t,\vec x)}^2\). Fine. There are some differences. But can I make special choices that will produce her equation out of the correct equation above?

What is the mass density operator \(\hat\rho\) equal to in the case of the electron? Well, it's easy to answer this question. The mass density coming from an electron blows up at the point where the electron is located; it's zero everywhere else. Clearly, the mass density is a three-dimensional delta-function:\[ \hat\rho(t,\vec x) = m \delta^{(3)}(\hat{\vec X} - \vec x) \] Just to be sure, the arguments of the field operators such as \(\hat\rho\) – the arguments that the fields depend on – are ordinary coordinates \(\vec x\) which have no hats because they're not operators. In quantum field theories, whether they're relativistic or not, they're independent variables just like the time \(t\); after all, \((t,x,y,z)\) are mixed with each other by the relativistic Lorentz transformations which are manifest symmetries in relativistic quantum field theories. However, the equation above says that the mass density at the point \(\vec x\) blows up iff the eigenvalue of the electron's position \(\hat X\), an eigenvalue of an observable, is equal to this \(\vec x\). The equation above is an operator equation. And yes, it's possible to compute functions (including the delta-function) out of operator-valued arguments. Semiclassical gravity isn't necessarily too self-consistent an approximation.

Clearly, the operator \(\delta^{(3)}(\hat X - \vec x)\) is something different than Hossenfelder's \(\abs{\Psi(t,\vec x)}^2\) – which isn't an operator at all – so her equation isn't right. Can we obtain the squared wave function in some way? Well, you could try to take the expectation value of the last displayed equation:\[ \bra\Psi \Delta \hat\Phi(t,\vec x)\ket\Psi = 4\pi G m \abs{\Psi(t,\vec x)}^2 \] Indeed, if you compute the expectation value of the operator \(\delta^{(3)}(\hat X - \vec x)\) in the state \(\ket\Psi\), you will obtain \(\abs{\Psi(t,\vec x)}^2\). However, note that the equation above still differs from the Hossenfelder-Poisson equation: our right equation properly sandwiches the gravitational potential, which is an operator-valued field, in between the two copies of the wave function.
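To make the expectation-value statement above concrete, here is a tiny numerical illustration (a sketch added for clarity, not taken from any of the papers under discussion): on a discretized line, \(\delta(\hat X - x_0)\) becomes the indicator of the grid bin at \(x_0\) divided by the bin width, and its expectation value in a normalized state indeed reproduces \(\abs{\Psi(x_0)}^2\).

```python
import numpy as np

# Discretized check that <psi| delta(X - x0) |psi> = |psi(x0)|^2.
# On a grid, delta(X - x0) is approximated by (indicator of the bin at x0) / dx.
x  = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

psi = np.exp(-0.5 * (x - 1.3)**2) * np.exp(1j * 0.7 * x)   # some normalizable wave packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)                # normalize: sum |psi|^2 dx = 1

i0 = np.argmin(np.abs(x - 2.0))                            # pick the bin closest to x0 = 2
delta_op = np.zeros_like(x)
delta_op[i0] = 1.0 / dx                                    # grid version of delta(X - x0)

expectation = np.sum(np.conj(psi) * delta_op * psi).real * dx
print(expectation, np.abs(psi[i0])**2)                     # the two numbers agree
```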
Can't you just introduce a new symbol \(\Delta\Phi\), one without any hats, for the expectation value entering the left hand side of the last equation? You may but it's just an expectation value, a number that depends on the state. The proper Schrödinger equation with the gravitational potential that we started with contains the operator \(\hat\Phi(t,\vec x)\) that is manifestly independent of the wave function (either because it is an external classical field – if we want to treat it as a deterministically evolving background field – or because it is a particular operator acting on the Hilbert space). So they're different things. At any rate, the original pair of equations is wrong.

Nonlinearity in the wave function is lethal

Those deluded people are obsessed with expectation values because they don't want to accept quantum mechanics. The expectation value of an operator "looks like" a classical quantity and classical quantities are the only physical quantities they have really accepted – and 19th century classical physics is the newest framework for physics that they have swallowed – so they try to deform and distort everything so that it resembles classical physics. An arbitrarily silly caricature of reality is always preferred by them over the right equations as long as it looks more classical. But Nature obeys quantum mechanics. The observables we can see – all of them – are indeed linear operators acting on the Hilbert space. If something may be measured and seen to be equal to something or something else (this includes Yes/No questions we may answer by an experiment), then "something" is always associated with a linear operator on the Hilbert space (Yes/No questions are associated with Hermitian projection operators). If you are using a set of concepts that violate this universal postulate, then you contradict basic rules of quantum mechanics and what you say is just demonstrably wrong. This basic rule doesn't depend on any dynamical details of your would-be quantum theory and it admits no loopholes.

Two pieces of the wave function don't attract each other at all

You could say that one may talk about the expectation values in some contexts because they may give a fair approximation to quantum mechanics. The behavior of some systems may be close to the classical one, anyway, so why wouldn't we talk about the expectation values only? However, this approximation is only meaningful if the variations of the physical observables (encoded in the spread of the wave function) are much smaller than their characteristic values such as the (mean) distances between the particles which we want to treat as classical numbers, e.g.\[ \abs{\Delta \vec x} \ll O(\abs{\vec x_1-\vec x_2}) \] However, the very motivation that makes those confused people study the Schrödinger-Newton system of equations is that this condition isn't satisfied at all. What they typically want to achieve is to "collapse" the wave function packets. These packets are composed of several distant enough pieces, otherwise they wouldn't feel the need to collapse them. In their system of equations, two distant portions of the wave function attract each other in the same way as two celestial bodies do – because \(m \abs{\Psi}^2\) enters as the classical mass density to Poisson's equation for the gravitational potential. They write many papers studying whether this self-attraction of "parts of the electron" or another object may be enough to "keep the wave function compact enough". Of course, it is not enough.
The gravitational force is extremely weak and cannot play such an essential role in the experiments with elementary particles. In Ghirardi-Rimini-Weber: collapsed pseudoscience, I have described somewhat more sophisticated "collapse theories" that are trying to achieve a similar outcome: to misinterpret the wave function as a "classical object" and to prevent it from spreading. Of course, these theories cannot work, either. To keep these wave functions compact enough, they have to introduce kicks that are so large that we are sure that they don't exist. You simply cannot find any classical model that agrees with observations in which the wave function is a classical object – simply because the wave function isn't a classical object and this fact is really an experimentally proven one as you know if you think a little bit. But what the people studying the Schrödinger-Newton system of equations do is even much more stupid than what the GRW folks attempted. It is internally inconsistent already at the mathematical level. You don't have to think about some sophisticated experiments to verify whether these equations are viable. They can be safely ruled out by pure thought because they predict things that are manifestly wrong. I have already said that the Hossenfelder-Poisson equation for the gravitational potential treats the squared wave function as if it were a mass density. If your wave function is composed of two major pieces in two regions, they will behave as two clouds of interplanetary gas and these two clouds will attract because each of them influences the gravitational potential that influences the motion of the other cloud, too. However, this attraction between two "pieces" of a wave function definitely doesn't exist, in a sharp contrast with the immensely dumb opinion held by pretty much every "alternative" kibitzer about quantum mechanics i.e. everyone who has ever offered any musings that something is fundamentally wrong with the proper Copenhagen quantum mechanics. There would only be an attraction if the matter (electron) existed at both places because the attraction is proportional to \(M_1 M_2\). However, one may easily show that the counterpart of \(M_1M_2\) is zero: the matter is never at both places at the same time. Imagine that the wave function has the form\[ \ket\psi = 0.6\ket \phi+ 0.8 i \ket \chi \] where the states \(\ket\phi\) and \(\ket\chi\) are supported by very distant regions. As you know, this state vector implies that the particle has 36% odds to be in the "phi" region and 64% odds to be in the "chi" region. I chose probabilities that are nicely rational, exploiting the famous 3-4-5 Pythagorean triangle, but there's another reason why I didn't pick the odds to be 50% and 50%: there is absolutely nothing special about wave functions that predict exactly the same odds for two different outcomes. The number 50 is just a random number in between \(0\) and \(100\) and it only becomes special if there is an exact symmetry between \(p\) and \((1-p)\) which is usually not the case. Much of the self-delusion by the "many worlds" proponents is based on the misconception that predictions with equal odds for various outcomes are special or "canonical". They're not. Fine. So if we have the wave function \(\ket\psi\) above, do the two parts of the wave function attract each other? The answer is a resounding No. The basic fact about quantum mechanics that all these Schrödinger-Newton and many-worlds and other pseudoscientists misunderstand is the following point. 
The wave function above doesn't mean that there is 36% of an object here AND 64% of an object there. (WRONG.) Note that there is "AND" in the sentence above, indicating the existence of two objects. Instead, the right interpretation is that the particle is here (36% odds) OR there (64% odds). (RIGHT.) The correct word is "OR", not "AND"! However, unlike in classical physics, you're not allowed to assume that one of the possibilities is "objectively true" in the classical sense even if the position isn't measured. On the other hand, even in quantum mechanics, it's still possible to strictly prove that the particle isn't found at both places simultaneously; the state vector is an eigenstate of the "both places" projection operator (product of two projection operators) with the eigenvalue zero. (The same comments apply to two slits in a double-slit experiment.) The mutually orthogonal terms contributing to the wave function or density matrix aren't multiple objects that simultaneously exist, as the word "AND" would indicate. You would need (tensor) products of Hilbert spaces and/or wave functions, not sums, to describe multiple objects! Instead, they are mutually excluding alternatives for what may exist, alternative properties that one physical system (e.g. one electron) may have. And mutually excluding alternatives simply cannot interact with each other, gravitationally or otherwise. Imagine you throw dice. The result may be "1" or "2" or "3" or "4" or "5" or "6". But you know that only one answer is right. There can't be any interaction that would say that because both "1" and "6" may occur, they attract each other which is why you probably get "3" or "4" in the middle. It's nonsense because "1" and "6" are never objects that simultaneously exist. If they don't simultaneously exist, they can't attract each other, whatever the rules are. They can't interact with one another at all! While the expectation value of the electron's position may be "somewhere in between" the regions "phi" and "chi", we may use the wave function to prove with absolute certainty that the electron isn't in between. The proponents of the "many-worlds interpretation" often commit the same trivial mistake. They are imagining that two copies of you co-exist at the same moment – in some larger "multiverse". That's why they often talk about one copy's thinking how the other copy is feeling in another part of a multiverse. But the other copy can't be feeling anything at all because it doesn't exist if you do! You and your copy are mutually excluding. If you wanted to describe two people, you would need a larger Hilbert space (a tensor product of two copies of the space for one person) and if you produced two people out of one, the evolution of the wave function would be quadratic i.e. nonlinear which would conflict with quantum mechanics (and its no-xerox theorem), too. These many-worlds apologists, including Brian Greene, often like to say (see e.g. The Hidden Reality) that the proper Copenhagen interpretation doesn't allow us to treat macroscopic objects by the very same rules of quantum mechanics with which the microscopic objects are treated and that's why they promote the many worlds. This proposition is what I call chutzpah. In reality, the claim that right after the measurement by one person, there suddenly exist several people is in a striking contradiction with facts that may be easily extracted from quantum mechanics applied to a system of people. 
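The "OR, not AND" point can also be checked mechanically. In the following sketch (an illustration with arbitrarily chosen packets and grid, not a calculation from any of the papers above), two normalized wave packets supported in distant regions play the roles of \(\ket\phi\) and \(\ket\chi\); forming \(0.6\ket\phi + 0.8i\ket\chi\) gives the probabilities 36% and 64%, while the product of the two region projectors annihilates the state – the particle is never found in both regions at once.

```python
import numpy as np

x  = np.linspace(-40.0, 40.0, 8001)
dx = x[1] - x[0]

def packet(center):
    p = np.exp(-0.5 * (x - center)**2)
    return p / np.sqrt(np.sum(np.abs(p)**2) * dx)

phi, chi = packet(-20.0), packet(+20.0)        # supported in two distant regions
psi = 0.6 * phi + 0.8j * chi                   # the state discussed above

P_left  = (x < 0).astype(float)                # projector onto the "phi" region
P_right = (x >= 0).astype(float)               # projector onto the "chi" region

prob_left  = np.sum(P_left  * np.abs(psi)**2) * dx    # -> 0.36
prob_right = np.sum(P_right * np.abs(psi)**2) * dx    # -> 0.64
both = P_left * P_right * psi                         # "particle in both regions" projector applied
print(prob_left, prob_right, np.max(np.abs(both)))    # 0.36, 0.64, 0.0
```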
The quantum mechanical laws – laws meticulously followed by the Copenhagen school, regardless of the size and context – still imply that the total mass is conserved, at least at a 1-kilogram precision, so it is simply impossible for one person to evolve into two. It's impossible because of the very same laws of quantum mechanics that, among many other things, protect Nature against the violation of charge conservation in nuclear processes. It's them, the many-worlds apologists, who are totally denying the validity of the laws of quantum mechanics for the macroscopic objects. In reality, quantum mechanics holds for all systems and for macroscopic objects, one may prove that classical physics is often a valid approximation, as the founding fathers of quantum mechanics knew and explicitly said. The validity of this approximation, as they also knew, is also a necessary condition for us to be able to make any "strict valid statements" of the classical type. The condition is hugely violated by interfering quantum microscopic (but, in principle, also large) objects before they are measured so one can't talk about the state of the system before the measurement in any classical language. In Nature, all observables (as well as the S-matrix and other evolution operators) are expressed by linear operators acting on the Hilbert space and Schrödinger's equation describing the evolution of any physical system has to be linear, too. Even if you use the density matrix, it evolves according to the "mixed Schrödinger equation" which is also linear:\[ i\hbar \ddfrac{}{t}\hat\rho = [\hat H(t),\hat \rho(t)]. \] It's extremely important that the density matrix \(\hat \rho\) enters linearly because \(\hat \rho\) is the quantum mechanical representation of the probability distribution, even the initial one. And the probabilities of final states are always linear combinations of the probabilities of the initial states. This claim follows from pure logic and will hold in any physical system, regardless of its laws. Why? Classically, the probabilities of final states \(P({\rm final}_j)\) are always given by\[ P({\rm final}_j) = \sum_{i=1}^N P({\rm initial}_i) P({\rm evolution}_{i\to j}) \] whose right hand side is linear in the probabilities of the initial states and the left hand side is linear in the probabilities of the final states. Regardless of the system, these dependences are simply linear. Quantum mechanics generalizes the probability distributions to the density matrices which admit states arising from superpositions (by having off-diagonal elements) and which are compatible with the non-zero commutators between generic observables. However, whenever your knowledge about a system may be described classically, the equation above strictly holds. It is pure maths; it is as questionable or unquestionable (make your guess) as \(2+2=4\). There isn't any "alternative probability calculus" in which the final probabilities would depend on the initial probabilities nonlinearly. If you carefully study the possible consistent algorithms to calculate the probabilities of various final outcomes or observations, you will find out that it is indeed the case that the quantum mechanical evolution still has to be linear in the density matrix. The Hossenfelder-Poisson equation fails to obey this condition so it violates totally basic rules of the probability calculus. 
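Both linearity statements – the classical one for probability distributions and its quantum counterpart for density matrices – are easy to verify numerically. The following numpy sketch (purely illustrative, with randomly generated data) checks that a stochastic-matrix evolution acts linearly on probability vectors and that the map \(\hat\rho\to U\hat\rho U^\dagger\), the integrated form of the commutator equation above, acts linearly on density matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.3                                          # weight of a convex mixture

# Classical: final probabilities are a fixed linear (stochastic-matrix) map of initial ones.
M = rng.random((4, 4)); M /= M.sum(axis=0)         # column i -> j transition weights, columns sum to 1
p1 = rng.random(4); p1 /= p1.sum()
p2 = rng.random(4); p2 /= p2.sum()
print(np.allclose(M @ (lam*p1 + (1 - lam)*p2),
                  lam*(M @ p1) + (1 - lam)*(M @ p2)))          # True: linear in the probabilities

# Quantum: rho -> U rho U^dagger is likewise linear in the density matrix.
H = rng.random((3, 3)) + 1j * rng.random((3, 3)); H = H + H.conj().T   # Hermitian "Hamiltonian"
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w)) @ V.conj().T      # U = exp(-iHt) with t = 1, hbar = 1

def pure(v):
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

r1 = pure(rng.random(3) + 1j * rng.random(3))
r2 = pure(rng.random(3) + 1j * rng.random(3))
mix = lam * r1 + (1 - lam) * r2
print(np.allclose(U @ mix @ U.conj().T,
                  lam * (U @ r1 @ U.conj().T) + (1 - lam) * (U @ r2 @ U.conj().T)))   # True
```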
Just to connect the density matrix discussion with a more widespread formalism, let us mention that quantum mechanics allows you to decompose any density matrix into a sum of terms arising from pure states,\[ \hat\rho = \sum_{k=1}^M p_k \ket{\psi_k}\bra{\psi_k} \] and it may study the individual terms, pure states, independently of others. When we do so, and we often do, we find out that the evolution of \(\ket\psi\), the pure states, has to be linear as well. The linear maps \(\ket\psi\to U\ket\psi\) produce \(\hat\rho\to U\hat\rho \hat U^\dagger\) for \(\hat\rho=\ket\psi\bra\psi\) which is still linear in the density matrix, as required. If you had a more general, nonlinear evolution – or if you represented observables by non-linear operators etc. – then these nonlinear rules for the wave function would get translated to nonlinear rules for the density matrix as well. And nonlinear rules for the density matrix would contradict some completely basic "linear" rules for probabilities that are completely independent of any properties of the laws of physics, such as\[ P({\rm final}_j) = \sum_{i=1}^N P({\rm initial}_i) P({\rm evolution}_{i\to j}). \] So the linearity of the evolution equations in the density matrix (and, consequently, also the linearity in the state vector which is a precursor of the density matrix) is totally necessary for the internal consistency of a theory that predicts probabilities, whatever the internal rules that yield these probabilistic predictions are!

That's why two pieces of the wave function (or the density matrix) can never attract each other or otherwise interact with each other. As long as they're orthogonal, they're mutually exclusive possibilities of what may happen. They can never be interpreted as objects that simultaneously exist at the same moment. The product of their probabilities (and anything that depends on its being nontrivial) is zero because at least one of them equals zero. And the wave functions and density matrix cannot be interpreted as classical objects because it's been proven, by the most rudimentary experiments, that these objects are probability distributions or their precursors rather than observables. These statements depend on no open questions at the cutting edge of the modern physics research; they're parts of the elementary undergraduate material that has been understood by active physicists since the mid 1920s. It now trivially follows that all the people who study Schrödinger-Newton equations are profoundly deluded, moronic crackpots. And that's the memo.

Single mom: totally off-topic

Totally off-topic. I had to click somewhere, not sure where (correction: e-mail tip from Tudor C.), and I was led to this "news article". Single mom Amy Livingston of Plzeň, 87, is making $14,000 a month. That's not bad. First of all, not every girl manages to become a mom at the age of 87. Second of all, it is impressive for a mom with such a name – who probably doesn't speak Czech at all – to survive in my hometown at all. Her having 12 times the average salary makes her achievements even more impressive. ;-)

snail feedback (3):

reader Ervin Goldfain said... Your points are well taken: the Schrodinger-Newton equation is fundamentally flawed. Expanding on these issues, I'd like to know your views on the validity of: 1) the WKB approximation, 2) semiclassical gravity, 3) quantum chaos and quantization of classically chaotic dynamical systems?

reader Luboš Motl said... Dear Ervin, thanks for listening.
All the entries in your list are obviously legitimate and interesting approximations (1, 2) or topics that may be studied (3). That doesn't mean that all people say correct things about them and use them properly, of course. ;-) The WKB approximation is just the "leading correction coming from quantum mechanics" to classical physics. Various simplified Ansätze may be written down in various contexts. Semiclassical gravity either refers to general relativity with the first (one-loop) quantum corrections; or it represents the co-existence of quantized matter fields with non-quantized gravitational fields. This is only legitimate if the gravitational fields aren't affected by the matter fields – if the spacetime geometry solves the classical Einstein equations with sources that don't depend on the microscopic details of the matter fields and particles which are studied in the quantum framework. The matter fields propagate on a fixed classical background in this approximation but they don't affect the background by their detailed microstates. Indeed, if the dependence of the gravitational fields on the properties of the matter fields is substantial or important, there's no way to use the semiclassical approximation. Some people would evolve the gravitational fields according to the expectation values of the stress-energy tensor but that's the same mistake as discussed in this article in the context of the Poisson-Hossenfelder equation. Classical systems may be chaotic – unpredictable behavior very sensitive to initial conditions. Quantum chaos is about the research of the complicated wave functions etc. in systems that are analogous to (hatted) classically chaotic systems.

reader Ervin Goldfain said... Thanks Lubos. I also take classical approximations with a grain of salt. For instance, mixing classical gravity with quantum behavior is almost always questionable one way or another. Here is a follow-up question. What would you say if experiments on carefully prepared quantum systems could be carried out in highly accelerated frames of reference? Could this be a reliable way of falsifying predictions of semiclassical gravity, for example?
Welcome to the section where All About Education shall provide you with the details of various Entrance Exams and Entrance Exam Results. This section will be useful to students looking at appearing for the various entrance exams that are conducted by universities and colleges for admissions within India and abroad. In this section All About Education presents the details of the GRE entrance exam and GRE entrance exam results. All About Education also provides other details like:
– Eligibility criteria of GRE Exams
– GRE Examination details
– GRE Examination Application Procedure
– GRE Exam Dates
– GRE Exam Useful Contacts
– GRE Exam Selection Procedure

Welcome to the GRE 2012 Test Section. Here you will find information about the GRE Exam 2012, GRE Coaching, Colleges, GRE Syllabus, Preparation, Paper Pattern, Sample Test Papers, Questions, GRE Notification, Important Dates, GRE Exam Date, Online GRE Test and GRE Results.

GRE – the Graduate Record Examination – is a standardized exam that is administered by the Educational Testing Service (ETS). GRE scores are used by graduate school admission offices in the United States and in other English-speaking countries. The GRE General Test is offered at computer-based test centers in the United States, Canada and many other countries. It is offered at paper-based test centers in areas of the world where computer-based testing is not available.

GRE Entrance Exam 2012-2013: Dates, Syllabus, Results, Notification, Pattern

GRE Test 2012-2013 Content and Structure / Pattern (computer-based):
– Analytical Writing: 1 "Issue" task (45 minutes) and 1 "Argument" task (30 minutes)
– Verbal Reasoning: 30 questions, 30 minutes
– Quantitative Reasoning: 28 questions, 45 minutes

Paper-based GRE General Test Content and Structure
The paper-based GRE General Test is composed of Verbal Reasoning, Quantitative Reasoning and Analytical Writing sections. In addition, one unidentified unscored section may be included, and this section can appear in any position in the test after the Analytical Writing section. Questions in the unscored section are being tested for possible use in future tests and answers will not count toward your scores. Total testing time is up to 3¾ hours. The directions at the beginning of each section specify the total number of questions in the section and the time allowed for that section. The Analytical Writing section is always first. For the "Issue" task, two topics will be assigned and you will choose one. The "Argument" task does not present a choice of topics; instead, a single topic will be presented. The Verbal and Quantitative sections may appear in any order, including an unidentified Verbal or Quantitative unscored section. Treat each section presented during your test as if it counts.
– Analytical Writing: 45 minutes
– Verbal Reasoning (2 sections): 38 questions per section, 60 minutes
– Quantitative Reasoning (2 sections): 30 questions per section, 60 minutes

GRE General Test Syllabus 2012-2013
Analytical Writing: The Analytical Writing section consists of two analytical writing tasks: a 45-minute "Present Your Perspective on an Issue" task and a 30-minute "Analyze an Argument" task.
* The “Argument” task presents a different challenge — it requires you to critique an argument by discussing how well-reasoned you find it. You are asked to consider the logical soundness of the argument rather than to agree or disagree with the position it presents. * The “Issue” and “Argument” tasks are complementary in that the “issue” task requires you to construct a personal argument about an issue, and the “argument” task requires you to critique someone else’s argument by assessing its claims. Verbal Reasoning:- There are four types of questions in the Verbal Reasoning section of the GRE General Test:- * Analogies — Analogy questions test your ability to recognize the relationship between the words in a word pair and to recognize when two word pairs display parallel relationships. To answer an analogy question, you must formulate the relationship between the words in the given word pair and then select the answer containing those words most closely related to one another. Some examples are relationships of kind, size, spatial contiguity or degree. * Antonyms — Antonym questions measure the strength of your vocabulary and ability to reason from a given concept to its opposite. Antonyms may require only general knowledge of a word, or they may require that you make fine distinctions among answer choices. Answer choices may be single words or phrases. * Sentence Completions — Sentence completion questions measure your ability to use a variety of cues provided by syntax and grammar to recognize the overall meaning of a sentence and analyze the relationships among the component parts of the sentence. You select which of five words or sets of words can best complete a sentence to give it a logically satisfying meaning and allow it to be read as a stylistically integrated whole. * Reading Comprehension — Reading comprehension questions measure your ability to read with understanding, insight and discrimination. These questions explore your ability to analyze a written passage from several perspectives, including your ability to recognize explicitly stated elements as well as underlying statements or arguments and their implications. There are three types of questions in the Quantitative Reasoning section of the GRE General Test: * Quantitative Comparison — These questions test your ability to reason quickly and accurately about the relative sizes of two quantities or to perceive that not enough information is provided to make such a comparison. * Problem Solving — The format of these multiple-choice questions varies. The solution may require simple computations, manipulations or multistep problem-solving. * Data Interpretation — Some problem-solving questions involve data analysis. Many occur in sets of two to five questions that share common data in the form of tables or graphs that allow you to read or estimate data values. GRE Subject Test Syllabus 2012-2013:- Biochemistry, Cell and Molecular Biology:- 1. Chemical and Physical Foundations * Thermodynamics and kinetics * Redox states * Water, pH, acid-base reactions and buffers * Solutions and equilibria * Solute-solvent interactions * Chemical interactions and bonding * Chemical reaction mechanisms 2. Structural Biology: Structure, Assembly, Organization and Dynamics * Small molecules * Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids) * Supramolecular complexes (e.g., membranes, ribosomes and multienzyme complexes) 3. 
Catalysis and Binding * Enzyme reaction mechanisms and kinetics * Ligand-protein interaction (e.g., hormone receptors, substrates and effectors, transport proteins and antigen-antibody interactions) 4. Major Metabolic Pathways * Carbon, nitrogen and sulfur assimilation * Anabolism * Catabolism * Synthesis and degradation of macromolecules 5. Bioenergetics (including respiration and photosynthesis) * Energy transformations at the substrate level * Electron transport * Proton and chemical gradients * Energy coupling (e.g., phosphorylation and transport) 6. Regulation and Integration of Metabolism * Covalent modification of enzymes * Allosteric regulation * Compartmentalization * Hormones 7. Methods * Biophysical approaches (e.g., spectroscopy, x-ray, crystallography, mass spectroscopy) * Isotopes * Separation techniques (e.g., centrifugation, chromatography and electrophoresis) * Immunotechniques Methods of importance to cellular biology, such as fluorescence probes (e.g., FRAP, FRET and GFP) and imaging, will be covered as appropriate within the context of the content below. 1. Cellular Compartments of Prokaryotes and Eukaryotes: Organization, Dynamics and Functions * Cellular membrane systems (e.g., structure and transport across membrane) * Nucleus (e.g., envelope and matrix) * Mitochondria and chloroplasts (e.g., biogenesis and evolution) 2. Cell Surface and Communication * Extracellular matrix (including cell walls) * Cell adhesion and junctions * Signal transduction * Receptor function * Excitable membrane systems 3. Cytoskeleton, Motility and Shape * Regulation of assembly and disassembly of filament systems * Motor function, regulation and diversity 4. Protein, Processing, Targeting and Turnover * Translocation across membranes * Posttranslational modification * Intracellular trafficking * Secretion and endocytosis * Protein turnover (e.g., proteosomes, lysosomes, damaged protein response) 5. Cell Division, Differentiation and Development * Cell cycle, mitosis and cytokinesis * Meiosis and gametogenesis * Fertilization and early embryonic development (including positional information, homeotic genes, tissue-specific expression, nuclear and cytoplasmic interactions, growth factors and induction, environment, stem cells and polarity) 1. Genetic Foundations * Mendelian and non-Mendelian inheritance * Transformation, transduction and conjugation * Recombination and complementation * Mutational analysis * Genetic mapping and linkage analysis 2. Chromatin and Chromosomes * Karyotypes * Translocations, inversions, deletions and duplications * Aneuploidy and polyploidy * Structure * Epigenetics 3. Genomics * Genome structure * Repeated DNA and gene families * Gene identification * Transposable elements * Bioinformatics * Proteomics * Molecular evolution 4. Genome Maintenance * DNA replication * DNA damage and repair * DNA modification * DNA recombination and gene conversion 5. Gene Expression * The genetic code * Transcription/transcriptional profiling * RNA processing * Translation 6. Gene Regulation * Positive and negative control of the operon * Promoter recognition by RNA polymerases * Attenuation and antitermination * Cis-acting regulatory elements * Trans-acting regulatory factors * Gene rearrangements and amplifications * Small non-coding RNA (e.g., siRNA, microRNA) 7. Viruses * Genome replication and regulation * Virus-host interactions 8. 
Methods * Restriction maps and PCR * Nucleic acid blotting and hybridization * DNA cloning in prokaryotes and eukaryotes * Sequencing and analysis * Protein-nucleic acid interaction * Transgenic organisms * Microarrays The approximate distribution of questions by content category is shown below. * Fundamentals of cellular biology, genetics and molecular biology are addressed. * Attention is also given to experimental methodology. 1. Cellular Structure and Function (16–17%) 1. Biological compounds * Macromolecular structure and bonding * Abiotic origin of biological molecules 2. Enzyme activity, receptor binding and regulation 3. Major metabolic pathways and regulation * Respiration, fermentation and photosynthesis * Synthesis and degradation of macromolecules * Hormonal control and intracellular messengers 4. Membrane dynamics and cell surfaces * Transport, endocytosis and exocytosis * Electrical potentials and transmitter substances * Mechanisms of cell recognition, cell junctions and plasmodesmata * Cell wall and extracellular matrix 5. Organelles: structure, function, synthesis and targeting * Nucleus, mitochondria and plastids * Endoplasmic reticulum and ribosomes * Golgi apparatus and secretory vesicles * Lysosomes, peroxisomes and vacuoles 6. Cytoskeleton, motility and shape * Actin-based systems * Microtubule-based systems * Intermediate filaments * Bacterial flagella and movement 8. Methods * Microscopy (e.g., electron, light, fluorescence) * Immunological (e.g., Western Blotting, immunohistochemistry, immunofluorescence) 2. Genetics and Molecular Biology (16–17%) 1. Genetic foundations * Mendelian inheritance * Pedigree analysis * Prokaryotic genetics (transformation, transduction and conjugation) * Genetic mapping 2. Chromatin and chromosomes * Nucleosomes * Karyotypes * Chromosomal aberrations * Polytene chromosomes 3. Genome sequence organization * Introns and exons * Single-copy and repetitive DNA * Transposable elements 4. Genome maintenance * DNA replication * DNA mutation and repair 5. Gene expression and regulation in prokaryotes and eukaryotes: mechanisms * The operon * Promoters and enhancers * Transcription factors * RNA and protein synthesis * Processing and modifications of both RNA and protein 6. Gene expression and regulation: effects * Control of normal development * Cancer and oncogenes * Whole genome expression (e.g., microarrays) * Regulation of gene expression by RNAi (e.g., siRNA) * Epigenetics 7. Immunobiology * Cellular basis of immunity * Antibody diversity and synthesis * Antigen-antibody interactions 8. Bacteriophages, animal viruses and plant viruses * Viral genomes, replication, and assembly * Virus-host cell interactions 9. Recombinant DNA methodology * Restriction endonucleases * Blotting and hybridization * Restriction fragment length polymorphisms * DNA cloning, sequencing and analysis * Polymerase chain reaction * The structure, physiology, behavior and development of plants and animals are addressed. * Examples of developmental phenomena range from fertilization through differentiation and morphogenesis. 1. Animal Structure, Function and Organization (10%) 1. Exchange with environment * Nutrient, salt and water exchange * Gas exchange * Energy 2. Internal transport and exchange * Circulatory and digestive systems 3. Support and movement * Support systems (external, internal and hydrostatic) * Movement systems (flagellar, ciliary and muscular) 4. Integration and control mechanisms * Nervous and endocrine systems 5. 
Behavior (communication, orientation, learning and instinct) 6. Metabolic rates (temperature, body size and activity) 2. Animal Reproduction and Development (6%) 1. Reproductive structures 2. Meiosis, gametogenesis and fertilization 5. External control mechanisms (e.g., photoperiod) 1. Organs, tissue systems, and tissues 2. Water transport, including absorption and transpiration 3. Phloem transport and storage 4. Mineral nutrition 5. Plant energetics (e.g., respiration and photosynthesis) 1. Reproductive structures 2. Meiosis and sporogenesis 3. Gametogenesis and fertilization 4. Embryogeny and seed development 5. Meristems, growth, morphogenesis and differentiation 5. Diversity of Life (6%) 1. Archaea * Morphology, physiology and identification 2. Bacteria (including cyanobacteria) * Morphology, physiology, pathology and identification 3. Protista * Protozoa, other heterotrophic Protista (slime molds and Oomycota) and autotrophic Protista * Major distinguishing characteristics * Phylogenetic relationships * Importance (e.g., eutrophication, disease) 4. Fungi * Distinctive features of major phyla (vegetative, asexual and sexual reproduction) * Generalized life cycles * Importance (e.g., decomposition, biodegradation, antibiotics and pathogenicity) * Lichens 5. Animalia with emphasis on major phyla * Major distinguishing characteristics * Phylogenetic relationships 6. Plantae with emphasis on major phyla * Alternation of generations * Major distinguishing characteristics * Phylogenetic relationships * Ecological and evolutionary topics are given equal weight. * Ecological questions range from physiological adaptations to the functioning of ecosystems. * Although principles are emphasized, some questions may consider applications to current environmental problems. * Questions in evolution range from its genetic foundations through evolutionary processes to their consequences. * Evolution is considered at the molecular, individual, population and higher levels. * Principles of ecology, genetics and evolution are interrelated in many questions. * Some questions may require quantitative skills, including the interpretation of simple mathematical models. 1. Ecology (16–17%) 1. Environment/organism interaction * Biogeographic patterns * Physiological ecology * Temporal patterns (e.g., seasonal fluctuations) 2. Behavioral ecology * Habitat selection * Mating systems * Social systems * Resource acquisition 3. Population Structure and Function * Population dynamics/regulation * Demography and life history strategies 4. Communities * Direct and indirect interspecific interactions * Community structure and diversity * Change and succession 5. Ecosystems * Productivity and energy flow * Chemical cycling 2. Evolution (16–17%) 1. Genetic variability * Origins (mutations, linkage, recombination and chromosomal alterations) * Levels (e.g., polymorphism and heritability) * Spatial patterns (e.g., clines and ecotypes) * Hardy-Weinberg equilibrium 2. Evolutionary processes * Gene flow and genetic drift * Natural selection and its dynamics * Levels of selection (e.g., individual and group) * Trade-offs and genetic correlations * Natural selection and genome evolution * Synonymous vs. nonsynonymous nucleotide ratios 3. Evolutionary consequences * Fitness and adaptation * Speciation * Systematics and phylogeny * Convergence, divergence and extinction * Coevolution 4. 
History of life * Origin of prokaryotic and eukaryotic cells * Fossil record * Paleontology and paleoecology * Lateral transfer of genetic sequences * The test consists of approximately 130 multiple-choice questions. 2. Solutions and Standardization — Concentration terms, primary standards 6. Environmental Applications 7. Radiochemical Methods — Detectors, applications Computer Science:- A. Data organization * Data types * Data structures and implementation techniques B. Program control and structure * Iteration and recursion * Procedures, functions, methods and exception handlers * Concurrency, communication and synchronization C. Programming languages and notation * Constructs for data organization and program control * Scope, binding and parameter passing * Expression evaluation D. Software engineering * Formal specifications and assertions * Verification techniques * Software development models, patterns and tools E. Systems * Compilers, interpreters and run-time systems * Operating systems, including resource management and protection/security * Networking, Internet and distributed systems * Databases * System analysis and development tools A. Digital logic design * Implementation of combinational and sequential circuits * Optimization and analysis B. Processors and control units * Instruction sets * Computer arithmetic and number representation * Register and ALU organization * Data paths and control sequencing C. Memories and their hierarchies * Performance, implementation and management * Cache, main and secondary storage * Virtual memory, paging and segmentation D. Networking and communications * I/O systems and protocols * Synchronization E. High-performance architectures * Pipelining superscalar and out-of-order execution processors * Parallel and distributed architectures A. Algorithms and complexity * Exact and asymptotic analysis of specific algorithms * Upper and lower bounds on the complexity of specific problems * Computational complexity, including NP-completeness B. Automata and language theory * Models of computation (finite automata, Turing machines) * Formal languages and grammars (regular and context-free) * Decidability C. Discrete structures Mathematical logic Elementary combinatorics and graph theory Discrete probability, recurrence relations and number theory Literature in English:- 1. Literary Analysis (40 – 55%) 2. Identification (15 – 20%) 3. 
Cultural and Historical Contexts (20 – 25%) * Elementary algebra: basic algebraic techniques and manipulations acquired in high school and used throughout mathematics * Discrete mathematics: logic, set theory, combinatorics, graph theory and algorithms (such as electrostatics, currents and DC circuits, magnetic fields in free space, Lorentz force, induction, Maxwell’s equations and their applications, electromagnetic waves, AC circuits, magnetic and electric fields in matter) (such as the laws of thermodynamics, thermodynamic processes, equations of state, ideal gases, kinetic theory, ensembles, statistical concepts and calculation of thermodynamic quantities, thermal expansion and heat transfer) (such as fundamental concepts, solutions of the Schrödinger equation (including square wells, harmonic oscillators, and hydrogenic atoms), spin, angular momentum, wave function symmetry, elementary perturbation theory) (such as properties of electrons, Bohr model, energy quantization, atomic structure, atomic spectra, selection rules, black-body radiation, x-rays, atoms in electric and magnetic fields) (such as introductory concepts, time dilation, length contraction, simultaneity, energy and momentum, four-vectors and Lorentz transformation, velocity addition) (such as data and error analysis, electronics, instrumentation, radiation detection, counting statistics, interaction of charged particles with matter, lasers and optical interferometers, dimensional analysis, fundamental applications of probability and statistics) Nuclear and Particle physics (e.g., nuclear properties, radioactive decay, fission and fusion, reactions, fundamental properties of elementary particles), Condensed Matter (e.g., crystal structure, x-ray diffraction, thermal properties, electron theory of metals, semiconductors, superconductors), Miscellaneous (e.g., astrophysics, mathematical methods, computer applications) 1. Learning (3–5%) 1. Classical Conditioning 2. Instrumental Conditioning 3. Observational Learning, Modeling 4. Theories, Applications and Issues 2. Language (3–4%) 1. Units (phonemes, morphemes, phrases) 2. Syntax 3. Meaning 4. Speech Perception and Processing 5. Verbal and Nonverbal Communication 6. Bilingualism 7. Theories, Applications and Issues 3. Memory (7–9%) 1. Working Memory 2. Long-term Memory 3. Types of Memory 4. Memory Systems and Processes 5. Theories, Applications and Issues 4. Thinking (4–6%) 1. Representation (Categorization, Imagery, Schemas, Scripts) 2. Problem Solving 3. Judgment and Decision-making Processes 4. Planning, Metacognition 5. Intelligence 6. Theories, Applications and Issues 5. Sensation and Perception (5–7%) 1. Psychophysics, Signal Detection 2. Attention 3. Perceptual Organization 4. Vision 5. Audition 6. Gustation 7. Olfaction 8. Somatosenses 9. Vestibular and Kinesthetic Senses 10. Theories, Applications and Issues 6. Physiological/Behavioral Neuroscience (12–14%) 1. Neurons 2. Sensory Structures and Processes 3. Motor Structures and Functions 4. Central Structures and Processes 5. Motivation, Arousal, Emotion 6. Cognitive Neuroscience 7. Neuromodulators and Drugs 8. Hormonal Factors 9. Comparative and Ethology 10. States of Consciousness 11. Theories, Applications and Issues 1. Clinical and Abnormal (12–14%) 1. Stress, Conflict, Coping 2. Diagnostic Systems 3. Assessment 4. Causes and Development of Disorders 5. Neurophysiological Factors 6. Treatment of Disorders 7. Epidemiology 8. Prevention 9. Health Psychology 10. Culture and Gender Issues 11. 
Theories, Applications and Issues 2. Lifespan Development (12–14%) 1. Nature-Nurture 2. Physical and Motor 3. Perception and Cognition 4. Language 5. Intelligence 6. Social and Personality 7. Emotion 8. Socialization, Family and Cultural Influences 9. Theories, Applications and Issues 3. Personality (3–5%) 1. Theories 2. Structure 3. Assessment 4. Personality and Behavior 5. Applications and Issues 4. Social (12–14%) 1. Social Perception, Cognition, Attribution, Beliefs 2. Attitudes and Behavior 3. Social Comparison, Self 4. Emotion, Affect and Motivation 5. Conformity, Influence and Persuasion 6. Interpersonal Attraction and Close Relationships 7. Group and Intergroup Processes 8. Cultural and Gender Influences 9. Evolutionary Psychology, Altruism and Aggression 10. Theories, Applications and Issues 1. General (4–6%) 1. History 2. Industrial-Organizational 3. Educational 2. Measurement and Methodology (11–13%) 1. Psychometrics, Test Construction, Reliability, Validity 2. Research Designs 3. Statistical Procedures 4. Scientific Method and the Evaluation of Evidence 5. Ethics and Legal Issues 6. Analysis and Interpretation of Findings Bookmark and Share
Physics 521: Quantum Mechanics I

Course Outline
Introduction: course overview, history of quantum mechanics
Mathematical foundation of quantum mechanics: quantum states and Hilbert spaces, observables and operators, commutation relations and Heisenberg's uncertainty principle, pure and mixed states, density operator
Quantum dynamics: time evolution and the Schrödinger equation, Schrödinger and Heisenberg pictures, quantization of the harmonic oscillator, propagators and Feynman path integrals, potential and gauge transformations
Theory of angular momentum: rotation and angular momentum operators, spin and the SU(2) group, orbital angular momentum, solution of the hydrogen atom (Schrödinger equation for a central potential), addition of angular momenta and Clebsch-Gordan coefficients, tensor operators and the Wigner-Eckart theorem
Symmetry in quantum mechanics: conservation laws and degeneracies, parity (space inversion), time-reversal symmetry

Typical Organization
Lectures: T Th 1:00 - 2:15 PM
Homework (30%), Midterm Exam (30%), Final Exam (40%)
Main Text: J.J. Sakurai, Modern Quantum Mechanics (Addison-Wesley, 2010)
Other Texts:
R. Shankar, Principles of Quantum Mechanics, Springer, 1994 (2nd Ed.)
E. Merzbacher, Quantum Mechanics, Wiley, 1997.
A. Messiah, Quantum Mechanics, Dover, 1999.
Sunday, July 05, 2015

The group of Suchitra Sebastian has discovered a very unconventional condensed matter system which seems to be simultaneously both an insulator and a conductor of electricity in the presence of a magnetic field. The Science article is entitled "Unconventional Fermi surface in an insulating state". There is also a popular article "Paradoxical Crystal Baffles Physicists" in Quanta Magazine summarizing the findings. I learned about the finding first from the blog posting of Lubos (I want to make absolutely clear that I do not share the racist attitudes of Lubos towards Greeks. I find the discussions between Lubos and like-minded blog visitor barbarians about the situation in Greece disgusting).

The crystal studied at superlow temperatures was samarium hexaboride – SmB6 for short. The high resistance implies that an electron cannot move more than one atom's width in any direction. Sebastian et al however observed electrons traversing distances of millions of atoms – a distance of order 10^-4 m, the size of a large neuron. Such high mobility is expected only in conductors. SmB6 is neither a metal nor an insulator, or is both of them! The finding is described by Sebastian as a "big shock" and as a "magnificent paradox" by condensed matter theorist Jan Zaanen. Theoreticians have started to make guesses about what might be involved but according to Zaanen not even a remotely credible hypothesis has appeared yet.

On the basis of its electronic structure SmB6 should be a conductor of electricity and it indeed is at room temperature: the average number of conduction electrons per SmB6 is one half. At low temperatures the situation however changes: electrons behave collectively. In superconductors the resistance drops to zero as a consequence. In SmB6 just the opposite happens. Each Sm nucleus has on average 5.5 electrons bound to it in tight orbits. Below -223 degrees Celsius the conduction electrons of SmB6 are thought to "hybridize" around samarium nuclei so that the system becomes an insulator. Various signatures demonstrate that SmB6 indeed behaves like an insulator. During the last five years it has been learned that SmB6 is not only an insulator but also a so-called topological insulator. The interior of SmB6 is an insulator but the surface acts as a conductor.

In their experiments Sebastian et al hoped to find additional evidence for the topological insulator property and attempted to measure quantum oscillations in the electrical resistance of their crystal sample. The variation of quantum oscillations as the sample is rotated can be used to map out the Fermi surface of the crystal. No quantum oscillations were seen. The next step was to add a magnetic field and just see whether something interesting happens and could save the project. Suddenly the expected signal was there! It was possible to detect quantum oscillations deep in the interior of the sample and map the Fermi surface! The electrons in the interior travelled 1 million times faster than the electrical resistance would suggest. The Fermi surface was like that in copper, silver or gold. A further surprise was that the growth of the amplitude of the quantum oscillations as the temperature was decreased was very different from the predictions of the universal Lifshitz-Kosevich formula for conventional metals.

Could TGD help to understand the strange behavior of SmB6? There are several indications that the paradoxical effect might reveal the underlying dynamics of quantum TGD.
The mechanism of conduction must represent new physics, and the magnetic field must play a key role by somehow providing the "current wires" that make conductivity possible. How? The TGD-based answer is completely obvious: magnetic flux tubes. One should also understand the topological insulator property at a deeper level, that is, the conduction along the boundaries of a topological insulator. One should understand why the current runs along 2-D surfaces. In fact, many exotic condensed matter systems are 2-dimensional in good approximation. In the models of the integer and fractional quantum Hall effect electrons form a 2-D system with braid statistics, possible only in a 2-D system. High temperature superconductivity is also an effectively 2-D phenomenon.

1. Many-sheeted space-time is a second fundamental prediction of TGD. The dynamics of a single sheet of many-sheeted space-time should be very simple by the strong form of holography, which implies effective 2-dimensionality. The standard model description of this dynamics masks this simplicity since the sheets of many-sheeted space-time are replaced with a single region of slightly curved Minkowski space, with the gauge potentials being sums of the induced gauge potentials for the sheets and the deviation of the metric from the Minkowski metric being the sum of the corresponding deviations for the space-time sheets. Could the dynamics of exotic condensed matter systems give a glimpse of the dynamics of a single sheet? Could topological insulators and anyonic systems provide examples of this kind of system?

2. Another basic prediction of TGD is the strong form of holography: string world sheets and partonic 2-surfaces serve as a kind of "space-time genes" and the dynamics of fermions is 2-D at the fundamental level. It must however be made clear that at the QFT limit the spinor fields of the imbedding space replace these fundamental spinor fields localized at 2-surfaces. One might argue that the fundamental spinor fields are therefore not directly visible in condensed matter physics. Nothing however prevents one from asking whether in some circumstances the fundamental level could make itself visible. In particular, for large heff dark matter systems (whose existence can be deduced from the quantum criticality of quantum TGD) the partonic 2-surfaces with CP2 size could be scaled up to nanoscopic and even longer size scales. I have proposed this kind of surfaces as carriers of electrons with a non-standard value of heff in QHE and FQHE. The long range fluctuations associated with a large heff = n×h phase would be quantum fluctuations rather than thermal ones. In the case of ordinary conductivity thermal energy makes it possible for electrons to jump between atoms and conductivity becomes very small at low temperatures. In the case of large scale quantum coherence just the opposite happens, as observed. One therefore expects that the Lifshitz-Kosevich formula for the temperature dependence of the amplitude does not hold true. The generalization of the Lifshitz-Kosevich formula to the quantum critical case deduced from the quantum holographic correspondence by Hartnoll and Hofman might hold true qualitatively also for quantum criticality in the TGD sense, but one must be very cautious.
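For reference – since the measured temperature dependence is repeatedly contrasted with the "universal Lifshitz-Kosevich formula" – here is a minimal sketch of the standard LK thermal damping factor for quantum-oscillation amplitudes, R_T = X/sinh(X) with X = 2π² k_B T/(ℏω_c) and ω_c = eB/m*. The magnetic field, the temperature grid and the effective mass below are assumed illustrative values, not parameters extracted from the SmB6 experiment.

```python
import numpy as np

# Standard Lifshitz-Kosevich thermal damping factor for quantum-oscillation amplitudes:
#   R_T = X / sinh(X),   X = 2*pi^2 * k_B * T / (hbar * omega_c),   omega_c = e*B/m*.
k_B  = 1.381e-23     # J/K
hbar = 1.055e-34     # J s
e    = 1.602e-19     # C

def lk_amplitude(T, B, m_star):
    omega_c = e * B / m_star
    X = 2 * np.pi**2 * k_B * T / (hbar * omega_c)
    return X / np.sinh(X)

m_star = 9.109e-31   # bare electron mass used as a placeholder effective mass (assumption)
for T in [0.03, 0.1, 0.3, 1.0, 4.2]:          # temperatures in kelvin (illustrative grid)
    print(f"T = {T:4.2f} K  ->  R_T = {lk_amplitude(T, B=35.0, m_star=m_star):.3f}")
```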
The first guess is that by the underlying super-conformal invariance the scaling laws typical for critical systems hold true, so that the dependence on temperature is via a power of the dimensionless parameter x = T/μ, where μ is the chemical potential of the electron system. As a matter of fact, a power of x appears in the formula, and the exponent reduces to one for the Lifshitz-Kosevich formula. Since the magnetic field is important, one also expects that the ratio of the cyclotron energy scale E_c ∝ ℏ_eff eB/m_e to the Fermi energy appears in the formula. One can even make an order of magnitude guess for the value of heff/h ≅ 10^6 from the fact that the scale of conduction and the conduction velocity were millions of times higher than expected.

Strings are 1-D systems, and the strong form of holography implies that fermionic strings connecting partonic 2-surfaces and accompanied by magnetic flux tubes are fundamental. At light-like 3-surfaces fermion lines can give rise to braids. In the TGD framework the AdS/CFT correspondence generalizes, since the conformal symmetries are extended. This is possible only in 4-D space-time and for the imbedding space H = M^4 × CP2, which makes it possible to generalize the twistor approach.

3. The topological insulator property means, from the perspective of modelling, that the action reduces to a non-abelian Chern-Simons term. The quantum dynamics of TGD at the space-time level is dictated by the Kähler action. Space-time surfaces are preferred extremals of the Kähler action, and for them the Kähler action reduces to Chern-Simons terms associated with the ends of the space-time surface at the opposite boundaries of the causal diamond, and possibly with the 3-D light-like orbits of partonic 2-surfaces. Now the Chern-Simons term is Abelian, but the induced gauge fields are non-Abelian. One might say that single sheeted physics resembles that of a topological insulator.

4. The effect appears only in a magnetic field. I have been talking a lot about magnetic flux tubes carrying dark matter identified as large heff phases: topological quantization distinguishes TGD from Maxwell's theory: any system can be said to possess a "magnetic body", whose flux tubes can serve as current wires. I have predicted the possibility of high temperature superconductivity based on pairs of parallel magnetic flux tubes, with the members of the Cooper pairs at the neighboring flux tubes forming a spin singlet or triplet depending on whether the fluxes have the same or opposite direction. Also spin and electric currents assignable to the analogs of spontaneously magnetized states at a single flux tube are possible. The obvious guess is that the conductivity in question is along the flux tubes of the external magnetic field. Could this kind of conductivity explain the strange behavior of SmB6? The critical temperature would be the one below which the parallel flux tubes are stable. The interaction energy of spin with the magnetic field serves as a possible criterion for the stability, if the presence of dark electrons stabilizes the flux tubes.

The following represents an extremely childish attempt of a non-specialist to understand how the conductivity might be understood. The electrons at the flux tubes near the top of the Fermi surface are the current carriers. heff = n×h and magnetic flux tubes as current wires bring in the new elements. Also, in the standard situation one considers cylindrically symmetric solutions of the Schrödinger equation in an external magnetic field and introduces a maximal radius for the orbits, so that formally the two situations seem to be rather close to each other.
Physically the large heff and the associated many-sheeted covering of the space-time surface providing the current wire make the situation different, since the collisions of electrons could be absent in good approximation, so that the velocity of the charge carriers could be much higher than expected, as the experiments indeed demonstrate. Quantum criticality is the crucial aspect and corresponds to the situation in which the magnetic field attains a value for which a new orbit emerges/disappears at the surface of the flux tube: in this situation a dark electron phase with a non-standard value of heff can be generated. This mechanism is expected to apply also in bio-superconductivity and to provide a general control tool for the magnetic body.

1. Let us assume that the flux tubes cover the whole transversal area of the crystal and there is no overlap. Assume also that the total number of conduction electrons is fixed and, depending on the value of heff, is shared differently between transversal and longitudinal degrees of freedom. A large value of heff squeezes the electrons from transversal to longitudinal flux tube degrees of freedom and gives rise to conductivity.

2. Consider first the Schrödinger equation. In the radial direction one has a harmonic oscillator and the orbits are Landau orbits. The cross sectional area behaves like πR^2 = n_T heff/(2mω_c), giving n_T ∝ 1/heff. An increase of the Planck constant scales up the radii of the orbits, so that the number of states in a cylinder of given radius is reduced. Angular momentum degeneracy implies that the number of transversal states is N_T = n_T^2 ∝ 1/heff^2. In the longitudinal direction one has free motion in a box of length L, with states labelled by an integer n_L. The number of states is given by the maximum value N_L of n_L.

3. If the total number of states N = N_L N_T is fixed and thus does not depend on heff, one has N_L ∝ heff^2. Quanta from transversal degrees of freedom are squeezed into longitudinal degrees of freedom, which makes conductivity possible.

4. The conducting electrons are at the surface of the 1-D "Fermi sphere", and the number of conduction electrons is N_cond ≅ dN/dε × δε ≅ (dN/dε)T = N T/(2ε_F) ∝ 1/heff^4. The dependence on heff does not favor too large values of heff. On the other hand, the scattering of electrons at flux tubes could be absent. The assumption L ∝ heff increases the range over which current can flow.

5. To get a non-vanishing net current one must assume that only the electrons at the second end of the 1-D Fermi sphere are current carriers. The situation would resemble that in a semiconductor. The direction of the electric field would induce a symmetry breaking at the level of the quantum states. The situation would be like that for a mass in the Earth's gravitational field treated quantum mechanically, and the electrons would accelerate freely. The Schrödinger equation would give rise to Airy functions as its solutions.
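(A minimal numerical sketch of the counting in items 2 and 3 above, simply evaluating the stated relations πR^2 = n_T heff/(2mω_c), N_T = n_T^2 and N_L = N/N_T. The flux-tube radius, field strength and total state count below are made-up illustrative numbers, not values fitted to SmB6; the only point is how the counts scale with heff/h = n.)

from scipy.constants import e, h, m_e, pi

def landau_counts(B, R, n, N_total=1.0e8):
    # B: field in Tesla, R: assumed flux-tube radius in metres,
    # n = heff/h, N_total: assumed fixed total number of states (made up).
    heff = n * h
    omega_c = e * B / m_e                         # cyclotron frequency eB/m
    n_T = pi * R**2 * 2 * m_e * omega_c / heff    # from pi R^2 = n_T heff/(2 m omega_c): n_T ~ 1/heff
    N_T = n_T**2                                  # transversal states, ~ 1/heff^2
    N_L = N_total / N_T                           # longitudinal states, ~ heff^2
    return n_T, N_T, N_L

for n in (1, 10, 100):
    print(n, landau_counts(B=1.0, R=1.0e-6, n=n))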
What about quantum oscillations in the TGD framework?

1. Quantum oscillation refers to the de Haas-van Alphen effect - an oscillation of the induced magnetic moment as a function of 1/B with period τ = 2πe/(ℏS), where S is the momentum space area of the extremal orbit of the Fermi surface in the direction of the applied field. The effect is explained to be due to the Landau quantization of the electron energy. I failed to really understand the explanation of this source, and in my humble opinion the following arguments provide a clearer view of what happens.

2. If the external magnetic field corresponds to flux tubes, the Fermi surface decomposes into cylinders parallel to the magnetic field, since the motion in the transversal degrees of freedom is along circles. In the above thought experiment a quantization in the longitudinal direction also occurs if the flux tube has finite length, so that the Fermi surface has finite extent in the longitudinal direction. One expects on the basis of the Uncertainty Principle that the area of the cross section in momentum space is given by S ∝ heff^2/(πR^2), where πR^2 is the cross sectional area of the flux tube. This follows also from the equation of motion of an electron in a magnetic field. As the external magnetic field B is increased, the radii of the orbits decrease inside the flux tube, and in momentum space the radii increase.

3. Why do the induced magnetic moment (magnetization) and other observables oscillate?

1. The simplest manner to understand this is to look at the situation at the space-time level. The classical orbits are harmonic oscillator orbits in the radial degree of freedom. Suppose that the area of the flux tube is fixed and B is increased. The orbits have radii given by r_n^2 = (n+1/2) ℏ/(eB) and shrink. For certain field values the flux, eBA = n ℏ, corresponds to an integer multiple of the elementary flux quantum - a new orbit emerges at the boundary of the flux tube if the new orbit is near the boundary of the Fermi sphere providing the electrons. This is clearly a critical situation.

2. In the de Haas-van Alphen effect the orbit n+1 for B has the same radius as the orbit n for 1/B + Δ(1/B): r_(n+1)(1/B) = r_n(1/B + Δ(1/B)). This gives an approximate differential equation with respect to n, and one obtains (1/B)(n) = (n+1/2) × Δ(1/B). Δ(1/B) is fixed by the flux quantization condition. When the largest orbit is at the surface of the flux tube, the orbits are the same for B(n) and B(n+1), and this gives rise to the de Haas-van Alphen effect.

3. It is not necessary to assume a finite radius for the flux tube, and the exact value of the radius of the flux tube does not play an important role. The value of the flux tube radius can be estimated from the ratio of the Fermi energy of the electron to the cyclotron energy. The Fermi energy is about 0.1 eV, depending in the lowest approximation only on the density of electrons and only very weakly on temperature. For a magnetic field of 1 Tesla the cyclotron energy is 0.1 meV. The number of cylinders defined by the orbits is about n = 10^4.

4. What happens in the TGD Universe, in which the areas of the flux tubes, identifiable as space-time quanta, are finite? Could the quantum criticality of the transition in which a new orbit emerges at the boundary of the flux tube lead to a large heff dark electron phase at the flux tubes, giving rise to conduction?

1. The above argument makes sense also in the TGD Universe for the ordinary value of Planck constant. What about non-standard values of Planck constant? For heff/h = n the value of the flux quantum is n-fold, so that the period of the oscillation in the de Haas-van Alphen effect becomes n times shorter. The values of the magnetic field for which the orbit is at the surface of the flux tube are however critical, since a new orbit emerges, assuming that the corresponding cyclotron energy is near the Fermi energy. This quantum criticality could give rise to a phase transition generating a non-standard value of Planck constant. What about the period Δ(1/B) for heff/h = n? The modified flux quantization for extremal orbits implies that the area of the flux quantum is scaled up by n.
The flux changes by n units for the same increment of Δ(1/B) as for the ordinary Planck constant, so that the de Haas-van Alphen effect does not detect the phase transition.

2. If the size scale of the orbits is scaled up by n^(1/2), as the semiclassical formula suggests, the number of classical orbits is reduced by a factor 1/n if the radius of the flux tube is not changed in the transition h → heff to the dark phase. The n-sheetedness of the covering however compensates for this reduction.

3. What about the possible values of heff/h? The total value of the flux seems to give the upper bound heff/h = n_max, where n_max is the value of the magnetic flux in units of the ordinary flux quantum. For an electron and a magnetic field of B = 10 Tesla one has n ≤ 10^5. This value is of the same order as the rough estimate from the length scale over which the anomalous conduction occurs.

Clearly, the mechanism leading to the anomalously high conductivity might be the transformation of the flux tubes to dark ones, so that they carry dark electron currents. The observed effect would be a dark, quantum critical variant of the de Haas-van Alphen effect! Also bio-superconductivity is a quantum critical phenomenon, and this observation suggests a sharpening of the existing TGD based model of bio-superconductivity. Superconductivity would occur for critical magnetic fields for which the largest cyclotron orbit is at the surface of the flux tube, so that the system is quantum critical. The quantization of magnetic fluxes would quantify the quantum criticality. The variation of the magnetic field strength would serve as a control tool for generating or eliminating supra currents. This conforms with the general vision about the role of dark magnetic fields in living matter.

To sum up, a breakthrough of TGD is taking place. I have written about thirty articles during this year - more than one article per week. There is a huge garden there and the trees contain fruits hanging low! It is very easy to pick them: just shake the tree and let them drop into the basket! New experimental anomalies having a nice explanation in terms of TGD based concepts appear on a weekly basis, and the mathematical and physical understanding of TGD is advancing with great leaps. It is a pity that I must do all of this alone. I would like to share. I can only hope that colleagues could take the difficult step: admit what has happened and make a fresh start. See the article Does the physics of SmB6 make the fundamental dynamics of TGD directly visible?

At 3:43 PM, Blogger L. Edgar Otto said... Matti, there is a Yeats poem in which the roebuck would describe God as a roebuck. If there is such a bigger picture then many will see their own take on general theory in this... Lubos seems to understand this is an important breakthrough and tries to incorporate it into the methods of string theory. Of course you predicted some of this as having connections to other representations of a more general physics. You see, it is not clear to me at all in our exchanging of models that you are alone, and that confirms your long speculations. You are just a little topologically insulated is all :-)

At 10:04 PM, Anonymous said... Edgar, I am not a man of one theory. I am not a fanatic. 37 years of hard work and what is happening on the experimental side (and what is not happening on the theoretical side) force me to talk straight. It is my duty. In every breakthrough in physics some theory has been selected by its internal virtues and others have dropped away from consideration. This is cruel but unavoidable.
In the recent situation, when AMS is warning about the disastrous effects of the only-game-in-town mentality of superstringers and about forgetting the experimental reality, it is important that theories which really work would get the attention they deserve. Unfortunately, this is not the case. I of course understand the motivations of Lubos, and it is nice that he talks about the experimental side. And strings are an important part of physics, but definitely not in the manner that superstringers want to think. Strings are a key element of TGD too, but they live in 4-D space-time, and one avoids the nonsensical attempt to reduce physics to that of 10-D blackholes. By extending the superconformal symmetries one gets many nice things as a bonus: one understands the 4-dimensionality of space-time and many other things. If the only competitor of TGD trying to explain SmB6 is a theory describing it in terms of 10-D blackholes, there is very little doubt about the winner in the long run. I want to make clear that I am not saying this to underline my own contributions. It just happened to be me who first became conscious of TGD. It could have been Witten or some other name or non-name.
There are many ways to introduce the electromagnetic field in Quantum Field Theory (QFT), such as the canonical quantization method, which introduces the creation and annihilation operators by treating the amplitudes of the electromagnetic waves as operators. One way I have read in a book is different, but I don't understand it. At first, the author introduces a transformation of the quantum field $\psi$ $$\psi\rightarrow\psi'=\psi \ e^{-i\alpha(x)}$$ And then the derivative $\partial_{\mu}$ no longer transforms covariantly: $$\partial_\mu(\psi \ e^{-i\alpha(x)}) = e^{-i\alpha(x)}(\partial_\mu \psi-i\psi \ \partial_\mu\alpha(x) )$$ There is an inhomogeneous term that makes $\partial_\mu$ not covariant. So the author has to introduce a new definition of the covariant derivative, using another symbol $D_\mu$, by introducing a vector field $A_\mu$, to make $D_\mu$ covariant: $$\partial_\mu\psi\rightarrow D_\mu\psi=\partial_\mu\psi+ \frac{ie}{\hbar c}A_\mu\psi$$ $$\partial_\mu\psi^\dagger\rightarrow D_\mu\psi^\dagger=\partial_\mu\psi^\dagger- \frac{ie}{\hbar c}A_\mu\psi^\dagger$$ and checks that $D_\mu$ is covariant, $$D'_\mu\psi'=e^{-i\alpha(x)}D_\mu\psi$$ provided at the same time $$A'_\mu=A_\mu+ \frac{\hbar c}{e}\partial_\mu\alpha(x)$$ And then the author says the vector field $A_\mu$ is the vector potential of the electromagnetic field, and writes down the Lagrangian $$L_{matter+em}=L_{matter}(\partial_\mu\psi\rightarrow D_\mu\psi)+L_{em}$$ I am confused about what the author does. I can follow the derivation in this part, but I cannot understand it. Why do we need to introduce the transformation in the first place? What is the idea based on? I don't know why $A_\mu$ is the vector potential of the electromagnetic field. Is it because the properties that $A_\mu$ has turn out, after the calculation, to be the same as those of the vector potential of the electromagnetic field? I mean, we could not have known what $A_\mu$ is when we introduced it to make $D_\mu$ covariant. So how do we know what $A_\mu$ is? What is the reason that we treat $A_\mu$ as the vector potential of the electromagnetic field?

Try this explanation on for size. I emphasise that it is my own way of understanding the $U(1)$ Gauge Invariance of electrodynamics and I haven't seen it elsewhere in exactly the same words. The derivation you cite is wontedly given in the context of the semi-classical (i.e. first quantized, or before the quantum field is introduced) Dirac or Schrödinger equation for the electron: the same reasoning applies to both. These equations describe a fermion, so one cannot re-interpret their particle fields $\psi$ as a macroscopic, classically-measurable field: the Pauli exclusion principle means that you cannot copy your fermion and have $N$ (where $N$ is very big) particles in the same quantum state. Otherwise, you could in principle measure the full complex value of $\psi(x,y,z,t)$ to arbitrary accuracy by copying $\psi$ in this way and then doing a classical measurement - something that you can do with bosons (see below). What does this mean at the one particle level? It means that only $|\psi|^2$ is experimentally meaningful for one particle: you can measure the probability of finding the particle at a position in space, but the phase of $\psi$ has no such meaning. So we should be able to multiply $\psi$ by an arbitrary phasing function $e^{i\,\alpha(x,y,z,t)}$ and get something that means physically the same thing.
But, in the light of the structure of the Dirac or Schrödinger equations this seems absurd: most assuredly: $\partial_j \left[e^{i\,\alpha(x,y,z,t)}\,\psi(x,y,z,t)\right]\neq e^{i\,\alpha(x,y,z,t)}\, \partial_j\,\psi(x,y,z,t)$ but rather $\partial_j\left[ e^{i\,\alpha(x,y,z,t)}\,\psi(x,y,z,t)\right]= e^{i\,\alpha(x,y,z,t)}\, \left[\partial_j\,\psi(x,y,z,t) + i\, \psi(x,y,z,t)\,\partial_j\,\alpha(x,y,z,t)\right]$ and so such an assumption of invariance with respect to arbitrary phasing is invalidated because phasing patently "ruins" the structure of the Dirac or Schrödinger equations unless the phase factor $\alpha$ is globally constant. This is reasonable: the phase of $\psi$ is most definitely part of the solutions of the equations and plays a definite role in diffraction and other wave effects that have a bearing on the intensity field $|\psi|^2$. But one can "retrieve" the situation by postulating that the single particle is coupled to some outside field, so that the Dirac or Schrödinger equations now have the terms involving this coupled-in field: if so, we can add an arbitrary phase $\alpha(x,y,z,t)$ and keep the global equation the same by saying that, whenever we do this, we must take away a balancing $i \partial_j\,\alpha(x,y,z,t)$ from the coupled in field. Thus, we conclude, if the coupled-in field $A_j$ is at all physical, it has to give the same measurements as the field $A_j + \partial_j\,\alpha(x,y,z,t)$: the original plus any arbitrary field of the form $\partial_j\,\alpha(x,y,z,t)$, where $\alpha(x,y,z,t)$ is some suitably well-defined scalar field. But there is a field that behaves exactly like this: the electromagnetic four-potential. We can add a spatial gradient $\nabla \alpha$ to the vector part and at the same time add the scalar $\partial_t\alpha$ to the scalar electric potential without affecting the electric $\mathbf{E}$ and magnetic $\mathbf{B}$ fields. So there we have it: the assumed "gauge invariance" suggests the electromagnetic field, because the balancing field behaves the same way as gauge-transformed vector magnetic and electric potentials. So the essential reasoning here is really a hunch: if it looks like a duck and quacks, maybe it is a duck. So too with the outside field coupling into the Dirac equation. It behaves like the potential fields of Maxwell's electromagnetism, so "we" (or rather the physicist who first thought of this - my ignorance sadly hinders my telling you who) go with our hunch and see what happens when we assume the field is the electromagnetic field. Note that the same ideas do not apply to bosons. We can think of Maxwell's equations as the first-quantized equations for the photon. No one speaks of making Maxwell's equations invariant with respect to multiplication by an arbitrary phase function $e^{i\,\alpha(x,y,z,t)}$ in the same way as is done for the Dirac equation. This is even though the Maxwell equations can be cast in a quaternionic form that is identical to a zero-mass Dirac equation when the latter is thought of as two quaternionic equations coupled by a mass term - so the same mathematical trick would be just as valid with the Maxwell equations as it would with the Dirac equation. 
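(Side note, not part of the original answer: the bookkeeping quoted in the question can be checked mechanically. Below is a minimal sympy sketch, using one spatial coordinate for the covariant-derivative check and a t–x pair for the field tensor; the function names are placeholders. It confirms that $D_\mu\psi$ picks up only the overall phase when $\psi \to \psi e^{-i\alpha}$ and $A_\mu \to A_\mu + \frac{\hbar c}{e}\partial_\mu\alpha$, and that $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ is unchanged by $A_\mu \to A_\mu + \partial_\mu\chi$.)

import sympy as sp

x, t = sp.symbols('x t', real=True)
hbar, c, e = sp.symbols('hbar c e', positive=True)
alpha = sp.Function('alpha')(x)
A = sp.Function('A')(x)
psi = sp.Function('psi')(x)

# Covariant derivative with the conventions quoted in the question
D = lambda f, Af: sp.diff(f, x) + sp.I*e/(hbar*c)*Af*f

psi_p = psi*sp.exp(-sp.I*alpha)                  # psi -> psi e^{-i alpha}
A_p = A + hbar*c/e*sp.diff(alpha, x)             # A   -> A + (hbar c/e) d_x alpha
print(sp.simplify(D(psi_p, A_p) - sp.exp(-sp.I*alpha)*D(psi, A)))   # prints 0

# Gauge invariance of F_{tx} = d_t A_x - d_x A_t under A_mu -> A_mu + d_mu chi
chi = sp.Function('chi')(t, x)
At = sp.Function('A_t')(t, x)
Ax = sp.Function('A_x')(t, x)
F = sp.diff(Ax, t) - sp.diff(At, x)
F_p = sp.diff(Ax + sp.diff(chi, x), t) - sp.diff(At + sp.diff(chi, t), x)
print(sp.simplify(F_p - F))                      # prints 0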
My interpretation is this: the photon's phase is classically meaningful: photons are bosons, so in principle we can get as many as we like in the same state: so many indeed that we can measure the phase of their common "wave function" (which is now the electromagnetic field - see caveats in the afternotes below) to arbitrary accuracy with classical measurement devices such as interferometers. Every one-particle photon state corresponds EXACTLY to a macroscopic, classical electromagnetic field that we can set up and measure in detail to arbitrary accuracy in the laboratory. So, even though the phase of one photon is "hidden" just as is the phase of one electron above - there are no "phase" eigenstates and no "phase" observable - it must still be "absolute" in the "prolifically-copy-and-classically-measure" sense just described. This ends my answer, but I add some interesting related things below. Maxwell's Equations from $U(1)$ Gauge Invariance Incidentially, one can use this gauge invariance thinking to motivate or "derive" the Maxwell equations. Given suitable differentiability assumptions on the field, the simplest way to derive fields that are exactly unaffected by the gauge transformations is to form the tensor curl: $F_{\mu\,\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ (try this if you've not before: it leaves nonzero $F_{\mu\,\nu}$ that are unaffected by the gauge transformation). Now we postulate a Lorentz-invariant wave equation for a massless field: $\Box \mathbf{A} = \mathbf{0}$ is the obvious one (let's just go all the way back to D'Alembert!). You straight away have get the freespace Maxwell equations as equations fulfilled by the "physical", unaffected by gauge $F_{\mu,\nu}$. How ironic that the Pauli Exclusion Principle, which was first postulated to "stop" electrons in the Bohr atom from radiating and thus told against behavior seemingly foretold by the Maxwell equations can be used to motivate a gauge transformation that pretty much leads from the Dirac equation back to the Maxwell equations! An intriguing and uncluttered "first quantized" or "baby" formulation of QED (appealing especially to non quantum field theorists like me) can be gotten by postulating the Dirac-Maxwell equation: $\gamma^\mu\left(i \partial_\mu - q A_\mu\right) \psi + V \psi - \psi = 0$ $\partial_\nu F^{\nu\,\mu} = q\,\bar{\psi} \gamma^\mu \psi$ with the Lorenz Gauge $\partial_\mu A^\mu = 0$ i.e. intuitively, the source for the Maxwell equations is $q$ times the probability current density (here $\bar{\psi}$ stands for charge conjugated $\psi$). A first quantized electron field is now nonlinearly coupled to a first quantized photon field. This nonlinear system can be solved exactly for the hydrogen atom potential $V$ and in other situations A. O. Barut and J. Kraus, "Nonperturbative Quantum Electrodynamics: The Lamb Shift", Foundations of Physics, Vol. 13, No. 2, 1983 and this solution can indeed model the Lamb shift and spontaneous emission. The series solution comes out to something very like the standard QED perturbation terms and indeed one must step in and "renormalize" this solution too. See also the works of Hilary Booth in the late 1990s. Maxwell's Equations as the Propagation Equations Photon Wave Function For various reasons, a first quantized "photon wave function" has some different behaviours and interpretations from the fermion field $\psi$ in the Dirac equation. 
It is perfectly valid to treat the solutions of Maxwell's equations as one-photon quantum states, but one must understand that there are difficulties in handling localized position eigenstates for the photon precisely analogous to those for fermions. See the works of Iwo Bialynicki-Birula and Margaret Hawton in the late 1990s and early 2000s, for example: Iwo Bialynicki-Birula, "On the Wave Function of the Photon", Acta Physica Polonica 86, 97-116 (1994); Margaret Hawton and William E. Baylis, "Angular momentum and the geometrical gauge of localized photon states", Phys. Rev. A 71, 033816 (2005).

Maybe I get your point. You say that the electromagnetic field is the reason why the phase of the matter field is not constant, right? All the effects or information from matter interacting with the electromagnetic field amount to a change of the phase of the matter field, right? – qfzklm Aug 6 '13 at 13:17

But I still have a question. Any field interacting with matter might change both the phase and the amplitude of the matter field. So does only the electromagnetic field change only the phase? If not, how do we identify $A_\mu$ as the electromagnetic field? Maybe it is a superposition of the electromagnetic field and some other field... – qfzklm Aug 6 '13 at 13:26

The electromagnetic field can absorb any phase you put onto the fermion's wave function in such a way that the physical fields $\mathbf{E}$, $\mathbf{B}$ ... are not affected. Or, the other way around, when you make a gauge transformation on the electromagnetic field (which doesn't affect your physical fields), the matter's phase is "where" the added gauge $\mathbf{A},\phi$ "go", and, because this matter is fermionic, there is no chance of an experimenter being able to "amplify" the wavefunction to see the phase as they in principle can with bosons. – WetSavannaAnimal aka Rod Vance Aug 6 '13 at 13:30

As to your second question: as I said - the whole thing is simply a "hunch" - just as with many things in physics - like Einstein modeling his field equations on the Poisson equation and so on. Also, we are talking "electrons" here, so it's not unreasonable to assume it's the electromagnetic field!! You can then go and test this hunch with, say, the Aharonov-Bohm effect, which suggests strongly that the $\mathbf{A}$ field is the appropriate one to put into the electromagnetic momentum $p + q \mathbf{A}$. Nothing proves that it's not some mixture of fields like you say - you just "suck and see"! – WetSavannaAnimal aka Rod Vance Aug 6 '13 at 13:34

Nice answer. Just one minor thing: when you said "the photon's phase is classically meaningful" I'm sure you meant the phase of the corresponding coherent state. It may be worth a caveat to the effect that the phase of a single photon isn't meaningful, just to avoid possible misunderstanding? – twistor59 Aug 6 '13 at 14:36
Feynman Diagrams for the Masses (part 1) If you want to win a Nobel Prize in Physics by finding the unified field theory, it’s pretty obvious that you will have to learn how to make Quantum field theory (QFT) calculations. In the 1940s, Richard Feynman and Ernest Stueckelberg independently developed a notation (now known as Feynman Diagrams), that greatly ease certain calculations in QFT. Julian Schwinger complained that Feynman had made QFT accessible to the “masses”. He meant the “masses of physicists.” Between this post and the next, we’re going to take Feynman one better and make Feynman diagrams accessible to the masses, as in “masses of amateurs.” Our simplification will be to use qubits. For a reference in the arXiv literature, see Quantum Electrodynamics for Qubits, but our discussion will be simpler than this. This post will discuss Feynman diagrams the usual way. I will try to write this introduction at a low enough level, and with enough links to explanations of the jargon, that it can be understood by people unfamiliar with particle physics. Accordingly, we will skim over a lot of details, but will include links to articles that will explain further, if needed. Photon self energy by electron positron pair production As a first step, let’s describe what is so difficult about the usual way of doing Feynman diagrams. Maybe this can work as a short introduction to the method for those who haven’t seen it before. It’s fairly easy to write down Feynman diagrams. It’s more difficult to turn them into definite integrals, and then to evaluate the integrals is yet worse. We begin by discussing how Feynman diagrams enter into particle physics. Feynman diagrams are used in perturbation theory to mathematically approximate a difficult to compute quantum calculation through a series expansion. One takes the calculation one wishes to make, and turns off parts of the interaction that are difficult to compute. This gives an equation that we can solve exactly. We then turn the interaction back on, but only as an infinitesimal correction. Since it is an infinitesimal, we keep only terms that have no more than N products of the infinitesimal. If we can solve the resulting equations (they get more and more difficult as N increases), we end up with a series as a sort of polynomial of order N, for example: First 4 Feynman Diagrams for photon self energy due to electron The above illustration shows the bare (i.e. without any interactions) photon propagator on the top row, plus the single 2nd order diagram in the second row, and the two 4th order diagrams in the next two rows. The coupling constant is the fine structure constant, \alpha , and the places on the diagrams which correspond to interactions are marked with \alpha . If these diagrams compute out to give the values A, B, C, and D, then the photon propagator, corrected (for the possible creation and annihilation of electron / positron pairs in the vacuum) is given by A + B\alpha^2 + (C+D)\alpha^4 . Note that since the photon is a vector particle, the values of A, B, C, and D are not real or complex numbers but instead are more complicated, as we will discuss later. In the usual practice, the interaction part that is turned off is typically a coupling constant. A coupling constant is a constant that determines the strength of an interaction. An “interaction” means an event where quantum particles are created or annihilated. 
"Creation" and "annihilation" are the terms used in QFT to describe the event where a particle begins its existence or ends its existence. Without interactions, there is no need for creation or annihilation, and we could use the simpler theory of Quantum Mechanics instead of QFT. Instead, QFT includes quantum mechanics as a subset. After a particle is created, in QFT we can use quantum mechanics to determine how its wave function changes with time, up until the moment when it is annihilated. In QFT, one almost always uses a Green's function for the quantum mechanical wave equation that we would like the particle to obey. The physicists call such a Green's function a propagator because it shows how the wave function spreads, or "propagates", if it starts out concentrated at one point in spacetime (or the equivalent in momentum space). Momentum space comes from taking the Fourier transform of the position space wave functions. It is convenient for calculations in QFT because complicated convolution integrals become simple products when you Fourier transform them. Another way of saying that momentum space is convenient is that in momentum space, momentum and energy are conserved exactly; this is not the case in position space quantum mechanical calculations. In the usual way of doing physics, one obtains Feynman diagrams after making a guess at the Lagrangian density. Joseph Louis Lagrange was an 18th century mathematician. The Lagrangian is roughly the kinetic energy minus the potential energy. If we choose a particular form for the kinetic and potential energies we can write down the Lagrangian. From the Lagrangian we can compute the equations of motion. We do this by varying the Lagrangian, that is, by computing the Lagrangian for a set of possible paths and picking a path for which small changes to the path do not change the Lagrangian. Such a path is a possible sequence of values for the positions of our particles (and their momenta). The equations of motion will show up as a set of coupled differential equations. For a wave theory, like quantum mechanics, the kinetic and potential energies are defined at each point in space-time as a functional of the fields. With \psi the wave function, T the kinetic energy, and V the potential energy, one could write the Lagrangian as: L(t) = \int_{z=-\infty}^{+\infty}\int_{y=-\infty}^{+\infty}\int_{x=-\infty}^{+\infty}\left( T(\psi(x,y,z,t)) - V(\psi(x,y,z,t))\right)dx\;dy\;dz . Instead of getting an equation of motion for the particles, we get a set of coupled partial differential equations. The partial derivatives show up because of the dependency on position. If we turned off the interaction, the equations of motion we would get from the Lagrangian in the usual QFT technique would be something like Schrödinger's equation or Dirac's equation. The propagators (Green's functions) for these equations of motion are well known. What is not known are the propagators for the more complicated equations of motion that would come from the full Lagrangian. Such a propagator is called "exact". We will direct our effort at this sort of problem, that is, finding the exact propagators (or an approximation to them) for complicated Lagrangians. In quantum mechanics, we would compute the complex number C = \langle final | initial \rangle . The probability would then be the absolute value of C squared, that is, P = |C|^2 .
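(A quick numerical check, not part of the original post, of the claim above that convolution integrals become simple products after a Fourier transform. The arrays below are made-up samples standing in for a wave function and a propagator; circular convolution is used so that the discrete Fourier transform applies exactly.)

import numpy as np

rng = np.random.default_rng(0)
N = 64
f = rng.standard_normal(N)          # made-up "wave function" samples
g = rng.standard_normal(N)          # made-up kernel, a stand-in for a propagator

# Circular convolution done directly in "position space"
conv_direct = np.array([sum(f[m]*g[(n - m) % N] for m in range(N)) for n in range(N)])

# The same convolution via the Fourier transform: a plain product in "momentum space"
conv_fft = np.fft.ifft(np.fft.fft(f)*np.fft.fft(g)).real

print(np.max(np.abs(conv_direct - conv_fft)))   # ~1e-13: the two agree to round-off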
In QFT, the exact (or approximated) propagator is used in a similar fashion. For example, if the propagator is given by p(x,t,x',t') , where x and x’ stand for 3-vectors giving spatial positions, and the initial and final conditions are given by \psi_i(x), \psi_f(x) , at times t_i and t_f , respectively, then the complex number computed from the propagator is: \int\psi_f(x) p(x,t_f,x',t_i)\psi_i(x)\; d^3x\;d^3x' where the integral is the definite integral twice over all space. The probability is then given by P=|C|^2 as in the quantum mechanical calculation. Of course quantum field theory also allows calculations where the initial and final states are not the same particle states. These are used to model experiments where the type of particle is changed by the interaction. The method is similar to the above, but the propagator p(x,t,x',t') refers to different sorts of quantum states. Such a “mixed” propagator would be zero when the interaction is turned off. T and V can be depend on \psi(x,y,z,t) in complicated ways, for example, they can depend on its derivatives. In addition, \psi is only rarely a simple real or complex valued function (which represent what the physicists call a scalar particle), but instead would be a spinor for a spin-1/2 fermion or a vector for a vector boson such as a photon. The use of spinors is very important and is taught to undergraduate physicists though undergraduate mathematics majors may get a degree without learning of them. Because spinors and vector bosons are complicated, most QFT textbooks begin with scalars. No elementary scalar particles are known, so this is a toy model used primarily to teach the subject. The problem with the full theory is that it is so complicated that even simple diagrams like the following result in amazingly messy calculations, which we will illustrate. Our approach will be different from most introductions to QFT. We will first briefly discuss the subject in its full glory, examining the above diagram, and then reduce to a simplification that is not just a toy model, but is the QFT on qubits. For reference, the reader is directed to the recent arXiv paper by Iwo Bialynicki-Birula and Tomasz Sowiński, Quantum Electrodynamics for Qubits. So now we will turn the above Feynman diagram into a calculation. To do this, we use Feynman rules. These rules convert the elements of the diagram into mathematical objects that will be present in a definite integral. On solving the definite integral, one finds the result of the computation. The vast majority of work with Feynman diagrams is done in Momentum Space, that is, with momentum and energy instead of space and time. Calculations are much simpler in momentum space. In fact, in quatum mechanics it is easy to forget that there is anything other than momentum space and sometimes people do forget. For example, see “coalesce into a single blob”, with respect to Bose-Einstein condensation. The source of Feynman diagrams is approximately as follows. First, one chooses a quantum mechanical theory for how a particle moves. One then finds a propagator for this theory. In a theory with several different particles, one might need several different kinds of propagators. The Feynman rules will convert lines and arcs into these propagators. Second, one chooses a set of interactions. The interactions define what can be created when other things are annihilated. The interactions tell how the lines and arcs are hooked together, that is, they define the vertices. 
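(Again not from the original post: a toy version of the recipe "apply the propagator to the initial state, then overlap with the final state and square", for a free particle in one dimension with ħ = m = 1. The propagator is applied in momentum space, where it is just the multiplicative phase exp(-i k² t/2); the grid sizes and packet parameters are arbitrary choices, and the overlap uses the conjugated final wave function, which the post writes without the conjugation.)

import numpy as np

# Grid and initial Gaussian packet (hbar = m = 1)
N, L = 1024, 200.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)

psi_i = np.exp(-(x + 20)**2/4.0) * np.exp(1j*1.0*x)     # packet at x = -20 moving right with k0 = 1
psi_i /= np.sqrt(np.sum(np.abs(psi_i)**2)*dx)

# Free-particle propagator applied in momentum space: psi(k,t) = exp(-i k^2 t/2) psi(k,0)
t = 40.0
psi_t = np.fft.ifft(np.exp(-1j*k**2*t/2.0)*np.fft.fft(psi_i))

# Amplitude C and probability P = |C|^2 for a chosen final state (a packet centred at x = +20)
psi_f = np.exp(-(x - 20)**2/4.0) * np.exp(1j*1.0*x)
psi_f /= np.sqrt(np.sum(np.abs(psi_f)**2)*dx)
C = np.sum(np.conj(psi_f)*psi_t)*dx
print(abs(C)**2)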
For each type of vertex, a complex constant is given (the coupling constant), and if the arcs coming into the vertex have spinor or vector structure, then a way for these to relate to one another must be included in the vertex. Generally, one defines the propagators and interactions by using a Lagrangian, but the details of this need not concern us here. For any given set of Feynman rules, we could define a Lagrangian that would give those rules, and vice versa. They are alternative ways of describing the same thing. As I mentioned above, the rules one uses in momentum space are much simpler than the rules in position space. The primary reason for this is that when one Fourier transforms a derivative, one gets a nice simple factor of the momentum. In the usual QED theory one also has to treat the virtual particles differently from the “real” ones that are “on the mass shell”. For our purposes, it is enough to understand the virtual particles. With that said, the Feynman rules for QED, the interaction between photons and electrons (and positrons) are simply: Feynman rules for QED (virtual particles only) The above, which are from Peskin and Schoeder’s book. The mathematical objects on the right need a little explaining. The i\epsilon is an infinitesimally small imaginary constant that comes in when one does the Fourier transform. The sign specifies whether the resulting pole will be taken to be just barely above or just barely below the real line in the complex plane. The “p” is the momentum carried by a particle. In relativity, momentum is a 4-vector, so there are three components. One usually writes the four components (p_0, p_1, p_2, p_3) . This is said to be convenient for summation conventions, but it seems to me that summation conventions can be what we wish them to be and don’t require integer designations. So I use the more geometric nomenclature (p_t, p_x, p_y, p_z) , which follows that of the Pauli matrices, which fit better with the “east coast metric”. But to be compatible with Peskin & Schroeder, I’ll keep to the west coast metric here. An advantage of using t,x,y,z to designate the components of momentum (and position), is that I can use the integers to designate different momenta without being at all confusing to the student. So the relativistic conventions I will use will be: East coast metric relativity definitions The reader should not take the above two figures as definitive. My purpose here is simply to show what goes into QFT calculations even as simple as the first correction to the photon propagator. (Did I mention that I’m ignoring the term that has a vacuum bubble?) My objective here is not to calculate it. If you would like to correct one of the above, please put it in the comments and I may or may not get around to correcting it. In addition to these problems, there are several different conventions for how to define electricity and magnetism. In short, even the conventions available for the simple theory of QED is a mess. In addition to the above, one also requires that momentum is conserved at the vertices, and that “undetermined loop momenta” are integrated over \int d^4p/(2\pi)^4 . The conservation of momentum is obtained by adding to the vertices a delta function on the sum of the 4-momenta coming into the vertex. For example, if the three propagators coming into a vertex bring momenta of k, p, q , then we include a 4-delta function \delta(k+p+q) = \delta(k_t+p_t+q_t)\delta(k_x+p_x+q_x)\delta(k_y+p_y+q_y)\delta(k_z+p_z+q_z). 
This delta function makes sure that energy and momentum are (each separately) conserved at the vertex. This is part of the reason that momentum space is simpler: energy and momentum are conserved. Finally, the \gamma^\mu are a set of four 4×4 matrices called the Dirac gamma matrices. In places where one is supposed to add a real or complex (i.e. scalar) number to a matrix, one first multiplies the scalar by the unit 4×4 matrix. As a 4-vector of matrices, we will use x,y,z,t notation to reference the four of them. As 4×4 matrices, the four individual matrices are somewhat arbitrary, they are just a representation of the Lorentz group. In my own work I use geometric notation which does not require the choice of a representation. So it is with some repugnance that here I use a particular representation: Gamma matrices, east coast metric, chiral In this choice of representation of the gamma matrices, the product that appears in the electron propagator, p_\mu\gamma^\mu , expands out to be a 4×4 matrix. This sort of dot product appears over and over in Feynman diagrams. Feynman introduced an abbreviated notation for it called the “slash” notation, which can be expanded out (in our choice of gamma matrices) as follows: Slashed momentum Putting together all the Feynman rules, let’s write down the first self energy correction to the photon propagator. The Feynman diagram, as before, is: Photon self energy by electron positron pair production When using the Feynman rules, one arranges for the “goes into” of one part of the diagram to fit into the “goes outa” of the next. The first place where this has to happen is at the ends of the photon propagators. Each photon propagator makes a factor g_{\mu\nu} . The \mu, \nu are just dummy variables that will be summed over when we add vertices to the ends of the propagator (where the photon is created and annihilated). In our diagram, we have two photon propagators, so we will end up with four dummy variables. I will use \mu, \nu, \alpha, \beta for these as follows: Photon self energy with photon propagator indices labeled In the above, the left vertex is associated with the \beta index, and the right vertex uses the \mu index. The finished correction to the propagator will have \alpha, \nu indices. These indices match up with the indices given in the Feynman diagrams under “QED Vertex”. Once we get the indices correctly matched to the Feynman diagram, the second place where “goes into” has to match up with “goes out of” is in the 4×4 matrix structure of the fermion propagators. Each of the electron propagators is a 4×4 matrix of complex numbers. The QED Vertex is also a 4×4 matrix. We have to make sure that these matrices are used in the correct order. With the photon propagators, it didn’t matter what order we put the two indices in. This is because the g_{\mu\nu} matrix is symmetric. This is not the case for the fermion propagator. The p_\mu\gamma^\mu matrix is not symmetric. This is why the fermion propagators have an arrow sign. The arrow sign indicates the direction in which we insert the 4×4 matrices. For the diagram above, the fermions form a loop. Starting at the left vertex, the order around the loop is “left vertex”, “top electron propagator”, “right vertex”, “bottom electron propagator”, and back to the “left vertex”. This is the order in which we must put down the 4×4 matrices associated with these objects. 
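(A numerical spot-check, not from the post, of the matrix machinery just described. It builds the chiral/Weyl representation of the gamma matrices with the metric diag(+1,−1,−1,−1) — the post itself moves between east- and west-coast conventions, so treat the signs as an assumption — and verifies the Clifford algebra, the identity that a slashed momentum squares to p·p times the identity, and the cyclic property of the trace that lets the fermion loop be started anywhere.)

import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

# Chiral (Weyl) representation, metric eta = diag(+1,-1,-1,-1)
g0 = block(Z, I2, I2, Z)
gammas = [g0] + [block(Z, s, -s, Z) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Clifford algebra: {gamma^mu, gamma^nu} = 2 eta^{mu nu} * 1
for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2*eta[mu, nu]*np.eye(4))

# Slash notation: pslash = p_mu gamma^mu, and pslash @ pslash = (p.p) * 1
p = np.array([1.3, 0.2, -0.7, 0.4])           # a made-up 4-momentum (p_t, p_x, p_y, p_z)
p_lower = eta @ p
pslash = sum(p_lower[mu]*gammas[mu] for mu in range(4))
assert np.allclose(pslash @ pslash, (p @ eta @ p)*np.eye(4))

# Cyclic property of the trace used for the fermion loop: tr(AB pslash) = tr(pslash AB)
A, B = np.random.randn(4, 4), np.random.randn(4, 4)
assert np.isclose(np.trace(A @ B @ pslash), np.trace(pslash @ A @ B))

print("all checks passed")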
Using the momenta as labeled in the diagram, the terms, in order and separated by parentheses, are: electron loop with 4x4 matrices noted The alert reader will notice that, taken as a product of 4×4 matrices, the above product is arbitrary in that I chose to begin it with the left most vertex. And in general, calculations in QFT reduce to complex numbers, not 4×4 matrices. There is another step, which is to take the trace of this product of matrices.  The trace arises because we have to take a sum over all possible polarizations of the fermion loop.  You can break the loop at any point, and when you sum over polarizations, you end up taking the trace.  Since the trace of AB is the same as the trace of BA, it doesn’t matter where we start the loop, the trace function will give the same result. Okay, we’ve arranged for the 4×4 matrices to multiply in the correct order (i.e. the electron propagators are hooked up correctly). We’ve arranged for the photon indices to fit correctly into the vertices (i.e. the photon propagators are hooked up correctly). We can now write down the calculation needed: Calculation for 1st correction to photon propagator The above is a definite integral of 4×4 matrix products with 8 variables of integration. The delta functions will simplify four of the integrals. This will leave a delta function that factors out, i.e. \delta(q_1 - q_2) . What remains will still be quite a mess.  Various calculational tricks will allow computation of the matrix trace without having to actually multiply it out.  The equivalent calculation in position space is considerably worse. This discussion of Feynman diagrams hasn’t included various things that are needed to account for symmetry between diagrams. The above has a minus sign added because of the presence of a fermion loop. God knows what I’ve left out. Surely those in the know will complain in the comments. But these are details. What I’ve tried to do with this post is to give an idea what sort of things go on when one converts a Feynman diagram into a calculation. In the following post we will redo this, but with qubits, and show how the theory simplifies to the point that amateurs can discover new results. Filed under physics 11 responses to “Feynman Diagrams for the Masses (part 1) 1. carlbrannen I sat down to continue writing and I already found errors. In switching the metric to east coast, I need to switch the Feynman rules also, otherwise integrals don’t blow up like they’re supposed to… I’ll get around to fixing it probably later today. 2. nc This is very clear and interesting. One thing about quantum field theory that worries me is the poor way that the theory is summarized: either with too much abstract maths which aren’t applied to concrete examples, or else with no maths at all. It should be possible to concisely explain it, and that’s what you’re doing. One problem with popular Feynman diagrams is that they’re plots of time (ordinate, or y-axis) versus spatial distance (abscissa or x-axis). Therefore, it’s a funny convention to draw particles as horizontal lines on Feynman diagrams, because such particles aren’t moving in time, just in distance, which is impossible. [snip]For rest of comment, see Long Nigel Post #1. For Nigel’s copy of the comment, with further comments, see Nigel’s blog post on energy conservation comments.[/snip] 3. Pingback: Feynman Diagrams for the Masses (part 2) « Mass 4. 
dkv I also find the website very interesting especially the virtual photons and virtual electrons.Why is momentum conservation necessary ? If energy conservation can be violated in time then momentum can be violated in space. 5. 3zy Little mistake. Last matrix element is not -pz. [Carl: And that’s not going to be easy to fix…] 6. Ulla How can these diagrams be used for the wave function? Can they be combined with twistors in some way? 7. Mitchell Porter Each diagram describes a sequence of changes in a multi-particle wavefunction. For example, in Carl’s first diagram (if you read left to right), you start out with a single-photon wavefunction (represented by the wavy line), it changes into a one-electron,-one-positron wavefunction (the two straight lines with arrows), and then it changes back into a single-photon wavefunction (wavy line again). The diagram describes both a process in space (photon becomes electron-positron pair which recombine) as well as a quantum probability (which comes from multiplying the algebraic expressions in the diagram labeled “Feynman QED Rules”). A wavefunction is a superposition of all the possibilities. You can extend that view to talk about a superposition of possible “histories” or “processes”. The standard form of quantum mechanics involves the Schrödinger equation: you start with a wavefunction, and then that equation tells you how it changes over time. But the other way to do it is to look at all the possible sequences of events (the “histories”). Each history gets a complex number, and then the complex numbers add if the histories arrive at the same final state. This is the “sum over histories” or “path integral”, and mathematically it’s exactly the same as the other picture. When particles can interact, the possible histories can become extremely complex. But in quantum electrodynamics (electron-photon interactions), the more often they interact, the less probable that history is. So the “higher order diagrams” only matter for precision calculations. Feynman wrote a famous book about this, “QED”, which is meant to explain it all to the public. Twistor theory also has “twistor diagrams” which are exactly the same idea. In fact they are now being used in QCD (gluons) and string theory. 8. Ulla Thanks for the answer, but you said nothing about the combination Feynman – twistors. I’m a novis in these things, as you certainly saw. I hunt the qubit thing and the degrees of freedom (P- V-axis). In Wikipedia they said that Feynman was no good for solitons and superconduction in general, and that is what it is about in my meridians and the nerve pulse. The twistors are no good for particles, atoms and molecules. But they can have many superpositions at a time. It seems to me now that it are the twistors that are responsible for the wave collapse, through braiding. The wave is repelling, the particle is attracting? 9. carlbrannen Wikipedia says that Feynman diagrams are “no good for solitons” because Feynman diagrams are for perturbation theory and perturbation theory is no good for solitons and similar bound states. My efforts have been in extending the methods of Feynman diagrams to exactly those deeply bound states. The idea is that you look at what happens in perturbation theory to give you an idea of what must happen in strongly bound states. See my latest paper on “Spin Path Integrals and Generations” for example calculations of bound states using Feynman path integrals. 
Another way of describing the idea is that in those bound states, you have to agree that certain things are as true in bound states as they are in perturbational quantum mechanics, especially superposition. Now standard quantum mechanics handles bound states beautifully; the real problem is only that Feynman diagrams and modern elementary particle theory are more of a perturbational way of looking at things. So I take the tools defined by what we know from perturbation theory and turn them around to apply them to bound states. As far as twistors go, I don't think I know enough to competently comment on them except to say that Marni Sheppeard seems to think they're useful, and it sure looks to me like they're similar to the kind of stuff I'm doing. 10. Ulla The problem is the relaxation that happens in the braidings. The energy is freed. Can you really take the tools and just turn them around? There is also the time factor. Also the hierarchical problem is there. The vectors/spinors are of different amplitude? What determines it? It sounds logical to think this process is asymmetric and it cannot be turned around. 11. Abdul I need Feynman diagrams for atomic and nuclear (bound state) reactions. Please provide some, if you can.
Perturbation Theory (Quantum Mechanics)

Only a tiny fraction of problems in quantum mechanics can be solved analytically. When an exact solution cannot be obtained, one may seek approximate answers through a variety of means, perturbation theory being one of them. Loosely, it happens that the behavior of most quantum systems changes in a fairly regular way upon a slight modification. When a system can be described by a core part, which is to be solved exactly, or nearly so (perhaps using variational principles to determine the ground-state wavefunction), and a smaller part, called the perturbation, we can apply the methods of perturbation theory to determine in an approximate way the behavior of the perturbed system. Perturbation Theory also describes a more general mathematical framework for obtaining perturbative solutions to other types of systems.

The core of perturbation theory, as applied to quantum mechanics, is present in the comparatively simple time-independent nondegenerate case. We need to be quite careful around degeneracies -- fortunately we provide a framework which largely handles this for us. There is a cost: we are now required to solve the Eigenvalue Problem directly. Time-dependent perturbation theory is even more involved, but it allows us to deal with an extraordinary variety of interesting systems (indeed, five Nobel Prizes have been awarded for resonance of two-state systems in time-dependent potentials: I. I. Rabi on molecular beams and nuclear magnetic resonance; Bloch and Purcell on B fields in atomic nuclei and nuclear magnetic moments; Townes, Basov, and Prokhorov on masers, lasers, and quantum optics; Kastler on optical pumping; and Lauterbur and Mansfield (Medicine) for MRI). One of the most useful formulas to arise out of this topic is Fermi's Golden Rule for transition probabilities.

In what follows we largely follow Sakurai, so please refer to Modern Quantum Mechanics if you're looking for a text.

Consider a system described by a Hamiltonian \hat{H} which may be split like so:

\hat{H} = \hat{H}_0 + \hat{V}

The unperturbed Hamiltonian \hat{H}_0 is assumed to have known eigenkets and eigenvalues, \hat{H}_0 |n^0\rang = E_n^0 |n^0\rang. (Strictly the unperturbed kets should be written |n^{(0)}\rang; here we omit this cumbersome notation and write |n^0\rang. Where an expression is ambiguous, assume dimensional consistency.)

We wish to solve, approximately, the eigenstate problem for the full Hamiltonian:

(\hat{H}_0 + \lambda \hat{V}) |n\rang = E_n|n\rang

We make use of a parameter λ, which may be 'dialed' from 0 to 1. In practically all systems there is a smooth transition between the perturbed and unperturbed system. (Note: for this method to be valid, these eigenvalues and eigenkets must be analytic in a region of the complex plane around λ = 0.) As the parameter λ increases from 0, the energy of the nth level shifts from its unperturbed value. We define the energy shift as

\Delta_n = E_n - E_n^0

where we note that Δn and En are functions of the perturbation parameter λ. We rearrange the Schrödinger equation like so:

(E_n^0 - \hat{H}_0)|n\rang = (\lambda \hat{V} - \Delta_n)|n\rang

We would like to invert the operator (E_n^0 - \hat{H}_0), but it is singular: acting from the left with \lang n^0| gives

\lang n^0|(E_n^0 - \hat{H}_0)|n\rang = (E_n^0 - E_n^0)\lang n^0 | n \rang = 0 = \lang n^0|(\lambda \hat{V} - \Delta_n)|n\rang

We can define a complementary projection operator \hat{\phi}_n, to project states away from the unperturbed state:

\hat{\phi}_n \equiv 1 - | n^0 \rang \lang n^0 |

The inverse operator \frac{1}{E_n^0 - \hat{H}_0} is then well defined whenever it acts with \hat{\phi}_n standing to its right.
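(A tiny numerical illustration of the last two statements, with made-up nondegenerate energies: E_n^0 - \hat{H}_0 annihilates |n^0\rang and so cannot be inverted on the whole space, but its pseudo-inverse realizes 1/(E_n^0 - \hat{H}_0) on the complementary subspace projected out by \hat{\phi}_n.)

import numpy as np

d, n = 5, 2
E0 = np.array([0.0, 1.0, 2.5, 4.0, 7.0])        # made-up nondegenerate unperturbed energies
H0 = np.diag(E0)
ket_n0 = np.eye(d)[:, n]                        # |n^0> in the unperturbed basis

phi_n = np.eye(d) - np.outer(ket_n0, ket_n0)    # projector 1 - |n^0><n^0|
M = E0[n]*np.eye(d) - H0                        # singular operator E_n^0 - H_0

print(np.allclose(M @ ket_n0, 0))               # True: |n^0> is annihilated, so M has no inverse
# The pseudo-inverse plays the role of 1/(E_n^0 - H_0) restricted to the complement of |n^0>:
print(np.allclose(np.linalg.pinv(M) @ M, phi_n))  # True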
The following relation is then correct in every direction except that of | n^0 \rang: |n\rang = \frac{1}{E_n^0 - \hat{H}_0} \hat{\phi}_n(\lambda \hat{V} - \Delta_n) | n \rang We correct this by adding a term in the direction of | n^0 \rang: |n\rang = c_n(\lambda) | n^0 \rang + \frac{1}{E_n^0 - \hat{H}_0} \hat{\phi}_n(\lambda \hat{V} - \Delta_n) | n \rang with c_n(\lambda) = \lang n^0|n\rang, so that the expression reduces to the unperturbed ket as λ → 0. In fact, since the equation is homogeneous, cn is a free variable, and we may set cn(λ) = 1 and normalize the ket at the end of the calculation. Simplifying, we obtain the two equations which the rest of the method is based on: 1: \quad\quad |n\rang = | n^0 \rang + \frac{\hat{\phi}_n}{E_n^0 - \hat{H}_0}(\lambda \hat{V} - \Delta_n) | n \rang and, from \lang n^0|(\lambda \hat{V} - \Delta_n)|n\rang = 0, 2: \quad\quad \Delta_n = \lambda \lang n^0|\hat{V}|n\rang The basic strategy is to expand |n\rang and Δn in powers of λ and then match the appropriate coefficients. This is a perfectly valid strategy, since the whole derivation has made no reference to the value of λ, and two power series in λ can agree for every λ only if they agree coefficient by coefficient. Thus | n \rang = |n^0\rang + \lambda | n^1 \rang + \lambda^2 | n^2 \rang + \dots \Delta_n = \lambda \Delta_n^1 + \lambda^2 \Delta_n^2 + \dots where | n^m \rang, \Delta_n^m stand for the mth order corrections. Substituting our expanded eigenstates and energy shifts into equation 2, and equating the coefficients of powers of λ, yields O(\lambda^m): \Delta_n^m = \lang n^0|\hat{V}|n^{m-1}\rang Turning our attention to an expanded equation 1, we have |n^0\rang + \lambda | n^1 \rang + \lambda^2 | n^2 \rang + \dots = |n^0\rang + \frac{\hat{\phi}_n}{E_n^0 - \hat{H}_0}(\lambda \hat{V} - \lambda \Delta_n^1 - \lambda^2 \Delta_n^2 - \dots) \times (|n^0\rang + \lambda | n^1 \rang + \lambda^2 | n^2 \rang + \dots ) At this point the strategy becomes clear. \Delta_n^m may be evaluated with only the (m − 1)th order eigenket. The mth order eigenket may be obtained knowing only up to the mth order energy shift. This procedure may continue for as long as is desired (in fact, it may even be possible to evaluate the sum analytically). If we write down, to second order, the explicit expansion for Δn and |n\rang, we can make several interesting qualitative observations. \Delta_n \equiv E_n - E_n^0 = \lambda V_{nn} + \lambda^2 \sum_{k \neq n} \frac{|V_{nk}|^2}{E_n^0 - E_k^0} + O(\lambda^3), \qquad V_{nk} \equiv \lang n^0|\hat{V}|k^0\rang \neq \lang n | \hat{V} | k \rang The matrix elements Vnk are taken with respect to the unperturbed kets. For the perturbed ket, the expansion is |n\rang = |n^0\rang + \lambda \sum_{k \neq n} |k^0\rang \frac{V_{kn}}{E_n^0 - E_k^0} + \lambda^2 \left({\sum_{k \neq n}\sum_{l \neq n} \frac{|k^0\rang V_{kl}V_{ln}}{(E_n^0 - E_k^0)(E_n^0 - E_l^0)} - \sum_{k \neq n} \frac{|k^0\rang V_{nn}V_{kn}}{(E_n^0 - E_k^0)^2}}\right) + O(\lambda^3) Examining the last equation closely, we see that the perturbation has the effect of mixing the previously unperturbed eigenkets. The second order energy shift (the second term in the second last equation) exhibits interesting behavior. The energy levels of kets which are "mixed" by \hat{V} (where Vnk is nonzero) are repelled away from each other. This is a special case of the no-level-crossing theorem, which states that a pair of energy levels connected by a perturbation do not cross as the strength of the perturbation is varied.
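The coefficient-matching machinery above is easy to sanity-check numerically on a small matrix Hamiltonian. The following sketch is a hypothetical illustration (not part of the original wiki page): it builds a Hermitian H = H_0 + λV with an arbitrary diagonal H_0 and a random Hermitian V, evaluates the first- and second-order energy shifts from the formulas above, and compares them with exact diagonalization.

```python
import numpy as np

# Minimal numerical check of nondegenerate Rayleigh-Schrodinger perturbation theory:
# compare first- and second-order energy shifts with exact diagonalization.
rng = np.random.default_rng(0)
N, lam = 6, 0.01                          # dimension and perturbation strength (arbitrary)

E0 = np.linspace(0.0, 5.0, N)             # nondegenerate unperturbed energies
H0 = np.diag(E0)                          # H0 is diagonal in the unperturbed basis
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
V = (A + A.conj().T) / 2                  # Hermitian perturbation; V[n, k] = V_nk

E_exact = np.linalg.eigvalsh(H0 + lam * V)    # sorted ascending, matches order of E0

for n in range(N):
    d1 = V[n, n].real                                    # Delta_n^1 = V_nn
    d2 = sum(abs(V[n, k])**2 / (E0[n] - E0[k])           # Delta_n^2 = sum_k |V_nk|^2 / (E_n^0 - E_k^0)
             for k in range(N) if k != n)
    E_pt = E0[n] + lam * d1 + lam**2 * d2
    print(f"n={n}: exact={E_exact[n]:+.6f}  2nd-order={E_pt:+.6f}  "
          f"diff={E_exact[n]-E_pt:+.2e}")
```

For this choice of λ the residual disagreement is of order λ^3, the size of the first neglected term.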
It is fairly easy to show that the perturbation expansions will converge if E \leq E_0 (R. G. Newton 1982, p. 233). This page was recovered in October 2009 from the Plasmagicians page on Perturbation_Theory_(Quantum_Mechanics) dated 07:24, 17 December 2006.
74d076148a4fa18c
Monday, April 22, 2013 Listen to Spacetime Quantum gravity researcher at work. We normally think about geometry as distances between points. The shape of a surface is encoded in the distances between the points on it. If the set of points is discrete, then this description has a limited resolution. But there’s a different way to think about geometry, which goes back about a century to Hermann Weyl. Instead of measuring distances between points, we could measure the way a geometric shape vibrates if we bang it. From the frequencies of the resulting tones, we could then extract information about the geometry. In maths speak we would ask for the spectrum of the Laplace-operator, which is why the approach is known as “spectral geometry”. Under which circumstances the spectrum contains the full information about the geometry is today still an active area of research. This central question of spectral geometry has been aptly captured in Mark Kac's question “Can one hear the shape of a drum?” Achim Kempf from the University of Waterloo recently put forward a new way to think about spectral geometry, one that has a novel physical interpretation which makes it possibly relevant for quantum gravity. The basic idea, which is still in a very early phase, is the following. The space-time that we live in isn’t just a classical geometric object. There are fields living on it that are quantized, and the quantization of the fluctuations of the geometry itself is what physicists are trying to develop under the name of quantum gravity. It is a peculiar, but well established, property of the quantum vacuum that what happens at one point is not entirely independent from what happens at another point because the quantum vacuum is a spatially entangled state. In other words, the quantum vacuum has correlations. The correlations of the quantum vacuum are encoded in the Greensfunction which is a function of pairs of points, and the correlations that this function measures are weaker the further away two points are. Thus, we expect the Greensfunction for all pairs in a set of points on space-time to carry information about the geometry. Concretely, consider a space-time of finite volume (because infinities make everything much more complicated), and randomly sprinkle a finite number of points on it. Then measure the field's fluctuating amplitudes at these points, and measure them again and again to obtain an ensemble of data. From this set of amplitudes at any two of the points you then calculate their correlators. The size of the correlators is the quantum substitute for knowing the distance between the two chosen points. Achim calls it “a quantum version of yard sticks.” Now the Greensfunction is an operator and has eigenvalues. These eigenvalues, importantly, do not depend on the chosen set of points, though the number of eigenvalues that one obtains does. For N points, there are N eigenvalues. If one sprinkles fewer points, one loses the information about structures at short distances. But the eigenvalues that one has are properties of the space-time itself. The Greensfunction however is the inverse of the Laplace-operator, so its eigenvalues are the inverses of the eigenvalues of the Laplace-operator. And here Achim’s quantum yard sticks connect to spectral geometry, though he arrived there from a completely different starting point. This way one rederives the conjecture of (one branch of) spectral geometry, namely that the spectrum of a curved manifold encodes its shape. That is neat, really neat.
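As an illustrative aside (mine, not from the post), the reciprocal relation between the eigenvalues of the Greensfunction and those of the Laplace-operator can be checked in a toy discretization; the 1D grid, spacing and Dirichlet boundaries below are arbitrary choices.

```python
import numpy as np

# Toy version of the "quantum yard sticks" idea on a 1D grid: build a discrete
# Laplacian with Dirichlet boundaries, take the inverse of -L as the (static)
# Green's function, and check that its eigenvalues are the reciprocals of the
# Laplacian eigenvalues.
N, dx = 50, 0.1
main = -2.0 * np.ones(N)
off = np.ones(N - 1)
L = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2   # discrete d^2/dx^2

G = np.linalg.inv(-L)                          # Green's function of -d^2/dx^2

lam_L = np.sort(np.linalg.eigvalsh(-L))        # Laplacian spectrum, ascending
lam_G = np.sort(np.linalg.eigvalsh(G))[::-1]   # Green's function spectrum, descending

print(np.allclose(lam_G, 1.0 / lam_L))         # True: the spectra are reciprocal
# With only N sample points we get N eigenvalues; coarser sampling of the same
# geometry simply truncates the spectrum at short wavelengths.
```

In the same spirit, knowing the correlator spectrum on a sprinkling of points is, in this toy setting, exactly as good as knowing the truncated Laplacian spectrum.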
But it’s better than that. There exist counter examples for the central conjecture of spectral geometry, where the shape reconstruction was attempted from the scalar Laplace-operator's spectrum alone but the attempt failed. Achim makes the observation that the correlations in quantum fluctuations can be calculated for different fields and argues that to reconstruct the geometry it is necessary to not only consider scalar fields, but also vector and symmetric covariant 2-tensor fields. (Much like one decomposes fluctuations of the metric into these different types.) Whether taking into account also the vector and tensor fields is relevant or not depends on the dimension of the space-time one is dealing with; it might not be necessary for lower-dimensional examples. In his paper, Achim suggests that to study whether the reconstruction can be achieved one may use a perturbative approach in which one makes small changes to the geometry and then tries to recover these small changes in the change of correlators. Look how nicely the physicists’ approach interlocks with thorny mathematical problems. What does this have to do with quantum gravity? It is a way to rewrite an old problem. Instead of trying to quantize space-time, one could discretize it by sprinkling the points and encode its properties in the eigenvalues of the Greensfunctions. And once one can describe the curvature of space-time by these eigenvalues, which are invariant properties of space-time, one is in a promising new starting position for quantizing space-time. I’ve heard Achim giving talks about the topic a couple of times during the years, and he has developed this line of thought in a series of papers. I have no clue if his approach is going to lead anywhere. But I am quite impressed how he has pushed forward the subject and I am curious to see how this research progresses. saibod said... Without having looked into Achim's work in detail, I wonder: how does it relate to Connes' reconstruction theorem, which proves that a Riemannian manifold can be recovered from its underlying spectral triple? Plato Hagel said... Yes I like this approach Bee. The conversion process still has to have specifics and the theoretic involved in terms of it's construction would have to have some association for the correlation to work. For example: Often, an increase or decrease in some level in this information is indicated by an increase or decrease in pitch, amplitude or tempo, but could also be indicated by varying other less commonly used components. See: Sonification This conversion process is very important. Another example would be: HiggsJetEnergyProfileCrotale and HiggsJetEnergyProfilePiano use only the energy of the cells in the jet to modulate the pitch, volume, duration and spatial position of each note. The sounds being modulated in these examples are crotales (baby cymbals) and a piano string struck with a soft beater, then shifted up in pitch by 1000 Hz and `dovetailed'. In HiggsJetRythSig we are simply travelling steadily along the axis of the jet of particles and hearing a ping of crotales for each point at which there is a significant energy deposit somewhere in the jet. HiggsJetEnergyGate uses just the energy deposited in the jet's cells. At each time point (defined by the distance from the point of collision) the energy is used to define the number of channels used from the piano sound file. So high energy can be heard as thick, burbly sound whilst low energy has a thinner sound. 
See: Listen to the decay of a god particle It is exciting for me to see your demonstration in concert with the approach of quantum gravity. Plato Hagel said... You might like this link below as well. We think of space as a silent place. But physicist Janna Levin says the universe has a soundtrack -- a sonic composition that records some of the most dramatic events in outer space. (Black holes, for instance, bang on spacetime like a drum.) An accessible and mind-expanding soundwalk through the universe. See: Janna Levin: The sound the universe makes kneemo said... Indeed the paper appears to describe techniques dual to those used by Connes in his noncommutative geometrical studies of the standard model and gravity. The two approaches are related through the fact that the Dirac operator can be thought of as the square-root of the Laplacian. Kempf prefers to use the Laplacian while Connes uses the Dirac operator in his spectral triples (A,H,D) to encode the spectral geometry. In Connes spectral triple, A is the operator algebra of functions over the given manifold, H is the Hilbert space on which it acts on and D is the Dirac operator whose spectrum is used to recover the structure of the manifold, much like Kempf uses the spectrum of the Laplacian to recover the "shape" of the manifold. In Connes approach to the standard model and gravity, to recover the gauge group of the standard model he considers a product space M x F, where F is a finite geometry, related to the "sprinkling of points" mentioned in Kempf's paper that has a matrix interpretation. Specifically, Connes considers the algebra of functions A over the 6-point space in his model, where A=C+H+M_3(C). Here, C is the set of complex numbers, H the algebra of quaternions (transforming in M_2(C)) and M_3(C) the set of 3x3 matrices over the complex numbers, acting on the one, two and 3-point spaces respectively. Classically, the manifolds which these encode are the unit circle, CP^1 and CP^2, each discretized by the eigenvalues of the matrix operators in the algebra of functions over the finite geometry F. In string theory, such a finite geometry also arises in the guise of internal worldvolume degrees of freedom. In this framework, gauge groups can be seen as the internal degree of freedom at every point on the world-volume of N-coincident branes. The gauge symmetry is the freedom that a fundamental string has in deciding which of the N identical branes it can end on. In Connes' model, there would be a total of six branes encoded by the spectral triple of his finite geometry F, giving the U(1), SU(2) and SU(3) symmetry groups of the standard model. Uncle Al said... "The correlations of the quantum vacuum are encoded in the Greensfunction which is a function of pairs of points." Green’s function opens Newton (e.g., terrain gravitometer sweeps to reconstruct buried dense ore or low density petroleum). To my knowledge, Green functions are not validated for general relativity. Green functions are all coordinate squares, removing chirality (versus Ashtekar). Green functions are defective if they uncreate fermionic matter parity violations. Quantum gravitation and SUSY will founder until somebody discovers why persuasive maths do not empirically apply. Euclid plus perturbation is terrestrial cartography, and still fails to navigate the high seas, because rigorously derived Euclid is wrong in context. Green functions for linearized theory are established. Green functions describe complete non-linear theory to any required accuracy. 
An odd polynomial to any number of terms is not a sine wave. It fails at boundaries. Sabine Hossenfelder said... Hi Saibod, Achim submits the following: "Connes' spectral triple has much more information than just the spectrum of the Dirac operator. Namely, to know the spectral triple is also to know how the Dirac operator acts on concrete spinor fields. Having this much more information makes it way easier to reconstruct a manifold. The difficult part is to show under which conditions the spectrum (or spectra) *alone* suffice(s) to determine a manifold." Phillip Helbig said... Greensfunction ---> Green function. The first form is not correct and never was. The second is the preferred form now, e.g. Schrödinger equation, Maxwell equations, not Schrödinger's equation, not Maxwell's equation (though the possessive forms are grammatically correct). I remember that Max Tegmark commented in a talk at the 1994 Texas Symposium in Munich that he had looked up the official recommendations and "Green function" is correct, though he found that rather funny. Sabine Hossenfelder said... Hi Phillip, I also find that rather funny, but I'll keep it in mind. Though I'm afraid that if I would write "Green function" nobody would know what I mean, which somewhat defeats the purpose of language. It's like that, after some years of complaining about the way the Swedes write dates that nobody knows how to read, I found out that it's the "international standard" for dates they're using... Best, Phillip Helbig said... I'm pretty sure that there is no-one who knows what a Greensfunction is but doesn't know what a Green function is. The fact that it is capitalized hints that it is a proper name. Christine said... This comment has been removed by the author. Christine said... This comment has been removed by the author. Juan F. said... Awesome post Sabine! Should we call you The Quantum Gravity Doctor? ;) Giotis said... Nice picture Sabine... I guess finally your mother's dream become true. You are a 'real' doctor now with a stethoscope:-) Christine said... This comment has been removed by the author. Christine said... Oops, sorry for my badly written comments. Anyway, "Green function" or "Green's function" are the terms that I know. Never heard of Greensfunctions... Sabine Hossenfelder said... Hi Christine, He's only considering manifolds without boundary. I've been a little brief on the details for the sake of readability, but it arguably goes on the expenses of clarity, sorry about that. I can recommend Achim's paper though, I found it very well written and understandable. It's also not very long. Best, Sabine Hossenfelder said... Hi Giotis, There are some Dr med's in our family. I don't think it ever was my mother's dream I join them. My younger brother and I, we'd sometimes sneak into the doctor's office on weekends and play with the equipment. I've always been more interested in basic research though. And my younger brother, he's a mechanical engineer now. Best, Sabine Hossenfelder said... Yes, you can call me the Quantum Gravity Doctor. The patient is noncompliant :p Best, Plato Hagel said... Hearing the Shape of the Drum Thanks Bee Markus Maute said... This comment has been removed by the author. Raisonator said... the Dirac operator acts on concrete spinor fields." This sounds like an interesting mathematical question but in terms of physics one needs the spinors anyway to have fermions and to be able to reconstruct the Standard Model. The Laplacian alone will not do. 
One just gets the bosonic part of the spectral action. Moreover, to do serious physics at least an almost commutative spectral triple is required anyway, rendering the overall manifold non-commutative. Also, if just considering the Laplacian, I don't think one gets the gauge fields which are part of the bosonic action. Treating spacetime in isolation I regard as a major step backwards and completely against the very spirit of unification (of spacetime and matter), in particular given the sheer success of the noncommutative standard model. Well, that's all based on my limited understanding of the subject, so please correct me if I am wrong. DaveS said... You go, girl, and get that Quantum Gravity! Zephir said... Quantum gravity is the theory supposed to bridge the dimensional scales of quantum mechanics and general relativity, i.e. the human observer scales. I can see no reason why common chemistry and biology couldn't fall into the subject of quantum gravity as well. Robert L. Oldershaw said... Last line from Alan Lightman's review of Smolin's new book in the NY Times Book Review section of the Sunday paper. "For if we must appeal to the existence of other universes - unknown and unknowable - to explain our universe, then science has progressed into a cul-de-sac with no scientific escape." Science 1 ; pseudo-science 0
5af4988b0ee03c6f
Thursday, April 20, 2017 Tagore's shift to universal humanism was subsequent Swapan Dasgupta Tapan Raychaudhuri's study of Bengal's responses to the West in the 19th century dealt with three intellectual stalwarts - Bhudeb Mukherjee, Bankim Chandra Chatterjee and Swami Vivekananda. All three focused on issues that related to Hindus as Hindus. To them, modernity did not mean discarding the Hindu inheritance but reshaping (and in Bhudeb's case rediscovering) the Hindu inheritance. In the realms of political activism too, the movement against the Partition of Bengal had explicitly Hindu overtones - take Aurobindo Ghose and Bipin Chandra Pal as foremost examples - and this religio-political aspect was embraced by Rabindranath Tagore. Tagore's shift to universal humanism was a subsequent development and his trenchant critique of Mahatma Gandhi's non-cooperation movement did not endear him to most fellow Bengalis. However, his iconic status, particularly after he won the Nobel prize, insulated him from any politically-inspired criticism. Arguably, C.R. Das was an exception but his legacy of communal power-sharing collapsed rapidly after his untimely death. From the late 1920s till Independence, there was often very little to distinguish the Bengal Congress from the Hindu Mahasabha. In spite of the parallel attraction of Marxism, the upper echelons of bhadralok society did not shun explicitly Hindu mobilization. The Hindu Mahasabha boasted of the involvement of intellectual stalwarts such as Shyama Prasad Mookerjee, Nirmal Chandra Chatterjee and even Ramananda Chatterjee.  The Spiritual Nationalism & Human Unity: approach taken by Sri Aurobindo in Politics D Banerjee Abstract: Sri Aurobindo's theory of Politics is quite extraordinary than other contemporary politicians of India. It also has several parts like Swaraj, boycott, resistance, national education as necessary ingredient of Indian political agitation started from 1905. But it has  Participation and the Mystery: Transpersonal Essays in Psychology, Education, and Religion JN Ferrer - 2017 Page 1. PARTICIPATION AND THE. MYSTERY Transpersonal Essays in Psychology, Education, and Religion |ORGE N. FER RER Page 2. PARTICIPATION AND THE MYSTERY Page 3. Page 4. PARTICIPATION AND THE ... › upsc-prelims › terrorist-and-... 12 hours ago - The people associated with this samiti were Sri Aurobindo, Deshabandhu Chittaranjan Das, Surendranath Tagore, Jatindranath Banerjee, Bagha Jatin, Bhupendra Natha Datta, Barindra Ghosh etc. Bhupendra ... Daily Mail-18-Apr-2017 The original founders were Sri Aurobindo and his younger brother, Barindra. Both, along with 47 other accused stood trial for the Alipur bomb blast case or the ... Chronicle of Higher Education (subscription)-19-Mar-2017 Like a lot of academics, I have long harbored the desire to write a popular book — in my case, something like Richard Dawkins's The Selfish Gene. But sadly, I ... MARCH 19, 2017 The future of our species is surely rich material, something many of us have speculated about. Will we have massive brains, dwindling little bodies, and highly functional genitalia? I tend to think so. In fact, with five kids, I rather pride myself on being an advanced specimen. Then again, maybe I’m not. From Darwin on, experts have worried that big brains are not that adaptive. Think of all of the great philosophers — Aquinas, Hume, Kant, Plato, Wittgenstein — who died childless. 
Mad In America-13-Mar-2017 Is the suppression of spirituality in the West the reason for our struggle and suffering labeled as mental illness? Are we medicated to numb the pain and ... On 20 Apr 2017, at 09:12, priyedarshi jetli wrote: I am not promoting logicism. Some logicists believed that you can construct all of mathematics from logic. The Intuitionists don't accept this. Neither do I. My point was a simple general one, logic is mainly about inferences and has no content, therefore free of ontology or epistemology. This is what alternative systems of logic have in common. Further, as far as I know most systems of logic can be translated into each other or shown to be extentions of classical logic. Of course it takes some doing to accomplish this. Logicism has failed. No (reasonably serious) logicians would assume it. We know that elementary arithmetic cannot be deduced from any logic. If fact, we can prove in arithmetic that arithmetic does not follow from logic. Russell and Whitehead claims that 1+1=2 can be proved in their logical system, but they assumed a part of set theory, which assumes much more than arithmetic. We know also that arithmetic is Turing equivalent. It realizes all computations, and this lead to the problem of recovering physics from a statistics on all computations see from inside (that is: structured by the modal logic of self-references imposed by the Incompleteness Phenomenon). My point is more technical: if we are machine, there is no primary physical reality, only a first person locally sharable infinities of number's "dreams". It makes me say that in Occident, we have to backtrack to Pythagoras and Plato (and the neopythagoreans + the neoplatonists) in the field of (scientific/modest) theology. I appreciate the antique greeks mainly because they do not oppose mysticism and rationalism.  My feeling, coming from my interest in buddhist mahayana is that in India, there has been always a bigger open-mindness about immaterialism, or, to put it in another way, a bigger skepticism toward the material explanations. India, and China for a long time, seem to not have separated rationalism and religion/theology as much as Europa. But of course, all civilisation has its dark period and witches hunts. In today's quantum mechanics, all matter (some of which is perceived directly by senses and called classical) is made up of quantum particles.  A quantum particle is said to be a packet of de Broglie phase waves, each of which is supposed to have a speed greater than that of light in vacuum.  Thus the phase wave is a mathematical abstraction, it is non-material in that you cannot perceive it by senses.  So, one may say all matter is made of ideas.   Dear all,      I would like to suggest something here, so many on this forum are attempting to explain the infinite and infinitesimal complexities of matter and spirit, body, mind, soul, consciousness and the unconscious. And Deepak seems to want to throw all that aside and suggests that nothing of matter subtle or gross actually exists, except for consciousness which according to him is all one non personal oneness of bliss. My own opinion is that neither of these approaches is being realistic, and I'll try to explain why.      First, Deepak's theories make no sense when put to any test, he postulates that nothing in the world exists, but we are all aware that it does, even if transitory and temporary we know something is there, we are there, the body is there, mind, universe, stars and sun, oceans etc. 
And we know that we didn't create all of that, something else must be the source. So it's nice to think that all phenomena are our own creation as he states, but obviously something else is there to discover.      Then the physicists and other scientists are so expert at breaking down the complex functioning of that phenomena which Deepak is claiming does not exist, that is true. But how to actually empirically explain the origins of such phenomena to get to the actual root of it? Obviously this cannot be done through speculation, why? Because our minds and intellect used for speculation are themselves products of the material process itself, and the product cannot fully explain the source, logically. Therefore Deepak's attempt to simplify things. But again that falls short, he's only partly right after all, true that consciousness is more subtle than matter, superior to matter, subjective. But we ourselves as consciousness are not the source of all matter, else we would not be "bamboozled" by it as he likes to say, one cannot be bamboozled by something he has himself created, not possible if he is the source of that.       That does leave the one logical explanation, and which I believe Bhakti Madhava Puri has been trying to explain further, that we ourselves as limited subjective conscious entities are ourselves objects of a greater unlimited subjective consciousness, infinite absolute consciousness which is the source of both individual consciousness and matter both subtle and gross. That's my conclusion anyway, it makes sense in explaining reality, I welcome any comments on this. I do have admiration for the subtle complexities of all the physicists' and philosophers' explanations on this forum but this is my own contribution, I think it makes sense. In this world we are simply the conscious agent, the minds, bodies, even thoughts, we are not the source of all these, they are under control of and the property of something else, someone else. To accept that idea is to me the key. Regards, Eric Reyes Proofs apply to mathematical theorems not to things like matter or consciousness. And even in mathematics if something cannot be proven it does not mean that it is not true. Take Fermat's last theorem for example. It took 300 years to prove it. It is like saying that if no one had ever climbed the peak of Mount Everest it did not exist. The metaphoric statement of the type Deepak Chopra makes is meaningless. There is a very old fallacy that is commonly used:  No one has proven that God does not exist. Therefore, God exists. It is an obvious fallacy. In any case the burden of proof lies on the one who accepts the existence of God or a non-material consciousness for that matter and not on those who do not accept them. The fallacy is surely obvious to those who are committing them as well. They are a disguise for covering up the issue.  The issue is that once stated as alternative hypotheses, idealism, dualism, materialism and perhaps other isms regarding the mind body problem are dogmas. None of these hypotheses can be proven and all attempts to prove them beg the question. But among dogmas we can choose the one that seems most plausible to us. To me the materialist or physicalist dogma is the most plausible. There is no emergent mental (non-physical) world, there is no non-physical or mystical forces that causally act on the physical world. The physical is causally closed. To deny this people often equate 'physics of the day' with 'everything that is physical'. 
I need not spend  time to dispel this fallacious equivocation. We need only reflect on the statement that "everything is in principle explainable physically." Dear Siegfried, Thanks for the comments. It seems that you agree that the quantum classical divide is not about spatial dimension in the way often talked about. I agree that the quantum level is the level of the individual quantum of action. So very often the amount of energy involved ins small. However, if we take a Bose mode as an indivisible quantum of action (as contemporary field theory generally does) rather than the notional single photon or phonon or whatever, then there is actually no limit to the energy in a quantum of action. It will be h x frequency x the notional ‘number of particles' quantum number. So a military LASER beam or a seismic mode that informs us of the chemical composition of the centre of the earth may have huge energy.  I agree that uncertainty is not present in the Schrödinger equation. But that is just a bit of writing on a piece of paper. A wave equation is not a mode or entity. It does not ‘evolve’. As an outsider it seems to me that the term ‘wave function’ encourages the belief that somehow the wave equation is a description of a mode. As I understand it, any given wave equation is a look up table for the probabilities of actualisation of a slew of modes with certain common parameters. A mode appears to have some sort of dynamic field structure with values that relate in spacetime according to complex harmonic math but being an indivisible action I can see no legitimate concept of ‘progression from state to state’. The more I get to know about modern physics and the more I talk to physicists (including Basil Hiley who was kind enough to give me some private tuition) the more it seems to me that the traditional ‘interpretations’ are all metaphysically unsound and counter to the basic concept of a theory of indivisible dynamic units. Douglas Bilodeau wrote a very nice article in 1996 in J Consc Studies to that effect. He also makes some remarks in the article about the conflation of different meanings of ‘quantum’ and ‘classical’. These are probably rather unhelpful terms that lead to misapprehensions. I think Leibniz does better in simply distinguishing descriptions of individual actions and description s of aggregate mechanics. I am not quite sure why you say that the quantum of action does not determine the quantised mode. My understanding of recent developments in condensed matter physics is that the mode is considered the quantum of action. As far as I can see uncertainty remains an essential feature of a world constituted by discrete instantiations of symmetric continuous dynamic laws, just for basic logical reasons, as pointed to by Leibniz. Best wishes No comments: Post a Comment
d0d09058149d083a
Consider Schrödinger's time-independent equation $$ -\frac{\hbar^2}{2m}\nabla^2\psi+V\psi=E\psi. $$ In typical examples, the potential $V(x)$ has discontinuities, called potential jumps. Outside these discontinuities of the potential, the wave function is required to be twice differentiable in order to solve Schrödinger's equation. In order to control what happens at the discontinuities of $V$ the following assumption seems to be standard (see, for instance, Keith Hannabuss' An Introduction to Quantum Theory): Assumption: The wave function and its derivative are continuous at a potential jump. 1) Why is it necessary for a (physically meaningful) solution to fulfill this condition? 2) Why is it, on the other hand, okay to abandon twofold differentiability? Edit: One thing that just became clear to me is that the above assumption guarantees a well-defined probability/particle current. You may want to look at – Willie Wong Jul 26 '10 at 13:11 Thanks Willie. This provides some explanation concerning my second question. – Rasmus Bentmann Jul 26 '10 at 13:40 @Downvoter: Why did you downvote? – Rasmus Bentmann Aug 20 '10 at 12:37 To answer your first question: Actually the assumption is not that the wave function and its derivative are continuous. That follows from the Schrödinger equation once you make the assumption that the probability amplitude $\langle \psi|\psi\rangle$ remains finite. That is the physical assumption. This is discussed in Chapter 1 of the first volume of Quantum mechanics by Cohen-Tannoudji, Diu and Laloe, for example. (Google books only has the second volume in English, it seems.) More generally, you may have potentials which are distributional, in which case the wave function may still be continuous, but not even once-differentiable. To answer your second question: Once you deduce that the wave function is continuous, the equation itself tells you that the wave function cannot be twice differentiable, since the second derivative is given in terms of the potential, and this is not continuous. Your first argument is not clear to me - I'll take a look at Cohen-Tannoudji. – Rasmus Bentmann Jul 26 '10 at 15:23 The idea is the following: suppose that $V$ has isolated discontinuities and let $x_0$ be the location of one such discontinuity. Replace $V$ on $[x_0-\epsilon, x_0+\epsilon]$ with another potential which is continuous and which tends to $V$ as $\epsilon\to 0$. Then you show that the wave-function which solves the Schrödinger equation for this new potential tends in the limit as $\epsilon\to0$ to the wave-function you want and that in this limit the first derivative remains continuous. This is not really proven in Cohen-Tannoudji et al. but only sketched. The details are not hard, though. – José Figueroa-O'Farrill Jul 26 '10 at 16:56 There is a very clear physical reason why the wavefunction should be continuous: its derivative is proportional to the momentum of the particle, so discontinuities imply that the state has an infinite-momentum component. – Jess Riedel Apr 27 '11 at 3:47 Since you talk about 'jump' discontinuities, I guess you are interested in a one dimensional Schroedinger equation, i.e., $x\in\mathbb{R}$. In this situation a nice theory can be developed under the sole assumption that $V\in L^1(\mathbb{R})$ (and real valued of course).
By a nice theory I mean that the operator $-d^2/dx^2+V(x)$ is selfadjoint, with continuous spectrum the positive real axis, and (possibly) a sequence of negative eigenvalues accumulating at 0. Better behaviour can be produced by requiring that $(1+|x|)^a V(x)$ be integrable (e.g. for $a=1$ the negative eigenvalues are at most finite in number). If you are interested in this point of view, a nice starting point might be the classical paper by Deift and Trubowitz in Comm. Pure Appl. Math. 1979. Notice that the solutions are at least $H^1_{loc}$ (hence continuous) and even something more. A theory for the case $V$ = Dirac delta (or a combination of a finite number of deltas) was developed by Albeverio et al.; the definition of the Schroedinger operator must be tweaked a little to make sense of it. This is probably beyond your interests. Summing up, no differentiability at all is required of the potential to solve the equation in a meaningful way. However, I suspect that this point of view is too mathematical and you are actually more interested in the physical relevance of the assumptions. Here is a tangential response to your first question: sometimes these discontinuities do have physical significance and are not just issues of mathematical trickery surrounding pathological cases. Wavefunctions for molecular Hamiltonians become pointy where the atomic nuclei lie, which indicates places where the 1/r Coulomb operator becomes singular. There are equations like the Kato cusp conditions (T. Kato, Comm. Pure Appl. Math. 10, 151 (1957)) that relate the magnitude of the discontinuity at the nucleus to the size of the nuclear charge. I have heard this explained as a result of requiring the energy (which is the Hamiltonian's eigenvalue) to remain finite everywhere; thus at places where the potential is singular, the kinetic energy operator must also become singular at those places. Since the kinetic energy operator also controls the curvature of the wavefunction, the wavefunction at points of discontinuity must change in a nonsmooth way.
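A small numerical sketch (my own illustration, not from the thread) makes the continuity statements concrete. In units with ħ = m = 1 and an arbitrarily chosen step height, integrating the time-independent equation across a finite potential step shows ψ and ψ′ passing smoothly through the jump, while ψ″ = 2(V − E)ψ inherits the discontinuity of V.

```python
import numpy as np

# hbar = m = 1. Integrate psi'' = 2 (V(x) - E) psi across a potential step at x = 0
# and compare the one-sided limits of psi'' there; psi and psi' stay continuous.
E, V0 = 1.0, 5.0                       # energy and step height (arbitrary choices)
V = lambda x: 0.0 if x < 0.0 else V0   # step potential

def rhs(x, y):                         # y = (psi, psi')
    return np.array([y[1], 2.0 * (V(x) - E) * y[0]])

def rk4_step(x, y, h):
    k1 = rhs(x, y)
    k2 = rhs(x + h/2, y + h/2 * k1)
    k3 = rhs(x + h/2, y + h/2 * k2)
    k4 = rhs(x + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

h, x = 1e-4, -1.0
k = np.sqrt(2.0 * E)                   # start from the exact free solution on the left
y = np.array([np.sin(k * x), k * np.cos(k * x)])
psi0 = None
while x < 1.0 - 1e-12:
    y = rk4_step(x, y, h)
    x += h
    if psi0 is None and x >= 0.0:      # record the solution at the jump
        psi0, dpsi0 = y

print("psi(0) =", psi0, " psi'(0) =", dpsi0)          # single, continuous values
print("psi'' just left :", 2.0 * (0.0 - E) * psi0)    # psi'' jumps by 2 * V0 * psi(0)
print("psi'' just right:", 2.0 * (V0 - E) * psi0)
```

For a genuinely singular potential such as a Dirac delta, ψ′ itself would jump, which is the distributional situation mentioned in the answers above.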
7c318f1f32c077e0
This solution of the vibrating drum problem is, at any point in time, an eigenfunction of the Laplace operator on a disk. In mathematics, an eigenfunction of a linear operator, A, defined on some function space, is any non-zero function f in that space that returns from the operator exactly as is, except for a multiplicative scaling factor. More precisely, one has A f = \lambda f for some scalar, λ, the corresponding eigenvalue. The solution of the differential eigenvalue problem also depends on any boundary conditions required of f. In each case there are only certain eigenvalues λ = λn (n = 1, 2, 3, ...) that admit a corresponding solution for f = fn (with each fn belonging to the eigenvalue λn) when combined with the boundary conditions. Eigenfunctions are used to analyze A. For example, f_k(x) = e^{kx} is an eigenfunction for the differential operator A = \frac{d^2}{dx^2} - \frac{d}{dx} for any value of k, with corresponding eigenvalue λ = k^2 - k. If boundary conditions are applied to this system (e.g., f = 0 at two physical locations in space), then only certain values of k = kn satisfy the boundary conditions, generating corresponding discrete eigenvalues \lambda_n=k_n^2-k_n. Specifically, in the study of signals and systems, the eigenfunction of a system is the signal f (t) which, when input into the system, produces a response y(t) = λ f (t) with the complex constant λ.[1] Derivative operator A widely used class of linear operators acting on function spaces are the differential operators. As an example, on the space C of infinitely differentiable real functions of a real argument t, the process of differentiation is a linear operator since \frac{d}{dt}(af+bg) = a \frac{df}{dt} + b \frac{dg}{dt}, \qquad f,g \in C^{\infty}, \quad a,b \in \mathbf{R}. The eigenvalue equation for a linear differential operator D in C is then a differential equation D f = \lambda f The functions that satisfy this equation are commonly called eigenfunctions. For the derivative operator d/dt, an eigenfunction is a function that, when differentiated, yields a constant times the original function. That is, \frac{d}{dt} f(t) = \lambda f(t) for all t. This equation can be solved for any value of λ. The solution is an exponential function f(t) = Ae^{\lambda t}. The derivative operator is defined also for complex-valued functions of a complex argument. In the complex version of the space C, the eigenvalue equation has a solution for any complex constant λ. The spectrum of the operator d/dt is therefore the whole complex plane. This is an example of a continuous spectrum. Vibrating strings The shape of a standing wave in a string fixed at its boundaries is an example of an eigenfunction of a differential operator. The admissible eigenvalues are governed by the length of the string and determine the frequency of oscillation. Let h(x, t) denote the sideways displacement of a stressed elastic cord, such as the vibrating strings of a string instrument, as a function of the position x along the string and of time t. From the laws of mechanics, applied to infinitesimal portions of the string, one can deduce that the function h satisfies the partial differential equation \frac{\partial^2 h}{\partial t^2} = c^2\frac{\partial^2 h}{\partial x^2}, which is called the (one-dimensional) wave equation. Here c is a constant that depends on the tension and mass of the string.
This problem is amenable to the method of separation of variables. If we assume that h(x, t) can be written as a product of the form X(x)T(t), we can form a pair of ordinary differential equations: \frac{d^2}{dx^2}X=-\frac{\omega^2}{c^2}X \qquad \frac{d^2}{dt^2}T=-\omega^2 T. Each of these is an eigenvalue equation, for eigenvalues -\tfrac{\omega^2}{c^2} and -\omega^2, respectively. For any values of ω and c, the equations are satisfied by the functions X(x) = \sin \left(\frac{\omega x}{c} + \varphi \right), T(t) = \sin(\omega t + \psi), where φ and ψ are arbitrary real constants. If we impose boundary conditions (that the ends of the string are fixed with X(x) = 0 at x = 0 and x = L, for example) we can constrain the eigenvalues. For those boundary conditions, we find sin(φ) = 0, and so the phase angle φ = 0 and \sin\left(\frac{\omega L}{c}\right) = 0. Thus, the constant ω is constrained to take one of the values ωn = ncπ/L, where n is any integer. Thus, the clamped string supports a family of standing waves of the form h(x,t) = \sin \left (\frac{n\pi x}{L} \right )\sin(\omega_n t). From the point of view of our musical instrument, the frequency ωn is the frequency of the n-th harmonic, which is called the (n − 1)-th overtone. Quantum mechanics Eigenfunctions play an important role in many branches of physics. An important example is quantum mechanics, where the Schrödinger equation H\psi = E \psi, H = -\frac{\hbar^2}{2m}\nabla^2+ V(\mathbf{r},t) has solutions of the form \psi(t) = \sum_k e^{-\frac{i E_k t}{\hbar}} \varphi_k, where φk are eigenfunctions of the operator H with eigenvalues Ek. The fact that only certain eigenvalues Ek with associated eigenfunctions φk satisfy Schrödinger's equation leads to a natural basis for quantum mechanics and the periodic table of the elements, with each Ek an allowable energy state of the system. The success of this equation in explaining the spectral characteristics of hydrogen is considered one of the greatest triumphs of 20th century physics. Since the Hamiltonian operator H is a Hermitian operator, its eigenfunctions are orthogonal functions. This is not necessarily the case for eigenfunctions of other operators (such as the example A mentioned above). Orthogonal functions fi (i = 1, 2, ...) have the property that 0 = \int \overline{f_i} f_j whenever i ≠ j, where \overline{f_i} is the complex conjugate of fi, in which case the set { fi | i ∈ I } is said to be orthogonal. Also, it is linearly independent. 1. ^ Bernd Girod, Rudolf Rabenstein, Alexander Stenger, Signals and systems, 2nd ed., Wiley, 2001, ISBN 0-471-98800-6, p. 49
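To make the role of boundary conditions concrete, here is a small numerical sketch (not part of the article): discretizing d^2/dx^2 on [0, L] with clamped ends and diagonalizing reproduces the eigenvalues -(n π / L)^2 and sine-shaped eigenfunctions; the grid size and L are arbitrary choices.

```python
import numpy as np

# Eigenfunctions of d^2/dx^2 on [0, L] with f(0) = f(L) = 0, via finite differences.
L_len, N = 1.0, 400
dx = L_len / (N + 1)
x = np.linspace(dx, L_len - dx, N)           # interior grid points

D2 = (np.diag(-2.0 * np.ones(N)) +
      np.diag(np.ones(N - 1), 1) +
      np.diag(np.ones(N - 1), -1)) / dx**2   # discrete second-derivative operator

evals, evecs = np.linalg.eigh(D2)
evals = evals[::-1]                          # least negative eigenvalue first

for n in range(1, 5):
    exact = -(n * np.pi / L_len) ** 2        # continuum eigenvalue -(n pi / L)^2
    print(f"n={n}: discrete={evals[n-1]:.3f}  exact={exact:.3f}")
# The corresponding eigenvectors approximate sin(n pi x / L), the clamped-string
# standing waves; the same matrix, times -hbar^2/2m, is a particle-in-a-box Hamiltonian.
```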
201f72839e85a02a
Question: How is our everyday notion of time flawed? David Albert: Well, in a number of ways. First of all, you might say that the sort of gesture with which physics gets under way in the first place is a gesture that lots of philosophers in the Continental tradition have come to call the spatialization of time. That is, the way time appears in physics is as another component, a fourth component, of the address of an event, okay? You want to say -- what there is to say about events is where they were located and how they were distributed, blah blah blah, in the X dimension, in the Y dimension, in the Z dimension and in the T dimension. And what physics aspires to tell us -- and all physics aspires to tell us, when you get right down to it -- is how matter and energy gets distributed over this four-dimensional block of X, Y, Z and T, okay? You tell me where all the particles are in X, Y, Z and T; I'll tell you the history of the world, okay? That's supposed to be, according to the model we've been dealing with in physics from the word go, all there is to say about the world. From the word go, then, something is being done with time that does enormous violence to our prescientific intuitions about time. Prescientifically, it seems like time is as different from space as you could possibly imagine. And the way discussions of this often go is that people will immediately say, well, time moves or flows; space doesn't move or flow. I can move around at will in space; I can't move around at will in time. All these kinds of things. Time seems just as different as it could be from space, and indeed there's a tradition in Continental philosophy, in European philosophy -- you find this in people like Heidegger or Deleuze or people like that -- that it is the fundamental mistake of physics, this spatialization of time. It's from that moment on that physics had already doomed itself to not being able to say anything deep or interesting about what time was. Physics, on the other hand, has this interesting retort that it can apparently explain why you say all these things you say about how time is different from space on a model in which time is just another parameter in the address of an event. So there is this very interesting conversation, there is this very interesting dialog, where on the one hand it looks like physics has made from the word go a profound error about what time is. Physics, on the other hand, has been up to now always in a position to come back and say, you want me to explain to you, based on physics, why you say time passes even though that doesn't actually mean anything, and why you're talking the way you do about our capacity to move around in space but not in time, so on and so forth? I can do all that. It's like -- it's very much like when Newtonian mechanics was first proposed, and it was widely objected that if the earth was indeed moving through space at the enormous velocities that it's predicted to be by Newtonian mechanics, everybody would fall off, okay? And the beauty of Newtonian mechanics was to be able to say not merely that I have an explanation of why we don't fall off, but that the very same theory that predicts that the earth is going to be moving so fast is also going to predict that we wouldn't fall off.
Indeed, it's also going to predict that we would think it isn't moving, even though it is. One wants to play this game with the scientific conception of time as well. Maybe I should go a bit further. Specifically, there is within physics the following problem that's been sitting around for about a hundred years now, at the foundations of physics, which goes like this: imagine watching a film of two billiard balls collide. So in the first frame you just see a billiard ball sitting at the center of the frame. Then another billiard ball comes in, hits the one that's at rest. The one that was moving is now at rest, and the one that was previously at rest goes off and exits the frame on the other side. Imagine that you were shown this film in reverse, so what you're going to see is the second billiard ball coming in here, hitting the first one, which has now been at rest up until this point in the reversed movie, and now the first one goes out and leaves the frame on the other side. Note that the set of events depicted by the movie being shown in reverse is just as much in accord with everything we believe about the laws governing collisions between billiard balls as is the movie being shown in the correct direction. That is, if you were shown a movie like this and asked to guess -- just based on your familiarity with the laws of physics, just based on your familiarity with how billiard balls behave when they collide -- if you were shown a film like this and asked to guess whether it was being shown forward or in reverse, you wouldn't be able to tell. Physicists express this by saying that the laws governing collisions between billiard balls are symmetric under time reversal, okay? And what that means more concretely is -- a law is said to be symmetric under time reversal if it's the case that for any process which is in accord with that law, the same process going in reverse -- that is, the same process as it would appear in a film going backwards -- is also in accord with that law. So we say that the laws governing collisions between pairs of billiard balls are time-reversal symmetric. Good. It's an astonishing thing that all of the serious candidates that anybody has entertained for fundamental laws of physics, from Newton onward -- so by this I mean Newton's laws, I mean Maxwell's equations, I mean the Schrödinger equation, I mean general relativity, I mean string theory, everything -- it's an astonishing fact about all of these theories, even though they differ wildly from one another in all kinds of other respects, and even though they straddle different sides of enormous scientific revolutions, it's an astonishing fact about every single one of these theories that all of them are perfectly symmetric under time reversal. Every single one of them has the feature that for any process which is in accord with those laws, the process going in reverse is in accord with those laws as well. But this poses a problem. And once again, just as with the measurement problem, the structure of this problem is a clash between the laws of physics that we have developed by carefully observing the behaviors of microscopic systems and our everyday macroscopic experience of the world. Okay, what's the problem? 
The problem is that if we get more sophisticated than collisions between billiard balls, okay -- if we imagine a film of someone walking down the street, or of a piece of paper being consumed by fire, or a time-lapse film of somebody growing old or something like that, we can damn well tell very easily whether we are being shown the film forward or in reverse, okay? Now, that is very directly at odds with what we take ourselves to have very good reasons for believing about the structure of the fundamental physical laws, which are supposed to govern all of these processes; namely, that those laws are perfectly symmetric under time reversal, okay? To put it slightly differently, on the level of fundamental physical law there doesn't seem to be any distinction whatsoever between past and future, okay? Moving from the present into the future ought to look, statistically speaking, just like moving from the present into the past if those laws are true. On the other hand, it's probably not an exaggeration to say that the most basic, primordial, unavoidable feature of our everyday experience of being in the world is that there is all the difference in the world between the past and the future. In our everyday experience of the world there is an extremely vivid and pronounced temporal direction, okay? On the level of the fundamental laws, that temporal direction apparently completely vanishes, okay? Once again, there's a question, and like I say, this question has a similar structure to the question in the measurement problem in that it's a clash between our fundamental equations of motion and our everyday experience, macroscopic experience, of being in the world. There's a question with time about how to put these two things together. Once again, it appears as if although the theory does an extremely good job of predicting the motions of elementary particles and so on and so forth, there's got to be something wrong with it, okay, because we have -- although we have very good, clear quantitative experience in the laboratory which bears out these fully time-reversal symmetric laws, at some point there's got to be something wrong with them, because the world that we live in is manifestly not even close to being time-reversal symmetric. And once again there are proposals on the table for how to fiddle around with the theory, adding a new law governing initial conditions, for example. There are all kinds of proposals about how to deal with this, but this has been -- this is a very fundamental challenge. Although we've got laws that are doing a fantastic job on the micro level, there's some way in which these laws manifestly get things wrong on the macro level, and we need to figure out what to do about it. Recorded on December 16, 2009
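Albert's point that the standard dynamical laws are time-reversal symmetric can be checked directly on a Newtonian toy model: run a trajectory forward, flip the velocity, run the same law again, and the motion retraces itself. The sketch below is an illustration with an arbitrarily chosen potential.

```python
import numpy as np

# Velocity-Verlet integration of x'' = -dU/dx for an anharmonic oscillator,
# then "time reversal": flip the velocity and integrate the SAME law again.
def force(x):
    return -x - 0.5 * x**3             # from U(x) = x^2/2 + x^4/8 (arbitrary choice)

def evolve(x, v, dt, steps):
    for _ in range(steps):
        a = force(x)
        x = x + v * dt + 0.5 * a * dt**2
        v = v + 0.5 * (a + force(x)) * dt
    return x, v

x0, v0, dt, steps = 1.0, 0.3, 1e-3, 20000
x1, v1 = evolve(x0, v0, dt, steps)     # the forward "movie"
x2, v2 = evolve(x1, -v1, dt, steps)    # the reversed "movie": same equation, flipped velocity

print("forward end :", x1, v1)
print("after return:", x2, -v2)        # recovers (x0, v0) up to integration error
```

The asymmetry we actually see in films of fires and aging people therefore has to come from somewhere other than the microscopic equations, which is exactly the puzzle Albert is describing.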
cb551bbd04abee85
Table of Contents Section 5.1 - Tidal curvature versus curvature caused by local sources Section 5.2 - The stress-energy tensor Section 5.3 - Curvature in two spacelike dimensions Section 5.4 - Curvature tensors Section 5.5 - Some order-of-magnitude estimates Section 5.6 - The covariant derivative Section 5.7 - The geodesic equation Section 5.8 - Torsion Section 5.9 - From metric to curvature Section 5.10 - Manifolds Chapter 5. Curvature a / The expected structure of the field equations in general relativity. General relativity describes gravitation as a curvature of spacetime, with matter acting as the source of the curvature in the same way that electric charge acts as the source of electric fields. Our goal is to arrive at Einstein's field equations, which relate the local intrinsic curvature to the locally ambient matter in the same way that Gauss's law relates the local divergence of the electric field to the charge density. The locality of the equations is necessary because relativity has no action at a distance; cause and effect propagate at a maximum velocity of \(c(=1)\). The hard part is arriving at the right way of defining curvature. We've already seen that it can be tricky to distinguish intrinsic curvature, which is real, from extrinsic curvature, which can never produce observable effects. E.g., example 4 on page 95 showed that spheres have intrinsic curvature, while cylinders do not. The manifestly intrinsic tensor notation protects us from being misled in this respect. If we can formulate a definition of curvature expressed using only tensors that are expressed without reference to any preordained coordinate system, then we know it is physically observable, and not just a superficial feature of a particular model. Nor is the metric a measure of intrinsic curvature. In example 19 on page 140, we found the metric for an accelerated observer to be \[\begin{equation*} g'_{t't'} = (1+ax')^2 g_{x'x'} = -1 , \end{equation*}\] where the primes indicate the accelerated observer's frame. The fact that the timelike element is not equal to \(-1\) is not an indication of intrinsic curvature. It arises only from the choice of the coordinates \((t',x')\) defined by a frame tied to the accelerating rocket ship. The fact that the above metric has nonvanishing derivatives, unlike a constant Lorentz metric, does indicate the presence of a gravitational field. However, a gravitational field is not the same thing as intrinsic curvature. The gravitational field seen by an observer aboard the ship is, by the equivalence principle, indistinguishable from an acceleration, and indeed the Lorentzian observer in the earth's frame does describe it as arising from the ship's acceleration, not from a gravitational field permeating all of space. Both observers must agree that “I got plenty of nothin' ” --- that the region of the universe to which they have access lacks any stars, neutrinos, or clouds of dust. The observer aboard the ship must describe the gravitational field he detects as arising from some source very far away, perhaps a hypothetical vast sheet of lead lying billions of light-years aft of the ship's deckplates. Such a hypothesis is fine, but it is unrelated to the structure of our hoped-for field equation, which is to be local in nature. 5.1 Tidal curvature versus curvature caused by local sources a / Tidal forces disrupt comet Shoemaker-Levy. b / Tidal forces cause the initially parallel world-lines of the fragments to diverge. 
The spacetime occupied by the comet has intrinsic curvature, but it is not caused by any local mass; it is caused by the distant mass of Jupiter. c / The moon's gravitational field causes the Earth's oceans to be distorted into an ellipsoid. The sign of the sectional curvature is negative in the \(x-t\) plane, but positive in the \(y-t\) plane. d / A cloud of test masses is released at rest in a spherical shell around the earth, shown here as a circle because the \(z\) axis is omitted. The volume of the shell contracts over time, which demonstrates that the local curvature of spacetime is generated by a local source --- the earth --- rather than some distant one. A further complication is the need to distinguish tidal curvature from curvature caused by local sources. Figure a shows Comet Shoemaker-Levy, broken up into a string of fragments by Jupiter's tidal forces shortly before its spectacular impact with the planet in 1994. Immediately after each fracture, the newly separated chunks had almost zero velocity relative to one another, so once the comet finished breaking up, the fragments' world-lines were a sheaf of nearly parallel lines separated by spatial distances of only \(~1\) km. These initially parallel geodesics then diverged, eventually fanning out to span millions of kilometers. If initially parallel lines lose their parallelism, that is clearly an indication of intrinsic curvature. We call it a measure of sectional curvature, because the loss of parallelism occurs within a particular plane, in this case the \((t,x)\) plane represented by figure b. But this curvature was not caused by a local source lurking in among the fragments. It was caused by a distant source: Jupiter. We therefore see that the mere presence of sectional curvature is not enough to demonstrate the existence of local sources. Even the sign of the sectional curvature is not a reliable indication. Although this example showed a divergence of initially parallel geodesics, referred to as a negative curvature, it is also possible for tidal forces exerted by distant masses to create positive curvature. For example, the ocean tides on earth oscillate both above and below mean sea level, c. As an example that really would indicate the presence of a local source, we could release a cloud of test masses at rest in a spherical shell around the earth, and allow them to drop, d. We would then have positive and equal sectional curvature in the \(t-x\), \(t-y\), and \(t-z\) planes. Such an observation cannot be due to a distant mass. It demonstrates an over-all contraction of the volume of an initially parallel sheaf of geodesics, which can never be induced by tidal forces. The earth's oceans, for example, do not change their total volume due to the tides, and this would be true even if the oceans were a gas rather than an incompressible fluid. It is a unique property of \(1/r^2\) forces such as gravity that they conserve volume in this way; this is essentially a restatement of Gauss's law in a vacuum. 5.2 The stress-energy tensor In general, the curvature of spacetime will contain contributions from both tidal forces and local sources, superimposed on one another. To develop the right formulation for the Einstein field equations, we need to eliminate the tidal part. Roughly speaking, we will do this by averaging the sectional curvature over all three of the planes \(t-x\), \(t-y\), and \(t-z\), giving a measure of curvature called the Ricci curvature. 
The “roughly speaking” is because such a prescription would treat the time and space coordinates in an extremely asymmetric manner, which would violate local Lorentz invariance. To get an idea of how this would work, let's compare with the Newtonian case, where there really is an asymmetry between the treatment of time and space. In the Cartan curved-spacetime theory of Newtonian gravity (page 41), the field equation has a kind of scalar Ricci curvature on one side, and on the other side is the density of mass, which is also a scalar. In relativity, however, the source term in the equation clearly cannot be the scalar mass density. We know that mass and energy are equivalent in relativity, so for example the curvature of spacetime around the earth depends not just on the mass of its atoms but also on all the other forms of energy it contains, such as thermal energy and electromagnetic and nuclear binding energy. Can the source term in the Einstein field equations therefore be the mass-energy \(E\)? No, because \(E\) is merely the timelike component of a particle's momentum four-vector. To single it out would violate Lorentz invariance just as much as an asymmetric treatment of time and space in constructing a Ricci measure of curvature. To get a properly Lorentz invariant theory, we need to find a way to formulate everything in terms of tensor equations that make no explicit reference to coordinates. The proper generalization of the Newtonian mass density in relativity is the stress-energy tensor \(T^{ij}\), whose 16 elements measure the local density of mass-energy and momentum, and also the rate of transport of these quantities in various directions. If we happen to be able to find a frame of reference in which the local matter is all at rest, then \(T^{tt}\) represents the mass density. The reason for the word “stress” in the name is that, for example, the flux of \(x\)-momentum in the \(x\) direction is a measure of pressure. For the purposes of the present discussion, it's not necessary to introduce the explicit definition of \(T\); the point is merely that we should expect the Einstein field equations to be tensor equations, which tells us that the definition of curvature we're seeking clearly has to be a rank-2 tensor, not a scalar. The implications in four-dimensional spacetime are fairly complex. We'll end up with a rank-4 tensor that measures the sectional curvature, and a rank-2 Ricci tensor derived from it that averages away the tidal effects. The Einstein field equations then relate the Ricci tensor to the energy-momentum tensor in a certain way. The stress-energy tensor is discussed further in section 8.1.2 on page 263. 5.3 Curvature in two spacelike dimensions a / This curve has no intrinsic curvature. b / A surveyor on a mountaintop uses a heliotrope. c / A map of a triangulation survey such as the one Gauss carried out. By measuring the interior angles of the triangles, one can determine not just the two-dimensional projection of the grid but its complete three-dimensional form, including both the curvature of the earth (note the curvature of the lines of latitude) and the height of features above and below sea level. d / Example 1. e / Example 2. f / Proof that the angular defect of a triangle in elliptic geometry is proportional to its area. Each white circle represents the entire elliptic plane. The dashed line at the edge is not really a boundary; lines that go off the edge simply wrap back around. 
In the spherical model, the white circle corresponds to one hemisphere, which is identified with the opposite hemisphere. g / Gaussian normal coordinates on a sphere. h / 1. Gaussian curvature can be interpreted as the failure of parallelism represented by \(d^2\alpha/dxdy\). i / 2. Gaussian curvature as \(L \ne r\theta\). j / A triangle in a space with negative curvature has angles that add to less than \(\pi\). k / A flea on the football cannot orient himself by intrinsic, local measurements. l / Example 6. In 1 and 2, charges that are visible on the front surface of the conductor are shown as solid dots; the others would have to be seen through the conductor, which we imagine is semi-transparent. Since the curvature tensors in 3+1 dimensions are complicated, let's start by considering lower dimensions. In one dimension, a, there is no such thing as intrinsic curvature. This is because curvature describes the failure of parallelism to behave as in E5, but there is no notion of parallelism in one dimension. The lowest interesting dimension is therefore two, and this case was studied by Carl Friedrich Gauss in the early nineteenth century. Gauss ran a geodesic survey of the state of Hanover, inventing an optical surveying instrument called a heliotrope that in effect was used to cover the Earth's surface with a triangular mesh of light rays. If one of the mesh points lies, for example, at the peak of a mountain, then the sum \(\Sigma\theta\) of the angles of the vertices meeting at that point will be less than \(2\pi\), in contradiction to Euclid. Although the light rays do travel through the air above the dirt, we can think of them as approximations to geodesics painted directly on the dirt, which would be intrinsic rather than extrinsic. The angular defect around a vertex now vanishes, because the space is locally Euclidean, but we now pick up a different kind of angular defect, which is that the interior angles of a triangle no longer add up to the Euclidean value of \(\pi\). Example 1: A polygonal survey of a soccer ball Figure d applies similar ideas to a soccer ball, the only difference being the use of pentagons and hexagons rather than triangles. In d/1, the survey is extrinsic, because the lines pass below the surface of the sphere. The curvature is detectable because the angles at each vertex add up to \(120+120+110=350\) degrees, giving an angular defect of 10 degrees. In d/2, the lines have been projected to form arcs of great circles on the surface of the sphere. Because the space is locally Euclidean, the sum of the angles at a vertex has its Euclidean value of 360 degrees. The curvature can be detected, however, because the sum of the internal angles of a polygon is greater than the Euclidean value. For example, each spherical hexagon gives a sum of \(6\times 124.31\) degrees, rather than the Euclidean \(6\times120\). The angular defect of \(6\times4.31\) degrees is an intrinsic measure of curvature. Example 2: Angular defect on the earth's surface Divide the Earth's northern hemisphere into four octants, with their boundaries running through the north pole. These octants have sides that are geodesics, so they are equilateral triangles. Assuming Euclidean geometry, the interior angles of an equilateral triangle are each equal to 60 degrees, and, as with any triangle, they add up to 180 degrees. The octant-triangle in figure e has angles that are each 90 degrees, and the sum is 270. This shows that the Earth's surface has intrinsic curvature. 
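A quick numerical check of example 2 (my own sketch, not from the text): compute the interior angles of the octant triangle directly from its vertex vectors on a unit sphere.

```python
import numpy as np

def vertex_angle(a, b, c):
    """Interior angle of the spherical triangle abc at vertex a (unit vectors),
    measured between the great-circle arcs a->b and a->c."""
    tb = b - np.dot(a, b) * a   # tangent direction of arc a->b at a
    tc = c - np.dot(a, c) * a   # tangent direction of arc a->c at a
    return np.arccos(np.dot(tb, tc) / (np.linalg.norm(tb) * np.linalg.norm(tc)))

# Octant triangle: vertices at the north pole and two equatorial points 90 degrees apart
A, B, C = np.eye(3)
angles = [vertex_angle(A, B, C), vertex_angle(B, C, A), vertex_angle(C, A, B)]
print(np.degrees(angles), np.degrees(sum(angles)))   # [90, 90, 90] and 270
```

The 90-degree excess over the Euclidean 180 degrees is the quantity \(\epsilon=\Sigma\theta-\pi\) used in the following paragraphs.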
This example suggests another way of measuring intrinsic curvature, in terms of the ratio \(C/r\) of the circumference of a circle to its radius. In Euclidean geometry, this ratio equals \(2\pi\). Let \(\rho\) be the radius of the Earth, and consider the equator to be a circle centered on the north pole, so that its radius is the length of one of the sides of the triangle in figure e, \(r=(\pi/2)\rho\). (Don't confuse \(r\), which is intrinsic, with \(\rho\), the radius of the sphere, which is extrinsic and not equal to \(r\).) Then the ratio \(C/r\) is equal to 4, which is smaller than the Euclidean value of \(2\pi\). Let \(\epsilon=\Sigma\theta-\pi\) be the angular defect of a triangle, and for concreteness let the triangle be in a space with an elliptic geometry, so that it has constant curvature and can be modeled as a sphere of radius \(\rho\), with antipodal points identified. Self-check: In elliptic geometry, what is the minimum possible value of the quantity \(C/r\) discussed in example 2? How does this differ from the case of spherical geometry? We want a measure of curvature that is local, but if our space is locally flat, we must have \(\epsilon\rightarrow0\) as the size of the triangles approaches zero. This is why Euclidean geometry is a good approximation for small-scale maps of the earth. The discrete nature of the triangular mesh is just an artifact of the definition, so we want a measure of curvature that, unlike \(\epsilon\), approaches some finite limit as the scale of the triangles approaches zero. Should we expect this scaling to go as \(\epsilon\propto \rho\)? \(\rho^2\)? Let's determine the scaling. First we prove a classic lemma by Gauss, concerning a slightly different version of the angular defect, for a single triangle. Theorem: In elliptic geometry, the angular defect \(\epsilon=\alpha+\beta+\gamma-\pi\) of a triangle is proportional to its area \(A\). Proof: By axiom E2, extend each side of the triangle to form a line, figure f/1. Each pair of lines crosses at only one point (E1) and divides the plane into two lunes with their four vertices touching at this point, figure f/2. Of the six lunes, we focus on the three shaded ones, which overlap the triangle. In each of these, the two interior angles at the vertex are the same (Euclid I.15). The area of a lune is proportional to its interior angle, as follows from dissection into narrower lunes; since a lune with an interior angle of \(\pi\) covers the entire area \(P\) of the plane, the constant of proportionality is \(P/\pi\). The sum of the areas of the three lunes is \((P/\pi)(\alpha+\beta+\gamma)\), but these three areas also cover the entire plane, overlapping three times on the given triangle, and therefore their sum also equals \(P+2A\). Equating the two expressions leads to the desired result. This calculation was purely intrinsic, because it made no use of any model or coordinates. We can therefore construct a measure of curvature that we can be assured is intrinsic, \(K=\epsilon/A\). This is called the Gaussian curvature, and in elliptic geometry it is constant rather than varying from point to point. In the model on a sphere of radius \(\rho\), we have \(K=1/\rho^2\). Self-check: Verify the equation \(K=1/\rho^2\) by considering a triangle covering one octant of the sphere, as in example 2. It is useful to introduce normal or Gaussian normal coordinates, defined as follows. Through point O, construct perpendicular geodesics, and define affine coordinates \(x\) and \(y\) along these. 
For any point P off the axis, define coordinates by constructing the lines through P that cross the axes perpendicularly. For P in a sufficiently small neighborhood of O, these lines exist and are uniquely determined. Gaussian polar coordinates can be defined in a similar way. Here are two useful interpretations of \(K\). 1. The Gaussian curvature measures the failure of parallelism in the following sense. Let line \(\ell\) be constructed so that it crosses the normal \(y\) axis at \((0,dy)\) at an angle that differs from perpendicular by the infinitesimal amount \(d\alpha\) (figure h). Construct the line \(x'=dx\), and let \(d\alpha'\) be the angle its perpendicular forms with \(\ell\). Then4 the Gaussian curvature at O is \[\begin{equation*} K=\frac{d^2\alpha}{dxdy} , \end{equation*}\] where \(d^2\alpha=d\alpha'-d\alpha\). 2. From a point P, emit a fan of rays at angles filling a certain range \(\theta\) of angles in Gaussian polar coordinates (figure i). Let the arc length of this fan at \(r\) be \(L\), which may not be equal to its Euclidean value \(L_E=r\theta\). Then5 \[\begin{equation*} K=-3\frac{d^2}{dr^2} \left(\frac{L}{L_E}\right) . \end{equation*}\] Let's now generalize beyond elliptic geometry. Consider a space modeled by a surface embedded in three dimensions, with geodesics defined as curves of extremal length, i.e., the curves made by a piece of string stretched taut across the surface. At a particular point P, we can always pick a coordinate system \((x,y,z)\) such that the surface \(z=\frac{1}{2}k_1x^2+\frac{1}{2}k_2y^2\) locally approximates the surface to the level of precision needed in order to discuss curvature. The surface is either paraboloidal or hyperboloidal (a saddle), depending on the signs of \(k_1\) and \(k_2\). We might naively think that \(k_1\) and \(k_2\) could be independently determined by intrinsic measurements, but as we've seen in example 4 on page 95, a cylinder is locally indistinguishable from a Euclidean plane, so if one \(k\) is zero, the other \(k\) clearly cannot be determined. In fact all that can be measured is the Gaussian curvature, which equals the product \(k_1k_2\). To see why this should be true, first consider that any measure of curvature has units of inverse distance squared, and the \(k\)'s have units of inverse distance. The only possible intrinsic measures of curvature based on the \(k\)'s are therefore \(k_1^2+k_2^2\) and \(k_1k_2\). (We can't have, for example, just \(k_1^2\), because that would change under an extrinsic rotation about the \(z\) axis.) Only \(k_1k_2\) vanishes on a cylinder, so it is the only possible intrinsic curvature. Example 3: Eating pizza When people eat pizza by folding the slice lengthwise, they are taking advantage of the intrinsic nature of the Gaussian curvature. Once \(k_1\) is fixed to a nonzero value, \(k_2\) can't change without varying \(K\), so the slice can't droop. Example 4: Elliptic and hyperbolic geometry We've seen that figures behaving according to the axioms of elliptic geometry can be modeled on part of a sphere, which is a surface of constant \(K>0\). The model can be made into global one satisfying all the axioms if the appropriate topological properties are ensured by identifying antipodal points. A paraboloidal surface \(z=k_1x^2+k_2y^2\) can be a good local approximation to a sphere, but for points far from its apex, \(K\) varies significantly. Elliptic geometry has no parallels; all lines meet if extended far enough. 
A space of constant negative curvature has a geometry called hyperbolic, and is of some interest because it appears to be the one that describes the spatial dimensions of our universe on a cosmological scale. A hyperboloidal surface works locally as a model, but its curvature is only approximately constant; the surface of constant curvature is a horn-shaped one created by revolving a mountain-shaped curve called a tractrix about its axis. The tractrix of revolution is not as satisfactory a model as the sphere is for elliptic geometry, because lines are cut off at the cusp of the horn. Hyperbolic geometry is richer in parallels than Euclidean geometry; given a line \(\ell\) and a point P not on \(\ell\), there are infinitely many lines through P that do not pass through \(\ell\). Example 5: A flea on a football We might imagine that a flea on the surface of an American football could determine by intrinsic, local measurements which direction to go in order to get to the nearest tip. This is impossible, because the flea would have to determine a vector, and curvature cannot be a vector, since \(z=\frac{1}{2}k_1x^2+\frac{1}{2}k_2y^2\) is invariant under the parity inversion \(x \rightarrow -x\), \(y \rightarrow -y\). For similar reasons, a measure of curvature can never have odd rank. Without violating reflection symmetry, it is still conceivable that the flea could determine the orientation of the tip-to-tip line running through his position. Surprisingly, even this is impossible. The flea can only measure the single number \(K\), which carries no information about directions in space. Example 6: The lightning rod Suppose you have a pear-shaped conductor like the one in figure l/1. Since the pear is a conductor, there are free charges everywhere inside it. Panels 1 and 2 of the figure show a computer simulation with 100 identical electric charges. In 1, the charges are released at random positions inside the pear. Repulsion causes them all to fly outward onto the surface and then settle down into an orderly but nonuniform pattern. We might not have been able to guess the pattern in advance, but we can verify that some of its features make sense. For example, charge A has more neighbors on the right than on the left, which would tend to make it accelerate off to the left. But when we look at the picture as a whole, it appears reasonable that this is prevented by the larger number of more distant charges on its left than on its right. There also seems to be a pattern to the nonuniformity: the charges collect more densely in areas like B, where the Gaussian curvature is large, and less densely in areas like C, where \(K\) is nearly zero (slightly negative). To understand the reason for this pattern, consider l/3. It's straightforward to show that the density of charge \(\sigma\) on each sphere is inversely proportional to its radius, or proportional to \(K^{1/2}\). Lord Kelvin proved that on a conducting ellipsoid, the density of charge is proportional to the distance from the center to the tangent plane, which is equivalent1 to \(\sigma\propto K^{1/4}\); this result looks similar except for the different exponent. McAllister showed in 19902 that this \(K^{1/4}\) behavior applies to a certain class of examples, but it clearly can't apply in all cases, since, for example, \(K\) could be negative, or we could have a deep concavity, which would form a Faraday cage. Problem 1 on p. 199 discusses the case of a knife-edge. 
Similar reasoning shows why Benjamin Franklin used a sharp tip when he invented the lightning rod. The charged stormclouds induce positive and negative charges to move to opposite ends of the rod. At the pointed upper end of the rod, the charge tends to concentrate at the point, and this charge attracts the lightning. The same effect can sometimes be seen when a scrap of aluminum foil is inadvertently put in a microwave oven. Modern experiments3 show that although a sharp tip is best at starting a spark, a more moderate curve, like the right-hand tip of the pear in this example, is better at successfully sustaining the spark for long enough to connect a discharge to the clouds. 5.4 Curvature tensors a / The definition of the Riemann tensor. The vector \(v^b\) changes by \(dv^b\) when parallel-transported around the approximate parallelogram. (\(v^b\) is drawn on a scale that makes its length comparable to the infinitesimals \(dp^c\), \(dq^d\), and \(dv^b\); in reality, its size would be greater than theirs by an infinite factor.) b / The change in the vector due to parallel transport around the octant equals the integral of the Riemann tensor over the interior. The example of the flea suggests that if we want to express curvature as a tensor, it should have even rank. Also, in a coordinate system in which the coordinates have units of distance (they are not angles, for instance, as in spherical coordinates), we expect that the units of curvature will always be inverse distance squared. More elegantly, we expect that under a uniform rescaling of coordinates by a factor of \(\mu\), a curvature tensor should scale down by \(\mu^{-2}\). Combining these two facts, we find that a curvature tensor should have one of the forms \(R_{ab}\), \(R^a_{bcd}\), ..., i.e., the number of lower indices should be two greater than the number of upper indices. The following definition has this property, and is equivalent to the earlier definitions of the Gaussian curvature that were not written in tensor notation. Definition of the Riemann curvature tensor: Let \(dp^c\) and \(dq^d\) be two infinitesimal vectors, and use them to form a quadrilateral that is a good approximation to a parallelogram.6 Parallel-transport vector \(v^b\) all the way around the parallelogram. When it comes back to its starting place, it has a new value \(v^b \rightarrow v^b+dv^b\). Then the Riemann curvature tensor is defined as the tensor that computes \(dv^a\) according to \(dv^a=R^a_{bcd}v^bdp^cdq^d\). (There is no standardization in the literature of the order of the indices.) Example 7: A symmetry of the Riemann tensor If vectors \(dp^c\) and \(dq^d\) lie along the same line, then \(dv^a\) must vanish, and interchanging \(dp^c\) and \(dq^d\) simply reverses the direction of the circuit around the quadrilateral, giving \(dv^a \rightarrow -dv^a\). This shows that \(R^a_{bcd}\) must be antisymmetric under interchange of the indices \(c\) and \(d\), \(R^a_{bcd}=-R^a_{bdc}\). In local normal coordinates, the interpretation of the Riemann tensor becomes particularly transparent. The constant-coordinate lines are geodesics, so when the vector \(v^b\) is transported along them, it maintains a constant angle with respect to them. Any rotation of the vector after it is brought around the perimeter of the quadrilateral can therefore be attributed to something that happens at the vertices. In other words, it is simply a measure of the angular defect. 
We can therefore see that the Riemann tensor is really just a tensorial way of writing the Gaussian curvature \(K=d\epsilon/dA\). In normal coordinates, the local geometry is nearly Cartesian, and when we take the product of two vectors in an antisymmetric manner, we are essentially measuring the area of the parallelogram they span, as in the three-dimensional vector cross product. We can therefore see that the Riemann tensor tells us something about the amount of curvature contained within the infinitesimal area spanned by \(dp^c\) and \(dq^d\). A finite two-dimensional region can be broken down into infinitesimal elements of area, and the Riemann tensor integrated over them. The result is equal to the finite change \(\Delta v^b\) in a vector transported around the whole boundary of the region. Example 8: Curvature tensors on a sphere Let's find the curvature tensors on a sphere of radius \(\rho\). Construct normal coordinates \((x,y)\) with origin O, and let vectors \(dp^c\) and \(dq^d\) represent infinitesimal displacements along \(x\) and \(y\), forming a quadrilateral as described above. Then \(R^x_{yxy}\) represents the change in the \(x\) direction that occurs in a vector that is initially in the \(y\) direction. If the vector has unit magnitude, then \(R^x_{yxy}\) equals the angular deficit of the quadrilateral. Comparing with the definition of the Gaussian curvature, we find \(R^x_{yxy}=K=1/\rho^2\). Interchanging \(x\) and \(y\), we find the same result for \(R^y_{xyx}\). Thus although the Riemann tensor in two dimensions has sixteen components, only these two are nonzero, and they are equal to each other. This result represents the defect in parallel transport around a closed loop per unit area. Suppose we parallel-transport a vector around an octant, as shown in figure b. The area of the octant is \((\pi/2)\rho^2\), and multiplying it by the Riemann tensor, we find that the defect in parallel transport is \(\pi/2\), i.e., a right angle, as is also evident from the figure. The above treatment may be somewhat misleading in that it may lead you to believe that there is a single coordinate system in which the Riemann tensor is always constant. This is not the case, since the calculation of the Riemann tensor was only valid near the origin O of the normal coordinates. The character of these coordinates becomes quite complicated far from O; we end up with all our constant-\(x\) lines converging at north and south poles of the sphere, and all the constant-\(y\) lines at east and west poles. Angular coordinates \((\phi,\theta)\) are more suitable as a large-scale description of the sphere. We can use the tensor transformation law to find the Riemann tensor in these coordinates. If O, the origin of the \((x,y)\) coordinates, is at coordinates \((\phi,\theta)\), then \(dx/d \phi=\rho\sin\theta\) and \(dy/d \theta=\rho\). The result is \(R^\phi_{\theta\phi\theta}=R^x_{yxy}(dy/d \theta)^2=1\) and \(R^\theta_{\phi\theta\phi}=R^y_{xyx}(dx/d \phi)^2=\sin^2\theta\). The variation in \(R^\theta_{\phi\theta\phi}\) is not due to any variation in the sphere's intrinsic curvature; it represents the behavior of the coordinate system. The Riemann tensor only measures curvature within a particular plane, the one defined by \(dp^c\) and \(dq^d\), so it is a kind of sectional curvature. 
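The components quoted in example 8 can be checked symbolically. The following sketch (my own; it uses the standard formula for the Christoffel symbols in terms of the metric, which the text defers to section 5.9) computes the Riemann tensor of a sphere of radius \(\rho\) directly in the angular coordinates \((\theta,\phi)\), with the index convention \(R^i_{jkl}\) as in the definition above:

```python
import sympy as sp

theta, phi, rho = sp.symbols('theta phi rho', positive=True)
coords = [theta, phi]
n = 2

# Metric of a sphere of radius rho in angular coordinates (theta, phi)
g = sp.Matrix([[rho**2, 0], [0, rho**2 * sp.sin(theta)**2]])
ginv = g.inv()

# Christoffel symbols Gamma^i_{jk} = (1/2) g^{il} (g_{lj,k} + g_{lk,j} - g_{jk,l})
Gamma = [[[sp.simplify(sum(ginv[i, l] * (sp.diff(g[l, j], coords[k])
                                         + sp.diff(g[l, k], coords[j])
                                         - sp.diff(g[j, k], coords[l]))
                           for l in range(n)) / 2)
           for k in range(n)]
          for j in range(n)]
         for i in range(n)]

# Riemann tensor R^i_{jkl}, with the antisymmetric pair (k, l) last
def R(i, j, k, l):
    expr = sp.diff(Gamma[i][j][l], coords[k]) - sp.diff(Gamma[i][j][k], coords[l])
    expr += sum(Gamma[i][k][m] * Gamma[m][j][l] - Gamma[i][l][m] * Gamma[m][j][k]
                for m in range(n))
    return sp.simplify(expr)

print(R(1, 0, 1, 0))   # R^phi_{theta phi theta}  ->  1
print(R(0, 1, 0, 1))   # R^theta_{phi theta phi}  ->  sin(theta)**2
```

Both components come out independent of \(\rho\), in agreement with \(R^\phi_{\theta\phi\theta}=1\) and \(R^\theta_{\phi\theta\phi}=\sin^2\theta\).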
Since we're currently working in two dimensions, however, there is only one plane, and no real distinction between sectional curvature and Ricci curvature, which is the average of the sectional curvature over all planes that include \(dq^d\): \(R_{cd}=R^a_{cad}\). The Ricci curvature in two spacelike dimensions, expressed in normal coordinates, is simply the diagonal matrix \(\text{diag}(K,K)\). 5.5 Some order-of-magnitude estimates As a general proposition, calculating an order-of-magnitude estimate of a physical effect requires an understanding of 50% of the physics, while an exact calculation requires about 75%. We've reached the point where it's reasonable to attempt a variety of order-of-magnitude estimates. a / The geodetic effect as measured by Gravity Probe B. 5.5.1 The geodetic effect How could we confirm experimentally that parallel transport around a closed path can cause a vector to rotate? The rotation is related to the amount of spacetime curvature contained within the path, so it would make sense to choose a loop going around a gravitating body. The rotation is a purely relativistic effect, so we expect it to be small. To make it easier to detect, we should go around the loop many times, causing the effect to accumulate. This is essentially a description of a body orbiting another body. A gyroscope aboard the orbiting body is expected to precess. This is known as the geodetic effect. In 1916, shortly after Einstein published the general theory of relativity, Willem de Sitter calculated the effect on the earth-moon system. The effect was not directly verified until the 1980's, and the first high-precision measurement was in 2007, from analysis of the results collected by the Gravity Probe B satellite experiment. The probe carried four gyroscopes made of quartz, which were the most perfect spheres ever manufactured, varying from sphericity by no more than about 40 atoms. Let's estimate the size of the effect. The first derivative of the metric is, roughly, the gravitational field, whereas the second derivative has to do with curvature. The curvature of spacetime around the earth should therefore vary as \(GMr^{-3}\), where \(M\) is the earth's mass and \(G\) is the gravitational constant. The area enclosed by a circular orbit is proportional to \(r^2\), so we expect the geodetic effect to vary as \(nGM/r\), where \(n\) is the number of orbits. The angle of precession is unitless, and the only way to make this result unitless is to put in a factor of \(1/c^2\). In units with \(c=1\), this factor is unnecessary. In ordinary metric units, the \(1/c^2\) makes sense, because it causes the purely relativistic effect to come out to be small. The result, up to unitless factors that we didn't pretend to find, is \[\begin{equation*} \Delta \theta \sim \frac{nGM}{c^2r} . \end{equation*}\] We might also expect a Thomas precession. Like the spacetime curvature effect, it would be proportional to \(nGM/c^2r\). Since we're not worrying about unitless factors, we can just lump the Thomas precession together with the effect already calculated. The data for Gravity Probe B are \(r=r_e+(650\ \text{km})\) and \(n \approx 5000\) (orbiting once every 90 minutes for the 353-day duration of the experiment), giving \(\Delta\theta \sim 3\times10^{-6}\) radians. Figure b shows the actual results for the four gyroscopes aboard the probe. The precession was about 6 arc-seconds, or \(3\times10^{-5}\) radians. Our crude estimate was on the right order of magnitude.
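Plugging in numbers (a small sketch of my own, with rounded constants) reproduces the quoted estimate:

```python
# Order-of-magnitude estimate of the geodetic precession for Gravity Probe B,
# following the formula above; the unitless factor discussed next is not included.
G = 6.674e-11         # m^3 kg^-1 s^-2, gravitational constant
M = 5.972e24          # kg, mass of the earth
c = 2.998e8           # m/s
r = 6.371e6 + 650e3   # m, earth radius plus 650 km of altitude
n = 5000              # number of orbits over the mission

dtheta = n * G * M / (c**2 * r)
print(dtheta)         # ~3e-6 radians, as quoted in the text
```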
The missing unitless factor on the right-hand side of the equation above is \(3\pi\), which brings the two results into fairly close quantitative agreement. The full derivation, including the factor of \(3\pi\), is given on page 212. b / Precession angle as a function of time as measured by the four gyroscopes aboard Gravity Probe B. 5.5.2 Deflection of light rays In the discussion of the momentum four vector in section 4.2.2, we saw that due to the equivalence principle, light must be affected by gravity. There are two ways in which such an effect could occur. Light can gain and lose momentum as it travels up and down in a gravitational field, or its momentum vector can be deflected by a transverse gravitational field. As an example of the latter, a ray of starlight can be deflected by the sun's gravity, causing the star's apparent position in the sky to be shifted. The detection of this effect was one of the first experimental tests of general relativity. Ordinarily the bright light from the sun would make it impossible to accurately measure a star's location on the celestial sphere, but this problem was sidestepped by Arthur Eddington during an eclipse of the sun in 1919. c / One of the photos from Eddington's observations of the 1919 eclipse. This is a photographic negative, so the circle that appears bright is actually the dark face of the moon, and the dark area is really the bright corona of the sun. The stars, marked by lines above and below them, appeared at positions slightly different than their normal ones, indicating that their light had been bent by the sun's gravity on its way to our planet. Let's estimate the size of this effect. We've already seen that the Riemann tensor is essentially just a tensorial way of writing the Gaussian curvature \(K=d\epsilon/dA\). Suppose, for the sake of this rough estimate, that the sun, earth, and star form a non-Euclidean triangle with a right angle at the sun. Then the angular deflection is the same as the angular defect \(\epsilon\) of this triangle, and equals the integral of the curvature over the interior of the triangle. Ignoring unitless constants, this ends up being exactly the same calculation as in section 5.5.1, and the result is \(\epsilon\sim GM/c^2r\), where \(r\) is the light ray's distance of closest approach to the sun. The value of \(r\) can't be less than the radius of the sun, so the maximum size of the effect is on the order of \(GM/c^2r\), where \(M\) is the sun's mass, and \(r\) is its radius. We find \(\epsilon\sim10^{-5}\) radians, or about a second of arc. To measure a star's position to within an arc second was well within the state of the art in 1919, under good conditions in a comfortable observatory. This observation, however, required that Eddington's team travel to the island of Principe, off the coast of West Africa. The weather was cloudy, and only during the last 10 seconds of the seven-minute eclipse did the sky clear enough to allow photographic plates to be taken of the Hyades star cluster against the background of the eclipse-darkened sky. The observed deflection was 1.6 seconds of arc, in agreement with the relativistic prediction. The relativistic prediction is derived on page 220. 5.6 The covariant derivative In the preceding section we were able to estimate a nontrivial general relativistic effect, the geodetic precession of the gyroscopes aboard Gravity Probe B, up to a unitless constant \(3\pi\). 
Let's think about what additional machinery would be needed in order to carry out the calculation in detail, including the \(3\pi\). First we would need to know the Einstein field equation, but in a vacuum this is fairly straightforward: \(R_{ab}=0\). Einstein posited this equation based essentially on the considerations laid out in section 5.1. But just knowing that a certain tensor vanishes identically in the space surrounding the earth clearly doesn't tell us anything explicit about the structure of the spacetime in that region. We want to know the metric. As suggested at the beginning of the chapter, we expect that the first derivatives of the metric will give a quantity analogous to the gravitational field of Newtonian mechanics, but this quantity will not be directly observable, and will not be a tensor. The second derivatives of the metric are the ones that we expect to relate to the Ricci tensor \(R_{ab}\). a / A double-slit experiment with electrons. If we add an arbitrary constant to the potential, no observable changes result. The wavelength is shortened, but the relative phase of the two parts of the waves stays the same. b / Two wavefunctions with constant wavelengths, and a third with a varying wavelength. None of these are physically distinguishable, provided that the same variation in wavelength is applied to all electrons in the universe at any given point in spacetime. There is not even any unambiguous way to pick out the third one as the one with a varying wavelength. We could choose a different gauge in which the third wave was the only one with a constant wavelength. 5.6.1 The covariant derivative in electromagnetism We're talking blithely about derivatives, but it's not obvious how to define a derivative in the context of general relativity in such a way that taking a derivative results in a well-behaved tensor. To see how this issue arises, let's retreat to the more familiar terrain of electromagnetism. In quantum mechanics, the phase of a charged particle's wavefunction is unobservable, so that for example the transformation \(\Psi \rightarrow -\Psi\) does not change the results of experiments. As a less trivial example, we can redefine the ground of our electrical potential, \(\Phi \rightarrow \Phi+\delta\Phi\), and this will add a constant onto the energy of every electron in the universe, causing their phases to oscillate at a greater rate due to the quantum-mechanical relation \(E=hf\). There are no observable consequences, however, because what is observable is the phase of one electron relative to another, as in a double-slit interference experiment. Since every electron has been made to oscillate faster, the effect is simply like letting the conductor of an orchestra wave her baton more quickly; every musician is still in step with every other musician. The rate of change of the wavefunction, i.e., its derivative, has some built-in ambiguity. For simplicity, let's now restrict ourselves to spin-zero particles, since details of electrons' polarization clearly won't tell us anything useful when we make the analogy with relativity. For a spin-zero particle, the wavefunction is simply a complex number, and there are no observable consequences arising from the transformation \(\Psi \rightarrow \Psi' = e^{i\alpha} \Psi\), where \(\alpha\) is a constant. The transformation \(\Phi \rightarrow \Phi-\delta\Phi\) is also allowed, and it gives \(\alpha(t)=(q\delta\Phi/\hbar) t\), so that the phase factor \(e^{i\alpha(t)}\) is a function of time \(t\).
Now from the point of view of electromagnetism in the age of Maxwell, with the electric and magnetic fields imagined as playing their roles against a background of Euclidean space and absolute time, the form of this time-dependent phase factor is very special and symmetrical; it depends only on the absolute time variable. But to a relativist, there is nothing very nice about this function at all, because there is nothing special about a time coordinate. If we're going to allow a function of this form, then based on the coordinate-invariance of relativity, it seems that we should probably allow \(\alpha\) to be any function at all of the spacetime coordinates. The proper generalization of \(\Phi \rightarrow \Phi-\delta\Phi\) is now \(A_b \rightarrow A_b-\partial_b \alpha\), where \(A_b\) is the electromagnetic potential four-vector (section 4.2.5, page 137). Self-check: Suppose we said we would allow \(\alpha\) to be a function of \(t\), but forbid it to depend on the spatial coordinates. Prove that this would violate Lorentz invariance. The transformation has no effect on the electromagnetic fields, which are the direct observables. We can also verify that the change of gauge will have no effect on observable behavior of charged particles. This is because the phase of a wavefunction can only be determined relative to the phase of another particle's wavefunction, when they occupy the same point in space and, for example, interfere. Since the phase shift depends only on the location in spacetime, there is no change in the relative phase. But bad things will happen if we don't make a corresponding adjustment to the derivatives appearing in the Schrödinger equation. These derivatives are essentially the momentum operators, and they give different results when applied to \(\Psi'\) than when applied to \(\Psi\): \[\begin{align*} \partial_b \Psi & \rightarrow \partial_b \left(e^{i\alpha} \Psi\right) \\ &= e^{i\alpha} \partial_b \Psi + i\partial_b\alpha \left(e^{i\alpha} \Psi\right) \\ &= \left(\partial_b + A'_b-A_b \right) \Psi' \end{align*}\] To avoid getting incorrect results, we have to do the substitution \(\partial_b \rightarrow \partial_b+ieA_b\), where the correction term compensates for the change of gauge. We call the operator \(\nabla\) defined as \[\begin{equation*} \nabla_b = \partial_b+ieA_b \end{equation*}\] the covariant derivative. It gives the right answer regardless of a change of gauge. c / These three rulers represent three choices of coordinates. As in figure b on page 173, switching from one set of coordinates to another has no effect on any experimental observables. It is merely a choice of gauge. d / Example 9. e / Birdtracks notation for the covariant derivative. 5.6.2 The covariant derivative in general relativity Now consider how all of this plays out in the context of general relativity. The gauge transformations of general relativity are arbitrary smooth changes of coordinates. One of the most basic properties we could require of a derivative operator is that it must give zero on a constant function. A constant scalar function remains constant when expressed in a new coordinate system, but the same is not true for a constant vector function, or for any tensor of higher rank. This is because the change of coordinates changes the units in which the vector is measured, and if the change of coordinates is nonlinear, the units vary from point to point. 
Consider the one-dimensional case, in which a vector \(v^a\) has only one component, and the metric is also a single number, so that we can omit the indices and simply write \(v\) and \(g\). (We just have to remember that \(v\) is really a contravariant vector, even though we're leaving out the upper index.) If \(v\) is constant, its derivative \(dv/dx\), computed in the ordinary way without any correction term, is zero. If we further assume that the coordinate \(x\) is a normal coordinate, so that the metric is simply the constant \(g=1\), then zero is not just the answer but the right answer. (The existence of a preferred, global set of normal coordinates is a special feature of a one-dimensional space, because there is no curvature in one dimension. In more than one dimension, there will typically be no possible set of coordinates in which the metric is constant, and normal coordinates only give a metric that is approximately constant in the neighborhood around a certain point. See figure g on page 164 for an example of normal coordinates on a sphere, which do not have a constant metric.) Now suppose we transform into a new coordinate system \(X\), which is not normal. The metric \(G\), expressed in this coordinate system, is not constant. Applying the tensor transformation law, we have \(V = v \:dX/dx\), and differentiation with respect to \(X\) will not give zero, because the factor \(dX/dx\) isn't constant. This is the wrong answer: \(V\) isn't really varying, it just appears to vary because \(G\) does. We want to add a correction term onto the derivative operator \(d/dX\), forming a covariant derivative operator \(\nabla_X\) that gives the right answer. This correction term is easy to find if we consider what the result ought to be when differentiating the metric itself. In general, if a tensor appears to vary, it could vary either because it really does vary or because the metric varies. If the metric itself varies, it could be either because the metric really does vary or ... because the metric varies. In other words, there is no sensible way to assign a nonzero covariant derivative to the metric itself, so we must have \(\nabla_X G=0\). The required correction therefore consists of replacing \(d/dX\) with \[\begin{equation*} \nabla_X=\frac{d}{dX}-G^{-1}\frac{dG}{dX} . \end{equation*}\] Applying this to \(G\) gives zero. \(G\) is a second-rank covariant tensor. If we apply the same correction to the derivatives of other second-rank covariant tensors, we will get nonzero results, and they will be the right nonzero results. For example, the covariant derivative of the stress-energy tensor \(T\) (assuming such a thing could have some physical significance in one dimension!) will be \( \nabla_X T=dT/dX-G^{-1}(dG/dX)T\). Physically, the correction term is a derivative of the metric, and we've already seen that the derivatives of the metric (1) are the closest thing we get in general relativity to the gravitational field, and (2) are not tensors. In 1+1 dimensions, suppose we observe that a free-falling rock has \(dV/dT=9.8\ \text{m}/\text{s}^2\). This acceleration cannot be a tensor, because we could make it vanish by changing from Earth-fixed coordinates \(X\) to free-falling (normal, locally Lorentzian) coordinates \(x\), and a tensor cannot be made to vanish by a change of coordinates. According to a free-falling observer, the vector \(v\) isn't changing at all; it is only the variation in the Earth-fixed observer's metric \(G\) that makes it appear to change.
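This claim is easy to check symbolically. Here is a minimal sketch (my own; the particular nonlinear change of coordinates is an arbitrary choice): take a rank-2 lower-index tensor whose component is constant in the normal coordinate \(x\), express it in the new coordinate \(X\), and compare the ordinary derivative with the corrected one.

```python
import sympy as sp

X = sp.symbols('X', positive=True)
x = X + X**3 / 3                  # hypothetical nonlinear change of coordinates (my choice)
dxdX = sp.diff(x, X)

G = dxdX**2                       # metric in the X coordinates: G = g (dx/dX)^2 with g = 1
t0 = sp.symbols('t0')             # constant component of the tensor in the x coordinates
T = t0 * dxdX**2                  # the same tensor expressed in the X coordinates

# Covariant derivative as defined in the text: nabla_X = d/dX - G^{-1} dG/dX
nabla_T = sp.diff(T, X) - (sp.diff(G, X) / G) * T
print(sp.simplify(nabla_T))       # 0: the tensor is not really varying
print(sp.simplify(sp.diff(T, X))) # nonzero: the ordinary derivative is misleading
```

The ordinary derivative is nonzero only because \(G\) varies; the correction term removes exactly that spurious variation.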
Mathematically, the form of the derivative is \((1/y)dy/dx\), which is known as a logarithmic derivative, since it equals \(d(\ln y)/dx\). It measures the multiplicative rate of change of \(y\). For example, if \(y\) scales up by a factor of \(k\) when \(x\) increases by 1 unit, then the logarithmic derivative of \(y\) is \(\ln k\). The logarithmic derivative of \(e^{cx}\) is \(c\). The logarithmic nature of the correction term to \(\nabla_X\) is a good thing, because it lets us take changes of scale, which are multiplicative changes, and convert them to additive corrections to the derivative operator. The additivity of the corrections is necessary if the result of a covariant derivative is to be a tensor, since tensors are additive creatures. What about quantities that are not second-rank covariant tensors? Under a rescaling of contravariant coordinates by a factor of \(k\), covariant vectors scale by \(k^{-1}\), and second-rank covariant tensors by \(k^{-2}\). The correction term should therefore be half as much for covariant vectors, \[\begin{equation*} \nabla_X=\frac{d}{dX}-\frac{1}{2}G^{-1}\frac{dG}{dX} , \end{equation*}\] and should have an opposite sign for contravariant vectors. Generalizing the correction term to derivatives of vectors in more than one dimension, we should have something of this form: \[\begin{align*} \nabla_a v^b &= \partial_a v^b + \Gamma^b_{ac}v^c\\ \nabla_a v_b &= \partial_a v_b - \Gamma^c_{ba}v_c , \end{align*}\] where \(\Gamma^b_{ac}\), called the Christoffel symbol, does not transform like a tensor, and involves derivatives of the metric. (“Christoffel” is pronounced “Krist-AWful,” with the accent on the middle syllable.) The explicit computation of the Christoffel symbols from the metric is deferred until section 5.9, but the intervening sections 5.7 and 5.8 can be omitted on a first reading without loss of continuity. An important gotcha is that when we evaluate a particular component of a covariant derivative such as \(\nabla_2 v^3\), it is possible for the result to be nonzero even if the component \(v^3\) vanishes identically. This can be seen in example 5 on p. 273. Example 9: Christoffel symbols on the globe As a qualitative example, consider the geodesic airplane trajectory shown in figure d, from London to Mexico City. In physics it is customary to work with the colatitude, \(\theta\), measured down from the north pole, rather than the latitude, measured from the equator. At P, over the North Atlantic, the plane's colatitude has a minimum. (We can see, without having to take it on faith from the figure, that such a minimum must occur. The easiest way to convince oneself of this is to consider a path that goes directly over the pole, at \(\theta=0\).) At P, the plane's velocity vector points directly west. At Q, over New England, its velocity has a large component to the south. Since the path is a geodesic and the plane has constant speed, the velocity vector is simply being parallel-transported; the vector's covariant derivative is zero. Since we have \(v_\theta=0\) at P, the only way to explain the nonzero and positive value of \(\partial_\phi v^\theta\) is that we have a nonzero and negative value of \(\Gamma^\theta_{\phi\phi}\). By symmetry, we can infer that \(\Gamma^\theta_{\phi\phi}\) must have a positive value in the southern hemisphere, and must vanish at the equator. \(\Gamma^\theta_{\phi\phi}\) is computed in example 10 on page 188.
Symmetry also requires that this Christoffel symbol be independent of \(\phi\), and it must also be independent of the radius of the sphere. Example 9 is in two spatial dimensions. In spacetime, \(\Gamma\) is essentially the gravitational field (see problem 6, p. 199), and early papers in relativity essentially refer to it that way. This may feel like a joyous reunion with our old friend from freshman mechanics, \(g=9.8\ \text{m}/\text{s}^2\). But our old friend has changed. In Newtonian mechanics, accelerations like \(g\) are frame-invariant (considering only inertial frames, which are the only legitimate ones in that theory). In general relativity they are frame-dependent, and as we saw on page 176, the acceleration of gravity can be made to equal anything we like, based on our choice of a frame of reference. To compute the covariant derivative of a higher-rank tensor, we just add more correction terms, e.g., \[\begin{align*} \nabla_a U_{bc} &= \partial_a U_{bc} - \Gamma^d_{ba}U_{dc}-\Gamma^d_{ca}U_{bd} \\ \text{or}\quad \nabla_a U_b^c &= \partial_a U_b^c - \Gamma^d_{ba}U_d^c+\Gamma^c_{ad}U_b^d . \end{align*}\] With the partial derivative \(\partial_\mu\), it does not make sense to use the metric to raise the index and form \(\partial^\mu\). It does make sense to do so with covariant derivatives, so \(\nabla^a = g^{ab} \nabla_b\) is a correct identity. Comma, semicolon, and birdtracks notation Some authors use superscripts with commas and semicolons to indicate partial and covariant derivatives. The following equations give equivalent notations for the same derivatives: \[\begin{align*} \partial_\mu X_\nu &= X_{\nu,\mu} \\ \nabla_a X_b &= X_{b;a} \\ \nabla^a X_b &= X_b^{;a} \end{align*}\] Figure e shows two examples of the corresponding birdtracks notation. Because birdtracks are meant to be manifestly coordinate-independent, they do not have a way of expressing non-covariant derivatives. We no longer want to use the circle as a notation for a non-covariant gradient as we did when we first introduced it on p. 48. 5.7 The geodesic equation a / The geodesic, 1, preserves tangency under parallel transport. The non-geodesic curve, 2, doesn't have this property; a vector initially tangent to the curve is no longer tangent to it when parallel-transported along it. 5.7.1 Characterization of the geodesic A geodesic can be defined as a world-line that preserves tangency under parallel transport, a. This is essentially a mathematical way of expressing the notion that we have previously expressed more informally in terms of “staying on course” or moving “inertially.” A curve can be specified by giving functions \(x^i(\lambda)\) for its coordinates, where \(\lambda\) is a real parameter. A vector lying tangent to the curve can then be calculated using partial derivatives, \(T^i=\partial x^i/\partial\lambda\). There are three ways in which a vector function of \(\lambda\) could change: (1) it could change for the trivial reason that the metric is changing, so that its components changed when expressed in the new metric; (2) it could change its components perpendicular to the curve; or (3) it could change its component parallel to the curve. Possibility 1 should not really be considered a change at all, and the definition of the covariant derivative is specifically designed to be insensitive to this kind of thing. 2 cannot apply to \(T^i\), which is tangent by construction. It would therefore be convenient if \(T^i\) happened to be always the same length.
If so, then 3 would not happen either, and we could reexpress the definition of a geodesic by saying that the covariant derivative of \(T^i\) was zero. For this reason, we will assume for the remainder of this section that the parametrization of the curve has this property. In a Newtonian context, we could imagine the \(x^i\) to be purely spatial coordinates, and \(\lambda\) to be a universal time coordinate. We would then interpret \(T^i\) as the velocity, and the restriction would be to a parametrization describing motion with constant speed. In relativity, the restriction is that \(\lambda\) must be an affine parameter. For example, it could be the proper time of a particle, if the curve in question is timelike. 5.7.2 Covariant derivative with respect to a parameter The notation of section 5.6 is not quite adapted to our present purposes, since it allows us to express a covariant derivative with respect to one of the coordinates, but not with respect to a parameter such as \(\lambda\). We would like to notate the covariant derivative of \(T^i\) with respect to \(\lambda\) as \(\nabla_\lambda T^i\), even though \(\lambda\) isn't a coordinate. To connect the two types of derivatives, we can use a total derivative. To make the idea clear, here is how we calculate a total derivative for a scalar function \(f(x,y)\), without tensor notation: \[\begin{equation*} \frac{df}{d \lambda} = \frac{\partial f}{\partial x}\frac{\partial x}{\partial \lambda} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial \lambda} . \end{equation*}\] This is just the generalization of the chain rule to a function of two variables. For example, if \(\lambda\) represents time and \(f\) temperature, then this would tell us the rate of change of the temperature as a thermometer was carried through space. Applying this to the present problem, we express the total covariant derivative as \[\begin{align*} \nabla_\lambda T^i &= (\nabla_b T^i) \frac{dx^b}{d\lambda} \\ &= \left(\partial_b T^i + \Gamma^i_{bc}T^c\right) \frac{dx^b}{d\lambda} . \end{align*}\] 5.7.3 The geodesic equation Recognizing \(\partial_b T^i dx^b/d\lambda\) as a total non-covariant derivative, we find \[\begin{equation*} \nabla_\lambda T^i = \frac{dT^i}{d\lambda} + \Gamma^i_{bc}T^c \frac{dx^b}{d\lambda} . \end{equation*}\] Substituting \(\partial x^i/\partial\lambda\) for \(T^i\), and setting the covariant derivative equal to zero, we obtain \[\begin{equation*} \frac{d^2 x^i}{d\lambda^2} + \Gamma^i_{bc} \frac{dx^c}{d\lambda} \frac{dx^b}{d\lambda} = 0 . \end{equation*}\] This is known as the geodesic equation. If this differential equation is satisfied for one affine parameter \(\lambda\), then it is also satisfied for any other affine parameter \(\lambda'=a\lambda+b\), where \(a\) and \(b\) are constants (problem 4). Recall that affine parameters are only defined along geodesics, not along arbitrary curves. We can't start by defining an affine parameter and then use it to find geodesics using this equation, because we can't define an affine parameter without first specifying a geodesic. Likewise, we can't do the geodesic first and then the affine parameter, because if we already had a geodesic in hand, we wouldn't need the differential equation in order to find a geodesic. The solution to this chicken-and-egg conundrum is to write down the differential equations and try to find a solution, without trying to specify either the affine parameter or the geodesic in advance. 
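As a concrete illustration of this technique (a minimal numerical sketch of my own, anticipating the step-by-step recipe of the next subsection), one can discretize the geodesic equation and march forward from chosen initial conditions. The example below uses the unit sphere of section 5.3, with its standard Christoffel symbols; a trajectory that starts on the equator moving due east stays on the equator, as a great circle should.

```python
import numpy as np

def step_geodesic(x, v, dlam, christoffel):
    """One Euler step of d^2 x^i/dlam^2 = -Gamma^i_bc (dx^b/dlam)(dx^c/dlam),
    updating the velocity first and then the position."""
    n = len(x)
    a = np.array([-sum(christoffel(i, b, c, x) * v[b] * v[c]
                       for b in range(n) for c in range(n))
                  for i in range(n)])
    v_new = v + a * dlam
    return x + v_new * dlam, v_new

# Christoffel symbols of the unit sphere in (theta, phi) coordinates,
# for the metric ds^2 = dtheta^2 + sin^2(theta) dphi^2 (standard results).
def sphere_gamma(i, b, c, x):
    theta = x[0]
    if i == 0 and b == 1 and c == 1:
        return -np.sin(theta) * np.cos(theta)   # Gamma^theta_{phi phi}
    if i == 1 and {b, c} == {0, 1}:
        return np.cos(theta) / np.sin(theta)    # Gamma^phi_{theta phi}
    return 0.0

x = np.array([np.pi / 2, 0.0])   # start on the equator...
v = np.array([0.0, 1.0])         # ...moving purely in the phi direction
for _ in range(10000):
    x, v = step_geodesic(x, v, 1e-3, sphere_gamma)
print(x)   # theta has stayed at pi/2: the equator is a geodesic
```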
We will seldom have occasion to resort to this technique, an exception being example 16 on page 310.

5.7.4 Uniqueness

The geodesic equation also settles the question of uniqueness: a geodesic is completely determined by its initial position and velocity. To see this, consider the following recipe for constructing a numerical approximation to a geodesic:

1. Initialize \(\lambda\), the \(x^i\) and their derivatives \(dx^i/d\lambda\). Also, set a small step-size \(\Delta\lambda\) by which to increment \(\lambda\) at each step below.
2. For each \(i\), calculate \(d^2 x^i/d\lambda^2\) using the geodesic equation.
3. Add \((d^2 x^i/d\lambda^2)\Delta\lambda\) to the currently stored value of \(dx^i/d\lambda\).
4. Add \((dx^i/d\lambda)\Delta\lambda\) to \(x^i\).

The practical use of this algorithm to compute geodesics numerically is demonstrated in section 5.9.2 on page 188.

5.8 Torsion

This section describes the concept of gravitational torsion. It can be skipped without loss of continuity, provided that you accept the symmetry property \(\Gamma^a_{[bc]}=0\) without worrying about what it means physically or what empirical evidence supports it. Self-check: Interpret the mathematical meaning of the equation \(\Gamma^a_{[bc]}=0\), which is expressed in the notation introduced on page 102. a / Measuring \(\partial^2 T/\partial x\partial y\) for a scalar \(T\). b / The gyroscopes both rotate when transported from A to B, causing Helen to navigate along BC, which does not form a right angle with AB. The angle between the two gyroscopes' axes is always the same, so the rotation is not locally observable, but it does produce an observable gap between C and E. 5.8.1 Are scalars path-dependent? It seems clear that something like the covariant derivative is needed for vectors, since they have a direction in spacetime, and thus their measures vary when the measure of spacetime itself varies. Since scalars don't have a direction in spacetime, the same reasoning doesn't apply to them, and this is reflected in our rules for covariant derivatives. The covariant derivative has one \(\Gamma\) term for every index of the tensor being differentiated, so for a scalar there should be no \(\Gamma\) terms at all, i.e., \(\nabla_a\) is the same as \(\partial_a\). But just because derivatives of scalars don't require special treatment for this particular reason, that doesn't mean they are guaranteed to behave as we intuitively expect, in the strange world of coordinate-invariant relativity. One possible way for scalars to behave counterintuitively would be by analogy with parallel transport of vectors. If we stick a vector in a box (as with, e.g., the gyroscopes aboard Gravity Probe B) and carry it around a closed loop, it changes. Could the same happen with a scalar? This is extremely counterintuitive, since there is no reason to imagine such an effect in any of the models we've constructed of curved spaces. In fact, it is not just counterintuitive but mathematically impossible, according to the following argument. The only reason we can interpret the vector-in-a-box effect as arising from the geometry of spacetime is that it applies equally to all vectors. If, for example, it only applied to the magnetic polarization vectors of ferromagnetic substances, then we would interpret it as a magnetic field living in spacetime, not a property of spacetime itself. If the value of a scalar-in-a-box was path-dependent, and this path-dependence was a geometric property of spacetime, then it would have to apply to all scalars, including, say, masses and charges of particles. Thus if an electron's mass increased by 1% when transported in a box along a certain path, its charge would have to increase by 1% as well.
But then its charge-to-mass ratio would remain invariant, and this is a contradiction, since the charge-to-mass ratio is also a scalar, and should have felt the same 1% effect. Since the varying scalar-in-a-box idea leads to a contradiction, it wasn't a coincidence that we couldn't find a model that produced such an effect; a theory that lacks self-consistency doesn't have any models. Self-check: Explain why parallel transporting a vector can only rotate it, not change its magnitude. There is, however, a different way in which scalars could behave counterintuitively, and this one is mathematically self-consistent. Suppose that Helen lives in two spatial dimensions and owns a thermometer. She wants to measure the spatial variation of temperature, in particular its mixed second derivative \(\partial^2 T/\partial x\partial y\). At home in the morning at point A, she prepares by calibrating her gyrocompass to point north and measuring the temperature. Then she travels \(\ell=1\) km east along a geodesic to B, consults her gyrocompass, and turns north. She continues one kilometer north to C, samples the change in temperature \(\Delta T_1\) relative to her home, and then retraces her steps to come home for lunch. In the afternoon, she checks her work by carrying out the same process, but this time she interchanges the roles of north and east, traveling along ADE. If she were living in a flat space, this would form the other two sides of a square, and her afternoon temperature sample \(\Delta T_2\) would be at the same point in space C as her morning sample. She actually doesn't recognize the landscape, so the sample points C and E are different, but this just confirms what she already knew: the space isn't flat.10 None of this seems surprising yet, but there are now two qualitatively different ways that her analysis of her data could turn out, indicating qualitatively different things about the laws of physics in her universe. The definition of the derivative as a limit requires that she repeat the experiment at smaller scales. As \(\ell\rightarrow 0\), the result for \(\partial^2 T/\partial x\partial y\) should approach a definite limit, and the error should diminish in proportion to \(\ell\). In particular the difference between the results inferred from \(\Delta T_1\) and \(\Delta T_2\) indicate an error, and the discrepancy between the second derivatives inferred from them should shrink appropriately as \(\ell\) shrinks. Suppose this doesn't happen. Since partial derivatives commute, we conclude that her measuring procedure is not the same as a partial derivative. Let's call her measuring procedure \(\nabla\), so that she is observing a discrepancy between \(\nabla_x\nabla_y\) and \(\nabla_y\nabla_x\). The fact that the commutator \(\nabla_x\nabla_y-\nabla_y\nabla_x\) doesn't vanish cannot be explained by the Christoffel symbols, because what she's differentiating is a scalar. Since the discrepancy arises entirely from the failure of \(\Delta T_1-\Delta T_2\) to scale down appropriately, the conclusion is that the distance \(\delta\) between the two sampling points is not scaling down as quickly as we expect. In our familiar models of two-dimensional spaces as surfaces embedded in three-space, we always have \(\delta\sim\ell^3\) for small \(\ell\), but she has found that it only shrinks as quickly as \(\ell^2\). For a clue as to what is going on, note that the commutator \(\nabla_x\nabla_y-\nabla_y\nabla_x\) has a particular handedness to it. 
For example, it flips its sign under a reflection across the line \(y=x\). When we “parallel”-transport vectors, they aren't actually staying parallel. In this hypothetical universe, a vector in a box transported by a small distance \(\ell\) rotates by an angle proportional to \(\ell\). This effect is called torsion. Although no torsion effect shows up in our familiar models, that is not because torsion lacks self-consistency. Models of spaces with torsion do exist. In particular, we can see that torsion doesn't lead to the same kind of logical contradiction as the varying-scalar-in-a-box idea. Since all vectors twist by the same amount when transported, inner products are preserved, so it is not possible to put two vectors in one box and get the scalar-in-a-box paradox by watching their inner product change when the box is transported. Note that the elbows ABC and ADE are not right angles. If Helen had brought a pair of gyrocompasses with her, one for \(x\) and one for \(y\), she would have found that the right angle between the gyrocompasses was preserved under parallel transport, but that a gyrocompass initially tangent to a geodesic did not remain so. There are in fact two inequivalent definitions of a geodesic in a space with torsion. The shortest path between two points is not necessarily the same as the straightest possible path, i.e., the one that parallel-transports its own tangent vector. c / Three gyroscopes are initially aligned with the \(x\), \(y\), and \(z\) axes. After parallel transport along the geodesic \(x\) axis, the \(x\) gyro is still aligned with the \(x\) axis, but the \(y\) and \(z\) gyros have rotated. 5.8.2 The torsion tensor Since torsion is odd under parity, it must be represented by an odd-rank tensor, which we call \(\tau^c_{ab}\) and define according to \[\begin{equation*} (\nabla_a\nabla_b-\nabla_b\nabla_a)f = -\tau^c_{ab}\nabla_c f , \end{equation*}\] where \(f\) is any scalar field, such as the temperature in the preceding section. There are two different ways in which a space can be non-Euclidean: it can have curvature, or it can have torsion. For a full discussion of how to handle the mathematics of a spacetime with both curvature and torsion, see the article by Steuard Jensen at For our present purposes, the main mathematical fact worth noting is that vanishing torsion is equivalent to the symmetry \(\Gamma^a_{bc}=\Gamma^a_{cb}\) of the Christoffel symbols. Using the notation introduced on page 102, \(\Gamma^a_{[bc]}=0\) if \(\tau=0\). Self-check: Use an argument similar to the one in example 5 on page 166 to prove that no model of a two-space embedded in a three-space can have torsion. Generalizing to more dimensions, the torsion tensor is odd under the full spacetime reflection \(x_a \rightarrow -x_a\), i.e., a parity inversion plus a time-reversal, PT. In the story above, we had a torsion that didn't preserve tangent vectors. In three or more dimensions, however, it is possible to have torsion that does preserve tangent vectors. For example, transporting a vector along the \(x\) axis could cause only a rotation in the \(y\)-\(z\) plane. This relates to the symmetries of the torsion tensor, which for convenience we'll write in an \(x\)-\(y\)-\(z\) coordinate system and in the fully covariant form \(\tau_{\lambda\mu\nu}\). The definition of the torsion tensor implies \(\tau_{\lambda(\mu\nu)}=0\), i.e., that the torsion tensor is antisymmetric in its two final indices. 
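To see concretely what the torsion tensor has to do with the Christoffel symbols, apply the definition to a scalar. Since \(\nabla_b f=\partial_b f\), and the covariant derivative of the covector \(\partial_b f\) is \(\nabla_a \partial_b f = \partial_a\partial_b f - \Gamma^c_{ab}\partial_c f\), the commutator is \[\begin{equation*} (\nabla_a\nabla_b-\nabla_b\nabla_a)f = -\left(\Gamma^c_{ab}-\Gamma^c_{ba}\right)\partial_c f , \end{equation*}\] so that \(\tau^c_{ab}=\Gamma^c_{ab}-\Gamma^c_{ba}=2\Gamma^c_{[ab]}\). This makes explicit both the antisymmetry in the final two indices and the statement above that vanishing torsion is the same thing as the symmetry \(\Gamma^a_{[bc]}=0\).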
Torsion that does not preserve tangent vectors will have nonvanishing elements such as \(\tau_{xxy}\), meaning that parallel-transporting a vector along the \(x\) axis can change its \(x\) component. Torsion that preserves tangent vectors will have vanishing \(\tau_{\lambda\mu\nu}\) unless \(\lambda\), \(\mu\), and \(\nu\) are all distinct. This is an example of the type of antisymmetry that is familiar from the vector cross product, in which the cross products of the basis vectors behave as \(\mathbf{x}\times\mathbf{y}=\mathbf{z}\), \(\mathbf{y}\times\mathbf{z}=\mathbf{x}\), and \(\mathbf{z}\times\mathbf{x}=\mathbf{y}\). Generalizing the notation for symmetrization and antisymmetrization of tensors from page 102, we have \[\begin{align*} T_{(abc)} &= \frac{1}{3!}\Sigma T_{abc} \\ T_{[abc]} &= \frac{1}{3!}\Sigma\epsilon^{abc}T_{abc} , \end{align*}\] where the sums are over all permutations of the indices, and in the second line we have used the Levi-Civita symbol. In this notation, a totally antisymmetric torsion tensor is one with \(\tau_{\lambda\mu\nu}=\tau_{[\lambda\mu\nu]}\), and torsion of this type preserves tangent vectors under translation. In two dimensions, there are no totally antisymmetric objects with three indices, because we can't write three indices without repeating one. In three dimensions, an antisymmetric object with three indices is simply a multiple of the Levi-Civita tensor, so a totally antisymmetric torsion, if it exists, is represented by a single number; under translation, vectors rotate like either right-handed or left-handed screws, and this number tells us the rate of rotation. In four dimensions, we have four independently variable quantities, \(\tau_{xyz}\), \(\tau_{tyz}\), \(\tau_{txz}\), and \(\tau_{txy}\). In other words, an antisymmetric torsion of 3+1 spacetime can be represented by a four-vector, \(\tau^a=\epsilon^{abcd}\tau_{bcd}\). d / The University of Washington torsion pendulum used to search for torsion. The light gray wedges are Alnico, the darker ones \(\text{Sm}\text{Co}_5\). The arrows with the filled heads represent the directions of the electron spins, with denser arrows indicating higher polarization. The arrows with the open heads show the direction of the \(\mathbf{B}\) field. 5.8.3 Experimental searches for torsion One way of stating the equivalence principle (see p. 142) is that it forbids spacetime from coming equipped with a vector field that could be measured by free-falling observers, i.e., observers in local Lorentz frames. A variety of high-precision tests of the equivalence principle have been carried out. From the point of view of an experimenter doing this kind of test, it is important to distinguish between fields that are “built in” to spacetime and those that live in spacetime. For example, the existence of the earth's magnetic field does not violate the equivalence principle, but if an experiment was sensitive to the earth's field, and the experimenter didn't know about it, there would appear to be a violation. Antisymmetric torsion in four dimensions acts like a vector. If it constitutes a universal background effect built into spacetime, then it violates the equivalence principle. If it instead arises from specific material sources, then it may still show up as a measurable effect in experimental tests designed to detect violations of Lorentz invariance. Let's consider the latter possibility.
Since curvature in general relativity comes from mass and energy, as represented by the stress-energy tensor \(T_{ab}\), we could ask what would be the sources of torsion, if it exists in our universe. The source can't be the rank-2 stress-energy tensor. It would have to be an odd-rank tensor, i.e., a quantity that is odd under PT, and in theories that include torsion it is commonly assumed that the source is the quantum-mechanical angular momentum of subatomic particles. If this is the case, then torsion effects are expected to be proportional to \(\hbar G\), the product of Planck's constant and the gravitational constant, and they should therefore be extremely small and hard to measure. String theory, for example, includes torsion, but nobody has found a way to test string theory empirically because it essentially makes predictions about phenomena at the Planck scale, \(\sqrt{\hbar G/c^3} \sim 10^{-35}\ \text{m}\), where both gravity and quantum mechanics are strong effects. There are, however, some high-precision experiments that have a reasonable chance of detecting whether our universe has torsion. Torsion violates the equivalence principle, and by the turn of the century tests of the equivalence principle had reached a level of precision sufficient to rule out some models that include torsion. Figure d shows a torsion pendulum used in an experiment by the Eöt-Wash group at the University of Washington.11 If torsion exists, then the intrinsic spin \(\boldsymbol{\sigma}\) of an electron should have an energy \(\boldsymbol{\sigma}\cdot\boldsymbol{\tau}\), where \(\boldsymbol{\tau}\) is the spacelike part of the torsion vector. The torsion could be generated by the earth, the sun, or some other object at a greater distance. The interaction \(\boldsymbol{\sigma}\cdot\boldsymbol{\tau}\) will modify the behavior of a torsion pendulum if the spins of the electrons in the pendulum are polarized nonrandomly, as in a magnetic material. The pendulum will tend to precess around the axis defined by \(\boldsymbol{\tau}\). This type of experiment is extremely difficult, because the pendulum tends to act as an ultra-sensitive magnetic compass, resulting in a measurement of the ambient magnetic field rather than the hypothetical torsion field \(\boldsymbol{\tau}\). To eliminate this source of systematic error, the UW group first eliminated the ambient magnetic field as well as possible, using mu-metal shielding and Helmholtz coils. They also constructed the pendulum out of a combination of two magnetic materials, Alnico 5 and \(\text{Sm}\text{Co}_5\), in such a way that the magnetic dipole moment vanished, but the spin dipole moment did not; Alnico 5's magnetic field is due almost entirely to electron spin, whereas the magnetic field of \(\text{Sm}\text{Co}_5\) contains significant contributions from orbital motion. The result was a nonmagnetic object whose spins were polarized. After four years of data collection, they found \(|\boldsymbol{\tau}|\lesssim 10^{-21}\ \text{eV}\). Models that include torsion typically predict such effects to be of the order of \(m_e^2/m_P \sim 10^{-17}\ \text{eV}\), where \(m_e\) is the mass of the electron and \(m_P=\sqrt{\hbar c/G}\approx10^{19}\ \text{GeV}\approx 20\ \mu\text{g}\) is the Planck mass. A wide class of these models is therefore ruled out by these experiments. Since there appears to be no experimental evidence for the existence of gravitational torsion in our universe, we will assume from now on that it vanishes identically. 
Einstein made the same assumption when he originally created general relativity, although he and Cartan later tinkered with non-torsion-free theories in a failed attempt to unify gravity with electromagnetism. Some models that include torsion remain viable. For example, it has been argued that the torsion tensor should fall off quickly with distance from the source.12 5.9 From metric to curvature 5.9.1 Finding the Christoffel symbol from the metric We've already found the Christoffel symbol in terms of the metric in one dimension. Expressing it in tensor notation, we have \[\begin{align*} \Gamma^d_{ba} = \frac{1}{2}g^{cd}\left(\partial_? g_{??}\right) , \end{align*}\] where inversion of the one-component matrix \(G\) has been replaced by matrix inversion, and, more importantly, the question marks indicate that there would be more than one way to place the subscripts so that the result would be a grammatical tensor equation. The most general form for the Christoffel symbol would be \[\begin{align*} \Gamma^b_{ac} = \frac{1}{2}g^{db}\left(L \partial_c g_{ab}+ M \partial_a g_{cb} + N \partial_b g_{ca}\right) , \end{align*}\] where \(L\), \(M\), and \(N\) are constants. Consistency with the one-dimensional expression requires \(L+M+N=1\), and vanishing torsion gives \(L=M\). The \(L\) and \(M\) terms have a different physical significance than the \(N\) term. Suppose an observer uses coordinates such that all objects are described as lengthening over time, and the change of scale accumulated over one day is a factor of \(k>1\). This is described by the derivative \(\partial_t g_{xx}\lt1\), which affects the \(M\) term. Since the metric is used to calculate squared distances, the \(g_{xx}\) matrix element scales down by \(1/\sqrt{k}\). To compensate for \(\partial_t v^x\lt0\), so we need to add a positive correction term, \(M>0\), to the covariant derivative. When the same observer measures the rate of change of a vector \(v^t\) with respect to space, the rate of change comes out to be too small, because the variable she differentiates with respect to is too big. This requires \(N\lt0\), and the correction is of the same size as the \(M\) correction, so \(|M|=|N|\). We find \(L=M=-N=1\). Self-check: Does the above argument depend on the use of space for one coordinate and time for the other? The resulting general expression for the Christoffel symbol in terms of the metric is \[\begin{equation*} \Gamma^c_{ab} = \frac{1}{2}g^{cd}\left(\partial_a g_{bd}+\partial_b g_{ad}-\partial_d g_{ab}\right) . \end{equation*}\] One can readily go back and check that this gives \(\nabla_c g_{ab}=0\). In fact, the calculation is a bit tedious. For that matter, tensor calculations in general can be infamously time-consuming and error-prone. Any reasonable person living in the 21st century will therefore resort to a computer algebra system. The most widely used computer algebra system is Mathematica, but it's expensive and proprietary, and it doesn't have extensive built-in facilities for handling tensors. It turns out that there is quite a bit of free and open-source tensor software, and it falls into two classes: coordinate-based and coordinate-independent. The best open-source coordinate-independent facility available appears to be Cadabra, and in fact the verification of \(\nabla_c g_{ab}=0\) is the first example given in the Leo Brewin's handy guide to applications of Cadabra to general relativity.13 Self-check: In the case of 1 dimension, show that this reduces to the earlier result of \(-(1/2)dG/dX\). 
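Cadabra works at the coordinate-independent level, but for quick coordinate-based checks a general-purpose computer algebra system also does the job. The following is a minimal sketch in Python using sympy, not the Cadabra example cited above; the small helper function christoffel is defined here only for illustration. It builds \(\Gamma^c_{ab}\) directly from the metric of a sphere and reproduces the results used in example 10 below.

import sympy as sp

# Metric of a sphere of radius R in (theta, phi) coordinates:
#   ds^2 = R^2 dtheta^2 + R^2 sin^2(theta) dphi^2
theta, phi, R = sp.symbols('theta phi R', positive=True)
coords = [theta, phi]
g = sp.diag(R**2, R**2*sp.sin(theta)**2)
ginv = g.inv()
n = len(coords)

def christoffel(c, a, b):
    # Gamma^c_{ab} = (1/2) g^{cd} (d_a g_{bd} + d_b g_{ad} - d_d g_{ab})
    return sp.simplify(sum(ginv[c, d]*(sp.diff(g[b, d], coords[a])
                                       + sp.diff(g[a, d], coords[b])
                                       - sp.diff(g[a, b], coords[d]))
                           for d in range(n))/2)

print(christoffel(0, 1, 1))   # Gamma^theta_{phi phi}, equal to -sin(theta)*cos(theta)
print(christoffel(1, 0, 1))   # Gamma^phi_{theta phi}, equal to cos(theta)/sin(theta), i.e. cot(theta)

Replacing g by any other metric (and coords by the corresponding coordinate list) gives the Christoffel symbols for that metric, which is all that is needed for the numerical geodesic calculation of section 5.9.2.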
Since \(\Gamma\) is not a tensor, it is not obvious that the covariant derivative, which is constructed from it, is a tensor. But if it isn't obvious, neither is it surprising -- the goal of the above derivation was to get results that would be coordinate-independent. Example 10: Christoffel symbols on the globe, quantitatively In example 9 on page 177, we inferred the following properties for the Christoffel symbol \(\Gamma^\theta_{\phi\phi}\) on a sphere of radius \(R\): \(\Gamma^\theta_{\phi\phi}\) is independent of \(\phi\) and \(R\), \(\Gamma^\theta_{\phi\phi}\lt0\) in the northern hemisphere (colatitude \(\theta\) less than \(\pi/2\)), \(\Gamma^\theta_{\phi\phi}=0\) on the equator, and \(\Gamma^\theta_{\phi\phi}>0\) in the southern hemisphere. The metric on a sphere is \(ds^2=R^2d\theta^2+R^2\sin^2\theta\,d\phi^2\). The only nonvanishing term in the expression for \(\Gamma^\theta_{\phi\phi}\) is the one involving \(\partial_\theta g_{\phi\phi}=2R^2\sin\theta\cos\theta\). The result is \(\Gamma^\theta_{\phi\phi}=-\sin\theta\cos\theta\), which can be verified to have the properties claimed above. 5.9.2 Numerical solution of the geodesic equation On page 180 I gave an algorithm that demonstrated the uniqueness of the solutions to the geodesic equation. This algorithm can also be used to find geodesics in cases where the metric is known. The following program, written in the computer language Python, carries out a very simple calculation of this kind, in a case where we know what the answer should be; even without any previous familiarity with Python, it shouldn't be difficult to see the correspondence between the abstract algorithm presented on page 180 and its concrete realization below. For polar coordinates in a Euclidean plane, one can compute \(\Gamma^r_{\phi\phi}=-r\) and \(\Gamma^\phi_{r\phi}=1/r\) (problem 2, page 199). Here we compute the geodesic that starts out tangent to the unit circle at \(\phi=0\).

import math
l = 0        # affine parameter lambda
dl = .001    # change in l with each iteration
l_max = 100.
# initial position:
r = 1
phi = 0
# initial derivatives of coordinates w.r.t. lambda
vr = 0
vphi = 1
k = 0        # keep track of how often to print out updates
while l<l_max:
    l = l+dl
    # Christoffel symbols:
    Grphiphi = -r
    Gphirphi = 1/r
    # second derivatives:
    ar = -Grphiphi*vphi*vphi
    aphi = -2.*Gphirphi*vr*vphi
    #   ... factor of 2 because G^a_{bc}=G^a_{cb} and b
    #   is not the same as c
    # update velocity:
    vr = vr + dl*ar
    vphi = vphi + dl*aphi
    # update position:
    r = r + vr*dl
    phi = phi + vphi*dl
    if k%10000==0:   # print every 10000th step
        phi_deg = phi*180./math.pi
        print("lambda=%6.2f r=%6.2f phi=%6.2f deg." % (l,r,phi_deg))
    k = k+1

It is not necessary to worry about all the technical details of the language (e.g., the first line, which makes available such conveniences as math.pi for \(\pi\)). Comments are set off by pound signs. The lines inside the while loop are indented because they are all to be executed repeatedly, until it is no longer true that \(\lambda\lt\lambda_{max}\). Self-check: By inspecting the lines that compute the Christoffel symbols and the second derivatives ar and aphi, find the signs of \(\ddot{r}\) and \(\ddot{\phi}\) at \(\lambda=0\). Convince yourself that these signs are what we expect geometrically. The output is as follows:

lambda= 0.00 r= 1.00 phi= 0.06 deg.
lambda= 10.00 r= 10.06 phi= 84.23 deg.
lambda= 20.00 r= 20.04 phi= 87.07 deg.
lambda= 30.00 r= 30.04 phi= 88.02 deg.
lambda= 40.00 r= 40.04 phi= 88.50 deg.
lambda= 50.00 r= 50.04 phi= 88.78 deg.
lambda= 60.00 r= 60.05 phi= 88.98 deg.
lambda= 70.00 r= 70.05 phi= 89.11 deg.
lambda= 80.00 r= 80.06 phi= 89.21 deg.
lambda= 90.00 r= 90.06 phi= 89.29 deg.
We can see that \(\phi\rightarrow 90\ \text{deg.}\) as \(\lambda\rightarrow\infty\), which makes sense, because the geodesic is a straight line parallel to the \(y\) axis. A less trivial use of the technique is demonstrated on page 220, where we calculate the deflection of light rays in a gravitational field, one of the classic observational tests of general relativity. 5.9.3 The Riemann tensor in terms of the Christoffel symbols The covariant derivative of a vector can be interpreted as the rate of change of a vector in a certain direction, relative to the result of parallel-transporting the original vector in the same direction. We can therefore see that the definition of the Riemann curvature tensor on page 168 is a measure of the failure of covariant derivatives to commute: \[\begin{equation*} (\nabla_a \nabla_b - \nabla_b \nabla_a) A^c = A^d R^c_{dab} \end{equation*}\] A tedious calculation now gives \(R\) in terms of the \(\Gamma\)s: \[\begin{equation*} R^a_{bcd} = \partial_c \Gamma^a_{db} - \partial_d \Gamma^a_{cb} + \Gamma^a_{ce}\Gamma^e_{db}-\Gamma^a_{de}\Gamma^e_{cb} \end{equation*}\] This is given as another example later in Brewin's manual for applying Cadabra to general relativity.14 (Brewin writes the upper index in the second slot of \(R\).) a / The Aharonov-Bohm effect. An electron enters a beam splitter at P, and is sent out in two different directions. The two parts of the wave are reflected so that they reunite at Q. The arrows represent the vector potential \(\mathbf{A}\). The observable magnetic field \(\mathbf{B}\) is zero everywhere outside the solenoid, and yet the interference observed at Q depends on whether the field is turned on. See page 137 for further discussion of the \(\mathbf{A}\) and \(\mathbf{B}\) fields of a solenoid. b / The cone has zero intrinsic curvature everywhere except at its tip. An observer who never visits the tip can nevertheless detect its existence, because parallel transport around a path that encloses the tip causes a vector to change its direction. 5.9.4 Some general ideas about gauge Let's step back now for a moment and try to gain some physical insight by looking at the features that the electromagnetic and relativistic gauge transformations have in common. We have the following analogies:
Global symmetry: in electromagnetism, a constant phase shift \(\alpha\) has no observable effects; in differential geometry, adding a constant onto a coordinate has no observable effects.
Local symmetry: in electromagnetism, a phase shift \(\alpha\) that varies from point to point has no observable effects; in differential geometry, an arbitrary coordinate transformation has no observable effects.
The gauge is described by … \(\alpha\) in electromagnetism, and by the metric \(g_{\mu\nu}\) in differential geometry.
… and differentiation of this gives the gauge field … \(A_b\), respectively \(\Gamma^c_{ab}\).
A second differentiation gives the directly observable field(s) … \(\mathbf{E}\) and \(\mathbf{B}\), respectively the Riemann tensor \(R^c_{dab}\).
The interesting thing here is that the directly observable fields do not carry all of the necessary information, but the gauge fields are not directly observable. In electromagnetism, we can see this from the Aharonov-Bohm effect, shown in figure a.15 The solenoid has \(\mathbf{B}=0\) externally, and the electron beams only ever move through the external region, so they never experience any magnetic field. Experiments show, however, that turning the solenoid on and off does change the interference between the two beams. This is because the vector potential does not vanish outside the solenoid, and as we've seen on page 137, the phase of the beams varies according to the path integral of the \(A_b\).
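Quantitatively, the relative phase accumulated between the two beams is the loop integral of the vector potential around the closed path they enclose, which by Stokes' theorem equals the magnetic flux through the loop: \[\begin{equation*} \Delta\phi = \frac{q}{\hbar}\oint A_b\, dx^b = \frac{q}{\hbar}\Phi_B , \end{equation*}\] where \(q\) is the electron's charge and \(\Phi_B\) the flux carried by the solenoid. Turning the solenoid on changes \(\Phi_B\) and therefore shifts the interference pattern at Q, even though \(\mathbf{B}=0\) everywhere along the beams.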
We are therefore left with an uncomfortable, but unavoidable, situation. The concept of a field is supposed to eliminate the need for instantaneous action at a distance, which is forbidden by relativity; that is, (1) we want our fields to have only local effects. On the other hand, (2) we would like our fields to be directly observable quantities. We cannot have both 1 and 2. The gauge field satisfies 1 but not 2, and the electromagnetic fields give 2 but not 1. Figure b shows an analog of the Aharonov-Bohm experiment in differential geometry. Everywhere but at the tip, the cone has zero curvature, as we can see by cutting it and laying it out flat. But even an observer who never visits the tightly curved region at the tip can detect its existence, because parallel-transporting a vector around a closed loop can change the vector's direction, provided that the loop surrounds the tip. In the electromagnetic example, integrating \(\mathbf{A}\) around a closed loop reveals, via Stokes' theorem, the existence of a magnetic flux through the loop, even though the magnetic field is zero at every location where \(\mathbf{A}\) has to be sampled. In the relativistic example, integrating \(\Gamma\) around a closed loop shows that there is curvature inside the loop, even though the curvature is zero at all the places where \(\Gamma\) has to be sampled. The fact that \(\Gamma\) is a gauge field, and therefore not locally observable, is simply a fancy way of expressing the ideas introduced on pp. 176 and 177, that due to the equivalence principle, the gravitational field in general relativity is not locally observable. This non-observability is local because the equivalence principle is a statement about local Lorentz frames. The example in figure b is non-local. Example 11: Geodetic effect and structure of the source \(\triangleright\) In section 5.5.1 on page 170, we estimated the geodetic effect on Gravity Probe B and found a result that was only off by a factor of \(3\pi\). The mathematically pure form of the \(3\pi\) suggests that the geodetic effect is insensitive to the distribution of mass inside the earth. Why should this be so? \(\triangleright\) The change in a vector upon parallel transporting it around a closed loop can be expressed in terms of either (1) the area integral of the curvature within the loop or (2) the line integral of the Christoffel symbol (essentially the gravitational field) on the loop itself. Although I expressed the estimate as 1, it would have been equally valid to use 2. By Newton's shell theorem, the gravitational field is not sensitive to anything about its mass distribution other than its near spherical symmetry. The earth spins, and this does affect the stress-energy tensor, but since the velocity with which it spins is everywhere much smaller than \(c\), the resulting effect, called frame dragging, is much smaller. 5.10 Manifolds This section can be omitted on a first reading. a / In Asteroids, space “wraps around.” 5.10.1 Why we need manifolds General relativity doesn't assume a predefined background metric, and this creates a chicken-and-egg problem. We want to define a metric on some space, but how do we even specify the set of points that make up that space? The usual way to define a set of points would be by their coordinates. For example, in two dimensions we could define the space as the set of all ordered pairs of real numbers \((x,y)\). But this doesn't work in general relativity, because space is not guaranteed to have this structure. 
For example, in the classic 1979 computer game Asteroids, space “wraps around,” so that if your spaceship flies off the right edge of the screen, it reappears on the left, and similarly at the top and bottom. Even before we impose a metric on this space, it has topological properties that differ from those of the Euclidean plane. By “topological” we mean properties that are preserved if the space is thought of as a sheet of rubber that can be stretched in any way, but not cut or glued back together. Topologically, the space in Asteroids is equivalent to a torus (surface of a doughnut), but not to the Euclidean plane. b / A coffee cup is topologically equivalent to a torus. Another useful example is the surface of a sphere. In example 10 on page 188, we calculated \(\Gamma^\theta_{\phi\phi}\). A similar calculation gives \(\Gamma^\phi_{\theta\phi}=\cot\theta\). Now consider what happens as we drive our dogsled north along the line of longitude \(\phi=0\), cross the north pole at \(\theta=0\), and continue along the same geodesic. As we cross the pole, our longitude changes discontinuously from 0 to \(\pi\). Consulting the geodesic equation, we see that this happens because \(\Gamma^\phi_{\theta\phi}\) blows up at \(\theta=0\). Of course nothing really special happens at the pole. The bad behavior isn't the fault of the sphere, it's the fault of the \((\theta,\phi)\) coordinates we've chosen, that happen to misbehave at the pole. Unfortunately, it is impossible to define a pair of coordinates on a two-sphere without having them misbehave somewhere. (This follows from Brouwer's famous 1912 “Hairy ball theorem,” which states that it is impossible to comb the hair on a sphere without creating a cowlick somewhere.) c / General relativity doesn't assume a predefined background metric. Therefore all we can really know before we calculate anything is that we're working on a manifold, without a metric imposed on it. 5.10.2 Topological definition of a manifold This motivates us to try to define a “bare-bones” geometrical space in which there is no predefined metric or even any predefined set of coordinates. There is a general notion of a topological space, which is too general for our purposes. In such a space, the only structure we are guaranteed is that certain sets are defined as “open,” in the same sense that an interval like \(0\lt x \lt 1\) is called “open.” Any point in an open set can be moved around without leaving the set. An open set is essentially a set without a boundary, for in a set like \(0\le x \le 1\), the boundary points 0 and 1 can only be moved in one direction without taking them outside. A topological space is too general for us because it can include spaces like fractals, infinite-dimensional spaces, and spaces that have different numbers of dimensions in different regions. It is nevertheless useful to recognize certain concepts that can be defined using only the generic apparatus of a topological space, so that we know they do not depend in any way on the presence of a metric. An open set surrounding a point is called a neighborhood of that point. In a topological space we have a notion of getting arbitrarily close to a certain point, which means to take smaller and smaller neighborhoods, each of which is a subset of the last. But since there is no metric, we do not have any concept of comparing distances of distant points, e.g., that P is closer to Q than R is to S.
Continuity is a purely topological idea: a continuous function is one such that for any open subset U of its range, the set V of points in its domain that are mapped to points in U is also open. Although some definitions of continuous functions talk about real numbers like \(\epsilon\) and \(\delta\), the notion of continuity doesn't depend on the existence of any structure such as the real number system. A homeomorphism is a function that is invertible and continuous in both directions. Homeomorphisms formalize the informal notion of “rubber-sheet geometry without cutting or gluing.” If a homeomorphism exists between two topological spaces, we say that they are homeomorphic; they have the same structure and are in some sense the same space. The more specific type of topological space we want is called a manifold. Without attempting any high level of mathematical rigor, we define an \(n\)-dimensional manifold M according to the following informal principles:16
M1 Dimension: M has a definite dimension \(n\), which is the same everywhere.
M2 Homogeneity: No point of M has any local property that makes it qualitatively different from any other point.
M3 Completeness: Taking smaller and smaller neighborhoods always closes in on a point that itself belongs to M; the space has no points “missing” from it.
Example 12: Lines The set of all real numbers is a 1-manifold. Similarly, any line with the properties specified in Euclid's Elements is a 1-manifold. All such lines are homeomorphic to one another, and we can therefore speak of “the line.” Example 13: A circle A circle (not including its interior) is a 1-manifold, and it is not homeomorphic to the line. To see this, note that deleting a point from a circle leaves it in one connected piece, but deleting a point from a line makes two. Here we use the fact that a homeomorphism is guaranteed to preserve “rubber-sheet” properties like the number of pieces. Example 14: No changes of dimension A “lollipop” formed by gluing an open 2-circle (i.e., a circle not including its boundary) to an open line segment is not a manifold, because there is no \(n\) for which it satisfies M1. It also violates M2, because points in this set fall into three distinct classes: those that live in 2-dimensional neighborhoods, those that live in 1-dimensional neighborhoods, and the point where the line segment intersects the boundary of the circle. Example 15: No manifolds made from the rational numbers The rational numbers are not a manifold, because specifying an arbitrarily small neighborhood around \(\sqrt{2}\) excludes every rational number, violating M3. Similarly, the rational plane defined by rational-number coordinate pairs \((x,y)\) is not a 2-manifold. It's good that we've excluded this space, because it has the unphysical property that curves can cross without having a point in common. For example, the curve \(y=x^2\) crosses from one side of the line \(y=2\) to the other, but never intersects it. This is physically undesirable because it doesn't match up with what we have in mind when we talk about collisions between particles as intersections of their world-lines, or when we say that electric field lines aren't supposed to intersect. Example 16: No boundary The open half-plane \(y>0\) in the Cartesian plane is a 2-manifold. The closed half-plane \(y\ge 0\) is not, because it violates M2; the boundary points have different properties than the ones on the interior. Example 17: Disconnected manifolds Two nonintersecting lines are a 1-manifold. Physically, disconnected manifolds of this type would represent a universe in which an observer in one region would never be able to find out about the existence of the other region. Example 18: No bad glue jobs Hold your hands like you're pretending you know karate, and then use one hand to karate-chop the other.
Suppose we want to join two open half-planes in this way. As long as they're separate, then we have a perfectly legitimate disconnected manifold. But if we want to join them by adding the point P where their boundaries coincide, then we violate M2, because this point has special properties not possessed by any others. An example of such a property is that there exist points Q and R such that every continuous curve joining them passes through P. (Cf. problem 5, p. 329.) d / Example 22. 5.10.3 Local-coordinate definition of a manifold An alternative way of characterizing an \(n\)-manifold is as an object that can locally be described by \(n\) real coordinates. That is, any sufficiently small neighborhood is homeomorphic to an open set in the space of real-valued \(n\)-tuples of the form \((x_1,x_2,...,x_n)\). For example, a closed half-plane is not a 2-manifold because no neighborhood of a point on its edge is homeomorphic to any open set in the Cartesian plane. Self-check: Verify that this alternative definition of a manifold gives the same answers as M1-M3 in all the examples above. Roughly speaking, the equivalence of the two definitions occurs because we're using \(n\) real numbers as coordinates for the dimensions specified by M1, and the real numbers are the unique number system that has the usual arithmetic operations, is ordered, and is complete in the sense of M3. As usual when we say that something is “local,” a question arises as to how local is local enough. The language in the definition above about “any sufficiently small neighborhood” is logically akin to the Weierstrass \(\epsilon\)-\(\delta\) approach: if Alice gives Bob a manifold and a point on a manifold, Bob can always find some neighborhood around that point that is compatible with coordinates, but it may be an extremely small neighborhood. As discussed in section 3.3, a method that is equally rigorous --- and usually much more convenient in differential geometry --- is to use infinitesimals. For example, suppose that we want to write down a metric in the form \(ds^2=g_{ab}dx^adx^b\). Infinitesimal distances like \(dx^a\) are always small enough to fit in any open set of real numbers. In fact, an alternative definition of an open set, in a space with a Euclidean metric, is one in which every point can be surrounded by an infinitesimal ball. Similarly, if we want to calculate Christoffel symbols, the Riemann tensor, etc., then all we need is the ability to take derivatives, and this only requires infinitesimal coordinate changes. Likewise the equivalence principle says that a spacetime is compatible with a Lorentzian metric at every point, and this only requires an infinitesimal amount of elbow-room. In practice, we never have to break up a manifold into infinitely many pieces, each of them infinitesimally small, in order to have well-behaved coordinates. Example 19: Coordinates on a circle If we are to define coordinates on a circle, they should be continuous functions. The angle \(\phi\) about the center therefore doesn't quite work as a global coordinate, because it has a discontinuity where \(\phi=0\) is identified with \(\phi=2\pi\). We can get around this by using different coordinates in different regions, as is guaranteed to be possible by the local-coordinate definition of a manifold. For example, we can cover the circle with two open sets, one on the left and one on the right. The left one, L, is defined by deleting only the \(\phi=0\) point from the circle. 
The right one, R, is defined by deleting only the one at \(\phi=\pi\). On L, we use coordinates \(0\lt\phi_L\lt2\pi\), which are always a continuous function from L to the real numbers. On R, we use \(-\pi\lt\phi_R\lt\pi\). In examples like this one, the sets like L and R are referred to as patches. We require that the coordinate maps on the different patches match up smoothly. In this example, we would like all four of the following functions, known as transition maps, to be continuous: The local-coordinate definition only states that a manifold can be coordinatized. That is, the functions that define the coordinate maps are not part of the definition of the manifold, so, for example, if two people define coordinates patches on the unit circle in different ways, they are still talking about exactly the same manifold. We conclude with a few examples relating to homeomorphism. Example 20: Open line segment homeomorphic to a line Let L be an open line segment, such as the open interval \((0,1)\). L is homeomorphic to a line, because we can map \((0,1)\) to the real line through the function \(f(x)=\tan(\pi x-\pi/2)\). Example 21: Closed line segment not homeomorphic to a line A closed line segment (which is not a manifold) is not homeomorphic to a line. If we map it to a line, then the endpoints have to go to two special points A and B. There is then no way for the mapping to visit the points exterior to the interval \([\text{A},\text{B}]\) without visiting A and B more than once. Example 22: Open line segment not homeomorphic to the interior of a circle If the interior of a circle could be mapped by a homeomorphism \(f\) to an open line segment, then consider what would happen if we took a closed curve lying inside the circle and found its image. By the intermediate value theorem, \(f\) would not be one-to-one, but this is a contradiction since \(f\) was assumed to be a homeomorphism. This is an example of a more general fact that homeomorphism preserves the dimensionality of a manifold. Homework Problems 1. Example 6 on p. 167 discussed some examples in electrostatics where the charge density on the surface of a conductor depends on the Gaussian curvature, when the curvature is positive. In the case of a knife-edge formed by two half-planes at an exterior angle \(\beta>\pi\), there is a standard result17 that the charge density at the edge blows up to infinity as \(R^{\pi/\beta-1}\). Does this match up with the hypothesis that Gaussian curvature determines the charge density?(solution in the pdf version of the book) 2. Show, as claimed on page 188, that for polar coordinates in a Euclidean plane, \(\Gamma^r_{\phi\phi}=-r\) and \(\Gamma^\phi_{r\phi}=1/r\). 3. Partial derivatives commute with partial derivatives. Covariant derivatives don't commute with covariant derivatives. Do covariant derivatives commute with partial derivatives? 4. Show that if the differential equation for geodesics on page 178 is satisfied for one affine parameter \(\lambda\), then it is also satisfied for any other affine parameter \(\lambda'=a\lambda+b\), where \(a\) and \(b\) are constants. 5. Equation [] on page gives a flat-spacetime metric in rotating polar coordinates. (a) Verify by explicit computation that this metric represents a flat spacetime. (b) Reexpress the metric in rotating Cartesian coordinates, and check your answer by verifying that the Riemann tensor vanishes. 6. 
The purpose of this problem is to explore the difficulties inherent in finding anything in general relativity that represents a uniform gravitational field \(g\). In example 12 on page 59, we found, based on elementary arguments about the equivalence principle and photons in elevators, that gravitational time dilation must be given by \(e^\Phi\), where \(\Phi=gz\) is the gravitational potential. This results in a metric \[\begin{equation*} ds^2 = e^{2gz}dt^2-dz^2 . \end{equation*}\] On the other hand, example 19 on page 140 derived the metric \[\begin{equation*} ds^2 = (1+gz)^2 dt^2 - dz^2 . \end{equation*}\] by transforming from a Lorentz frame to a frame whose origin moves with constant proper acceleration \(g\). (These are known as Rindler coordinates.) Prove the following facts. None of the calculations are so complex as to require symbolic math software, so you might want to perform them by hand first, and then check yourself on a computer. (a) The metrics [] and [] are approximately consistent with one another for \(z\) near 0. (b) When a test particle is released from rest in either of these metrics, its initial proper acceleration is \(g\). (c) The two metrics are not exactly equivalent to one another under any change of coordinates. (d) Both spacetimes are uniform in the sense that the curvature is constant. (In both cases, this can be proved without an explicit computation of the Riemann tensor.) (solution in the pdf version of the book) The incompatibility between [] and [] can be interpreted as showing that general relativity does not admit any spacetime that has all the global properties we would like for a uniform gravitational field. This is related to Bell's spaceship paradox (example 16, p. 66). Some further properties of the metric [] are analyzed in subsection 7.4 on page 255. 7. In a topological space T, the complement of a subset U is defined as the set of all points in T that are not members of U. A set whose complement is open is referred to as closed. On the real line, give (a) one example of a closed set and (b) one example of a set that is neither open nor closed. (c) Give an example of an inequality that defines an open set on the rational number line, but a closed set on the real line. 8. Prove that a double cone (e.g., the surface \(r=z\) in cylindrical coordinates) is not a manifold.(solution in the pdf version of the book) 9. Prove that a torus is a manifold.(solution in the pdf version of the book) 10. Prove that a sphere is not homeomorphic to a torus.(solution in the pdf version of the book) 11. Curvature on a Riemannian space in 2 dimensions is a topic that goes back to Gauss and has a simple interpretation: the only intrinsic measure of curvature is a single number, the Gaussian curvature. What about 1+1 dimensions? The simplest metrics I can think of are of the form \(ds^2=dt^2-f(t)dx^2\). (Something like \(ds^2=f(t)dt^2-dx^2\) is obviously equivalent to Minkowski space under a change of coordinates, while \(ds^2=f(x)dt^2-dx^2\) is the same as the original example except that we've swapped \(x\) and \(t\).) Playing around with simple examples, one stumbles across the seemingly mysterious fact that the metric \(ds^2=dt^2-t^2dx^2\) is flat, while \(ds^2=dt^2-tdx^2\) is not. This seems to require some simple explanation. Consider the metric \(ds^2=dt^2-t^p dx^2\). (a) Calculate the Christoffel symbols by hand. 
(b) Use a computer algebra system such as Maxima to show that the Ricci tensor vanishes only when \(p=2\).(solution in the pdf version of the book) The explanation is that in the case \(p=2\), the \(x\) coordinate is expanding in proportion to the \(t\) coordinate. This can be interpreted as a situation in which our length scale is defined by a lattice of test particles that expands inertially. Since their motion is inertial, no gravitational fields are required in order to explain the observed change in the length scale; cf. the Milne universe, p. 299. [2] I W McAllister 1990 J. Phys. D: Appl. Phys. 23 359 [3] Moore et al., Journal of Applied Meteorology 39 (1999) 593 [4] Proof: Since any two lines cross in elliptic geometry, \(\ell\) crosses the \(x\) axis. The corollary then follows by application of the definition of the Gaussian curvature to the right triangles formed by \(\ell\), the \(x\) axis, and the lines at \(x=0\) and \(x=dx\), so that \(K=d\epsilon/dA=d^2\alpha/dxdy\), where third powers of infinitesimals have been discarded. [5] In the spherical model, \(L=\rho\theta\sin u\), where \(u\) is the angle subtended at the center of the sphere by an arc of length \(r\). We then have \(L/L_E=\sin u/u\), whose second derivative with respect to \(u\) is \(-1/3\). Since \(r=\rho u\), the second derivative of the same quantity with respect to \(r\) equals \(-1/3\rho^2=-K/3\). [6] Section 5.8 discusses the sense in which this approximation is good enough. [7] This statement is itself only a rough estimate. Anyone who has taught physics knows that students will often calculate an effect exactly while not understanding the underlying physics at all. [9] “On the gravitational field of a point mass according to Einstein's theory,” Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften 1 (1916) 189, translated in [10] This point was mentioned on page 168, in connection with the definition of the Riemann tensor. [12] Carroll and Field, [15] We describe the effect here in terms of an idealized, impractical experiment. For the actual empirical status of the Aharonov-Bohm effect, see Batelaan and Tonomura, Physics Today 62 (2009) 38. [16] For those with knowledge of topology, these can be formalized a little more: we want a completely normal, second-countable, locally connected topological space that has Lebesgue covering dimension \(n\), is a homogeneous space under its own homeomorphism group, and is a complete uniform space. I don't know whether this is sufficient to characterize a manifold completely, but it suffices to rule out all the counterexamples of which I know. [17] Jackson, Classical Electrodynamics
Advances in Mathematical Physics Volume 2012 (2012), Article ID 281705, 42 pages Review Article A New Approach to Black Hole Quasinormal Modes: A Review of the Asymptotic Iteration Method 1Department of Physics, Tamkang University, Tamsui, New Taipei City 25137, Taiwan 2National Institute for Theoretical Physics, School of Physics, University of the Witwatersrand, Wits 2050, South Africa 3Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502, Japan 4International College and Department of Physics, Osaka University, Toyonaka, Osaka 560-0043, Japan Received 18 November 2011; Revised 12 March 2012; Accepted 15 March 2012 Academic Editor: Ricardo Weder We discuss how to obtain black hole quasinormal modes (QNMs) using the asymptotic iteration method (AIM), initially developed to solve second-order ordinary differential equations. We introduce the standard version of this method and present an improvement more suitable for numerical implementation. We demonstrate that the AIM can be used to find radial QNMs for Schwarzschild, Reissner-Nordström (RN), and Kerr black holes in a unified way. We discuss some advantages of the AIM over the continued fractions method (CFM). This paper presents for the first time the spin 0, 1/2 and 2 QNMs of a Kerr black hole and the gravitational and electromagnetic QNMs of the RN black hole calculated via the AIM and confirms results previously obtained using the CFM. We also present some new results comparing the AIM to the WKB method. Finally we emphasize that the AIM is well suited to higher-dimensional generalizations and we give an example of doubly rotating black holes. 1. Introduction The study of quasinormal modes (QNMs) of black holes is an old and well-established subject, where the various frequencies are indicative of both the parameters of the black hole and the type of emissions possible. Initially the calculation of these frequencies was done in a purely numerical way, which requires selecting a value for the complex frequency, integrating the differential equation, and checking whether the boundary conditions are satisfied. Note that in the following we define QNMs as solutions of the perturbed field equations satisfying the boundary conditions of purely ingoing waves at the horizon and purely outgoing waves at infinity, for an \(e^{-i\omega t}\) time dependence. Also note that the outgoing boundary condition at infinity does not apply to asymptotically anti-de Sitter spacetimes, where instead something like a Dirichlet boundary condition is imposed; for example, see [1]. Since those conditions are not satisfied in general, the complex frequency plane must be surveyed for discrete values that lead to QNMs. This technique is time consuming and cumbersome, making it difficult to systematically survey the QNMs for a wide range of parameter values. Following early work by Vishveshwara [2], Chandrasekhar and Detweiler [3] pioneered this method for studying QNMs. In order to improve on this, a few semianalytic analyses were also attempted. In one approach, employed by Ferrari and Mashhoon [4], the potential barrier in the effective one-dimensional Schrödinger equation is replaced by a parameterized analytic potential barrier function for which simple exact solutions are known. The overall shape approximates that of the true black hole barrier, and the parameters of the barrier function are adjusted to fit the height and curvature of the true barrier at the peak.
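A standard choice for the fitted barrier is the Pöschl-Teller potential \(V(x)=V_0/\cosh^2(\alpha x)\), with \(V_0\) matched to the height of the true barrier and \(\alpha\) to its curvature at the peak, \(\alpha^2=-V''_{\max}/2V_0\); its quasinormal frequencies are then known in closed form, \[\begin{equation*} \omega_n = \pm\sqrt{V_0-\frac{\alpha^2}{4}} - i\,\alpha\left(n+\frac{1}{2}\right), \qquad n=0,1,2,\dots , \end{equation*}\] so that an estimate of a black hole QNM requires only the height and second derivative of the potential at its maximum.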
The resulting estimates for the QNM frequencies have been applied to the Schwarzschild, Reissner-Nordström, and Kerr black holes, with agreement within a few percent with the numerical results of Chandrasekhar and Detweiler in the Schwarzschild case [3], and with Gunter [5] in the Reissner-Nordström case. However, as this method relies upon a specialized barrier function, there is no systematic way to estimate the errors or to improve the accuracy. The method by Leaver [6], which is a hybrid of the analytic and the numerical, successfully generates QNM frequencies by making use of an analytic infinite-series representation of the solutions, together with a numerical solution of an equation for the QNM frequencies which involves, typically by applying a Frobenius series solution approach, the use of continued fractions. This technique is known as the continued fraction method (CFM). Historically, another commonly applied technique is the WKB approximation [79]. Even though it is based on an approximation, this approach is powerful as the WKB approximation is known in many cases to be more accurate and can be carried to higher orders, either as a means to improve accuracy or as a means to estimate the errors explicitly. Also it allows a more systematic study of QNMs than has been possible using outright numerical methods. The WKB approximation has since been extended to sixth order [10]. However, all of these approaches have their limitations, where in recent years a new method has been developed which can be more efficient in some cases, called the asymptotic iteration method (AIM). Previously this method was used to solve eigenvalue problems [11, 12] as a semi analytic technique for solving second-order homogeneous linear differential equations. It has also been successfully shown by some of the current authors that the AIM is an efficient and accurate technique for calculating QNMs [13]. As such, we will review the AIM as applied to a variety of black hole spacetimes, making (where possible) comparisons with the results calculated by the WKB method and the CFM á la Leaver [6]. Therefore, the structure of this paper will be as following: In Section 2 we will review the AIM and the improved method of Ciftci et al. [11, 12] (also see [14]), along with a discussion of how the QNM boundary conditions are ensured. Applications to simple concrete examples, such as the harmonic oscillator and the Poschl-Teller potential, are also provided. In Section 3 the case of Schwarzschild (A)dS black holes will be discussed, developing the integer and half-spin equations. In Section 4 a review of the QNMs of the Reissner-Nordström black holes will be made, with several frequencies calculated in the AIM and compared with previous results. Section 5 will review the application of the AIM to Kerr black holes for spin fields. Section 6 will discuss the spin-zero QNMs for doubly rotating black holes. We then summarize and conclude in Section 7. 2. The Asymptotic Iteration Method 2.1. The Method To begin we will now review the idea behind the AIM, where we first consider the homogeneous linear second-order differential equation for the function : where and are functions in . In order to find a general solution to this equation, we rely on the symmetric structure of the right-hand side of (2.1) [11, 12]. 
If we differentiate (2.1) with respect to , we find that where Taking the second derivative of (2.1) we get where Iteratively, for the and the derivatives, , we have and thus bringing us to the crucial observation in the AIM that differentiating the pervious equation times with respect to leaves a symmetric form for the right-hand side: where For sufficiently large the asymptotic aspect of the “method” is introduced, that is where the QNMs are obtained from the “quantization condition”: which is equivalent to imposing a termination to the number of iterations [14]. From the ratio of the and the derivatives, we have From our asymptotic limit, this reduces to which yields where is the integration constant and the right-hand side of (2.8) and the definition of have been used. Substituting this into (2.6), we obtain the first-order differential equation: which leads to the general solution: The integration constants, and , can be determined by an appropriate choice of normalisation. Note that for the generation of exact solutions . 2.2. The Improved Method Ciftci et al. [11, 12] were among the first to note that an unappealing feature of the recursion relations in (2.8) is that at each iteration one must take the derivative of the and terms of the previous iteration. This can slow the numerical implementation of the AIM down considerably and also lead to problems with numerical precision. To circumvent these issues we developed an improved version of the AIM which bypasses the need to take derivatives at each step [13]. This greatly improves both the accuracy and speed of the method. We expand the and in a Taylor series around the point at which the AIM is performed, : where the and are the Taylor coefficients of and , respectively. Substituting these expressions into (2.8) leads to a set of recursion relations for the coefficients: In terms of these coefficients the “quantization condition’’ equation (2.10) can be reexpressed as and thus we have reduced the AIM into a set of recursion relations which no longer require derivative operators. Observing that the right-hand side of (2.17) involves terms of order at most , one can recurse these equations until only and terms remain (i.e., the coefficients of and only). However, for large numbers of iterations, due to the large number of terms, such expressions become impractical to compute. We avert this combinatorial problem by beginning at the stage and calculating the coefficients sequentially until the desired number of recursions is reached. Since the quantisation condition only requires the term, at each iteration we only need to determine coefficients with , where is the maximum number of iterations to be performed. The QNMs that we calculate in this paper will be determined using this improved AIM. 2.3. Two Simple Examples 2.3.1. The Harmonic Oscillator In order to understand the effectiveness of the AIM, it is appropriate to apply this method to a simple concrete problem: the harmonic oscillator potential in one dimension: When approaches infinity, the wave function must approach zero. Asymptotically the function decays like a Gaussian distribution, in which case we can write where is the new wave function. Substituting (2.20) into (2.19) then rearranging the equation and dividing by a common factor, one can obtain We recognise this as Hermite’s equation. For convenience we let , such that in our case and . We define Thus using (2.8) one can find that and the termination condition (2.10) can be written as . 
Hence must be a nonnegative integer, which means that and this is the exact spectrum for such a potential. Moreover, the wave function can also be derived in this method. We will point out that in this case the termination condition, , is dependent only on the eigenvalue for a given iteration number , and this is the reason why we can obtain an exact eigenvalue. However, for the black hole cases in subsequent sections, the termination condition depends also on , and therefore one can only obtain approximate eigenvalues by terminating the procedure after iterations. 2.3.2. The Pöschl-Teller Potential To conclude this section we will also demonstrate that the AIM can be applied to the case of QNMs, which have unbounded (scattering-like) potentials, by recalling that we can find QNMs for Scarf II (upside-down Pöschl-Teller-like) potentials [15]. This is based on observations made by one of the current authors [16] relating QNMs from quasiexactly solvable models. Indeed, bound-state Pöschl-Teller potentials have been used for QNM approximations previously by inverting black hole potentials [4]. However, the AIM does not require any inversion of the black hole potential as we will show. Starting with the potential term and the Schrödinger equation, we obtain As we will also see in the following sections, it is more convenient to transform our coordinates to a finite domain. Hence, we will use the transformation , which leads to where . The QNM boundary conditions in (1.1) can then be implemented as follows. As we will have . Hence our boundary condition is . Likewise, as we have and the boundary condition . As such we can take the boundary conditions into account by writing and therefore we have where Following the AIM procedure, that is, taking successively for , one can obtain exact eigenvalues: This exact QNM spectrum is the same as the one in [16] obtained through algebraic means. The reader might wonder about approximate results for cases where Pöschl-Teller approximations can be used, such as Schwarzschild and SdS backgrounds; for example, see [1, 4]. In fact when the black hole potential can be modeled by a Scarf-like potential, the AIM can be used to find the eigenvalues exactly [15] and hence the QNMs numerically. We demonstrate this in the next section. 3. Schwarzschild (A)dS Black Holes We will now begin the core focus of this paper, the study of black hole QNMs using the AIM. Recall that the perturbations of the Schwarzschild black holes are described by the Regge-Wheeler [17] and Zerilli [18] equations, and the perturbations of Kerr black holes are described by the Teukolsky equations [19]. The perturbation equations for Reissner-Nordström black holes were also derived by Zerilli [20] and by Moncrief [21–23]. Their radial perturbation equations all have a one-dimensional Schrödinger-like form with an effective potential. Therefore, we will commence in the coming subsections by describing the radial perturbation equations of Schwarzschild black holes first, where our perturbed metric will be , and where is spherically symmetric. As such it is natural to introduce a mode decomposition to . Typically we write where are the standard spherical harmonics. The function then solves the wave equation: where , defined by , are the so-called tortoise coordinates and is a master potential of the following form [24]: In this section with cosmological constant .
Here denotes the spin of the perturbation: scalar, electromagnetic, and gravitational (for half-integer spin see [25–27] and Section 5.2). 3.1. The Schwarzschild Asymptotically Flat Case To explain the AIM we will start with the simplest case of the radial component of a perturbation of the Schwarzschild metric outside the event horizon [18]. For an asymptotically flat Schwarzschild solution (), where from we have for the tortoise coordinate . Note that for the Schwarzschild background the maximum of this potential, in terms of , is given by [28] The choice of coordinates is somewhat arbitrary and in the next section (for SdS) we will see how an alternative choice leads to a simpler solution. Firstly, consider the change of variable: with . In terms of , our radial equation then becomes To accommodate the out-going wave boundary condition as in terms of (which is the limit ) and the regular singularity at the event horizon (), we define where the Coulomb power law is included in the asymptotic behaviour (cf. [6] (5)). The radial equation then takes the following form: where Note that primes of denote derivatives with respect to . Using these expressions we have tabulated several QNM frequencies and compared them to the WKB method of [28] and the CFM of [6] in Table 1. For completeness Table 1 also includes results from an approximate semianalytic third-order WKB method [28]. More accurate semianalytic results with better agreement with Leaver’s method can be obtained by extending the WKB method to 6th order [10] and indeed in Section 5 we use this to compare with the AIM for results where the CFM has not been tabulated. Table 1: QNMs to 4 decimal places for gravitational perturbations () where the fifth column is taken from [28]. Note that the imaginary part of the and result in [28] has been corrected to agree with [6]. [*] Note also that if the number of iterations in the AIM is increased, to say 50, then we find agreement with [6] accurate to 6 significant figures. It might also be worth mentioning that a different semianalytic perturbative approach has recently been discussed by Dolan and Ottewill [29], which has the added benefit of easily being extended to any order in a perturbative scheme. 3.2. The de Sitter Case We have presented the QNMs for Schwarzschild gravitational perturbations in Table 1; however, to further justify the use of this method, it is instructive to consider some more general cases. As such, we will now consider the Schwarzschild de Sitter (SdS) case, where we have the same WKB-like wave equation and potential as in the radial equation earlier, though now where is the cosmological constant. Interestingly the choice of coordinates we use here leads to a simpler AIM solution, because there is no Coulomb power law tail; however, in the limit we recover the Schwarzschild results. Note that although it is possible to find an expression for the maximum of the potential in the radial equation, for the SdS case, it is the solution of a cubic equation, which for brevity we refrain from presenting here. In our AIM code we use a numerical routine to find the root to make the code more general. In the SdS case it is more convenient to change coordinates to [1], which leads to the following master equation (cf. (3.3)): where we have defined It may be worth mentioning that for SdS we can express [1] in terms of the roots of , where is the event horizon and is the cosmological horizon (and is the surface gravity at each ).
This is useful for choosing the appropriate scaling behaviour for QNM boundary conditions. Based on the previous equation an appropriate choice for QNMs is to scale out the divergent behaviour at the cosmological horizon (note that this is opposite to the case presented in [1], where they define the QNMs as solutions with boundary conditions as , for time dependence): which implies in terms of . Furthermore, based on the scaling in (3.17), the correct QNM condition at the horizon implies where with , and is the smallest real solution of , implying . The differential equation then takes the standard AIM form: where Using these equations, we present in Table 2 results for SdS with . Table 2: QNMs to 6 significant figures for Schwarzschild de Sitter gravitational perturbations () for and modes. We only present results for the AIM method, because the results are identical to those of the CFM after a given number of iterations (in this case 50 iterations for both methods). The modes can be compared with the results in [30] for . Identical results were generated by the AIM and CFM, both after 50 iterations. Though results are presented for , and modes only, the AIM is robust enough to be applied to any other case where, like the case, agreement with other methods in more extreme parameter choices would only require further iterations. As far as we are aware only [30] (who used a semianalytic WKB approach) has presented tables for general spin fields for the SdS case. We have also compared our results to those in [30] for the cases and find identical results (to a given accuracy in the WKB method). It may be worth mentioning that a set of three-term recurrence relations was derived in [13] for the CFM, valid for electromagnetic and gravitational perturbations (), while for this reduces to a five-term recurrence relation. However, for the AIM we can treat the perturbations on an equal footing; see [13] for more details. Typically Gaussian elimination steps are required to reduce an n-term recurrence to a 3-term continued fraction; for example, for Reissner-Nordström see [31] and for higher-dimensional Schwarzschild backgrounds see [32] (for an application of the CFM to higher-dimensional asymptotic QNMs see [33]). However, all that is necessary in the AIM is to factor out the correct asymptotic behaviour at the horizon(s) and infinity (we showed this for higher-dimensional scalar spheroids in [34]). 3.3. The Spin-Zero Anti-de Sitter Case There are various approaches to finding QNMs for the SAdS case (an eloquent discussion is given in the appendix of [35], see also [36, 37]). One approach is that of Horowitz and Hubeny [38], which uses a series solution chosen to satisfy the SAdS QNM boundary conditions. This method can easily be applied to all perturbations (). The other approach is to use the Frobenius method of Leaver [6], but instead of developing a continued fraction the series must satisfy a boundary condition at infinity, such as a Dirichlet boundary condition [1]. The AIM does not seem easy to apply to metrics where there is an asymptotically anti-de Sitter background, because for general spin, , the potential at infinity is a constant and hence would include a combination of ingoing and outgoing waves, leading to a sinusoidal dependence [39]. However, for the scalar spin zero () case, the potential actually blows up at infinity and is effectively a bound state problem. In this case the AIM can easily be applied as we show in what follows.
Let us consider the scalar wave equation in SAdS spacetime, where , and is the AdS radius. The master equation takes the same form as for the gravitational case, except that the potential becomes Here for simplicity we have taken the AdS radius , the mass of the black hole , and the angular momentum number . Hence the horizon radius equals 1. Thus, with this choice we can compare with the data in Table 3.2 on page 37 of [40] (see Table 3). Table 3: Comparison of the first few QNMs to 6 significant figures for Schwarzschild anti-de Sitter scalar perturbations () for modes with . The second column corresponds to data [40] using the Horowitz and Hubeny (HH) method [38], while the third column is for the AIM using 70 iterations. [*] Note the mismatch for the real part of the mode in [40]; we have confirmed this using the Mathematica notebook provided in [24]. To implement the AIM we first look at the asymptotic behavior of . As , the potential goes to zero. In addition, For QNMs we choose the out-going (into the black hole) boundary condition. That is, On the other extreme of our space, , the potential goes to infinity. This is a crucial difference from the case of gravitational perturbations. In that case, the potential goes to a constant. However, in the scalar case, as , , and to implement the Dirichlet boundary condition, we take For the AIM one possible choice of variables is and we see that to accommodate the asymptotic behaviour of the wavefunction we should take Finally, after some work we find that the scalar perturbation equation is where and . Using the AIM we find the results presented in Table 3. 4. Reissner-Nordström Black Holes The procedure for obtaining the quasinormal frequencies of Reissner-Nordström black holes in four-dimensional spacetime is similar to that of our earlier cases. Starting with the Reissner-Nordström metric, where and is the charge of the black hole. If we consider perturbations exterior to the event horizon, the perturbation equations of the Reissner-Nordström (charged and nonrotating) geometry can be separated into two pairs of Schrödinger-like equations, which describe the even- and odd-parity oscillations, respectively [20–23]. They are given by where corresponds to even- and to odd-parity modes: for (), where and . Here is the frequency, the angular momentum parameter, and and the radii of the inner and outer (event) horizons of the black hole, respectively. Note that and at the Schwarzschild limit ; at the extremal limit . Here the tortoise coordinate is given by which ranges from at the event horizon to at spatial infinity. The QNMs of the Reissner-Nordström black holes are ordinarily accompanied by the emission of both electromagnetic and gravitational radiation, except at the Schwarzschild limit [31, 41]. Equation (4.2) corresponds to purely gravitational perturbations for the radial wave functions and purely electromagnetic perturbations for at the Schwarzschild limit. Chandrasekhar [42] has shown that the solutions for the even-parity oscillations and for the odd-parity oscillations have the following relationship: so one can just consider solutions for a specific parity, as in the Schwarzschild case, to understand the properties of the black hole. Since the form of the effective potential in the odd-parity equation is much simpler than in the even-parity equation, it is customary to compute the QNMs for the odd-parity modes.
Note that the mass of the Reissner-Nordström black hole has been scaled to , so its quasinormal frequencies are uniquely determined by the charge , the angular momentum , and the overtone number of the mode. The following procedure is similar to that in Section 3. First we change to the variable in (4.2) for the odd-parity mode. From (4.4) we have Substituting (4.9) into (4.2) for the odd-parity mode, we get Considering the QNM boundary conditions in the Reissner-Nordström case and incorporating this into the radial wave function , we have a form involving the asymptotic behaviour [31]: Differentiating (4.12) once and twice with respect to , we have where is defined by Substituting (4.13) into (4.10), we obtain For the same reason as in Section 3, here we change the variable to by the definition , which ranges from 0 at the event horizon to 1 at spatial infinity. Thus we have Substituting (4.16) into (4.15), and rewriting the equation in the AIM form, we obtain where The numerical results to four decimal places are presented in Tables 4, 5, and 6. They are compared with and from [31] and [41], respectively. The quasinormal frequencies appear as complex conjugate pairs in ; we list only the ones with . Note that we arrange as . In Table 6 the quasinormal frequencies obtained by the WKB method are not available. It is apparent that the quasinormal frequencies obtained by the AIM are very accurate except for in the extremal case in Tables 4 and 6. Table 4: Reissner-Nordström quasinormal frequency parameter values () for the fundamental () and two lowest overtones for and . The QNMs of and in Table 4 reduce to the purely gravitational QNMs in the Schwarzschild case at , while the QNMs of and in Table 5 reduce to the purely electromagnetic QNMs at . Some comments on the higher () overtones for the Reissner-Nordström black hole for the extremal limit () are perhaps necessary. In general, much like the CFM, the AIM begins to break down for larger overtones, requiring more iterations. However, near the extremal limit () the horizons become degenerate and the singularity structure of the corresponding differential (radial) equation changes [43] (the number of singular points is different in the nonextremal and the extremal cases), which causes the current implementation of the AIM (cf. (4.12)) to break down. Thus, we see in Tables 4–6 that some of the values have large errors when compared to the CFM. 5. Kerr Black Holes A rotating black hole carrying angular momentum is described by the Kerr metric (in Boyer-Lindquist coordinates) as with where is the Kerr rotation parameter with , being included as a general black hole mass. The horizons and are again the inner and the outer (event) horizons, respectively. Teukolsky [19] showed that the perturbation equations in the Kerr geometry are separable, where the separated equations for the angular wave function and the radial wave function are given by where we have the following function: In , is the spin weight, is the spin-weighted separation constant for the angular equation, and is another angular momentum parameter. For completeness the evaluation of the separation constant using the AIM is discussed in Appendix A. In order to use the AIM we need to solve for the angular solution in the radial equation. However, for nonzero the effective potential of the radial equation is in general complex. A straightforward application of the AIM does not give the correct answer. In fact a similar problem occurs in both numerical [44] and WKB [7–9] methods.
For this reason we will look at each of the spin cases () separately in the following subsections. 5.1. The Spin-Zero Case Because the AIM works better on a compact domain, we define a new variable , which ranges from 0 at the event horizon to 1 at spatial infinity. It is then necessary to incorporate the boundary conditions, expressed in the new compact domain, where is By making the change of coordinates and change of function, (5.7) takes the following form: where As aforementioned, we have defined where , and is again our rotation parameter. Equation (5.8) is now in the correct form to use the AIM for QNM frequency calculations. Note that the potential is where the angular separation constant is defined via . (Even though the radial and angular equations are coupled via the separation constant, , we are able to find excellent agreement with the CFM by starting from the Schwarzschild () result for our initial guess of the Kerr () QNM solution using FindRoot in Mathematica in our AIM code (at least for )). Presented in Tables 7 and 8 are the QNM frequencies for the scalar perturbations of the Kerr black hole with the two extreme (minimum and maximum) values of the angular momentum per unit mass, that is, and . was set to 0, while was given values of 0, 1, and 2 and varied accordingly. Table 7: The QNM frequencies for the Scalar Perturbations of the Kerr Black Hole, with , that is, the Schwarzschild limit (, ). Numerical data via the CFM were taken from [45, 46], where the AIM was set to run at iterations. Table 8: The QNM frequencies for the Scalar Perturbations of the Kerr Black Hole, with (, ). Numerical data via the CFM were taken from [45, 46], where the AIM was set to run at 15 iterations. Included in Table 7 are the numerically determined QNM frequencies published by Leaver in 1985 [6]. The percentages bracketed under each QNM frequency via the AIM are the percentage differences between the calculated value and the numerical value published by Leaver. With the exceptions of the QNM frequencies for , , and , , the AIM values correspond to the CFM up to four decimal places and even those anomalies differ by less than , proving, at least in this case, that the AIM is a precise semianalytical technique. In Table 8, all three values were calculated in this work, even though published values are available for the third-order WKB(J), at least graphically, where numerical values using the CFM were taken from [45, 46]. Since the WKB(J) is a generally accepted semianalytical technique for QNM frequency calculations, the percentages below the AIM values are the differences from the sixth-order WKB(J) values. Only in the case of and does the AIM QNM frequency significantly differ from the sixth-order WKB(J) value. Note that in an upcoming work, further values will be presented for values of , , and , with the same variations of and with [47]. 5.2. The Spin-Half Case For the spin-half case, before the AIM can be used we need an appropriate form of the Dirac equation in this spacetime background, derived using the basis set up by four null vectors which form the basis of the Newman-Penrose formalism; for further details see [47]. That is, in the Kerr background we adopt the following vectors as the null tetrad: where and are nothing but complex conjugates of and , respectively. It is clear that the basis vectors basically become derivative operators when these are applied as tangent vectors to the function . Therefore we can write where and with .
The spin coefficients can be written as a combination of basis vectors in the Newman-Penrose formalism which are now expressed in terms of the elements of different components of the Kerr metric. So by combining these different components of basis vectors in a suitable manner we get the spin coefficients as Using the aforementioned definitions, and by choosing , , , and (where and are a pair of spinors), the Dirac equation reduces to We separate the Dirac equation into radial and angular parts by choosing Replacing these and and using as the separation constant, we get where is redefined as . Equations (5.19) and (5.20) are the angular and radial Dirac equations, respectively, in a coupled form with the separation constant [42]. Decoupling (5.19) gives the eigenvalue/angular equation for spin-half particles as and satisfies the adjoint equation (obtained by replacing by ). (Note that this angular equation is that given by (5.3) for and hence, using the method in Appendix A, we could solve this numerically.) Decoupling (5.20) then gives the radial equation for spin-half particles as and satisfies the complex-conjugate equation. Furthermore, unlike the case of a scalar particle, a spin-half particle is not capable of extracting energy from a rotating black hole; that is, there is no Penrose process (superradiance) equivalent scenario [47]. Returning now to the AIM, recall that it will work better on a compact domain, where we define a new variable , which ranges from 0 at the event horizon to 1 at spatial infinity. It is then necessary to incorporate the boundary conditions, expressed in the new compact domain, where we have defined satisfies the WKB(J)-like equation: with potential and (for more details see [47]). By making the change of coordinates and change of functions, our equation takes the following form: where as in Section 5.1 we have Presented in Tables 9 and 10 are the QNM frequencies for the spin-half perturbations of the Kerr black hole with the two extreme values of the angular momentum per unit mass, that is, and ; was set to 0, while was given values of 0, 1, and 2 and varied accordingly. Table 9: The QNM frequencies for the spin-half perturbations of the Kerr black hole, with , that is, the Schwarzschild limit (). Numerical data via the CFM were taken from [48], where the AIM was set to run at 15 iterations. Table 10: The QNM frequencies for the spin-half perturbations of the Kerr black hole, with (). Numerical data via the CFM were taken from [48], where the AIM was set to run at 15 iterations. Included in Table 9 are the numerically determined QNM frequencies published by Jing and Pan [48]. Even though the WKB method has been used to calculate the Schwarzschild limit QNM frequencies before [27], the sixth-order WKB values and AIM values are novel to this work and will be explored more fully in [47]. The percentages bracketed under each QNM frequency are the percentage differences between the calculated values and the numerical values published by Jing and Pan [48]. As expected, since there are additional correction terms, the sixth-order WKB QNM frequencies are closer to the numerical values than the third-order WKB values. While the AIM does not prove as accurate in its calculation of the spin-half QNM frequencies as it did with the scalar values (both for 15 iterations), none of the differences between the AIM values and the numerical values exceed , except for when and (better accuracy can be achieved by increasing the number of iterations).
Similarly, in Table 10 are the numerically determined QNM frequencies published by Jing and Pan [48]. Both the third- and sixth-order WKB values along with the AIM values are novel to this work and will also be explored more fully in [47]. The percentages bracketed under each QNM frequency are the percentage differences between the calculated value and the numerical value published by Jing and Pan, at least for and . For , the AIM values are compared to the sixth-order WKB values. As already noted, since there are additional correction terms, the sixth-order WKB QNM frequencies are closer to the numerical values than the third-order WKB values. Again the AIM does not appear to be as precise in calculating the QNM frequencies for spin-half perturbations of the Kerr black hole as it was for the scalar perturbations (at least for 15 iterations). As we mentioned, additional tables and plots of these Kerr processes will constitute a future work [47]. 5.3. The Spin-Two Case As we have mentioned earlier, the radial equation for nonzero spin is in general complex. In fact, it does not even reduce to the Regge-Wheeler and Zerilli equations when the rotation parameter is . Detweiler [44] has found a way to overcome this problem, where he defined a new function If the functions and are required to satisfy then it can be shown that the radial equation in (5.4) becomes where As Detweiler has indicated, it is possible to choose the functions and so that the resulting effective potential is real and has the following form: where When the Kerr rotation parameter approaches zero, the potential in (5.33) coincides with the Regge-Wheeler potential for negative and coincides with the Zerilli potential for positive . Here we choose to be negative, where the choice of the sign in (5.35) is determined by the sign of [7–9, 44]. The QNM boundary conditions for are where Hence, we write Substituting this into (5.31), we have where As we did earlier, we define the variable which has a compact domain . Equation (5.40) can then be written in the AIM form: where The results for the gravitational (spin-two) case are presented in Tables 11 and 12. In general the error in the separation constant is smaller than that of the quasinormal frequencies. As for the quasinormal frequencies, the error in the Kerr case is larger than in either the Schwarzschild or the Reissner-Nordström cases; this is due to our consideration of the angular and the radial equations simultaneously. The number of iterations that can be performed in the code is relatively small, much as the number of continued fractions in the CFM is typically smaller, due to the coupling between the radial and angular equations. Table 11: Spin-2 angular separation constants and Kerr gravitational quasinormal frequencies for the fundamental mode corresponding to and compared with the CFM [6] (). 6. Doubly Rotating Kerr (A)dS Black Holes Rotating black holes in higher dimensions were first discussed in the seminal paper by Myers and Perry [49]. One of the unexpected results to come from this work was that some families of solutions were shown to have event horizons for arbitrarily large values of their rotation parameters. The stability of such black holes is certainly in question [50, 51], with numerical evidence recently provided by Shibata and Yoshino [52, 53]. Another new feature of the Myers-Perry (MP) solutions is that they in general have spin parameters, making them more complex than the four-dimensional Kerr solution.
The first asymptotically nonflat five-dimensional MP metric was given in [54]. Subsequent generalizations to arbitrary dimensions were done in [55], and finally the most general Kerr-(A)dS-NUT metric was found by Chen et al. [56]. In this section we review how the AIM can be used to solve the two-rotation scalar perturbation equations (for more details on the metric and resulting separation see [57]). The scalar field master equations are found to be [57] where the radial and angular frequencies are defined by and . In the aforementioned is the curvature of the spacetime satisfying (e.g., see [56]), and are the two rotation parameters, and for later reference we define . Doubly rotating black holes are more complicated than simply rotating black holes (cf. [58]), because two rotation planes lead to two coupled spheroids which are also needed for the solution of the radial equation. 6.1. Radial Quasinormal Modes For simplicity we will consider the flat case, setting , which leads to easier QNM boundary conditions (cf. Schwarzschild to Schwarzschild-dS). These satisfy the boundary condition that there are only ingoing waves at the black hole horizon and outgoing waves at asymptotic infinity. As we have shown with the previous examples, it is easier to work on a compact domain and define the variable , so that infinity is mapped to zero and the outer horizon stays at . The domain of will therefore be . Thus the QNM boundary condition is translated into the statement that the waves move leftward at and rightward at . We again choose the AIM point in the middle of the domain, that is, at . In terms of the radial equation in (6.1) becomes
Eigenvalue, eigenvector and eigenspace Fig. 1. In this shear mapping of the Mona Lisa, the picture was deformed in such a way that its central vertical axis (red vector) was not modified, but the diagonal vector (blue) has changed direction. Hence the red vector is an eigenvector of the transformation and the blue vector is not. Since the red vector was neither stretched nor compressed, its eigenvalue is 1. All vectors with the same vertical direction - i.e., parallel to this vector - are also eigenvectors, with the same eigenvalue. Together with the zero-vector, they form the eigenspace for this eigenvalue. In mathematics, a vector may be thought of as an arrow. It has a length, called its magnitude, and it points in some particular direction. A linear transformation may be considered to operate on a vector to change it, usually changing both its magnitude and its direction. An eigenvector of a given linear transformation is a vector which is multiplied by a constant called the eigenvalue during that transformation. The direction of the eigenvector is either unchanged by that transformation (for positive eigenvalues) or reversed (for negative eigenvalues). For example, an eigenvalue of +2 means that the eigenvector is doubled in length and points in the same direction. An eigenvalue of +1 means that the eigenvector is unchanged, while an eigenvalue of −1 means that the eigenvector is reversed in direction. An eigenspace of a given transformation is the span of the eigenvectors of that transformation with the same eigenvalue, together with the zero vector (which has no direction). An eigenspace is an example of a subspace of a vector space. In linear algebra, every linear transformation between finite-dimensional vector spaces can be given by a matrix, which is a rectangular array of numbers arranged in rows and columns. Standard methods for finding eigenvalues, eigenvectors, and eigenspaces of a given matrix are discussed below. These concepts play a major role in several branches of both pure and applied mathematics — appearing prominently in linear algebra, functional analysis, and to a lesser extent in nonlinear mathematics. Many kinds of mathematical objects can be treated as vectors: functions, harmonic modes, quantum states, and frequencies, for example. In these cases, the concept of direction loses its ordinary meaning, and is given an abstract definition. Even so, if this abstract direction is unchanged by a given linear transformation, the prefix "eigen" is used, as in eigenfunction, eigenmode, eigenstate, and eigenfrequency. Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations. Euler had also studied the rotational motion of a rigid body and discovered the importance of the principal axes.
As Lagrange realized, the principal axes are the eigenvectors of the inertia matrix.[1] In the early 19th century, Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions.[2] Cauchy also coined the term racine caractéristique (characteristic root) for what is now called eigenvalue; his term survives in characteristic equation.[3] Fourier used the work of Laplace and Lagrange to solve the heat equation by separation of variables in his famous 1822 book Théorie analytique de la chaleur.[4] Sturm developed Fourier's ideas further and he brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that symmetric matrices have real eigenvalues.[2] This was extended by Hermite in 1855 to what are now called Hermitian matrices.[3] Around the same time, Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle,[2] and Clebsch found the corresponding result for skew-symmetric matrices.[3] Finally, Weierstrass clarified an important aspect in the stability theory started by Laplace by realizing that defective matrices can cause instability.[2] In the meantime, Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm-Liouville theory.[5] Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.[6] At the start of the 20th century, Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices.[7] He was the first to use the German word eigen to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by Helmholtz. "Eigen" can be translated as "own", "peculiar to", "characteristic" or "individual"—emphasizing how important eigenvalues are to defining the unique nature of a specific transformation. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today.[8] The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by Francis and Kublanovskaya in 1961.[9] Definitions: the eigenvalue equation See also: Eigenplane Linear transformations of a vector space, such as rotation, reflection, stretching, compression, shear or any combination of these, may be visualized by the effect they produce on vectors. In other words, they are vector functions. More formally, in a vector space L a vector function A is defined if for each vector x of L there corresponds a unique vector y = A(x) of L. For the sake of brevity, the parentheses around the vector on which the transformation is acting are often omitted. A vector function A is linear if it has the following two properties: additivity, A(\mathbf{x}+\mathbf{y})=A(\mathbf{x})+A(\mathbf{y}), and homogeneity, A(\alpha \mathbf{x})=\alpha A(\mathbf{x}), where x and y are any two vectors of the vector space L and α is any real number. Such a function is variously called a linear transformation, linear operator, or linear endomorphism on the space L. Given a linear transformation A, a non-zero vector x is defined to be an eigenvector of the transformation if it satisfies the eigenvalue equation A \mathbf{x} = \lambda \mathbf{x} for some scalar λ.
In this situation, the scalar λ is called an eigenvalue of A corresponding to the eigenvector x. The key equation in this definition is the eigenvalue equation, Ax = λx. Most vectors x will not satisfy such an equation. A typical vector x changes direction when acted on by A, so that Ax is not a multiple of x. This means that only certain special vectors x are eigenvectors, and only certain special numbers λ are eigenvalues. Of course, if A is a multiple of the identity matrix, then no vector changes direction, and all non-zero vectors are eigenvectors. But in the usual case, eigenvectors are few and far between. They are the "normal modes" of the system, and they act independently.[10] The requirement that the eigenvector be non-zero is imposed because the equation A0 = λ0 holds for every A and every λ. Since the equation is always trivially true, it is not an interesting case. In contrast, an eigenvalue can be zero in a nontrivial way. An eigenvalue can be, and usually is, also a complex number. In the definition given above, eigenvectors and eigenvalues do not occur independently. Instead, each eigenvector is associated with a specific eigenvalue. For this reason, an eigenvector x and a corresponding eigenvalue λ are often referred to as an eigenpair. One eigenvalue can be associated with several or even with an infinite number of eigenvectors. But conversely, if an eigenvector is given, the associated eigenvalue for this eigenvector is unique. Indeed, from the equality Ax = λx = λ'x and from x ≠ 0 it follows that λ = λ'.[11] Geometrically (Fig. 2), the eigenvalue equation means that under the transformation A eigenvectors experience only changes in magnitude and sign — the direction of Ax is the same as that of x. This type of linear transformation is defined as homothety (dilatation[12], similarity transformation). The eigenvalue λ is simply the amount of "stretch" or "shrink" to which a vector is subjected when transformed by A. If λ = 1, the vector remains unchanged (unaffected by the transformation). A transformation I under which a vector x remains unchanged, Ix = x, is defined as the identity transformation. If λ = –1, the vector flips to the opposite direction (rotates by 180°); this is defined as reflection. If x is an eigenvector of the linear transformation A with eigenvalue λ, then any vector y = αx is also an eigenvector of A with the same eigenvalue. From the homogeneity of the transformation A it follows that Ay = α(Ax) = α(λx) = λ(αx) = λy. Similarly, using the additivity property of the linear transformation, it can be shown that any linear combination of eigenvectors with eigenvalue λ has the same eigenvalue λ.[13] Therefore, any non-zero vector in the line through x and the zero vector is an eigenvector with the same eigenvalue as x. Together with the zero vector, those eigenvectors form a subspace of the vector space called an eigenspace. The eigenvectors corresponding to different eigenvalues are linearly independent,[14] meaning, in particular, that in an n-dimensional space the linear transformation A cannot have more than n eigenvectors with different eigenvalues.[15] The vectors of the eigenspace generate a linear subspace which is invariant (unchanged) under the transformation A.[16] If a basis is defined in vector space Ln, all vectors can be expressed in terms of components. Polar vectors can be represented as one-column matrices with n rows where n is the space dimensionality.
Linear transformations can be represented with square matrices; to each linear transformation A of Ln corresponds a square matrix of order n. Conversely, to each square matrix of order n corresponds a linear transformation of Ln in a given basis. Because of the additivity and homogeneity of the linear transformation and the eigenvalue equation (which is also a linear transformation — homothety), those vector functions can be expressed in matrix form. Thus, in the two-dimensional vector space L2 fitted with a standard basis, the eigenvector equation for a linear transformation A can be written in the following matrix representation: \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \lambda \begin{bmatrix} x \\ y \end{bmatrix}, where the juxtaposition of matrices means matrix multiplication. This is equivalent to a set of n linear equations, where n is the number of basis vectors in the basis set. In these equations both the eigenvalue λ and the components of x are unknown variables. The eigenvectors of A as defined above are also called right eigenvectors because they are column vectors that stand on the right side of the matrix A in the eigenvalue equation. If there exists a transposed matrix AT that satisfies the eigenvalue equation, that is, if ATx = λx, then λxT = (λx)T = (ATx)T = xTA, or xTA = λxT. The last equation is similar to the eigenvalue equation but instead of the column vector x it contains its transposed vector, the row vector xT, which stands on the left side of the matrix A. The eigenvectors that satisfy the eigenvalue equation xTA = λxT are called left eigenvectors. They are row vectors.[17] In many common applications, only right eigenvectors need to be considered. Hence the unqualified term "eigenvector" can be understood to refer to a right eigenvector. Eigenvalue equations, written in terms of right or left eigenvectors (Ax = λx and xTA = λxT), have the same eigenvalue λ.[18] An eigenvector is defined to be a principal or dominant eigenvector if it corresponds to the eigenvalue of largest magnitude (for real numbers, largest absolute value). Repeated application of a linear transformation to an arbitrary vector results in a vector that becomes ever closer to being proportional (collinear) to the principal eigenvector.[18] The applicability of the eigenvalue equation to general matrix theory extends the use of eigenvectors and eigenvalues to all matrices, and thus greatly extends the scope of use of these mathematical constructs not only to transformations in linear vector spaces but to all fields of science that use matrices: systems of linear equations, optimization, vector and tensor calculus, all fields of physics that use matrix quantities, particularly quantum physics, relativity, and electrodynamics, as well as many engineering applications. Characteristic equation Main article: Characteristic equation Main article: Characteristic polynomial The determination of the eigenvalues and eigenvectors is important in virtually all areas of physics and many engineering problems, such as stress calculations, stability analysis, oscillations of vibrating systems, etc. It is equivalent to matrix diagonalization, and is the first step of orthogonalization, finding of invariants, optimization (minimization or maximization), analysis of linear systems, and many other common applications.
The usual method of finding all eigenvectors and eigenvalues of a system is first to get rid of the unknown components of the eigenvectors, then find the eigenvalues, plug those back one by one into the eigenvalue equation in matrix form and solve that as a system of linear equations to find the components of the eigenvectors. From the identity transformation Ix = x, where I is the identity matrix, x in the eigenvalue equation can be replaced by Ix to give: A \mathbf{x} = \lambda I \mathbf{x} The identity matrix is needed to keep matrices, vectors, and scalars straight; the equation (A − λ) x = 0 is shorter, but mixed up since it does not differentiate between matrix, scalar, and vector.[19] The expression in the right hand side is transferred to the left hand side with a negative sign, leaving 0 on the right hand side: A \mathbf{x} - \lambda I \mathbf{x} = 0 The eigenvector x is factored out: (A - \lambda I) \mathbf{x} = 0 This can be viewed as a linear system of equations in which the coefficient matrix is the expression in the parentheses, the matrix of the unknowns is x, and the right hand side matrix is zero. According to Cramer's rule, this system of equations has non-trivial solutions (not all zeros, or not any number) if and only if its determinant vanishes, so the solutions of the equation are given by: \det(A - \lambda I) = 0. This equation is defined as the characteristic equation (less often, secular equation) of A, and the left-hand side is defined as the characteristic polynomial. The eigenvector x or its components are not present in the characteristic equation, so at this stage they are dispensed with, and the only unknowns that remain to be calculated are the eigenvalues (the components of matrix A are given, i.e., known beforehand). For a vector space L2, the transformation A is a 2 × 2 square matrix, and the characteristic equation can be written in the following form: \begin{vmatrix} a_{11} - \lambda & a_{12}\\a_{21} & a_{22} - \lambda\end{vmatrix} = 0 Expansion of the determinant in the left hand side results in a characteristic polynomial which is a monic (its leading coefficient is 1) polynomial of the second degree, and the characteristic equation is the quadratic equation \lambda^2 - \lambda (a_{11} + a_{22}) + (a_{11} a_{22} - a_{12} a_{21}) = 0, \, which has the following solutions (roots): \lambda_{1,2} = \frac{1}{2} \left [(a_{11} + a_{22}) \pm \sqrt{4a_{12} a_{21} + (a_{11} - a_{22})^2} \right ]. For real matrices, the coefficients of the characteristic polynomial are all real. The number and type of roots depends on the value of the discriminant, Δ. For cases Δ = 0, Δ > 0, or Δ < 0, respectively, the roots are one real, two real, or two complex. If the roots are complex, they are also complex conjugates of each other. When the number of roots is less than the degree of the characteristic polynomial (the latter is also the order of the matrix, and the number of dimensions of the vector space) the equation has a multiple root. In the case of a quadratic equation with one root, this root is a double root, or a root with multiplicity 2. A root with a multiplicity of 1 is a simple root. A quadratic equation with two real or complex roots has only simple roots. In general, the algebraic multiplicity of an eigenvalue is defined as the multiplicity of the corresponding root of the characteristic polynomial. The spectrum of a transformation on a finite dimensional vector space is defined as the set of all its eigenvalues.
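As a brief worked illustration of this formula (using an example matrix of our own choosing), consider the symmetric matrix \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}. Here a_{11} = a_{22} = 2 and a_{12} = a_{21} = 1, so \lambda_{1,2} = \frac{1}{2} \left [ 4 \pm \sqrt{4 \cdot 1 \cdot 1 + (2-2)^2} \right ] = 2 \pm 1, giving two simple real roots \lambda_1 = 3 and \lambda_2 = 1; the spectrum of this matrix is therefore the set {1, 3}.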
In the infinite-dimensional case, the concept of spectrum is more subtle and depends on the topology of the vector space. The general formula for the characteristic polynomial of an n-square matrix is p(\lambda) = \sum_{k=0}^n (-1)^k S_k \lambda^{n-k}, where S0 = 1, S1 = tr(A), the trace of the transformation matrix A, and Sk with k > 1 are the sums of the principal minors of order k.[20] The fact that eigenvalues are roots of an nth-order equation shows that a linear transformation of an n-dimensional linear space has at most n different eigenvalues.[21] According to the fundamental theorem of algebra, in a complex linear space, the characteristic polynomial has at least one zero. Consequently, every linear transformation of a complex linear space has at least one eigenvalue.[22][23] For real linear spaces, if the dimension is an odd number, the linear transformation has at least one eigenvalue; if the dimension is an even number, the number of eigenvalues depends on the determinant of the transformation matrix: if the determinant is negative, there exists at least one positive and one negative eigenvalue, if the determinant is positive, nothing can be said about the existence of eigenvalues.[24] The complexity of the problem for finding roots/eigenvalues of the characteristic polynomial increases rapidly with the degree of the polynomial (the dimension of the vector space), n. Thus, for n = 3, eigenvalues are roots of the cubic equation, for n = 4 — roots of the quartic equation. For n > 4 there is no general algebraic solution and one has to resort to root-finding algorithms, such as Newton's method (Horner's method), to find numerical approximations of eigenvalues. For large symmetric sparse matrices, the Lanczos algorithm is used to compute eigenvalues and eigenvectors. In order to find the eigenvectors, the eigenvalues thus found as roots of the characteristic equation are plugged back, one at a time, into the eigenvalue equation written in matrix form (illustrated for the simplest case of a two-dimensional vector space L2): \left (\begin{bmatrix} a_{11} & a_{12}\\a_{21} & a_{22}\end{bmatrix} - \lambda \begin{bmatrix} 1 & 0\\0 & 1\end{bmatrix} \right ) \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} a_{11} - \lambda & a_{12}\\a_{21} & a_{22} - \lambda \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, where λ is one of the eigenvalues found as a root of the characteristic equation. This matrix equation is equivalent to a system of two linear equations: \left ( a_{11} - \lambda \right ) x + a_{12} y = 0 \\ a_{21} x + \left ( a_{22} - \lambda \right ) y = 0 The equations are solved for x and y by the usual algebraic or matrix methods. Often, it is possible to divide both sides of the equations by one or more of the coefficients, which makes some of the coefficients in front of the unknowns equal to 1. This is called normalization of the vectors, and corresponds to choosing one of the eigenvectors (the normalized eigenvector) as a representative of all vectors in the eigenspace corresponding to the respective eigenvalue. The x and y thus found are the components of the eigenvector in the coordinate system used (most often Cartesian, or polar).
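The same procedure is easy to carry out numerically. The following short Python sketch (the example matrix, the variable names, and the use of NumPy are our own choices, not part of the text above) finds the eigenvalues of a 2 × 2 matrix from its characteristic polynomial, plugs each one back into the eigenvalue equation to obtain a normalized eigenvector, and cross-checks the result against a library routine:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # example matrix; any real 2x2 matrix can be used

# Characteristic polynomial: lambda^2 - tr(A) lambda + det(A) = 0
tr, det = np.trace(A), np.linalg.det(A)
disc = tr**2 - 4.0 * det
eigenvalues = [(tr + np.sqrt(disc)) / 2.0, (tr - np.sqrt(disc)) / 2.0]

for lam in eigenvalues:
    # Solve (A - lambda I) x = 0: the eigenvector spans the null space.
    M = A - lam * np.eye(2)
    # For a rank-1 2x2 matrix, (-M[0,1], M[0,0]) is a null vector (fall back to row 2 if row 1 is zero).
    x = np.array([-M[0, 1], M[0, 0]])
    if np.allclose(x, 0.0):
        x = np.array([-M[1, 1], M[1, 0]])
    x = x / np.linalg.norm(x)        # normalization of the eigenvector
    print(f"lambda = {lam:.4f}, eigenvector = {x}, residual A x - lambda x = {A @ x - lam * x}")

# Cross-check with the library routine.
print(np.linalg.eig(A))

For the matrix chosen here the printed eigenpairs are λ = 3 with normalized eigenvector proportional to (1, 1) and λ = 1 with eigenvector proportional to (−1, 1), and the residual A x − λ x vanishes to machine precision.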
Using the Cayley-Hamilton theorem, which states that every square matrix satisfies its own characteristic equation, it can be shown that (most generally, in the complex space) there exists at least one non-zero vector that satisfies the eigenvalue equation for that matrix.[25] As it was said in the Definitions section, to each eigenvalue there corresponds an infinite number of collinear (linearly dependent) eigenvectors that form the eigenspace for this eigenvalue. On the other hand, the dimension of the eigenspace is equal to the number of the linearly independent eigenvectors that it contains. The geometric multiplicity of an eigenvalue is defined as the dimension of the associated eigenspace. A multiple eigenvalue may give rise to a single eigenvector so that its algebraic multiplicity may be different from the geometric multiplicity.[26] However, as already stated, different eigenvalues are paired with linearly independent eigenvectors.[14] From the aforementioned, it follows that the geometric multiplicity cannot be greater than the algebraic multiplicity.[27] For instance, an eigenvector of a rotation in three dimensions is a vector along the axis about which the rotation is performed. The corresponding eigenvalue is 1 and the corresponding eigenspace contains all the vectors along the axis. As this is a one-dimensional space, its geometric multiplicity is one. This is the only eigenvalue of the spectrum (of this rotation) that is a real number. The examples that follow are for the simplest case of two-dimensional vector space L2 but they can easily be applied in the same manner to spaces of higher dimensions. Homothety, identity, point reflection, and null transformation Fig. 3. Homothety in two dimensions. As a one-dimensional vector space L1, consider a rubber string tied to an unmoving support at one end, such as that on a child's sling. Pulling the string away from the point of attachment stretches it and elongates it by some scaling factor λ which is a real number. Each vector on the string is stretched equally, with the same scaling factor λ, and although elongated it preserves its original direction. This type of transformation is called homothety (similarity transformation). For a two-dimensional vector space L2, consider a rubber sheet stretched equally in all directions such as a small area of the surface of an inflating balloon (Fig. 3). All vectors originating at a fixed point on the balloon surface are stretched equally with the same scaling factor λ. The homothety transformation in two dimensions is described by a 2 × 2 square matrix, acting on an arbitrary vector in the plane of the stretching/shrinking surface. After doing the matrix multiplication, one obtains: A \mathbf{x} = \begin{bmatrix}\lambda & 0\\0 & \lambda\end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix}\lambda . x + 0 . y \\0 . x + \lambda . y\end{bmatrix} = \lambda \begin{bmatrix} x \\ y \end{bmatrix} = \lambda \mathbf{x}, which, expressed in words, means that the transformation is equivalent to multiplying the length of the vector by λ while preserving its original direction. The equation thus obtained is exactly the eigenvalue equation. Since the vector taken was arbitrary, in homothety any vector in the vector space undergoes the eigenvalue equation, i.e., any vector lying on the balloon surface can be an eigenvector.
Whether the transformation is stretching (elongation, extension, inflation), or shrinking (compression, deflation) depends on the scaling factor: if λ > 1, it is stretching, if λ < 1, it is shrinking. Several other transformations can be considered special types of homothety with some fixed, constant value of λ: in identity which leaves vectors unchanged, λ = 1; in reflection about a point which preserves length and direction of vectors but changes their orientation to the opposite one, λ = −1; and in null transformation which transforms each vector to the zero vector, λ = 0. The null transformation does not give rise to an eigenvector since the zero vector cannot be an eigenvector, but it has an eigenspace, since an eigenspace contains the zero vector by definition. Unequal scaling For a slightly more complicated example, consider a sheet that is stretched unequally in two perpendicular directions along the coordinate axes, or, similarly, stretched in one direction, and shrunk in the other direction. In this case, there are two different scaling factors: k1 for the scaling in direction x, and k2 for the scaling in direction y. The transformation matrix is \begin{bmatrix}k_1 & 0\\0 & k_2\end{bmatrix}, and the characteristic equation is λ2 − λ (k1 + k2) + k1k2 = 0. The eigenvalues, obtained as roots of this equation, are λ1 = k1 and λ2 = k2, which means, as expected, that the two eigenvalues are the scaling factors in the two directions. Plugging k1 back in the eigenvalue equation gives one of the eigenvectors: \begin{bmatrix}k_1 - k_1 & 0\\0 & k_2 - k_1\end{bmatrix} \begin{bmatrix} x \\ y\end{bmatrix} = \begin{bmatrix} \left ( k_1 - k_1 \right ) x \\ \left ( k_2 - k_1 \right ) y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, which reduces to \left ( k_2 - k_1 \right ) y = 0. Dividing the last equation by k2 − k1, one obtains y = 0 which represents the x axis. A vector with length 1 taken along this axis represents the normalized eigenvector corresponding to the eigenvalue λ1. The eigenvector corresponding to λ2, which is a unit vector along the y axis, is found in a similar way. In this case, both eigenvalues are simple (with algebraic and geometric multiplicities equal to 1). Depending on the values of λ1 and λ2, there are several notable special cases. In particular, if λ1 > 1, and λ2 = 1, the transformation is a stretch in the direction of axis x. If λ2 = 0, and λ1 = 1, the transformation is a projection of the surface L2 onto the axis x because all vectors in the direction of y become zero vectors. Let the rubber sheet be stretched along the x axis (k1 > 1) and simultaneously shrunk along the y axis (k2 < 1). Then λ1 = k1 will be the principal eigenvalue. Repeatedly applying this stretching/shrinking transformation to the rubber sheet will make the latter more and more similar to a rubber string. Any vector on the surface of the rubber sheet will be oriented closer and closer to the direction of the x axis (the direction of stretching), that is, it will become collinear with the principal eigenvector. Mona Lisa For the example shown in Fig. 1, the matrix that would produce a shear transformation similar to this would be A=\begin{bmatrix}1 & 0\\ -\frac{1}{2} & 1\end{bmatrix}. The set of eigenvectors \mathbf{x} for A is defined as those vectors which, when multiplied by A, result in a simple scaling \lambda of \mathbf{x}. Thus, A\mathbf{x} = \lambda\mathbf{x}.
If we restrict ourselves to real eigenvalues, the only effect of the matrix on the eigenvectors will be to change their length, and possibly reverse their direction. So multiplying the right hand side by the identity matrix I, we have A\mathbf{x} = (\lambda I)\mathbf{x}, and therefore (A-\lambda I)\mathbf{x}=0. In order for this equation to have non-trivial solutions, we require the determinant \det(A - \lambda I), which is called the characteristic polynomial of the matrix A, to be zero. In our example we can calculate the determinant as \det\!\left(\begin{bmatrix}1 & 0\\ -\frac{1}{2} & 1\end{bmatrix} - \lambda\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} \right)=(1-\lambda)^2, and now we have obtained the characteristic polynomial (1-\lambda)^2 of the matrix A. There is in this case only one distinct solution of the equation (1-\lambda)^2 = 0, \lambda=1. This is the eigenvalue of the matrix A. As in the study of roots of polynomials, it is convenient to say that this eigenvalue has multiplicity 2. Having found an eigenvalue \lambda=1, we can solve for the space of eigenvectors by finding the nullspace of A-(1)I. In other words, by solving for vectors \mathbf{x} which are solutions of \begin{bmatrix}1-\lambda & 0\\ -\frac{1}{2} & 1-\lambda \end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}=0 Substituting our obtained eigenvalue \lambda=1, \begin{bmatrix}0 & 0\\ -\frac{1}{2} & 0 \end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}=0 Solving this new matrix equation, we find that vectors in the nullspace have the form \mathbf{x} = \begin{bmatrix}0\\ c\end{bmatrix} where c is an arbitrary constant. All vectors of this form, i.e., pointing straight up or down, are eigenvectors of the matrix A. The effect of applying the matrix A to these vectors is equivalent to multiplying them by their corresponding eigenvalue, in this case 1. In general, 2-by-2 matrices will have two distinct eigenvalues, and thus two distinct eigenvectors. Whereas most vectors will have both their lengths and directions changed by the matrix, eigenvectors will only have their lengths changed, and will not change their direction, except perhaps to flip through the origin in the case when the eigenvalue is a negative number. Also, it is usually the case that the eigenvalue will be something other than 1, and so eigenvectors will be stretched, squashed and/or flipped through the origin by the matrix. Other examples As the Earth rotates, every arrow pointing outward from the center of the Earth also rotates, except those arrows which are parallel to the axis of rotation. Consider the transformation of the Earth after one hour of rotation: An arrow from the center of the Earth to the geographic South Pole would be an eigenvector of this transformation, but an arrow from the center of the Earth to anywhere on the equator would not be an eigenvector. Since the arrow pointing at the pole is not stretched by the rotation of the Earth, its eigenvalue is 1. Fig. 2. A standing wave in a rope fixed at its boundaries is an example of an eigenvector, or more precisely, an eigenfunction of the transformation giving the acceleration. As time passes, the standing wave is scaled by a sinusoidal oscillation whose frequency is determined by the eigenvalue, but its overall shape is not modified. However, three-dimensional geometric space is not the only vector space. For example, consider a stressed rope fixed at both ends, like the vibrating strings of a string instrument (Fig. 2).
Other examples

As the Earth rotates, every arrow pointing outward from the center of the Earth also rotates, except those arrows which are parallel to the axis of rotation. Consider the transformation of the Earth after one hour of rotation: an arrow from the center of the Earth to the Geographic South Pole would be an eigenvector of this transformation, but an arrow from the center of the Earth to anywhere on the equator would not be an eigenvector. Since the arrow pointing at the pole is not stretched by the rotation of the Earth, its eigenvalue is 1.

Fig. 2. A standing wave in a rope fixed at its boundaries is an example of an eigenvector, or more precisely, an eigenfunction of the transformation giving the acceleration. As time passes, the standing wave is scaled by a sinusoidal oscillation whose frequency is determined by the eigenvalue, but its overall shape is not modified.

However, three-dimensional geometric space is not the only vector space. For example, consider a stressed rope fixed at both ends, like the vibrating strings of a string instrument (Fig. 2). The distances of the atoms of the vibrating rope from their positions when the rope is at rest can be seen as the components of a vector in a space with as many dimensions as there are atoms in the rope. Assume the rope is a continuous medium. If one considers the equation for the acceleration at every point of the rope, its eigenvectors, or eigenfunctions, are the standing waves. The standing waves correspond to particular oscillations of the rope such that the acceleration of the rope is simply its shape scaled by a factor—this factor, the eigenvalue, turns out to be -\omega^2 where \omega is the angular frequency of the oscillation. Each component of the vector associated with the rope is multiplied by a time-dependent factor \sin(\omega t). If damping is considered, the amplitude of this oscillation decreases until the rope stops oscillating, corresponding to a complex ω. One can then associate a lifetime with the imaginary part of ω, and relate the concept of an eigenvector to the concept of resonance. Without damping, the fact that the acceleration operator (assuming a uniform density) is Hermitian leads to several important properties, such as that the standing wave patterns are orthogonal functions.

However, it is sometimes unnatural or even impossible to write down the eigenvalue equation in a matrix form. This occurs for instance when the vector space is infinite dimensional, for example, in the case of the rope above. Depending on the nature of the transformation T and the space to which it applies, it can be advantageous to represent the eigenvalue equation as a set of differential equations. If T is a differential operator, the eigenvectors are commonly called eigenfunctions of the differential operator representing T. For example, differentiation itself is a linear transformation, since \displaystyle\frac{d}{dt}(af+bg) = a \frac{df}{dt} + b \frac{dg}{dt} (f(t) and g(t) are differentiable functions, and a and b are constants). Consider differentiation with respect to t. Its eigenfunctions h(t) obey the eigenvalue equation \displaystyle\frac{dh}{dt} = \lambda h, where λ is the eigenvalue associated with the function. Such a function of time is constant if \lambda = 0, grows proportionally to itself if \lambda is positive, and decays proportionally to itself if \lambda is negative. For example, an idealized population of rabbits breeds faster the more rabbits there are, and thus satisfies the equation with a positive λ. The solution to the eigenvalue equation is h(t)= \exp (\lambda t), the exponential function; thus that function is an eigenfunction of the differential operator d/dt with the eigenvalue λ. If λ is negative, we call the evolution of h an exponential decay; if it is positive, an exponential growth. The value of λ can be any complex number. The spectrum of d/dt is therefore the whole complex plane. In this example the vector space in which the operator d/dt acts is the space of the differentiable functions of one variable. This space has infinite dimension (because it is not possible to express every differentiable function as a linear combination of a finite number of basis functions). However, the eigenspace associated with any given eigenvalue λ is one-dimensional. It is the set of all functions h(t)= A \exp (\lambda t), where A is an arbitrary constant, the initial population at t=0.
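A quick numerical illustration of the last point, namely that every multiple of exp(λt) behaves as an eigenfunction of d/dt for the same λ, is sketched below. The grid size and the sample value of λ are arbitrary choices, and the check is only approximate because the derivative is taken by finite differences.

import numpy as np

lam = -0.7                                  # an arbitrary sample eigenvalue (negative: exponential decay)
t = np.linspace(0.0, 5.0, 2001)
h = np.exp(lam * t)                         # candidate eigenfunction of d/dt

residual = np.gradient(h, t) - lam * h      # should vanish if dh/dt = lambda * h
print(np.max(np.abs(residual)))             # small, limited only by the finite-difference step

# Any constant multiple A * exp(lambda t) is also an eigenfunction for the same lambda,
# so the eigenspace of d/dt belonging to this lambda is one-dimensional:
A0 = 3.2
print(np.max(np.abs(np.gradient(A0 * h, t) - lam * (A0 * h))))   # equally small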
Spectral theorem

For more details on this topic, see spectral theorem.

In its simplest version, the spectral theorem states that, under certain conditions, a linear transformation of a vector \mathbf{v} can be expressed as a linear combination of the eigenvectors, in which the coefficient of each eigenvector is equal to the corresponding eigenvalue times the scalar product (or dot product) of the eigenvector with the vector \mathbf{v}. Mathematically, it can be written as: \mathcal{T}(\mathbf{v})= \lambda_1 (\mathbf{v}_1 \cdot \mathbf{v}) \mathbf{v}_1 + \lambda_2 (\mathbf{v}_2 \cdot \mathbf{v}) \mathbf{v}_2 + \cdots where \mathbf{v}_1, \mathbf{v}_2, \dots and \lambda_1, \lambda_2, \dots stand for the eigenvectors and eigenvalues of \mathcal{T}. The theorem is valid for all self-adjoint linear transformations (linear transformations given by real symmetric matrices and Hermitian matrices), and for the more general class of (complex) normal matrices. If one defines the nth power of a transformation as the result of applying it n times in succession, one can also define polynomials of transformations. A more general version of the theorem is that any polynomial P of \mathcal{T} is given by P(\mathcal{T})(\mathbf{v}) = P(\lambda_1) (\mathbf{v}_1 \cdot \mathbf{v}) \mathbf{v}_1 + P(\lambda_2) (\mathbf{v}_2 \cdot \mathbf{v}) \mathbf{v}_2 + \cdots The theorem can be extended to other functions of transformations, such as analytic functions, the most general case being Borel functions.

Main article: Eigendecomposition (matrix)

The spectral theorem for matrices can be stated as follows. Let \mathbf{A} be a square (n\times n) matrix. Let \mathbf{q}_1 ... \mathbf{q}_k be an eigenvector basis, i.e. an indexed set of k linearly independent eigenvectors, where k is the dimension of the space spanned by the eigenvectors of \mathbf{A}. If k=n, then \mathbf{A} can be written as \mathbf{A} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}, where \mathbf{Q} is the square (n\times n) matrix whose ith column is the basis eigenvector \mathbf{q}_i of \mathbf{A} and \mathbf{\Lambda} is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, i.e. \Lambda_{ii}=\lambda_i.
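As a concrete, hedged illustration (the matrix below is an arbitrary real symmetric example chosen for this sketch, not one taken from the text), both forms of the statement, the sum over eigenvector projections and the factorization A = QΛQ⁻¹, can be verified numerically:

import numpy as np

A = np.array([[2.0, 1.0, 0.0],          # an arbitrary real symmetric matrix, so the spectral theorem applies
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

lam, Q = np.linalg.eigh(A)              # eigenvalues (ascending) and orthonormal eigenvectors (columns of Q)

# T(v) as a sum of eigenvalue * (eigenvector . v) * eigenvector:
v = np.array([1.0, -2.0, 0.5])
Tv = sum(l * (q @ v) * q for l, q in zip(lam, Q.T))
print(np.allclose(Tv, A @ v))           # True

# Matrix form: A = Q Lambda Q^{-1}; here Q^{-1} = Q^T because Q is orthogonal.
print(np.allclose(A, Q @ np.diag(lam) @ Q.T))       # True

# A polynomial of A, e.g. P(A) = A^2 - 4A + I, is obtained by applying P to the eigenvalues:
P_of_A = A @ A - 4 * A + np.eye(3)
print(np.allclose(P_of_A, Q @ np.diag(lam**2 - 4 * lam + 1) @ Q.T))   # True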
Infinite-dimensional spaces

If the vector space is an infinite dimensional Banach space, the notion of eigenvalues can be generalized to the concept of spectrum. The spectrum is the set of scalars λ for which \left(T-\lambda\right)^{-1} is not defined; that is, such that T-\lambda has no bounded inverse. Clearly, if λ is an eigenvalue of T, λ is in the spectrum of T. In general, the converse is not true. There are operators on Hilbert or Banach spaces which have no eigenvectors at all. This can be seen in the following example. The bilateral shift on the Hilbert space \ell^2(\mathbf{Z}) (the space of all sequences of scalars \dots a_{-1}, a_0, a_1, a_2, \dots such that \cdots + |a_{-1}|^2 + |a_0|^2 + |a_1|^2 + |a_2|^2 + \cdots converges) has no eigenvalues but has spectral values. In infinite-dimensional spaces, the spectrum of a bounded operator is always nonempty. This is also true for an unbounded self-adjoint operator. Via its spectral measures, the spectrum of any self-adjoint operator, bounded or otherwise, can be decomposed into absolutely continuous, pure point, and singular parts. (See Decomposition of spectrum.) Exponential functions are eigenfunctions of the derivative operator (the derivative of an exponential function is proportional to the function itself). Exponential growth and decay therefore provide examples of continuous spectra, as does the vibrating string example illustrated above. The hydrogen atom is an example where both types of spectra appear. The eigenfunctions of the hydrogen atom Hamiltonian are called eigenstates and are grouped into two categories. The bound states of the hydrogen atom correspond to the discrete part of the spectrum (they have a discrete set of eigenvalues, which can be computed by the Rydberg formula), while the ionization processes are described by the continuous part (the energy of the collision/ionization is not quantized).

Schrödinger equation

Fig. 4. The wavefunctions associated with the bound states of an electron in a hydrogen atom can be seen as the eigenvectors of the hydrogen atom Hamiltonian as well as of the angular momentum operator. They are associated with eigenvalues interpreted as their energies (increasing downward: n=1,2,3,...) and angular momentum (increasing across: s, p, d,...). The illustration shows the square of the absolute value of the wavefunctions. Brighter areas correspond to higher probability density for a position measurement. The center of each figure is the atomic nucleus, a proton.

An example of an eigenvalue equation where the transformation \mathcal{T} is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics: H\psi_E = E\psi_E \, where H, the Hamiltonian, is a second-order differential operator and \psi_E, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue E, interpreted as its energy. However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for \psi_E within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which \psi_E and H can be represented as a one-dimensional array and a matrix, respectively. This allows one to represent the Schrödinger equation in a matrix form. (Fig. 4 presents the lowest eigenfunctions of the hydrogen atom Hamiltonian.) The Dirac notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by |\Psi_E\rangle. In this notation, the Schrödinger equation is: H|\Psi_E\rangle = E|\Psi_E\rangle where |\Psi_E\rangle is an eigenstate of H. Here H is a self-adjoint operator, the infinite-dimensional analog of a Hermitian matrix (see Observable). As in the matrix case, in the equation above H|\Psi_E\rangle is understood to be the vector obtained by application of the transformation H to |\Psi_E\rangle.
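The matrix representation described above can be made concrete by discretizing H on a grid. The sketch below is only illustrative: it assumes units in which ħ = m = 1, an arbitrary harmonic-oscillator potential V(x) = x²/2, and a simple second-difference approximation to the kinetic term, none of which are prescribed by the text.

import numpy as np

# Discretize H = -(1/2) d^2/dx^2 + V(x) on a finite grid (hbar = m = 1 for simplicity).
n, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

V = 0.5 * x**2                                        # harmonic-oscillator potential, chosen as an illustration

diag = 1.0 / dx**2 + V                                # kinetic main diagonal plus potential
off  = np.full(n - 1, -0.5 / dx**2)                   # kinetic off-diagonals
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)                            # H is real symmetric, so use eigh
print(E[:4])                                          # approximately [0.5, 1.5, 2.5, 3.5]:
                                                      # the lowest bound-state energies E_n = n + 1/2
# The columns of psi are the corresponding discretized eigenfunctions psi_E.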
Molecular orbitals

In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree-Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general sense, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. If one wants to underline this aspect, one speaks of an implicit eigenvalue equation. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree-Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called the Roothaan equations.

Geology and glaciology (orientation tensor)

In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which the mass of information contained in the orientations and dips of a clast fabric's constituents can be summarized in 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can only be compared graphically, such as in a Tri-Plot (Sneed and Folk) diagram [28], [29], or as a stereonet on a Wulff net [30]. The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. Eigenvectors output from programs such as Stereo32 [31] are in the order E1 > E2 > E3, with E1 being the primary orientation of clast orientation/dip, E2 the secondary, and E3 the tertiary, in terms of strength. The clast orientation is defined as the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). Various values of E1, E2 and E3 mean different things, as can be seen in the book 'A Practical Guide to the Study of Glacial Sediments' by Benn & Evans, 2004 [32].

Factor analysis

In factor analysis, the eigenvectors of a covariance matrix or correlation matrix correspond to factors, and the eigenvalues to the variance explained by these factors. Factor analysis is a statistical technique used in the social sciences and in marketing, product management, operations research, and other applied sciences that deal with large quantities of data. The objective is to explain most of the covariability among a number of observable random variables in terms of a smaller number of unobservable latent variables called factors. The observable random variables are modeled as linear combinations of the factors, plus unique variance terms. Eigenvalues are used in the analysis performed by Q-methodology software; factors with eigenvalues greater than 1.00 are considered significant, explaining an important amount of the variability in the data, while eigenvalues less than 1.00 are considered too weak, not explaining a significant portion of the data variability.

Fig. 5. Eigenfaces as examples of eigenvectors

In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research on eigen vision systems for determining hand gestures has also been carried out. Similarly, the concept of eigenvoices has been developed; an eigenvoice represents the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems, for speaker adaptation.
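To make the eigenface idea concrete, here is a small, self-contained sketch. It uses randomly generated stand-in data rather than real photographs, and the image size, the number of "images", and the number of hidden patterns are all arbitrary assumptions; the point is only to show the covariance-eigenvector computation on which eigenfaces, and factor-analysis-style "variance explained" statements, are based.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a face data set: 200 "images" of 16x16 = 256 pixels each,
# built from a few hidden patterns plus noise (real eigenfaces would use actual photographs).
n_images, n_pixels, n_patterns = 200, 256, 5
patterns = rng.normal(size=(n_patterns, n_pixels))
weights  = rng.normal(size=(n_images, n_patterns))
images   = weights @ patterns + 0.1 * rng.normal(size=(n_images, n_pixels))

# Eigenvectors of the covariance matrix of the mean-centred data are the "eigenfaces".
centred = images - images.mean(axis=0)
cov = centred.T @ centred / (n_images - 1)
eigvals, eigvecs = np.linalg.eigh(cov)                  # ascending order; columns of eigvecs are eigenfaces

# A handful of leading eigenvalues carries nearly all the variance, which is what makes
# eigenfaces useful for compression and recognition.
explained = eigvals[::-1] / eigvals.sum()
print(explained[:5].sum())                              # close to 1: five components explain most of the data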
Tensor of inertia

In mechanics, the eigenvectors of the inertia tensor define the principal axes of a rigid body. The tensor of inertia is a key quantity required in order to determine the rotation of a rigid body around its center of mass.

Stress tensor

In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components.

Eigenvalues of a graph

In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix, which is either T − A or I − T^{−1/2} A T^{−1/2}, where T is a diagonal matrix holding the degree of each vertex, and in T^{−1/2}, 0 is substituted for 0^{−1/2}. The kth principal eigenvector of a graph is defined as either the eigenvector corresponding to the kth largest eigenvalue of A, or the eigenvector corresponding to the kth smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector. The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second principal eigenvector can be used to partition the graph into clusters, via spectral clustering.

Notes

1. See Hawkins (1975), §2. 2. See Hawkins (1975), §3. 3. See Kline 1972, pp. 807-808. 4. See Kline 1972, p. 673. 5. See Kline 1972, pp. 715-716. 6. See Kline 1972, pp. 706-707. 7. See Kline 1972, p. 1063. 8. See Aldrich (2006). 9. See Golub & van Loan 1996, §7.3; Meyer 2000, §7.3. 10. See Strang 2006, p. 249. 11. See Sharipov 1996, p. 66. 12. See Bowen & Wang 1980, p. 148. 13. For a proof of this lemma, see Shilov 1969, p. 131, and Lemma for the eigenspace. 14. For a proof of this lemma, see Shilov 1969, p. 130, Hefferon 2001, p. 364, and Lemma for linear independence of eigenvectors. 15. See Shilov 1969, p. 131. 16. For proof, see Sharipov 1996, Theorem 4.4 on p. 68. 17. See Shores 2007, p. 252. 18. For a proof of this theorem, see Weisstein, Eric W., Eigenvector, from MathWorld − A Wolfram Web Resource. 19. See Strang 2006, footnote to p. 245. 20. For details and proof, see Meyer 2000, pp. 494-495. 21. See Greub 1975, p. 118. 22. See Greub 1975, p. 119. 23. For proof, see Gelfand 1971, p. 115. 24. For proof, see Greub 1975, p. 119. 25. For details and proof, see Kuttler 2007, p. 151. 26. See Shilov 1969, p. 134. 27. See Shilov 1969, p. 135 and Problem 11 to Chapter 5. 28. Graham, D., and Midgley, N., 2000. Earth Surface Processes and Landforms (25), pp. 1473-1477. 29. Sneed ED, Folk RL. 1958. Pebbles in the lower Colorado River, Texas, a study of particle morphogenesis. Journal of Geology 66(2): 114–150. 30. GIS-stereoplot: an interactive stereonet plotting module for ArcView 3.0 geographic information system. 31. Stereo32. 32. Benn, D., Evans, D., 2004. A Practical Guide to the Study of Glacial Sediments. London: Arnold. pp. 103-107.

• Korn, Granino A.; Korn, Theresa M.
(2000), Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review, 1152 p., Dover Publications, 2 Revised edition, ISBN 0-486-41147-8 . • John Aldrich, Eigenvalue, eigenfunction, eigenvector, and related terms. In Jeff Miller (Editor), Earliest Known Uses of Some of the Words of Mathematics, last updated 7 August 2006, accessed 22 August 2006. • Strang, Gilbert (1993), Introduction to linear algebra, Wellesley-Cambridge Press, Wellesley, MA, ISBN 0-961-40885-5 . • Strang, Gilbert (2006), Linear algebra and its applications, Thomson, Brooks/Cole, Belmont, CA, ISBN 0-030-10567-6 . • Bowen, Ray M.; Wang, Chao-Cheng (1980), Linear and multilinear algebra, Plenum Press, New York, NY, ISBN 0-306-37508-7 . • Claude Cohen-Tannoudji, Quantum Mechanics, Wiley (1977). ISBN 0-471-16432-1. (Chapter II. The mathematical tools of quantum mechanics.) • John B. Fraleigh and Raymond A. Beauregard, Linear Algebra (3rd edition), Addison-Wesley Publishing Company (1995). ISBN 0-201-83999-7 (international edition). • Golub, Gene H.; van Loan, Charles F. (1996), Matrix computations (3rd Edition), Johns Hopkins University Press, Baltimore, MD, ISBN 978-0-8018-5414-9 . • T. Hawkins, Cauchy and the spectral theory of matrices, Historia Mathematica, vol. 2, pp. 1–29, 1975. • Roger A. Horn and Charles R. Johnson, Matrix Analysis, Cambridge University Press, 1985. ISBN 0-521-30586-1 (hardback), ISBN 0-521-38632-2 (paperback). • Kline, Morris (1972), Mathematical thought from ancient to modern times, Oxford University Press, ISBN 0-195-01496-0 . • Meyer, Carl D. (2000), Matrix analysis and applied linear algebra, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, ISBN 978-0-89871-454-8 . • Brown, Maureen, "Illuminating Patterns of Perception: An Overview of Q Methodology" October 2004 • Gene H. Golub and Henk A. van der Vorst, "Eigenvalue computation in the 20th century," Journal of Computational and Applied Mathematics 123, 35-65 (2000). • Max A. Akivis and Vladislav V. Goldberg, Tensor calculus (in Russian), Science Publishers, Moscow, 1969. • Gelfand, I. M. (1971), Lecture notes in linear algebra, Russian: Science Publishers, Moscow  • Pavel S. Alexandrov, Lecture notes in analytical geometry (in Russian), Science Publishers, Moscow, 1968. • Carter, Tamara A., Richard A. Tapia, and Anne Papaconstantinou, Linear Algebra: An Introduction to Linear Algebra for Pre-Calculus Students, Rice University, Online Edition, Retrieved on 2008-02-19. • Steven Roman, Advanced linear algebra 3rd Edition, Springer Science + Business Media, LLC, New York, NY, 2008. ISBN 978-0-387-72828-5 • Shilov, G. E. (1969), Finite-dimensional (linear) vector spaces, Russian: State Technical Publishing House, 3rd Edition, Moscow . • Hefferon, Jim (2001), Linear Algebra, Online book, St Michael's College, Colchester, Vermont, USA, . • Kuttler, Kenneth (2007), An introduction to linear algebra, Online e-book in PDF format, Brigham Young University, . • James W. Demmel, Applied Numerical Linear Algebra, SIAM, 1997, ISBN 0-89871-389-7. • Robert A. Beezer, A First Course In Linear Algebra, Free online book under GNU licence, University of Puget Sound, 2006 • Lancaster, P. Matrix Theory (in Russian), Science Publishers, Moscow, 1973, 280 p. • Paul R. Halmos, Finite-Dimensional Vector Spaces, 8th Edition, Springer-Verlag, New York, 1987, 212 p., ISBN 0387900934 • Pigolkina, T. S. and Shulman, V. S., Eigenvalue (in Russian), In:Vinogradov, I. M. (Ed.), Mathematical Encyclopedia, Vol. 
5, Soviet Encyclopedia, Moscow, 1977. • Pigolkina, T. S. and Shulman, V. S., Eigenvector (in Russian), In: Vinogradov, I. M. (Ed.), Mathematical Encyclopedia, Vol. 5, Soviet Encyclopedia, Moscow, 1977. • Greub, Werner H. (1975), Linear Algebra (4th Edition), Springer-Verlag, New York, NY, ISBN 0-387-90110-8. • Larson, Ron and Bruce H. Edwards, Elementary Linear Algebra, 5th Edition, Houghton Mifflin Company, 2003, ISBN 0618335676. • Curtis, Charles W., Linear Algebra: An Introductory Approach, 347 p., Springer; 4th ed. 1984. Corr. 7th printing edition (August 19, 1999), ISBN 0387909923. • Shores, Thomas S. (2007), Applied linear algebra and matrix analysis, Springer Science+Business Media, LLC, ISBN 0-387-33194-8. • Sharipov, Ruslan A. (1996), Course of Linear Algebra and Multidimensional Geometry: the textbook, Online e-book in PDF format, Bashkir State University, Ufa, arXiv:math/0405323v1, ISBN 5-7477-0099-5, Archived from the original on 2009-10-26.
Archive for the ‘Metaphysical Spouting’ Category “Could a Quantum Computer Have Subjective Experience?” Monday, August 25th, 2014 Author’s Note: Below is the prepared version of a talk that I gave two weeks ago at the workshop Quantum Foundations of a Classical Universe, which was held at IBM’s TJ Watson Research Center in Yorktown Heights, NY.  My talk is for entertainment purposes only; it should not be taken seriously by anyone.  If you reply in a way that makes clear you did take it seriously (“I’m shocked and outraged that someone who dares to call himself a scientist would … [blah blah]“), I will log your IP address, hunt you down at night, and force you to put forward an account of consciousness and decoherence that deals with all the paradoxes discussed below—and then reply at length to all criticisms of your account. If you’d like to see titles, abstracts, and slides for all the talks from the workshop—including by Charles Bennett, Sean Carroll, James Hartle, Adrian Kent, Stefan Leichenauer, Ken Olum, Don Page, Jason Pollack, Jess Riedel, Mark Srednicki, Wojciech Zurek, and Michael Zwolak—click here.  You’re also welcome to discuss these other nice talks in the comments section, though I might or might not be able to answer questions about them.  Apparently videos of all the talks will be available before long (Jess Riedel has announced that videos are now available). (Note that, as is probably true for other talks as well, the video of my talk differs substantially from the prepared version—it mostly just consists of interruptions and my responses to them!  On the other hand, I did try to work some of the more salient points from the discussion into the text below.) Thanks so much to Charles Bennett and Jess Riedel for organizing the workshop, and to all the participants for great discussions. I didn’t prepare slides for this talk—given the topic, what slides would I use exactly?  “Spoiler alert”: I don’t have any rigorous results about the possibility of sentient quantum computers, to state and prove on slides.  I thought of giving a technical talk on quantum computing theory, but then I realized that I don’t really have technical results that bear directly on the subject of the workshop, which is how the classical world we experience emerges from the quantum laws of physics.  So, given the choice between a technical talk that doesn’t really address the questions we’re supposed to be discussing, or a handwavy philosophical talk that at least tries to address them, I opted for the latter, so help me God. Let me start with a story that John Preskill told me years ago.  In the far future, humans have solved not only the problem of building scalable quantum computers, but also the problem of human-level AI.  They’ve built a Turing-Test-passing quantum computer.  The first thing they do, to make sure this is actually a quantum computer, is ask it to use Shor’s algorithm to factor a 10,000-digit number.  So the quantum computer factors the number.  Then they ask it, “while you were factoring that number, what did it feel like?  did you feel yourself branching into lots of parallel copies, which then recohered?  or did you remain a single consciousness—a ‘unitary’ consciousness, as it were?  
can you tell us from introspection which interpretation of quantum mechanics is the true one?”  The quantum computer ponders this for a while and then finally says, “you know, I might’ve known before, but now I just … can’t remember.” I like to tell this story when people ask me whether the interpretation of quantum mechanics has any empirical consequences. Look, I understand the impulse to say “let’s discuss the measure problem, or the measurement problem, or derivations of the Born rule, or Boltzmann brains, or observer-counting, or whatever, but let’s take consciousness off the table.”  (Compare: “let’s debate this state law in Nebraska that says that, before getting an abortion, a woman has to be shown pictures of cute babies.  But let’s take the question of whether or not fetuses have human consciousness—i.e., the actual thing that’s driving our disagreement about that and every other subsidiary question—off the table, since that one is too hard.”)  The problem, of course, is that even after you’ve taken the elephant off the table (to mix metaphors), it keeps climbing back onto the table, often in disguises.  So, for better or worse, my impulse tends to be the opposite: to confront the elephant directly. Having said that, I still need to defend the claim that (a) the questions we’re discussing, centered around quantum mechanics, Many Worlds, and decoherence, and (b) the question of which physical systems should be considered “conscious,” have anything to do with each other.  Many people would say that the connection doesn’t go any deeper than: “quantum mechanics is mysterious, consciousness is also mysterious, ergo maybe they’re related somehow.”  But I’m not sure that’s entirely true.  One thing that crystallized my thinking about this was a remark made in a lecture by Peter Byrne, who wrote a biography of Hugh Everett.  Byrne was discussing the question, why did it take so many decades for Everett’s Many-Worlds Interpretation to become popular?  Of course, there are people who deny quantum mechanics itself, or who have basic misunderstandings about it, but let’s leave those people aside.  Why did people like Bohr and Heisenberg dismiss Everett?  More broadly: why wasn’t it just obvious to physicists from the beginning that “branching worlds” is a picture that the math militates toward, probably the simplest, easiest story one can tell around the Schrödinger equation?  Even if early quantum physicists rejected the Many-Worlds picture, why didn’t they at least discuss and debate it? Here was Byrne’s answer: he said, before you can really be on board with Everett, you first need to be on board with Daniel Dennett (the philosopher).  He meant: you first need to accept that a “mind” is just some particular computational process.  At the bottom of everything is the physical state of the universe, evolving via the equations of physics, and if you want to know where consciousness is, you need to go into that state, and look for where computations are taking place that are sufficiently complicated, or globally-integrated, or self-referential, or … something, and that’s where the consciousness resides.  And crucially, if following the equations tells you that after a decoherence event, one computation splits up into two computations, in different branches of the wavefunction, that thereafter don’t interact—congratulations!  You’ve now got two consciousnesses. 
And if everything above strikes you as so obvious as not to be worth stating … well, that’s a sign of how much things changed in the latter half of the 20th century.  Before then, many thinkers would’ve been more likely to say, with Descartes: no, my starting point is not the physical world.  I don’t even know a priori that there is a physical world.  My starting point is my own consciousness, which is the one thing besides math that I can be certain about.  And the point of a scientific theory is to explain features of my experience—ultimately, if you like, to predict the probability that I’m going to see X or Y if I do A or B.  (If I don’t have prescientific knowledge of myself, as a single, unified entity that persists in time, makes choices, and later observes their consequences, then I can’t even get started doing science.)  I’m happy to postulate a world external to myself, filled with unseen entities like electrons behaving in arbitrarily unfamiliar ways, if it will help me understand my experience—but postulating other versions of me is, at best, irrelevant metaphysics.  This is a viewpoint that could lead you to Copenhagenism, or to its newer variants like quantum Bayesianism. Of course, there are already tremendous difficulties here, even if we ignore quantum mechanics entirely.  Ken Olum went over much of this ground in his talk yesterday (see here for a relevant paper by Davenport and Olum).  You’ve all heard the ones about, would you agree to be painlessly euthanized, provided that a complete description of your brain would be sent to Mars as an email attachment, and a “perfect copy” of you would be reconstituted there?  Would you demand that the copy on Mars be up and running before the original was euthanized?  But what do we mean by “before”—in whose frame of reference? Some people say: sure, none of this is a problem!  If I’d been brought up since childhood taking family vacations where we all emailed ourselves to Mars and had our original bodies euthanized, I wouldn’t think anything of it.  But the philosophers of mind are barely getting started. You might say, sure, maybe these questions are puzzling, but what’s the alternative?  Either we have to say that consciousness is a byproduct of any computation of the right complexity, or integration, or recursiveness (or something) happening anywhere in the wavefunction of the universe, or else we’re back to saying that beings like us are conscious, and all these other things aren’t, because God gave the souls to us, so na-na-na.  Or I suppose we could say, like the philosopher John Searle, that we’re conscious, and the lookup table and homomorphically-encrypted brain and Vaidman brain and all these other apparitions aren’t, because we alone have “biological causal powers.”  And what do those causal powers consist of?  Hey, you’re not supposed to ask that!  Just accept that we have them.  Or we could say, like Roger Penrose, that we’re conscious and the other things aren’t because we alone have microtubules that are sensitive to uncomputable effects from quantum gravity.  But neither of those two options ever struck me as much of an improvement. Yet I submit to you that, between these extremes, there’s another position we can stake out—one that I certainly don’t know to be correct, but that would solve so many different puzzles if it were correct that, for that reason alone, it seems to me to merit more attention than it usually receives.
(In an effort to give the view that attention, a couple years ago I wrote an 85-page essay called The Ghost in the Quantum Turing Machine, which one or two people told me they actually read all the way through.)  If, after a lifetime of worrying (on weekends) about stuff like whether a giant lookup table would be conscious, I now seem to be arguing for this particular view, it’s less out of conviction in its truth than out of a sense of intellectual obligation: to whatever extent people care about these slippery questions at all, to whatever extent they think various alternative views deserve a hearing, I believe this one does as well. Before I go further, let me be extremely clear about what this view is not saying.  Firstly, it’s not saying that the brain is a quantum computer, in any interesting sense—let alone a quantum-gravitational computer, like Roger Penrose wants!  Indeed, I see no evidence, from neuroscience or any other field, that the cognitive information processing done by the brain is anything but classical.  The view I’m discussing doesn’t challenge conventional neuroscience on that account. Secondly, this view doesn’t say that consciousness is in any sense necessary for decoherence, or for the emergence of a classical world.  I’ve never understood how one could hold such a belief, while still being a scientific realist.  After all, there are trillions of decoherence events happening every second in stars and asteroids and uninhabited planets.  Do those events not “count as real” until a human registers them?  (Or at least a frog, or an AI?)  The view I’m discussing only asserts the converse: that decoherence is necessary for consciousness.  (By analogy, presumably everyone agrees that some amount of computation is necessary for an interesting consciousness, but that doesn’t mean consciousness is necessary for computation.) Thirdly, the view I’m discussing doesn’t say that “quantum magic” is the explanation for consciousness.  It’s silent on the explanation for consciousness (to whatever extent that question makes sense); it seeks only to draw a defensible line between the systems we want to regard as conscious and the systems we don’t—to address what I recently called the Pretty-Hard Problem.  And the (partial) answer it suggests doesn’t seem any more “magical” to me than any other proposed answer to the same question.  For example, if one said that consciousness arises from any computation that’s sufficiently “integrated” (or something), I could reply: what’s the “magical force” that imbues those particular computations with consciousness, and not other computations I can specify?  Or if one said (like Searle) that consciousness arises from the biology of the brain, I could reply: so what’s the “magic” of carbon-based biology, that could never be replicated in silicon?  Or even if one threw up one’s hands and said everything was conscious, I could reply: what’s the magical power that imbues my stapler with a mind?  Each of these views, along with the view that stresses the importance of decoherence and the arrow of time, is worth considering.  In my opinion, each should be judged according to how well it holds up under the most grueling battery of paradigm-cases, thought experiments, and reductios ad absurdum we can devise. So, why might one conjecture that decoherence, and participation in the arrow of time, were necessary conditions for consciousness?  
I suppose I could offer some argument about our subjective experience of the passage of time being a crucial component of our consciousness, and the passage of time being bound up with the Second Law.  Truthfully, though, I don’t have any a-priori argument that I find convincing.  All I can do is show you how many apparent paradoxes get resolved if you make this one speculative leap. For starters, if you think about exactly how our chunk of matter is going to amplify microscopic fluctuations, it could depend on details like the precise spin orientations of various subatomic particles in the chunk.  But that has an interesting consequence: if you’re an outside observer who doesn’t know the chunk’s quantum state, it might be difficult or impossible for you to predict what the chunk is going to do next—even just to give decent statistical predictions, like you can for a hydrogen atom.  And of course, you can’t in general perform a measurement that will tell you the chunk’s quantum state, without violating the No-Cloning Theorem.  For the same reason, there’s in general no physical procedure that you can apply to the chunk to duplicate it exactly: that is, to produce a second chunk that you can be confident will behave identically (or almost identically) to the first, even just in a statistical sense.  (Again, this isn’t assuming any long-range quantum coherence in the chunk: only microscopic coherence that then gets amplified.) It might be objected that there are all sorts of physical systems that “amplify microscopic fluctuations,” but that aren’t anything like what I described, at least not in any interesting sense: for example, a Geiger counter, or a photodetector, or any sort of quantum-mechanical random-number generator.  You can make, if not an exact copy of a Geiger counter, surely one that’s close enough for practical purposes.  And, even though the two counters will record different sequences of clicks when pointed at identical sources, the statistical distribution of clicks will be the same (and precisely calculable), and surely that’s all that matters.  So, what separates these examples from the sorts of examples I want to discuss? What separates them is the undisputed existence of what I’ll call a clean digital abstraction layer.  By that, I mean a macroscopic approximation to a physical system that an external observer can produce, in principle, without destroying the system; that can be used to predict what the system will do to excellent accuracy (given knowledge of the environment); and that “sees” quantum-mechanical uncertainty—to whatever extent it does—as just a well-characterized source of random noise.  If a system has such an abstraction layer, then we can regard any quantum noise as simply part of the “environment” that the system observes, rather than part of the system itself.  I’ll take it as clear that such clean abstraction layers exist for a Geiger counter, a photodetector, or a computer with a quantum random number generator.  By contrast, for (say) an animal brain, I regard it as currently an open question whether such an abstraction layer exists or not.  
If, someday, it becomes routine for nanobots to swarm through people’s brains and make exact copies of them—after which the “original” brains can be superbly predicted in all circumstances, except for some niggling differences that are traceable back to different quantum-mechanical dice rolls—at that point, perhaps educated opinion will have shifted to the point where we all agree the brain does have a clean digital abstraction layer.  But from where we stand today, it seems entirely possible to agree that the brain is a physical system obeying the laws of physics, while doubting that the nanobots would work as advertised.  It seems possible that—as speculated by Bohr, Compton, Eddington, and even Alan Turing—if you want to get it right you’ll need more than just the neural wiring graph, the synaptic strengths, and the approximate neurotransmitter levels.  Maybe you also need (e.g.) the internal states of the neurons, the configurations of sodium-ion channels, or other data that you simply can’t get without irreparably damaging the original brain—not only as a contingent matter of technology but as a fundamental matter of physics. (As a side note, I should stress that obviously, even without invasive nanobots, our brains are constantly changing, but we normally don’t say as a result that we become completely different people at each instant!  To my way of thinking, though, this transtemporal identity is fundamentally different from a hypothetical identity between different “copies” of you, in the sense we’re talking about.  For one thing, all your transtemporal doppelgängers are connected by a single, linear chain of causation.  For another, outside movies like Bill and Ted’s Excellent Adventure, you can’t meet your transtemporal doppelgängers and have a conversation with them, nor can scientists do experiments on some of them, then apply what they learned to others that remained unaffected by their experiments.) So, on this view, a conscious chunk of matter would be one that not only acts irreversibly, but that might well be unclonable for fundamental physical reasons.  If so, that would neatly resolve many of the puzzles that I discussed before.  So for example, there’s now a straightforward reason why you shouldn’t consent to being killed, while your copy gets recreated on Mars from an email attachment.  Namely, that copy will have a microstate with no direct causal link to your “original” microstate—so while it might behave similarly to you in many ways, you shouldn’t expect that your consciousness will “transfer” to it.  If you wanted to get your exact microstate to Mars, you could do that in principle using quantum teleportation—but as we all know, quantum teleportation inherently destroys the original copy, so there’s no longer any philosophical problem!  (Or, of course, you could just get on a spaceship bound for Mars: from a philosophical standpoint, it amounts to the same thing.) Similarly, in the case where the simulation of your brain was run three times for error-correcting purposes: that could bring about three consciousnesses if, and only if, the three simulations were tied to different sets of decoherence events.  The giant lookup table and the Earth-sized brain simulation wouldn’t bring about any consciousness, unless they were implemented in such a way that they no longer had a clean digital abstraction layer.  What about the homomorphically-encrypted brain simulation?  
That might no longer work, simply because we can’t assume that the microscopic fluctuations that get amplified are homomorphically encrypted.  Those are “in the clear,” which inevitably leaks information.  As for the quantum computer that simulates your thought processes and then perfectly reverses the simulation, or that queries you like a Vaidman bomb—in order to implement such things, we’d of course need to use quantum fault-tolerance, so that the simulation of you stayed in an encoded subspace and didn’t decohere.  But under our assumption, that would mean the simulation wasn’t conscious. Now, it might seem to some of you like I’m suggesting something deeply immoral.  After all, the view I’m considering implies that, even if a system passed the Turing Test, and behaved identically to a human, even if it eloquently pleaded for its life, if it wasn’t irreversibly decohering microscopic events then it wouldn’t be conscious, so it would be fine to kill it, torture it, whatever you want. But wait a minute: if a system isn’t doing anything irreversible, then what exactly does it mean to “kill” it?  If it’s a classical computation, then at least in principle, you could always just restore from backup.  You could even rewind and not only erase the memories of, but “uncompute” (“untorture”?) whatever tortures you had performed.  If it’s a quantum computation, you could always invert the unitary transformation U that corresponded to killing the thing (then reapply U and invert it again for good measure, if you wanted).  Only for irreversible systems are there moral acts with irreversible consequences. This is related to something that’s bothered me for years in quantum foundations.  When people discuss Schrödinger’s cat, they always—always—insert some joke about, “obviously, this experiment wouldn’t pass the Ethical Review Board.  Nowadays, we try to avoid animal cruelty in our quantum gedankenexperiments.”  But actually, I claim that there’s no animal cruelty at all in the Schrödinger’s cat experiment.  And here’s why: in order to prove that the cat was ever in a coherent superposition of |Alive〉 and |Dead〉, you need to be able to measure it in a basis like {|Alive〉+|Dead〉,|Alive〉-|Dead〉}.  But if you can do that, you must have such precise control over all the cat’s degrees of freedom that you can also rotate unitarily between the |Alive〉 and |Dead〉 states.  (To see this, let U be the unitary that you applied to the |Alive〉 branch, and V the unitary that you applied to the |Dead〉 branch, to bring them into coherence with each other; then consider applying U-1V.)  But if you can do that, then in what sense should we say that the cat in the |Dead〉 state was ever “dead” at all?  Normally, when we speak of “killing,” we mean doing something irreversible—not rotating to some point in a Hilbert space that we could just as easily rotate away from. (There followed discussion among some audience members about the question of whether, if you destroyed all records of some terrible atrocity, like the Holocaust, everywhere in the physical world, you would thereby cause the atrocity “never to have happened.”  Many people seemed surprised by my willingness to accept that implication of what I was saying.  
By way of explaining, I tried to stress just how far our everyday, intuitive notion of “destroying all records of something” falls short of what would actually be involved here: when we think of “destroying records,” we think about burning books, destroying the artifacts in museums, silencing witnesses, etc.  But even if all those things were done and many others, still the exact configurations of the air, the soil, and photons heading away from the earth at the speed of light would retain their silent testimony to the Holocaust’s reality.  “Erasing all records” in the physics sense would be something almost unimaginably more extreme: it would mean inverting the entire physical evolution in the vicinity of the earth, stopping time’s arrow and running history itself backwards.  Such ‘unhappening’ of what’s happened is something that we lack any experience of, at least outside of certain quantum interference experiments—though in the case of the Holocaust, one could be forgiven for wishing it were possible.) OK, so much for philosophy of mind and morality; what about the interpretation of quantum mechanics?  If we think about consciousness in the way I’ve suggested, then who’s right: the Copenhagenists or the Many-Worlders?  You could make a case for either.  The Many-Worlders would be right that we could always, if we chose, think of decoherence events as “splitting” our universe into multiple branches, each with different versions of ourselves, that thereafter don’t interact.  On the other hand, the Copenhagenists would be right that, even in principle, we could never do any experiment where this “splitting” of our minds would have any empirical consequence.  On this view, if you can control a system well enough that you can actually observe interference between the different branches, then it follows that you shouldn’t regard the system as conscious, because it’s not doing anything irreversible. In my essay, the implication that concerned me the most was the one for “free will.”  If being conscious entails amplifying microscopic events in an irreversible and unclonable way, then someone looking at a conscious system from the outside might not, in general, be able to predict what it’s going to do next, not even probabilistically.  In other words, its decisions might be subject to at least some “Knightian uncertainty”: uncertainty that we can’t even quantify in a mutually-agreed way using probabilities, in the same sense that we can quantify our uncertainty about (say) the time of a radioactive decay.  And personally, this is actually the sort of “freedom” that interests me the most.  I don’t really care if my choices are predictable by God, or by a hypothetical Laplace demon: that is, if they would be predictable (at least probabilistically), given complete knowledge of the microstate of the universe.  By definition, there’s essentially no way for my choices not to be predictable in that weak and unempirical sense!  On the other hand, I’d prefer that my choices not be completely predictable by other people.  If someone could put some sheets of paper into a sealed envelope, then I spoke extemporaneously for an hour, and then the person opened the envelope to reveal an exact transcript of everything I said, that’s the sort of thing that really would cause me to doubt in what sense “I” existed as a locus of thought.  
But you’d have to actually do the experiment (or convince me that it could be done): it doesn’t count just to talk about it, or to extrapolate from fMRI experiments that predict which of two buttons a subject is going to press with 60% accuracy a few seconds in advance. But since we’ve got some cosmologists in the house, let me now turn to discussing the implications of this view for Boltzmann brains. (For those tuning in from home: a Boltzmann brain is a hypothetical chance fluctuation in the late universe, which would include a conscious observer with all the perceptions that a human being—say, you—is having right now, right down to false memories and false beliefs of having arisen via Darwinian evolution.  On statistical grounds, the overwhelming majority of Boltzmann brains last just long enough to have a single thought—like, say, the one you’re having right now—before they encounter the vacuum and freeze to death.  If you measured some part of the vacuum state toward which our universe seems to be heading, asking “is there a Boltzmann brain here?,” quantum mechanics predicts that the probability would be ridiculously astronomically small, but nonzero.  But, so the argument goes, if the vacuum lasts for infinite time, then as long as the probability is nonzero, it doesn’t matter how tiny it is: you’ll still get infinitely many Boltzmann brains indistinguishable from any given observer; and for that reason, any observer should consider herself infinitely likelier to be a Boltzmann brain than to be the “real,” original version.  For the record, even among the strange people at the IBM workshop, no one actually worried about being a Boltzmann brain.  The question, rather, is whether, if a cosmological model predicts Boltzmann brains, then that’s reason enough to reject the model, or whether we can live with such a prediction, since we have independent grounds for knowing that we can’t be Boltzmann brains.) At this point, you can probably guess where this is going.  If decoherence, entropy production, full participation in the arrow of time are necessary conditions for consciousness, then it would follow, in particular, that a Boltzmann brain is not conscious.  So we certainly wouldn’t be Boltzmann brains, even under a cosmological model that predicts infinitely more of them than of us.  We can wipe our hands; the problem is solved! I find it extremely interesting that, in their recent work, Kim Boddy, Sean Carroll, and Jason Pollack reached a similar conclusion, but from a completely different starting point.  They said: look, under reasonable assumptions, the late universe is just going to stay forever in an energy eigenstate—just sitting there doing nothing.  It’s true that, if someone came along and measured the energy eigenstate, asking “is there a Boltzmann brain here?,” then with a tiny but nonzero probability the answer would be yes.  But since no one is there measuring, what licenses us to interpret the nonzero overlap in amplitude with the Boltzmann brain state, as a nonzero probability of there being a Boltzmann brain?  I think they, too, are implicitly suggesting: if there’s no decoherence, no arrow of time, then we’re not authorized to say that anything is happening that “counts” for anthropic purposes. Let me now mention an obvious objection.  (In fact, when I gave the talk, this objection was raised much earlier.)  
You might say, “look, if you really think irreversible decoherence is a necessary condition for consciousness, then you might find yourself forced to say that there’s no consciousness, because there might not be any such thing as irreversible decoherence!  Imagine that our entire solar system were enclosed in an anti de Sitter (AdS) boundary, like in Greg Egan’s science-fiction novel Quarantine.  Inside the box, there would just be unitary evolution in some Hilbert space: maybe even a finite-dimensional Hilbert space.  In which case, all these ‘irreversible amplifications’ that you lay so much stress on wouldn’t be irreversible at all: eventually all the Everett branches would recohere; in fact they’d decohere and recohere infinitely many times.  So by your lights, how could anything be conscious inside the box?” My response to this involves one last speculation.  I speculate that the fact that we don’t appear to live in AdS space—that we appear to live in (something evolving toward) a de Sitter space, with a positive cosmological constant—might be deep and important and relevant.  I speculate that, in our universe, “irreversible decoherence” means: the records of what you did are now heading toward our de Sitter horizon at the speed of light, and for that reason alone—even if for no others—you can’t put Humpty Dumpty back together again.  (Here I should point out, as several workshop attendees did to me, that Bousso and Susskind explored something similar in their paper The Multiverse Interpretation of Quantum Mechanics.) Does this mean that, if cosmologists discover tomorrow that the cosmological constant is negative, or will become negative, then it will turn out that none of us were ever conscious?  No, that’s stupid.  What it would suggest is that the attempt I’m now making on the Pretty-Hard Problem had smacked into a wall (an AdS wall?), so that I, and anyone else who stressed in-principle irreversibility, should go back to the drawing board.  (By analogy, if some prescription for getting rid of Boltzmann brains fails, that doesn’t mean we are Boltzmann brains; it just means we need a new prescription.  Tempting as it is to skewer our opponents’ positions with these sorts of strawman inferences, I hope we can give each other the courtesy of presuming a bare minimum of sense.) Another question: am I saying that, in order to be absolutely certain of whether some entity satisfied the postulated precondition for consciousness, one might, in general, need to look billions of years into the future, to see whether the “decoherence” produced by the entity was really irreversible?  Yes (pause to gulp bullet).  I am saying that.  On the other hand, I don’t think it’s nearly as bad as it sounds.  After all, the category of “consciousness” might be morally relevant, or relevant for anthropic reasoning, but presumably we all agree that it’s unlikely to play any causal role in the fundamental laws of physics.  So it’s not as if we’ve introduced any teleology into the laws of physics by this move. Let me end by pointing out what I’ll call the “Tegmarkian slippery slope.”  It feels scientific and rational—from the perspective of many of us, even banal—to say that, if we’re conscious, then any sufficiently-accurate computer simulation of us would also be.  But I tried to convince you that this view depends, for its aura of obviousness, on our agreeing not to probe too closely exactly what would count as a “sufficiently-accurate” simulation.  
E.g., does it count if the simulation is done in heavily-encrypted form, or encoded as a giant lookup table?  Does it matter if anyone actually runs the simulation, or consults the lookup table?  Now, all the way at the bottom of the slope is Max Tegmark, who asks: to produce consciousness, what does it matter if the simulation is physically instantiated at all?  Why isn’t it enough for the simulation to “exist” mathematically?  Or, better yet: if you’re worried about your infinitely-many Boltzmann brain copies, then why not worry equally about the infinitely many descriptions of your life history that are presumably encoded in the decimal expansion of π?  Why not hold workshops about how to avoid the prediction that we’re infinitely likelier to be “living in π” than to be our “real” selves? From this extreme, even most scientific rationalists recoil.  They say, no, even if we don’t yet know exactly what’s meant by “physical instantiation,” we agree that you only get consciousness if the computer program is physically instantiated somehow.  But now I have the opening I want.  I can say: once we agree that physical existence is a prerequisite for consciousness, why not participation in the Arrow of Time?  After all, our ordinary ways of talking about sentient beings—outside of quantum mechanics, cosmology, and maybe theology—don’t even distinguish between the concepts “exists” and “exists and participates in the Arrow of Time.”  And to say we have no experience of reversible, clonable, coherently-executable, atemporal consciousnesses is a massive understatement. Of course, we should avoid the sort of arbitrary prejudice that Turing warned against in Computing Machinery and Intelligence.  Just because we lack experience with extraterrestrial consciousnesses, doesn’t mean it would be OK to murder an intelligent extraterrestrial if we met one tomorrow.  In just the same way, just because we lack experience with clonable, atemporal consciousnesses, doesn’t mean it would be OK to … wait!  As we said before, clonability, and aloofness from time’s arrow, call severely into question what it even means to “murder” something.  So maybe this case isn’t as straightforward as the extraterrestrials after all. At this point, I’ve probably laid out enough craziness, so let me stop and open things up for discussion.

Integrated Information Theory: Virgil Griffith opines
Wednesday, June 25th, 2014

Giulio Tononi and Me: A Phi-nal Exchange
Friday, May 30th, 2014

Retiring falsifiability? A storm in Russell’s teacup
Friday, January 17th, 2014

My good friend Sean Carroll took a lot of flak recently for answering this year’s Edge question, “What scientific idea is ready for retirement?,” with “Falsifiability”, and for using string theory and the multiverse as examples of why science needs to break out of its narrow Popperian cage.  For more, see this blog post of Sean’s, where one commenter after another piles on the beleaguered dude for his abandonment of science and reason themselves. My take, for whatever it’s worth, is that Sean and his critics are both right.
Sean is right that “falsifiability” is a crude slogan that fails to capture what science really aims at.  As a doofus example, the theory that zebras exist is presumably both “true” and “scientific,” but it’s not “falsifiable”: if zebras didn’t exist, there would be no experiment that proved their nonexistence.  (And that’s to say nothing of empirical claims involving multiple nested quantifiers: e.g., “for every physical device that tries to solve the Traveling Salesman Problem in polynomial time, there exists an input on which the device fails.”)  Less doofusly, a huge fraction of all scientific progress really consists of mathematical or computational derivations from previously-accepted theories—and, as such, has no “falsifiable content” apart from the theories themselves.  So, do workings-out of mathematical consequences count as “science”?  In practice, the Nobel committee says sure they do, but only if the final results of the derivations are “directly” confirmed by experiment.  Far better, it seems to me, to say that science is a search for explanations that do essential and nontrivial work, within the network of abstract ideas whose ultimate purpose is to account for our observations.  (On this particular question, I endorse everything David Deutsch has to say in The Beginning of Infinity, which you should read if you haven’t.)

On the other side, I think Sean’s critics are right that falsifiability shouldn’t be “retired.”  Instead, falsifiability’s portfolio should be expanded, with full-time assistants (like explanatory power) hired to lighten falsifiability’s load.

I also, to be honest, don’t see that modern philosophy of science has advanced much beyond Popper in its understanding of these issues.  Last year, I did something weird and impulsive: I read Karl Popper.  Given all the smack people talk about him these days, I was pleasantly surprised by the amount of nuance, reasonableness, and just general getting-it that I found.  Indeed, I found a lot more of those things in Popper than I found in his latter-day overthrowers Kuhn and Feyerabend.  For Popper (if not for some of his later admirers), falsifiability was not a crude bludgeon.  Rather, it was the centerpiece of a richly-articulated worldview holding that millennia of human philosophical reflection had gotten it backwards: the question isn’t how to arrive at the Truth, but rather how to eliminate error.  Which sounds kind of obvious, until I meet yet another person who rails to me about how empirical positivism can’t provide its own ultimate justification, and should therefore be replaced by the person’s favorite brand of cringe-inducing ugh.

Oh, I also think Sean might have made a tactical error in choosing string theory and the multiverse as his examples for why falsifiability needs to be retired.  For it seems overwhelmingly likely to me that the following two propositions are both true:

1. Falsifiability is too crude of a concept to describe how science works.
2. In the specific cases of string theory and the multiverse, a dearth of novel falsifiable predictions really is a big problem.

As usual, the best bet is to use explanatory power as our criterion—in which case, I’d say string theory emerges as a complex and evolving story.  On one end, there are insights like holography and AdS/CFT, which seem clearly to do explanatory work, and which I’d guess will stand as permanent contributions to human knowledge, even if the whole foundations on which they currently rest get superseded by something else.
On the other end, there’s the idea, championed by a minority of string theorists and widely repeated in the press, that the anthropic principle applied to different patches of multiverse can be invoked as a sort of get-out-of-jail-free card, to rescue a favored theory from earlier hopes of successful empirical predictions that then failed to pan out.  I wouldn’t know how to answer a layperson who asked why that wasn’t exactly the sort of thing Sir Karl was worried about, and for good reason. Finally, not that Edge asked me, but I’d say the whole notions of “determinism” and “indeterminism” in physics are past ready for retirement.  I can’t think of any work they do, that isn’t better done by predictability and unpredictability. Luke Muehlhauser interviews me about philosophical progress Saturday, December 14th, 2013 I’m shipping out today to sunny Rio de Janeiro, where I’ll be giving a weeklong course about BosonSampling, at the invitation of Ernesto Galvão.  Then it’s on to Pennsylvania (where I’ll celebrate Christmas Eve with old family friends), Israel (where I’ll drop off Dana and Lily with Dana’s family in Tel Aviv, then lecture at the Jerusalem Winter School in Theoretical Physics), Puerto Rico (where I’ll speak at the FQXi conference on Physics of Information), back to Israel, and then New York before returning to Boston at the beginning of February.  Given this travel schedule, it’s possible that blogging will be even lighter than usual for the next month and a half (or not—we’ll see). In the meantime, however, I’ve got the equivalent of at least five new blog posts to tide over Shtetl-Optimized fans.  Luke Muehlhauser, the Executive Director of the Machine Intelligence Research Institute (formerly the Singularity Institute for Artificial Intelligence), did an in-depth interview with me about “philosophical progress,” in which he prodded me to expand on certain comments in Why Philosophers Should Care About Computational Complexity and The Ghost in the Quantum Turing Machine.  Here are (abridged versions of) Luke’s five questions: 5. Which object-level thinking tactics … do you use in your own theoretical (especially philosophical) research?  Are there tactics you suspect might be helpful, which you haven’t yet used much yourself? For the answers—or at least my answers—click here! PS. In case you missed it before, Quantum Computing Since Democritus was chosen by Scientific American blogger Jennifer Ouellette (via the “Time Lord,” Sean Carroll) as the top physics book of 2013.  Woohoo!! The Ghost in the Quantum Turing Machine Saturday, June 15th, 2013 I’ve been traveling this past week (in Israel and the French Riviera), heavily distracted by real life from my blogging career.  But by popular request, let me now provide a link to my very first post-tenure publication: The Ghost in the Quantum Turing Machine. Here’s the abstract: In honor of Alan Turing’s hundredth birthday, I unwisely set out some thoughts about one of Turing’s obsessions throughout his life, the question of physics and free will. I focus relatively narrowly on a notion that I call “Knightian freedom”: a certain kind of in-principle physical unpredictability that goes beyond probabilistic unpredictability. Other, more metaphysical aspects of free will I regard as possibly outside the scope of science. I examine a viewpoint, suggested independently by Carl Hoefer, Cristi Stoica, and even Turing himself, that tries to find scope for “freedom” in the universe’s boundary conditions rather than in the dynamical laws. 
Taking this viewpoint seriously leads to many interesting conceptual problems. I investigate how far one can go toward solving those problems, and along the way, encounter (among other things) the No-Cloning Theorem, the measurement problem, decoherence, chaos, the arrow of time, the holographic principle, Newcomb’s paradox, Boltzmann brains, algorithmic information theory, and the Common Prior Assumption. I also compare the viewpoint explored here to the more radical speculations of Roger Penrose. The result of all this is an unusual perspective on time, quantum mechanics, and causation, of which I myself remain skeptical, but which has several appealing features. Among other things, it suggests interesting empirical questions in neuroscience, physics, and cosmology; and takes a millennia-old philosophical debate into some underexplored territory. See here (and also here) for interesting discussions over on Less Wrong.  I welcome further discussion in the comments section of this post, and will jump in myself after a few days to address questions (update: eh, already have).  There are three reasons for the self-imposed delay: first, general busyness.  Second, inspired by the McGeoch affair, I’m trying out a new experiment, in which I strive not to be on such an emotional hair-trigger about the comments people leave on my blog.  And third, based on past experience, I anticipate comments like the following: “Hey Scott, I didn’t have time to read this 85-page essay that you labored over for two years.  So, can you please just summarize your argument in the space of a blog comment?  Also, based on the other comments here, I have an objection that I’m sure never occurred to you.  Oh, wait, just now scanning the table of contents…” So, I decided to leave some time for people to RTFM (Read The Free-Will Manuscript) before I entered the fray. For now, just one remark: some people might wonder whether this essay marks a new “research direction” for me.  While it’s difficult to predict the future (even probabilistically :-) ), I can say that my own motivations were exactly the opposite: I wanted to set out my thoughts about various mammoth philosophical issues once and for all, so that then I could get back to complexity, quantum computing, and just general complaining about the state of the world. “Quantum Information and the Brain” Thursday, January 24th, 2013 A causality post, for no particular reason Friday, November 2nd, 2012 The following question emerged from a conversation with the machine learning theorist Pedro Domingos a month ago. Consider a hypothetical race of intelligent beings, the Armchairians, who never take any actions: never intervene in the world, never do controlled experiments, never try to build anything and see if it works.  The sole goal of the Armchairians is to observe the world around them and, crucially, to make accurate predictions about what’s going to happen next.  Would the Armchairians ever develop the notion of cause and effect?  Or would they be satisfied with the notion of statistical correlation?  Or is the question kind of silly, the answer depending entirely on what we mean by “developing the notion of cause and effect”?  Feel free to opine away in the comments section. 
Why Many-Worlds is not like Copernicanism
Saturday, August 18th, 2012

[Update (8/26): Inspired by the great responses to my last Physics StackExchange question, I just asked a new one---also about the possibilities for gravitational decoherence, but now focused on Gambini et al.'s "Montevideo interpretation" of quantum mechanics. Also, on a completely unrelated topic, my friend Jonah Sinick has created a memorial YouTube video for the great mathematician Bill Thurston, who sadly passed away last week.  Maybe I should cave in and set up a Twitter feed for this sort of thing...]

[Update (8/26): I've now posted what I see as one of the main physics questions in this discussion on Physics StackExchange: "Reversing gravitational decoherence."  Check it out, and help answer if you can!]

[Update (8/23): If you like this blog, and haven't yet read the comments on this post, you should probably do so!  To those who've complained about not enough meaty quantum debates on this blog lately, the comment section of this post is my answer.]

[Update: Argh!  For some bizarre reason, comments were turned off for this post.  They're on now.  Sorry about that.]

I’m in Anaheim, CA for a great conference celebrating the 80th birthday of the physicist Yakir Aharonov.  I’ll be happy to discuss the conference in the comments if people are interested.

In the meantime, though, since my flight here was delayed 4 hours, I decided to (1) pass the time, (2) distract myself from the inanities blaring on CNN at the airport gate, (3) honor Yakir’s half-century of work on the foundations of quantum mechanics, and (4) honor the commenters who wanted me to stop ranting and get back to quantum stuff, by sharing some thoughts about a topic that, unlike gun control or the Olympics, is completely uncontroversial: the Many-Worlds Interpretation of quantum mechanics.

Proponents of MWI, such as David Deutsch, often argue that MWI is a lot like Copernican astronomy: an exhilarating expansion in our picture of the universe, which follows straightforwardly from Occam’s Razor applied to certain observed facts (the motions of the planets in one case, the double-slit experiment in the other).  Yes, many holdouts stubbornly refuse to accept the new picture, but their skepticism says more about sociology than science.  If you want, you can describe all the quantum-mechanical experiments anyone has ever done, or will do for the foreseeable future, by treating “measurement” as an unanalyzed primitive and never invoking parallel universes.  But you can also describe all astronomical observations using a reference frame that places the earth at the center of the universe.  In both cases, say the MWIers, the problem with your choice is its unmotivated perversity: you mangle the theory’s mathematical simplicity, for no better reason than a narrow parochial urge to place yourself and your own experiences at the center of creation.  The observed motions of the planets clearly want a sun-centered model.  In the same way, Schrödinger’s equation clearly wants measurement to be just another special case of unitary evolution—one that happens to cause your own brain and measuring apparatus to get entangled with the system you’re measuring, thereby “splitting” the world into decoherent branches that will never again meet.  History has never been kind to people who put what they want over what the equations want, and it won’t be kind to the MWI-deniers either.
This is an important argument, which demands a response by anyone who isn’t 100% on-board with MWI.  Unlike some people, I happily accept this argument’s framing of the issue: no, MWI is not some crazy speculative idea that runs afoul of Occam’s razor.  On the contrary, MWI really is just the “obvious, straightforward” reading of quantum mechanics itself, if you take quantum mechanics literally as a description of the whole universe, and assume nothing new will ever be discovered that changes the picture. Nevertheless, I claim that the analogy between MWI and Copernican astronomy fails in two major respects. The first is simply that the inference, from interference experiments to the reality of many-worlds, strikes me as much more “brittle” than the inference from astronomical observations to the Copernican system, and in particular, too brittle to bear the weight that the MWIers place on it.  Once you know anything about the dynamics of the solar system, it’s hard to imagine what could possibly be discovered in the future, that would ever again make it reasonable to put the earth at the “center.”  By contrast, we do more-or-less know what could be discovered that would make it reasonable to privilege “our” world over the other MWI branches.  Namely, any kind of “dynamical collapse” process, any source of fundamentally-irreversible decoherence between the microscopic realm and that of experience, any physical account of the origin of the Born rule, would do the trick. Admittedly, like most quantum folks, I used to dismiss the notion of “dynamical collapse” as so contrived and ugly as not to be worth bothering with.  But while I remain unimpressed by the specific models on the table (like the GRW theory), I’m now agnostic about the possibility itself.  Yes, the linearity of quantum mechanics does indeed seem incredibly hard to tinker with.  But as Roger Penrose never tires of pointing out, there’s at least one phenomenon—gravity—that we understand how to combine with quantum-mechanical linearity only in various special cases (like 2+1 dimensions, or supersymmetric anti-deSitter space), and whose reconciliation with quantum mechanics seems to raise fundamental problems (i.e., what does it even mean to have a superposition over different causal structures, with different Hilbert spaces potentially associated to them?). To make the discussion more concrete, consider the proposed experiment of Bouwmeester et al., which seeks to test (loosely) whether one can have a coherent superposition over two states of the gravitational field that differ by a single Planck length or more.  This experiment hasn’t been done yet, but some people think it will become feasible within a decade or two.  Most likely it will just confirm quantum mechanics, like every previous attempt to test the theory for the last century.  But it’s not a given that it will; quantum mechanics has really, truly never been tested in this regime.  So suppose the interference pattern isn’t seen.  Then poof!  The whole vast ensemble of parallel universes spoken about by the MWI folks would have disappeared with a single experiment.  In the case of Copernicanism, I can’t think of any analogous hypothetical discovery with even a shred of plausibility: maybe a vector field that pervades the universe but whose unique source was the earth?  
So, this is what I mean in saying that the inference from existing QM experiments to parallel worlds seems too “brittle.” As you might remember, I wagered $100,000 that scalable quantum computing will indeed turn out to be compatible with the laws of physics.  Some people considered that foolhardy, and they might be right—but I think the evidence seems pretty compelling that quantum mechanics can be extrapolated at least that far.  (We can already make condensed-matter states involving entanglement among millions of particles; for that to be possible but not quantum computing would seem to require a nasty conspiracy.)  On the other hand, when it comes to extending quantum-mechanical linearity all the way up to the scale of everyday life, or to the gravitational metric of the entire universe—as is needed for MWI—even my nerve falters.  Maybe quantum mechanics does go that far up; or maybe, as has happened several times in physics when exploring a new scale, we have something profoundly new to learn.  I wouldn’t give much more informative odds than 50/50. The second way I’d say the MWI/Copernicus analogy breaks down arises from a closer examination of one of the MWIers’ favorite notions: that of “parochial-ness.”  Why, exactly, do people say that putting the earth at the center of creation is “parochial”—given that relativity assures us that we can put it there, if we want, with perfect mathematical consistency?  I think the answer is: because once you understand the Copernican system, it’s obvious that the only thing that could possibly make it natural to place the earth at the center, is the accident of happening to live on the earth.  If you could fly a spaceship far above the plane of the solar system, and watch the tiny earth circling the sun alongside Mercury, Venus, and the sun’s other tiny satellites, the geocentric theory would seem as arbitrary to you as holding Cheez-Its to be the sole aim and purpose of human civilization.  Now, as a practical matter, you’ll probably never fly that spaceship beyond the solar system.  But that’s irrelevant: firstly, because you can very easily imagine flying the spaceship, and secondly, because there’s no in-principle obstacle to your descendants doing it for real. Now let’s compare to the situation with MWI.  Consider the belief that “our” universe is more real than all the other MWI branches.  If you want to describe that belief as “parochial,” then from which standpoint is it parochial?  The standpoint of some hypothetical godlike being who sees the entire wavefunction of the universe?  The problem is that, unlike with my solar system story, it’s not at all obvious that such an observer can even exist, or that the concept of such an observer makes sense.  You can’t “look in on the multiverse from the outside” in the same way you can look in on the solar system from the outside, without violating the quantum-mechanical linearity on which the multiverse picture depends in the first place. The closest you could come, probably, is to perform a Wigner’s friend experiment, wherein you’d verify via an interference experiment that some other person was placed into a superposition of two different brain states.  But I’m not willing to say with confidence that the Wigner’s friend experiment can even be done, in principle, on a conscious being: what if irreversible decoherence is somehow a necessary condition for consciousness?  
(We know that increase in entropy, of which decoherence is one example, seems intertwined with and possibly responsible for our subjective sense of the passage of time.)  In any case, it seems clear that we can’t talk about Wigner’s-friend-type experiments without also talking, at least implicitly, about consciousness and the mind/body problem, and that that fact ought to make us exceedingly reluctant to declare that the right answer is obvious and that anyone who doesn’t see it is an idiot.  In the case of Copernicanism, the “flying outside the solar system” thought experiment isn’t similarly entangled with any of the mysteries of personal identity.

There’s a reason why Nobel Prizes are regularly awarded for confirmations of effects that were predicted decades earlier by theorists, and that therefore surprised almost no one when they were finally found.  Were we smart enough, it’s possible that we could deduce almost everything interesting about the world a priori.  Alas, history has shown that we’re usually not smart enough: that even in theoretical physics, our tendencies to introduce hidden premises and to handwave across gaps in argument are so overwhelming that we rarely get far without constant sanity checks from nature.

I can’t think of any better summary of the empirical attitude than the famous comment by Donald Knuth: “Beware of bugs in the above code.  I’ve only proved it correct; I haven’t tried it.”  In the same way, I hereby declare myself ready to support MWI, but only with the following disclaimer: “Beware of bugs in my argument for parallel copies of myself.  I’ve only proved that they exist; I haven’t heard a thing from them.”
Schrödinger's equation — what is it?

Marianne Freiberger

Here is a typical textbook question. Your car has run out of petrol. With how much force do you need to push it to accelerate it to a given speed? The answer comes from Newton's second law of motion:

\[ F=ma, \]

where $a$ is acceleration, $F$ is force and $m$ is mass. This wonderfully straightforward, yet subtle law allows you to describe motion of all kinds and so it can, in theory at least, answer pretty much any question a physicist might want to ask about the world.

Schrödinger's equation is named after Erwin Schrödinger, 1887-1961.

Or can it? When people first started considering the world at the smallest scales, for example electrons orbiting the nucleus of an atom, they realised that things get very weird indeed and that Newton's laws no longer apply. To describe this tiny world you need quantum mechanics, a theory developed at the beginning of the twentieth century. The core equation of this theory, the analogue of Newton's second law, is called Schrödinger's equation.

Waves and particles

"In classical mechanics we describe a state of a physical system using position and momentum," explains Nazim Bouatta, a theoretical physicist at the University of Cambridge. For example, if you've got a table full of moving billiard balls and you know the position and the momentum (that's the mass times the velocity) of each ball at some time $t$, then you know all there is to know about the system at that time $t$: where everything is, where everything is going and how fast.

"The kind of question we then ask is: if we know the initial conditions of a system, that is, we know the system at time $t_0,$ what is the dynamical evolution of this system? And we use Newton's second law for that. In quantum mechanics we ask the same question, but the answer is tricky because position and momentum are no longer the right variables to describe [the system]."

The problem is that the objects quantum mechanics tries to describe don't always behave like tiny little billiard balls. Sometimes it is better to think of them as waves. "Take the example of light. Newton, apart from his work on gravity, was also interested in optics," says Bouatta. "According to Newton, light was described by particles. But then, after the work of many scientists, including the theoretical understanding provided by James Clerk Maxwell, we discovered that light was described by waves."

But in 1905 Einstein realised that the wave picture wasn't entirely correct either. To explain the photoelectric effect (see the Plus article Light's identity crisis) you need to think of a beam of light as a stream of particles, which Einstein dubbed photons. The number of photons is proportional to the intensity of the light, and the energy $E$ of each photon is proportional to its frequency $f$:

\[ E=hf. \]

Here $h=6.626068 \times 10^{-34} m^2kg/s$ is Planck's constant, an incredibly small number named after the physicist Max Planck who had already guessed this formula in 1900 in his work on black body radiation.

"So we were facing the situation that sometimes the correct way of describing light was as waves and sometimes it was as particles," says Bouatta.
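To get a feel for the sizes involved in the relation $E=hf$, here is a minimal Python sketch (not part of the original article; the choice of a 532 nm green laser and a 1 mW output power are illustrative assumptions) that computes the energy of a single photon and estimates how many photons such a laser emits each second.

```python
# Photon energy from Planck's relation E = h*f, with illustrative numbers.
h = 6.626068e-34          # Planck's constant in J*s
c = 2.99792458e8          # speed of light in m/s

wavelength = 532e-9       # assumed: green laser light, 532 nm
f = c / wavelength        # frequency from lambda = c/f
E = h * f                 # energy of one photon, in joules

laser_power = 1e-3        # assumed: a 1 mW laser pointer
photons_per_second = laser_power / E

print(f"frequency:          {f:.3e} Hz")
print(f"photon energy:      {E:.3e} J")
print(f"photons per second: {photons_per_second:.3e}")
```

The answer, on the order of $10^{15}$ photons per second, is one way to see why the granular, particle-like nature of light goes unnoticed in everyday life.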
The double slit experiment: The top picture shows the interference pattern created by waves passing through the slits, the middle picture shows what you'd expect to see when particles are fired through the slits, and the bottom picture shows what actually happens when you fire particles such as electrons through the slits: you get the interference pattern you expect from waves, but the electrons are registered as arriving as particles.

Einstein's result linked in with the age-old endeavour, started in the 17th century by Christiaan Huygens and explored again in the 19th century by William Hamilton: to unify the physics of optics (which was all about waves) and mechanics (which was all about particles). Inspired by the schizophrenic behaviour of light the young French physicist Louis de Broglie took a dramatic step in this journey: he postulated that not only light, but also matter suffered from the so-called wave-particle duality. The tiny building blocks of matter, such as electrons, also behave like particles in some situations and like waves in others.

De Broglie's idea, which he announced in the 1920s, wasn't based on experimental evidence; rather, it sprang from theoretical considerations inspired by Einstein's theory of relativity. But experimental evidence was soon to follow. In the late 1920s experiments involving particles scattering off a crystal confirmed the wave-like nature of electrons (see the Plus article Quantum uncertainty).

One of the most famous demonstrations of wave-particle duality is the double slit experiment. In it electrons (or other particles like photons or neutrons) are fired one at a time all over a screen containing two slits. Behind the screen there's a second one which can detect where the electrons that made it through the slits end up. If the electrons behaved like particles, then you would expect them to pile up around two straight lines behind the two slits. But what you actually see on the detector screen is an interference pattern: the pattern you would get if the electrons were waves, each wave passing through both slits at once and then interfering with itself as it spreads out again on the other side. Yet on the detector screen, the electrons are registered as arriving just as you would expect: as particles. It's a very weird result indeed but one that has been replicated many times — we simply have to accept that this is the way the world works.

Schrödinger's equation

The radical new picture proposed by de Broglie required new physics. What does a wave associated to a particle look like mathematically? Einstein had already related the energy $E$ of a photon to the frequency $f$ of light, which in turn is related to the wavelength $\lambda $ by the formula $\lambda = c/f.$ Here $c$ is the speed of light. Using results from relativity theory it is also possible to relate the energy of a photon to its momentum. Putting all this together gives the relationship $\lambda =h/p$ between the photon's wavelength $\lambda $ and momentum $p$ ($h$ again is Planck's constant). (See Light's identity crisis for details.)

Following on from this, de Broglie postulated that the same relationship between wavelength and momentum should hold for any particle. At this point it's best to suspend your intuition about what it really means to say that a particle behaves like a wave (we'll have a look at that in the third article) and just follow through with the mathematics.
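Before doing so, it's worth plugging rough numbers into de Broglie's relation $\lambda = h/p$. The short Python sketch below is an illustration added here, not taken from the article; the electron speed and the tennis-ball mass and speed are assumed values chosen only to contrast the two scales.

```python
# de Broglie wavelength lambda = h / p = h / (m*v), non-relativistic estimate.
h = 6.626068e-34           # Planck's constant in J*s

m_electron = 9.109e-31     # electron mass in kg
v_electron = 2.0e6         # assumed speed, roughly 1% of the speed of light
lambda_electron = h / (m_electron * v_electron)

m_ball = 0.058             # assumed: a 58 g tennis ball
v_ball = 50.0              # assumed: a 50 m/s serve
lambda_ball = h / (m_ball * v_ball)

print(f"electron wavelength:    {lambda_electron:.3e} m")   # ~ 3.6e-10 m, atomic scale
print(f"tennis ball wavelength: {lambda_ball:.3e} m")       # ~ 2.3e-34 m, negligible
```

The electron's wavelength comes out comparable to the spacing between atoms in a crystal, which is why the scattering experiments mentioned above could reveal its wave nature, while the tennis ball's wavelength is so absurdly small that its wave character can never be observed.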
In classical mechanics the evolution over time of a wave, for example a sound wave or a water wave, is described by a wave equation: a differential equation whose solution is a wave function, which gives you the shape of the wave at any time $t$ (subject to suitable boundary conditions).

For example, suppose you have waves travelling through a string that is stretched out along the $x$-axis and vibrates in the $xy$-plane. In order to describe the wave completely, you need to find the displacement $y(x,t)$ of the string in the $y$-direction at every point $x$ and every time $t$. Using Newton's second law of motion it is possible to show that $y(x,t)$ obeys the following wave equation:

\[ \frac{\partial ^2y}{\partial x^2} = \frac{1}{v^2} \frac{\partial ^2 y}{\partial t^2}, \]

where $v$ is the speed of the waves.

A snapshot in time of a string vibrating in the xy-plane. The wave shown here is described by the cosine function.

A general solution $y(x,t)$ to this equation is quite complicated, reflecting the fact that the string can be wiggling around in all sorts of ways, and that you need more information (initial conditions and boundary conditions) to find out exactly what kind of motion it is. But as an example, the function

\[ y(x,t)=A \cos {\omega (t-\frac{x}{v})} \]

describes a wave travelling in the positive $x$-direction with an angular frequency $\omega $, so as you would expect, it is a possible solution to the wave equation.

By analogy, there should be a wave equation governing the evolution of the mysterious "matter waves", whatever they may be, over time. Its solution would be a wave function $\Psi $ (but resist thinking of it as describing an actual wave) which tells you all there is to know about your quantum system — for example a single particle moving around in a box — at any time $t$.

It was the Austrian physicist Erwin Schrödinger who came up with this equation in 1926. For a single particle moving around in three dimensions the equation can be written as

\[ \frac{ih}{2\pi } \frac{\partial \Psi }{\partial t} = -\frac{h^2}{8 \pi ^2 m} \left(\frac{\partial ^2 \Psi }{\partial x^2} + \frac{\partial ^2 \Psi }{\partial y^2} + \frac{\partial ^2 \Psi }{\partial z^2}\right) + V\Psi . \]

Here $V$ is the potential energy of the particle (a function of $x$, $y$, $z$ and $t$), $i=\sqrt {-1},$ $m$ is the mass of the particle and $h$ is Planck's constant. The solution to this equation is the wave function $\Psi (x,y,z,t).$

In some situations the potential energy does not depend on time $t.$ In this case we can often solve the problem by considering the simpler time-independent version of the Schrödinger equation for a function $\psi $ depending only on space, i.e. $\psi =\psi (x,y,z):$

\[ \frac{\partial ^2 \psi }{\partial x^2} + \frac{\partial ^2 \psi }{\partial y^2} + \frac{\partial ^2 \psi }{\partial z^2} + \frac{8 \pi ^2 m}{h^2}(E-V)\psi = 0, \]

where $E$ is the total energy of the particle. The solution $\Psi $ to the full equation is then

\[ \Psi = \psi e^{-(2 \pi i E/h)t}. \]

These equations apply to one particle moving in three dimensions, but they have counterparts describing a system with any number of particles. And rather than formulating the wave function as a function of position and time, you can also formulate it as a function of momentum and time.
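The time-independent equation can also be attacked numerically. The Python sketch below is an illustration added here, not code from the article: it discretises the one-dimensional version of the equation for a particle trapped in a box with impenetrable walls (so $V=0$ inside and $\psi =0$ at the walls), using a simple finite-difference approximation of the second derivative, and compares the lowest energies it finds with the analytic formula $E_n = n^2 h^2/(8 m L^2)$. The electron mass and the nanometre-wide box are assumed example values.

```python
import numpy as np

# Constants (SI units)
h = 6.626068e-34          # Planck's constant
hbar = h / (2 * np.pi)
m = 9.109e-31             # electron mass, used as an example particle
L = 1e-9                  # assumed box width: 1 nanometre

# Grid over the box; the wave function is forced to zero at the walls.
N = 1000                            # number of interior grid points
x = np.linspace(0.0, L, N + 2)      # includes the walls at x = 0 and x = L
dx = x[1] - x[0]

# Finite differences turn -(hbar^2 / 2m) d^2 psi / dx^2 = E psi
# into an N x N matrix eigenvalue problem.
main = np.full(N, 2.0)
off = np.full(N - 1, -1.0)
H = (hbar**2 / (2.0 * m * dx**2)) * (
    np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
)

energies = np.linalg.eigvalsh(H)    # eigenvalues sorted from lowest to highest

# Compare with the analytic particle-in-a-box levels E_n = n^2 h^2 / (8 m L^2).
for n in range(1, 4):
    exact = n**2 * h**2 / (8.0 * m * L**2)
    print(f"n={n}: numerical {energies[n - 1]:.5e} J, exact {exact:.5e} J")
```

The point of the sketch is only that the differential equation plus boundary conditions already pins down a discrete set of allowed energies — the same kind of discreteness that shows up in the hydrogen spectrum discussed at the end of this article.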
Enter uncertainty

We'll see how to solve Schrödinger's equation for a simple example in the second article, and also that its solution is indeed similar to the mathematical equation that describes a wave. But what does this solution actually mean? It doesn't give you a precise location for your particle at a given time $t$, so it doesn't give you the trajectory of a particle over time. Rather it's a function which, at a given time $t,$ gives you a value $\Psi (x,y,z,t)$ for all possible locations $(x,y,z)$. What does this value mean? In 1926 the physicist Max Born came up with a probabilistic interpretation. He postulated that the square of the absolute value of the wave function,

\[ |\Psi (x,y,z,t)|^2 \]

gives you the probability density for finding the particle at position $(x,y,z)$ at time $t$. In other words, the probability that the particle will be found in a region $R$ at time $t$ is given by the integral

\[ \int _{R} |\Psi (x,y,z,t)|^2 dxdydz. \]

(You can find out more about probability densities in any introduction to probability theory, for example here.)

Werner Heisenberg, 1901-1976.

This probabilistic picture links in with a rather shocking consequence of de Broglie's formula for the wavelength and momentum of a particle, discovered by Werner Heisenberg in 1927. Heisenberg found that there is a fundamental limit to the precision to which you can measure the position and the momentum of a moving particle. The more precise you want to be about the one, the less you can say about the other. And this is not down to the quality of your measuring instrument; it is a fundamental uncertainty of nature. This result is now known as Heisenberg's uncertainty principle and it's one of the results that's often quoted to illustrate the weirdness of quantum mechanics. It means that in quantum mechanics we simply cannot talk about the location or the trajectory of a particle.

"If we believe in this uncertainty picture, then we have to accept a probabilistic account [of what is happening] because we don't have exact answers to questions like 'where is the electron at time $t_0$?'," says Bouatta. In other words, all you can expect from the mathematical representation of a quantum state, from the wave function, is that it gives you a probability.

Whether or not the wave function has any physical interpretation was and still is a touchy question. "The question was, we have this wave function, but are we really thinking that there are waves propagating in space and time?" says Bouatta. "De Broglie, Schrödinger and Einstein were trying to provide a realistic account, that it's like a light wave, for example, propagating in a vacuum. But [the physicists], Wolfgang Pauli, Werner Heisenberg and Niels Bohr were against this realistic picture. For them the wave function was only a tool for computing probabilities." We'll have a closer look at the interpretation of the wave function in the third article of this series.

Does it work?

Louis de Broglie, 1892-1987.

Why should we believe this rather fantastical set-up? In this article we have presented Schrödinger's equation as if it were plucked out of thin air, but where does it actually come from? How did Schrödinger derive it? The famous physicist Richard Feynman considered this a futile question: "Where did we get that [equation] from? It's not possible to derive it from anything you know. It came out of the mind of Schrödinger." Yet, the equation has held its own in every experiment so far.
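Returning for a moment to Born's probabilistic interpretation from the previous section, here is a small Python sketch (an illustration added here, not from the article) that takes the standard normalised ground-state wave function of a particle in a one-dimensional box of width $L$, $\psi(x) = \sqrt{2/L}\,\sin(\pi x/L)$, checks that the total probability over the box is 1, and integrates $|\psi(x)|^2$ over the middle third of the box.

```python
import numpy as np

L = 1.0                                  # box width; the probabilities don't depend on units
x = np.linspace(0.0, L, 200001)
dx = x[1] - x[0]

psi = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)   # ground-state wave function
density = np.abs(psi) ** 2                       # Born rule: |psi|^2 is a probability density

total = np.sum(density) * dx                     # simple numerical integral over the whole box
middle = (x >= L / 3.0) & (x <= 2.0 * L / 3.0)
p_middle_third = np.sum(density[middle]) * dx    # probability of finding the particle there

print(f"total probability over the box:  {total:.4f}")           # ~ 1.0000
print(f"probability in the middle third: {p_middle_third:.4f}")  # ~ 0.6090, not 1/3
```

So Born's rule turns the wave function into definite, testable numbers: for this state a position measurement finds the particle in the middle third of the box about 61% of the time, rather than the 33% a uniform distribution would give.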
"It's the most fundamental equation in quantum mechanics," says Bouatta. "It's the starting point for every quantum mechanical system we want to describe: electrons, protons, neutrons, whatever." The equation's earliest success, which was also one of Schrödinger's motivations, was to describe a phenomenon that had helped to give birth to quantum mechanics in the first place: the discrete energy spectrum of the hydrogen atom. According to Ernest Rutherford's atomic model, the frequency of radiation emitted by atoms such as hydrogen should vary continuously. Experiments showed, however, that it doesn't: the hydrogen atom only emits radiation at certain frequencies, there is a jump when the frequency changes. This discovery flew in the face of conventional wisdom, which endorsed a maxim set out by the 17th century philosopher and mathematician Gottfried Leibniz: "nature does not make jumps". In 1913 Niels Bohr came up with a new atomic model in which electrons are restricted to certain energy levels. Schrödinger applied his equation to the hydrogen atom and found that his solutions exactly reproduced the energy levels stipulated by Bohr. "This was an amazing result — and one of the first major achievement of Schrödinger's equation." says Bouatta. With countless experimental successes under its belt, Schrödinger's equation has become the established analogue of Newton's second law of motion for quantum mechanics. Now let's see Schrödinger's equation in action, using the simple example of a particle moving around in a box. We will also explore another weird consequence of the equation called quantum tunneling. Read the next article: Schrödinger's equation — in action But if you don't feel like doing the maths you can skip straight to the third article which explores the interpretation of the wave function. About this article Marianne Freiberger is Editor of Plus. She interviewed Bouatta in Cambridge in May 2012. She would also like to thank Jeremy Butterfield, a philosopher of physics at the University of Cambridge, and Tony Short, a Royal Society Research Fellow in Foundations of Quantum Physics at the University of Cambridge, for their help in writing these articles. Was just browsing the web looking for a quick blurb about schrodinger and found this. Very nice setup and nice flow. Reminded me of my college days. Thank you very much for this very nice illustration, developed step by step chronically. Reading this nice article, one can see how quantum mechanics evolves and understand it better. Thank you very much. This article has been extremely helpful with the readings of Einstein's essays. Thank you The article is really helpful !!!! A very concise article giving a big picture description of the basic tenets of QM. Thanks. Would have been more impactful if the article was written straight instead of writing quotes from the interview. Although I am a PhD student in physics and have been studying quantum physics for several years, this article gives me a better view and summary of quantum concepts. With special thanks