5cd6442138646729
Friday, 20 February 2015

Physical Quantum Mechanics 9: Big Lie More Convincing

In this sequence we argue that a second order real-valued form of Schrödinger's equation:
• $\ddot\psi + H^2\psi =0$       (1)
may be preferable to the standard first order complex-valued form:
• $i\dot\psi + H\psi =0$,            (2)
where $H$ is a Hamiltonian depending on a space variable $x$, the dot signifies differentiation with respect to time $t$, and $\psi =\psi (x,t)$ is a wave function.

This is because (1) can be given a physical interpretation as a force balance, while the interpretation of (2) has baffled physicists since it was introduced by Schrödinger in 1926. Formally, (2) appears as the "square-root" of (1), and it is not strange that if (1) has a physical meaning, then (2) as a "square-root" may lack physical meaning.

As to the non-physical aspect of (2): it was first formulated for the Hydrogen atom with one electron and $x$ a 3d space variable, and in the extension to an atom with $N>1$ electrons the space variable is expanded to $3N$ dimensions, with an independent 3d space variable for each electron. The standard Schrödinger wave function for an atom with $N$ electrons thus depends on $3N$ space variables, which makes direct physical interpretation impossible, and the only interpretation that physicists could come up with was in terms of a probability distribution, without physical meaning. This made Schrödinger very unhappy, as well as Einstein.

But the newly born, so promising modern physics could not be allowed to die in its infancy, and so, following the strong leadership of Born-Bohr-Heisenberg, the non-physical aspect of the standard Schrödinger equation was turned from catastrophe into a virtue, as an expression of a deep mystical uncertain stochastic nature of atomistic physics beyond any form of human comprehension, yet discovered by clever physicists as something very new and modern and very Big.

In this process, the non-physical aspect of (2) was helpful: if (2) already for a Hydrogen atom with one 3d space variable was deeply mystical as a "square-root" without physical interpretation, expansion to non-physical multi-d $3N$ space variables was just an expansion of the mystery and as such could only be more functional, following a well-known device: The great masses (of physicists) will be more easily convinced by a Big Lie than a small one.

To the non-physical aspect of (2) could then be added non-computability, as an equation in $3N$ space dimensions requiring an impossible $googol = 10^{100}$ flops already for small $N$. But it did not matter that (2) was uncomputable, since (2) anyway was unphysical and as such of no scientific interest and value, although very Big.

On the other hand, sticking to physics with (1) as a physical force balance, an atom with $N>1$ electrons may naturally be described as a system of $N$ wave functions, each one depending on a 3d space variable, which can be given a direct physical meaning including extensions to radiation, and is computable as a system in 3d.

One may compare with another Big Lie, that of dangerous global warming by back radiation evidenced by a pyrgeometer from human emission of CO2, which is threatening to send Western civilization back to the stone age. Physicists in charge of the basic physics of global climate, including radiative heat transfer in the atmosphere, do not tell the truth to politicians and the people. One Big Lie thus appears to be compatible with another Big Lie and even to demand it.
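To make the computability claim concrete, here is a rough back-of-the-envelope sketch (my own illustration, not from the original post), assuming a modest 100 grid points per spatial dimension:

# Illustrative estimate: degrees of freedom for a wave function psi(x_1, ..., x_N)
# discretized on a grid in 3N dimensions, versus a system of N wave functions in 3d.

def grid_points(n_electrons: int, points_per_dim: int = 100) -> int:
    """Number of grid points for one wave function living in 3N dimensions."""
    return points_per_dim ** (3 * n_electrons)

GOOGOL = 10 ** 100

for n in (1, 2, 5, 10, 20):
    pts = grid_points(n)
    print(f"N = {n:2d}: about 10^{len(str(pts)) - 1} grid points; exceeds a googol: {pts > GOOGOL}")

# The alternative advocated in the post: N separate wave functions, each in 3d.
for n in (1, 2, 5, 10, 20):
    print(f"N = {n:2d}, system of N wave functions in 3d: {n * 100 ** 3:,} grid points")

With 100 points per dimension the multi-dimensional grid has $10^{6N}$ points, so it passes a googol already around $N = 17$, while the system of $N$ wave functions in 3d grows only linearly in $N$.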
The reckoning in the history of science to be written will be harsh, even if as of now nobody seems to care. Another thing is that questioning a Big Lie is not a small thing and may come at a big cost. But if Humpty Dumpty falls, then the Fall may be great.
b93f5e305d946503
Frank Wilczek [1.14.09]

The most exciting thing that can happen is when theoretical dreams that started as fantasies, as desires, become projects that people work hard to build. There is nothing like it; it is the ultimate tribute. At one moment you have just a glimmer of a thought and at another moment squiggles on paper. Then one day you walk into a laboratory and there are all these pipes, and liquid helium is flowing, and currents are coming in and out with complicated wiring, and somehow all this activity supposedly corresponds to those little thoughts that you had. When this happens, it's magic.

FRANK WILCZEK, a theoretical physicist at MIT and recipient of the Nobel Prize in Physics (2004), is known, among other things, for the discovery of asymptotic freedom, the development of quantum chromodynamics, the invention of axions, and the discovery and exploitation of new forms of quantum statistics (anyons). He is the author of The Lightness of Being: Mass, Ether, and the Unification of Forces.

[FRANK WILCZEK:] In retrospect, I realize now that having the Nobel Prize hovering out there but never quite arriving was a heavy psychological weight; it bore me down. It was a tremendous relief to get it. Fortunately, it turns out that getting it is fantastic fun (I didn't anticipate that): the whole bit: there are marvelous ceremonies in Sweden, it's a grand party, and it continues, and is still continuing. I've been going to big events several times a month. The most profound aspect of it, though, is that I've really felt from my colleagues something I didn't anticipate: an outpouring of genuine affection. It's not too strong to call it love. Not for me personally—but because our field, theoretical fundamental physics, gets recognition and attention. People appreciate what's been accomplished, and it comes across as recognition for an entire community and an attitude towards life that produced success. So I've been in a happy mood.

But that was a while ago, and the ceremonial business gets old after a while, and takes time. Such an abrupt change of life encourages thinking about the next stage. I was pleased when I developed a kind of three-point plan that gives me direction. Now I ask myself, when I'm doing something in my work: Is it relating to point one? Is it relating to point two? Is it relating to point three? If it's not relating to any of those, then I'm wasting my time.

Point one is in a sense the most straightforward. An undignified way to put it would be to say it's defending turf, or pissing on trees, but I won't say that: I'll say it's following up ideas that I've had in physics in the past that are reaching fruition. There are several that I'm very excited about now. The great machine at CERN, the LHC, is going to start operating in about a year. Ideas—about unification and supersymmetry and producing Higgs particles—that I had a big hand in developing 20-30 years ago, are finally going to be tested. Of course, if they're correct that'll be a major advance in our understanding of the world, and very gratifying to me personally.

Then there's the area of exotic behavior of electrons at low temperature, so-called anyons, which is a little more technical. It was thought for a long time that all particles were either bosons or fermions.
In the early 80s, I realized there were other possibilities, and it turns out that there are materials in which these other possibilities can be realized, where the electrons organize themselves into collective states that have different properties from individual electrons and actually do obey the peculiar new rules, and are anyons. This is leading to qualitatively new possibilities for electronics. I call it anyonics. Recently, advanced anyonics has been notionally bootstrapped into a strategy for building quantum computers that might even turn out to be successful. In any case, whether it's successful or not, the vision of anyonics—this new form of electronics—has inspired a lot of funding and experimentalists are getting into the game. Here similarly, there are kinds of experiments that have been in my head for 20 years but are very difficult, for which people needed motivation and money, that are now going to be done. It's a lot of fun to be involved in something that might actually have practical consequences and might even change the world.

This stuff also, in a way, brings me back to my childhood because when I was growing up, my father was an electrical engineer and was taking home circuit diagrams, and I really admired these things. Now I get to think about making fundamentally new kinds of circuits, and it's very cool. I really like the mixture of abstract and concrete.

At a deeper level, what excites me about quantum computing and this whole subject of quantum information processing is that it touches such fundamental questions that potentially it could lead to qualitatively new kinds of intelligences. It's notorious that human beings have a hard time understanding quantum mechanics; it's hard for humans to relate to its basic notions of superpositions of states—that you can have Schrödinger's cat that's both dead and alive—that are not in our experience. But an intelligence based on quantum computers—quantum mechanical thinking—from the start would have that in its bones, so to speak, or in its circuits. That would be its primary way of thinking. It's quite challenging but fascinating to try to put yourself in the other guy's shoes, when that guy has a fundamentally different kind of mind, a quantum mind.

It's almost an embarrassment of riches, but some of the ideas I had about axions turn out to go together very, very well with inflationary cosmology, and to get new pictures for what the dark matter might be. It ties into questions about anthropic reasoning, because with axions you get really different amounts of dark matter in different parts of the multiverse. The amounts of dark matter would be different elsewhere, and the only way to argue about how much dark matter there should be turns out to be anthropic: if you have too much dark matter, life as we know it couldn't arise. There's a lot of stuff in physics that I really feel I have to keep track of, and do justice to. That's point one.

The second point is another way of having fun: looking for outlets, cultivating a public, not just thinking about science all the time. I'm in the midst of writing a mystery novel that combines physics with music, philosophy, sex, the rule that only three people at most can share a Nobel Prize—and murder (or was it suicide?). When a four-person MIT-Harvard collaboration makes a great discovery in physics (they figure out what the dark matter is), somebody's got to go.
That project, and I hope other subsequent projects, will be outlets in reaching out to the public and bringing in all of life and just having fun.

The third point is what I like to call the Odysseus project. I'm a great fan of Odysseus, the wanderer who had adventures and was very clever. I really want to do more great work—not following up what I did before, but doing essentially different things. I got into theoretical physics almost by accident; when I was an undergraduate, I had intended to study neurobiology and how minds work. But it became clear to me rather quickly, at Chicago, that that subject at that time wasn't ripe for the kind of mathematical analytical approach that I really like and get excited about, and am good at. I switched and majored in mathematics and eventually wound up in physics. But I've always maintained that interest, and in the meantime the tools available for addressing those questions have improved exponentially. Both in terms of studying the brain itself—imaging techniques and genetic techniques and a variety of others—but also the inspiring model of computation. The explosion of computational ability and understanding of computer science and networks is a rich source of metaphors and possible ways of thinking about the nature of intelligence and how the brain works. That's a direction I really want to explore more deeply. I've been reading a lot; I don't know exactly what I want to do, but I have been nosing out what's possible and what's available. I think it's a capital mistake, as Sherlock Holmes said, to start theorizing before you have the data. So I'm gathering the data.

Quantum Computers and Anyons

Quantum computing is an inspiring vision, but at present it's not clear what the technical means to carry it off are. There is a variety of proposals. It's not clear which is the best, or if any of them is practical. Let me backtrack a little bit, though, because even before you get to a full-scale quantum computer, there are information processing tasks for which quantum mechanics could be useful with much less than a full-scale quantum computer. A full-scale quantum computer is extremely demanding: you have to build various kinds of gates, you have to connect them in complicated ways, you have to do error correction—it's very complicated. That's sort of like envisioning a supersonic aircraft when you're at the stage of the Wright brothers.

However, there are applications that I think are almost in hand. The most mature is for a kind of cryptography: you can exploit the fact that quantum mechanics has this phenomenon that's roughly called 'collapse of the wave function'—I don't like it—I don't think that's a really good way to talk about it—but for better or worse, that's the standard terminology. Which in this case means that if you send a message that's essentially quantum mechanical—in terms of the direction of spins of photons, for instance—then you can send photons one by one with different spins and encode information that way. If someone eavesdrops on this, you can tell, because the act of observation necessarily disturbs the information you're sending. So that's very useful. If you want to transmit messages and make sure that they haven't been eavesdropped on, you can have that guaranteed by the laws of physics. If somebody eavesdrops, you'll be able to tell. You can't prevent it, necessarily, but you can tell. If you do things right, the probability of anyone being able to eavesdrop successfully can be made negligibly small.
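The eavesdropping-detection argument can be made concrete with a toy simulation in the spirit of the BB84 protocol (my own sketch, not part of the interview; the protocol name, function names and parameters are mine, and "bases" here are just two abstract labels rather than real photon polarizations):

import random

def run_qkd(n_photons: int, eavesdrop: bool) -> float:
    """Toy intercept-and-resend experiment: returns the error rate that the
    sender and receiver observe on photons where they happened to use the same basis."""
    errors = matched = 0
    for _ in range(n_photons):
        bit = random.randint(0, 1)
        basis_a = random.choice("+x")          # sender's encoding basis
        state = (basis_a, bit)

        if eavesdrop:                          # eavesdropper measures and resends
            basis_e = random.choice("+x")
            if basis_e != state[0]:            # wrong basis: her result is random
                state = (basis_e, random.randint(0, 1))

        basis_b = random.choice("+x")          # receiver's measurement basis
        if basis_b == state[0]:
            result = state[1]                  # same basis as the prepared state: deterministic
        else:
            result = random.randint(0, 1)      # mismatched basis: random outcome

        if basis_b == basis_a:                 # only these are kept after basis comparison
            matched += 1
            errors += (result != bit)
    return errors / matched

print("error rate, no eavesdropper:  ", run_qkd(100_000, eavesdrop=False))
print("error rate, with eavesdropper:", run_qkd(100_000, eavesdrop=True))

Without the eavesdropper the matched-basis error rate is essentially zero; with an intercept-and-resend eavesdropper it jumps to about 25%, which is the disturbance that lets the legitimate parties detect the intrusion.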
So that's a valuable application that's almost tangible. People are beginning to try to commercialize that kind of idea. I think in the long run the killer application of quantum computers will be doing quantum mechanics. Doing chemistry by numbers, designing molecules, designing materials by calculation. A capable quantum computer would let chemists and materials scientists work at another level, because instead of having to mix up the stuff and watch what happens, you can just compute. We know exactly what the equations are that govern the behavior of nuclei and electrons and the things that make up atoms and molecules. So in principle, it's a solved problem to figure out chemistry: just compute. We don't know all the laws of physics, but it's essentially certain that we know the adequate laws of physics with sufficient accuracy to design molecules and to predict their properties with confidence. But our practical ability to solve the equations is limited. The equations live in big multi-dimensional spaces, and they have a complicated structure and, to make a long story short, we can't solve any but very simple problems. With a quantum computer we'll be able to do much better.

As I sort of alluded to earlier, it's not decided yet what the best long-term strategy is for achieving powerful quantum computers. People are doing simulations and building little prototypes. There are different strategies being pursued based on nuclear spins, electron spins, trapped atoms, anyons. I am very fond of anyons because I worked at the beginning on the fundamental physics involved. It was thought, until the late 70s and early 80s, that all fundamental particles, or all quantum mechanical objects that you could regard as discrete entities, fell into two classes: so-called bosons, after the Indian physicist Bose, and fermions, after Enrico Fermi. Bosons are particles such that if you take one around another, the quantum mechanical wave function doesn't change. Fermions are particles such that if you take one around another, the quantum mechanical wave function is multiplied by a minus sign. It was thought for a long time that those were the only consistent possibilities for behavior of quantum mechanical entities.

In the late 70s and early 80s, we realized that in two plus one dimensions, not in our everyday three dimensional space (plus one dimension for time), but in planar systems, there are other possibilities. In such systems, if you take one particle around another, you might get not a factor of one or minus one, but multiplication by a complex number—there are more general possibilities. More recently, the idea that when you move one particle around another, it's possible not only that the wave function gets multiplied by a number, but that it actually gets distorted and moves around in a bigger space, has generated a lot of excitement. Then you have this fantastic mapping from motion in real space, as you wind things around each other, to motion of the wave function in Hilbert space—in quantum mechanical space. It's that ability to navigate your way through Hilbert space that connects to quantum computing and gives you access to a gigantic space with potentially huge bandwidth that you can play around with in highly parallel ways, if you're clever about the things you do in real space. But with anyons we're really at the primitive stage. There's very little doubt that the theory is correct, but the experiments are at a fairly primitive stage—they're just breaking now.
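A minimal sketch (mine, not Wilczek's) of what the three cases mean for the wave function when one particle is taken around another, restricted to the simplest, abelian kind of anyon (the richer, non-abelian case described above, where the state moves around in a bigger space, would need a matrix acting on a multi-component wave function):

import cmath

def exchange_phase(statistics: str, theta: float = 0.0) -> complex:
    """Factor multiplying the two-particle wave function when one particle is
    taken around another (abelian statistics only)."""
    if statistics == "boson":
        return 1 + 0j                  # wave function unchanged
    if statistics == "fermion":
        return -1 + 0j                 # wave function picks up a minus sign
    if statistics == "anyon":
        return cmath.exp(1j * theta)   # any phase on the unit circle, possible in 2+1 dimensions
    raise ValueError(statistics)

psi = 0.6 + 0.8j  # some two-particle amplitude, |psi| = 1
for kind, theta in [("boson", 0.0), ("fermion", 0.0), ("anyon", cmath.pi / 3)]:
    print(f"{kind:8s} -> {psi * exchange_phase(kind, theta)}")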
Quantum Logic and Quantum Minds

Quantum mechanics is so profound that it genuinely changes the laws of logic. In classical logic a statement is either true or false; there's no real sense of in-between. But in quantum mechanics you can have statements or propositions encoded in wave functions that have different components, some of which are true, some of which are false. When you measure, the result is indeterminate. You don't know what you are going to get. You have states, meaningful states of computation, what you can think of as states of consciousness, that simultaneously contain contradictory ideas and can work with them simultaneously. I find that concept tremendously liberating and mind expanding. The classic structures of logic are really far from adequate to do justice to what we find in the physical world.

To do justice to the possible states, the possible conditions that just a few objects can be in, say, five spins, classically you would think you would have to say for each one whether it's up or down. At any one time they are in some particular configuration. In quantum mechanics, every single configuration—there are 32 of them, up or down for each spin—has some probability of existing. So, to do justice to the physical situation, instead of just saying that there is some configuration these objects are in, you have to specify, roughly, that there is a certain probability for each one, and those probabilities evolve. But that verbal description is too rough, because what's involved is not probabilities, it's something called amplitudes. The difference is profound. Whereas probabilities have a kind of independence, with amplitudes the different configurations can interact with one another. There are different states which are all part of the physical reality, and they are interacting with each other. Classically they would be different things that couldn't happen simultaneously. In quantum theory they coexist and interact with one another.

That also goes to this issue of logic that I mentioned before. One way of representing true or false that is famously used in computers is, you have true as one and false as zero, spin up is true, spin down is false. In quantum theory the true statement and the false statement can interact with each other, and you can do useful computations by having simultaneous propositions that contradict each other, sort of interacting with each other, working in creative tension. I just love that idea. I love the idea of opposites coexisting and working with one another. Come to think of it, it's kind of sexy.

More on Quantum Computers

Realizing this vision will be a vast enterprise. It's hard to know how long it's going to take to get something useful, let alone something that is competitive with the kind of computing we already have developed, which is already very powerful and keeps improving, let alone create new minds that are different from and more powerful than the kind of minds we're familiar with. We'll need to make progress on several fronts. You can set aside the question of engineering, if you like, and ask: Suppose I had a big quantum computer, what would I do with it, how would I program it, what kind of tasks could it accomplish? That is a mathematical investigation. You abstract the physical realization away. Then it becomes a question for mathematicians, and even philosophers have got involved in it. Then there is the other big question: how do I build it? How do I build it in practice? That's a question very much for physicists.
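Here is a small numerical illustration (my own, not from the interview) of the difference between adding probabilities and adding amplitudes, plus the 32 complex amplitudes needed to describe five spins:

import numpy as np

# Two indistinguishable "paths" to the same measurement outcome.
amp_path_1 = 1 / np.sqrt(2)           # amplitude via path 1
amp_path_2 = -1 / np.sqrt(2)          # amplitude via path 2 (opposite sign)

# Classical reasoning: probabilities of the two paths simply add.
p_classical = abs(amp_path_1) ** 2 + abs(amp_path_2) ** 2
# Quantum reasoning: amplitudes add first, then get squared.
p_quantum = abs(amp_path_1 + amp_path_2) ** 2

print("classical (probabilities add):", p_classical)   # 1.0
print("quantum (amplitudes interfere):", p_quantum)    # 0.0, destructive interference

# Five spins: 2**5 = 32 configurations, each carrying its own complex amplitude.
rng = np.random.default_rng(0)
state = rng.normal(size=32) + 1j * rng.normal(size=32)
state /= np.linalg.norm(state)                   # normalize the state vector
probabilities = np.abs(state) ** 2               # what a measurement samples from
print("configurations:", state.size, " total probability:", probabilities.sum())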
In fact, there is no winning design yet. People have struggled to make even very small prototypes. My intuition, though, is that when there is a really good idea, progress could be very rapid. That is what I am hoping for and going after. I have glimmers of how it might be done, based on anyons. I've been thinking about this sort of thing on and off for a long time. I pioneered some of the physics, but other theorists including Alexei Kitaev and my former student Chetan Nayak have taken things to another level. There's now a whole field called "topological quantum computing" with its own literature, and conferences, and it's moving fast. What has changed is that now a lot of people, and in particular experimentalists, have taken it up.

Methods and Styles in Physics

The great art of theoretical physics is the revelation of surprising things about reality. Historically there have been many approaches to that art, which have succeeded in different ways. In the early days of physics, people like Galileo and Newton were very close to the data and stressed that they were trying to put observed behavior into mathematical terms. They developed some powerful abstract concepts, but by today's standards those concepts were down-to-earth; they were always in terms of things that you could touch and feel, or at least see through telescopes. That approach very much dominated physics, at least through the 19th century. Maxwell's great synthesis of electricity and magnetism and optics, leading to the understanding that light was a form of electricity and magnetism and predicting new kinds of light that we call radio and microwaves and so forth—that came from a very systematic review of all that was known about electricity and magnetism experimentally, trying to put it into equations, noticing an inconsistency and fixing it up. That's the kind of classic approach.

In the 20th century, some of the most successful enterprises have looked rather different. Without going into the details it's hard to do justice to all the subtleties, but it's clear that theories like special relativity—especially general relativity—were based on much larger conceptual leaps. In constructing special relativity, Einstein abstracted just two very broad regularities about the physical world: namely, that the laws of physics should look the same if you're moving at a constant velocity, and that the speed of light should be a universal constant. This wasn't based on a broad survey of a lot of detailed experimental facts and putting them together; it was selecting a few very key facts and exploiting them conceptually for all they're worth. General relativity even more so: it was trying to make the theory of gravity consistent with the insights of special relativity. This was a very theoretical enterprise, not driven by any specific experimental facts*, but it led to a theory that changed our notions of space and time, did lead to experimental predictions, and to many surprises. (*Actually, there was a big "coincidence" that Newtonian gravity left unexplained, the equality of inertial and gravitational mass, which was an important guiding clue.)

The Dirac equation is a more complicated case. Dirac was moved by broad theoretical imperatives; he wanted to make the existing equation for quantum mechanical behavior of electrons—that's the Schrödinger equation—consistent with special relativity.
To do that, he invented a new equation—the Dirac equation—that seemed very strange and problematic, yet undeniably beautiful, when he first found it. That strange equation turned out to require vastly new interpretations of all the symbols in it, that weren't anticipated. It led to the prediction of antimatter and the beginnings of quantum field theory. This was another revolution that was, in a sense, conceptually driven. On the other hand, what gave Dirac and others confidence that his equation was on the right track was that it predicted corrections to the behavior of electrons in hydrogen atoms that were very specific, and that agreed with precision measurements. This support forced them to stick with it, and find an interpretation to let it be true! So there was important empirical guidance, and encouragement, from the start.

Our foundational work on QCD falls into the same pattern. We were led to specific equations by theoretical considerations, but the equations seemed problematic. They were full of particles that aren't observed (quarks and—especially—gluons), and didn't contain any of the particles that are observed! We persisted with them nevertheless, because they explained a few precision measurements, and that persistence eventually paid off.

In general, as physics has matured in the 20th century, we've realized more and more the power of mathematical considerations of consistency and symmetry to dictate the form of physical laws. We can do a lot with less experimental input. (Nevertheless the ultimate standard must be getting experimental output: illuminating reality.) How far can esthetics take you? Should you let that be your main guide, or should you try to assemble and do justice to a lot of specific facts? Different people have different styles; some people try to use a lot of facts and extrapolate a little bit; other people try not to use any facts at all and construct a theory that's so beautiful that it has to be right, and then fill in the facts later. I try to consider both possibilities, and see which one is fruitful.

What's been fruitful for me is to take salient experimental facts that are somehow striking, or that seem anomalous—don't really fit into our understanding of physics—and try to improve the equations to include just those facts. My reading of history is that even the greatest advances in physics, when you pick them apart, were always based on a firm empirical foundation and on straightening out some anomalies between the existing theoretical framework and some known facts about the world. Certainly QCD was that way: when we developed asymptotic freedom to explain some behaviors of quarks (the fact that they seem to not interact when they're close together, which seemed inconsistent with quantum field theory), we were able to push and find very specific quantum field theories in which that behavior was consistent, which essentially solved the problem of the strong interaction, and has had many fruitful consequences. Axions also—similar thing—a little anomaly— there's a quantity that happens to be very small in the world, but our theories don't explain why it's small; you can change the theories to make them a little more symmetrical—then we do get zero—but that has other consequences: the existence of these new particles rocks cosmology, and they might be the dark matter—I love that kind of thing.

String theory is sort of the extreme of non-empirical physics. In fact, its historical origins were based on empirical observations, but wrong ones.
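(A brief numerical aside on the asymptotic-freedom point above, before the string-theory thread continues below; my own sketch, not from the interview. It uses the textbook one-loop running of the strong coupling, under the simplifying assumption of a fixed five quark flavors and a reference value of alpha_s near 0.118 at the Z mass.)

import math

def alpha_s_one_loop(q_gev: float, alpha_ref: float = 0.118,
                     q_ref: float = 91.19, n_flavors: int = 5) -> float:
    """One-loop running of the QCD coupling, referenced at the Z mass.
    Illustrative only: a real analysis tracks flavor thresholds and higher loops."""
    b0 = 11 - 2 * n_flavors / 3
    return alpha_ref / (1 + alpha_ref * b0 / (2 * math.pi) * math.log(q_gev / q_ref))

for q in (2, 10, 91.19, 1000):
    print(f"Q = {q:7.2f} GeV  ->  alpha_s ~ {alpha_s_one_loop(q):.3f}")
# The coupling shrinks at large Q, i.e. at short distances: asymptotic freedom.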
String theory was originally based on trying to explain the nature of the strong interactions, the fact that hadrons come in big families, and the idea was that they could be modeled as different states of strings that are spinning around or vibrating in different ways. That idea was highly developed in the late 60s and early 70s, but we put it out of business with QCD, which is a very different theory that turns out to be the correct theory of the strong interaction. But the mathematics that was developed around that wrong idea, amazingly, turned out to contain, if you do things just right and tune it up, a description of general relativity that at the same time obeys quantum mechanics. This had been one of the great conceptual challenges of 20th century physics: to combine the two very different seeming kinds of theories—quantum mechanics, our crowning achievement in understanding the micro-world, and general relativity, which was abstracted from the behavior of space and time in the macro-world. Those theories are of a very different nature and, when you try to combine them, you find that it's very difficult to make an entirely consistent union of the two. But these evolved string theories seem to do that.

The problems that arise in making a quantum theory of gravity, unfortunately for theoretical physicists who want to focus on them, really only arise in thought experiments of a very high order—thought experiments involving particles of enormous energies, or the deep interior of black holes, or perhaps the earliest moments of the Big Bang that we don't understand very well. All very remote from any practical, do-able experiments. It's very hard to check the fundamental hypotheses of this kind of idea. The initial hope, when the so-called first string revolution occurred in the mid-1980s, was that when you actually solved the equations of string theory, you'd find a more or less unique solution, or maybe a handful of solutions, and it would be clear that one of them described the real world. From these highly conceptual considerations of what it takes to make a theory of quantum gravity, you would be led "by the way" to things that we can access and experiment on, and it would describe reality. But as time went on, people found more and more solutions with all kinds of different properties, and that hope—that indirectly, by addressing conceptual questions, you would be able to work your way down to a description of concrete things about reality—has gotten more and more tenuous. That's where it stands today.

My personal style in fundamental physics continues to be opportunistic: to look at the phenomena as they emerge and think about possibilities to beautify the equations that the equations themselves suggest. As I mentioned earlier, I certainly intend to push harder on ideas that I had a long time ago but that still seem promising and still haven't been exhausted, in supersymmetry and axions and even in additional applications of QCD. I'm also always trying to think of new things. For example, I've been thinking about the new possibilities for phenomena that might be associated with this Higgs particle that probably will be discovered at the LHC. I realized something I'd been aware of at some low level for a long time, but that I now see has profound implications, which is that the Higgs particle uniquely opens a window into phenomena that no other particle within the standard model would be sensitive to.
If you look at the mathematics of the standard model, you discover that there are possibilities for hidden sectors—things that would interact very weakly with the kind of particles we've had access to so far, but would interact powerfully with the Higgs particles. We'll be opening that window. Very recently I've been trying to see if we can get inflation out of the standard model, by having the Higgs particle interact in a slightly nonstandard way with gravity. That seems promising too. Most of my bright ideas will turn out to be wrong, but that's OK. I have fun, and my ego is secure.

On National Greatness

In 1993 the Congress of the United States canceled the SSC project, the Superconducting Super Collider, that was under construction near Waxahachie, Texas. Many years of planning and many careers had been invested in that project, and $2 billion had already been put into the construction. All that came out of it was a tunnel from nowhere to nothing. Now it's 2009, and a roughly equivalent machine, the Large Hadron Collider (LHC), is coming into operation at CERN near Geneva. The United States has some part in that. It has invested half a billion dollars out of the $15 billion total. But it's a machine that is in Europe, really built by the Europeans; there's no doubt that they have contributed much more. Of course, the information that comes out will be shared by the entire scientific community. So the end result, in terms of tangible knowledge, is the same. We avoided spending the extra money. Was that a clever thing to do? I don't think so. Even in the narrowest economic perspective, I think it wasn't a clever thing to do. Most of the work that went into this $15 billion was local, locally subcontracted within Europe. It went directly into the economies involved, and furthermore into dynamic sectors of the economy: high-tech industries involved in superconducting magnets, fancy cryogenic engineering and civil engineering of great sophistication, and of course computer technology. All that know-how is going to pay off much more than the investment in the long run.

But even if it weren't the case that purely economically it was a good thing to do, the United States missed an opportunity for national greatness. A hundred or two hundred years from now, people will largely have forgotten about the various spats we got into, the so-called national greatness of imposing our will on foreigners, and they will remember the glorious expansion of human knowledge that is going to happen at the LHC and the gigantic effort that went into getting it. As a nation we don't get many opportunities to show history our national greatness, and I think we really missed one there. Maybe we can recoup.

The time is right for an assault on the process of aging. A lot of the basic biology is in place. We know what has to be done. The aging process itself is really the profound aspect of public health: eliminating major diseases, even big ones like cancer or heart disease, would only increase life expectancy by a few years. We really have to get to the root of the process.

Another project on a grand scale would be to search systematically for life in the galaxy. We have tools in astronomy, we can design tools, to find distant planets that might be Earth-like, study their atmospheres, and see if there is evidence for life. It would be feasible, given a national investment of will and money, to survey the galaxy and see if there are additional Earth-like planets that are supporting life.
We should think hard about doing things we will be proud to be remembered for, and think big.
3a58fd0a9fcffdd5
Is "Modern Physics" course hard?

#1 (Apr 15, 2005): Is "Modern Physics" course hard? It's a 3 credit course that deals with special relativity, atomic structure etc. Would it be impossible to study for this on my own to test out of it?

#2 (Apr 15, 2005): It's 3 credits, so they either zoom over everything, or they just barely get to it. Ask people who have taken it in your school, or the prof who teaches it.

#3 (Apr 15, 2005, Staff: Mentor): Is this a college/university or high school course? USA or UK or where? What are the prerequisites? At my college, "Introductory Modern Physics" comes after two semesters of "General Physics" and is normally taken by second-year (sophomore) physics majors. It assumes students have had two semesters of calculus out of a four-semester sequence. It covers relativity, photons, atomic structure, hydrogen-atom energy levels and spectra, and a taste of quantum mechanics (Schrödinger equation for the "particle in a box"). This year I ran out of time before getting to the hydrogen-atom quantum numbers, spin, etc. We'll do that next semester anyway. Depends on your background and how sharp you are, and on whether your school lets you test out of courses to begin with. I think most of our students find relativity and wave/particle stuff rather difficult conceptually, although the math isn't very heavy for them (at least not until we get to the Schrödinger equation).
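For a taste of the kind of calculation such a course ends with, here is a minimal sketch (my addition, not from the thread) of the particle-in-a-box energy levels $E_n = n^2\pi^2\hbar^2/(2mL^2)$, assuming an electron in a hypothetical 1 nm one-dimensional infinite well:

import math

H_BAR = 1.054_571_817e-34      # reduced Planck constant, J*s
M_ELECTRON = 9.109_383_7e-31   # electron mass, kg
EV = 1.602_176_634e-19         # joules per electronvolt

def box_energy(n: int, length_m: float) -> float:
    """Energy of level n for a particle in a 1-d infinite square well."""
    return (n * math.pi * H_BAR) ** 2 / (2 * M_ELECTRON * length_m ** 2)

L = 1e-9  # assumed 1 nm box
for n in range(1, 4):
    print(f"n = {n}: E = {box_energy(n, L) / EV:.3f} eV")

The ground state comes out around 0.38 eV, and the levels grow as n squared, which is the conceptual point the course is after.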
afe96310e8927fc0
Laguerre polynomials

From Wikipedia, the free encyclopedia

In mathematics, the Laguerre polynomials, named after Edmond Laguerre (1834 - 1886), are solutions of Laguerre's equation:
$$x\,y'' + (1 - x)\,y' + n\,y = 0,$$
which is a second-order linear differential equation. This equation has nonsingular solutions only if n is a non-negative integer. More generally, the name Laguerre polynomials is used for solutions of
$$x\,y'' + (\alpha + 1 - x)\,y' + n\,y = 0.$$
Then they are also named generalized Laguerre polynomials, as will be done here (alternatively associated Laguerre polynomials or, rarely, Sonin polynomials, after their inventor[1] Nikolay Yakovlevich Sonin). The Laguerre polynomials are also used for Gaussian quadrature to numerically compute integrals of the form
$$\int_0^{\infty} f(x)\, e^{-x}\, dx.$$
These polynomials, usually denoted L0, L1, ..., are a polynomial sequence which may be defined by the Rodrigues formula
$$L_n(x) = \frac{e^x}{n!}\, \frac{d^n}{dx^n}\!\left(e^{-x} x^n\right),$$
reducing to the closed form of a following section. They are orthogonal polynomials with respect to an inner product
$$\langle f, g\rangle = \int_0^{\infty} f(x)\, g(x)\, e^{-x}\, dx.$$
The sequence of Laguerre polynomials n! Ln is a Sheffer sequence. The Rook polynomials in combinatorics are more or less the same as Laguerre polynomials, up to elementary changes of variables. Further see the Tricomi–Carlitz polynomials.

The Laguerre polynomials arise in quantum mechanics, in the radial part of the solution of the Schrödinger equation for a one-electron atom. They also describe the static Wigner functions of oscillator systems in quantum mechanics in phase space. They further enter in the quantum mechanics of the Morse potential and of the 3D isotropic harmonic oscillator. Physicists sometimes use a definition for the Laguerre polynomials which is larger by a factor of n! than the definition used here. (Likewise, some physicists may use somewhat different definitions of the so-called associated Laguerre polynomials.)

The first few polynomials

These are the first few Laguerre polynomials:
$$L_0(x) = 1,\qquad L_1(x) = 1 - x,\qquad L_2(x) = \tfrac{1}{2}\left(x^2 - 4x + 2\right),\qquad L_3(x) = \tfrac{1}{6}\left(-x^3 + 9x^2 - 18x + 6\right).$$
(Plot: The first six Laguerre polynomials.)

Recursive definition, closed form, and generating function

One can also define the Laguerre polynomials recursively, defining the first two polynomials as
$$L_0(x) = 1, \qquad L_1(x) = 1 - x,$$
and then using the following recurrence relation for any k ≥ 1:
$$L_{k+1}(x) = \frac{(2k + 1 - x)\,L_k(x) - k\,L_{k-1}(x)}{k + 1}.$$
The closed form is
$$L_n(x) = \sum_{k=0}^{n} \binom{n}{k}\, \frac{(-1)^k}{k!}\, x^k.$$
The generating function for them likewise follows,
$$\sum_{n=0}^{\infty} t^n L_n(x) = \frac{1}{1 - t}\, e^{-xt/(1 - t)}.$$
Polynomials of negative index can be expressed using the ones with positive index:
$$L_{-n}(x) = e^{x}\, L_{n-1}(-x).$$

Generalized Laguerre polynomials

For arbitrary real α the polynomial solutions of the differential equation[2]
$$x\,y'' + (\alpha + 1 - x)\,y' + n\,y = 0$$
are called generalized Laguerre polynomials, or associated Laguerre polynomials. One can also define the generalized Laguerre polynomials recursively, defining the first two polynomials as
$$L_0^{(\alpha)}(x) = 1, \qquad L_1^{(\alpha)}(x) = 1 + \alpha - x.$$
The simple Laguerre polynomials are the special case α = 0 of the generalized Laguerre polynomials:
$$L_n^{(0)}(x) = L_n(x).$$
The Rodrigues formula for them is
$$L_n^{(\alpha)}(x) = \frac{x^{-\alpha} e^{x}}{n!}\, \frac{d^n}{dx^n}\!\left(e^{-x} x^{n+\alpha}\right).$$
The generating function for them is
$$\sum_{n=0}^{\infty} t^n L_n^{(\alpha)}(x) = \frac{1}{(1 - t)^{\alpha+1}}\, e^{-xt/(1 - t)}.$$
(Plot: The first few generalized Laguerre polynomials, L_n^{(k)}(x).)

Explicit examples and properties of the generalized Laguerre polynomials

• Laguerre functions are defined by confluent hypergeometric functions as
$$L_n^{(\alpha)}(x) = \binom{n + \alpha}{n}\, M(-n,\, \alpha + 1,\, x),$$
where $\binom{n+\alpha}{n}$ is a generalized binomial coefficient. When n is an integer the function reduces to a polynomial of degree n. It has the alternative expression[4]
$$L_n^{(\alpha)}(x) = \frac{(-1)^n}{n!}\, U(-n,\, \alpha + 1,\, x)$$
in terms of Kummer's function of the second kind.
• The closed form for these generalized Laguerre polynomials of degree n is[5]
$$L_n^{(\alpha)}(x) = \sum_{i=0}^{n} (-1)^i\, \binom{n + \alpha}{n - i}\, \frac{x^i}{i!},$$
derived by applying Leibniz's theorem for differentiation of a product to Rodrigues' formula.
• The first few generalized Laguerre polynomials are
$$L_0^{(\alpha)}(x) = 1,\qquad L_1^{(\alpha)}(x) = -x + \alpha + 1,\qquad L_2^{(\alpha)}(x) = \frac{x^2}{2} - (\alpha + 2)\,x + \frac{(\alpha+1)(\alpha+2)}{2}.$$
• If α is non-negative, then $L_n^{(\alpha)}$ has n real, strictly positive roots (notice that the sequence $\left(L_k^{(\alpha)}\right)_k$ is a Sturm chain), which all lie in a bounded interval.
• The polynomials' asymptotic behaviour for large n, but fixed α and x > 0, can be expressed in terms of the Bessel function.[6][7]

As a contour integral

Given the generating function specified above, the polynomials may be expressed in terms of a contour integral
$$L_n^{(\alpha)}(x) = \frac{1}{2\pi i} \oint \frac{e^{-xt/(1-t)}}{(1-t)^{\alpha+1}\, t^{n+1}}\, dt,$$
where the contour circles the origin once in a counterclockwise direction.

Recurrence relations

The addition formula for Laguerre polynomials is[8]
$$L_n^{(\alpha+\beta+1)}(x + y) = \sum_{i=0}^{n} L_i^{(\alpha)}(x)\, L_{n-i}^{(\beta)}(y).$$
Laguerre's polynomials satisfy further recurrence relations, in particular
$$L_n^{(\alpha)}(x) = L_n^{(\alpha+1)}(x) - L_{n-1}^{(\alpha+1)}(x) \qquad\text{and}\qquad L_n^{(\alpha+1)}(x) = \sum_{i=0}^{n} L_i^{(\alpha)}(x).$$
They can be used to derive four 3-point rules, which, combined, give additional useful recurrence relations. There is also a partial fraction decomposition, with identities that follow from the expression of the polynomials in terms of Charlier polynomials.

Derivatives of generalized Laguerre polynomials

Differentiating the power series representation of a generalized Laguerre polynomial k times leads to
$$\frac{d^k}{dx^k} L_n^{(\alpha)}(x) = (-1)^k\, L_{n-k}^{(\alpha+k)}(x) \qquad (k \le n).$$
This points to a special case (α = 0) of the formula above: for integer α = k the generalized polynomial may be written
$$L_n^{(k)}(x) = (-1)^k\, \frac{d^k L_{n+k}(x)}{dx^k},$$
the shift by k sometimes causing confusion with the usual parenthesis notation for a derivative. Further identities hold for the derivative with respect to the second variable α,[9] as is evident from the contour integral representation above.

The generalized Laguerre polynomials obey the differential equation
$$x\,y'' + (\alpha + 1 - x)\,y' + n\,y = 0,$$
which may be compared with the equation obeyed by the kth derivative of the ordinary Laguerre polynomial. In Sturm–Liouville form the differential equation is
$$-\frac{d}{dx}\!\left(x^{\alpha+1} e^{-x}\, \frac{dy}{dx}\right) = n\, x^{\alpha} e^{-x}\, y,$$
which shows that $L_n^{(\alpha)}$ is an eigenvector for the eigenvalue n.

The generalized Laguerre polynomials are orthogonal over [0, ∞) with respect to the measure with weighting function $x^{\alpha} e^{-x}$:[10]
$$\int_0^{\infty} x^{\alpha} e^{-x}\, L_n^{(\alpha)}(x)\, L_m^{(\alpha)}(x)\, dx = \frac{\Gamma(n + \alpha + 1)}{n!}\, \delta_{n,m}.$$
If Γ denotes the Gamma distribution, the orthogonality relation can be rewritten as an expectation with respect to that distribution. The associated, symmetric kernel polynomial has representations of Christoffel–Darboux type, from which Turán's inequalities can be derived. A further integral of this family is needed in the quantum mechanical treatment of the hydrogen atom.

Series expansions

Let a function have the (formal) series expansion
$$f(x) = \sum_{i=0}^{\infty} f_i\, L_i^{(\alpha)}(x).$$
The series converges in the associated Hilbert space L2[0, ∞) if and only if
$$\|f\|^2 = \sum_{i=0}^{\infty} \frac{\Gamma(\alpha + i + 1)}{i!}\, |f_i|^2 < \infty.$$

Further examples of expansions

Monomials, binomials, the exponential function and the incomplete gamma function all have explicit expansions of this kind.

Multiplication theorems

Erdélyi gives two multiplication theorems for the Laguerre polynomials.[11]

Relation to Hermite polynomials

The generalized Laguerre polynomials are related to the Hermite polynomials:
$$H_{2n}(x) = (-1)^n\, 2^{2n}\, n!\, L_n^{(-1/2)}(x^2), \qquad H_{2n+1}(x) = (-1)^n\, 2^{2n+1}\, n!\, x\, L_n^{(1/2)}(x^2),$$
where the Hn(x) are the Hermite polynomials based on the weighting function exp(−x²), the so-called "physicist's version." Because of this, the generalized Laguerre polynomials arise in the treatment of the quantum harmonic oscillator.
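As a practical aside (my own sketch, not part of the article), the three-term recurrence extends to the generalized polynomials as $(k+1)\,L_{k+1}^{(\alpha)} = (2k+1+\alpha-x)\,L_k^{(\alpha)} - (k+\alpha)\,L_{k-1}^{(\alpha)}$, which translates directly into code:

from math import comb

def gen_laguerre(n: int, alpha: float, x: float) -> float:
    """Evaluate the generalized Laguerre polynomial L_n^(alpha)(x) via the
    three-term recurrence, starting from L_0 = 1 and L_1 = 1 + alpha - x."""
    if n == 0:
        return 1.0
    prev, curr = 1.0, 1.0 + alpha - x
    for k in range(1, n):
        prev, curr = curr, ((2 * k + 1 + alpha - x) * curr - (k + alpha) * prev) / (k + 1)
    return curr

# Self-check against the known value L_n^(alpha)(0) = C(n + alpha, n) for integer alpha.
for n in range(5):
    print(n, gen_laguerre(n, 0, 0.0), comb(n, n), gen_laguerre(n, 2, 0.0), comb(n + 2, n))

The printed columns should agree pairwise, since $L_n^{(\alpha)}(0) = \binom{n+\alpha}{n}$.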
Relation to hypergeometric functions

The Laguerre polynomials may be defined in terms of hypergeometric functions, specifically the confluent hypergeometric functions, as
$$L_n^{(\alpha)}(x) = \binom{n+\alpha}{n}\, {}_1F_1(-n;\, \alpha+1;\, x) = \frac{(\alpha+1)_n}{n!}\, {}_1F_1(-n;\, \alpha+1;\, x),$$
where $(a)_n$ is the Pochhammer symbol (which in this case represents the rising factorial).

Poisson kernel

See also

• Transverse mode, an important application of Laguerre polynomials to describe the field intensity within a waveguide or laser beam profile.

References

1. ^ Nikolay Sonin (1880). "Recherches sur les fonctions cylindriques et le développement des fonctions continues en séries". Math. Ann. 16 (1): 1–80. doi:10.1007/BF01459227.
2. ^ A&S p. 781
3. ^ A&S p. 509
4. ^ A&S p. 510
5. ^ A&S p. 775
6. ^ G. Szegő, "Orthogonal polynomials", 4th edition, Amer. Math. Soc. Colloq. Publ., vol. 23, Amer. Math. Soc., Providence, RI, 1975, p. 198.
7. ^ D. Borwein, J. M. Borwein, R. E. Crandall, "Effective Laguerre asymptotics", SIAM J. Numer. Anal., vol. 46 (2008), no. 6, pp. 3285–3312. doi:10.1137/07068031X
8. ^ A&S equation (22.12.6), p. 785
9. ^ W. Koepf, "Identities for families of orthogonal polynomials and special functions", Integral Transforms and Special Functions 5 (1997), pp. 69–102. (Theorem 10)
10. ^
11. ^ C. Truesdell, "On the Addition and Multiplication Theorems for the Special Functions", Proceedings of the National Academy of Sciences, Mathematics (1950), pp. 752–757.
ca76ca67d1042523
Monday, 30 June 2008

Tortoise and Hare - the Chinese Space Programme

From a news report, via Curmudgeon's Corner:

China is stepping up and out in the world of space exploration. Space officials in that country are readying the Shenzhou 7 spacecraft for an October sendoff, one that will carry a trio of their "taikonauts" into Earth orbit. China has initiated a step-by-step approach in flying their taikonauts: The single-person Shenzhou 5 flight in 2003 of 14 orbits; the two-person voyage of Shenzhou 6 in 2005 lasting 5 days; and soon to head skyward, a threesome of space travelers. And on this flight, one of those space travelers is to carry out China's first spacewalk, also known as extravehicular activity, or EVA for short. For the U.S., the Mercury series of single-seat flights led to the two-person missions of Gemini spacecraft, followed by sojourns of the Apollo three-person crew capsule. More to the point, in the U.S., the first human-carrying orbital flight of Mercury was in 1962; Gemini in 1965; and Apollo in 1968.

Except... there were many Mercury, and even more Gemini flights as they got the "bugs out of the systems". Some of that can be ascribed to first-time experimentation, things that once shown to be possible, don't need repeating.

"Implications, as far as I can see...few, if any," said Joan Johnson-Freese, an analyst of China's space policy and Chair of the National Security Decision-Making Department at the U.S. Naval War College in Newport, R.I. Johnson-Freese said that the U.S. Mercury program of the 1960s was spearheading research just to see if humans could swallow in space...or how the human psyche would react once in Earth orbit. There were lots of medical questions, she noted. NASA's Project Mercury was quickly followed by a salvo of 10 human-carrying Gemini flights from March 1965 to November 1966. All-in-all, piloted Mercury and Gemini orbital outings tally up to 14 flights in five years, Johnson-Freese observed — and don't forget those two earlier and piloted suborbital Mercury missions. "Technology development was incremental because it was all new, but consistent," Johnson-Freese stressed. "The Chinese will have three flights with a successful mission next fall. They have been able to benefit from lots of lessons learned from both the Americans and the Russians. That is not to downplay the difficulty of the technology or the achievements of the Chinese...they just have the luxury of starting much higher on the learning curve," she concluded.

Exactly - and it's because of that that I disagree with her "no big deal" assessment. Because there were a number of uncrewed Shenzhou missions before they sent a man up. Unlike the Space Race in the 60's, they weren't in a hurry, and didn't have to cut corners. They also benefit from nearly 50 years of development in the computer field, leading to far more reliable and robust systems. Ones far less complex than desktop computers, but then, they don't have to be more complex than most microwave ovens or washing machines. Just reliable. Systems in the 60's were neither as capable, nor as reliable.

"Yes, it is worth flagging," said Dean Cheng, an Asian affairs specialist at the U.S.-based Center for Naval Analysis in Alexandria, Virginia. "Now, the flip side to that, of course, is that it has also been done before. So it's not like they need to engineer everything from scratch," Cheng said, adding that China can depend on designs similar to those proven to work by the U.S. and the Russians. "But, yes, it is nonetheless impressive."
Cheng points out, however: "The main difference ...there were more Mercury and Gemini flights in the intervening period. What is interesting about the Chinese effort is that they are doing it with so few flights. Four unmanned flights...then pow-pow-pow: one-man, two-man, three-man/EVA."

What they haven't done yet is train a cadre of Taikonauts in the skills required in the 60's as regards docking, station-keeping and EVA. But as the Russians have shown with their automated Progress craft docking with Mir, even primitive 80's and 90's technology should be good enough.

Roger Launius, senior curator for the Division of Space History at the Smithsonian Institution's National Air and Space Museum in Washington, D.C., said: "Learning what China needs to know about conducting a lunar trip, probably a circumlunar trip, on three missions seems a bit thin to me." "Let's take the Gemini program," Launius said. "A central reason for it was to perfect techniques for rendezvous and docking, EVA, and long duration flight. Assuming that these same skills will be required in a Chinese moon program, and I believe they will, where will the knowledge and experience for them come from in these three missions?" "A core question, it seems to me, is this: Will ground simulation be able to compensate for the lack of orbital experience?" Launius said. "Perhaps, but I'm not sure."

I would expect a series of uncrewed missions to test out the automated docking systems required, possibly in conjunction with a crewed mission. It's a cautious, but not necessarily slow, "progress" if you'll pardon the pun. The long-term strategy is to have a reliable, tested system for getting people to and from a permanent, largely self-sustaining lunar base, within the next 50 years. The plan is to get some more uncrewed lunar surveyors and sample-return landers working by 2017, while activities continue in earth orbit. Activities including a construction facility for assembling lunar missions. If all goes well, expect a crewed landing in 2020, but it could easily be later than that. There's no hurry, and to telegraph their moves with Space Spectaculars is exactly what they don't want. The US space program has been captured by political pork-barrelling, and is seen primarily as a way to distribute largesse to political constituents. If they make a workable spacecraft, so much the better, but really, that's not necessary. Meanwhile, the Chinese are steadily building up the necessary infrastructure. Tracking facilities, a Taikonaut Corps...

Earlier this month, it was noted that six taikonauts had been selected for the upcoming mission from 14 candidates — a crowd that included Yang Liwei, China's first space explorer who flew solo on Shenzhou 5. For Shenzhou 7, three will fly the actual mission with the others tagged as substitutes. Also, Yuanwang 6, an ocean-going tracking ship, has been delivered for service in Shanghai to participate in the Shenzhou 7 flight and to assist in the slated spacewalk. It joins sister ship, Yuanwang 5, to take part in maritime space surveying and mission controlling operations. Qi Faren, academician of the Chinese Academy of Engineering and researcher of the China Spaceflight Technology Research Institute — credited as chief designer of China's first five Shenzhou spaceships and chief consultant for Shenzhou 6 and Shenzhou 7 — has been quoted as saying that plans are already underway for Shenzhou 8 and Shenzhou 9. He added that "the intervals between each launch will become shorter."
As was said a few months ago:

Last year, the United States managed 16 space launches; Russia had 22; China blasted off 10. China's exploding economy is paying for the education of hundreds of thousands of engineers each year, they are acquiring less space technology from other nations and developing more of their own, and they appear committed to dominating the heavens. Their space program is still behind, says Robert Zubrin, one of America's strongest proponents for Mars travel, but it is rocketing. Just in November, a Chinese robotic spacecraft circled the moon, capturing 3-D images. Chinese scientists talk about mining the lunar surface for possible nuclear energy resources that are plentiful there but rare on Earth. Mars is a real target for future travel. All three major presidential candidates -- Sens. Barack Obama, Hillary Clinton and John McCain -- say space is important, but none is strongly talking about a timeline for the moon or Mars. And certainly, there are other pressing issues: the war and the economy. But there is genuine and growing fear among some scientists that if space does not become a higher priority, the Chinese program will be on par with America's by the end of the next president's second term. Then, it will be a real race to Mars even if we want to join in.

"Race" to Mars? No, the Chinese aren't interested in "races". They're interested in exploitation and colonisation. Mars will still be there when they've finished building up a good lunar infrastructure for interplanetary travel. And others are sending one-off scientific missions every few decades.

More on the Chinese Space programme - Space Plane, The Moon as a nuclear He3 source, Slow, Steady, Planned, Lunar Plans, 2006 Summary, Moon Plans Firm Up, Shenzhou 6 again, Shenzhou 6, Shenzhou 6 Preparations, Moon Programme Updated and others...

Here's what I wrote 5 years ago: Make that 45 now.

Saturday, 28 June 2008

Today's Battles

Over at the Memphis Flyer. That's the Memphis in Tennessee, not Egypt. And the ABC, the American network, not the Australian Broadcasting Commission. The "Reasonable Christian" continues to go "la la la, I can't hear you" when faced with evidence he's wrong, so we can write him off as both dishonest and hypocritical. It happens. Too bad, he has a keen intellect, and is probably a good person in many ways.

Friday, 27 June 2008

From the ACLU - I can't summarise it without doing it injustice, so here it is in full. An audio of her testimony is also available.

My name is Diane Schroer, Colonel, U.S. Army, Retired, and I am a transgender woman. I grew up in Chicago as David Schroer with two older brothers in the most normal of loving families. I entered the U.S. Army through ROTC as a 2nd Lieutenant immediately following graduation from Northern Illinois University. I completed Ranger and Airborne School and served four years on the East-West German Border, completing three company command tours along the way. In 1987, I was an honor graduate of the U.S. Army Special Forces Qualification Course. Since my retirement, I have been intimately involved in Homeland Security, Critical Infrastructure Protection, and Maritime High-Risk Counterterrorism Operations. I currently run a small, independent consulting company that has done work for the Department of Homeland Security, U.S. Coast Guard, the National Guard, and the Federal Bureau of Investigation, to name a few.
I possess a current Top Secret, Special Compartmented Information capable security clearance, which was updated in a Periodic Review completed without issue in July 2007. I am here today because, in Fall 2004, I applied and interviewed for the position of Specialist in Terrorism and International Crime with the Congressional Research Service of the Library of Congress. In December 2004, I was told I had been selected for the position and after some rapid salary negotiations, I accepted the job. I knew that I was well-qualified for the position. The U.S. Government had spent 30 years and several million of dollars educating me and perfecting my experience in the fields of Insurgency and Counterterrorism. As an aside, I also have a personal library collection of approximately 18,000 volumes covering predominantly those subjects. At the time I applied for the position, I was in the process of my gender transition from Dave to Diane. However, I was still legally David — meaning that all my documentation was still under the name David — and therefore, applied for the position as David. When I was offered the job by CRS in December 2004, I felt that it would cause less confusion all around if I simply started work as Diane, rather than starting as David and then transitioning to Diane. So, I invited my future supervisor at CRS to lunch so I could tell her about my plans, and help her ensure everything went smoothly. On the day of our lunch meeting, I met my future supervisor at her office. She introduced me to several new “colleagues” as she put it, on our way out of the building. At lunch she spoke at length about my new responsibilities, which would involve preparing, publishing and informing Members about the critical issues surrounding terrorism and homeland security. During a break in her description of my new duties, I mentioned that I had a personal item I wanted to discuss with her. I asked her if she knew what it meant to be transgender, and explained that I had a female gender identity, and would be transitioning to living as a female on a full-time basis. My intent was to do this when I commenced work at CRS. I knew that whether I was David, or Diane, I would provide excellent research support to the Congress. I had truly thought that my future supervisor at CRS would feel the same way. Yet, as we parted company following our lunch conversation, she said that “I had given her a lot to think about.” And then, the following day, she called and said that “After a long and sleepless night, she decided I was not a good fit for the Library.” I told her I was very disappointed to hear her say that. In 24 hours, I had gone from a welcome addition to the staff to someone who was “not a good fit” because I was a woman. Hero to zero in 24 hours. I enlisted the assistance of the ACLU and, in June 2005, they filed suit in Federal Court on my behalf against the Library of Congress. In its legal papers, the Library has claimed that it did not hire me because it was concerned that I would lose my colleagues in the Special Operations community as a result of my gender transition. The ironic thing is that these are precisely the people who have been only second to my family as my staunchest supporters in this fight. The Library has claimed that it could not hire me because it was concerned I might lose my clearance, yet I hold a current TS/SCI capable clearance and continue to work on several highly classified initiatives. 
The Library has claimed that it could not hire me because I would have no credibility with Members, given that a woman could not possibly know the things I know. And yet I testify in front of this committee here today. In summary, as a Master Parachutist, honor graduate of Army Ranger School, the Special Forces Qualification Course, Command and General Staff College, and the National War College, with two Masters Degrees, having been awarded the Defense Superior Service Award, four Meritorious Service Medals, five foreign parachute qualifications, and two Expeditionary Medals for combat operations, I hope every day for the call to come from the Library saying, “We’ve made a tremendous mistake.” I am ready and able to serve this country once again, and look forward to the day when I am given the opportunity to do so. I'll also quote in full the Traditional Values Coalition : One hearing in 50 years is too many. And while "brave American soldiers are being killed on the battlefield to fight Islamic terrorism", one of those soldiers whose expertise might just reduce the number of casualties is castigated as a Freak in a Freak show, a pornographic prostitute "she-male". For Transphobia is a TVC core value, unlike National Security, which it seems only these "freaks" take seriously. Only one side uses the blood of slain heroes and heroines as a weapon against those they delight in persecuting - including some of those self same heroes and heroines. Even if it damages their own country. The Contrast is obvious. It's not the opposition. I can easily understand people who fear the strange, and who are trying to uphold what they see as moral standards, no matter how much I disagree. It's their Phariseeism, their fundamental dishonesty, their hypocrisy, malice, hatred and "bearing false witness" that is repugnant to me.

Thursday, 26 June 2008 Four Levels of Universe At Discover Magazine, a highly speculative piece about the nature of the Universe(s): Let’s talk about your effort to understand the measurement problem by positing parallel universes—or, as you call them in aggregate, the multiverse. Can you explain parallel universes? There are four different levels of multiverse. Three of them have been proposed by other people, and I’ve added a fourth—the mathematical universe. What is the multiverse’s first level? The level I multiverse is simply an infinite space. The space is infinite, but it is not infinitely old—it’s only 14 billion years old, dating to our Big Bang. That’s why we can’t see all of space but only part of it—the part from which light has had time to get here so far. Light hasn’t had time to get here from everywhere. But if space goes on forever, then there must be other regions like ours—in fact, an infinite number of them. No matter how unlikely it is to have another planet just like Earth, we know that in an infinite universe it is bound to happen again. So we are just at level I. What’s the next level of the multiverse? Level II emerges if the fundamental equations of physics, the ones that govern the behavior of the universe after the Big Bang, have more than one solution. It’s like water, which can be a solid, a liquid, or a gas. In string theory, there may be 10^500 kinds or even infinitely many kinds of universes possible. Of course string theory might be wrong, but it’s perfectly plausible that whatever you replace it with will also have many solutions. OK, on to level III.
Level III comes from a radical solution to the measurement problem proposed by a physicist named Hugh Everett back in the 1950s. [Everett left physics after completing his Ph.D. at Prince­ton because of a lackluster response to his theories.] Everett said that every time a measurement is made, the universe splits off into parallel versions of itself. In one universe you see result A on the measuring device, but in another universe, a parallel version of you reads off result B. After the measurement, there are going to be two of you. So there are parallel me’s in level III as well. Sure. You are made up of quantum particles, so if they can be in two places at once, so can you. It’s a controversial idea, of course, and people love to argue about it, but this “many worlds” interpretation, as it is called, keeps the integrity of the mathematics. In Everett’s view, the wave function doesn’t collapse, and the Schrödinger equation always holds. The level I and level II multiverses all exist in the same spatial dimensions as our own. Is this true of level III? No. The parallel universes of level III exist in an abstract mathematical structure called Hilbert space, which can have infinite spatial dimensions. Each universe is real, but each one exists in different dimensions of this Hilbert space. The parallel universes are like different pages in a book, existing independently, simultaneously, and right next to each other. In a way all these infinite level III universes exist right here, right now. Ah, the Platonic "Theory of Forms", that there exists somewhere an "ideal chair", and that all objects which we call "chairs" are more or less chairlike depending only on how much they resemble the ideal object, the ideal chair. Even such concepts as "sameness" and "difference" have reality in this ideal existence. They are real things. Of course it gets difficult when the concept is "a concept that does not exist in the Ideal Universe". For the ideal Universe includes all concepts, including this one. So we have a concept that by definition simultaneously cannot exist as an Ideal, and one that has to. I think the same critique may hold of the Type IV Universe as postulated, if I understand it correctly. Goedel Incompleteness states basically that any Mathematical system can either be complete, or consistent, but not both. And we can prove that - that there are either unprovable true statements, or some statements are both provably true and provably false. Either would be fatal to a mathematical universe. If I understand it correctly - which I may well not. I'm good at intuition, but this is one for a specialist in Pure Mathematics, not a beginner like me. There is more of interest in the article though. Max, this is pretty rarefied territory. On a personal level, how do you reconcile this pursuit of ultimate truth with your everyday life? Sometimes it’s quite comical. I will be thinking about the ultimate nature of reality and then my wife says, “Hey, you forgot to take out the trash.” The big picture and the little picture just collide. Your wife is a respected cosmologist herself. Do you ever talk about this over breakfast cereal with your kids? She makes fun of me for my philosophical “bananas stuff,” but we try not to talk about it too much. We have our kids to raise. Do your theories help with raising your kids, or does that also seem like two different worlds? The overlap with the kids is great because they ask the same questions I do. 
I did a presentation about space for my son Alexander’s preschool when he was 4. I showed them videos of the moon landing and brought in a rocket. Then one little kid put up his hand and said: “I have a question. Does space end or go on forever?” I was like, “Yeah, that is exactly what I am thinking about now.” Science is a very Human activity. It's one where we never lose that childlike quality of wanting to know "why?". Wednesday, 25 June 2008 New South Wales Politics From the Sydney Morning Herald : NSW Liberal MP Ray Williams has been ejected from state parliament after bringing a stuffed-toy iguana into the lower house. I'd give a pithy comment, if I could think of one. For those foreign visitors not familiar with the issue, here's a summary. There is a married couple who are politicians - Labor ones, the equivalent of the Democrats. The relatively sane one is at the state level, and was a senior member in the NSW government. The other is a member of the federal government. She's the one that goes around hitting people, making threats about getting them fired by using her political clout, and so on. He just drives drunk, repeatedly. Which is quite understandable, given the nature of the person he's married to. The Iguana is a reference to the incidents at the Iguana club. I'll leave it to the Malaysian Star to be non-partisan about this. Sometimes being too close to an issue can spoil objectivity: But an alleged altercation between the club staff members and Federal Labour MP Belinda Neal and her husband State Education Minister John Della Bosca caused two days of uproar in Federal Parliament, including an unsuccessful motion of dissent against Speaker Harry Jenkins for disallowing an Opposition question on the affair. Out of the disarray and controversy arose claims and counter-claims of what really happened at the Iguana – allegations of bullying, intimidation and abuse of power, foul language and bad behaviour and, to top it all, accusations of Neal taunting a Liberal pregnant MP Sophie Mirabella that “evil thoughts” would make her unborn child “a demon”. Initially, the beleaguered Neal denied the accusation. But a tape with her voice recorded at a seemingly innocuous committee debate last month was heard saying to Mirabella: “Your child will turn into a demon if you have such evil thoughts. You’ll make your child a demon. You’ll make your child a demon. Evil thoughts will make a child a demon.” When Mirabella, who was expecting her first child within two weeks, asked Neal to withdraw her comment unreservedly, she denied making it. Instead she accused Mirabella, a tough debater in parliament, of having imagined hearing that statement. The following day, however, Neal apologised, not directly, though. Nonetheless Mirabella took it as an admission that she had, in fact, made those offending remarks. What made Neal taunt Mirabella is not known yet. Presumably, this will come out when the all-party parliamentary privileges committee questions her. The whole sordid story of how this situation came about is told by Alan Ramsay at the SMH. It's just the usual smoke-filled backroom deal stuff, the cronyism and neopotism that is found in every nation in the world. But most people have more sense that to go around kicking sports opponents when they're on the ground, or abusing power, or making outrageous remarks and then denying them - when they've been caught on tape. They don't break the 11th Commandment so egregiously. 
Not unless they've become drunk with power, and think they can get away with anything. Some can of course. But they require a number of powerful people who depend on them for patronage, or even survival. Not just political survival, I mean "a la lanterne" should they lose grip on power. We're lucky here, where Plush Toys are used as political weapons, and not Machetes and Machine Guns. Grand Rounds Grand Rounds, the Carnival of Medical Blogging, is now posted over at My Three Shrinks' ShrinkRap. There's also some words from the contributors on their Podcast. Including some of mine. So if you want to know what I sound like, go over there and listen. Tuesday, 24 June 2008 Kate said, and I quote belief that Zoe is an extraordinary person.. Monday, 23 June 2008 All they have to do... Anyone saying: All they have to do is respond with facts, not bombs, Molotov cocktails, burned embassies, burned flags, death fatwas, or even lawsuits. ... would no doubt be guilty of Islamophobia - in Canada anyway. In fact, the quote is from AltMuslim, and reads: Would that stop Islamophobic articles? No. It would stop them from being believed though. Or treated seriously. Or being seen as anything other than raving lunacy. But each death fatwa, call for the extermination of Jews, even attempts to suppress all criticism of Islam whatsoever, that will give these articles about fear of dangerous muslim fanaticism credence. Because to some extent they will be true, won't they? "Behead those who say Islam is violent" really doesn't go down well. I know we're talking about a tiny minority of extremists. But we're also talking about a much larger proportion of Muslims who feel compelled to defend those fanatics in the name of Muslim Solidarity. While they do, they will be seen as a threat too. It's not a battle of Muslim vs Christian, or Muslim vs Jew, or $religion1 vs $religion2. It's a battle between the sane and the loony. I've had enough of that recently, in other contexts. There is a political group within the Transsexual movement called "HBS" or "Harry Benjamin Syndrome". I'll quote from the The Original HBS Site : Harry Benjamin's Syndrome is an intersex condition developed in the early stages of pregnancy affecting the process of sexual differentiation between male and female. This happens when the brain develops as a certain sex but the rest of the body takes on the physical characteristics of the opposite sex. The difference between this and most other intersex conditions is that there is no apparent evidence until much later after the baby is born or even as late as adolescence. I completely agree about the medical issues, as I've posted about earlier. The evidence that they're correct is overwhelming. So what has this to do with politics? It's about elitism. And transphobia. And homophobia too. Please read this article by Charlotte Goiar at the International HBS forum. I agree with pretty much everything as far as the paragraph "Practical and definite Terminology and its meaning.". There I depart. Persons with HBS are people who have Harry Benjamin’s Syndrome (HBS), a purely physiological condition. They are simply men or women. Such people are born with the characteristics of both male and female. In common with others who exhibit typical sexual development, they desire to modify their phenotype and endocrinal system to correct it to their dominant sexual identity, an identity that is determined by the structure of the brain. 
The person with HBS does not change sex, as gender identity is fixed at birth, and the medical treatment involved is only physical correction.Transsexualism (TS), Gender Identity Disorder (GID), or Gender Dysphoria is a mental condition that consists of the desire to live and to receive acceptance as a member of the opposite sex. Do not confuse this with HBS, as it is not medical. It isn't? It's a symptom of cross-gendered neurology. Call it HBS or anything else, it's in the brain, and no amount of psychotherapeutic mumbo-jumbo will affect it in the slightest. That's where some (and I stress some, not all) of the more fanatic elements of HBS theory start going wrong. Various kinds of HBS fanatics insist that only they are Intersexed, and others are psychiatrically ill. Some require anyone who's "really" HBS to be straight, to transition early, to be post-operative, to have had facial surgery and breast enhancement, to eschew all "boy stuff" and be perfect models of 1950s womanhood, or some combination thereof. And of course to have nothing to do with Gays or those freakish "Transgenderists" who are totally unlike them. I'll quote from Laura's HBS Peer review : As the owner of a Transsexual,Transgender web site I have always provided the latest information on Transsexual research on my site long before HBS was first uttered. So the research on HBS sites was very familiar to me. In order to learn more I joined the Official Yahoo HBS Support Group. What I learned had little to do with HBS. It instead turned out to be an anti-GLBT group. People who asked simple questions and needed support, were diagnosed by militant members as being transgendered, perverts and fetishists. Gays and lesbians were also denigrated with frequent slurs. In fact those who did support GLBT rights were banned simply for supporting them. Several that were diagnosed by those without medical degrees were affirmed post-ops with similar stories to mine. One thing Yahoos HBS groups are not is an HBS Support Group. The group moderator defends the constant anti-GLBT slurs as member venting. It's difficult for me to be in this position. They're right about so much, but some (and I emphasise some) are total fruit-loops in other ways. Hateful too. I just don't understand how a group of people so badly persecuted can join in the persecution of others. I don't identify as "transgendered", nor as gay. In fact, they are both as alien concepts to me as is, well, masculinity. But I'm no androphobe, my homophobia is mainly a thing of the past, and my transphobia under control. Mainly. It leaks out on occasion, as you'll see below. I'm a member of the Australian HBS support group, simply because they're right about so much. Here's what I wrote in reply to one of the more strident members, not fanatical, merely hard-line: The evidence is what it is, and I reserve the right to alter my opinions based on the evidence. I do not ask you to "accept" anything. It would be useful if you could give more data, or if not, propose experiments that might prove your viewpoint is correct. We "see through a glass darkly" here, taking clues and hints from all sorts of sources, some more reliable than others. I place little weight on most psychological studies, they've been proven to be unreliable in the past. I place more weight on MRI scans, and other objective data. I place zero weight on my own self-perceptions, that I'm just a woman with an interesting medical history . Likewise my own desires as to what "should be". 
For if I had my druthers, there would be a nice neat binary, with HBS men and women easily and clearly distinguished from a variety of self-advertising publicity-seeking "TG Pride" paraphiliacs and fetishists. For that matter, I would like to be either regarded as Intersexed or Transsexual(ie only neurally Intersexed), and not something in-between, with characteristics of both. Still, if I'm going to dream, let's go back to conception and give me 46xx chromosomes and a standard factory model female body, one that matches my brain. My reading of the data though leads me to conclusions I don't like, but accept pending data that would contradict them. 1. That CG sexuality is not usually associated with extreme CG Gender Identity, but in a third or so of cases, it's associated with mild CG behavior, butch Lez or femme Gay, and rather more CG gender behaviour in childhood. 2. That CG gender identity is often (50%) associated with CG sexual orientation, hence CG gender behaviour in childhood. 1/3 of children showing CG behaviour are TS. 3. That CG gender identity is always associated with CG patterns of thought - emotional response in particular- but not necessarily CG sexual orientation (and childhood CG gender behaviour etc) 4.That CG gender identity is often associated with CG body image, something else determined in a nearby part of the brain, and in most cases this will require hormones, and in extreme cases, surgery to alleviate the dysfunctionality. But... that CG sexual orientation, gender identity, and body image, while coupled, are not absolutely so. They are distinct things. It's possible that only one will be affected, or two of the three. Worse, there are degrees, so both degrees of bigender and bisexuality exist, as do people not particularly enamoured with either M or F pattern bodies. I will explain "CG" though, "Cross-Gendered". I define it as being "in relation to the arbitrary assignment at birth", and not in relation to genitalia, endocrinology, or anything else. Otherwise it can't be applied to any Intersexed people, for whom the chromosomes etc differ from the assignment. Furthermore, in relation to people with CG Gender Identity, who are strongly gendered, it can be more useful in a practical sense to reverse the polarity. So a transWOMAN does not have a strongly CG gender identity, she has a strongly normal gender identity for a woman. It's her chromosomes (usually), her genitalia(usually), and in general her body apart from the brain that is cross-gendered for a woman. Sometimes her body image is cross-gendered too, and she's non-op. Sometimes her sexual orientation is cross-gendered too, and she's lesbian. You may prefer to believe in a strict binary, where all 3, gender identity, sexual orientation, and body image are always the same. Either M or F, with no "degrees", just a binary. And anything else is an artifact of abnormal psychology, not neurology. My reading of the data indicates otherwise, but not only may my reasoning be wrong, the datasets are too small for comfort. I can't say that you're definitely wrong. Your belief would certainly simplify all sorts of issues, legal and otherwise. I have stated my conclusions. I have stated the evidence I base my conclusions on. In terms of "HBS activism", the political aspect, the fact that to me the evidence of a biological cause of "transsexuality" means I have to support the rationale for HBS activism. "It's the neurology, stupid! Female brains and minds lumbered at birth with male bodies!". 
And despite the "degrees" and "blurriness" of the biology, sometimes you have to simplify. It's *not* a rainbow spectrum, it's Red and Blue, but with a small band of purple between , some more red than blue, some more blue than red, and some just purple. To say it's Red or Blue, while not 100% accurate, a least won't confuse the ignorant, while all sorts of scientific disclaimers might just give them the totally false impression that colour doesn't exist. If I may make an analogy - the Earth doesn't orbit the Sun. They orbit each other around a common centre of gravity, which is really close to the Sun's centre. And that's a simplification, because other planets perturb things a bit, and the Moon causes even more perturbation., And the orbit isn't circular, but elliptical. It's complex. But to say "the Earth doesn't orbit the Sun", while strictly accurate, is misleading to the ignorant. I support the Heliocentric model of the solar system to the same extent that I support the HBS theory, if you get my point. That's why I'm here, to give battle to the Geocentricists and Flat Earthers, the followers of Bailey, Raymond and the like. I'm hoping the Australian HBS group retains the moderate stance it has taken so far - basing its views on medical data, rather than psychological insecurities and elitism. Why does this all have to be so complicated? *SIGH* Friday, 20 June 2008 Ice on Mars See the rocks in the bottom left of the picture that disappear? That's because they're chunks of ice that are evaporating now that they've been brought up to the surface, and exposed to sunlight. Now we've known for some time that there's Ice on Mars - at the polar icecaps. What we haven't known is if there's ice stuck in the Martian Soil. And that's a Big Deal, because Ice in the soil means that Life could thrive on Mars. Could it evolve there? There seems to be no reason why not, and if we don't find life within the Martian Soil or bedrock, that will tell us something too about the chances of finding life elsewhere. Because if it isn't found in such a relatively hospitable place, one periodically hit with Life's precursors just like the Earth is, then it means there's something else necessary we haven't thought of. And life will be rare in the Universe. I think it now far more probable than not that Life, similar to some of the extremeophiles found on Earth, will be found on Mars. It will be astounding if Hypoliths or Endoliths aren't there somewhere. Of course, it might take us some time to find them. In the short-term, colonising Mars with Earthlike life and giving us another planet is very probably feasible. The planet is rather small though. In the medium-term, it's not really a good bet, simply because the planet is too small to hold much of an atmosphere. It would require domes, or even a single cellular building covering the whole planetary surface, a la Trantor. In the long-term, by the time we are able to shift planets around and control gravity, we'll probably have outgrown the need for planets anyway. Or suns. Or corporeal existence. Assigned reading on the subject: The Last Question. Photo courtesy of via Little Green Footballs. Thursday, 19 June 2008 Nothing Unusual The facts: Full story at WMCTV. Californian Same Sex Marriage "Love the Shrimper, Hate the Shrimp". Wednesday, 18 June 2008 BiGender and the Brain Some pieces of the puzzle that led me to these conclusions: From the Grauniad : Sexual Orientation and Gender Identity are different. I'd better explain diagramatically. 
Another part of the puzzle, as I've blogged about before : I also gave the critique: Two separate questions there. I didn’t take it well. how old you were when you really started gender bending It's difficult being dispassionate when you're your own experimental animal. Tuesday, 17 June 2008 Economical with the Truth, and the Code Today's battle takes an interesting turn. Over at Montgomery County, Maryland, as I've blogged about before, there's a "scare campaign" being run through the Maryland Citizens for Responsible Government. Their website is The html source code of this site has an interesting second line, commented out: <!---<?php include_once("/var/www/vhosts/"); include_once("/var/www/vhosts/"); include_once("/var/www/vhosts/"); include("/var/www/vhosts/"); ?> ---> is quite coincidentally the website for another "scare campaign", this time our campaign to protect innocent children from sexual indoctrination. It is equally as economical with the truth. From ProudParenting: An initial response of the civil rights bill screams, "Mom and Dad as well as husband and wife have been banned from California schools under a bill signed by Gov. Arnold Schwarzenegger, who with his signature also ordered public schools to allow boys to use girls restrooms and locker rooms, and vice versa, if they choose." Randy Thomasson, president of Campaign for Children and Families said, "Arnold Schwarzenegger has delivered young children into the hands of those who will introduce them to alternative sexual lifestyles. This means children as young as five years old will be mentally molested in school classrooms." Read Senate Bill 777 for yourself to see if there's any mention of restrooms or locker rooms. There isn't of course. Go to the ProudParenting site for the hyperlinks to the legislation. And the use of the phrase "Mom and Dad" continues in California schools. But these campaigns don't even have a passing acquaintance with the truth. The Big Lie is more effective at getting the campaign donations rolling in, and people's memories are short. When the promised Dire Consequences (tm) don't result, they shrug their shoulders and move on. And fall for the same thing next time. What most offends me is that the ones running this scam are so arrogant, they don't think anyone would notice the connection. They didn't even bother to wipe off the fingerprints. As a Software Engineer, of course I approve of the authorised re-use of working code. But at least they could have gone back to the original source: Author: Eric King Featured on Dynamic Drive script library ( If you're going to run a conspiracy of like minds, formal or informal, at least be professional about it. I wonder if they've left a money trail too? It wouldn't surprise me. More on the subject at Monday, 16 June 2008 Schizophrenia : A Hideous Progression Over at the New York Times Health Section, a set of animations showing development of different parts of the brain during childhood and adolescence - and what happens when Schizophrenia strikes. We may not know the Why of Schizophrenia, but at least we have a good handle on What it is, something we only guessed at before. The tool of functional MRI continues to provide us with far more data about the way the brain works, and sometimes doesn't work, than any other experimental device. It is giving us new insights, and in the process, abolishing much of the nonsense we formerly swallowed, lacking better data. 
As I wrote in a comment on ShrinkRap, The model of Id, Ego, Superego etc may have its uses. But it's difficult to view much of past psychiatric theory as much better than superstition - on a par with trying to make rain by propitiating the right deities, as opposed to cloud-seeding. Trying to cure Schizophrenia by psychoanalysis and "How do you feel about your mother?" is exactly as effective as donning JuJu masks and reading the entrails of goats. We've known that for a while. Some people with the disease manage to route-around the problems, taking advantage of an unusually plastic brain, so environmental factors allow them to live with it. But we have no idea what the most effective factors may be. It could be that just interacting with other people, doing simple manual tasks involving great creativity, or conversely no creativity at all, may be useful. We don't know that yet. We do know that theories about Schizophrenia involving Oedipal or Electra complexes, oral versus anal personalities and so on are so much phlogiston. Which rather casts doubt on their accuracy in general. Don't get me started on the various psychic models of gender identity, compared to the neuroanatomical ones. Unlike the psychiatric profession, I'm not in two minds about that.
Sturm–Liouville theory

In mathematics and its applications, classical Sturm–Liouville theory, named after Jacques Charles François Sturm (1803–1855) and Joseph Liouville (1809–1882), is the theory of a real second-order linear differential equation of the form

$-\frac{d}{dx}\left[p(x)\frac{dy}{dx}\right] + q(x)\,y = \lambda\, w(x)\,y$,   (1)

where y is a function of the free variable x. Here the functions p(x), q(x), and w(x) > 0 are specified at the outset. In the simplest of cases all coefficients are continuous on the finite closed interval [a,b], and p has a continuous derivative. In this simplest of all cases, the function y is called a solution if it is continuously differentiable on (a,b) and satisfies equation (1) at every point in (a,b). In addition, the unknown function y is typically required to satisfy some boundary conditions at a and b. The function w(x), which is sometimes called r(x), is called the "weight" or "density" function.

The value of λ is not specified in the equation; finding the values of λ for which there exists a non-trivial solution of (1) satisfying the boundary conditions is part of the problem called the Sturm–Liouville (S–L) problem. Such values of λ, when they exist, are called the eigenvalues of the boundary value problem defined by (1) and the prescribed set of boundary conditions. The corresponding solutions (for such a λ) are the eigenfunctions of this problem. Under normal assumptions on the coefficient functions p(x), q(x), and w(x) above, they induce a Hermitian differential operator in some function space defined by boundary conditions. The resulting theory of the existence and asymptotic behavior of the eigenvalues, the corresponding qualitative theory of the eigenfunctions and their completeness in a suitable function space became known as Sturm–Liouville theory. This theory is important in applied mathematics, where S–L problems occur very commonly, particularly when dealing with linear partial differential equations that are separable.

A Sturm–Liouville (S–L) problem is said to be regular if p(x), w(x) > 0, and p(x), p′(x), q(x), and w(x) are continuous functions over the finite interval [a, b], and it has separated boundary conditions of the form

$\alpha_1 y(a) + \alpha_2 y'(a) = 0$, with $\alpha_1^2 + \alpha_2^2 > 0$,   (2)
$\beta_1 y(b) + \beta_2 y'(b) = 0$, with $\beta_1^2 + \beta_2^2 > 0$.   (3)

Under the assumption that the S–L problem is regular, the main tenet of Sturm–Liouville theory states that:

• The eigenvalues λ1, λ2, λ3, ... of the regular Sturm–Liouville problem (1)(2)(3) are real and can be ordered such that $\lambda_1 < \lambda_2 < \lambda_3 < \cdots < \lambda_n < \cdots \to \infty$.
• Corresponding to each eigenvalue λn is a unique (up to a normalization constant) eigenfunction yn(x) which has exactly n − 1 zeros in (a, b). The eigenfunction yn(x) is called the n-th fundamental solution satisfying the regular Sturm–Liouville problem (1)(2)(3).
• The normalized eigenfunctions form an orthonormal basis, $\int_a^b y_n(x)\,y_m(x)\,w(x)\,dx = \delta_{mn}$, in the Hilbert space L2([a, b], w(x) dx). Here δmn is the Kronecker delta.

Note that, unless p(x) is continuously differentiable and q(x), w(x) are continuous, the equation has to be understood in a weak sense.

Sturm–Liouville form

The differential equation (1) is said to be in Sturm–Liouville form or self-adjoint form. All second-order linear ordinary differential equations can be recast in the form on the left-hand side of (1) by multiplying both sides of the equation by an appropriate integrating factor (although the same is not true of second-order partial differential equations, or if y is a vector).
The Bessel equation

$x^2 y'' + x y' + (x^2 - \nu^2)\,y = 0$,

which can be written in Sturm–Liouville form as

$(x\,y')' + \left(x - \frac{\nu^2}{x}\right)y = 0$.

The Legendre equation

$(1-x^2)\,y'' - 2x\,y' + \nu(\nu+1)\,y = 0$,

which can easily be put into Sturm–Liouville form, since $\frac{d}{dx}(1-x^2) = -2x$, so the Legendre equation is equivalent to

$\left[(1-x^2)\,y'\right]' + \nu(\nu+1)\,y = 0$.

An example using an integrating factor

$x^3 y'' - x y' + 2y = 0$.

Divide throughout by x^3:

$y'' - \frac{1}{x^2}\,y' + \frac{2}{x^3}\,y = 0$.

Multiplying throughout by an integrating factor of

$e^{\int -\frac{1}{x^2}\,dx} = e^{1/x}$

gives an equation which can be easily put into Sturm–Liouville form, since

$\frac{d}{dx}e^{1/x} = -\frac{e^{1/x}}{x^2}$,

so the differential equation is equivalent to

$\left(e^{1/x}\,y'\right)' + \frac{2\,e^{1/x}}{x^3}\,y = 0$.

The integrating factor for a general second-order differential equation

$P(x)\,y'' + Q(x)\,y' + R(x)\,y = 0$:

multiplying through by the integrating factor

$\mu(x) = \frac{1}{P(x)}\,e^{\int \frac{Q(x)}{P(x)}\,dx}$

and then collecting gives the Sturm–Liouville form

$\frac{d}{dx}\left[\mu(x)\,P(x)\,y'\right] + \mu(x)\,R(x)\,y = 0$,

or, explicitly,

$\frac{d}{dx}\left[e^{\int \frac{Q}{P}\,dx}\,y'\right] + \frac{R(x)}{P(x)}\,e^{\int \frac{Q}{P}\,dx}\,y = 0$.

Sturm–Liouville equations as self-adjoint differential operators

The map

$Lu = -\frac{1}{w(x)}\left(\frac{d}{dx}\left[p(x)\frac{du}{dx}\right] - q(x)\,u\right)$

can be viewed as a linear operator mapping a function u to another function Lu. One may study this linear operator in the context of functional analysis. In fact, equation (1) can be written as

$Lu = \lambda u$.

This is precisely the eigenvalue problem; that is, one is trying to find the eigenvalues λ1, λ2, λ3, ... and the corresponding eigenvectors u1, u2, u3, ... of the L operator. The proper setting for this problem is the Hilbert space L2([a, b], w(x) dx) with scalar product

$\langle f, g\rangle = \int_a^b \overline{f(x)}\,g(x)\,w(x)\,dx$.

In this space L is defined on sufficiently smooth functions which satisfy the above boundary conditions. Moreover, L gives rise to a self-adjoint operator. This can be seen formally by using integration by parts twice, where the boundary terms vanish by virtue of the boundary conditions. It then follows that the eigenvalues of a Sturm–Liouville operator are real and that eigenfunctions of L corresponding to different eigenvalues are orthogonal. However, this operator is unbounded and hence existence of an orthonormal basis of eigenfunctions is not evident. To overcome this problem, one looks at the resolvent

$(L - z)^{-1}$,

where z is chosen to be some real number which is not an eigenvalue. Then, computing the resolvent amounts to solving the inhomogeneous equation, which can be done using the variation of parameters formula. This shows that the resolvent is an integral operator with a continuous symmetric kernel (the Green's function of the problem). As a consequence of the Arzelà–Ascoli theorem, this integral operator is compact, and the existence of a sequence of eigenvalues αn which converge to 0 and eigenfunctions which form an orthonormal basis follows from the spectral theorem for compact operators. Finally, note that $(L-z)^{-1}u = \alpha u$ and $Lu = \left(z + \frac{1}{\alpha}\right)u$ are equivalent.

If the interval is unbounded, or if the coefficients have singularities at the boundary points, one calls L singular. In this case, the spectrum no longer consists of eigenvalues alone and can contain a continuous component. There is still an associated eigenfunction expansion (similar to Fourier series versus Fourier transform). This is important in quantum mechanics, since the one-dimensional time-independent Schrödinger equation is a special case of an S–L equation.

Example

We wish to find a function u(x) which solves the following Sturm–Liouville problem:

$Lu = -\frac{d^2u}{dx^2} = \lambda u$,

where the unknowns are λ and u(x). As above, we must add boundary conditions; we take for example

$u(0) = u(\pi) = 0$.

Observe that if k is any integer, then the function $u(x) = \sin(kx)$ is a solution with eigenvalue λ = k². We know that the solutions of an S–L problem form an orthogonal basis, and we know from Fourier series that this set of sinusoidal functions is an orthogonal basis. Since orthogonal bases are always maximal (by definition) we conclude that the S–L problem in this case has no other eigenvectors. Given the preceding, let us now solve the inhomogeneous problem $Lu = f(x) = x$ for $0 < x < \pi$, with the same boundary conditions.
In this case, we must write f(x) = x in a Fourier series. The reader may check, either by integrating $\int x\,e^{ikx}\,dx$ or by consulting a table of Fourier transforms, that we thus obtain

$f(x) = x = \sum_{k=1}^{\infty} \frac{2(-1)^{k+1}}{k}\,\sin(kx)$.

This particular Fourier series is troublesome because of its poor convergence properties. It is not clear a priori whether the series converges pointwise. Because of Fourier analysis, since the Fourier coefficients are "square-summable", the Fourier series converges in L2, which is all we need for this particular theory to function. We mention for the interested reader that in this case we may rely on a result which says that Fourier series converge at every point of differentiability, and at jump points (the function x, considered as a periodic function, has a jump at π) converge to the average of the left and right limits (see convergence of Fourier series). Therefore, by using formula (4), we obtain the solution

$u(x) = \sum_{k=1}^{\infty} \frac{2(-1)^{k+1}}{k^3}\,\sin(kx)$.

In this case, we could have found the answer using anti-differentiation. This technique yields

$u(x) = \frac{x\,(\pi^2 - x^2)}{6}$,

whose Fourier series agrees with the solution we found. The anti-differentiation technique is no longer useful in most cases when the differential equation is in many variables.

Application to normal modes

Certain partial differential equations can be solved with the help of S–L theory. Suppose we are interested in the modes of vibration of a thin membrane, held in a rectangular frame, 0 ≤ x ≤ L1, 0 ≤ y ≤ L2. The equation of motion for the vertical membrane displacement, W(x, y, t), is given by the wave equation:

$\frac{\partial^2 W}{\partial x^2} + \frac{\partial^2 W}{\partial y^2} = \frac{1}{c^2}\,\frac{\partial^2 W}{\partial t^2}$.

The method of separation of variables suggests looking first for solutions of the simple form W = X(x) × Y(y) × T(t). For such a function W the partial differential equation becomes X″/X + Y″/Y = (1/c²) T″/T. Since the three terms of this equation are functions of x, y, t separately, they must be constants. For example, the first term gives X″ = λX for a constant λ. The boundary conditions ("held in a rectangular frame") are W = 0 when x = 0, L1 or y = 0, L2, and they define the simplest possible S–L eigenvalue problems as in the example, yielding the "normal mode solutions" for W with harmonic time dependence

$W_{mn}(x, y, t) = A_{mn}\,\sin\!\left(\frac{m\pi x}{L_1}\right)\sin\!\left(\frac{n\pi y}{L_2}\right)\cos(\omega_{mn}\,t)$,

where m and n are non-zero integers, Amn are arbitrary constants, and

$\omega_{mn} = c\,\pi\,\sqrt{\frac{m^2}{L_1^2} + \frac{n^2}{L_2^2}}$.

The functions Wmn form a basis for the Hilbert space of (generalized) solutions of the wave equation; that is, an arbitrary solution W can be decomposed into a sum of these modes, which vibrate at their individual frequencies ωmn. This representation may require a convergent infinite sum.

Representation of solutions and numerical calculation

The Sturm–Liouville differential equation (1) with boundary conditions may be solved in practice by a variety of numerical methods. In difficult cases, one may need to carry out the intermediate calculations to several hundred decimal places of accuracy in order to obtain the eigenvalues correctly to a few decimal places.

1. Shooting methods.[1][2] These methods proceed by guessing a value of λ, solving an initial value problem defined by the boundary conditions at one endpoint, say, a, of the interval [a, b], comparing the value this solution takes at the other endpoint b with the other desired boundary condition, and finally increasing or decreasing λ as necessary to correct the original value. This strategy is not applicable for locating complex eigenvalues. (A small numerical sketch of this approach is given after the reference list below.)
2. Finite difference method.
3. The Spectral Parameter Power Series (SPPS) method[3] makes use of a generalization of the following fact about second-order ordinary differential equations: if y is a solution which does not vanish at any point of [a,b], then the function

$y(x)\int_{x_0}^{x}\frac{dt}{p(t)\,y(t)^2}$

is a solution of the same equation and is linearly independent from y. Further, all solutions are linear combinations of these two solutions. In the SPPS algorithm, one must begin with an arbitrary value λ0* (often λ0* = 0; it does not need to be an eigenvalue) and any solution y0 of (1) with λ = λ0* which does not vanish on [a, b]. (Ways to find an appropriate y0 and λ0* are discussed below.) Two sequences of functions X(n)(t), X̃(n)(t) on [a, b], referred to as iterated integrals, are defined recursively as follows. First, when n = 0, they are taken to be identically equal to 1 on [a, b]. To obtain the next functions they are multiplied alternately by 1/(py0²) and wy0² and integrated, for n > 0. The resulting iterated integrals are then applied as coefficients in two power series in λ (equations (5) and (6) of the original presentation). Then for any λ (real or complex), u0 and u1 are linearly independent solutions of the corresponding equation (1). (The functions p(x) and q(x) take part in this construction through their influence on the choice of y0.)

Next one chooses coefficients c0, c1 so that the combination y = c0u0 + c1u1 satisfies the first boundary condition (2). This is simple to do since X(n)(a) = 0 and X̃(n)(a) = 0, for n > 0. The values of X(n)(b) and X̃(n)(b) provide the values of u0(b) and u1(b) and the derivatives u0′(b) and u1′(b), so the second boundary condition (3) becomes an equation in a power series in λ. For numerical work one may truncate this series to a finite number of terms, producing a calculable polynomial in λ whose roots are approximations of the sought-after eigenvalues. When λ = λ0, this reduces to the original construction described above for a solution linearly independent of a given one. The representations (5), (6) also have theoretical applications in Sturm–Liouville theory.[3]

Construction of a nonvanishing solution

The SPPS method can, itself, be used to find a starting solution y0. Consider the equation (py′)′ = μqy; i.e., q, w, and λ are replaced in (1) by 0, −q, and μ respectively. Then the constant function 1 is a nonvanishing solution corresponding to the eigenvalue μ0 = 0. While there is no guarantee that u0 or u1 will not vanish, the complex function y0 = u0 + iu1 will never vanish, because two linearly independent solutions of a regular S–L equation cannot vanish simultaneously as a consequence of the Sturm separation theorem. This trick gives a solution y0 of (1) for the value λ0 = 0. In practice, if (1) has real coefficients, the solutions based on y0 will have very small imaginary parts which must be discarded.

Application to PDEs

For a linear PDE that is second-order in one spatial dimension and first-order in time, separation of variables applies: we impose a product form on the solution, substitute it into the PDE, and note that, since one factor is independent of time t and the other is independent of position x, both sides of the resulting equation must be equal to a constant. (A sketch of this step, under an assumed heat-equation-type form of the PDE, is given below.) The first of these equations must be solved as a Sturm–Liouville problem. Since there is no general analytic (exact) solution to Sturm–Liouville problems, we can assume we already have the solution to this problem; that is, we have the eigenfunctions and eigenvalues.
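As a minimal sketch of this separation step, assume for concreteness a heat-equation-type problem built from the same coefficients p, q and w (this particular form is an assumption; any equation that is first-order in time separates in the same way):

$w(x)\,\frac{\partial u}{\partial t} = \frac{\partial}{\partial x}\left[p(x)\,\frac{\partial u}{\partial x}\right] - q(x)\,u, \qquad u(x,t) = X(x)\,T(t)$.

Substituting the product form and dividing by $w(x)\,X(x)\,T(t)$ separates the variables:

$\frac{T'(t)}{T(t)} = \frac{\left[p(x)\,X'(x)\right]' - q(x)\,X(x)}{w(x)\,X(x)} = -\lambda$.

The spatial factor then satisfies the Sturm–Liouville problem $-\left[p(x)X'\right]' + q(x)X = \lambda\,w(x)\,X$, while the temporal factor satisfies $T'(t) = -\lambda\,T(t)$, i.e. $T(t) = T(0)\,e^{-\lambda t}$.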
The second of these equations can be analytically solved once the eigenvalues are known.

References

1. Pryce, J. D. (1993). Numerical Solution of Sturm–Liouville Problems. Oxford: Clarendon Press. ISBN 0-19-853415-9.
2. Ledoux, V.; Van Daele, M.; Berghe, G. Vanden (2009). "Efficient computation of high index Sturm–Liouville eigenvalues for problems in physics". Comput. Phys. Comm. 180: 532–554. doi:10.1016/j.cpc.2008.10.001.
3. Kravchenko, V. V.; Porter, R. M. (2010). "Spectral parameter power series for Sturm–Liouville problems". Mathematical Methods in the Applied Sciences. 33 (4): 459–468. doi:10.1002/mma.1205.
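To make the shooting method of item 1 above concrete, here is a minimal Python sketch for the model problem −u″ = λu, u(0) = u(π) = 0 worked out earlier, whose exact eigenvalues are 1, 4, 9, ... . The scan range, grid and tolerances are illustrative choices, not part of any standard recipe.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def endpoint_value(lam, b=np.pi):
    # Integrate -u'' = lam*u from x = 0 with u(0) = 0, u'(0) = 1 and return u(b).
    def rhs(x, y):
        u, du = y
        return [du, -lam * u]
    sol = solve_ivp(rhs, (0.0, b), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Scan lambda and bracket the sign changes of u(pi);
# each bracketed root of endpoint_value is an eigenvalue.
grid = np.linspace(0.5, 10.5, 101)
values = [endpoint_value(lam) for lam in grid]
eigenvalues = [
    brentq(endpoint_value, lo, hi)
    for lo, hi, vlo, vhi in zip(grid[:-1], grid[1:], values[:-1], values[1:])
    if vlo * vhi < 0
]
print(eigenvalues)  # approximately [1.0, 4.0, 9.0]

For the general problem (1)–(3) the same idea applies with the first-order system u′ = v/p, v′ = (q − λw)u, and with the initial data at x = a chosen to satisfy boundary condition (2) exactly.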
Tuesday, June 08, 2010

What's so misleading about Nassim Haramein?

"If a planet suddenly stopped spinning it would explode" - NH explains his Grand Unified Field Theory (here). Spanish translation and discussion here.

Below are some examples.
1. The hype and the Schwarzschild Proton
Here's one example:
a. The force between protons
I don't know.
2. Misunderstanding basic physics
a. The "first law of physics"
It's terrible misinformation. I think people deserve better than this.
b. Why the night sky is black
Here's another example, again from his Rogue Valley presentation.
c. Peer review
(That's the most charitable way of seeing it.)
d. Atoms as mini white wholes / black holes
Here's some more:
e. Biological cells are black holes too
That's very clearly not what biological cells are like. In fact that's what event horizon means.
3. Other examples of basic misunderstandings
a. Quantum mechanics and the strong nuclear force
b. The phi spiral
(a) investigate it further? (b) calculate it?
True, anyone can make a mistake or four.
4. The Resonance Project website
a. More claims for himself and his work
And so it goes on.
b. 'Layman Paper' on the Origin of Spin
c. Science
xkcd's illustration of the original big bang theory predictions. The theory was correct to a spectacular degree of accuracy. And I'll happily accept what I don't know. It's very good for the soul.
5. A little thought

Anonymous said... Thanks for the critique of a scientist's work that "seems" partly dubious in many ways. Maybe Haramein is trying to hide something after saying it all... Is he aware of this? Or maybe he's so close to understanding the rest of physics that he may be confused on how to conclude? At first sight, it seems that Haramein wants to be number 1 (one), followed by many 0 (zeros) = 10000000000000000... Now, who could not take interest in his work? All you need to do is put a $ in there... but I'm just a listener, not a follower... no guru can take our own place in this universe, kids. I'm not saying he's a 0 (zero) followed by many enlightened pairs 0.000000000000000001 down to an infinite number that refers to a sort of meaning at the end (or at the origin of singularity with quantum mechanics). I think he's a great storyteller (too bad for his high-pitched voice though). I kind of like his stress explanations for a neat and clear structure of the universe, but I can't buy it as it is... Will take some but not all of it. A black hole is a strange object, but remains an object as any other, either star, neutron star, etc. It is its infinitely small size and density that differ from any other object. He could be right when he says we live in a black hole... but it is only a speculative idea once brought up by sci-fi writer Asimov. And it seems that it is the density of all these objects that makes an object a palpable world in relation with other worlds, just like a Russian doll related to a bigger/smaller doll. So evidence is, it cannot be anything else than the center of gravitation that holds everything together. The void IS the central architect of the universe and it seems that Haramein understands this very well. Thank God. We can approach physics with a better understanding of the ultimate force... being nothing, the void, the emptiness of all matter. If I understand Haramein, what he's trying to do is to explain the intrinsic values of the structure of the quarks themselves... but I haven't heard once the word Quark (Muster Mark).
Yes, there are six quarks (three matter and three antimatter). His explanation of a quark is not clear but all I know is that the double torus makes sens. It could well be the inside structure of a single quark. Now, he needs to explain the overall mechanics of the quarks. My humble opinion is that's the easy part. Abstract reasoning will do the rest. Good inspiration to you gentlemen! Bob said... Hi Anonymous. What a lot of opinions you have. If you're more interested in your own opinions and speculations than you are in learning about physics, challenging your views deeply and finding out what the Universe is actually like, then you'll just go around in funny little circles, getting seduced by anyone with a story that appeals to your preferences. If what you care about is your preferences, then what you're doing is fine. Enjoy the ride. It isn't anything to do with the Universe, it's your own little echo chamber. If you want to know about the Universe, you'll need to learn to let go of the pretty stories and spend some time finding out who you can rely on to be honest about what they know and how they know it. Good luck, whichever path you choose. P.S. Just to clear up any uncertainty: scientifically, Haramein is a zero :) Anonymous said... Not only this blog hopes to remove Nassim's remarks from the world... Wiki removed the page about Nassim... climing he isn't real - while Homer Simpson has a wiki page. Why should we believe anyone in this blog - pro or con? It seems we believe a Homer Simpson exists while a Nassim Haramein doesn't deserve to have his thoughts, right or wrong. Someone in Wiki wirld (read how the common man finds his 'facts' agrees with the writer of this blog. Bob said... This blog isn't about removing anything or anybody from the world. It's about shedding light on what Haramein says and what he does. It's also not about telling people what to think or what to believe. It's for people who would like some light to be shed, and are interested in exploring for themselves, and interested in discussing the content of Haramein's theories. I've been very very clear about both of these things in the blog. You'd have realised this if you'd read it. I have no interest in anyone who is motivated by taking sides, or by their opinions, or by their little tribal allegiances. But if you want to follow this joker, nobody will get in your way. Yes, you'll find that lots of people agree with the writer of this blog, and with the majority view of Wikipedia, that Haramein's theories are entirely bogus. There's a straightforward reason for why so many people think this. See if you can figure it out. Anonymous said... I love it when people use science for the be-all-end-all bearing for truth and authority in the same way religious people use their sacred texts. You criticize Haramein with some kind of divine understanding of all things science or physics related. If you truly are a scientist, and you are honest with yourself, at the end of the day, I think all you can really claim is that you just dont know what is going on. Science has always uncovered more questions than it answers. Einstein really said it best......science is basically just a refinement of everyday thinking. People hold on to current scientific understanding like it's objective or absolute in its truth or relevancy. Step back just 500 years in time and try and convince someone that the planet was round, or not geocentric, etc. Dont get me wrong, I love science. Its the best way to make sense of our surroundings. 
You make observations, you collect evidence, you test theories. Its simple and pure but not perfect. We laugh at the notion that people once thought the world was flat. At every paradigm shift, you can laugh at the ignorance of the previous understanding and applaud the visionaries bold enough to come up with new ideas. At the very least, I can promise you we are paradigm shift upon paradigm shift away from the finish line. I like Haramein. I think he absolutely deserves more credit than you give him. So what if he's wrong? Steven Hawking now admits his understanding of black holes was wrong. Haramein suggests some of our current understandings of physics and science are wrong and physicists and scientists keep patching current models and equations with more complex and convoluted mathematics just so observation can support theory and now you start coming up with names like dark matter and energy. Bottom line, everything you say or claim, anything anyone has ever said or claimed.....theory. All of it. Obviously qauntum physics and classical physics work together but we dont know how. No one can explain the double-slit experiment. We have no idea how stuff works and I absolutely applaud Haramein and his willingness to shed all dogma and come up with new ideas. I dont care what you say, a civilization that had not incorporated the wheel into their repertoire of technical accessories did not build the pyramids. I hate to marginalize, but youre a complete moron if you believe "the Egyptians did it" is a better excuse than Haramein's vodoo-people-sungods-from-afar theory or however you wanna call it. Bob said... Nice of you to rock up calling me a moron for thinking it likely that the pyramids were built by Egyptians, and straw-manning me with the cliché about science as "the be-all-end-all bearing for truth and authority in the same way religious people use their sacred texts" (of course it isn't). As you say, you don't care whether your guy is wrong or not, so I'm not sure what point there is in trying to discuss anything. Some folks care about investigation of the world around them on its own terms, and if someone bullshits them, they aren't impressed. Others just enjoy whatever ride takes their fancy. I have no disagreement with you if you choose the latter. Anonymous said... My argument is very simple. Science is NOT truth, it is a refinement of everyday thinking, which, in my opinion, is dogma thrown upon dogma. Its just as easy for me to say that your assessment of Haramein's physics is wrong. I mean, who are you? Youre no one special. And again, let me say it clearly.....if you think a civilization that hadnt even discovered the concept of a wheel built the pyramids, youre a complete idiot and your critique of a brilliant individual is void of any merit or credibility. Dogma on, friend. Dogma on. Anonymous said... I mean, seriously, man. Youre entire argument is......his physics is wrong. Ill bet my first born child Haramein has a better command of theoretical science and physics than you do, ever had, or ever will. He's a theoretical scientist. He works with theories. He's a brilliant and innovative thinker and I applaud him for pushing the paradigm shift science so desperately needs. Galileo was ostracized and imprisoned for the rest of his life for expressing his views on heliocentrism. You defend with stupid passion the status quo as sacred truth and all you can say is his physics is wrong. 
Are you going to discount everything Hawking has ever said or done because he now admits he was wrong about his theories on black holes? Bob said... Your first born child? :/ Anonymous said... Copernicus and Galileo Galilei, two examples of how knowledge led them to death by simply thinking and seeing beyond others. Bob said... Yes, I'm aware of them. Neither were frauds. Neither were clueless. People who are clueless and who are frauds are not Galileo and they are not Copernicus. It's not a difficult argument to follow. Anonymous said... You are clueless, you are a fraud. Whatever you can assert with no evidence, so can I. Anonymous said... Youre too entrenched in dogma or too feeble-minded to try and think outside the box. I can say with assurance that your knowledge has come from someone else.....youre only preaching what youve been told. Youre a dogma junkie. Again, your only argument is his physics is wrong. What a pathetic and childish foundation for rhetoric. Haramein is a theorist. You have to listen to his ideas and concepts. Remember, if everyone believes that something which is wrong is actually right, it is still wrong. But dont worry, pal. Science or mainstream opinion has never been wrong. Ever. Anonymous said... I didnt even get through the first few paragraphs before I realized how misinformed and drunk off of stupidity you are. First, you confuse mass and weight. A proton can have a mass of millions, billions, kuh-zillions of tons and not weigh anything. Weight is a product of gravity. Your statement that protons having a mass of millions of tonnes "ought to raise a few (very heavy) eyebrows" demonstrates your failure to understand even the most basic and fundamental properties of physics. Under what authority or special knowledge do you have to critique the complex research of Haramein? NONE! You claim these results are "so far from reality." Really? The singularity of a black hole is INFINITELY smaller with INFINITE more mass, so how is Haramein's description of a proton within the boundaries of his theory a radical departure from reality? More ludicrous claims on your behalf.....I like how youre shell-shocked at Haramein's calculations of the force holding protons together when we "can separate protons from a nucleus by tapping them with a tiny electron in a small accelerator." Here is the demonstration of the pinnacle of your stubborn ignorance. Small accelerator? It has a circumference of 17 miles with speeds approaching the speed of light. You pull everything out of context and speak in wildly vague terms. You fail to accurately grasp even the most simple of physics concepts and this is the authority you claim to criticize Haramein? These are the flagrant inconsistencies I found in the first few paragraphs. Seriously, youre an idiot. Bob said... Remember the part where I said there's no point discussing anything with you? That. You have my permission to take that to mean you're right about everything and I'm wrong about everything, if it makes you happy. Bob said... What I can't understand is how someone can have so little self-awareness and so little honesty that they'll delude themselves that they've studied physics to some depth, when they haven't. Surely you know that you haven't? I mean, you're actually you, aren't you? It's bizarre that you'd need other people to point this out. Amber said... Unfortunately my knowledge of physics is limited, and I have been trying desperately over the last year to learn and comprehend it. 
This is mainly from a spiritual need to understand the laws behind creation and my interest in sacred geometry. I found this site after viewing some of Haramein's work and trying to research more about the man behind the theories, and I just wanted to say "thank you". Although I am undecided about what to believe, I understand my ignorance on the subject would only grow if I did not have access to the reasons behind the controversies. So thank you for taking the time to explain your position, and for giving me some much needed solid points to consider! Jared Stearns said... Can you explain how the I Ching is represented by his theory with the yin yang and the 64 star tetrahedron and why that is wrong? Bob said... This is a blog about physics. I can tell you about physics, and all Haramein's physics is wrong, he hasn't a clue, he knows he hasn't a clue, and he carries on pretending and selling himself as a physicist. He uses outright lies to sell his brand to the people who trust him, and misleads them utterly in everything he says about physics. I'm committed to being honest, and to only speaking on things that I understand very well. So I won't comment on anything outside of his bogus physics claims (I've discussed hundreds and hundreds of these). Would you really want to put your trust in any kind of theory by someone who routinely misleads his followers as much as Haramein does? Michael Hansen said... Oh my god. Honestly, Bob, I am very impressed with your perseverance in explaining why Nassim Haramein is very wrong. However, like you, I am intrigued and baffled by the strange wonders of physics and the Universe, and I love to explain the subject I am studying. Let me give you a quick explanation of why a proton could never be a black hole. Let's look at how the mass works. According to General Relativity, mass bends spacetime, and therefore a very massive object will bend light around it a lot. So a black hole, however small it would be, would bend light around it, and anything going beyond the event horizon would never come back. So, let's look at the Earth. If we wanted to make a BH out of the Earth, we would have to shrink it to the size of a golf ball. You can then imagine how big a proton would initially have to be in order to become a black hole. Let's say we go out on a limb and believe that the proton was actually created as a BH. We humans, and just about anything in the world, are comprised of a heck of a lot of neutrons, protons and electrons, and if every proton was a BH, then how could neutrons attach to them without getting sucked in? At that distance, the gravitational force would be immense. Furthermore, how could molecules of matter get together without getting ripped apart by the immense force from the BH protons? Finally, why don't we get stuck to the table, the mouse, each other, every time we touch, and lastly, why don't we bend light around us? If we are to believe that the proton is a black hole, then every proton would have a mass of 6.7x10^11 kg, which is tremendous. Then imagine what my combined mass would be, being comprised of an extremely large number of protons. We have actually measured the mass of the proton and let me tell you, it is not even close to 10^11 kg. Another relevant example is when Haramein first writes "We now calculate the velocity of two Schwarzschild protons orbiting each other with their centers separated by a proton diameter."
and "nterestingly, recent evidence has shown supermassive black holes, at galactic centers, seemingly have relativistic velocities." he is comparing apples and bananas. When we talk about spinning black holes, we talk about intrinsic spin. When he is talking about is orbital rotation and not spin. And furthermore, the article he is referring to, talkt about rapidly spinning black holes, creating relativitstic jets. He also confuses the ideas about rotation and spin. Spin for a proton is not the same as the rotation for a black hole. The spin for a proton as a quantum mechanical effect and not a real rotation of the proton. You should not view it as a physically spinning ball. The spin comes when you solve the relativistic Schrödinger equation and is a part of the quantum numbers. Electron have +1/2 and -1/2 as spin numbers. Even if we could see the proton as a actual spinning sphere, his calculation is still wrong. When dealing with a relativistic effects you have to have motion in a reference frame. If an object moves from A to B all the elements in the body would experience the same speed and hence, the same relativistic effects. But a sphere does not experience the same speed everywhere. Therefore he would have to consider an element of the mass element of the sphere and then integrate over all elements, in order to get the real relativistic mass. Michael Hansen said... In theory, yes. But if you wanted a proton to a mass of 10^11 kg and wanted it to weigh ~10^-28 N on a planet, it would have to have a tremendously low density and/or a very very small radius. Fact. The relation between force and mass on earth is given by Newtons law F=ma, where a~10m/s^2. So a proton with a mass of 10^11 kg would weigh 10^12 N. It so happens that 10 N = 1 kg (weight, not mass) and our very massive proton would weigh about 10^12 N / 10 kg/N = 10^11 kg (weight). So yes, it would definitely be some very heavy eyebrows. Anonymous said... I love this: 'Cuz, these smart fellers at MIT don't know anything about grounding straps or Faraday cages. They were askeered off by "static electricity". Too funny! Anonymous said... Bob, you are truly a saint. In the most science-y way possible. I'm not sure how you're responding to the most idiotic dogmatic (Its funny how the most dogmatic use that word so often, isn't it? By funny I mean incredibly annoying since they can't differentiate between dogma and science) 3 (4?) years after your original post. Eddison S Titus said... I read your post concerning the Nassim fella who made the black hole videos. I liked what you had to say and I would like to learn the necessary math and physics to start learning all this high level math stuff.....quantum, string, quarks, gravity, light, phi, the golden ratio....id like to learn as much as i can about all of it because I naturally dont like taking anybody's word about anything. So I figured I would ask you...where do i start? What basic math should I start with and in what order should I progress? By the way I plan on doing this on my free time. I just finished law school so this is just something to do on the side so i can make sense of "sacred geometry" and see whats real and whats not. Ill also be posting this on your blog. hopefully it find you well. Bob said... Hi Titus, That's quite a journey you're wanting to set out on! Where you start depends on what maths you are familiar with now. 
The route you take depends on how rigorously you feel you need to follow the logic, and how carefully you feel you need to scrutinise the experimental evidence base. And where you stop depends on what it is you really want to know. String theory is incredibly complex, requires a vast understanding of physics, and has no evidence base at all. I wouldn't recommend it. But it's not impossible! For the rest... Lenny Susskind's lecture courses are the most concise. They're called "Theoretical Minimum", and they teach only the essential material, to give a thorough grounding in theoretical physics. There are 15 core and supplemental courses, with about 20 hours of lectures per course, and they go all the way to introducing string theory (if you really do want some string theory). Before you start on those, you will have to become skilled in the language of algebra and calculus (at least as far as ordinary differential equations, and partial differentiation), vectors, some linear algebra (matrices), and some statistics (random variables). This is the mathematics that underlies the logic of physics. Of these, calculus is the most essential. To get the mathematics background, you could use the Khan academy. You can start as far back as you need. A college course or two would be even better. Khan can also get you up to speed on physics. What Lenny Susskind's (or any) online lectures cannot give you is a sense of real world conviction. You want to test the claims of physics for yourself against the real world, and not merely take anybody's word. For that you need to carry out experiments, make observations, work in a lab or an observatory. Alternatively, if you don't want to do lots of experiments yourself, you can study the work of the people who have done the experiments, perhaps by studying the history of physics. As a rule, everything in physics is testable against observation of nature, and you needn't take anybody's word for anything. (String theory is an exception, which is why I don't recommend it.) Realistically, checking everything would take many lifetimes, so you should choose the things you feel most skeptical about. More importantly, you should also choose to test the things you feel are most intuitive. Because your intuition is far more likely to lead you astray than anything else in physics. The one skill you need above all else is to be willing and able to let go of your intuitions whenever they are consistently contradicted by nature. Your personal preferences for how you'd like things to be, the stories you like the best, are the hardest to let go of, and they will be the worst things for preventing you from seeing things as they are. If anything in physics is a spiritual exercise, it is this. It can be a very profound (and unsettling) exercise in letting go. :) Good luck! Eddison S Titus said... Thank you very much for the advice and the resources! Wendy Langer said... Hi Bob and Eddison! :) :) I can heartily endorse Bob's suggestions of Susskind and the Khan academy, and especially with his advice about the overall approach and attitude to take. It's hard to know exactly where to begin, with such a broad topic area and so many online resources, so I'm going to try to create a specific kind of 'course outline' or 'syllabus' I'm going to outline a recommended course of study, after which you would be able to sensibly converse with any trained physicist in a rigourous way about many topics. 
This is pretty serious stuff - It would take an absolute minumum of two years to complete all this work, but will probaly take longer since you will be busy with your law work 'on the side' ;) However you may be happy to stop after the first three courses (6 months or so), in which case there is an 'exit' at this point where you will have a much more well-rounded understanding of physics thanb you started with, and you will be able to do rigourous calculations in some areas, but without the full rigorous treatment of everything. Because Blogger (perhaps unsurprisingly ) won't let me post the entire syllabus I have so cunningly devised as a comment, due to length requirements (comments are 4096 characters max) I've dumped it as a post on my own blog, which you can view here: It's titled "So you want to teach yourself physics?", in case it may be useful to anyone else later down the track... (I havent;' finished editing it yet so pleas excuse typos etc!) Cheers :) Wendy Langer Bob said... Thanks Wendy - that's intriguing! Can't get the link to work though... Wendy Langer said... O dear, it seems I hadn't actually 'published' it yet, I had just 'updated' the draft... try this one! Bob said... Excellent :) Dustin Stein said... Don't be to quick to ridicule somone. I can tell by reading your criticism of Nassim that you, Bob, are not anymore knowledgeable of physics concepts than he is. I think he is misunderstood frequently and I think that may have something to do with how he speaks. It is true his ideas are very questionable and lack verifiable evidence but that is why they are presented as theory and doesn't mean they should be immediately discarded as nonsense. Some of his ideas do lend credibility! For one it has been postulated that the big bang could actually have been a white hole stemming from a black hole in a parallel or mother universe(which could account for the contraction you bitterly criticized) and that our universe could actually be inside a black hole. Yes these ideas are theoretical but are being proposed by legitimate scientists. I do not have a degree in physics but I have been studying physics for a decade in my free time and metaphysical ideas and eastern philosophy for four years and have had very profound and intuitive insights. By all means you should not take everything the man says as fact but I have noticed a lot of his ideas agree with my own intuition and he has been misinterpreted by a lot of critics. Also if you listen to other topics he talks about it is obvious he is a peace loving individual and makes valid points about the moral decline and materialistic obsession or society has stooped to. All eastern philosophy talks of oneness. I think you would benefit from looking into these ideas. Bob said... Hi Dustin, and thanks for your thoughts. "Don't be to quick to ridicule somone. " - I've been looking into this subject for over four years. Haramein was ridiculous when I started and he's ridiculous now. "I can tell by reading your criticism of Nassim that you, Bob, are not anymore knowledgeable of physics concepts than he is." - There are thousands of comments on the seven posts here, and still nobody has used physics concepts to find fault with any of the things I've said about Haramein's physics ideas. Lots of people give opinions based on their intuitions, with no physics content at all, and you're now another on that list. If you have a physics criticism of what I've said, let's hear it. Don't just claim that you can intuitively tell it's wrong. 
"He is a peace loving individual and makes valid points about the moral decline and materialistic obsession or society has stooped to" - sure, lots of people can do this, and still be clueless about physics. Personally I think there's something morally bankrupt about being clueless about physics and selling yourself to people as someone with radical ideas about physics. It is a lie, and I'd prefer a society of people who don't stoop to lies in order to gain a following. "All eastern philosophy talks of oneness. I think you would benefit from looking into these ideas." - I spent over a decade living as a Buddhist. I benefited a lot, it's true. I recommend it too. Honest speech - not misleading others for personal fame and fortune - is a big deal in Buddhist ethics. Trying one's best not to delude oneself is also a big deal. Being open to legitimate criticism is another. These are personal commitments for me. If you have legitimate criticism of the physics I've presented, I'd be happy to acknowledge it. Opinions and intuitions that you think I should conform to - no thanks :) flow said... ""if a planet suddenly stopped spinning it would explode."" lol. we all know it would implode as the shell collapsed towards the hollow centre! the ultimate fate of planets, though, is to fall apart once they've expanded too much and the shell has thinned past the ability of the lateral gravitational attraction to hold it together against its own momentum Bob said... Flow... planets don't 'expand', their shells don't 'thin' and they don't have 'lateral gravitational attraction' Anonymous said... Bob, thank you so much for writing this article. I just posted a link to it from my Facebook account, with the line "So glad someone wrote this so I didn't have to." Your analysis is right on the mark, as are all of your subsequent comments. Keep up the great work. Bob said... Thanks :) Anonymous said... Thank you, Bob, for writing a different point of view on Nassim. I've heard him speak at length in person in 2012 at a university conference and took a lot of notes. As a PhD philosophy student, his ideas on cosmometry and physics are intriguing. He also spoke about a number of ideas that were v e r y far out in left field, and I have notes on those as well. He was not polite to me when I challenged him on a topic that I do have good working knowledge of (Arc of the Covenant) during the Q&A, and that was not acceptable. His tone was noticed by the professors in attendance. As for the deep physics validity, I do not have sufficient knowledge to credit or discredit him. It is sensible to look at all sides to evaluate anything, especially when it is as abstract as these physics topics. It is very sensible to have your point of view. :) Anonymous from WA state jmu said... Hi Bob, 2 things are important in my view. 1) Haramein comes up with new hypotheses, which is a good thing. The field of Fundamental Physics is very slowly progressing, so we don't get breakthrough technology from it. For example: Maxwellian theory on the Dynamical Theory of the EM field, Dec 1864 [p465-490] is 150 years old and contains a lot of angles that are still not main stream physics and still very relevant. He should produce proper theories. 2) Of course all theories need to be pure, shouldn't contain errors and should be backup-ed with real live, reproducable experiments. It is the job of the fundamental guys to test his usefull ideas with real experiments where possible and disregard or falsify where applicalble. 
Or they need to cone up with breakthrough ideas themselves. It seems to me that the real university PhD physics folks dont push the enveloppe as hard as they could and should. BR Jeroen (jagmulder@gmail.com) Bob said... Hi jmu, 1. The current understanding of Maxwell's theory (and its relationship to the rest of physics) is entirely self-consistent and entirely with all experimental observations. Please consider how many experiments have been done over the past 150 years, and how many thousands of the world's most creative and brilliant people have tried to demonstrate experimentally even a tiny deviation from this understanding. If you believe there's a problem with Maxwell's theory, bear this in mind. You'll need to be very clear about what you mean. If you even know what you mean. 2. Again, that is exactly what physicists are employed for. All physicists know that if they can convincingly demonstrate any fault with our best established theories - in particular the standard model of particle physics, general relativity and Lambda-CDM cosmology, they will be famous and their breakthrough will be celebrated for centuries. All academic institutions know this too, and they are desperate to catch the physicists who they think are most likely to achieve this kind of breakthrough. People who make the kind of broad, baseless accusations about university physics that you're making (and that Haramein and his supporters love to make too) really have no understanding of what physicists are there for. Vague accusations based on ignorance are just expressions of prejudice. There's no truth to be found there. jmu said... Hi there Bob, Ad 1: First of all I admire the work of James Clerk Maxwell. I think it is utterly brilliant: best physics ever in my personal view. His 20 quaternion equations that he put down in 1864 which were wrenched into 4 EM equations by Heaviside because the math was too complex for ordinary use, still leave a lot of room for exploration. Intuitively I find there is much more happening in nature, that "we" don't know about or we can explain. The good thing about Maxwell's work is that it is accepted science. So it leaves a base for further exploration. I studied physics in grammar school, I am a chemical engineer, but frankly speaking I need to do a lot of reading to grasp the stuff that Maxwell put down there 150 years ago. I don't know your background, but you seem to be really into physics professionaly. So, you outflank me there. But this also happens on a wider scale. Asking the simple intriguiging questions and making hypotheses on science is something for Everyone to do. It is not the province of Academics with PhDs in Science only. What is happening now, is that possible hypotheses - potentially valid ones - are rejected because some earlier premises in the line of reasoning contain some errors. It is like you are only allowed to speak to the professor if you have a PhD. It is like the student with some errors in his/her paper or test is expelled from the discussion or ridiculed. Physics got too complex for educated laymen to engage in discussions. But Physics is not for the intellectual elite. I am not in favour of that at all, since it blocks technological progress for the common good. Academic scientists in Europe are publicly funded by taxpayers like myself. I rather would see a dialogue between the professionals explaining science in a very simple language / way like in high school and the people who are dubbed pseudoscientists. 
After all both groups are after the same objective: Try to understand/explain the mysteries of science. Ask just for yourself: what was the reason to study physics in the first place? Let's not be dogmatic. Ad 2: I am accusing nobody. Physicist have done a great job on the Standard Model. However Gravity is not unified in it. Progress in my eyes comes from wondering about simple questions, supported by experiments: - what really makes the apple fall (Newton, 1687)? --> the Gravity Force is known, but what causes Gravity on an (sub)atomic level really? - Is space really empty? Or does it contain energy / flux? - What does make the world spin? - What does make an electron spin? - What is dark matter? - What is dark energy? - Why does light travel at the same speed in all directions? - Is Ning Li's work for real? http://en.m.wikipedia.org/wiki/Ning_Li_(physicist) The Standard Model was the great theory after WW2. Are we making really progress in Fundamental Physics the last 20-30 years? However, I find it a pity that a lot of physicists seem to play it safe. I would appreciate it if they take a stance and pursue bold audacious hypotheses on those simple questions. Haramein states : everything spins. Well it might be unconventional, it might be partly wrong. But why not utilize the 10% useful stuff from his hypotheses. There might be a lot of BS in it like you elaborately point out, but some things could be worthwhile to explore. It might lead us in new territory. I am very strong on using reproducable experiments, then try to explain by theory, then make improved practical applications which are scalable. Theory in itself is not worth much in my humble opinion. I personally rather take a pragmatic engineering point of view. After all people could fly in an aeroplane before they understood the aerodynamics of it. Once they understood the aerodynamics they were able to make better planes. At the end of the day it is the application or end user experience that matters. Bob said... "What is happening now, is that possible hypotheses - potentially valid ones - are rejected because some earlier premises in the line of reasoning contain some errors." - There's no rule in academia saying this is the case, and no mass blindness making people do it. Of course it will happen sometimes, but if it happens a lot in place A and less of the time in place B, then place B will be doing great physics and place A will fall behind. Stating it as a sweeping generalisation, as you did, is just silly though. It's the job of physics community to ensure that every one of the questions you have posed is either (a) modelled successfully and tested rigorously, or (b) being investigated from as many angles as possible that aren't obviously wrong from the outset, or (c) replaced with a question that makes more sense and then subject to (a) or (b). That's their job. Telling them they aren't doing their job is silly. "Are we making really progress in Fundamental Physics the last 20-30 years?" - of course we are. Stop the lazy naysaying and find out about it. It's all open. Everyone can play. "I find it a pity that a lot of physicists seem to play it safe. I would appreciate it if they take a stance and pursue bold audacious hypotheses on those simple questions." - some do sometimes, because all physicists are human beings and some humans are more risk-averse than others. "Haramein states : everything spins." - the answer to which is: no it doesn't. "But why not utilize the 10% useful stuff from his hypotheses." 
- there is no 10% useful stuff. There is 0% useful stuff. Why make a vague claim that there's 10% useful stuff? Wouldn't it be better to either (a) ask how much of his stuff is scientifically useful, or (b) give examples of things you think are useful and we can discuss it if you wish. "Theory in itself is not worth much in my humble opinion." Please - no more daft empty claims. There's no point to them. You're have intelligence - try to engage it in keeping an eye on this tendency to make silly sweeping ignorant generalisations. If you can say clearly and precisely what you are interested in or concerned about, I'm interested. Matheus Adorni Dardenne said... It is funny how the guy who wrote this critique knows nothing about the holographic model. Long story short, the cosmic horizon is like the surface of a black hole, but instead of being convex, it is concave (like if we were seeing the walls of a spherical room from inside). And just like the 3D matter entering the black hole can be converted in 2D information stored on the area of it's surface, all 3D matter in the universe can be only 2D information stored on the cosmic horizon. The whole refutation is a huge strawman fallacy. Bob said... That's a nice story, Matheus. I am familiar with the holographic principle, though. Your second paragraph is all fine, but it isn't relevant to anything discussed in this post so it doesn't really work as a counterargument. If you think there are fallacies in what I've said, please do point them out. jmu said... Bob, What does your critical mind make of these 3 Dirac/Hotson papers? [JMU] "Let's keep the assertion that Dirac was right as it is an appealing idea. The Dirac formula is elegant: E^2= c^2*p^2+m^2*c^4 and perhaps ill understood. The idea is that there are 4 combinations of neutrons (or epo-pairs) possible: positive, negative, lefthanded, right handed where the Bose-Einstein Condensate has a central role. Is it accurate what Hotson writes? part 1 - is the idea of electron-positron pairs accepted ? p11 - For Hotson the BIG BEC (Vacuum) is the same as aether --> can (positive) matter dissapear in the BEC? part 2 - p3/4: the part on neutrosynthesis. Neutron is 1836 vs 1838.6 electron masses explained. Reference is made to a Vortex effect in the BEC - p5: strong nuclear force explained. 2000 factor (2055=15x137) between strong and Coulomb force. 137 (alpha= e^2/hc=1/137) - p6: magnetogravitation g-factor 1.0011596522 explains for a tiny unbalance in magnetic moment - p8 neutrino/antineutron: neutrons are ejected from the BEC in vortices into 4 dimensions: positive / negative. left- and righthanded - p10-20: covers the Octave system in the solar system. part 3 - summary on p17-18 - p20: gravitation acts faster than light at least in a time frame tau - p21-22: explains the role of BEC in the atomic model --> can dark energy/matter be linked back to the BEC concept? - p24: self organization effect of plasma's - p26: LENR experiment on an TI isotope - p27: here things are getting metaphysical .. - p28: reality: quons(quantum objects) are recreated continuously every tau - p30-31: effects on LENR from very thin surfaces "The electron / positron (epo) pairs have been observed temporarily during annihilation. Apparently the electron and positron never get closer than e.g. an atomic diameter. They then orbit each other in the configuration known as positronium (ortho or para depending on spins). And then they annihilate into 2 or 3 gamma ray photons with apparently nothing else left. 
The spin energy of the electron and positron simply evaporate, which is another reason to look for by-products of annihilation. There is a mechanism for how this could work on www.dirac-was-right.com/model-ABCD.php. It models the original electron / positron as 3-part composites, with 4 parts going towards the 2 photons and the final 2 parts going towards the residual epo. It all works from an electrostatic attraction / repulsion point of view, including the photons accelerating to light speed and the epo being stable in-situ. jmu said... "Hotson noted: "...A perturbation, as Dirac pointed out, must cause transitions from states of positive energy to those of negative energy. Quantum mechanics must be symmetric with respect to energy. Since our reality has a large positive energy balance, symmetry requires another reality with a large negative-energy balance...." 1.11 Similarly, he also showed that 'perturbations' of the epos comprising the vacuum (Big BEC) lead to: "...Epos vibrating in one “real” dimension form the electromagnetic field. Vibrating in two “real” dimensions, they carry angular momentum around at the speed of light: the “photon.” And vibrating in three “real” dimensions, they form matter...." 2.23 The quiescent state of the BEC is massless, that is, only 'negative energy' epos (which do not vibrate in 'real' directions, only 'imaginary' ones) comprise it. So the epos of the vacuum can also be made to vibrate in 'real' directions. This happens when bare charges, like an electron, are 'introduced' into the BEC. It also happens within atoms, which maintains a stable relationship between the electrons and the nucleus. BTW, the 1,2,3 sequence of epo vibrations would seem to indicate the relative 'propagation cost' of travel through the vacuum: One 'real' direction: the EM field and gravitation, 1 tau time of travel. Two 'real' directions: EM waves, c, velocity of travel. Three 'real' directions: matter, < c , velocity of travel.] Matter, the energy that makes our reality, which we call positive energy, is the ugly stepdaughter of the Universal BEC, the vacuum. Hotson states the epos are active, essential parts of the atom. Not only do they comprise the electron, proton, and neutron, but they also form the connecting structures between the atoms nucleus and surrounding electrons. Stable matter cannot exist unless its constituent epos are arranged in very specific configurations. "... I am not sure where Hotson is coming from with regard to the spin energy. Whatever that portion is, it will be subsumed in the electron's rest mass. I can see how the energies of virtual particles can cause discomfort but as far as I am concerned the energy balance with regards to accounting for particle spin is just fine". Bob said... whoa, whoa... what's this got to do with anything in this blog? Bob said... I read some of Hotson's paper - it's full of misunderstandings of Dirac's work and how it has been interpreted. He is really quite confused about it. I don't think there's anything to learn from it, other than the fact that it's always best to understand first and write papers later jmu said... Could you indicate where Hotson is taking the wrong turn? There is a summary in the first pages of the last part 3. Too bad there are many misunderstandings and confusions, Hotson is no physicist. His concepts are interesting tough - if true or possible. Would you be willing to make a separate blog thread on Hotson's work like you did for Nassim H? Bob said... 
I don't see a 'wrong turn', just someone attempting to talk about a theory that he (a) hasn't understood, and (b) doesn't realise he hasn't understood. When you see (a) and (b), it's almost inevitable that you'll also see (c) a conviction that he's understood it in a way that the whole of the physics community in the preceding 75 years have missed. (Here's a little example: Hotson repeatedly refers to h/4π as an energy, when it is a unit of action or angular momentum. Any first year physics undergraduate would be ashamed of that kind of error; and they don't even start learning anything about the Dirac equation until their 3rd or 4th year. I could list hundreds of errors just from a brief look at a couple of pages, as could anyone with a background in relativistic quantum theory.) His ideas don't seem to have made an impact on many people, so I don't see much point in writing about him. What is it about them that you like? Perhaps I could help you find some genuine physics ideas that you'd find satisfying? jmu said... That sounds constructive. The quickest way is to read Hotson are the summaries of part 1 and of part 2 and the first 2 pages of part 3 Not sure if you agree with these: 1) the idea of the negative BEC (Bose Einstein Condensate) is a big placeholder with aetherlike capabilities. 2) the g-factor idea explaning for a slightly unbalanced magnetic moment in electrons [part 2, p.6]: magnetograviation. 3) "gravitation reacts to changes in mass instantaneously (or at least in time t : ca. 1e-24sec.) "This explains that the Earth and Sun don’t form a “couple,” and why the Earth “feels” gravitation from the Sun at the Sun’s instantaneous position, rather than its retarded position, as is shown by astronomical observations (Van Flandern, 1998)". [part 2, p.6] Not familiair with this article. Bob said... 1. There are people working on Superfluid Vacuum Theory who advocate something like this as an alternative to QFT. I don't know much about the details of this theory. 2. The g-factor is well understood: the magnetic moment of electrons was predicted correctly by QED (one of the QFTs in the Standard Model) to 1 part in a trillion - the most accurate prediction in the history of physics. Hotson appears to be claiming the BEC theory predicted it, which is not true. 3. Van Flandern's claims about gravity being instantaneous were disproven soon after that paper. He had made a number of false assumptions. See, for example (Carlip, 1999). jmu said... Thanks for validating. I think 1) SVT is a sound theory. Why is there so much opposition against the idea of an aether? Einstein mentioned it/revisited the idea in his 1920 speech in Leiden. This idea comes back repeatedly. With Maxwell, Dirac etc. See also this quote. Albert Einstein, Leiden, 1920 2) Could you help link the g-factor to gravitation? Hotson refered to the same study. He links it to the BEC theory. 3) The experimentation side of Van Flandern's theory is not clear to me. Bob said... Yes, SVT is a sound theory, although I don't believe that it is correct, and neither would the vast majority of physicists. Physics isn't about belief, it's about observation and understanding, and it is (just about) possible to have a deep understanding of physics and the observations made in the past few hundred years and still advocate this theory, and some people do. I wouldn't say that there's opposition to the idea of aether - just that it hasn't been found to be very useful. 
It's a nice story, but if you don't get anything directly related to reality from a concept then it won't be widely adopted. 1920 is a long time ago. A lot of things have been learned since then. 2. There is no link between Dirac's g and gravitation. It stands for gyromagnetic ratio. 3. As I said, Van Flandern's paper is discredited. If you're interested in the experimental side of the theory of gravitation, try this paper (click the PDF link for the full paper). There are 113 pages and 415 references, and Van Flandern isn't mentioned once - not because of any kind of prejudice, but because his ideas have not contributed anything significant to any of our current understanding of gravity. There's a discussion on the speed of gravity on page 44, and links to several other related papers. Experimentally and theoretically it is indistinguishable from the speed of light. Anonymous said... Bob, thank you for exposing the "ramblings" of this poor soul...I read his FB posts and saw some of his videos...lol. What a joke! Once someone starts promoting himself on & on & selling dvd's! Their true nature is exposed. He should stop being a "physicist" and just be another new age guru! Apparently there is a huge thirst for knowledge and meaning of Life. But Nassim is confusing the masses with his pseudoscience and trying to make a living without really working. Thanks again. Anonymous said... Not a Scientist but it seems to me in efforts to explain something new, the same terms are being used to fit an existing paradigm and this is confusing their meanings. Like those blind men examining an elephant... Kadillac said... Bob said... Hi Kadillac. Don't know what you mean. People can comment if they want, and they still do even though this post is five years old. (I've tried communicating with Haramein. It's a waste of time.) JMC said... Congratulations everyone it's finally happening... We are working together to figure this all out. When we will know the truth is when there will really be no more reason to talk about it anymore. Until then though we may as well entertain ourselves. Thankyou for the learning and fun. Peter Devita said... This sounds like science fiction not real science. Even so, these folks claim to have a free energy generation product. Has a reputable Lab bought one and done a test on it - someone like the DOE in the US or NRCan in Canada??? peter devita, Bob said... If he were ever to get any half-sensible results, science journalists would be all over it. He has a large cult following - reputable science news sites would love for him to get something right. If he made a claim that appeared to be based on any kind of understanding of physics, the science community would be all over it, even if they thought he was wrong. (Especially if they thought he was wrong.) If it's a dishonest stream of self-publicity and fake claims (which it is), of course they won't touch it. Howard Pinch said... I am visiting this site because of a posting on Face Book. It cited Nassim Haramein: "The Unified Theory and Beyond." I have never heard of him and the comments on that posting sounded intriguing. So I decided to Google him and find out what he was all about. I read a little and found his material was available at a cost. Wondered why? Then I saw the link to your link: "UP: Nassim Haramein - Fraud or Sage?" I decided to check this out..First, I have never bothered with blogs. Second, I did not realize I was going to be reading a detailed physics argument. 
Let me say, that I have had very little training in physics (10 credits) but always interested. And after reading your arguments, I could only conclude, Nassim is a fraud and I totally agreed with your assessment.(Reminds me of Jim and Tammy Baker, Oral Roberts, etc - money, money, money. I then went on to read the comments on your blog. I was astounded to say the least. As many said you have unbelievable patience and really kept your cool at some of the arguments and comments leveled at you. Your tolerance is to be commended and your stance of asking only for examples of how Nassim's physics is right on any level debunked any attempt otherwise. I would like to thank you for taking the time you have spent with this effort and it opened my eyes and will save me the time and effort of learning about NH. I hope your blog is still active as I note the dates of the postings I read. I will share this on Face Book. Bob said... Hi Howard - thank you for sharing your thoughts, I really appreciate it :) bikash maharjan said... hey dude... existing and comming in physics are just theories. Only nature knows the real truth. Universe is not human centric..It all our mess of thoughts.Thought only know by duality . Light and dark, up and down etc only by disecting.Can I know you by disecting your body.I may physical stucture of body.I would never know you as person.Truth is beyond duality.We have to transcend mind to know truth. Let go of these all theories of mind. Your life is biggest happening here on universe.Don't you believe that.Close your nose for minutes...then you will tell , hell with these theories...I want to live. So be conscious about your aliveness.your constant transaction and contentedness with nature around your on many level(physically breathing)...,Be conscious,Be conscious,Be conscious,Be conscious...before its too late .. So if you want to know..if your really want to know then.. query about your exitence..what is its nature..what is its full potential...body is greatest gadget in the earth..we even don't know fully about a single cell....and here were are talking/discussing/arguing about what I belive and what you belive...forget about my believe...truth is beyond believe..Don't believe in anything to know whoever he is. Don't even believe in Einstein. He may seem right so far..but may be proved in future. Who know..But you are alive here existentially. Then Truth will reveal to you by itself. Bob said... Hey bikash. I often see comments by people who make great claims on the pointlessness of rational human thought. Firstly, if that's the case, who in hell are you to know what is or isn't truth? Are you a god? But more importantly, what you're talking about is metaphysics and faith, and has nothing to do with what science is about. Science is all about letting go and finding ways to see beyond what you think you know. That's the whole point of it. There's nothing enlightened or enlightening about dismissing entire disciplines of human endeavour as a waste, or as not really conscious. It's lazy and arrogant. If you are interested, learn to understand the subject, learn to appreciate the lives of the individuals who devoted themselves to it. If you're not interested in science as a way of finding out more about the universe, just be honest about it. There's nothing wise about claiming your vision to be superior and dehumanising people who don't share your convictions. bikash maharjan said... Bob said... Hi Bikash - thanks for the link. 
I have a lot of respect for people like Sadhguru here. It's clear that the scientist talking with him (David Eagleman) does too. It doesn't seem that they are able to truly hear each other's perspective very well in that interview, but that probably shouldn't be a surprise given the very different directions from which they are approaching the subject. Still, it strikes me as a generous and insightful interview. Unlike Haramein, there is nothing dishonest here and nothing transparently pretentious. Unlike Haramein, Sadhguru isn't misleading people about science or any other discipline, he's presenting a different perspective, eloquently and intelligently. I won't extend that respect to whoever made the film, though. Adding a kind of cathedral reverb to Sadhguru's voice to make him sound like some kind of booming God cliché, while leaving Eagleman sounding like a human, is a pretty naff trick. That sort of creepy manipulation doesn't assist any kind of truth. Creepy sound manipulation aside, the interview is definitely worth watching. I love to see people presenting good arguments that go beyond or even against the principles and assumptions of science, so long as they're honest, genuine and intelligent. My issue with Haramein is not that he strikes out against the mainstream - that is a bold and wonderful thing that is always needed. My issue with Haramein, is that he does it dishonestly, pretentiously and stupidly, and that he feeds and relies on ignorance and prejudice. Anonymous said... I tell you one thing about NH, all there is on his Illustrious table, is the intent of selling "charged" crystals to new age hippies. That us the extent of his "scientific" work currently!! Anonymous said... Check www.toraeon.com Anonymous said... Bob said... Wow. Toraeon LLC? Yet another way of getting people to fund projects whose promise is based on Haramein's entirely bogus reputation as a research scientist. In what fantasy world could anyone say this is not fraud? So much money-making on the back of blatantly misleading pretend science. (The other one looks pretty kooky too, but it isn't obviously connected to Haramein) Anonymous said... Unfortunately there are some willing to fund scam artists. I bet NH is laughing all the way to the bank, nice way of making a super living at the expense of others, no? Bob said... Anonymous said... Anonymous said... Bob said... Indeed. There's plenty of information here and here The story hasn't changed at all: Haramein's physics pretence is laughable and false, he's a showman and a salesman with a very lucrative brand and an obsession with self-promotion. Show this movie to a physicist and the sheer stupidity of it will make their toes curl :) Anonymous said... Changed their domain to http://www.torustech.com They are working on "charging" crystals for selling to new age hippies ..how bout that for science ?! Bob said... Cool, thanks. What a bunch of pricks. Good luck to them :) Arr Wiley said... I just wanted to say, thanks for maintaining this for so long, Bob. It's given me the time and space to be introduced to Haramein's work, think it was pretty cool, sit on it for a few years, go to college, find this blog and get more critical, take chemistry and physics as a highly interested adult, get the basics of what you're talking about, read this again and go , man, this IS a line of bullshit. Doesn't help that it seems to get worse all the time, he's got links to "articles by resonance scientists" on his web site, but they're just third party links and. 
I can't find any published connection between the authors and Haramein. Do physicists get pissed when they just get linked to by him? Anyway, thanks again for your longstanding patience and commitment to honesty. I know you know you're right, but with all the crap at gets piled up on you on this page, for what it's worth, I think you're right too. California Maritime Academy Awragash said... Bob, in the hopes you may still see this, i'd just like to give thanks to you for taking the time to attempt to verify Nassim Haramein's claims. It must have been a bit if a slog but i hope you know that as it has stood on the web for many years now, it has surely saved many people from being caught up in what is quite simply a trap. I do see though, when reading the blog posts, many instances of when i feel it people who have fallen into this "trap" would be turned off due to the wording or approach (although it is some of the most tactful and courteous writing i have seen from a critique) In my experience, it is more efficient to try to first find what is good and agreeable in someones beliefs and then go on to explain what it is about their ideas that still leaves you with questions. For example "I can really get behind nassims' distrust of the scientific community and his wish for people to live in harmony with each other but i doesnt seen to go far enough for reasons such as..." What you can agree on with the person your speaking with will become the foundations for the bridge that allows them to see your point of view. If they can see that you share with them their basic values, then they can be open to the idea that these values are still valid even if their beliefs are not. It can be shameful to admit being fooled and it is not easy to have your self image shaken. One must take extra care to assure a person that there is nothing wrong with being misinformed without being patronizing. Very difficult, but maintaining that any person can see the truth is useful. The question is if the individual is worth your time. Thank you again, you have done the world a service. Bob said... Thanks Ryan! I don't know any physicists who aren't inundated by oddballs who think they've found a magic theory of everything. Either that or they think they've proved Einstein wrong, or they're convinced that their favourite charismatic guru has the solutions that the world's physicists have missed. Or some combination of all three. Maybe some physicists find them annoying, but most seem to quickly get used to them and ignore them. Jim Al Khalili's "sweet souls" comment sums it up very nicely. I'm sure he has Haramein in his box folder. I used to reply to them, but I soon learned that this never worked out well. Here's an article you might like, by someone who's turned it around in a way that benefits both sides. She's well-known for not taking any shit and not holding back when she disagrees with other scientists, but I imagine she'd be good with these people. Twenty years ago, Haramein might have been one of those sweet souls who'd write to Jim or Skype with Sabine, but sadly he's not that any more. He's had years of adulation and cash from fans of his crackpot stuff, so he's stuck with it now. All the best Bob said... Hi Awragash Yes, I agree with you. I don’t always say the right thing to people who see things differently. Sometimes I have the patience to try to respond in the spirit you suggest, but at other times I don’t. 
In particular, when people openly accuse me of something awful, or insist on demanding that their understanding is correct and all of mainstream physics has to listen to them, I often stop caring at that point. On the whole, I do try to assume everyone is coming from a good place. But I'm no counsellor, and I have a lot to learn about how to communicate with people whose distrust makes them hostile. As you say, it's extremely hard for people to admit to having been fooled, and the path of least resistance or least pain so often seems to be to cling ever more strongly. It's a very tricky web to untangle. Thanks for your thoughts, I appreciate it. I think the skills you're describing are needed more and more in this world, given recent events. Anonymous said... Having worked with this guy, UNFORTUNATELY, I can tell you he is a complete douchebag, an ignorant dick wannabe that never misses a chance to remind everyone how great he is, NOT... a megalomaniac prick of the worst kind... he continuously hallucinates profusely about aliens, probably because of all the drugs in his system. Taylor Westmore said... You made a comment about the vacuum energy ground state being the lowest energy it can possibly be. This is incorrect. The Casimir-Polder forces show that a non-zero, positive or negative energy-density ground state can occur if the EM field modes are suppressed or enhanced, as long as their 4-vectors sum to zero. This can be thought of in terms of vacuum polarization, where virtual particle flux is biased, muted or enhanced in the region of the scalar or higher-order tensor field gradient produced by the dot product of the EM 4-potential. Bob said... Hi Taylor, If you have a vacuum in a region of space, and two metal plates (or molecules or other objects) a long way apart, it's possible to extract work by allowing the plates to move closer together via the Casimir-Polder force. I agree that this can be thought of as a lowering of the energy density of the vacuum in that region of space. This is a local effect that relies on there being significant departures from the vacuum nearby (e.g. metal plates made of particles). In the bigger picture, the energy of this system would be lower if the metal plates were not present. In quantum field theory, the vacuum state is a global state, and is the lowest state possible by definition.
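To put a rough number on the Casimir-Polder attraction discussed in that last exchange, here is a minimal numerical sketch. It assumes the textbook result for two ideal, perfectly conducting parallel plates, $F/A=\pi^{2}\hbar c/(240\,d^{4})$, a formula neither commenter actually quotes:

```python
# Minimal sketch: Casimir pressure between two ideal, perfectly conducting
# parallel plates, P = pi^2 * hbar * c / (240 * d^4). This standard textbook
# formula is an assumption added for illustration; it is not quoted in the
# comments above.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure in pascals between ideal plates separated by d metres."""
    return math.pi**2 * hbar * c / (240 * d**4)

for d in (1e-6, 100e-9, 10e-9):   # 1 micron, 100 nm and 10 nm separations
    print(f"d = {d:.0e} m  ->  P ~ {casimir_pressure(d):.3g} Pa")
```

At a one-micron separation the pressure is only about a millipascal, which is why the effect needs such careful experiments to detect; it grows as $1/d^{4}$ as the plates approach.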
Harmonic oscillator – series solution

Required math: calculus
Required physics: harmonic oscillator

The complete Schrödinger equation for the harmonic oscillator potential is

\[
-\frac{\hbar^{2}}{2m}\frac{d^{2}\psi}{dx^{2}}+\frac{1}{2}kx^{2}\psi=E\psi \qquad (1)
\]

To solve this equation, we split the wave function $\psi$ into two factors: the first factor is the asymptotic behaviour for large $x$, and the second is a function $f$ which we have yet to find. To simplify the notation we introduced two auxiliary variables. The independent variable $y$ is related to the spatial variable $x$ by

\[
y\equiv\sqrt{\frac{m\omega}{\hbar}}\,x \qquad (2)
\]

and the parameter $\epsilon$ is related to the energy $E$ by

\[
\epsilon\equiv\frac{2E}{\hbar\omega} \qquad (3)
\]

The parameter $\omega$ is the frequency of the oscillator. After analyzing the asymptotic behaviour of the Schrödinger equation for the harmonic oscillator, we write the wave function in the form

\[
\psi(y)=e^{-y^{2}/2}f(y) \qquad (4)
\]

Substituting this back into the Schrödinger equation gives us a differential equation for $f(y)$:

\[
\frac{d^{2}f}{dy^{2}}-2y\frac{df}{dy}+(\epsilon-1)f=0 \qquad (5)
\]

If we hurl this equation into mathematical software like Maple, it tells us that the solution involves two forms of Kummer functions, otherwise known as confluent hypergeometric functions of the first and second kinds. Apart from being able to impress your friends in the pub, these terms don't really help us learn much about the physics. For that we need to solve the differential equation using a power series. The idea is to propose a solution of the form

\[
f(y)=\sum_{j=0}^{\infty}a_{j}y^{j} \qquad (6)
\]

The theory behind Taylor series in elementary calculus assures us that for any 'reasonable' function (that is, pretty well any function found in physics), it is possible to write the function as a power series, so we should be able to find such a solution. At this stage, we can't guarantee that such a solution will tell us much, but it's worth a try.

To use the series, we need to calculate its first two derivatives:

\[
\begin{aligned}
\frac{df}{dy} &= \sum_{j=0}^{\infty}ja_{j}y^{j-1} &(7)\\
\frac{d^{2}f}{dy^{2}} &= \sum_{j=0}^{\infty}j(j-1)a_{j}y^{j-2} &(8)\\
&= \sum_{j=0}^{\infty}(j+2)(j+1)a_{j+2}y^{j} &(9)
\end{aligned}
\]

The fancy footwork in the last line just relabels the summation index to make it more convenient for the next step, as we'll see. To convince yourself it is the same series as the line above it, just write out the first 4 or 5 terms in the series and you'll see it is the same. Notice also that in the first derivative series the first term is zero due to the factor of $j$, so we don't actually get a term with $y^{-1}$ in it.

We now want to substitute these derivatives back into the differential equation 5 we want to solve. The reason we juggled the summation index in the second derivative is that we want the series in all three terms in the equation to contain $y^{j}$ terms rather than $y$ to some other power. This makes it easier to group together the terms with equal powers of $y$.
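Before grinding through the substitution by hand, the factorization (4) can be checked symbolically. The following is a short sketch (using sympy; it is an added check, not part of the original derivation) verifying that the dimensionless form of the Schrödinger equation, $-\psi''+y^{2}\psi=\epsilon\psi$, reduces to equation (5) once $\psi=e^{-y^{2}/2}f(y)$ is inserted:

```python
# Sketch (sympy): check that psi = exp(-y^2/2) f(y) turns the dimensionless
# oscillator equation  -psi'' + y^2 psi = eps psi  into  f'' - 2y f' + (eps-1) f = 0.
import sympy as sp

y, eps = sp.symbols('y epsilon')
f = sp.Function('f')
psi = sp.exp(-y**2 / 2) * f(y)

lhs = -sp.diff(psi, y, 2) + y**2 * psi - eps * psi   # vanishes on solutions
residual = sp.simplify(lhs * sp.exp(y**2 / 2))       # strip off the Gaussian factor

# equation (5), which should equal the residual up to an overall sign
eq5 = sp.diff(f(y), y, 2) - 2*y*sp.diff(f(y), y) + (eps - 1) * f(y)
print(sp.simplify(residual + eq5))                   # prints 0
```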
Doing the substitution, we get:

\[
\sum_{j=0}^{\infty}(j+2)(j+1)a_{j+2}y^{j}-2\sum_{j=0}^{\infty}ja_{j}y^{j}+(\epsilon-1)\sum_{j=0}^{\infty}a_{j}y^{j} = 0 \qquad (10)
\]
\[
\sum_{j=0}^{\infty}\left[(j+2)(j+1)a_{j+2}-2ja_{j}+(\epsilon-1)a_{j}\right]y^{j} = 0 \qquad (11)
\]

From the mathematics of power series expansions, it is known that any given function's expansion is unique (the proof takes us too far into pure mathematics, so we'll leave it for now). That means that, having decided on a value for $\epsilon$, there is one and only one sequence of $a_{j}$'s that defines the function $f(y)$. So, since $y$ can be any value, the only way the above sum can be zero for all values of $y$ is if the coefficient of each power of $y$ vanishes separately. That is,

\[
(j+2)(j+1)a_{j+2}-2ja_{j}+(\epsilon-1)a_{j}=0 \qquad (12)
\]

This, in turn, gives a recursion relation for the coefficients:

\[
a_{j+2}=\frac{2j+1-\epsilon}{(j+1)(j+2)}a_{j} \qquad (13)
\]

Since we are solving a second order differential equation we would expect to have two arbitrary constants that must be determined by initial conditions and normalization, and we see that since the recursion formula relates every second coefficient, we need to specify both $a_{0}$ and $a_{1}$ to be able to generate all the coefficients. If we start off with $a_{0}$ we get all the even coefficients:

\[
\begin{aligned}
a_{2} &= \frac{1-\epsilon}{2}a_{0} &(14)\\
a_{4} &= \frac{5-\epsilon}{12}a_{2}=\frac{(5-\epsilon)(1-\epsilon)}{24}a_{0} &(15)\\
&\;\;\vdots
\end{aligned}
\]

There is a similar sequence of calculations for the odd coefficients starting with $a_{1}$.

That's about as far as we can go without using some external information to put some conditions on the series. As usual, we require the solution (the original solution, that is, $\psi$) to be normalizable. We now know that this solution has the form

\[
\begin{aligned}
\psi(y) &= e^{-y^{2}/2}f(y) &(16)\\
&= e^{-y^{2}/2}\sum_{j=0}^{\infty}a_{j}y^{j} &(17)
\end{aligned}
\]

So in order to be normalizable, the series will have to converge to some function that doesn't expand to infinity as fast as $e^{y^{2}/2}$. Otherwise the series term will kill off the negative exponential, and the overall wave function will not tend to zero as $y$ goes to infinity.

This seems like a difficult condition to check, but let's have a look at the asymptotic behaviour (for large $j$) of the recursion formula 13:

\[
\begin{aligned}
a_{j+2} &= \frac{2j+1-\epsilon}{(j+1)(j+2)}a_{j} &(18)\\
&= \frac{2j+1-\epsilon}{j^{2}+3j+2}a_{j} &(19)\\
&\sim \frac{2}{j}a_{j} &(20)
\end{aligned}
\]

Thus the ratio of two successive even terms (or two successive odd terms) in the series is

\[
\frac{a_{j+2}y^{j+2}}{a_{j}y^{j}}=\frac{2}{j}y^{2} \qquad (21)
\]
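To see the recursion relation 13 and this $2/j$ behaviour concretely, here is a short numerical sketch (an added illustration, not part of the original argument). For a generic, non-terminating value of $\epsilon$, the ratio of successive even coefficients does indeed settle towards $2/j$:

```python
# Sketch: generate the series coefficients from the recursion
#   a_{j+2} = (2j + 1 - eps) / ((j + 1)(j + 2)) * a_j          ... (13)
# and compare the ratio a_{j+2}/a_j with 2/j for a generic value of eps.

def series_coefficients(eps, a0=1.0, a1=0.0, n_terms=40):
    """Return a_0 ... a_{n_terms-1} for the given eps and starting values."""
    a = [0.0] * n_terms
    a[0], a[1] = a0, a1
    for j in range(n_terms - 2):
        a[j + 2] = (2*j + 1 - eps) / ((j + 1) * (j + 2)) * a[j]
    return a

a = series_coefficients(eps=0.7)       # 0.7 is not of the form 2j+1, so no termination
for j in (10, 20, 30):
    print(j, a[j + 2] / a[j], 2 / j)   # the two columns approach each other as j grows
```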
The Taylor series for $e^{x^{2}}$ is

$e^{x^{2}}=\sum_{j=0}^{\infty}\frac{x^{2j}}{j!} \qquad (22)$

$\phantom{e^{x^{2}}}=\sum_{j\;\mathrm{even}}\frac{x^{j}}{(j/2)!} \qquad (23)$

The ratio of two successive terms from this series is

$\frac{x^{j+2}/((j+2)/2)!}{x^{j}/(j/2)!}=\frac{2}{j+2}x^{2} \qquad (24)$

which for large $j$ is essentially the same as relation (21). And this is for only half (either the even or the odd terms) of the series; the other half will contribute another function of roughly equal size. Thus it looks like the series' asymptotic behaviour is that of $e^{y^{2}}$, so the overall behaviour of the wave function is $e^{y^{2}}e^{-y^{2}/2}=e^{y^{2}/2}$, which diverges and is therefore not normalizable.

This looks like a serious problem, but there is in fact a way out: if the series terminates after a finite number of terms, then the behaviour is that of a polynomial rather than an exponential, and multiplying any polynomial by $e^{-y^{2}/2}$ will always give a normalizable function. So if we can arrange things so that the recursion formula (13) gives $a_{j+2}=0$ for some $j$, then clearly all further terms (of that parity) will be zero. The condition to be satisfied is therefore

$2j+1-\epsilon=0 \qquad (25)$

$\epsilon=2j+1 \qquad (26)$

$E=\frac{1}{2}\hbar\omega(2j+1) \qquad (27)$

where $j$ is some integer 0, 1, 2, 3, … Note, however, that each choice of $j$, that is, each choice of where the series terminates, gives a different value for the energy. The lowest possible energy for the harmonic oscillator occurs when $j=0$ and is $E_{0}=\frac{1}{2}\hbar\omega$, and the energies increase at regular intervals of $\hbar\omega$, so the energy levels are all equally spaced. It is more usual to give the energy formula as

$E_{n}=\left(n+\frac{1}{2}\right)\hbar\omega \qquad (28)$

with $n=0,1,2,3,4,\ldots$

One note of caution here. Once we have chosen an energy level, this fixes the value of $j$ at which the series terminates. If $j$ is even, then the odd series must be zero right from the start, and vice versa. There is no way of getting both the even and the odd series to terminate at some intermediate value in the same solution. So if we choose an even value of $j$ we must have $a_{1}=0$ to remove all the odd terms from the sum, and conversely, if we choose $j$ to be odd, we must have $a_{0}=0$ to remove all the even terms.

The stationary states for the harmonic oscillator are therefore products of polynomials and the exponential factor. The polynomials turn out to be well studied in mathematics and are known as Hermite polynomials. We will explore their properties in another post.

To summarize the behaviour of the quantum harmonic oscillator, we'll list a few points.

1. The harmonic oscillator potential is parabolic and goes to infinity at infinite distance, so all states are bound states; there is no energy a particle can have that will allow it to be free.
2. The energies are equally spaced, with spacing $\hbar\omega$.
3. The lowest energy is the ground state $E_{0}=\hbar\omega/2$, so a particle always has positive, non-zero energy.
4. The stationary states consist of either an even or an odd polynomial multiplied by $e^{-y^{2}/2}=e^{-m\omega x^{2}/2\hbar}$, which is always even. Thus a stationary state is either an even or an odd function of $y$ (and hence of $x$).
5. The polynomial functions in the stationary states are Hermite polynomials.
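To see the recursion in action, here is a minimal numerical sketch (not part of the original post; the function name and the number of retained terms are our own choices) that generates the coefficients $a_j$ from the recursion formula (13) and confirms that choosing $\epsilon=2n+1$ terminates the series at degree $n$:

```python
import numpy as np

def series_coefficients(n, terms=20):
    """Coefficients a_j of f(y), generated from the recursion (13),
    a_{j+2} = (2j + 1 - eps)/((j + 1)(j + 2)) * a_j, with eps = 2n + 1."""
    eps = 2 * n + 1
    a = np.zeros(terms)
    a[n % 2] = 1.0   # seed the even branch (a_0) or the odd branch (a_1); the other is zero
    for j in range(terms - 2):
        a[j + 2] = (2 * j + 1 - eps) / ((j + 1) * (j + 2)) * a[j]
    return a

# With eps = 2n + 1 every coefficient beyond j = n vanishes, leaving a
# degree-n polynomial proportional to the Hermite polynomial H_n(y).
for n in range(4):
    print(n, np.trim_zeros(series_coefficients(n), 'b'))
# n = 2, for instance, gives [1, 0, -2], i.e. f(y) = 1 - 2y^2, proportional
# to H_2(y) = 4y^2 - 2.
```

Running this for n = 0 through 3 reproduces, up to overall normalization, the first few Hermite polynomials.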
Professor Erich Krautz Prize winner 2015: Zhe Wang

Low-dimensional quantum magnets provide unique possibilities to study ground and excited states of quantum models, to explore new phases of quantum matter, and to investigate the interplay of quantum and thermal fluctuations. In his dissertation, entitled "Terahertz and Infrared Spectroscopy on Low-Dimensional Quantum Magnets", Zhe Wang studied quantum phase transitions and quantum spin dynamics, and their interplay with lattice and orbital degrees of freedom, in a variety of low-dimensional quantum spin systems in static and pulsed high magnetic fields up to 60 tesla. In Zhe Wang's award-winning dissertation, the confinement of spinon excitations, an analogue of the concept of quark confinement, is realized and identified in a spin-1/2 Heisenberg-Ising antiferromagnetic chain, which can be described by a one-dimensional Schrödinger equation. Careful measurement of spin excitations in high magnetic fields reveals quantum phase transitions. In the spin-1/2 system, a magnetic field induces a quantum critical phase characterized by string excitations and fractional spin excitations that emerge above the phase transition. In a spin-1 antiferromagnetic chain, where the Haldane phase is realized as the ground state, Ising- and XY-type antiferromagnetic phases are observed above their respective field-induced quantum phase transitions.
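The spin-1/2 Heisenberg-Ising (XXZ) chain mentioned above can be written down concretely. Below is a minimal exact-diagonalization sketch; it is not connected to the dissertation's actual data or methods, and the coupling J, anisotropy delta, and chain length are illustrative choices:

```python
import numpy as np

# Spin-1/2 operators (Pauli matrices divided by 2) and the 2x2 identity.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    mats = [op if k == i else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def xxz_chain(n, J=1.0, delta=2.0):
    """H = J * sum_i [Sx_i Sx_{i+1} + Sy_i Sy_{i+1} + delta * Sz_i Sz_{i+1}],
    open boundary conditions; delta > 1 is the Ising-like (gapped) regime."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        for op, w in ((sx, 1.0), (sy, 1.0), (sz, delta)):
            H += J * w * site_op(op, i, n) @ site_op(op, i + 1, n)
    return H

E = np.linalg.eigvalsh(xxz_chain(8))
print("lowest excitation energies:", np.round(E[:4] - E[0], 4))
```

For delta > 1 the model sits in the Ising-like regime referred to above; the printed excitation energies above the ground state give a crude picture of the kind of gapped spectrum that spectroscopy probes.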
Friday, April 17, 2009

Does gravity change with time?

El Cid said...
I think your idea about the aether may be equivalent to the Dirac sea, which is a consequence of the Dirac equation. This equation is like the Schrödinger equation but with Lorentz symmetry. According to Wikipedia, Dirac, Einstein and others recognised that it is related to the 'metaphysical' aether [1]: "... with the new theory of electrodynamics we are rather forced to have an aether." – P.A.M. Dirac, 'Is There An Aether?', Nature, v.168, 1951, p.906. As you can see, the idea of the aether, although it has often been a successful idea, is not liked by the theoretical physics community.

Zephir said...
El Cid, the aether of AWT is a Boltzmann gas of infinite mass/energy density. As such it isn't a consequence of the Dirac equation and isn't equivalent to the Dirac sea - for example, it doesn't treat antiparticles as "holes" in the aether. Such antiparticles would have a negative rest mass, i.e. they would be tachyons. But you're right, theorists don't like the aether concept, because they tend toward easily predictable solutions. Multiparticle systems are difficult to handle with formal math.

El Cid said...
Could AWT be modelled by an ideal gas? Namely, is this gas described by the equation PV=nRT? What would n be for the aether that fills the Universe?

Zephir said...
Indeed - but the state equation of an ideal gas is valid only for a sparse ideal gas, where the mean free path of the density fluctuations is negligible compared to their size (i.e. energy density >> mass density). Whereas in the aether, mass density ≈ energy density - at least for the vacuum phase, because we can see only density gradients in it. Such an equilibrium corresponds to a condensing supercritical fluid with foamy density fluctuations, rather than an ideal gas under terrestrial conditions. The above doesn't say the aether isn't such an ideal gas - but we tend to ignore its indeterministic portion, so the aether appears much closer to a supercritical state than it really may be. The n constant depends on the molar mass, which is an arbitrary number, depending on what we consider to be the fundamental particles of the aether.

Anonymous said...
BLABLA...Zeph & El Cid HAHA

El Cid said...
The problem could be that in an ideal gas the density is constant, because it's a system in thermodynamic equilibrium. As the density is constant, its gradient is zero. I think that if the aether can be modelled like an ideal gas, then matter couldn't arise from the density gradient of the aether, could it? If I'm writing nonsense, sorry, but my knowledge of AWT is rather sparse. And sorry, but BLABLABLABLA could be useful information, because AWT could be a very good idea. Don't discard AWT until you have proven that the theory is wrong, thanks.

El Cid said...
And if matter doesn't arise, then gravity doesn't change. In this case, the Universe would be a dense ideal gas, but as this gas would be in thermodynamic equilibrium, it would be static and time wouldn't exist. As time wouldn't exist, we could model the universe with only three dimensions, and special relativity would not be necessary either. Moreover, if the aether had a constant density, we would only need one dimension to describe the Universe, because the three dimensions would be identical to one another, wouldn't they?

Zephir said...
/* ...in an ideal gas the density is constant... */
Only macroscopically - from this the uniformity of the Universe follows.
At a sufficiently local level the density fluctuations of an ideal gas follow the Maxwell-Boltzmann distribution, and the temperature fluctuations do as well. The decision of how "local" this scope should be depends solely on the density of the gas. In a very dense gas, the random density fluctuations become deep, multidimensional and pretty large with respect to the more subtle, shallow fluctuations.

El Cid said...
Now I think I see it clearly. One thing is macroscopic physics, which deals with averaged quantities, and another thing is microscopic physics, which deals with the fluctuations of the averaged quantities. Sorry for my ignorance. Maybe one prediction of AWT is that the dimensions of space arise due to the density fluctuations of the aether, namely, due to the density gradient of the aether. One more thing to say: it seems to me that you use the classical statistical distribution, i.e. the Maxwell-Boltzmann distribution, to describe the macroscopic behaviour of the aether, instead of using quantum statistical distributions like the Fermi-Dirac distribution or the Bose-Einstein distribution. Why? Do you think classical physics could be more fundamental than quantum mechanics, as Einstein dreamt?

Zephir said...
/* ...you think classical physics could be more fundamental than quantum mechanics, as Einstein dreamt... */
We cannot build a fundamental description of reality on equations and postulates which we don't understand in full depth. Only the behavior of an abstract system of colliding particles is fully predictable from its very beginning, and it doesn't need to be verified in experiments.

Zephir said...
/* ...the dimensions of the space arise due to the density fluctuations of the aether... */
Of course - by AWT the number of dimensions of our space-time follows from the compactness of the 3D kissing-hyperspheres arrangement.

Zephir said...
/* ...sorry for my ignorance... */
No problemo... But your case illustrates clearly that people have a certain problem adopting AWT ideas even at the moment when they're quite open to them, not to mention the situation when they're hostile because they promote alternative TOEs, for example. For me it's somewhat surprising why people, even after two thousand years of the aether concept's existence, have a problem considering that the aether is a very dense gas forming the vacuum. How else could light spread through it at such a large energy density, for example? In my opinion, every physicist should consider this idea in the first place, because it's the most natural model given everything we know about Nature.

Anonymous said...
Zeph, don't be a fool!!

Zephir said...
Why not - I like it? Fools are simple people who can comprehend only trivial things - whereas fundamental concepts are supposed to be simple. Therefore fools are forced to think in an Occam's razor way automatically. The common problem of experts is (among other things) that their thinking is naturally divergent, as they tend to get stuck in various details and complexities of their models - so they cannot see the forest for the trees. Try to imagine you're finding your way in a fractal landscape - a thorough approach based on deep analysis of all the details is apparently not the best approach here.
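The point that an ideal gas looks uniform macroscopically while fluctuating locally is easy to check numerically. The following sketch is our own illustration, not part of the thread: place particles uniformly at random in a box and count them in cells of decreasing size; the relative fluctuation of the counts grows as the probe volume shrinks, following the 1/sqrt(n) law of Poisson statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# N point particles placed uniformly at random in a unit box: an ideal-gas snapshot.
N = 1_000_000
positions = rng.random((N, 3))

# Count particles in cubic cells of decreasing size and compare the relative
# scatter of the counts: fluctuations grow as the probe volume shrinks.
for cells_per_side in (2, 8, 32):
    idx = (positions * cells_per_side).astype(int)   # cell index of each particle
    flat = idx[:, 0] * cells_per_side**2 + idx[:, 1] * cells_per_side + idx[:, 2]
    counts = np.bincount(flat, minlength=cells_per_side**3)
    rel = counts.std() / counts.mean()
    print(f"{cells_per_side**3:6d} cells: mean n = {counts.mean():10.1f}, "
          f"relative fluctuation = {rel:.4f}  (1/sqrt(n) = {1/np.sqrt(counts.mean()):.4f})")
```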
Becoming a Physicist

Question: What drew you to math at a young age?

Gates: My first fascination with math, I have this first conscious memory of sort of thinking about it, goes back to when I was age eight. Mathematics in our family is something we kind of like. My grandfather could neither read nor write, but he could do simple arithmetic. My dad never graduated from high school, but during the period when he was trying to get an equivalency exam he studied mathematics. So I remember watching him learning trigonometry and algebra. And, you know, that's kind of unusual, to watch your dad learn mathematics. And then I always did well in school in mathematical subjects also. So it's kind of the family bug. My kids like mathematics, interestingly enough. So we're just like- you know, we're fond of it. But this conscious memory that you ask about goes to a specific event. When I was about nine years old, dad had bought a set of Encyclopedia Britannica. I was paging through one day and I found this thing that was clearly mathematics because it had equal signs in it, it had plus signs in it, but the rest of it, as the saying goes, was Greek to me. It was literally Greek symbols. And the equation that I saw was one of the most important equations for understanding the world of the very small; it's called the Schrödinger equation. And for me, this thing felt like walking along a beach, seeing a very beautiful and shiny shell, looking at it, and saying, gee, I wonder what made this. And so that's the reaction I had to it.

Question: What is the Schrödinger equation?

Gates: Well, a lot of people have heard about quantum theory, this sort of spooky behavior that goes on when you look at parts of our universe that are extremely small, like atoms. So you need to have a precise understanding of how these tiny objects work. And the way that science does this is we have found that there's only one human language that is accurately constructed enough so that we can describe nature, and that language turns out to be mathematics. So when we write our equations, we're actually trying to describe something. So the Schrödinger equation is the first equation that describes the quantum weirdness that electrons and atoms demonstrate and which allows us to build things like cell phones.

Gates traces his early childhood roots as a lover of science and math.
Quantum mechanics

Wavefunctions of the electron in a hydrogen atom at different energy levels. Quantum mechanics cannot predict the exact location of a particle in space, only the probability of finding it at different locations.[1] The brighter areas represent a higher probability of finding the electron.

Classical physics, the physics existing before quantum mechanics, describes nature at ordinary (macroscopic) scale. Most theories in classical physics can be derived from quantum mechanics as an approximation valid at large (macroscopic) scale.[3] Quantum mechanics differs from classical physics in that energy, momentum, angular momentum and other quantities of a system are restricted to discrete values (quantization); objects have characteristics of both particles and waves (wave-particle duality); and there are limits to the precision with which quantities can be measured (uncertainty principle).[note 1]

Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations.[7] In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper titled On the nature of light and colours. This experiment played a major role in the general acceptance of the wave theory of light.

In 1838, Michael Faraday discovered cathode rays. These studies were followed by the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system can be discrete, and the 1900 quantum hypothesis of Max Planck.[8] Planck's hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets) precisely matched the observed patterns of black-body radiation.

In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation,[9] known as Wien's law in his honor. Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was valid only at high frequencies and underestimated the radiance at low frequencies. Later, Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law, which led to the development of quantum mechanics.

Following Max Planck's solution in 1900 to the black-body radiation problem (reported 1859), Albert Einstein offered a quantum-based theory to explain the photoelectric effect (1905, reported 1887). Around 1900-1910, the atomic theory and the corpuscular theory of light[10] first came to be widely accepted as scientific fact; these latter theories can be viewed as quantum theories of matter and electromagnetic radiation, respectively.

Among the first to study quantum phenomena in nature were Arthur Compton, C. V. Raman, and Pieter Zeeman, each of whom has a quantum effect named after him. Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. At the same time, Ernest Rutherford experimentally discovered the nuclear model of the atom, for which Niels Bohr developed his theory of atomic structure, which was later confirmed by the experiments of Henry Moseley.
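The statement above that Wien's law fails at low frequencies can be checked numerically. A minimal sketch, not part of the article (the temperature is an arbitrary illustrative choice), compares Wien's approximation with Planck's law:

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI: Planck, speed of light, Boltzmann

def planck(nu, T):
    """Planck's law for spectral radiance."""
    return 2 * h * nu**3 / c**2 / (np.exp(h * nu / (k * T)) - 1)

def wien(nu, T):
    """Wien's approximation: replaces 1/(e^x - 1) by e^{-x}."""
    return 2 * h * nu**3 / c**2 * np.exp(-h * nu / (k * T))

T = 5000.0                           # arbitrary illustrative temperature (K)
for nu in (1e12, 1e13, 1e14, 1e15):  # low -> high frequency
    x = h * nu / (k * T)
    print(f"h nu / kT = {x:8.4f}   Wien/Planck = {wien(nu, T) / planck(nu, T):.4f}")
# The ratio tends to 1 at high frequency but to 0 at low frequency:
# Wien's form underestimates the low-frequency radiance, as the text notes.
```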
In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits, a concept also introduced by Arnold Sommerfeld.[11] This phase is known as the old quantum theory.

Max Planck is considered the father of the quantum theory. According to Planck, each energy element E is proportional to its frequency ν:

$E=h\nu$

where h is Planck's constant. Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself.[12] In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery.[13] However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. He won the 1921 Nobel Prize in Physics for this work. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete quantum of energy that was dependent on its frequency.[14]

In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons (1926). In 1926 Erwin Schrödinger suggested a partial differential equation for the wave functions of particles like electrons. When effectively restricted to a finite region, this equation allowed only certain modes, corresponding to discrete quantum states, whose properties turned out to be exactly the same as implied by matrix mechanics.[15] From Einstein's simple postulation was born a flurry of debating, theorizing, and testing. Thus the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.

By 1930, quantum mechanics had been further unified and formalized by the work of David Hilbert, Paul Dirac and John von Neumann,[16] with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the 'observer'. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. Its speculative modern developments include string theory and quantum gravity theories. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors[17] and superfluids.[18]

The word quantum derives from the Latin, meaning "how great" or "how much".[19] In quantum mechanics, it refers to a discrete unit assigned to certain physical quantities such as the energy of an atom at rest (see Figure 1). The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics dealing with atomic and subatomic systems which is today called quantum mechanics.
It underlies the mathematical framework of many fields of physics and chemistry, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational physics, computational chemistry, quantum chemistry, particle physics, nuclear chemistry, and nuclear physics.[20] Some fundamental aspects of the theory are still actively studied.[21]

Quantum mechanics is essential to understanding the behavior of systems at atomic length scales and smaller. If the physical nature of an atom were solely described by classical mechanics, electrons would not orbit the nucleus, since orbiting electrons emit radiation (due to circular motion) and would eventually collide with the nucleus due to this loss of energy. This framework was unable to explain the stability of atoms. Instead, electrons remain in an uncertain, non-deterministic, smeared, probabilistic wave-particle orbital about the nucleus, defying the traditional assumptions of classical mechanics and electromagnetism.[22]

Broadly speaking, quantum mechanics incorporates four classes of phenomena for which classical physics cannot account: the quantization of certain physical quantities, wave-particle duality, the uncertainty principle, and quantum entanglement.

Mathematical formulations

In the mathematically rigorous formulation of quantum mechanics developed by Paul Dirac,[23] David Hilbert,[24] John von Neumann,[25] and Hermann Weyl,[26] the possible states of a quantum mechanical system are symbolized[27] as unit vectors (called state vectors). Formally, these reside in a complex separable Hilbert space, variously called the state space or the associated Hilbert space of the system, that is well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system; for example, the state space for position and momentum states is the space of square-integrable functions, while the state space for the spin of a single proton is just the product of two complex planes.

Each observable is represented by a maximally Hermitian (precisely: by a self-adjoint) linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. If the operator's spectrum is discrete, the observable can attain only those discrete eigenvalues. According to one interpretation, as the result of a measurement the wave function containing the probability information for a system collapses from a given initial state to a particular eigenstate. The possible results of a measurement are the eigenvalues of the operator representing the observable, which explains the choice of Hermitian operators, for which all the eigenvalues are real. The probability distribution of an observable in a given state can be found by computing the spectral decomposition of the corresponding operator. Heisenberg's uncertainty principle is represented by the statement that the operators corresponding to certain observables do not commute.

The time evolution of a quantum state is described by the Schrödinger equation, in which the Hamiltonian (the operator corresponding to the total energy of the system) generates the time evolution.
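The formalism just described is easy to make concrete in a finite-dimensional example. The following sketch is our own illustration (the state, observable and Hamiltonian are toy choices): it represents a state as a unit vector, extracts measurement outcomes and Born-rule probabilities from the spectral decomposition of a self-adjoint operator, and builds the unitary time evolution generated by a Hamiltonian:

```python
import numpy as np

# A normalized state vector in a two-dimensional Hilbert space (a spin-1/2 system).
psi = np.array([1, 1j]) / np.sqrt(2)

# An observable: a self-adjoint operator (here the Pauli z matrix).
Sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Spectral decomposition: the possible measurement outcomes are the eigenvalues,
# and the Born rule gives P(outcome) = |<eigenvector, psi>|^2.
vals, vecs = np.linalg.eigh(Sz)
for val, vec in zip(vals, vecs.T):
    print(f"outcome {val:+.0f} with probability {abs(np.vdot(vec, psi))**2:.2f}")

# Time evolution: the Hamiltonian generates the unitary U(t) = exp(-iHt/hbar)
# (hbar = 1 here), built from H's own spectral decomposition.
H = np.array([[0, 1], [1, 0]], dtype=complex)          # a toy Hamiltonian
e, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * e * 0.5)) @ V.conj().T    # evolution for t = 0.5
psi_t = U @ psi
print("norm preserved:", round(float(np.linalg.norm(psi_t)), 6))
```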
The time evolution of wave functions is deterministic in the sense that, given a wave function at an initial time, it makes a definite prediction of what the wave function will be at any later time.[36]

Fig. 1: Probability densities corresponding to the wave functions of an electron in a hydrogen atom possessing definite energy levels (increasing from the top of the image to the bottom: n = 1, 2, 3, ...) and angular momenta (increasing across from left to right: s, p, d, ...). Denser areas correspond to higher probability density in a position measurement. Such wave functions are directly comparable to Chladni's figures of acoustic modes of vibration in classical physics and are modes of oscillation as well, possessing a sharp energy and, thus, a definite frequency. The angular momentum and energy are quantized and take only discrete values like those shown (as is the case for resonant frequencies in acoustics).

Some wave functions produce probability distributions that are constant, or independent of time, such as when the system is in a stationary state of constant energy, so that time vanishes in the absolute square of the wave function. Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics it is described by a static, spherically symmetric wave function surrounding the nucleus (Fig. 1) (note, however, that only the lowest angular momentum states, labeled s, are spherically symmetric).[40]

The Schrödinger equation acts on the entire probability amplitude, not merely its absolute value. Whereas the absolute value of the probability amplitude encodes information about probabilities, its phase encodes information about the interference between quantum states. This gives rise to the "wave-like" behavior of quantum states.

As it turns out, analytic solutions of the Schrödinger equation are available for only a very small number of relatively simple model Hamiltonians, of which the quantum harmonic oscillator, the particle in a box, the dihydrogen cation, and the hydrogen atom are the most important representatives. Even the helium atom, which contains just one more electron than does the hydrogen atom, has defied all attempts at a fully analytic treatment. There exist several techniques for generating approximate solutions, however. In the important method known as perturbation theory, one uses the analytic result for a simple quantum mechanical model to generate a result for a more complicated model that is related to the simpler model by (for one example) the addition of a weak potential energy. Another method is the "semi-classical equation of motion" approach, which applies to systems for which quantum mechanics produces only weak (small) deviations from classical behavior. These deviations can then be computed based on the classical motion. This approach is particularly important in the field of quantum chaos.

Mathematically equivalent formulations of quantum mechanics

There are numerous mathematically equivalent formulations of quantum mechanics.
One of the oldest and most commonly used formulations is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics: matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger).[41]

Especially since Werner Heisenberg was awarded the Nobel Prize in Physics in 1932 for the creation of quantum mechanics, the role of Max Born in the development of QM was overlooked until the 1954 Nobel award. The role is noted in a 2005 biography of Born, which recounts his role in the matrix formulation of quantum mechanics and the use of probability amplitudes. Heisenberg himself acknowledged having learned matrices from Born, as published in a 1940 festschrift honoring Max Planck.[42]

In the matrix formulation, the instantaneous state of a quantum system encodes the probabilities of its measurable properties, or "observables". Examples of observables include energy, position, momentum, and angular momentum. Observables can be either continuous (e.g., the position of a particle) or discrete (e.g., the energy of an electron bound to a hydrogen atom).[43]

An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics.

Interactions with other scientific theories

Unsolved problem in physics: It has proven difficult to construct quantum models of gravity, the remaining fundamental force. Semi-classical approximations are workable and have led to predictions such as Hawking radiation. However, the formulation of a complete theory of quantum gravity is hindered by apparent incompatibilities between general relativity (the most accurate theory of gravity currently known) and some of the fundamental assumptions of quantum theory. The resolution of these incompatibilities is an area of active research, and theories such as string theory are among the possible candidates for a future theory of quantum gravity.

Quantum mechanics and classical physics

Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy.[46] According to the correspondence principle between classical and quantum mechanics, all objects obey the laws of quantum mechanics, and classical mechanics is just an approximation for large systems of objects (or a statistical quantum mechanics of a large collection of particles).[47] The laws of classical mechanics thus follow from the laws of quantum mechanics as a statistical average at the limit of large systems or large quantum numbers.[48] However, chaotic systems do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.

Quantum coherence is an essential difference between classical and quantum theories, as illustrated by the Einstein-Podolsky-Rosen (EPR) paradox, an attack on a certain philosophical interpretation of quantum mechanics by an appeal to local realism.[49] Quantum interference involves adding together probability amplitudes, whereas classical "waves" involve adding together intensities.
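The difference between adding amplitudes and adding intensities can be illustrated directly. In the following sketch, which is our own illustration (the two-slit geometry and all parameters are made up), two unit-amplitude waves are superposed: the amplitude rule produces interference fringes, while the intensity-addition rule gives a flat distribution.

```python
import numpy as np

# Two single-slit amplitudes at detector positions x (toy plane-wave model;
# only the interference structure matters here).
x = np.linspace(-5, 5, 11)
k, d = 2.0, 1.0                                     # wavenumber and slit separation
psi1 = np.exp(1j * k * np.hypot(x - d / 2, 10.0))   # path length from slit 1
psi2 = np.exp(1j * k * np.hypot(x + d / 2, 10.0))   # path length from slit 2

amplitude_sum = np.abs(psi1 + psi2)**2              # quantum rule: add amplitudes, then square
intensity_sum = np.abs(psi1)**2 + np.abs(psi2)**2   # "classical" rule: add intensities

# The amplitude rule oscillates between 0 and 4 (fringes); the intensity rule
# is flat at 2. The cross term 2*Re(conj(psi1)*psi2) is the interference.
print(np.round(amplitude_sum, 2))
print(np.round(intensity_sum, 2))
```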
For microscopic bodies, the extension of the system is much smaller than the coherence length, which gives rise to long-range entanglement and other nonlocal phenomena characteristic of quantum systems.[50] Quantum coherence is not typically evident at macroscopic scales, though an exception to this rule may occur at extremely low temperatures (i.e. approaching absolute zero), at which quantum behavior may manifest itself macroscopically.[51] This is in accordance with the following observations:

• While the seemingly "exotic" behavior of matter posited by quantum mechanics and relativity theory becomes more apparent when dealing with particles of extremely small size or velocities approaching the speed of light, the laws of classical, often considered "Newtonian", physics remain accurate in predicting the behavior of the vast majority of "large" objects (on the order of the size of large molecules or bigger) at velocities much smaller than the velocity of light.[53]

Copenhagen interpretation of quantum versus classical kinematics

A big difference between classical and quantum mechanics is that they use very different kinematic descriptions.[54] In Niels Bohr's mature view, quantum mechanical phenomena are required to be experiments, with complete descriptions of all the devices for the system: preparative, intermediary, and finally measuring. The descriptions are in macroscopic terms, expressed in ordinary language, supplemented with the concepts of classical mechanics.[55][56][57][58] The initial condition and the final condition of the system are respectively described by values in a configuration space, for example a position space, or some equivalent space such as a momentum space. Quantum mechanics does not admit a completely precise description, in terms of both position and momentum, of an initial condition or "state" (in the classical sense of the word) that would support a precisely deterministic and causal prediction of a final condition.[59][60] In this sense, advocated by Bohr in his mature writings, a quantum phenomenon is a process, a passage from initial to final condition, not an instantaneous "state" in the classical sense of that word.[61][62]

Thus there are two kinds of processes in quantum mechanics: stationary and transitional. For a stationary process, the initial and final condition are the same. For a transition, they are different. By definition, if only the initial condition is given, the process is not determined.[59] Given its initial condition, prediction of its final condition is possible, causally but only probabilistically, because the Schrödinger equation is deterministic for wave function evolution, but the wave function describes the system only probabilistically.[63][64]

General relativity and quantum mechanics

Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and in the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between the two theories has been a major goal of 20th- and 21st-century physics. Many prominent physicists, including Stephen Hawking, have labored for many years in the attempt to discover a theory underlying everything.
This TOE would combine not only the models of subatomic physics but also derive the four fundamental forces of nature (the strong force, electromagnetism, the weak force, and gravity) from a single force or phenomenon. While Stephen Hawking was initially a believer in the Theory of Everything, after considering Gödel's incompleteness theorem he concluded that one is not obtainable, and stated so publicly in his lecture "Gödel and the End of Physics" (2002).[69]

Attempts at a unified field theory

The quest to unify the fundamental forces through quantum mechanics is still ongoing. Quantum electrodynamics (or "quantum electromagnetism"), which is currently (in the perturbative regime at least) the most accurately tested physical theory in competition with general relativity,[70][71] has been successfully merged with the weak nuclear force into the electroweak force, and work is currently being done to merge the electroweak and strong force into the electrostrong force. Current predictions state that at around 10^14 GeV the three aforementioned forces are fused into a single unified field.[72] Beyond this "grand unification", it is speculated that it may be possible to merge gravity with the other three gauge symmetries, expected to occur at roughly 10^19 GeV. However, and while special relativity is parsimoniously incorporated into quantum electrodynamics, the expanded general relativity, currently the best theory describing the gravitational force, has not been fully incorporated into quantum theory.

One of those searching for a coherent TOE is Edward Witten, a theoretical physicist who formulated M-theory, an attempt at describing the supersymmetry-based string theory. M-theory posits that our apparent 4-dimensional spacetime is in reality an 11-dimensional spacetime containing 10 spatial dimensions and 1 time dimension, although 7 of the spatial dimensions are, at lower energies, completely "compactified" (or infinitely curved) and not readily amenable to measurement or probing.

Another popular theory is loop quantum gravity (LQG), a theory first proposed by Carlo Rovelli that describes the quantum properties of gravity. It is also a theory of quantum space and quantum time, because in general relativity the geometry of spacetime is a manifestation of gravity. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. The main output of the theory is a physical picture of space where space is granular. The granularity is a direct consequence of the quantization. It has the same nature as the granularity of the photons in the quantum theory of electromagnetism, or the discrete levels of the energy of the atoms. But here it is space itself which is discrete. More precisely, space can be viewed as an extremely fine fabric or network "woven" of finite loops. These networks of loops are called spin networks. The evolution of a spin network over time is called a spin foam. The predicted size of this structure is the Planck length, which is approximately 1.616×10^-35 m. According to the theory, there is no meaning to length shorter than this (cf. Planck scale energy). Therefore, LQG predicts that not just matter, but space itself, has an atomic structure.

Philosophical implications

Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations.
Even fundamental issues, such as Max Born's basic rules concerning probability amplitudes and probability distributions, took decades to be appreciated by society and many leading scientists. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics."[73] According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics."[74]

The Copenhagen interpretation, due largely to Niels Bohr and Werner Heisenberg, remains the most widely accepted amongst physicists, some 75 years after its enunciation. According to this interpretation, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but must instead be considered a final renunciation of the classical idea of "causality". It is also believed therein that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the conjugate nature of evidence obtained under different experimental situations.

Albert Einstein, himself one of the founders of quantum theory, did not accept some of the more philosophical or metaphysical interpretations of quantum mechanics, such as the rejection of determinism and of causality. He is famously quoted as saying, in response to this aspect, "God does not play with dice".[75] He rejected the concept that the state of a physical system depends on the experimental arrangement for its measurement. He held that a state of nature occurs in its own right, regardless of whether or how it might be observed. In that view, he is supported by the currently accepted definition of a quantum state, which remains invariant under arbitrary choice of configuration space for its representation, that is to say, manner of observation. He also held that underlying quantum mechanics there should be a theory that thoroughly and directly expresses the rule against action at a distance; in other words, he insisted on the principle of locality. He considered, but rejected on theoretical grounds, a particular proposal for hidden variables to obviate the indeterminism or acausality of quantum mechanical measurement. He considered that quantum mechanics was a currently valid but not a permanently definitive theory for quantum phenomena. He thought its future replacement would require profound conceptual advances, and would not come quickly or easily. The Bohr-Einstein debates provide a vibrant critique of the Copenhagen interpretation from an epistemological point of view.

In arguing for his views, Einstein produced a series of objections, the most famous of which has become known as the Einstein-Podolsky-Rosen paradox. John Bell showed that this EPR paradox led to experimentally testable differences between quantum mechanics and theories that rely on added hidden variables. Experiments have been performed confirming the accuracy of quantum mechanics, thereby demonstrating that quantum mechanics cannot be improved upon by the addition of local hidden variables.[76] Alain Aspect's initial experiments in 1982, and many subsequent experiments since, have definitively verified quantum entanglement. By the early 1980s, experiments had shown that such inequalities were indeed violated in practice, so that there were in fact correlations of the kind suggested by quantum mechanics.
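As a concrete illustration of such a violation (our own sketch, not from the article): for the spin singlet, quantum mechanics predicts the correlation E(a,b) = -cos(a - b) between spin measurements along directions a and b, and the standard CHSH combination of four such correlations exceeds the bound of 2 obeyed by any local hidden-variable theory.

```python
import numpy as np

def E(a, b):
    """Quantum correlation of spin measurements along angles a, b on a singlet pair."""
    return -np.cos(a - b)

# Standard CHSH angle choices (radians)
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.828 > 2, the local-hidden-variable bound
```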
At first these just seemed like isolated esoteric effects, but by the mid-1990s they were being codified in the field of quantum information theory, and led to constructions with names like quantum cryptography and quantum teleportation.[77] Entanglement, as demonstrated in Bell-type experiments, does not, however, violate causality, since no transfer of information happens. Quantum entanglement forms the basis of quantum cryptography, which is proposed for use in high-security commercial applications in banking and government.

The Everett many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.[78] This is not accomplished by introducing some "new axiom" to quantum mechanics, but on the contrary, by removing the axiom of the collapse of the wave packet. All of the possible consistent states of the measured system and the measuring apparatus (including the observer) are present in a real physical, not just formally mathematical as in other interpretations, quantum superposition. Such a superposition of consistent state combinations of different systems is called an entangled state. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we can only observe the universe (i.e., the consistent state contribution to the aforementioned superposition) that we, as observers, inhabit. Everett's interpretation is perfectly consistent with John Bell's experiments and makes them intuitively understandable. However, according to the theory of quantum decoherence, these "parallel universes" will never be accessible to us. The inaccessibility can be understood as follows: once a measurement is done, the measured system becomes entangled with both the physicist who measured it and a huge number of other particles, some of which are photons flying away at the speed of light towards the other end of the universe. In order to prove that the wave function did not collapse, one would have to bring all these particles back and measure them again, together with the system that was originally measured. Not only is this completely impractical, but even if one could theoretically do this, it would have to destroy any evidence that the original measurement took place (including the physicist's memory).

In light of these Bell tests, Cramer (1986) formulated his transactional interpretation,[79] which is unique in providing a physical explanation for the Born rule.[80] Relational quantum mechanics appeared in the late 1990s as a modern derivative of the Copenhagen interpretation.

Applications

Quantum mechanics has had enormous[81] success in explaining many of the features of our universe. Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Quantum mechanics has strongly influenced string theories, candidates for a Theory of Everything (see reductionism). Quantum mechanics is also critically important for understanding how individual atoms are joined by covalent bonds to form molecules. The application of quantum mechanics to chemistry is known as quantum chemistry.
Quantum mechanics can also provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others and the magnitudes of the energies involved.[82] Furthermore, most of the calculations performed in modern computational chemistry rely on quantum mechanics.

Many modern electronic devices are designed using quantum mechanics. Examples include the laser, the transistor (and thus the microchip), the electron microscope, and magnetic resonance imaging (MRI). The study of semiconductors led to the invention of the diode and the transistor, which are indispensable parts of modern electronic systems, computers and telecommunication devices. Other applications are the laser diode and the light-emitting diode, which are high-efficiency sources of light.

Many electronic devices operate by means of quantum tunneling. It is even present in the simple light switch: the switch would not work if electrons could not quantum tunnel through the layer of oxidation on the metal contact surfaces. Flash memory chips found in USB drives use quantum tunneling to erase their memory cells. Some negative differential resistance devices, such as the resonant tunneling diode, also utilize the quantum tunneling effect. Unlike classical diodes, the current in such a device is carried by resonant tunneling through two or more potential barriers. Its negative resistance behavior can only be understood with quantum mechanics: as a confined state moves close to the Fermi level, the tunnel current increases; as it moves away, the current decreases. Quantum mechanics is necessary for understanding and designing such electronic devices.

Researchers are currently seeking robust methods of directly manipulating quantum states. Efforts are being made to more fully develop quantum cryptography, which will theoretically allow guaranteed secure transmission of information. An inherent advantage yielded by quantum cryptography when compared to classical cryptography is the detection of passive eavesdropping. This is a natural result of the behavior of quantum bits; due to the observer effect, if a bit in a superposition state were to be observed, the superposition state would collapse into an eigenstate. Because the intended recipient was expecting to receive the bit in a superposition state, the intended recipient would know there was an attack, because the bit's state would no longer be in a superposition.[83]

Quantum computing

Another goal is the development of quantum computers, which are expected to perform certain computational tasks exponentially faster than classical computers. Instead of using classical bits, quantum computers use qubits, which can be in superpositions of states. Quantum programmers are able to manipulate the superposition of qubits in order to solve problems that classical computing cannot handle effectively, such as searching unsorted databases or integer factorization. IBM claims that the advent of quantum computing may advance the fields of medicine, logistics, financial services, artificial intelligence and cloud security.[84]

Another active research topic is quantum teleportation, which deals with techniques to transmit quantum information over arbitrary distances.

Macroscale quantum effects

While quantum mechanics primarily applies to the smaller atomic regimes of matter and energy, some systems exhibit quantum mechanical effects on a large scale.
Superfluidity, the frictionless flow of a liquid at temperatures near absolute zero, is one well-known example. So is the closely related phenomenon of superconductivity, the frictionless flow of an electron gas in a conducting material (an electric current) at sufficiently low temperatures. The fractional quantum Hall effect is a topologically ordered state which corresponds to patterns of long-range quantum entanglement.[85] States with different topological orders (or different patterns of long-range entanglements) cannot change into each other without a phase transition.

Quantum theory

Quantum theory also provides accurate descriptions for many previously unexplained phenomena, such as black-body radiation and the stability of the orbitals of electrons in atoms. It has also given insight into the workings of many different biological systems, including smell receptors and protein structures.[86] Recent work on photosynthesis has provided evidence that quantum correlations play an essential role in this fundamental process of plants and many other organisms.[87] Even so, classical physics can often provide good approximations to results otherwise obtained by quantum physics, typically in circumstances with large numbers of particles or large quantum numbers. Since classical formulas are much simpler and easier to compute than quantum formulas, classical approximations are used and preferred when the system is large enough to render the effects of quantum mechanics insignificant.

Free particle

Particle in a box

1-dimensional potential energy box (or infinite potential well). Inside the box the potential vanishes, so the time-independent Schrödinger equation reads

$-\frac{\hbar^{2}}{2m}\frac{d^{2}\psi}{dx^{2}}=E\psi$

With the differential operator defined by $\hat{p}_{x}=-i\hbar\frac{d}{dx}$, the previous equation is evocative of the classic kinetic energy analogue $E=\frac{p^{2}}{2m}$, with the state $\psi$ in this case having energy coincident with the kinetic energy of the particle. The general solution is

$\psi(x)=Ae^{ikx}+Be^{-ikx},\qquad E=\frac{\hbar^{2}k^{2}}{2m}$

or, from Euler's formula,

$\psi(x)=C\sin(kx)+D\cos(kx)$

The infinite potential walls of the box determine the values of $C$, $D$, and $k$ at $x=0$ and $x=L$, where $\psi$ must be zero. Thus, at $x=0$,

$\psi(0)=0=C\sin(0)+D\cos(0)$

and $D=0$. At $x=L$,

$\psi(L)=0=C\sin(kL)$

in which $C$ cannot be zero, as this would conflict with the Born interpretation. Therefore, since $\sin(kL)=0$, $kL$ must be an integer multiple of $\pi$, so that

$k_{n}=\frac{n\pi}{L},\qquad n=1,2,3,\ldots$

and the quantized energies are $E_{n}=\frac{n^{2}\pi^{2}\hbar^{2}}{2mL^{2}}=n^{2}E_{1}$. The ground state energy of the particle is $E_{1}$, for $n=1$, and the energy of the particle in the $n$th state is $E_{n}=n^{2}E_{1}$, $n=2,3,4,\ldots$

Particle in a box with the boundary condition $V(x)=0$ for $-a/2<x<+a/2$: in this case the general solution is the same, but there is a small change in the final result, since the boundary conditions are changed. At $x=0$ the wave function is no longer zero for every $n$. From the variation of the wave function one sees that for $n=1,3,5,\ldots$ the wave function follows a cosine curve with $x=0$ as origin, while for $n=2,4,6,\ldots$ it follows a sine curve with $x=0$ as origin. So in this case the resulting wave functions are

$\psi_{n}(x)=A\cos(k_{n}x)$ for $n=1,3,5,\ldots$

$\psi_{n}(x)=B\sin(k_{n}x)$ for $n=2,4,6,\ldots$

Finite potential well

Rectangular potential barrier

Harmonic oscillator

The stationary states are

$\psi_{n}(x)=\frac{1}{\sqrt{2^{n}\,n!}}\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}e^{-\frac{m\omega x^{2}}{2\hbar}}H_{n}\!\left(\sqrt{\frac{m\omega}{\hbar}}\,x\right)$

where $H_{n}$ are the Hermite polynomials, and the corresponding energy levels are

$E_{n}=\hbar\omega\left(n+\frac{1}{2}\right)$

Step potential

The potential in this case is given by

$V(x)=\begin{cases}0,&x<0\\V_{0},&x\geq 0\end{cases}$

References

5. ^ Matson, John. "What Is Quantum Mechanics Good for?". Scientific American. Retrieved 18 May 2016.
7. ^ Max Born & Emil Wolf, Principles of Optics, 1999, Cambridge University Press.
8. ^ Mehra, J.; Rechenberg, H. (1982). The Historical Development of Quantum Theory. New York: Springer-Verlag. ISBN 0387906428.
9. ^ Kragh, Helge (2002). Quantum Generations: A History of Physics in the Twentieth Century. Princeton University Press. p. 58. ISBN 0-691-09552-3.
10. ^ Ben-Menahem, Ari (2009). Historical Encyclopedia of Natural and Mathematical Sciences, Volume 1. Springer. p. 3678. ISBN 3540688315.
11. ^ E. Arunan (2010). "Peter Debye" (PDF). Resonance. Indian Academy of Sciences. 15 (12).
12. ^ Kuhn, T. S. (1978). Black-Body Theory and the Quantum Discontinuity 1894-1912. Oxford: Clarendon Press. ISBN 0195023838.
13. ^ Kragh, Helge (1 December 2000). Max Planck: the reluctant revolutionary.
15. ^ Wolfram, Stephen (2002). A New Kind of Science. Wolfram Media, Inc. p. 1056. ISBN 1-57955-008-8.
16. ^ van Hove, Leon (1958). "Von Neumann's contributions to quantum mechanics" (PDF). Bulletin of the American Mathematical Society. 64 (3): Part 2: 95-99. doi:10.1090/s0002-9904-1958-10206-2.
17. ^ Feynman, Richard. "The Feynman Lectures on Physics III 21-4". California Institute of Technology. Retrieved 2015-11-24. "It was long believed that the wave function of the Schrödinger equation would never have a macroscopic representation analogous to the macroscopic representation of the amplitude for photons. On the other hand, it is now realized that the phenomena of superconductivity presents us with just this situation."
18. ^ Richard Packard (2006). "Berkeley Experiments on Superfluid Macroscopic Quantum Effects". Archived November 25, 2015, at the Wayback Machine. Retrieved 2015-11-24.
19. ^ "Quantum - Definition and More from the Free Merriam-Webster Dictionary". Retrieved 2012-08-18.
21. ^ Retrieved 11 September 2015.
22. ^ "Quantum Mechanics". 2009-10-26. Archived from the original on 2009-10-26. Retrieved 2016-06-13.
23. ^ P.A.M. Dirac, The Principles of Quantum Mechanics, Clarendon Press, Oxford, 1930.
24. ^ D. Hilbert, Lectures on Quantum Theory, 1915–1927.
26. ^ H. Weyl, The Theory of Groups and Quantum Mechanics, 1931 (original title: Gruppentheorie und Quantenmechanik).
27. ^ Dirac, P.A.M. (1958). The Principles of Quantum Mechanics, 4th edition, Oxford University Press, Oxford UK, p. ix: "For this reason I have chosen the symbolic method, introducing the representatives later merely as an aid to practical calculation."
28. ^ Greiner, Walter; Müller, Berndt (1994). Quantum Mechanics: Symmetries, Second edition. Springer-Verlag. Chapter 1, p. 52. ISBN 3-540-58080-8.
29. ^ "Heisenberg - Quantum Mechanics, 1925–1927: The Uncertainty Relations". Retrieved 2012-08-18.
30. ^ a b Greenstein, George; Zajonc, Arthur (2006). The Quantum Challenge: Modern Research on the Foundations of Quantum Mechanics, Second edition. Jones and Bartlett Publishers, Inc. Chapter 8, p. 215. ISBN 0-7637-2470-X.
32. ^ Hirshleifer, Jack (2001). The Dark Side of the Force: Economic Foundations of Conflict Theory. Cambridge University Press. p. 265. ISBN 0-521-80412-4.
33. ^ "dictionary :: eigen :: German-English translation". Retrieved 11 September 2015.
34. ^ "Topics: Wave-Function Collapse". 2012-07-27. Retrieved 2012-08-18.
35. ^ "Collapse of the wave-function". Retrieved 2012-08-18.
36. ^ "Determinism and Naive Realism: philosophy". 2009-06-01. Retrieved 2012-08-18.
37. ^ Michael Trott. "Time-Evolution of a Wavepacket in a Square Well — Wolfram Demonstrations Project". Retrieved 2010-10-15.
38. ^ Michael Trott. "Time Evolution of a Wavepacket in a Square Well". Retrieved 2010-10-15.
Tata McGraw-Hill. p. 36. ISBN 0-07-096510-2. , Chapter 2, p. 36 40. ^ "Wave Functions and the Schrödinger Equation" (PDF). Retrieved 2010-10-15. [dead link] 42. ^ Nancy Thorndike Greenspan, "The End of the Certain World: The Life and Science of Max Born" (Basic Books, 2005), pp. 124-8 and 285-6. 44. ^ "The Nobel Prize in Physics 1979". Nobel Foundation. Retrieved 2010-02-16.  45. ^ Carl M. Bender; Daniel W. Hook; Karta Kooner (2009-12-31). "Complex Elliptic Pendulum". arXiv:1001.0131Freely accessible [hep-th].  47. ^ Tipler, Paul; Llewellyn, Ralph (2008). Modern Physics (5 ed.). W. H. Freeman and Company. pp. 160–161. ISBN 978-0-7167-7550-8.  48. ^ "Quantum mechanics course iwhatisquantummechanics". 2008-09-14. Retrieved 2012-08-18.  49. ^ Einstein, A.; Podolsky, B.; Rosen, N. (1935). "Can quantum-mechanical description of physical reality be considered complete?". Phys. Rev. 47: 777. Bibcode:1935PhRv...47..777E. doi:10.1103/physrev.47.777.  51. ^ (see macroscopic quantum phenomena, Bose–Einstein condensate, and Quantum machine) 52. ^ "Atomic Properties". Retrieved 2012-08-18.  53. ^ 59. ^ a b Heisenberg, W. (1927). Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, Z. Phys. 43: 172–198. Translation as 'The actual content of quantum theoretical kinematics and mechanics' here [1], "But in the rigorous formulation of the law of causality, — "If we know the present precisely, we can calculate the future" — it is not the conclusion that is faulty, but the premise." 65. ^ Bohr, N. (1928). "The Quantum postulate and the recent development of atomic theory". Nature. 121: 580–590. Bibcode:1928Natur.121..580B. doi:10.1038/121580a0.  67. ^ Goldstein, H. (1950). Classical Mechanics, Addison-Wesley, ISBN 0-201-02510-8. 68. ^ "There is as yet no logically consistent and complete relativistic quantum field theory.", p. 4.  — V. B. Berestetskii, E. M. Lifshitz, L P Pitaevskii (1971). J. B. Sykes, J. S. Bell (translators). Relativistic Quantum Theory 4, part I. Course of Theoretical Physics (Landau and Lifshitz) ISBN 0-08-016025-5 69. ^ "Stephen Hawking; Gödel and the end of physics". Retrieved 11 September 2015.  70. ^ "The Nature of Space and Time". Retrieved 11 September 2015.  71. ^ Tatsumi Aoyama; Masashi Hayakawa; Toichiro Kinoshita; Makiko Nio (2012). "Tenth-Order QED Contribution to the Electron g-2 and an Improved Value of the Fine Structure Constant". Physical Review Letters. 109 (11): 111807. arXiv:1205.5368v2Freely accessible. Bibcode:2012PhRvL.109k1807A. doi:10.1103/PhysRevLett.109.111807. PMID 23005618.  72. ^ Parker, B. (1993). Overcoming some of the problems. pp. 259–279.  74. ^ Weinberg, S. "Collapse of the State Vector", Phys. Rev. A 85, 062116 (2012). 75. ^ Harrison, Edward (16 March 2000). Cosmology: The Science of the Universe. Cambridge University Press. p. 239. ISBN 978-0-521-66148-5.  77. ^ Wolfram, Stephen (2002). A New Kind of Science. Wolfram Media, Inc. p. 1058. ISBN 1-57955-008-8.  78. ^ "Everett's Relative-State Formulation of Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Retrieved 2012-08-18.  79. ^ The Transactional Interpretation of Quantum Mechanics by John Cramer Reviews of Modern Physics 58, 647-688, July (1986) 80. ^ The Transactional Interpretation of quantum mechanics. R.E.Kastner. Cambridge University Press. 2013. ISBN 978-0-521-76415-5. P35. 82. ^ Pauling, Linus; Wilson, Edgar Bright (1985-03-01). Introduction to Quantum Mechanics with Applications to Chemistry. ISBN 9780486648712. Retrieved 2012-08-18.  83. 
^ Schneier, Bruce (1993). Applied Cryptography (2nd ed.). Wiley. p. 554. ISBN 0471117099.  84. ^ "Applications of Quantum Computing". Retrieved 28 June 2017.  85. ^ Chen, Xie; Gu, Zheng-Cheng; Wen, Xiao-Gang (2010). "Local unitary transformation, long-range quantum entanglement, wave function renormalization, and topological order". Phys. Rev. B. 82: 155138. arXiv:1004.3835Freely accessible. Bibcode:2010PhRvB..82o5138C. doi:10.1103/physrevb.82.155138.  86. ^ Anderson, Mark (2009-01-13). "Is Quantum Mechanics Controlling Your Thoughts? | Subatomic Particles". DISCOVER Magazine. Retrieved 2012-08-18.  87. ^ "Quantum mechanics boosts photosynthesis". Retrieved 2010-10-23.  88. ^ Davies, P. C. W.; Betts, David S. (1984). Quantum Mechanics, Second edition. Chapman and Hall. p. 79. ISBN 0-7487-4446-0. , Chapter 6, p. 79 89. ^ Baofu, Peter (2007-12-31). The Future of Complexity: Conceiving a Better Way to Understand Order and Chaos. ISBN 9789812708991. Retrieved 2012-08-18.  90. ^ Derivation of particle in a box, 1. ^ N.B. on precision: If and are the precisions of position and momentum obtained in an individual measurement and , their standard deviations in an ensemble of individual measurements on similarly prepared systems, then "There are, in principle, no restrictions on the precisions of individual measurements and , but the standard deviations will always satisfy ".[4] More technical: Further reading[edit] On Wikibooks External links[edit] Course material
Terence Tao

Terence Tao (born July 17, 1975, Adelaide, Australia), Australian mathematician awarded a Fields Medal in 2006 "for his contributions to partial differential equations, combinatorics, harmonic analysis and additive number theory." Tao received a bachelor's and a master's degree from Flinders University of South Australia and a doctorate from Princeton University (1996), after which he joined the faculty at the University of California, Los Angeles.

Tao's work is characterized by a high degree of originality and a diversity that crosses research boundaries, together with an ability to work in collaboration with other specialists. His main field is the theory of partial differential equations. Those are the principal equations used in mathematical physics. For example, the nonlinear Schrödinger equation models light transmission in fibre optics. Despite the ubiquity of partial differential equations in physics, it is usually difficult to obtain or rigorously prove that such equations have solutions or that the solutions have the required properties. Along with that of several collaborators, Tao's work on the nonlinear Schrödinger equation established crucial existence theorems. He also did important work on waves that can be applied to the gravitational waves predicted by Albert Einstein's theory of general relativity.

In work with the British mathematician Ben Green, Tao showed that the set of prime numbers contains arithmetic progressions of any length. For example, 5, 11, 17, 23, 29 is an arithmetic progression of five prime numbers, where successive numbers differ by 6. Standard arguments had indicated that arithmetic progressions in the set of primes might not be very long, so the proof that they can be arbitrarily long was a profound result about the building blocks of arithmetic. Tao's other awards include a Salem Prize (2000) and an American Mathematical Society Bôcher Memorial Prize (2002).

Jeremy John Gray
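The Green–Tao theorem is non-constructive, but short progressions such as the one quoted above are easy to find by brute force. A minimal Python sketch (purely illustrative; it has nothing to do with the proof technique):

```python
# Brute-force illustration of the statement proved by Green and Tao: the
# primes contain arbitrarily long arithmetic progressions.  This sketch only
# searches for short ones; the theorem itself is not proved this way.

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def prime_progressions(length, limit):
    """Yield arithmetic progressions of `length` primes, all below `limit`."""
    primes = primes_up_to(limit)
    prime_set = set(primes)
    for first in primes:
        max_step = (limit - first) // (length - 1)
        for step in range(2, max_step + 1):
            terms = [first + k * step for k in range(length)]
            if all(t in prime_set for t in terms):
                yield terms

# The example quoted above: 5, 11, 17, 23, 29 (common difference 6).
print(next(prime_progressions(5, 100)))
```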
Hello World!

Projects & Accomplishments

Abiyev's Balanced Squares and Cubes

I've been interested in magic (Balanced) squares and cubes from a young age, and collaborated with Prof. Dr. Abiyev on several research projects which gave me some insight into the world of research. Over time, the projects got more serious in nature, at which point I was already thinking of optimizing the way they were generated via code. Up until 2013, the code that was used in generating the Balanced squares and cubes mirrored the way they were generated by hand. Using Abiyev's algorithm, you first find the position of the cell that is going to be filled, then find the next number in the sequence (a trivial operation). While this is very intuitive and simple for a human, finding the next position takes a little more work for a machine. You can view the filling order of the cells for Abiyev's Balanced square of an even degree here.

While I was doing research with Dr. Abiyev on the Invariant of Abiyev's Balanced squares and cubes, I began trying to find ways to optimize the algorithm so that a computer can generate squares and cubes faster, without the need for calculating positions each time (i.e. going from left to right, top to bottom). What I found was that 2 and 3 superimposed Latin squares can be used to generate Balanced squares and cubes, respectively (a Latin square per dimension). I also found that it is possible to write these Latin squares in order (left-right, top-bottom) quite easily, and for each square (or cube) the subsequent Latin square(s) can be written from the first one without the need for extra calculation. Once this was clear, the rest was to simply create arrays for the Latin squares first, then use those to generate the final square (or cube). I implemented the algorithm in C++ at first, and once I decided to have the algorithm available at www.askeraliabiyev.com, picked PHP as a server-side language. No matter the algorithm, the best possible complexity for Balanced squares is O(n²) and for cubes O(n³), n being the order of the square/cube, since you have to fill out each cell. The good news is that even while having the same big-O complexity, in real life this algorithm performs much better than the previous algorithm/program. And I am proud to say that I've written the fastest code to date to generate Balanced squares and cubes!

Technologies used: C++, PHP, JavaScript, AngularJS, JQuery, Bootstrap, HTML, CSS

The Invariant of Abiyev's Balanced Squares and Cubes of Odd Order

If we write a Balanced square of any order from any numbers (even symbols), replace the numbers in the cells with masses of corresponding value, and investigate the center of mass of such a system, we come across the same number progression, according to the frames of the square. The progression, which we call the Invariant, what frames are and how frames are calculated could be found in more detail here. The regularity of the order of elements in the periodic table can be found in the Invariant. And the question that comes up is whether the occurrence of this regularity could be an accident or not. We argue that it cannot be an accident, as it is known that the electrons and positive charges in the construction of an atom create a balanced system. This is to say that the centers of positive and negative charges are aligned. This progression we get from the electron shell (more on this later - still studying it) is 2n².
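Before comparing this with the quantum-mechanical result, here is a minimal sketch of the two ideas above. It is not Abiyev's algorithm or code; it uses a generic textbook construction of a magic square of odd order n (n not divisible by 3) from two superimposed orthogonal Latin squares, and then checks the center-of-mass balance discussed in this section:

```python
import numpy as np

def latin_square_magic(n):
    """Magic ("balanced") square of odd order n, n not divisible by 3, built
    from two superimposed mutually orthogonal Latin squares -- a generic
    textbook construction, not Abiyev's algorithm."""
    assert n % 2 == 1 and n % 3 != 0
    i, j = np.indices((n, n))
    first = (2 * i + j) % n          # one Latin square per "digit"
    second = (i + 2 * j) % n         # orthogonal to the first one
    return n * first + second + 1    # entries 1..n^2, each used exactly once

def center_of_mass(square):
    """Treat each entry as a point mass at its cell and return the (row, col)
    coordinates of the center of mass."""
    n = square.shape[0]
    i, j = np.indices((n, n))
    total = square.sum()
    return (i * square).sum() / total, (j * square).sum() / total

m = latin_square_magic(5)
print(m)
print(m.sum(axis=0), m.sum(axis=1), np.trace(m), np.trace(np.fliplr(m)))
print(center_of_mass(m))   # (2.0, 2.0): the geometric center of a 5x5 grid
```

For a 5×5 square this prints equal row, column, and diagonal sums, and a center of mass exactly at the geometric center (2.0, 2.0), which is the sense in which the square is balanced.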
And the progression in the Invariant is the same as the solution to the Schrödinger equation for the Hydrogen atom in quantum mechanics. More info here and here.

FitAdvice is a service that enables people who are relatively new to exercise and fitness to get help from coaches they know and trust, without having to pay for a personal trainer who might or might not have the right knowledge/expertise they need. Currently it's a prototype-in-progress. It allows users to look into the profiles of the trainers available and choose the one with the right experience and focus that suits their goals. They then get exercise, fitness and nutritional advice while having a two-way communication with top trainers.

I used the prototype as an opportunity to learn AngularJS and explore full-stack JS. I also found Bootstrap to be a great front-end resource that offers a lot of options that make it efficient to design websites that look professional and aesthetic. AngularJS, on the other hand, made the front end more powerful and made it straightforward to write Single Page Applications. Currently, I'm working on NodeJS to establish the backend and database connection for the application.

Technologies used: JavaScript, AngularJS, NodeJS, JQuery, Bootstrap, HTML, CSS

I've been passionate about songwriting since I was a child, and began seriously engaging with it at the age of 16. A longtime Beatles fan, I've been greatly influenced by their music, as well as sixties, rock'n'roll, classic rock and pop rock. My song, Valentine's Day, was selected as a semi-finalist in the 2012 International Songwriting Competition, out of a pool of tens of thousands of applicants. Here are some of the songs I've written and recorded:

I Leave You, Waterloo

This is my goodbye song to the University of Waterloo, where I spent an amazing 4 and a half years. It sums up how my years in Waterloo were and what I think of leaving there. The music video was played at the UW Fall Convocation Ceremony last October in front of friends and family of hundreds of graduating students, and my hope is that this tradition will go on, and this song will be a part of the goodbye to UWaterloo for students to come. Thanks to Easton for the brilliant drum track and for putting up with me during the recording, editing and mixing process, to Nero for the bass track and to Murad for filming. In the video: The Rangers' Easton Page (shakers), Nero Wei (guitar) & I, Yusif Alizada.

I leave you, Waterloo
Fourth year, you're almost over
You are leaving me
Leaving forever
I'll miss the days that passed now,
Days I'll see no more,
But somehow, will stay in my mind.
I leave you, Waterloo
The memories, they've been really nice but,
There were some tough times,
When midterms took me by surprise
Nevertheless, I have no regrets
Now all these silly things I've been through - they all make sense

I leave you, Waterloo
Time will pass and things will change
Every year I am not by your side
In my life every single time I enter a new stage
I'll remember these moments and sigh

I love you, Waterloo
8 July 2016

Special Section Guest Editorial: Optics, Spectroscopy, and Nanophotonics of Quantum Dots

This special section of the Journal of Nanophotonics is focused on optics, spectroscopy, and nanophotonics of quantum dots. The study of the optical properties of colloidal quantum dots (QDs), and of the nanostructures formed from them, is a topical problem of modern nanophotonics. Interest in these structures is motivated by a very wide range of practical applications in various fields. The most relevant applications of QD photonics are fluorescent labeling of biological objects, biosensors, and minimally invasive biomedical technology, including thermal and photodynamic therapy of severe human diseases.1-3 The unique optical properties of colloidal QDs are, primarily, size-dependent optical absorption and photoluminescence spectra, a wide range of luminescence excitation, high photostability of the nanocrystals, etc. These properties of colloidal QD spectroscopy provide opportunities for the development of optically distinguishable codes identifying various diseases, and of markers of diseased cells, tissues, and organs with their subsequent visualization. The possibility of specific binding of bioconjugated QDs to different targets provides opportunities for labeling of cells and of a variety of protein molecules both in vitro and in vivo. These applications rest on fundamental processes.1-3 They are: absorption of light by colloidal QDs; the formation of excitons; radiative and nonradiative annihilation of excitons; radiative recombination of localized excitons; the relation of recombination luminescence to QD size and composition (especially in the case of substitutional solid solutions); and the exchange of electronic excitations between colloidal QDs and organic structures interacting with the quantum dot interface. Thus, the authors' papers1-3 develop these important areas of optics and spectroscopy of colloidal quantum dots.

A simple model of a quasi-zero-dimensional structure in the form of a spherical QD of radius a and permittivity ϵ2, embedded in a medium with permittivity ϵ1, was discussed in Ref. 4. An electron (e) and a hole (h) with effective masses me and mh were assumed to travel within the QD. We assume that the permittivities satisfy the relation ϵ2 ≫ ϵ1 and that the conduction and valence bands are parabolic. The theory of exciton states in QDs under conditions of dominating polarization interaction of an electron and a hole with the spherical (QD – dielectric matrix) interface is developed in Ref. 4. It is shown that the energy spectrum of a heavy hole in the valence band of the QD is equivalent to the spectrum of a hole carrying out oscillator vibrations in the adiabatic electron potential.5 It is shown that the absorption and emission edge of QDs is formed by two transitions of comparable intensity from different hole size-quantization levels into a lower electron size-quantization level.6 The interband absorption of light in QDs was studied theoretically in Ref. 6 using the dipole approximation in the framework of the model [4] considered here, and under the assumption that the absorption length λ ≫ a.
An expression [Eq. (1)] for the quantity K(s̄,ω), describing the hole optical transition from the energy level t_h = 2n_h (t_h is the hole main quantum number, n_h is the hole radial quantum number) to the lowest electron level (n_e = 1, l_e = m_e = 0) (here n_e, l_e, m_e are the main, orbital, and magnetic quantum numbers of the electron), was derived in Ref. 6. In it, ω is the incident light frequency, s̄ = a/a_h, a_h = ϵ2ħ²/(m_h e²) is the Bohr radius of the hole in the QD, and A is proportional to the square of the absolute value of the dipole moment matrix element calculated with Bloch functions. The quantity K(s̄,ω) (1) connects the energy absorbed by the QD per unit time with the time average of the square of the electric field of the incident wave. Moreover, the product of K(s̄,ω) and the QD concentration in the dielectric matrix gives the electric conductivity of the considered quasi-zero-dimensional system at the frequency ω, which is connected with the light absorption coefficient in the usual way. We determine the quantity K(s̄,ω) (1) corresponding to the hole optical transition from the energy level t_h = 2n_h to the lowest electron level (n_e = 1, l_e = m_e = 0). In this case, the expression for the quantity L_nh(s̄), given by the square of the overlap integral of the electron and hole wave functions, takes the form of Eq. (2) (see the article by S. I. Pokutnyi, O. V. Ovchinnikov, and T. S. Kondratenko1).

In the interband optical absorption spectrum of a QD, each line corresponding to given values of the radial n_e and orbital l_e quantum numbers turns into a series of close-lying equidistant levels corresponding to various values of the main hole quantum number t_h.7 This conclusion follows from Eqs. (1) and (2), and is a direct consequence of the Coulomb and polarization interactions of an electron and a hole in the QD. In Ref. 1, the square of the overlap integral K(s̄,ω)/A was estimated using Eqs. (1) and (2) together with experimental absorption data for colloidal CdS QDs synthesized by aqueous synthesis in a gelatin matrix. For the hole transitions from the equidistant quantum levels (n_h = 0; l_h = m_h = 0), (n_h = 1; l_h = m_h = 0), (n_h = 2; l_h = m_h = 0), and (n_h = 3; l_h = m_h = 0) (here l_h, m_h are the orbital and magnetic quantum numbers of the hole) to the lowest electron size-quantized level (n_e = 1, l_e = m_e = 0), we have [Eq. (3)] L_0 = 1.639 s̄^(3/4), L_1 = 0.5 L_0, L_2 = 9.38·10⁻² L_0, and L_3 = 10⁻² L_0. From this it follows that the main contribution to the light absorption coefficient of a cadmium sulphide QD comes from the hole spectral lines corresponding to the quantum numbers (n_h = 0; l_h = m_h = 0) and (n_h = 1; l_h = m_h = 0), whose transition oscillator strengths are dominant.1 The contribution of higher excited hole lines (n_h ≥ 2; l_h = m_h = 0) is negligible. In this way, in the framework of the considered model of the quasi-zero-dimensional system,4-6 it was shown that the absorption and emission edge of a cadmium sulphide QD is formed by two transitions of comparable intensities.7 Estimates of the average CdS QD radius were obtained by applying the developed formalism to UV-Vis absorption spectra. These data were compared with experimental values of this parameter obtained using transmission electron microscopy.1

The paper (see Ref. 2) presents the results of studies of the formation of the luminescent properties of hydrophilic colloid solutions containing hybrid associates constructed from Ag2S QDs (2.5 nm) with J-aggregates of 3,3'-di-(γ-sulfopropyl)-4,4',5,5'-dibenzo-9-ethylthiacarbocyanine betaine pyridinium salt (Dye1) and thionine molecules (Dye2) in gelatin.
For Dye1 molecules, the tendency to form cis- and trans-isomeric forms is known, along with J-aggregation. Cations of Dye2 molecules are distinguished by their tendency toward dimerization and H-aggregation. The effect of photosensitization of IR luminescence excitation (1205 nm) of colloidal Ag2S quantum dots (QDs) with an average size of 2.5±0.6 nm in gelatin at 600 to 660 nm by molecules of Dye1 and Dye2 was registered. Cis-J-aggregates of Dye1 and monomeric cations of Dye2 conjugated with Ag2S QDs take part in this process. The photosensitization of luminescence excitation of colloidal Ag2S QDs was interpreted as resonance nonradiative transfer of electronic excitation energy from cis-J-aggregates of Dye1 and cations of Dye2 to centers of recombination luminescence of the Ag2S QDs.2 The hybrid association of colloidal Ag2S QDs with molecules of Dye1 and Dye2 provides the formation of heterostructures of the first type. Given this arrangement of the energy states of the components of the hybrid associates, the exchange of electronic excitation between molecules of Dye1 and Dye2 and Ag2S QDs is possible under excitation of the dye molecules, due to non-radiative resonance transfer of electronic excitation energy. At the same time, in the case of Dye1, excited cis-J-aggregates sensitize, with a lower probability, optical transitions which lead to the excitation of luminescence centers with the participation of deep electron size-quantization levels (excited states of holes). In the case of Dye2, monomers take part in this process. The excitation of a luminescence center by light of 440 nm is possible due to its properties. In this case, a light-induced action of the excitation radiation is probable. Its mechanism is the escape of a free hole over the finite potential well arising from the spatial confinement of the Ag2S crystal. Tunneling of the free hole into the matrix and its localization at macroscopic states, caused by the jump of the dielectric constant at the interface between the matrix and the QD, will lead to a decrease in the intensity of radiative recombination. For long-wavelength photons with lower energies of 2.33 eV (532 nm), 1.95 eV (635 nm), and 1.88 eV (660 nm), the probability of ejection of a hole into the matrix is smaller. Molecules of Dye1 and Dye2 sensitizing optical transitions at these wavelengths decrease this probability. This is due to the decrease in the energy of the photosensitizing photons, caused by Stokes losses and by phonon scattering of the energy of the excited dye molecules during non-radiative transfer of electronic excitation energy. In the case of Dye1, the decrease in the intensity of the infrared luminescence of Ag2S QDs under excitation at 660 and 635 nm is due to the photodegradation of trans-J-aggregates in the gelatin matrix. Thus, the decreasing efficiency of excitation of Ag2S QD infrared luminescence under prolonged excitation is due to ionization of the Ag2S QDs with the involvement of deep size-quantized states in the valence band. The main mechanism of photosensitization of the luminescence is resonance nonradiative transfer of electronic excitation energy from the Dye1 and Dye2 forms that are active in this process to the radiative recombination centers in the Ag2S QDs. This channel of exchange of electronic excitation has not previously been considered in the photophysics of hybrid associates. It can be used for fluorescent labeling in the near-IR region, including the biological transparency window.
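The sensitization described above is attributed to resonance nonradiative (Förster-type) energy transfer. A generic sketch of the standard textbook efficiency-distance relation may help fix ideas; the Förster radius used below is purely illustrative and is not a value taken from the cited papers:

```python
def forster_efficiency(r_nm, r0_nm):
    """Textbook Foerster (FRET) transfer efficiency as a function of the
    donor-acceptor separation r and the Foerster radius r0 (both in nm)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

R0 = 5.0                                   # illustrative Foerster radius, nm (assumed)
for r in (1.0, 2.5, 5.0, 7.5, 10.0):       # representative separations, nm
    print(f"r = {r:4.1f} nm  ->  transfer efficiency = {forster_efficiency(r, R0):.3f}")
```

The sixth-power fall-off is why transfer is efficient only for dye molecules adsorbed directly on, or very close to, the QD surface.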
Establishing the possibility of association of Ag2S QDs with cations of Dye2 also shows the possibility of simultaneous sensitization of singlet oxygen near the hybrid associate.2

The paper (see Ref. 3) analyzes how the optical absorption and photoluminescence spectra and the morphology of CdxZn1−xS QDs, obtained by aqueous synthesis in gelatin, depend on the ratio of the concentrations of cadmium and zinc atoms in the crystal lattice. Aqueous synthesis of mixed cadmium and zinc sulfide colloidal QDs has been successfully realized. The continuous shift of the reflections in the XRD patterns indicates the formation of CdxZn1−xS solid solutions. With increasing zinc content in the QDs, there is a nonlinear dependence of the lattice parameter on composition, which indicates a deviation from Vegard's law. Similar behavior is also characteristic of the dependence of the effective band gap on composition. Colloidal CdxZn1−xS QDs are formed in a cubic crystal lattice with a particle size of about 2 nm. A blue shift of the optical absorption spectra from 420 to 295 nm and of the recombination photoluminescence from 646 to 483 nm with increasing zinc content in the QDs was observed. The optimum photoluminescence intensity occurs for QDs with the Cd0.3Zn0.7S composition. With increasing zinc content up to Cd0.3Zn0.7S the luminescence intensity increases, and it decreases when the zinc content exceeds 0.7.3 A model is proposed according to which, during the synthesis of QDs in a gelatin matrix, an isoelectronic impurity, for example an oxygen atom from the solvent, replaces one of the sulfur atoms in the elementary tetrahedron. This disturbs the balance of forces acting on the central metal atom.3 The increase in photoluminescence intensity is explained by the increase in the number of point defects, such as complexes of interstitial metal atoms and metal vacancies [Mei–VMe]. Such complexes occur due to displacement of the metal atom at the center of the elementary tetrahedron when one of the four sulfur atoms is substituted by an impurity atom, such as an oxygen atom.3 This pair of defects is a luminescence center of the donor–acceptor type. In conclusion, it should be noted that CdxZn1−xS QDs with tunable properties can potentially be applied in biology and medicine as fluorescent labels. Moreover, after removal of the dielectric matrix, CdxZn1−xS QDs can be used in light-emitting diodes (LEDs) and QD-LED displays [3].

In A. Sergeev et al.,7 a CdS-silicate nanocomposite was gradually heat treated up to 120°C. The step-by-step change of its structural and optical characteristics was studied. Being more energy-effective than λ=405.9 nm laser radiation, thermal treatment may lead to a higher degree of modification of the nanocomposite optical properties. That could be considered as the basis for a detailed understanding of the processes. It is known that the QD absorption band is determined by the QD size and is blue-shifted as the size decreases. Under annealing, the edge of the fundamental absorption band changes from Eg=3.22 eV for the initial nanocomposite to Eg=2.55 eV for the nanocomposite annealed at 180°C. The luminescence spectrum of the initial nanocomposite (20°C) has one wide emission band that consists of three Gaussian bands with maxima at 2.65, 2.35, and 2.05 eV. The FWHMs of these bands are 0.42, 0.54, and 0.63 eV, respectively. Under thermal treatment, long-wavelength spectral broadening of the luminescence occurs.
These changes indicate that thermal annealing leads to an equilibrium state between two structural modifications: non-crystalline and randomly ordered with defects. It can be assumed that thermal treatment primarily affects QDs separated by a distance less than the radius of gyration. Increasing the temperature leads to an increase of particle mobility. At the first stage (up to 100°C) the particle mobility initiates the increase of QD size due to aggregation and, to a lesser degree, affects their structure. Further annealing (100–160°C) leads to structural changes in the aggregates of nanoparticles without affecting the single isolated nanoparticles. The changes of the QD optical properties mentioned above suggest possible changes in QD dimensions and/or structure. By using a small- and wide-angle x-ray scattering (SWAXS) technique, it was found that the annealing process leads to significant structural reordering of the nanocomposite. The difference between the SWAXS patterns of the nanocomposite annealed at 120°C and the as-synthesized nanocomposite corresponds to structural re-ordering of the nanoparticles. Under laser exposure at λ=405.9 nm, similar structural changes in the small- and medium-angle regions occur, which indicate agglomeration of nanoparticles into larger systems. For the as-synthesized nanocomposite, a wide diffuse halo with a maximum near 27 deg is observed. This pattern consists of broad peaks of hexagonal and cubic structures due to the smaller particle size. It also cannot be identified as a polytype structure, allowing us to suggest a non-ordered (amorphous-like) structure of these CdS QDs. After thermal annealing the diffuse halo resolves into two narrow peaks at 25.5 deg and 27 deg, consistent with a randomly ordered crystalline structure. "Pump&probe" studies have shown that increasing the annealing temperature reduces the exposure dose required for formation of the modified area during the primary exposure. At temperatures above 100°C the primary exposure dose tends to zero, and there is no need for primary modification. Thus, laser radiation exposure and thermal treatment have similar effects on the nanocomposite and initiate its structural and interparticle changes. Thus, one can suppose that the presence of non-crystalline CdS QDs is the main reason for the appearance of photoinduced changes in the absorption coefficient of the nanocomposite. QDs of that kind have an emission maximum at 2.7±0.05 eV. Exposure to laser light at λ=405.9 nm (3.05 eV) leads to partial ordering of the QD structure. Partial structural changes caused by the modifying laser radiation lead to the return of the optical characteristics of the nanocomposite to their initial level. However, this mechanism is valid only for isolated quantum dots. In the case of several quantum dots at a distance less than their radius of gyration, the modifying radiation causes their aggregation, along with the formation of CdS nanoparticles with a randomly ordered crystalline structure. The presence of such nanoparticles gives rise to the linear absorption coefficient of the nanocomposite, and slightly dissipates the modifying radiation during the measurement process. Their presence can be identified by the emergence of emission bands in the range of 1.4–1.7 eV.
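The size dependence of the absorption edge invoked throughout these summaries can be illustrated with the standard effective-mass (Brus-type) estimate for a spherical QD. This is only a generic sketch, not the specific model of the cited papers, and the CdS parameters below are approximate literature values, i.e. assumptions:

```python
import numpy as np

# Physical constants (SI)
HBAR = 1.054571817e-34      # J s
M0   = 9.1093837015e-31     # kg
E    = 1.602176634e-19      # C (and J per eV)
EPS0 = 8.8541878128e-12     # F/m

def brus_gap_eV(radius_nm, eg_bulk=2.42, me=0.19, mh=0.80, eps=8.9):
    """Brus-type effective-mass estimate of the lowest optical gap of a
    spherical QD of radius `radius_nm` (nm).  The CdS parameters (bulk gap,
    effective masses, permittivity) are approximate literature values and
    are assumptions here, not numbers from the cited papers."""
    r = radius_nm * 1e-9
    confinement = (HBAR ** 2 * np.pi ** 2 / (2 * r ** 2)) * (1 / (me * M0) + 1 / (mh * M0))
    coulomb = 1.8 * E ** 2 / (4 * np.pi * EPS0 * eps * r)
    return eg_bulk + (confinement - coulomb) / E

for radius in (1.0, 1.5, 2.0, 3.0, 5.0):
    print(f"R = {radius:3.1f} nm  ->  Eg(QD) ~ {brus_gap_eV(radius):.2f} eV")
```

For radii of a few nanometres this reproduces the qualitative trend discussed above: the smaller the dot, the larger the blue shift of the gap.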
The study by V. Mykhaylovsky, V. Sugakov, and I. Goliney8 suggests the idea of a generator of traveling pulses of the excitonic condensed phase in laser-illuminated double quantum well heterostructures. The pulses would manifest themselves as bright spots in the emission moving along the plane of the double quantum well despite the time-independent steady pumping. The predicted phenomenon is similar in a way to the Gunn effect in semiconductors. The object of the study is a system of indirect excitons of high density created by laser illumination. Indirect excitons are a type of excitation in a double quantum well in which the electron and the hole are separated into different wells of the double quantum well structure by an external electric field. The small overlap of the wave functions of the electron and the hole that comprise an indirect exciton makes them long-lived quasi-particles and allows their accumulation to high densities at moderate pumping intensity, thus making them a popular research object in the field of non-linear effects. Over the last decade and a half, a number of interesting effects have been discovered in high-density systems of indirect excitons. The low-temperature luminescence from laser-irradiated double quantum wells comes from rings at significant distances from the laser spots and is frequently fragmented into bright spots. This suggests some kind of excitonic condensation. Bose-Einstein condensation has been suggested to be responsible. Yet the authors of the paper have argued that Bose-Einstein condensation requires exciton coherence at distances far exceeding the exciton diffusion length, and have been advancing a different explanation via self-organization in high-density systems of interacting quasi-particles with finite lifetimes. The attraction required for the condensation of the excitons may originate from the exchange and Van der Waals exciton-exciton interactions. This approach allowed the authors to recreate quantitatively the experimentally observed patterns of fragmentation, a feat other models could not achieve. Further analysis has shown that the periodic luminescent structures can be set in motion by an external in-plane bias, for example if the system is set up as a slot in an electrode (see references to papers by Sugakov with co-authors in Ref. 8). The paper8 suggests an experimental setup in which a generator of traveling pulses can be realized. Analysis of the kinetic equation describing the evolution of the density of indirect excitons, with a Landau-type expression for the free energy and account for the external bias, shows that there is a region of parameters (exciton density, exciton lifetime) within which the system may support both the uniform distribution of exciton density and a non-uniform distribution where some of the excitons gather into islands of the condensed phase while others are in the gas phase. There is a dynamic equilibrium between the islands of the condensed phase and the surrounding. Because of the finite exciton lifetime, the uniformly created excitons decay both within the islands and in the gas phase. The attractive interaction between excitons allows the islands to harvest excitons from the surrounding, thus stabilizing their density at the level sufficient to maintain the condensed phase. When the pumping is stronger, the only stable solution of the kinetic equations is the uniform condensed phase; if it is weaker, the only solution is the excitonic gas. There is also a region of parameters in which the solutions are periodic but cannot be controlled in a desired way. The proposed generator of the traveling pulses consists of two regions.
The main region, where the pulses can travel, is broad and pumped with excitons to the level that supports both the uniform exciton distribution and the nonuniform solution in the form of islands of the condensed phase surrounded by the excitonic gas. An in-plane bias is applied to this region in order to create a driving force for the motion of condensed-phase islands. The second region is narrow and pumped with excitons to the level at which only the condensed phase is possible. As the calculations show, traveling pulses of the exciton density are born at the boundary of the two regions and periodically move away, drifting into the broad main region. Regular generation of pulses in time occurs under steady illumination.8 Numerical simulation of the system shows that the pulse generation starts if the pumping rate and the driving force exceed certain threshold values. As the driving force increases at a fixed pumping rate in the propagation area, the drift velocity of the pulses and the generation frequency increase.8 The authors claim that the considered system is potentially applicable in laser-controlled opto-electronic devices. It may be useful for energy and information transfer in microsystems.8

J. Zribi et al.9 report on a chemical beam epitaxy growth study of InGaAs/GaAs quantum dots engineered using an in-situ indium-flush technique. The emission energy of these structures has been selectively tuned over 225 meV by varying the dot height from 7 to 2 nm. A blue shift of the photoluminescence (PL) emission peak and a decrease of the intersublevel spacing energy are observed when the dot height is reduced. Numerical investigations of the influence of dot structural parameters on their electronic structure have been carried out by solving the single-particle one-band effective-mass Schrödinger equation in cylindrical coordinates, for lens-shaped QDs. The correlation between numerical calculations and PL results is used to better describe the influence of the In-flush technique on both the dot height and the dot composition.

The problem of finding the effective characteristics of a nanostructure consisting of spherical shells (a nanomatryoshka) is solved using the matrix homogenization method (see the article by I. A. Starkov and A. S. Starkov10). According to the problem formulation, each of the system layers possesses piezoelectric and/or piezomagnetic properties. Thus, the mutual influence of the elastic, electrical, and magnetic characteristics of the shells on their homogenized values is investigated. In particular, the dependence of the dielectric permittivity and the magnetic permeability on the geometrical parameters of the layers is studied and analyzed. The performance of the model is illustrated for a two-layer structure.

Electronic states and direct interband light absorption in an ensemble of prolate spheroidal quantum layers (SQL) are considered. The problem of finding the one-electron wave function and energy spectrum has been solved exactly (see the article by D. A. Baghdasaryan, D. B. Hayrapetyan, and E. M. Kazaryan11). For the light absorption coefficient in the strong and intermediate size-quantization regimes, an expression [Eq. (4)] is obtained in which Eg is the energy gap of the semiconductor, Ω is the frequency of the incident light, and ν={n,l,m} and ν′={n′,l′,m′} are the sets of quantum numbers of the electron and the hole, respectively.
In the regime of strong size quantization, the energy of the Coulomb interaction between an electron and a hole is much smaller than the energy caused by the walls of the SQL. In this approximation, the Coulomb interaction between the particles can be neglected. Note that the selection rule for the azimuthal quantum numbers, m_e = m_h, follows immediately from the expression for the absorption coefficient [Eq. (4)]. For the orbital and principal quantum numbers the selection rules are l_e = l_h and n_e = n_h. The dependence of the absorption edge in the regime of strong size quantization has been obtained; it monotonically approaches ℏΩ₁₀₀ = Eg, because with increasing small semiaxis the impact of size quantization on the system becomes weaker, so the absorption edge of the system tends to the absorption edge of the bulk sample.11 The dependences of the absorption coefficient [Eq. (4)] on the frequency of the incident light are calculated for both the Gaussian and the Lifshitz–Slezov distribution functions. The intensity of the first peak, which corresponds to the transition between the ground states of the electron and the hole, is the largest. The intensities of the subsequent transitions decrease. The dependence of the absorption edge on the thickness of the layer in the strong size-quantization regime has been obtained. The effect of nonparabolicity of the dispersion law on the energy levels and the optical absorption has been taken into account, and calculations are carried out for both the parabolic and the Kane dispersion laws. Selection rules have been revealed. The dependence of the absorption coefficient [Eq. (4)] on the frequency of the incident light has been obtained, taking into account the dispersion of nanolayer thicknesses for both symmetric and asymmetric distribution functions.11

We present a combined experimental and simulation study of a single self-assembled InGaAs quantum dot coupled to a nearby (25 nm) plasmonic antenna (see Ref. 12). Microphotoluminescence spectroscopy shows a 2.4× increase of intensity, which is attributed to spatial far-field redistribution of the emission from the QD-antenna system. Power-dependent studies show similar saturation powers of 2.5 μW for both coupled and uncoupled QD emission in polarization-resolved measurements. Moreover, time-resolved spectroscopy reveals the absence of Purcell enhancement of the QD coupled to the antenna as compared with an uncoupled dot, yielding comparable exciton lifetimes of τ ≈ 0.5 ns. This observation is supported by numerical simulations, suggesting only minor Purcell effects of <2× for emitter–antenna separations >25 nm. The observed increased emission from a coupled QD–plasmonic antenna system is found to be in good qualitative agreement with numerical simulations and will lead to a better understanding of light–matter coupling in such semiconductor–plasmonic hybrid systems.

A surface plasmon polariton is an electromagnetic wave that propagates along an interface between two materials with dielectric permittivities of opposite sign. Such waves can be focused by metal waveguides of special geometry (see the article by P. A. Golovinski, V. A. Astapenko, and E. S. Manuylovich13). The spatial distribution of the near field strongly depends on the linear chirp of the laser pulse, which can partially compensate the wave dispersion. The field distribution is calculated for different chirp values, opening angles, and distances. The spatial selectivity of excitation of quantum dots using focused fields is shown using Bloch equations.
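Several of the papers summarized above average discrete QD transitions over a size distribution (Gaussian or Lifshitz–Slezov) to obtain the observable absorption spectrum. The toy sketch below illustrates only that generic mechanism; the level positions, weights, and simple 1/a² confinement scaling are illustrative assumptions and are not Eq. (4) of Ref. 11:

```python
import numpy as np

def ensemble_absorption(hw_eV, a_mean=2.0, a_sigma=0.2, eg=2.42,
                        levels=(0.45, 0.75, 1.10), weights=(1.0, 0.5, 0.1)):
    """Toy ensemble-averaged QD absorption spectrum.

    Each discrete transition sits at eg + E_n(a), the confinement energies
    E_n scale as 1/a^2, and the QD radii a (nm) are Gaussian-distributed.
    Level positions and weights are illustrative assumptions only."""
    a = np.linspace(a_mean - 4 * a_sigma, a_mean + 4 * a_sigma, 400)
    p = np.exp(-(a - a_mean) ** 2 / (2 * a_sigma ** 2))
    p /= p.sum()                                   # discrete size distribution
    width = 0.01                                   # intrinsic linewidth, eV
    spectrum = np.zeros_like(hw_eV)
    for e_n, w in zip(levels, weights):
        energies = eg + e_n * (a_mean / a) ** 2    # transition energy vs. radius
        lines = np.exp(-(hw_eV[:, None] - energies[None, :]) ** 2 / (2 * width ** 2))
        spectrum += w * (lines * p[None, :]).sum(axis=1)
    return spectrum / spectrum.max()

hw = np.linspace(2.5, 4.5, 600)
k = ensemble_absorption(hw)
print(f"strongest (ground-state) peak near {hw[np.argmax(k)]:.2f} eV")
```

The narrow intrinsic lines broaden into bands whose widths reflect the size dispersion, and the lowest-energy (ground-state) transition dominates, in line with the qualitative conclusions quoted above.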
We hope that the readers of the Journal of Nanophotonics will enjoy this special section. It should bring new ideas to a wide audience of scientists, researchers, and students working in the field of optics, spectroscopy, and nanophotonics of nanostructures containing semiconductor and dielectric quantum dots.

1. S. I. Pokutnyi, O. V. Ovchinnikov and T. S. Kondratenko, "Absorption of light by colloidal semiconductor quantum dots," J. Nanophotonics 10(3), 033506 (2016). http://dx.doi.org/10.1117/1.JNP.10.033506
2. O. V. Ovchinnikov et al., "Sensitization of photoprocesses in colloidal Ag2S quantum dots by dye molecules," J. Nanophotonics 10(3), 033505 (2016). http://dx.doi.org/10.1117/1.JNP.10.033505
3. V. G. Klyuev et al., "Relationship between structural and optical properties in colloidal CdxZn1−xS quantum dots in gelatin," J. Nanophotonics 10(3), 033507 (2016). http://dx.doi.org/10.1117/1.JNP.10.033507
4. S. I. Pokutnyi, "Size quantization of excitons in quasi-zero-dimensional semiconductor structures," Phys. Lett. A 168(5–6), 433–436 (1992).
5. S. I. Pokutnyi, "Optical nanolaser on the heavy hole transition in semiconductor nanocrystals: theory," Phys. Lett. A 342, 347–350 (2005). http://dx.doi.org/10.1016/j.physleta.2005.04.070
6. S. I. Pokutnyi, "Interband absorption of light in semiconductor nanostructures," Semiconductors 37(6), 718–722 (2003). http://dx.doi.org/10.1134/1.1582542
7. A. Sergeev et al., "Thermal modification of optical properties of silicate nanocomposites based on cadmium sulphide quantum dots," J. Nanophotonics 10(3), 033510 (2016). http://dx.doi.org/10.1117/1.JNP.10.033510
8. V. Mykhaylovsky, V. Sugakov and I. Goliney, "Excitation of pulses of excitonic condensed phase at steady pumping," J. Nanophotonics 10(3), 033504 (2016). http://dx.doi.org/10.1117/1.JNP.10.033504
9. J. Zribi et al., "In-situ height engineering of InGaAs/GaAs quantum dots by chemical beam epitaxy," J. Nanophotonics 10(3), 033502 (2016). http://dx.doi.org/10.1117/1.JNP.10.033502
10. I. A. Starkov and A. S. Starkov, "Effective parameters of nanomatryoshkas," J. Nanophotonics 10(3), 033503 (2016). http://dx.doi.org/10.1117/1.JNP.10.033503
11. D. A. Baghdasaryan, D. B. Hayrapetyan and E. M. Kazaryan, "Optical properties of narrow band prolate ellipsoidal quantum layers ensemble," J. Nanophotonics 10(3), 033508 (2016). http://dx.doi.org/10.1117/1.JNP.10.033508
12. A. Regler et al., "Emission redistribution from a quantum dots-bowtie nanoantenna," J. Nanophotonics 10(3), 033509 (2016). http://dx.doi.org/10.1117/1.JNP.10.033509
13. P. A. Golovinski, V. A. Astapenko and E. S. Manuylovich, "Excitation of quantum dot by femtosecond plasmon-polariton pulse focused by conducting cone," J. Nanophotonics 10(3), 033511 (2016). http://dx.doi.org/10.1117/1.JNP.10.033511

Sergey I. Pokutnyi is a professor of theoretical physics (doctor of sciences in physics and mathematics) at the Chuiko Institute of Surface Chemistry, National Academy of Sciences of Ukraine, Kyiv. His current research interests are theoretical optics and spectroscopy of nanosystems (electron, exciton, and biexciton states, and nanophotonics) and condensed matter theory (the theory of local electron states and the theory of transfer of electron excitation energy).
He has published more than 200 papers in ISI journals and 10 books.

Yuri N. Kulchin received a Doctor of Science degree in laser physics in 1991. Since 2011 he has been an academician of the Russian Academy of Sciences. His research interests include laser physics, optical data processing, wave and nonlinear optics, photonics of nano- and microstructures, optical sensors, and nanotechnology. He is the author of more than 500 scientific papers, including 7 monographs, 4 monograph chapters, and 24 patents.

© 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)

Sergey I. Pokutnyi, Yuriy N. Kulchin, "Special Section Guest Editorial: Optics, Spectroscopy, and Nanophotonics of Quantum Dots," Journal of Nanophotonics 10(3), 033501 (8 July 2016). https://doi.org/10.1117/1.JNP.10.033501
Quantum field theory

Relativistic Quantum Field Theory :: Chapter from 'Deep Down Things' by Bruce A. Schumm

Summary and review of the above chapter

INTRODUCTION: The ability of particles possessing mass and charge to be annihilated, and of charge carriers such as photons to be absorbed by such particles, could be argued to make it difficult to view quanta as something fundamental.

Quantum field theory, which was developed between the 1920s and the 1940s, represents an attempt to make quantum theory more compatible with relativity. In physics, a field constitutes properties that can be measured at every point in a particular region. With an electrically charged particle, which can attract or repel other charged particles, its electric field is just the strength and direction of the force on the other particles. The force felt by a particle placed in an electric or other field is the value of the field at that particular location in space and time multiplied by the charge of the particle placed in it.

Electrons and photons

In the case of two electrons, which carry negative charge, there is a repulsive force between them as a result of the exchange of photons, which are the quanta of the electromagnetic force. It is pointed out that this quantum description of the force between particles, involving a photon, is the opposite of the classical physics conception that forces such as gravity or electromagnetism act at a distance. The photons that carry the repulsive force between the electrons are created out of the vacuum by the presence of the electrons, and they disappear again after the exchange between the electrons. In quantum field theory, the electromagnetic force is the result of the exchange of such virtual photons. A distinction is made between virtual photons created out of the vacuum, and non-virtual or 'real' photons emitted by a light source. Any force is seen as a consequence of the exchange of quanta appropriate to that force. Photons are the field quanta of the electromagnetic force, in the same way that 'W' and 'Z' bosons convey the weak nuclear force, and gluons convey the strong nuclear force.

Feynman diagrams

Feynman diagrams enabled visualisation of the behaviour of quanta in time and space. By convention, movement in space is plotted along a horizontal axis labelled 'x' at the bottom of the graph, and passage in time along a vertical axis labelled 't' at the side of the graph. As an example, two electrons approach one another; at a point 'A' one electron emits a photon carrying energy and momentum away from the electron that emits it, and this causes that electron to recoil. The photon that has been emitted is absorbed by the other electron, which recoils in the opposite direction to the first electron. This relationship represents the mutual repulsion of two negatively charged electrons. The relationship could be the other way round, with the second electron emitting the photon, or with both electrons emitting photons. However, in quantum mechanics the exchange is seen as a probability of an exchange involving one photon, a probability of an exchange involving two photons, and so on; in fact any number of photons can be exchanged. Each of the myriad possibilities for such an exchange can be represented by a Feynman diagram. The points of contact between the electrons and the photons in a Feynman diagram are referred to as vertices.
An object with angular momentum has kinetic energy but may not be moving anywhere; it may instead simply be turning about a fixed point in the manner of a spinning top. Electrons also spin about their axes. The angular momentum or spin of a quantum is a fixed property, and its magnitude cannot be varied. The angular momentum of electrons is expressed as spin ½, while photons are spin 1. Electrons are part of the class of quanta known as fermions, with spins that are odd-number multiples of ½, and this type of particle has mass and charge. The class of quanta with integer spin are known as bosons, and they convey forces, as with the electromagnetic force. When an electric charge is in motion, which includes the motion of rotating about its axis, it creates magnetic fields. The property of magnetism relates to the angular momentum or spin ½ of electrons conceived as spinning about their axes.

Quantum equations

In the Schrödinger equation, the kinetic energy of a particle's motion plus the potential energy of resisting a force is equal to the total energy of the system, so long as it is not disturbed from outside. But in relativistic field theory, we are dealing with the exchange of field quanta rather than potential energy. Problems with negative values arising in the equations of quantum mechanics were resolved by the concept of antimatter, such as the positron, which is the positively charged antimatter counterpart of the electron. If a negatively charged electron and a positively charged positron collide, they annihilate one another, meaning that their energy is converted into photons. According to the first law, the energy of particles is always conserved. In this collision, mass is converted into energy, demonstrating the famous Einstein equation E = mc², where E is the energy of the system, m is its mass, and c is the speed of light. But the ability of particles possessing mass and charge to be annihilated, and of charge carriers such as photons to be absorbed by such particles, could be argued to make it difficult to view quanta as something fundamental.

Electron-positron pairs

Where two electrons interact via a photon, it is possible for the photon to momentarily fluctuate into an electron-positron pair before being converted back into a photon. The Heisenberg uncertainty principle, in which uncertainty also applies to the mass of particles, means that over a sufficiently short space of time a massless particle can be converted into something with mass. Again this might be seen to undermine the concept of the quantum as something fundamental. Further to this, a momentary fluctuation of a virtual photon into an electron-positron pair can collide with a non-virtual photon and can be transformed, by absorbing the photon's energy, into a real and persisting electron and positron. Thus light can strike particles with mass and charge out of the vacuum. This confirms the existence of a seething mass of electron-positron pairs in the vacuum. It is pointed out that in the nature of quantum mechanics there would be an infinite number of electron-photon interactions, including an infinite number of electron-positron pairs. Renormalisation, a calculation applied in quantum field theory, is a way of taking account of these infinite fluctuations when measuring an electron.
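As a quick numerical check of the E = mc² statement above: when an electron and a positron annihilate at rest, the total rest energy is shared between two photons, each carrying the electron rest energy of about 511 keV. A minimal sketch:

```python
# Rest energy released when an electron and a positron annihilate at rest:
# E = m c^2 per particle, shared between two photons of roughly 511 keV each.
m_e = 9.1093837015e-31      # electron (= positron) rest mass, kg
c   = 2.99792458e8          # speed of light, m/s
eV  = 1.602176634e-19       # joules per electronvolt

photon_energy_eV = m_e * c ** 2 / eV
print(f"each photon:    {photon_energy_eV / 1e3:.1f} keV")
print(f"total released: {2 * photon_energy_eV / 1e6:.3f} MeV")
```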
Basic Research Needs Workshop on Synthesis Science for Energy Relevant Technology

Synthesis Science for Energy Relevant Technology

This report, which is the result of the Basic Energy Sciences Workshop on Basic Research Needs for Synthesis Science for Energy Technologies, lays out the scientific challenges and opportunities in synthesis science. The workshop was attended by more than 100 leading national and international scientific experts. Its five topical and two crosscutting panels identified four priority research directions (PRDs) for realizing the vision of predictive, science-directed synthesis:

1. Achieve mechanistic control of synthesis to access new states of matter
The opportunities for synthesizing new materials are almost limitless. The challenge is to combine prior experience and examples with new theoretical, computational, and experimental tools in a measured way that will allow us to tease out specific molecular structures with targeted properties. Harnessing the rulebook that atoms and molecules use to self-assemble will accelerate the discovery of new matter and the means to most effectively make it.

2. Accelerate materials discovery by exploiting extreme conditions, complex chemistries and molecules, and interfacial systems
Even as our theoretical understanding of synthetic processes increases, many future discoveries will come from regions of parameter space that are relatively unexplored and beyond current predictive capabilities. These include extreme conditions of high fluxes, fields, and forces; complex chemistries and heterogeneous structures; and the high information content made possible by sequence-defined macromolecules such as DNA. This PRD emphasizes that materials synthesis will remain a voyage of discovery, and that synthetic, characterization, and theoretical tools will need to continuously adapt to new developments.

3. Harness the complex functionality of hierarchical matter
Hierarchical matter exploits the coupling among the different types of atomic assemblies, or heterogeneities, distributed across multiple length scales. These interactions lead to emergent properties not possible in homogeneous materials. Dramatic advances in the complex functions required for energy production, storage, and use will result from control over the transport of charge, mass, and spin; dissipative response to external stimuli; and localization of sequential and parallel chemical reactions made possible by hierarchical matter.

4. Integrate emerging theoretical, computational, and in situ characterization tools to achieve directed synthesis with real time adaptive control
Theory, computation, and characterization are critical components to the effective discovery and design of new molecules and materials. Important but insufficient is the prediction of the final composition and structure. Critical to the process is knowing and predicting how materials assemble and the consequences of the assembly for final material properties. Combining in situ probes with theory and modeling to guide the synthetic process in real time, while allowing adaptive control to accommodate system variations, will dramatically shorten the time and energy requirements for the development of new molecules and materials.

The historical impact of chemistry and materials on society makes a compelling case for developing a foundational science of synthesis.
Doing so will enable the quick prediction and discovery of new molecules and materials and mastery of their synthesis for rapid deployment in new technologies, especially those for energy generation and end use. The PRDs identified in this workshop hold the promise of enabling the dream of synthesizing these new molecules and materials on demand by finally realizing the ability to link predictive design to predictive synthesis.

BES Workshop on Future Electron Sources

The DOE Office of Basic Energy Sciences (BES) sponsored the Future Electron Sources workshop to identify opportunities and needs for injector developments at the existing and future BES facilities. The workshop was held at the SLAC National Accelerator Laboratory on September 8-9, 2016. The workshop assessed the state of the art and future development requirements, with emphasis on the underlying engineering, science and technology necessary to realize the next generation of electron injectors to advance photon based science. A major objective was to optimize the performance of free electron laser facilities, which are presently limited in x-ray power and spectrum coverage due to the unavailability of suitable injectors. An ultra-fast and ultra-bright electron source is also required for advances in Ultrafast Electron Diffraction (UED) and future Ultrafast Electron Microscopy (UEM). The scope included normal conducting and superconducting RF injectors, including better performance cathodes and simulation tools. The workshop explored opportunities for discovery enabled by advanced electron sources, and identified processes to enhance interactions and collaborations among DOE laboratories to most effectively use their resources and skills to advance scientific frontiers in energy-relevant areas, as well as the challenges anticipated by advances in source brightness. The goals of this workshop were to:
• Evaluate the present state of the art in electron injectors
• Identify the gaps in current electron source capabilities, and what developments should have high priority to support current and future photon based science
• Identify the engineering, science and technology challenges
• Identify methods of interaction and collaboration among the facilities so that resources are most effectively focused onto key problems
• Generate a report of the workshop activities including a prioritized list of the research directions to address the key challenges
Workshop participants emphasized that advances in all major technical areas of electron sources are required to meet future X-ray and electron scattering instrument needs.

Basic Research Needs Workshop on Quantum Materials for Energy Relevant Technology

Computers have revolutionized every aspect of our lives. Yet in science, the most tantalizing applications of computing lie just beyond our reach. The current quest to build an exascale computer with one thousand times the capability of today's fastest machines (and more than a million times that of a laptop) will take researchers over the next horizon. The field of materials, chemical reactions, and compounds is inherently complex. Imagine millions of new materials with new functionalities waiting to be discovered, while researchers also seek to extend those materials that are known to a dizzying number of new forms. We could translate massive amounts of data from high precision experiments into new understanding through data mining and analysis.
We could have at our disposal the ability to predict the properties of these materials, to follow their transformations during reactions on an atom-by-atom basis, and to discover completely new chemical pathways or physical states of matter. Extending these predictions from the nanoscale to the mesoscale, from the ultrafast world of reactions to long-time simulations to predict the lifetime performance of materials, and to the discovery of new materials and processes will have a profound impact on energy technology. In addition, discovery of new materials is vital to move computing beyond Moore’s law. To realize this vision, more than hardware is needed. New algorithms to take advantage of the increase in computing power, new programming paradigms, and new ways of mining massive data sets are needed as well. This report summarizes the opportunities and the requisite computing ecosystem needed to realize the potential before us. In addition to pursuing new and more complete physical models and theoretical frameworks, this review found that the following broadly grouped areas relevant to the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR) would directly affect the Basic Energy Sciences (BES) mission need.
Basic Research Needs Workshop on Quantum Materials for Energy Relevant Technology
JPG.jpg file (326KB) Report.pdf file (5.0MB)
Imagine future computers that can perform calculations a million times faster than today’s most powerful supercomputers at only a tiny fraction of the energy cost. Imagine power being generated, stored, and then transported across the national grid with nearly no loss. Imagine ultrasensitive sensors that keep us in the loop on what is happening at home or work, warn us when something is going wrong around us, keep us safe from pathogens, and provide unprecedented control of manufacturing and chemical processes. And imagine smart windows, smart clothes, smart buildings, supersmart personal electronics, and many other items — all made from materials that can change their properties “on demand” to carry out the functions we want. The key to attaining these technological possibilities in the 21st century is a new class of materials largely unknown to the general public at this time but destined to become as familiar as silicon. Welcome to the world of quantum materials — materials in which the extraordinary effects of quantum mechanics give rise to exotic and often incredible properties.
Sustainable Ammonia Synthesis – Exploring the scientific challenges associated with discovering alternative, sustainable processes for ammonia production
Ammonia Sustainment Report JPG.jpg file (255KB) Report.pdf file (1.4MB)
Ammonia (NH3) is essential to all life on our planet. Until about 100 years ago, NH3 produced by reduction of dinitrogen (N2) in air came almost exclusively from bacteria containing the enzyme nitrogenase. DOE convened a roundtable of experts on February 18, 2016. Participants in the Roundtable discussions concluded that the scientific basis for sustainable processes for ammonia synthesis is currently lacking, and it needs to be enhanced substantially before it can form the foundation for alternative processes. The Roundtable Panel identified an overarching grand challenge and several additional scientific grand challenges and research opportunities:
• Discovery of active, selective, scalable, long-lived catalysts for sustainable ammonia synthesis.
• Development of relatively low-pressure (<10 atm) and relatively low-temperature (<200 °C) thermal processes.
• Integration of knowledge from nature (enzyme catalysis), molecular/homogeneous catalysis, and heterogeneous catalysis.
• Development of electrochemical and photochemical routes for N2 reduction based on proton and electron transfer.
• Development of biochemical routes to N2 reduction.
• Development of chemical looping (solar thermochemical) approaches.
• Identification of descriptors of catalytic activity using a combination of theory and experiments.
• Characterization of surface adsorbates and catalyst structures (chemical, physical, and electronic) under conditions relevant to ammonia synthesis.
Neuromorphic Computing – From Materials Research to Systems Architecture Roundtable
JPG.jpg file (160KB) Report.pdf file (7.3MB)
The roundtable examined the new neuromorphic architectures needed to deliver dramatically lower energy consumption, the potential of novel nanostructured materials to realize such architectures, and the enhanced computational capabilities they could provide.
Basic Research Needs for Environmental Management
JPG.jpg file (308KB) Report.pdf file (5.4MB)
This report is based on a BES/BER/ASCR workshop on Basic Research Needs for Environmental Management, which was held on July 8-11, 2015. The workshop goal was to define priority research directions that will provide the scientific foundations for future environmental management technologies, which will enable more efficient, cost-effective, and safer cleanup of nuclear waste. One of the US Department of Energy’s (DOE) biggest challenges today is cleanup of the legacy resulting from more than half a century of nuclear weapons production. The research and manufacturing associated with the development of the nation’s nuclear arsenal has left behind staggering quantities of highly complex, highly radioactive wastes and contaminated soils and groundwater. Based on current knowledge of these legacy problems and currently available technologies, DOE projects that hundreds of billions of dollars and more than 50 years of effort will be required for remediation. Over the past decade, DOE’s progress towards cleanup has been stymied in part by a lack of investment in basic science that is foundational to innovation and new technology development. During this decade, amazing progress has been made in both experimental and computational tools that have been applied to many energy problems such as catalysis, bioenergy, and solar energy. Our abilities to observe, model, and exploit chemical phenomena at the atomic level, along with our understanding of the translation of molecular phenomena to macroscopic behavior and properties, have advanced tremendously; however, remediation of DOE’s legacy waste problems has not yet benefited from these advances because of the lack of investment in basic science for environmental cleanup. Advances in science and technology can provide the foundation for completing the cleanup more swiftly, inexpensively, safely, and effectively. The lack of investment in research and technology development by DOE’s Office of Environmental Management (EM) was noted in a report by a task force to the Secretary of Energy’s Advisory Board (SEAB 2014).
Among several recommendations, the report suggested a workshop be convened to develop a strategic plan for a “fundamental research program focused on developing new knowledge and capabilities that bear on the EM challenges.” This report summarizes the research directions identified at a workshop on Basic Research Needs for Environmental Management. This workshop, held July 8-11, 2015, was sponsored by three Office of Science offices: Basic Energy Sciences, Biological and Environmental Research, and Advanced Scientific Computing Research. The workshop participants included 65 scientists and engineers from universities, industry, and national laboratories, along with observers from the DOE Offices of Science, EM, Nuclear Energy, and Legacy Management. As a result of the discussions at the workshop, participants articulated two Grand Challenges for science associated with EM cleanup needs. They are as follows:
Interrogation of Inaccessible Environments over Extremes of Time and Space
Whether the contamination problem involves highly radioactive materials in underground waste tanks or large volumes of contaminated soils and groundwaters beneath the Earth’s surface, characterizing the problem is often stymied by an inability to safely and cost-effectively interrogate the system. Sensors and imaging capabilities that can operate in the extreme environments typical of EM’s remaining cleanup challenges do not exist. Alternatively, large amounts of data can sometimes be obtained about a system, but appropriate data analytics tools are lacking to enable effective and efficient use of all the information for performance regression or prediction. Research into new approaches for remote and in situ sensing, and into new algorithms for data analytics, is critically needed. Depending on the cleanup problem, these new approaches must span temporal and spatial scales—from seconds to millennia, from atoms to kilometers.
Understanding and Exploiting Interfacial Phenomena in Extreme Environments
While many of EM’s remaining cleanup problems involve unprecedented extremes in complexity, an additional layer is provided by the numerous contaminant forms and their partitioning across interfaces in these wastes, including liquid-liquid, liquid-solid, and others. For example, the wastes in the high-level radioactive waste tanks can have consistencies of paste, gels, or non-Newtonian slurries, where water behaves more like a solute than a solvent. Unexpected chemical forms of the contaminants and radionuclides partition to unusual solids, colloids, and other phases in the tank wastes, complicating their efficient separation. Mastery of the chemistry controlling contaminant speciation and its behavior at the solid-liquid and liquid-liquid interfaces in the presence of large quantities of ionizing radiation is needed to develop improved waste treatment approaches and enhance the operating efficiencies of treatment facilities. These same interfacial processes, if understood, can be exploited to develop entirely new approaches for effective separations technologies, both for tank waste processing and subsurface remediation. Based on the findings of the technical panels, six Priority Research Directions (PRDs) were identified as the most urgent scientific areas that need to be addressed to enable EM to meet its mission goals. All of these PRDs are also embodied in the two Grand Challenges. Further, these six PRDs are relevant to all aspects of EM waste issues, including tank wastes, waste forms, and subsurface contamination.
These PRDs include the following:
• Elucidating and exploiting complex speciation and reactivity far from equilibrium.
• Understanding and controlling chemical and physical processes at interfaces.
• Harnessing physical and chemical processes to revolutionize separations.
• Mechanisms of materials degradation in harsh environments.
• Mastering hierarchical structures to tailor waste forms.
• Predictive understanding of subsurface system behavior and response to perturbations.
Two recurring themes emerged during the course of the workshop that cut across all of the PRDs. These crosscutting topics give rise to Transformative Research Capabilities. The first such capability, Multidimensional characterization of extreme, dynamic, and inaccessible environments, centers on the need for obtaining detailed chemical and physical information on EM wastes in waste tanks and during waste processing, in waste forms, and in the environment. New approaches are needed to characterize and monitor these highly hazardous and/or inaccessible materials in their natural environment, using either in situ techniques or remote monitoring. These approaches are particularly suited for studying changes in the wastes over time and distance, for example. Such in situ and remote techniques are also critical for monitoring the effectiveness of waste processing, subsurface transport, and long-term waste form stability. However, far more detailed information will be needed to obtain fundamental insight into materials structure and molecular-level chemical and physical processes required for many of the PRDs. For these studies, samples must be retrieved and studied ex situ, but the hazardous nature of these samples requires special handling. Recent advances in nanoscience have catalyzed the development of high-sensitivity characterization tools—many of which are available at DOE user facilities, including radiological user facilities—and the means of handling ultrasmall samples, including micro- and nanofluidics and nanofabrication tools. These advances open the door to obtaining unprecedented information that is crucial to formulating concepts for new technologies to complete EM’s mission. The sheer magnitude of the data needed to fully understand the complexity of EM wastes is daunting, but it is just the beginning. Additional data will need to be gathered to both monitor and predict changes—in tank wastes, during processing, in waste forms, and in the subsurface—over broad time and spatial scales. Therefore, the second Transformative Research Capability, Integrated simulation and data-enabled discovery, identified the need to develop curated databases and to link experiments and theory through big-deep data methodologies. These state-of-the-art capabilities will be enabled by high-performance computing resources available at DOE user facilities. The foundational knowledge to support innovation for EM cannot wait as the tank wastes continue to deteriorate and result in environmental, health, and safety issues. As clearly stated in the 2014 Secretary of Energy Advisory Board report, completion of EM’s remaining responsibilities will simply not be possible without significant innovation, and that innovation can be derived from use-inspired fundamental research as described in this report. The breakthroughs that will evolve from this investment in basic science will reduce the overall risk and financial burden of cleanup while also increasing the probability of success.
The time is now ripe to proceed with the basic science in support of more effective solutions for environmental management. The knowledge gleaned from this basic research will also have broad applicability to many other areas central to DOE’s mission, including separations methods for critical materials recovery and isotope production, robust materials for advanced reactor and steam turbine designs, and new capabilities for examining subsurface transport relevant to the water/energy nexus.
Challenges at the Frontiers of Matter and Energy: Transformative Opportunities for Discovery Science
JPG.jpg file (90KB) Report.pdf file (19.4MB)
As a result of this effort, it has become clear that the progress made to date on the five Grand Challenges has created a springboard for seizing five new Transformative Opportunities that have the potential to further transform key technologies involving matter and energy. These five new Transformative Opportunities and the evidence supporting them are discussed in this new report, “Challenges at the Frontiers of Matter and Energy: Transformative Opportunities for Discovery Science.”
• Mastering Hierarchical Architectures and Beyond-Equilibrium Matter
Complex materials and chemical processes transmute matter and energy, for example from CO2 and water to chemical fuel in photosynthesis, from visible light to electricity in solar cells, and from electricity to light in light-emitting diodes (LEDs). Such functionality requires complex assemblies of heterogeneous materials in hierarchical architectures that display time-dependent, away-from-equilibrium behaviors. Much of the foundation of our understanding of such transformations, however, is based on monolithic single-phase materials operating at or near thermodynamic equilibrium. The emergent functionalities enabling next-generation disruptive energy technologies require mastering the design, synthesis, and control of complex hierarchical materials employing dynamic far-from-equilibrium behavior. A key guide in this pursuit is nature, for biological systems prove the power of hierarchical assembly and far-from-equilibrium behavior. The challenges here are many: a description of the functionality of hierarchical assemblies in terms of their constituent parts, a blueprint of atomic and molecular positions for each constituent part, and a synthesis strategy for (a) placing the atoms and molecules in the proper positions for the component parts and (b) arranging the component parts into the required hierarchical structure. Targeted functionality will open the door to significant advances in the harvesting, transformation (e.g., reducing CO2, splitting water, and fixing nitrogen), storage, and use of energy to create new materials, manufacturing processes, and technologies—the lifeblood of human societies and economic growth.
• Beyond Ideal Materials and Systems: Understanding the Critical Roles of Heterogeneity, Interfaces, and Disorder
Real materials, both natural ones and those we engineer, are usually a complex mixture of compositional and structural heterogeneities, interfaces, and disorder across all spatial and temporal scales. It is the fluctuations and disorderly states of these heterogeneities and interfaces that often determine the system’s properties and functionality. Much of our fundamental scientific knowledge is based on “ideal” systems, meaning materials that are observed in “frozen” states or represented by spatially or temporally averaged states.
Too often, this approach has yielded overly simplistic models that hide important nuances and do not capture the complex behaviors of materials under realistic conditions. These behaviors drive vital chemical transformations such as catalysis, which initiates most industrial manufacturing processes, and friction and corrosion, the parasitic effects of which cost the U.S. economy billions of dollars annually. Expanding our scientific knowledge from the relative simplicity of ideal, perfectly ordered, or structurally averaged materials to the true complexity of real-world heterogeneities, interfaces, and disorder should enable us to realize enormous benefits in the materials and chemical sciences that translate to energy technologies, including solar and nuclear power, hydraulic fracturing, power conversion, airframes, and batteries.
• Harnessing Coherence in Light and Matter
Quantum coherence in light and matter is a measure of the extent to which a wave field vibrates in unison with itself at neighboring points in space and time. Although this phenomenon is expressed at the atomic and electronic scales, it can dominate the macroscopic properties of materials and chemical reactions such as superconductivity and efficient photosynthesis. In recent years, enormous progress has been made in recognizing, manipulating, and exploiting quantum coherence. This progress has already elucidated the role that symmetry plays in protecting coherence in key materials, taught us how to use light to manipulate atoms and molecules, and provided us with increasingly sophisticated techniques for controlling and probing the charges and spins of quantum coherent systems. With the arrival of new sources of coherent light and electron beams, thanks in large part to investments by the U.S. Department of Energy’s Office of Basic Energy Sciences (BES), there is now an opportunity to engineer coherence in heterostructures that incorporate multiple types of materials and to control complex, multistep chemical transformations. This approach will pave the way for quantum information processing and next-generation photovoltaic cells and sensors.
• Revolutionary Advances in Models, Mathematics, Algorithms, Data, and Computing
Science today is benefiting from a convergence of theoretical, mathematical, computational, and experimental capabilities that put us on the brink of greatly accelerating our ability to predict, synthesize, and control new materials and chemical processes, and to understand the complexities of matter across a range of scales. Imagine being able to chart a path through a vast sea of possible new materials to find a select few with desired properties. Instead of the time-honored forward approach, in which materials with desired properties are found through either trial-and-error experiments or lucky accidents, we have the opportunity to inversely design and create new materials that possess the properties we desire. The traditional approach has allowed us to make only a tiny fraction of all the materials that are theoretically possible. The inverse design approach, through the harmonious convergence of theoretical, mathematical, computational, and experimental capabilities, could usher in a virtual cornucopia of new materials with functionalities far beyond what nature can provide.
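The contrast between the forward and inverse approaches can be made concrete with a toy sketch. Nothing below comes from the report: the "forward model" is an invented polynomial standing in for a real property predictor (e.g., a simulation or machine-learned surrogate), and the inverse step is the simplest possible search over a one-parameter composition space for the candidate whose predicted property best matches a target.

```python
# Toy illustration of inverse design: rather than simulating candidates one by
# one and hoping (the forward approach), search composition space for the
# candidate whose predicted property best matches a target value.
import random

def predicted_property(x: float) -> float:
    # Hypothetical forward model mapping a composition parameter x in [0, 1]
    # to a scalar property (say, a band gap in eV). Purely illustrative.
    return 3.0 * x * (1.0 - x) + 0.5

def inverse_design(target: float, n_candidates: int = 10_000) -> float:
    # Brute-force inverse step: keep the composition whose predicted
    # property lands closest to the target.
    best_x, best_err = 0.0, float("inf")
    for _ in range(n_candidates):
        x = random.random()
        err = abs(predicted_property(x) - target)
        if err < best_err:
            best_x, best_err = x, err
    return best_x

if __name__ == "__main__":
    x_opt = inverse_design(target=1.1)
    print(f"x = {x_opt:.3f} -> predicted property {predicted_property(x_opt):.3f}")
```

In practice both pieces are replaced: the surrogate becomes a physics-based or data-driven model, and the brute-force search becomes gradient-based, evolutionary, or Bayesian optimization over a vastly larger design space.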
Similarly, enhanced mathematical and computational capabilities will significantly improve our ability to extract physical and chemical insights from vastly larger data streams gathered during multimodal and multidimensional experiments using advanced characterization facilities.
• Exploiting Transformative Advances in Imaging Capabilities across Multiple Scales
Historically, improvements in imaging capabilities have always resulted in improved understanding of scientific phenomena. A prime challenge today is finding ways to reconstruct raw data, obtained by probing and mapping matter across multiple scales, into analyzable images. BES investments in new and improved imaging facilities, most notably synchrotron X-ray sources, free-electron lasers, electron microscopes, and neutron sources, have greatly advanced our powers of observation, as have substantial improvements in laboratory-scale technologies. Furthermore, BES is now planning or actively discussing exciting new capabilities. Taken together, these advances in imaging capabilities provide an opportunity to expand our ability to observe and study matter from the 3D spatial perspectives of today to true “4D” spatially and temporally resolved maps of dynamics that allow quantitative predictions of time-dependent material properties and chemical processes. The knowledge gained will impact data storage, catalyst design, drug delivery, structural materials, and medical implants, to name just a few key technologies.
Seizing each of these five Transformative Opportunities, as well as accelerating further progress on Grand Challenge research, will require specific, targeted investments from BES in the areas of synthesis, meaning the ability to make the materials and architectures that are envisioned; instrumentation and tools, a category that includes theory and computation; and human capital, the most important asset for advancing the Grand Challenges and Transformative Opportunities. While “Challenges at the Frontiers of Matter and Energy: Transformative Opportunities for Discovery Science” could be viewed as a sequel to the original Grand Challenges report, it breaks much new ground in its assessment of the scientific landscape today versus the scientific landscape just a few years ago. In the original Grand Challenges report, it was noted that if the five Grand Challenges were met, our ability to direct matter and energy would be measured only by the limits of human imagination. This new report shows that, prodded by those challenges, the scientific community is positioned today to seize new opportunities whose impacts promise to be transformative for science and society, as well as dramatically accelerate progress in the pursuit of the original Grand Challenges.
Controlling Subsurface Fractures and Fluid Flow: A Basic Research Agenda
JPG.jpg file (479KB) Report.pdf file (831KB)
From beneath the surface of the earth, we currently obtain about 80 percent of the energy our nation consumes each year. In the future we have the potential to generate billions of watts of electrical power from clean, green, geothermal energy sources. Our planet’s subsurface can also serve as a reservoir for storing energy produced from intermittent sources such as wind and solar, and it could provide safe, long-term storage of excess carbon dioxide, energy waste products, and other hazardous materials. However, it is impossible to overestimate the complexities of the subsurface world.
These complexities challenge our ability to acquire the scientific knowledge needed for the efficient and safe exploitation of its resources. To more effectively harness subsurface resources while mitigating the impacts of developing and using these resources, the U.S. Department of Energy established SubTER – the Subsurface Technology and Engineering RD&D Crosscut team. This DOE multi-office team engaged scientists and engineers from the national laboratories to assess and make recommendations for improving energy-related subsurface engineering. The SubTER team produced a plan with the overall objective of “adaptive control of subsurface fractures and fluid flow.” This plan revolved around four core technological pillars—Intelligent Wellbore Systems that sustain the integrity of the wellbore environment; Subsurface Stress and Induced Seismicity programs that guide and optimize sustainable energy strategies while reducing the risks associated with subsurface injections; Permeability Manipulation studies that improve methods of enhancing, impeding, and eliminating fluid flow; and New Subsurface Signals that transform our ability to see into and characterize subsurface systems. The SubTER team developed an extensive R&D plan for advancing technologies within these four core pillars and also identified several areas where new technologies would require additional basic research. In response, the Office of Science, through its Office of Basic Energy Sciences (BES), convened a roundtable consisting of 15 national laboratory, university, and industry geoscience experts to brainstorm basic research areas that underpin the SubTER goals but are currently underrepresented in the BES research portfolio. Held in Germantown, Maryland on May 22, 2015, the roundtable participants developed a basic research agenda that is detailed in this report. Highlights include the following:
• A grand challenge calling for advanced imaging of stress and geological processes to help understand how stresses and chemical substances are distributed in the subsurface—knowledge that is critical to all aspects of subsurface engineering;
• A priority research direction aimed at achieving control of fluid flow through fractured media (see the sketch following this list);
• A priority research direction aimed at better understanding how mechanical and geochemical perturbations to subsurface rock systems are coupled through fluid and mineral interactions;
• A priority research direction aimed at studying the structure, permeability, reactivity, and other properties of nanoporous rocks, like shale, which have become critical energy materials and exhibit important hallmarks of mesoscale materials;
• A cross-cutting theme that would accelerate development of advanced computational methods to describe heterogeneous, time-dependent geologic systems and could, among other potential benefits, provide new and vastly improved models of hydraulic fracturing and its environmental impacts;
• A cross-cutting theme that would lead to the creation of “geo-architected materials” with controlled, repeatable heterogeneity and structure that can be tested under a variety of thermal, hydraulic, chemical, and mechanical conditions relevant to subsurface systems;
• A cross-cutting theme calling for new laboratory studies on both natural and geo-architected subsurface materials that deploy advanced high-resolution 3D imaging and chemical analysis methods to determine the rates and mechanisms of fluid-rock processes, and to test predictive models of such phenomena.
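The priority research direction on controlling fluid flow through fractured media (flagged in the list above) rests on a classical idealization worth stating explicitly: for laminar flow between smooth parallel plates, the volumetric flow rate grows as the cube of the fracture aperture, so small aperture changes dominate subsurface flow. The sketch below works through this "cubic law"; all parameter values are illustrative and are not drawn from the roundtable report.

```python
# Classic parallel-plate ("cubic law") idealization of laminar flow through a
# single rock fracture. Parameter values are illustrative only.

def cubic_law_flow(aperture_m, width_m, length_m, dp_pa, viscosity_pa_s=1e-3):
    """Volumetric flow rate (m^3/s) through a smooth parallel-plate fracture."""
    return (width_m * aperture_m**3 / (12.0 * viscosity_pa_s)) * (dp_pa / length_m)

if __name__ == "__main__":
    # Doubling a 100-micron aperture raises the flow eightfold (2^3).
    for aperture in (100e-6, 200e-6):
        q = cubic_law_flow(aperture, width_m=1.0, length_m=10.0, dp_pa=1e5)
        print(f"aperture {aperture * 1e6:.0f} um -> Q = {q:.2e} m^3/s")
```

Real fractures are rough, partially closed, and chemically reactive, which is precisely why the roundtable calls for research beyond this idealization.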
Many of the key energy challenges of the future demand a greater understanding of the subsurface world in all of its complexity. This greater understanding will improve the ability to control and manipulate the subsurface world in ways that will benefit both the economy and the environment. This report provides specific basic research pathways to address some of the most fundamental issues of energy-related subsurface engineering.
Future of Electron Scattering and Diffraction
JPG.jpg file (582KB) Report.pdf file (18.0MB)
The ability to correlate the atomic- and nanoscale structure of condensed matter with physical properties (e.g., mechanical, electrical, catalytic, and optical) and functionality forms the core of many disciplines. Directing and controlling materials at the quantum, atomic, and molecular levels creates enormous challenges and opportunities across a wide spectrum of critical technologies, including those involving the generation and use of energy. The workshop identified next-generation electron scattering and diffraction instruments that are uniquely positioned to address these grand challenges. The workshop participants identified four key areas where the next generation of such instrumentation would have major impact:
A – Multidimensional Visualization of Real Materials
B – Atomic-scale Molecular Processes
C – Photonic Control of Emergence in Quantum Materials
D – Evolving Interfaces, Nucleation, and Mass Transport
Real materials are composed of complex three-dimensional arrangements of atoms and defects that directly determine their potential for energy applications. Understanding real materials requires new capabilities for three-dimensional atomic-scale tomography and spectroscopy of atomic and electronic structures with unprecedented sensitivity, and with simultaneous spatial and energy resolution. Many molecules are able to selectively and efficiently convert sunlight into other forms of energy, like heat and electric current, or store it in altered chemical bonds. Understanding and controlling such processes at the atomic scale requires unprecedented time resolution. One of the grand challenges in condensed matter physics is to understand, and ultimately control, emergent phenomena in novel quantum materials, which necessitates developing a new generation of instruments that probe the interplay among spin, charge, orbital, and lattice degrees of freedom with intrinsic time- and length-scale resolutions. Molecules and soft matter require imaging and spectroscopy with high spatial resolution without damaging their structure. The strong interaction of electrons with matter allows high-energy electron pulses to gather structural information before a sample is damaged. Imaging, diffraction, and spectroscopy are the fundamental capabilities of electron-scattering instruments. The DOE BES-funded TEAM (Transmission Electron Aberration-corrected Microscope) project achieved unprecedented sub-atomic spatial resolution in imaging through aberration-corrected transmission electron microscopy. To further advance electron scattering techniques that directly enable groundbreaking science, instrumentation must advance beyond traditional two-dimensional imaging.
Advances in temporal resolution, recording the full phase and energy spaces, and improved spatial resolution constitute a new frontier in electron microscopy, and will directly address the BES Grand Challenges, such as to “control the emergent properties that arise from the complex correlations of atomic and electronic constituents” and the “hidden states” “very far away from equilibrium”. Ultrafast methods, such as the pump-probe approach, enable pathways toward understanding, and ultimately controlling, the chemical dynamics of molecular systems and the evolution of complexity in mesoscale and nanoscale systems. Central to understanding how to synthesize and exploit functional materials is having the ability to apply external stimuli (such as heat, light, a reactive flux, and an electrical bias) and to observe the resulting dynamic process in situ and in operando, and under the appropriate environment (e.g., not limited to UHV conditions). To enable revolutionary advances in electron scattering and science, the participants of the workshop recommended three major new instrumental developments:
A. Atomic-Resolution Multi-Dimensional Transmission Electron Microscope: This instrument would provide quantitative information over the entire real space, momentum space, and energy space for visualizing dopants, interstitials, and light elements; for imaging localized vibrational modes and the motion of charged particles and vacancies; for correlating lattice, spin, orbital, and charge; and for determining the structure and molecular chemistry of organic and soft matter. The instrument would be uniquely suited to answer fundamental questions in condensed matter physics that require understanding the physical and electronic structure at the atomic scale. Key developments include stable cryogenic capabilities that will allow access to emergent electronic phases, as well as hard/soft interfaces and radiation-sensitive materials.
B. Ultrafast Electron Diffraction and Microscopy Instrument: This instrument would be capable of nano-diffraction with 10 fs temporal resolution in stroboscopic mode, and better than 100 fs temporal resolution in single-shot mode. The instrument would also achieve single-shot real-space imaging with a spatial/temporal resolution of 10 nm/10 ps, representing a thousandfold improvement over current microscopes. Such a capability would be complementary to X-ray free electron lasers due to the difference in the nature of electron and X-ray scattering, enabling space-time mapping of lattice vibrations and energy transport, facilitating the understanding of molecular dynamics of chemical reactions, the photonic control of emergence in quantum materials, and the dynamics of mesoscopic materials.
C. Lab-In-Gap Dynamic Microscope: This instrument would enable quantitative measurements of materials structure, composition, and bonding evolution in technologically relevant environments, including liquids, gases, and plasmas, thereby assuring the understanding of structure-function relationships at the atomic scale with up to nanosecond temporal resolution. This instrument would employ a versatile, modular sample stage and holder geometry to allow the multi-modal (e.g., optical, thermal, mechanical, electrical, and electrochemical) probing of materials’ functionality in situ and in operando.
The electron optics encompasses a pole piece that can accommodate the new stage, differential pumping, detectors, aberration correctors, and other electron optical elements for measurement of materials dynamics. To realize the proposed instruments in a timely fashion, BES should aggressively support research and development of complementary and enabling instruments, including new electron sources, advanced electron optics, new tunable specimen pumps and sample stages, and new detectors. The proposed instruments would have transformative impact on physics, chemistry, materials science, and engineering.
X-ray Optics for BES Light Source Facilities
JPG.jpg file (118KB) Report.pdf file (5.3MB) Hi-Res.pdf file (38.0MB)
Each new generation of synchrotron radiation sources has delivered an increase in average brightness of 2 to 3 orders of magnitude over the previous generation. The next evolution toward diffraction-limited storage rings will deliver another 3 orders of magnitude increase. For ultrafast experiments, free electron lasers (FELs) deliver 10 orders of magnitude higher peak brightness than storage rings. Our ability to utilize these ultrabright sources, however, is limited by our ability to focus, monochromate, and manipulate these beams with X-ray optics. X-ray optics technology unfortunately lags behind source technology and limits our ability to maximally utilize even today’s X-ray sources. With ever more powerful X-ray sources on the horizon, a new generation of X-ray optics must be developed that will allow us to fully utilize these beams of unprecedented brightness. The increasing brightness of X-ray sources will enable a new generation of measurements that could have revolutionary impact across a broad area of science, if the optical systems necessary for transporting and analyzing X-rays can be perfected. The high coherent flux will facilitate new science utilizing techniques in imaging, dynamics, and ultrahigh-resolution spectroscopy. For example, zone-plate-based hard X-ray microscopes are presently used to look deeply into materials, but today’s resolution and contrast are restricted by limitations of the current lithography used to manufacture nanodiffractive optics. The large penetration length, combined in principle with very high spatial resolution, is an ideal probe of hierarchically ordered mesoscale materials, if zone-plate focusing systems can be improved. Resonant inelastic X-ray scattering (RIXS) probes a wide range of excitations in materials, from charge-transfer processes to the very soft excitations that cause the collective phenomena in correlated electronic systems. However, although RIXS can probe high-energy excitations, the most exciting and potentially revolutionary science involves soft excitations such as magnons and phonons; in general, these are well below the resolution that can be probed by today’s optical systems. The study of these low-energy excitations will only move forward if advances are made in high-resolution gratings for the soft X-ray energy region, and higher-resolution crystal analyzers for the hard X-ray region. In almost all the forefront areas of X-ray science today, the main limitation is our ability to focus, monochromate, and manipulate X-rays at the level required for these advanced measurements. To address these issues, the U.S.
Department of Energy (DOE) Office of Basic Energy Sciences (BES) sponsored a workshop, X-ray Optics for BES Light Source Facilities, which was held March 27–29, 2013, near Washington, D.C. The workshop addressed a wide range of technical and organizational issues. Eleven working groups were formed in advance of the meeting and sought over several months to define the most pressing problems and emerging opportunities and to propose the best routes forward for a focused R&D program to solve these problems. The workshop participants identified nine principal research directions (PRDs), as follows:
• Development of advanced grating lithography and manufacturing for high-energy-resolution techniques such as soft X-ray inelastic scattering.
• Development of higher-precision mirrors for brightness preservation, through the use of advanced metrology in manufacturing and through improvements in manufacturing techniques and in mechanical mounting and cooling.
• Development of higher-accuracy optical metrology that can be used in manufacturing, verification, and testing of optomechanical systems, as well as at-wavelength metrology that can be used for quantification of individual optics and alignment and testing of beamlines.
• Development of an integrated optical modeling and design framework that is designed and maintained specifically for X-ray optics.
• Development of nanolithographic techniques for improved spatial resolution and efficiency of zone plates.
• Development of large, perfect single crystals of materials other than silicon for use as beam splitters, seeding monochromators, and high-resolution analyzers.
• Development of improved thin-film deposition methods for fabrication of multilayer Laue lenses and high-spectral-resolution multilayer gratings.
• Development of supports, actuator technologies, algorithms, and controls to provide fully integrated and robust adaptive X-ray optic systems.
• Development of fabrication processes for refractive lenses in materials other than silicon.
The workshop participants also addressed two important nontechnical areas: our relationship with industry and the organization of optics work within the light source facilities. Optimization of activities within these two areas could have an important effect on the effectiveness and efficiency of our overall endeavor. These are crosscutting managerial issues that need further in-depth study and that must be coordinated above the level of individual facilities. Finally, an issue that cuts across many of the optics improvements listed above is routine access to beamlines that ideally are fully dedicated to optics research and/or development. The success of the BES X-ray user facilities in serving a rapidly increasing user community has led to a squeezing of beam time for vital instrumentation activities. Dedicated development beamlines could be shared with other R&D activities, such as detector programs and novel instrument development. In summary, to meet the challenges of providing the highest-quality X-ray beams for users and to fully utilize the high-brightness sources of today and those that are on the horizon, it will be critical to make strategic investments in X-ray optics R&D. This report can provide guidance and direction for effective use of investments in the field of X-ray optics and potential approaches to develop a better-coordinated program of X-ray optics development within the suite of BES synchrotron radiation facilities.
Due to the importance and complexity of the field, the need for tight coordination among the BES light source facilities and with industry, and the rapid evolution of light source capabilities, the workshop participants recommend holding similar workshops at least biannually.
Neutron and X-ray Detectors
JPG.jpg file (440KB) Report.pdf file (7.9MB) Report.pdf file (16.5MB)
The Basic Energy Sciences (BES) X-ray and neutron user facilities attract more than 12,000 researchers each year to perform cutting-edge science at these state-of-the-art sources. While impressive breakthroughs in X-ray and neutron sources give us the powerful illumination needed to peer into the nano- to mesoscale world, a stumbling block continues to be the distinct lag in detector development, which is slowing progress in data collection and analysis. Urgently needed detector improvements would reveal chemical composition and bonding in 3-D and in real time, allow researchers to watch “movies” of essential life processes as they happen, and make much more efficient use of every X-ray and neutron produced by the source. The immense scientific potential that will come from better detectors has triggered worldwide activity in this area. Europe in particular has made impressive strides, outpacing the United States on several fronts. Maintaining vital U.S. leadership in this key research endeavor will require targeted investments in detector R&D and infrastructure. To clarify the gap between detector development and source advances, and to identify opportunities to maximize the scientific impact of BES user facilities, a workshop on Neutron and X-ray Detectors was held August 1-3, 2012, in Gaithersburg, Maryland. Participants from universities, national laboratories, and commercial organizations from the United States and around the globe participated in plenary sessions, breakout groups, and joint open-discussion summary sessions. Sources have become immensely more powerful and are now brighter (more particles focused onto the sample per second) and more precise (higher spatial, spectral, and temporal resolution). To fully utilize these source advances, detectors must become faster, more efficient, and more discriminating. In supporting the mission of today’s cutting-edge neutron and X-ray sources, the workshop identified six detector research challenges (and two computing hurdles that result from the corresponding increase in data volume) for the detector community to overcome in order to realize the full potential of BES neutron and X-ray facilities. Resolving these detector impediments will improve scientific productivity both by enabling new types of experiments, which will expand the scientific breadth at the X-ray and neutron facilities, and by potentially reducing the beam time required for a given experiment. These research priorities are summarized below. Note that multiple, simultaneous detector improvements are often required to take full advantage of brighter sources.
High-efficiency hard X-ray sensors: The fraction of incident particles that are actually detected defines detector efficiency. Silicon, the most common direct-detection X-ray sensor material, is (for typical sensor thicknesses) 100% efficient at 8 keV, 25% efficient at 20 keV, and only 3% efficient at 50 keV. Other materials are needed for hard X-rays.
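The quoted silicon efficiencies follow from simple exponential (Beer-Lambert) attenuation: efficiency = 1 - exp(-t/l(E)), where t is the sensor thickness and l(E) is the 1/e attenuation length at photon energy E. The sketch below reproduces them approximately, assuming a typical 320 µm sensor and rough attenuation lengths; both are illustrative values rather than figures from the workshop report.

```python
# Beer-Lambert estimate of direct-detection efficiency in a silicon sensor:
# efficiency = 1 - exp(-t / l(E)). Thickness and attenuation lengths are
# rough, assumed values for illustration only.
import math

SENSOR_THICKNESS_UM = 320.0  # a common silicon sensor thickness

# Approximate 1/e attenuation lengths in silicon, in microns (indicative).
ATTENUATION_LENGTH_UM = {8: 70.0, 20: 1100.0, 50: 10000.0}

for energy_kev, att_len in ATTENUATION_LENGTH_UM.items():
    efficiency = 1.0 - math.exp(-SENSOR_THICKNESS_UM / att_len)
    print(f"{energy_kev:>2} keV: ~{100 * efficiency:.0f}% detected")
```

With these inputs the estimate returns roughly 99%, 25%, and 3%, matching the figures quoted above and making plain why denser sensor materials are needed for hard X-rays.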
Replacement for 3He for neutron detectors: 3He has long been the neutron detection medium of choice because of its high cross section over a wide neutron energy range for the reaction 3He + n → 3H + 1H + 0.764 MeV (a back-of-envelope efficiency estimate follows at the end of this section). 3He stockpiles are rapidly dwindling, and what is available can be had only at prohibitively high prices. Doped scintillators hold promise as ways to capture neutrons and convert them into light, although work is needed on brighter, more efficient scintillator solutions. Neutron detectors also require advances in speed and resolution.
Fast-framing X-ray detectors: Today’s brighter X-ray sources make time-resolved studies possible. For example, hybrid X-ray pixel detectors, initially developed for particle physics, are becoming fairly mature X-ray detectors, with considerable development in Europe. To truly enable time-resolved studies, higher frame rates and dynamic range are required, and smaller pixel sizes are desirable.
High-speed spectroscopic X-ray detectors: Improvements in the readout speed and energy resolution of X-ray detectors are essential to enable chemically sensitive microscopies. Advances would make it possible to take images with simultaneous spatial and chemical information.
Very high-energy-resolution X-ray detectors: The energy resolution of semiconductor detectors, while suitable for a wide range of applications, is far less than what can be achieved with X-ray optics. A direct detector that could rival the energy resolution of optics could dramatically improve the efficiency of a multitude of experiments, as experiments are often repeated at a number of different energies. Very high-energy-resolution detectors could make these experiments parallel, rather than serial.
Low-background, high-spatial-resolution neutron detectors: Low-background detectors would significantly improve experiments that probe excitations (phonons, spin excitations, rotation and diffusion in polymers and molecular substances, etc.) in condensed matter. Improved spatial resolution would greatly benefit radiography, tomography, phase-contrast imaging, and holography.
Improved acquisition and visualization tools: In the past, with the limited variety of slow detectors, it was straightforward to visualize data as it was being acquired (and adjust experimental conditions accordingly) to create a compact data set that the user could easily transport. As detector complexity and data rates explode, this becomes much more challenging. Several goals were identified as important for coping with the growing data volume from high-speed detectors:
• Facilitate better algorithm development, in particular algorithms that can minimize the quantity of data stored.
• Improve community-driven mechanisms for data-reduction protocols and enhance quantitative, interactive visualization tools.
• Develop and distribute community-developed, detector-specific simulation tools.
• Aim for parallelization to take advantage of high-performance analysis platforms.
Improved analysis workflows: Standardize the format of metadata that accompanies detector data and describes the experimental setup and conditions. Develop a standardized user interface and software framework for analysis and data management.
The diversity of detector improvements required is necessarily as broad as the range of scientific experimentation at BES facilities. This workshop identified a variety of avenues by which detector R&D can enable enhanced science at BES facilities.
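The appeal of 3He, noted at the top of this list, can be quantified with the back-of-envelope absorption estimate promised there: efficiency = 1 - exp(-n*sigma*d), with the capture cross section following the standard 1/v law anchored at roughly 5330 barns for 25.3 meV thermal neutrons. The tube pressure, diameter, and temperature below are assumed values chosen only for illustration.

```python
# Back-of-envelope efficiency of a 3He gas tube for the capture reaction
# quoted above: eta = 1 - exp(-n * sigma * d). The cross section follows the
# standard 1/v law, anchored at ~5330 barns for 25.3 meV thermal neutrons.
# Tube pressure, diameter, and temperature are assumed values.
import math

BARN_TO_M2 = 1e-28
SIGMA_THERMAL_BARN = 5330.0   # 3He(n,p)3H cross section at 25.3 meV
E_THERMAL_MEV = 25.3          # reference thermal neutron energy, meV

def capture_efficiency(energy_mev, pressure_atm=4.0, diameter_cm=2.5):
    # 1/v law: sigma scales as sqrt(E_thermal / E).
    sigma_m2 = SIGMA_THERMAL_BARN * BARN_TO_M2 * math.sqrt(E_THERMAL_MEV / energy_mev)
    # Ideal-gas number density at 295 K.
    n_per_m3 = pressure_atm * 101325.0 / (1.380649e-23 * 295.0)
    return 1.0 - math.exp(-n_per_m3 * sigma_m2 * diameter_cm * 1e-2)

for e_mev in (25.3, 100.0, 1000.0):  # thermal to epithermal neutrons
    print(f"E = {e_mev:6.1f} meV -> efficiency ~ {100 * capture_efficiency(e_mev):.0f}%")
```

The rapid fall-off of efficiency with neutron energy, together with the dwindling 3He supply, is what drives the search for scintillator-based replacements with comparable stopping power.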
The Research Directions listed above will be addressed by focused R&D and detector engineering, both of which require specialized infrastructure and skills. While the United States lags behind other countries in several areas of neutron and X-ray detector development, significant talent exists across the complex. A forum of technical experts, facilities management, and BES could provide further definition of these needs.
From Quanta to the Continuum: Opportunities for Mesoscale Science
JPG.jpg file (178KB) Report.pdf file (8.1MB) Hi-res.pdf file (30.3MB)
We are at a time of unprecedented challenge and opportunity. Our economy is in need of a jump-start, and our supply of clean energy needs to dramatically increase. Innovation through basic research is a key means for addressing both of these challenges. The great scientific advances of the last decade and more, especially at the nanoscale, are ripe for exploitation. Seizing this key opportunity requires mastering the mesoscale, where classical, quantum, and nanoscale science meet. It has become clear that—in many important areas—the functionality that is critical to macroscopic behavior begins to manifest itself not at the atomic or nanoscale but at the mesoscale, where defects, interfaces, and non-equilibrium structures are the norm. With our recently acquired knowledge of the rules of nature that govern the atomic and nanoscales, we are well positioned to unravel and control the complexity that determines functionality at the mesoscale. The reward for breakthroughs in our understanding at the mesoscale is the emergence of previously unrealized functionality. The present report explores the opportunity and defines the research agenda for mesoscale science—discovering, understanding, and controlling interactions among disparate systems and phenomena to reach the full potential of materials complexity and functionality. The ability to predict and control mesoscale phenomena and architectures is essential if atomic and molecular knowledge is to blossom into a next generation of technology opportunities, societal benefits, and scientific advances.
• Imagine the ability to manufacture at the mesoscale: that is, the directed assembly of mesoscale structures that possess unique functionality that yields faster, cheaper, higher-performing, and longer-lasting products, as well as products that have functionality that we have not yet imagined.
• Imagine the realization of biologically inspired complexity and functionality with inorganic earth-abundant materials to transform energy conversion, transmission, and storage.
This is the promise of mesoscale science. Mesoscale science and technology opportunities build on the enormous foundation of nanoscience that the scientific community has created over the last decade and continues to create. New features arise naturally in the transition to the mesoscale, including the emergence of collective behavior; the interaction of disparate electronic, mechanical, magnetic, and chemical phenomena; the appearance of defects, interfaces, and statistical variation; and the self-assembly of functional composite systems. The mesoscale represents a discovery laboratory for finding new science, a self-assembly foundry for creating new functional systems, and a design engine for new technologies.
The last half-century and especially the last decade have witnessed a remarkable drive to ever smaller scales, exposing the atomic, molecular, and nanoscale structures that anchor the macroscopic materials and phenomena we deal with every day. Given this knowledge and capability, we are now starting the climb up from the atomic and nanoscale to the greater complexity and wider horizons of the mesoscale. The constructionist path up from the atomic and nanoscale to the mesoscale holds a different kind of promise than the reductionist path down: it allows us to rearrange the nanoscale building blocks into new combinations, exploit the dynamics and kinetics of these new coupled interactions, and create qualitatively different mesoscale architectures and phenomena leading to new functionality and ultimately new technology. The reductionist journey to smaller length and time scales gave us sophisticated observational tools and intellectual understanding that we can now apply with great advantage to the wide opportunity of mesoscale science following a bottom-up approach. Realizing the mesoscale opportunity requires advances not only in our knowledge but also in our ability to observe, characterize, simulate, and ultimately control matter. Mastering mesoscale materials and phenomena requires the seamless integration of theory, modeling, and simulation with synthesis and characterization. The inherent complexity of mesoscale phenomena, often including many nanoscale structural or functional units, requires theory and simulation spanning multiple space and time scales. In mesoscale architectures the positions of individual atoms are often no longer relevant, requiring new simulation approaches beyond the density functional theory and molecular dynamics that are so successful at atomic scales. New organizing principles that describe emergent mesoscale phenomena arising from many coupled and competing degrees of freedom wait to be discovered and applied. Measurements that are dynamic, in situ, and multimodal are needed to capture the sequential phenomena of composite mesoscale materials. Finally, the ability to design and realize the complex materials we imagine will require qualitative advances in how we synthesize and fabricate materials and how we manage their metastability and degradation over time. We must move from serendipitous to directed discovery, and we must master the art of assembling structural and functional nanoscale units into larger architectures that create a higher level of complex functional systems. While the challenge of discovering, controlling, and manipulating complex mesoscale architectures and phenomena to realize new functionality is immense, success in the pursuit of these research directions will have outcomes with the potential to transform society. The body of this report outlines the need, the opportunities, the challenges, and the benefits of mastering mesoscale science.
Data and Communications in Basic Energy Sciences: Creating a Pathway for Scientific Discovery
JPG.jpg file (470KB) Report.pdf file (1.0MB)
This report is based on the Department of Energy (DOE) Workshop on “Data and Communications in Basic Energy Sciences: Creating a Pathway for Scientific Discovery” that was held at the Bethesda Marriott in Maryland on October 24-25, 2011. The workshop brought together leading researchers from the Basic Energy Sciences (BES) facilities and Advanced Scientific Computing Research (ASCR).
The workshop was co-sponsored by these two Offices to identify opportunities and needs for data analysis, ownership, storage, mining, provenance, and transfer at light sources, neutron sources, microscopy centers, and other facilities. Their charge was to identify current and anticipated issues in the acquisition, analysis, communication, and storage of experimental data that could impact the progress of scientific discovery; to ascertain what knowledge, methods, and tools are needed to mitigate present and projected shortcomings; and to create the foundation for information exchanges and collaboration between ASCR- and BES-supported researchers and facilities. The workshop was organized in the context of the impending data tsunami that will be produced by DOE’s BES facilities. Current facilities, like SLAC National Accelerator Laboratory’s Linac Coherent Light Source, can produce up to 18 terabytes (TB) per day, while upgraded detectors at Lawrence Berkeley National Laboratory’s Advanced Light Source will generate ~10 TB per hour. The expectation is that these rates will increase by over an order of magnitude in the coming decade. The urgency to develop new strategies and methods in order to stay ahead of this deluge and extract the most science from these facilities was recognized by all. The four focus areas addressed in this workshop were:
• Workflow Management - Experiment to Science: Identifying and managing the data path from experiment to publication.
• Theory and Algorithms: Recognizing the need for new tools for computation at scale, supporting large data sets and realistic theoretical models.
• Visualization and Analysis: Supporting near-real-time feedback for experiment optimization and new ways to extract and communicate critical information from large data sets.
• Data Processing and Management: Outlining needs in computational and communication approaches and infrastructure needed to handle unprecedented data volume and information content.
It should be noted that almost all participants recognized that there were unlikely to be any turn-key solutions available due to the unique, diverse nature of the BES community, where research at adjacent beamlines at a given light source facility often spans everything from biology to materials science to chemistry using scattering, imaging, and/or spectroscopy. However, it was also noted that advances supported by other programs in data research, methodologies, and tool development could be implemented on reasonable time scales with modest effort. Adapting available standard file formats, robust workflows, and in situ analysis tools for user facility needs could pay long-term dividends. Workshop participants assessed current requirements as well as future challenges and made the following recommendations in order to achieve the ultimate goal of enabling transformative science in current and future BES facilities: Integrate theory and analysis components seamlessly within the experimental workflow, and develop new algorithms for data analysis based on common data formats and toolsets. Move analysis closer to the experiment, to enable real-time (in situ) streaming capabilities, live visualization of the experiment, and an increase in overall experimental efficiency. Match data management access and capabilities with advancements in detectors and sources.
This workshop report examines and reviews the status of several BES facilities and highlights the successes and shortcomings of the current data and communication pathways for scientific discovery. It then ascertains what methods and tools are needed to mitigate present and projected data bottlenecks to science over the next 10 years. The goal of this report is to create the foundation for information exchanges and collaborations among ASCR and BES supported researchers, the BES scientific user facilities, and ASCR computing and networking facilities. To jumpstart these activities, there was a strong desire to see a joint effort between ASCR and BES, along the lines of the highly successful Scientific Discovery through Advanced Computing (SciDAC) program in which integrated teams of engineers, scientists and computer scientists were engaged, to tackle a complete end-to-end workflow solution at one or more beamlines and to ascertain what challenges will need to be addressed in order to handle future increases in data.

Research Needs and Impacts in Predictive Simulation for Internal Combustion Engines (PreSICE)

This report is based on a SC/EERE Workshop to Identify Research Needs and Impacts in Predictive Simulation for Internal Combustion Engines (PreSICE), held March 3, 2011, to determine strategic focus areas that will accelerate innovation in engine design to meet national goals in transportation efficiency. The U.S. has reached a pivotal moment when pressures of energy security, climate change, and economic competitiveness converge. Oil prices remain volatile and have exceeded $100 per barrel twice in five years. At these prices, the U.S. spends $1 billion per day on imported oil to meet our energy demands. Because the transportation sector accounts for two-thirds of our petroleum use, energy security is deeply entangled with our transportation needs. At the same time, transportation produces one-quarter of the nation’s carbon dioxide output. Increasing the efficiency of internal combustion engines is a technologically proven and cost-effective approach to dramatically improving the fuel economy of the nation’s fleet of vehicles in the near- to mid-term, with the corresponding benefits of reducing our dependence on foreign oil and reducing carbon emissions. Because of their relatively low cost, high performance, and ability to utilize renewable fuels, internal combustion engines—including those in hybrid vehicles—will continue to be critical to our transportation infrastructure for decades. Achievable advances in engine technology can improve the fuel economy of automobiles by over 50% and trucks by over 30%. Achieving these goals will require the transportation sector to compress its product development cycle for cleaner, more efficient engine technologies by 50% while simultaneously exploring innovative design space. Concurrently, fuels will also be evolving, adding another layer of complexity and further highlighting the need for efficient product development cycles. Current design processes, using “build and test” prototype engineering, will not suffice. Current market penetration of new engine technologies is simply too slow—it must be dramatically accelerated. These challenges present a unique opportunity to marshal U.S.
leadership in science-based simulation to develop predictive computational design tools for use by the transportation industry. The use of predictive simulation tools for enhancing combustion engine performance will shrink engine development timescales, accelerate time to market, and reduce development costs, while ensuring the timely achievement of energy security and emissions targets and enhancing U.S. industrial competitiveness. In 2007 Cummins achieved a milestone in engine design by bringing a diesel engine to market solely with computer modeling and analysis tools. The only testing was after the fact to confirm performance. Cummins achieved a reduction in development time and cost. As important, they realized a more robust design, improved fuel economy, and met all environmental and customer constraints. This important first step demonstrates the potential for computational engine design. But, the daunting complexity of engine combustion and the revolutionary increases in efficiency needed require the development of simulation codes and computation platforms far more advanced than those available today. Based on these needs, a Workshop to Identify Research Needs and Impacts in Predictive Simulation for Internal Combustion Engines (PreSICE) convened over 60 U.S. leaders in the engine combustion field from industry, academia, and national laboratories to focus on two critical areas of advanced simulation, as identified by the U.S. automotive and engine industries. First, modern engines require precise control of the injection of a broad variety of fuels that is far more subtle than achievable to date and that can be obtained only through predictive modeling and simulation. Second, the simulation, understanding, and control of these stochastic in-cylinder combustion processes lie on the critical path to realizing more efficient engines with greater power density. Fuel sprays set the initial conditions for combustion in essentially all future transportation engines; yet today designers primarily use empirical methods that limit the efficiency achievable. Three primary spray topics were identified as focus areas in the workshop: 1. The fuel delivery system, which includes fuel manifolds and internal injector flow, 2. The multi-phase fuel–air mixing in the combustion chamber of the engine, and 3. The heat transfer and fluid interactions with cylinder walls. Current understanding and modeling capability of stochastic processes in engines remains limited and prevents designers from achieving significantly higher fuel economy. To improve this situation, the workshop participants identified three focus areas for stochastic processes: 1. Improve fundamental understanding that will help to establish and characterize the physical causes of stochastic events, 2. Develop physics-based simulation models that are accurate and sensitive enough to capture performance-limiting variability, and 3. Quantify and manage uncertainty in model parameters and boundary conditions. Improved models and understanding in these areas will allow designers to develop engines with reduced design margins and that operate reliably in more efficient regimes. All of these areas require improved basic understanding, high-fidelity model development, and rigorous model validation. These advances will greatly reduce the uncertainties in current models and improve understanding of sprays and fuel–air mixture preparation that limit the investigation and development of advanced combustion technologies. 
The two strategic focus areas have distinctive characteristics but are inherently coupled. Coordinated activities in basic experiments, fundamental simulations, and engineering-level model development and validation can be used to successfully address all of the topics identified in the PreSICE workshop. The outcome will be: 1. New and deeper understanding of the relevant fundamental physical and chemical processes in advanced combustion technologies, 2. Implementation of this understanding into models and simulation tools appropriate for both exploration and design, and 3. Sufficient validation with uncertainty quantification to provide confidence in the simulation results. These outcomes will provide the design tools for industry to reduce development time by up to 30% and improve engine efficiencies by 30% to 50%. The improved efficiencies applied to the national mix of transportation applications have the potential to save over 5 million barrels of oil per day, a current cost savings of $500 million per day.

Report of the Basic Energy Sciences Workshop on Compact Light Sources

This report is based on a BES Workshop on Compact Light Sources, held May 11-12, 2010, to evaluate the advantages and disadvantages of compact light source approaches and compare their performance to third-generation storage rings and free-electron lasers. The workshop examined the state of the technology for compact light sources and their expected progress, and evaluated the cost efficiency, user access, availability, and reliability of such sources. Working groups evaluated the advantages and disadvantages of Compact Light Source (CLS) approaches and compared their performance to third-generation storage rings and free-electron lasers (FELs). The primary aspects of comparison were 1) cost effectiveness, 2) technical availability v. time frame, and 3) machine reliability and availability for user access. Five categories of potential sources were analyzed: 1) inverse Compton scattering (ICS) sources, 2) mini storage rings, 3) plasma sources, 4) sources using plasma-based accelerators, and 5) laser high harmonic generation (HHG) sources. Compact light sources are not a substitute for large synchrotron and FEL light sources, which typically also incorporate extensive user support facilities. Rather, they offer attractive, complementary capabilities at a small fraction of the cost and size of large national user facilities. In the far term they may offer the potential for a new paradigm of future national user facility. In the course of the workshop, we identified overarching R&D topics over the next five years that would enhance the performance potential of both compact and large-scale sources: • Development of infrared (IR) laser systems delivering kW-class average power with femtosecond pulses at kHz repetition rates. These have application to ICS sources, plasma sources, and HHG sources. • Development of laser storage cavities for 10-mJ picosecond and femtosecond pulses focused to micron beam sizes. • Development of high-brightness, high-repetition-rate electron sources. • Development of continuous wave (cw) superconducting rf linacs operating at 4 K, which, while not essential, would reduce capital and operating costs.
Basic Research Needs for Carbon Capture: Beyond 2020

This report is based on a SC/FE workshop on Carbon Capture: Beyond 2020, held March 4–5, 2010, to assess the basic research needed to address the current technical bottlenecks in carbon capture processes and to identify key research priority directions that will provide the foundations for future carbon capture technologies.

The problem of thermodynamically efficient and scalable carbon capture stands as one of the greatest challenges for modern energy researchers. The vast majority of US and global energy use derives from fossil fuels, the combustion of which results in the emission of carbon dioxide into the atmosphere. These anthropogenic emissions are now altering the climate. Although many alternatives to combustion are being considered, the fact is that combustion will remain a principal component of the global energy system for decades to come. Today’s carbon capture technologies are expensive, cumbersome, and energy intensive. If scientists could develop practical and cost-effective methods to capture carbon, those methods would at once alter the future of the largest industry in the world and provide a technical solution to one of the most vexing problems facing humanity.

The carbon capture problem is a true grand challenge for today’s scientists. Postcombustion CO2 capture requires major new developments in disciplines spanning fundamental theoretical and experimental physical chemistry, materials design and synthesis, and chemical engineering. To start with, the CO2 molecule itself is thermodynamically stable, and binding to it requires a distortion of the molecule away from its linear and symmetric arrangement. This binding of the gas molecule cannot be too strong, however; the sheer quantity of CO2 that must be captured ultimately dictates that the capture medium must be recycled over and over. Hence the CO2, once bound, must be released with relatively little energy input. Further, the CO2 must be rapidly and selectively pulled out of a mixture that contains many other gaseous components. The related processes of precombustion capture and oxycombustion pose similar challenges. It is this nexus of high-speed capture with high selectivity and minimal energy loss that makes this a true grand challenge problem, far beyond any of today’s artificial molecular manipulation technologies, and one whose solution will drive the advancement of molecular science to a new level of sophistication. We have only to look to nature, where such chemical separations are performed routinely, to imagine what may be achieved. The hemoglobin molecule transports oxygen in the blood rapidly and selectively and releases it with minimal energy penalty. Despite our improved understanding of how this biological system works, we have yet to engineer a molecular capture system that uses the fundamental cooperativity process that lies at the heart of the functionality of hemoglobin. While such biological examples provide inspiration, we also note that newly developed theoretical and computational capabilities; the synthesis of new molecules, materials, and membranes; and the remarkable advances in characterization techniques enabled by the Department of Energy’s measurement facilities all create a favorable environment for a major new basic research push to solve the carbon capture problem within the next decade.
The Department of Energy has established a comprehensive strategy to meet the nation’s needs in the carbon capture arena. This framework has been developed following a series of workshops that have engaged all the critical stakeholder communities. The strategy that has emerged is based upon a tiered approach, with Fossil Energy taking the lead in a series of applied research programs that will test and extend our current systems. ARPA-E (Advanced Research Projects Agency–Energy) is supporting potential breakthroughs based upon innovative proposals to rapidly harness today’s technical capabilities in ways not previously considered. These needs and plans have been well summarized in the report from a recent workshop—Carbon Capture 2020, held October 5 and 6, 2009—focused on near-term strategies for carbon capture improvements (proceedings/09/CC2020/pdfs/Richards_Summary.pdf). Yet the fact remains that when the carbon capture problem is looked at closely, we see that today’s technologies fall far short of making carbon capture an economically viable process. This situation reinforces the need for a parallel, intensive use-inspired basic research effort to address the problem. This was the overwhelming conclusion of a recent workshop—Carbon Capture: Beyond 2020, held March 4 and 5, 2010—and is the subject of the present report.

To prepare for the second workshop, an in-depth assessment of current technologies for carbon capture was conducted; the result of this study was a factual document, Technology and Applied R&D Needs for Carbon Capture: Beyond 2020. This document, which was prepared by experts in current carbon capture processes, also summarized the technological gaps or bottlenecks that limit currently available carbon capture technologies. The report considered the separation processes needed for all three CO2 emission reduction strategies—postcombustion, precombustion, and oxycombustion—and assessed three primary separation technologies based on liquid absorption, membranes, and solid adsorption. The workshop “Carbon Capture: Beyond 2020” convened approximately 80 attendees from universities, national laboratories, and industry to assess the basic research needed to address the current technical bottlenecks in carbon capture processes and to identify key research priority directions that will provide the foundations for future carbon capture technologies. The workshop began with a plenary session including speakers who summarized the extent of the carbon capture challenge, the various current approaches, and the limitations of these technologies. Workshop attendees were then given the charge to identify high-priority basic research directions that could provide revolutionary new concepts to form the basis for separation technologies in 2020 and beyond. The participants were divided into three major panels corresponding to different approaches for separating gases to reduce carbon emissions—liquid absorption, solid adsorption, and membrane separations. Two other panels were instructed to attend each of these three technology panels to assess crosscutting issues relevant to characterization and computation. At the end of the workshop, a final plenary session was convened to summarize the most critical research needs identified by the workshop attendees in each of the three major technical panels and from the two cross-cutting panels.
The reports of the three technical panels included a set of high-level Priority Research Directions meant to serve as inspiration to researchers in multiple disciplines—materials science, chemistry, biology, computational science, engineering, and others—to address the huge scientific challenges facing this nation and the world as we seek technologies for large-scale carbon capture beyond 2020. These Priority Research Directions were clustered around three main areas, all tightly coupled: • Understand and control the dynamic atomic-level and molecular-level interactions of the targeted species with the separation media. • Discover and design new materials that incorporate designed structures and functionalities tuned for optimum separation properties. • Tailor capture/release processes with alternative driving forces, taking advantage of a new generation of materials. In each of the technical panels, the participants identified two major crosscutting research themes. The first was the development of new analytical tools that can characterize materials structure and molecular processes across broad spatial and temporal scales and under realistic conditions that mimic those encountered in actual separation processes. Such tools are needed to examine interfaces and thin films at the atomic and molecular levels, achieving an atomic/molecular-scale understanding of gas–host structures, kinetics, and dynamics, and understanding and control of nanoscale synthesis in multiple dimensions. A second major crosscutting theme was the development of new computational tools for theory, modeling, and simulation of separation processes. Computational techniques can be used to elucidate mechanisms responsible for observed separations, predict new desired features for advanced separations materials, and guide future experiments, thus complementing synthesis and characterization efforts. These two crosscut areas underscored the fact that the challenge for future carbon capture technologies will be met only with multidisciplinary teams of scientists and engineers. In addition, it was noted that success in this fundamental research area must be closely coupled with successful applied research to ensure the continuing assessment and maturation of new technologies as they undergo scale-up and deployment. Carbon capture is a very rich scientific problem, replete with opportunity for basic researchers to advance the frontiers of science as they engage with one of the most important technical challenges of our times. This workshop report outlines an ambitious agenda for addressing the very difficult problem of carbon capture by creating foundational new basic science. This new science will in turn pave the way for many additional advances across a broad range of scientific disciplines and technology sectors.

Computational Materials Science and Chemistry for Innovation

This report is based on a SC Workshop on Computational Materials Science and Chemistry for Innovation on July 26-27, 2010, to assess the potential of state-of-the-art computer simulations to accelerate understanding and discovery in materials science and chemistry, with a focus on potential impacts in energy technologies and innovation. The urgent demand for new energy technologies has greatly exceeded the capabilities of today's materials and chemical processes.
To convert sunlight to fuel, efficiently store energy, or enable a new generation of energy production and utilization technologies requires the development of new materials and processes of unprecedented functionality and performance. New materials and processes are critical pacing elements for progress in advanced energy systems and virtually all industrial technologies. Over the past two decades, the United States has developed and deployed the world's most powerful collection of tools for the synthesis, processing, characterization, and simulation and modeling of materials and chemical systems at the nanoscale, dimensions of a few atoms to a few hundred atoms across. These tools, which include world-leading x-ray and neutron sources, nanoscale science facilities, and high-performance computers, provide an unprecedented view of the atomic-scale structure and dynamics of materials and the molecular-scale basis of chemical processes. For the first time in history, we are able to synthesize, characterize, and model materials and chemical behavior at the length scale where this behavior is controlled. This ability is transformational for the discovery process and, as a result, confers a significant competitive advantage. Perhaps the most spectacular increase in capability has been demonstrated in high performance computing. Over the past decade, computational power has increased by a factor of a million due to advances in hardware and software. This rate of improvement, which shows no sign of abating, has enabled the development of computer simulations and models of unprecedented fidelity. We are at the threshold of a new era where the integrated synthesis, characterization, and modeling of complex materials and chemical processes will transform our ability to understand and design new materials and chemistries with predictive power. In turn, this predictive capability will transform technological innovation by accelerating the development and deployment of new materials and processes in products and manufacturing. Harnessing the potential of computational science and engineering for the discovery and development of materials and chemical processes is essential to maintaining leadership in these foundational fields that underpin energy technologies and industrial competitiveness. Capitalizing on the opportunities presented by simulation-based engineering and science in materials and chemistry will require an integration of experimental capabilities with theoretical and computational modeling; the development of a robust and sustainable infrastructure to support the development and deployment of advanced computational models; and the assembly of a community of scientists and engineers to implement this integration and infrastructure. This community must extend to industry, where incorporating predictive materials science and chemistry into design tools can accelerate the product development cycle and drive economic competitiveness. The confluence of new theories, new materials synthesis capabilities, and new computer platforms has created an unprecedented opportunity to implement a "materials-by-design" paradigm with wide-ranging benefits in technological innovation and scientific discovery. The Workshop on Computational Materials Science and Chemistry for Innovation was convened in Bethesda, Maryland, on July 26-27, 2010. 
Sponsored by the Department of Energy (DOE) Offices of Advanced Scientific Computing Research and Basic Energy Sciences, the workshop brought together 160 experts in materials science, chemistry, and computational science representing more than 65 universities, laboratories, and industries, and four agencies. The workshop examined seven foundational challenge areas in materials science and chemistry: materials for extreme conditions, self-assembly, light harvesting, chemical reactions, designer fluids, thin films and interfaces, and electronic structure. Each of these challenge areas is critical to the development of advanced energy systems, and each can be accelerated by the integrated application of predictive capability with theory and experiment. The workshop concluded that emerging capabilities in predictive modeling and simulation have the potential to revolutionize the development of new materials and chemical processes. Coupled with world-leading materials characterization and nanoscale science facilities, this predictive capability provides the foundation for an innovation ecosystem that can accelerate the discovery, development, and deployment of new technologies, including advanced energy systems. Delivering on the promise of this innovation ecosystem requires the following: • Integration of synthesis, processing, characterization, theory, and simulation and modeling. Many of the newly established Energy Frontier Research Centers and Energy Hubs are exploiting this integration. • Achieving/strengthening predictive capability in foundational challenge areas. Predictive capability in the seven foundational challenge areas described in this report is critical to the development of advanced energy technologies. • Developing validated computational approaches that span vast differences in time and length scales. This fundamental computational challenge crosscuts all of the foundational challenge areas. Similarly challenging is coupling of analytical data from multiple instruments and techniques that are required to link these length and time scales. • Experimental validation and quantification of uncertainty in simulation and modeling. Uncertainty quantification becomes increasingly challenging as simulations become more complex. • Robust and sustainable computational infrastructure, including software and applications. For modeling and simulation, software equals infrastructure. To validate the computational tools, software is critical infrastructure that effectively translates huge arrays of experimental data into useful scientific understanding. An integrated approach for managing this infrastructure is essential. • Efficient transfer and incorporation of simulation-based engineering and science in industry. Strategies for bridging the gap between research and industrial applications and for widespread industry adoption of integrated computational materials engineering are needed. 
New Science for a Secure and Sustainable Energy Future

This Basic Energy Sciences Advisory Committee (BESAC) report summarizes a 2008 study by the Subcommittee on Facing our Energy Challenges in a New Era of Science to: (1) assimilate the scientific research directions that emerged from the BES Basic Research Needs workshop reports into a comprehensive set of science themes, and (2) identify the new implementation strategies and tools required to accomplish the science.

The United States faces a three-fold energy challenge: • Energy Independence. U.S. energy use exceeds domestic production capacity by the equivalent of 16 million barrels of oil per day, a deficit made up primarily by importing oil and natural gas. This deficit has nearly tripled since 1970. • Environmental Sustainability. The United States must reduce its emissions of carbon dioxide and other greenhouse gases that accelerate climate change. The primary source of these emissions is combustion of fossil fuel, comprising about 85% of U.S. national energy supply. • Economic Opportunity. The U.S. economy is threatened by the high cost of imported energy—as much as $700 billion per year at recent peak prices. We need to create next-generation clean energy technologies that do not depend on imported oil. U.S. leadership would not only provide solutions at home but also create global economic opportunity.

The magnitude of the challenge is so immense that existing energy approaches—even with improvements from advanced engineering and improved technology based on known concepts—will not be enough to secure our energy future. Instead, meeting the challenge will require new technologies for producing, storing and using energy with performance levels far beyond what is now possible. Such technologies spring from scientific breakthroughs in new materials and chemical processes that govern the transfer of energy between light, electricity and chemical fuels. Integrating a major national mobilization of basic energy research—to create needed breakthroughs—with appropriate investments in technology and engineering to accelerate bringing new energy solutions to market will be required to meet our three-fold energy challenge. This report identifies three strategic goals for which transformational scientific breakthroughs are urgently needed: • Making fuels from sunlight • Generating electricity without carbon dioxide emissions • Revolutionizing energy efficiency and use

Meeting these goals implies dramatic changes in our technologies for producing and consuming energy. We will manufacture chemical fuel from sunlight, water and carbon dioxide instead of extracting it from the earth. We will generate electricity from sunlight, wind, and high-efficiency clean coal and advanced nuclear plants instead of conventional coal and nuclear technology. Our cars and light trucks will be driven by efficient electric motors powered by a new generation of batteries and fuel cells. These new, advanced energy technologies, however, require new materials and control of chemical change that operate at dramatically higher levels of functionality and performance.
Converting sunlight to electricity with double or triple today's efficiency, storing electricity in batteries or supercapacitors at ten times today's densities, or operating coal-fired and nuclear power plants at far higher temperatures and efficiencies requires materials with atom-by-atom design and control, tailored nanoscale structures where every atom has a specific function. Such high-performing materials would have complexity far higher than today's energy materials, approaching that of biological cells and proteins. They would be able to seamlessly control the ebb and flow of energy between chemical bonds, electrons, and light, and would be the foundation of the alternative energy technologies of the future. Creating these advanced materials and chemical processes requires characterizing the structure and dynamics of matter at levels beyond our present reach. The physical and chemical phenomena that capture, store and release energy take place at the nanoscale, often involving subtle changes in single electrons or atoms, on timescales faster than we can now resolve. Penetrating the secrets of energy transformation between light, chemical bonds, and electrons requires new observational tools capable of probing the still-hidden realms of the ultrasmall and ultrafast. Observing the dynamics of energy flow in electronic and molecular systems at these resolutions is necessary if we are to learn to control their behavior.

Fundamental understanding of complex materials and chemical change based on theory, computation and advanced simulation is essential to creating new energy technologies. A working transistor was not developed until the theory of electronic behavior on semiconductor surfaces was formulated. In superconductivity, sweeping changes occurred in the field when a microscopic theory of the mechanism of superconductivity was finally developed. As Nobel Laureate Philip Anderson has written, more is different: at each level of complexity in science, new laws need to be discovered for breakthrough progress to be made. Without such breakthroughs, future technologies will not be realized. The digital revolution was only made possible by transistors—try to imagine the information age with vacuum tubes. Nearly as ubiquitous are lasers, the basis for the modern-day read heads used in CD and DVD players and in bar code scanners. Lasers could not be developed until the quantum theory of light emission by materials was understood. These advances—high-performance materials enabling precise control of chemical change, characterization tools probing the ultrafast and the ultrasmall, and new understanding based on advanced theory and simulation—are the agents for moving beyond incremental improvements and creating a truly secure and sustainable energy future. Given these tools, we can imagine, and achieve, revolutionary new energy systems.

Science for Energy Technology: Strengthening the Link between Basic Research and Industry

This Basic Energy Sciences Advisory Committee (BESAC) report summarizes the results of a Workshop on Science for Energy Technology on January 18-21, 2010, to identify the scientific priority research directions needed to address the roadblocks and accelerate the innovation of clean energy technologies.
The nation faces two severe challenges that will determine our prosperity for decades to come: assuring clean, secure, and sustainable energy to power our world, and establishing a new foundation for enduring economic and jobs growth. These challenges are linked: the global demand for clean sustainable energy is an unprecedented economic opportunity for creating jobs and exporting energy technology to the developing and developed world. But achieving the tremendous potential of clean energy technology is not easy. In contrast to traditional fossil fuel-based technologies, clean energy technologies are in their infancy, operating far below their potential, with many scientific and technological challenges to overcome. Industry is ultimately the agent for commercializing clean energy technology and for reestablishing the foundation for our economic and jobs growth. For industry to succeed in these challenges, it must overcome many roadblocks and continuously innovate new generations of renewable, sustainable, and low-carbon energy technologies such as solar energy, carbon sequestration, nuclear energy, electricity delivery and efficiency, solid state lighting, batteries and biofuels.

The roadblocks to higher-performing clean energy technology are not just challenges of engineering design; they also reflect limits in scientific understanding. Innovation relies on contributions from basic research to bridge major gaps in our understanding of the phenomena that limit efficiency, performance, or lifetime of the materials or chemistries of these sustainable energy technologies. Thus, efforts aimed at understanding the scientific issues behind performance limitations can have a real and immediate impact on cost, reliability, and performance of technology, and ultimately a transformative impact on our economy. With its broad research base and unique scientific user facilities, the DOE Office of Basic Energy Sciences (BES) is ideally positioned to address these needs. BES has laid out a broad view of the basic and grand challenge science needs for the development of future clean energy technologies in a series of comprehensive "Basic Research Needs" workshops and reports (see inside front cover) and has structured its programs and launched initiatives to address the challenges. The basic science needs of industry, however, are often more narrowly focused on solving specific nearer-term roadblocks to progress in existing and emerging clean energy technologies. To better define these issues and identify specific barriers to progress, the Basic Energy Sciences Advisory Committee (BESAC) sponsored the Workshop on Science for Energy Technology, January 18-21, 2010. A wide cross-section of scientists and engineers from industry, universities, and national laboratories delineated the basic science Priority Research Directions most urgently needed to address the roadblocks and accelerate the innovation of clean energy technologies. These Priority Research Directions address the scientific understanding underlying performance limitations in existing but still immature technologies. Resolving these performance limitations can dramatically improve the commercial penetration of clean energy technologies. A key conclusion of the Workshop is that in addition to the decadal challenges defined in the "Basic Research Needs" reports, specific research directions addressing industry roadblocks are ripe for further emphasis.
Another key conclusion is that identifying and focusing on specific scientific challenges and translating the results to industry requires more direct feedback, communication, and collaboration between industrial and BES-supported scientists. BES-supported scientists need to be better informed of the detailed scientific issues facing industry, and industry needs to be more aware of BES capabilities and how to utilize them. An important capability is the suite of BES scientific user facilities, which are seen as playing a key role in advancing the science of clean energy technology. Working together, industry and BES-supported scientists can achieve the required understanding and control of the performance limitations of clean energy technology, accelerate innovation in its development, and help build the workforce needed to implement the growing clean energy economy.

Next-Generation Photon Sources for Grand Challenges in Science and Energy

This Basic Energy Sciences Advisory Committee (BESAC) report summarizes the results of an October 2008 Photon Workshop of the Subcommittee on Facing our Energy Challenges in a New Era of Science to identify connections between major new research opportunities and the capabilities of the next generation of light sources. Particular emphasis was on energy-related research.

The next generation of sustainable energy technologies will revolve around transformational new materials and chemical processes that convert energy efficiently among photons, electrons, and chemical bonds. New materials that tap sunlight, store electricity, or make fuel from splitting water or recycling carbon dioxide will need to be much smarter and more functional than today's commodity-based energy materials. To control and catalyze chemical reactions or to convert a solar photon to an electron requires coordination of multiple steps, each carried out by customized materials and interfaces with designed nanoscale structures. Such advanced materials are not found in nature the way we find fossil fuels; they must be designed and fabricated to exacting standards, using principles revealed by basic science. Success in this endeavor requires probing, and ultimately controlling, the interactions among photons, electrons, and chemical bonds on their natural length and time scales. Control science—the application of knowledge at the frontier of science to control phenomena and create new functionality—realized through the next generation of ultraviolet and X-ray photon sources, has the potential to be transformational for the life sciences and information technology, as well as for sustainable energy. Current synchrotron-based light sources have revolutionized macromolecular crystallography, but the insights thus obtained are largely in the domain of static structure. The opportunity is for next-generation light sources to extend these insights to the control of dynamic phenomena through ultrafast pump-probe experiments, time-resolved coherent imaging, and high-resolution spectroscopic imaging. Similarly, control of spin and charge degrees of freedom in complex functional materials has the potential not only to reveal the fundamental mechanisms of high-temperature superconductivity, but also to lay the foundation for future generations of information science.
This report identifies two aspects of energy science in which next-generation ultraviolet and X-ray light sources will have the deepest and broadest impact: • The temporal evolution of electrons, spins, atoms, and chemical reactions, down to the femtosecond time scale. • Spectroscopic and structural imaging of nano objects (or nanoscale regions of inhomogeneous materials) with nanometer spatial resolution and ultimate spectral resolution. The dual advances of temporal and spatial resolution promised by fourth-generation light sources ideally match the challenges of control science. Femtosecond time resolution has opened completely new territory where atomic motion can be followed in real time and electronic excitations and decay processes can be followed over time. Coherent imaging with short-wavelength radiation will make it possible to access the nanometer length scale, where intrinsic quantum behavior becomes dominant. Performing spectroscopy on individual nanometer-scale objects rather than on conglomerates will eliminate the blurring of the energy levels induced by particle size and shape distributions and reveal the energetics of single functional units. Energy resolution limited only by the uncertainty relation is enabled by these advances. Current storage-ring-based light sources and their incremental enhancements cannot meet the need for femtosecond time resolution, nanometer spatial resolution, intrinsic energy resolution, full coherence over energy ranges up to hard X-rays, and peak brilliance required to enable the new science outlined in this report. In fact, the new, unexplored territory is so expansive that no single currently imagined light source technology can fulfill the whole potential. Both technological and economic challenges require resolution as we move forward. For example, femtosecond time resolution and high peak brilliance are required for following chemical reactions in real time, but lower peak brilliance and high repetition rate are needed to avoid radiation damage in high-resolution spatial imaging and to avoid space-charge broadening in photoelectron spectroscopy and microscopy. But light sources alone are not enough. The photons produced by next-generation light sources must be measured by state-of-the-art experiments installed at fully equipped end stations. Sophisticated detectors with unprecedented spatial, temporal, and spectral resolution must be designed and created. The theory of ultrafast phenomena that have never before been observed must be developed and implemented. Enormous data sets of diffracted signals in reciprocal space and across wide energy ranges must be collected and analyzed in real time so that they can guide the ongoing experiments. These experimental challenges—end stations, detectors, sophisticated experiments, theory, and data handling—must be planned and provided for as part of the photon source. Furthermore, the materials and chemical processes to be studied, often in situ, must be synthesized and developed with equal care. These are the primary factors determining the scientific and technological return on the photon source investment. Of equal or greater concern is the need for interdisciplinary platforms to solve the grand challenges of sustainable energy, climate change, information technology, biological complexity, and medicine. No longer are these challenges confined to one measurement or one scientific discipline. 
Fundamental problems in correlated electron materials, where charge, spin, and lattice modes interact strongly, require experiments in electron, neutron, and X-ray scattering that must be coordinated across platforms and user facilities and that integrate synthesis and theory as well. The model of users applying for one-time access to single-user facilities does not promote the coordinated, interdisciplinary approach needed to solve today's grand challenge problems. Next-generation light sources and other user facilities must learn to accommodate the interdisciplinary, cross-platform needs of modern grand challenge science. Only through the development of such future sources, appropriately integrated with advanced end stations and detectors and closely coupled with broader synthesis, measurement, theory, and modeling tools, can we meet the demands of a New Era of Science.

Directing Matter and Energy: Five Challenges for Science and the Imagination

This Basic Energy Sciences Advisory Committee (BESAC) Grand Challenges report identifies the most important scientific questions and science-driven technical challenges facing BES and describes the importance of these challenges to advances in disciplinary science, to technology development, and to energy and other societal needs. The report originated from a January 25, 2005, request from the Office of Science and is the product of numerous BESAC and Grand Challenges Subcommittee meetings and conferences in 2006-2007.

It is frequently said that any sufficiently advanced technology is indistinguishable from magic. Modern science stands at the beginning of what might seem by today's standards to be an almost magical leap forward in our understanding and control of matter, energy, and information at the molecular and atomic levels. Atoms—and the molecules they form through the sharing or exchanging of electrons—are the building blocks of the biological and non-biological materials that make up the world around us. In the 20th century, scientists continually improved their ability to observe and understand the interactions among atoms and molecules that determine material properties and processes. Now, scientists are positioned to begin directing those interactions and controlling the outcomes on a molecule-by-molecule and atom-by-atom basis, or even at the level of electrons. Long the staple of science-fiction novels and films, the ability to direct and control matter at the quantum, atomic, and molecular levels creates enormous opportunities across a wide spectrum of critical technologies. This ability will help us meet some of humanity's greatest needs, including the need for abundant, clean, and cheap energy. However, generating, storing, and distributing adequate and sustainable energy to the nation and the world will require a sea change in our ability to control matter and energy. One of the most spectacular technological advances in the 20th century took place in the field of information, as computers and microchips became ubiquitous in our society. Vacuum tubes were replaced with transistors and, in accordance with Moore's Law (named for Intel co-founder Gordon Moore), the number of transistors on a microchip has doubled approximately every two years for the past two decades.
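A quick back-of-the-envelope check of the quoted rate (this arithmetic is illustrative and is not taken from the report): doubling approximately every two years for two decades amounts to about ten doublings, $2^{20/2} = 2^{10} \approx 10^3$, that is, roughly a thousand-fold growth in transistor count over the period.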
However, if the time comes when integrated circuits can be fabricated at the molecular or nanoscale level, the limits of Moore's Law will be far surpassed. A supercomputer based on nanochips would comfortably fit in the palm of your hand and use less electricity than a cottage. All the information stored in the Library of Congress could be contained in a memory the size of a sugar cube. Ultimately, if computations can be carried out at the atomic or sub-nanoscale levels, today's most powerful microtechnology will seem as antiquated and slow as an abacus. For the future, imagine a clean, cheap, and virtually unlimited supply of electrical power from solar-energy systems modeled on the photosynthetic processes utilized by green plants, and power lines that could transmit this electricity from the deserts of the Southwest to the Eastern Seaboard at nearly 100-percent efficiency. Imagine information and communications systems based on light rather than electrons that could predict when and where hurricanes make landfall, along with self-repairing materials that could survive those hurricanes. Imagine synthetic materials fully compatible and able to communicate with biological materials. This is speculative to be sure, but not so very far beyond the scope of possibilities. Acquiring the ability to direct and control matter all the way down to molecular, atomic, and electronic levels will require fundamental new knowledge in several critical areas. This report was commissioned to define those knowledge areas and the opportunities that lie beyond. Five interconnected Grand Challenges that will pave the way to a science of control are identified in the regime of science roughly defined by the Basic Energy Science portfolio, and recommendations are presented for what must be done to meet them. • How do we control material processes at the level of electrons? Electrons are the negatively charged subatomic particles whose dynamics determine materials properties and direct chemical, electrical, magnetic, and physical processes. If we can learn to direct and control material processes at the level of electrons, where the strange laws of quantum mechanics rule, it should pave the way for artificial photosynthesis and other highly efficient energy technologies, and could revolutionize computer technologies. Humans, through trial and error experiments or through lucky accidents, have been able to make only a tiny fraction of all the materials that are theoretically possible. If we can learn to design and create new materials with tailored properties, it could lead to low-cost photovoltaics, self-repairing and self-regulating devices, integrated photonic (light-based) technologies, and nano-sized electronic and mechanical devices. Emergent phenomena, in which a complex outcome emerges from the correlated interactions of many simple constituents, can be widely seen in nature, as in the interactions of neurons in the human brain that result in the mind, the freezing of water, or the giant magneto-resistance behavior that powers disk drives. If we can learn the fundamental rules of correlations and emergence and then learn how to control them, we could produce, among many possibilities, an entirely new generation of materials that supersede present-day semiconductors and superconductors. Biology is nature's version of nanotechnology, though the capabilities of biological systems can exceed those of human technologies by a vast margin. 
If we can understand biological functions and harness nanotechnologies with capabilities as effective as those of biological systems, it should clear the way towards profound advances in a great many scientific fields, including energy and information technologies. All natural and most human-induced phenomena occur in systems that are away from equilibrium, the state in which a system would not change with time. If we can understand system effects that take place away—especially very far away—from equilibrium and learn to control them, it could yield dramatic new energy-capture and energy storage technologies, greatly improve our predictions for molecular-level electronics, and enable new mitigation strategies for environmental damage. We now stand at the brink of a "Control Age" that could spark revolutionary changes in how we inhabit our planet, paving the way to a bright and sustainable future for us all. But answering the call of the five Grand Challenges for Basic Energy Science will require that we change our fundamental understanding of how nature works. This will necessitate a three-fold attack: new approaches to training and funding, development of instruments more precise and flexible than those used up to now for observational science, and creation of new theories and concepts beyond those we currently possess. The difficulties involved in this change of our understanding are huge, but the rewards for success should be extraordinary. If we succeed in meeting these five Grand Challenges, our ability to direct and control matter might one day be measured only by the limits of human imagination.

Basic Research Needs for Materials under Extreme Environments

This report is based on a BES Workshop on Basic Research Needs for Materials under Extreme Environments, June 11-13, 2007, to evaluate the potential for developing revolutionary new materials that will meet demanding future energy requirements that expose materials to environmental extremes.

Never has the world been so acutely aware of the inextricably linked issues of energy, environment, economy, and security. As the economies of developing countries boom, so does their demand for energy. Today nearly a quarter of the world does not have electrical power, yet the demand for electricity is projected to more than double over the next two decades. Increased demand for energy to power factories, transport commodities and people, and heat/cool homes also results in increased CO2 emissions. In 2007 China, a major consumer of coal, surpassed the United States in overall carbon dioxide emissions. As global CO2 emissions grow, the urgency grows to produce energy from carbon-based sources more efficiently in the near term and to move to non-carbon-based energy sources, such as solar, hydrogen, or nuclear, in the longer term. As we look toward the future, two points are very clear: (1) the economy and security of this nation are critically dependent on a readily available, clean and affordable energy supply; and (2) no one energy solution will meet all future energy demands, requiring investments in development of multiple energy technologies.
Materials are central to every energy technology, and future energy technologies will place increasing demands on materials performance with respect to extremes in stress, strain, temperature, pressure, chemical reactivity, photon or radiation flux, and electric or magnetic fields. For example, today's state-of-the-art coal-fired power plants operate at about 35% efficiency. Increasing this efficiency to 60% using supercritical steam requires raising operating temperatures by nearly 50% and essentially doubling the operating pressures. These operating conditions require new materials that can reliably withstand these extreme thermal and pressure environments. To lower fuel consumption in transportation, future vehicles will demand lighter-weight components with high strength. Next-generation nuclear fission reactors require materials capable of withstanding higher temperatures and higher radiation flux in highly corrosive environments for long periods of time without failure. These increasingly extreme operating environments accelerate the aging process in materials, leading to reduced performance and eventually to failure. If one extreme is harmful, two or more can be devastating. High temperature, for example, not only weakens chemical bonds, it also speeds up the chemical reactions of corrosion. Often materials fail at one-tenth or less of their intrinsic limits, and we do not understand why. This failure of materials is a principal bottleneck for developing future energy technologies that require placing materials under increasingly extreme conditions. Reaching the intrinsic limit of materials performance requires understanding the atomic and molecular origins of this failure. This knowledge would enable an increase in materials performance of an order of magnitude or more. Further, understanding how these extreme environments affect the physical and chemical processes that occur in the bulk material and at its surface would open the door to employing these conditions to make entirely new classes of materials with greatly enhanced performance for future energy technologies. This knowledge will not be achieved by incremental advances in materials science. Indeed, it will only be gained by innovative basic research that unlocks the fundamentals of how extreme environments interact with materials and how these interactions can be controlled to reach the intrinsic limits of materials performance and to develop revolutionary new materials. These new materials would have enormous impact on the development of future energy technologies: extending lifetimes, increasing efficiencies, providing novel capabilities, and lowering costs. Beyond energy applications, these new materials would have a huge impact on other areas of importance to this nation, including national security, industry, and other areas where robust, reliable materials are required.

This report summarizes the research directions identified by a Basic Energy Sciences Workshop on Basic Research Needs for Materials under Extreme Environments, held in June 2007. More than 140 invited scientists and engineers from academia, industry, and the national laboratories attended the workshop, along with representatives from other offices within the Department of Energy, including the National Nuclear Security Administration, the Office of Nuclear Energy, the Office of Energy Efficiency and Renewable Energy, and the Office of Fossil Energy.
Prior to the workshop, a technology resource document, Technology and Applied R&D Needs for Materials under Extreme Environments, was prepared that provided the participants with an overview of current and future materials needs for energy technologies. The workshop began with a plenary session that outlined the technology needs and the state of the art in research of materials under extreme conditions. The workshop was then divided into four panels, focusing on specific types of extreme environments: Energetic Flux Extremes, Chemically Reactive Extremes, Thermomechanical Extremes, and Electromagnetic Extremes. The four panels were asked to assess the current status of research in each of these four areas and identify the most promising research directions that would bridge the current knowledge gaps in understanding how these four extreme environments impact materials at the atomic and molecular levels. The goal was to outline specific Priority Research Directions (PRDs) that would ultimately lead to the development of vastly improved materials across a broad range of future energy technologies. During the course of the workshop, a number of common themes emerged across these four panels, and a fifth panel was charged to identify these cross-cutting research areas. Photons and energetic particles can cause damage to materials that occurs over broad time and length scales. While initiation, characterized by localized melting and re-crystallization, may occur in fractions of a picosecond, this process can produce cascades of point defects that diffuse and agglomerate into larger clusters. These nanoscale clusters can eventually reach macroscopic dimensions, leading to decreased performance and failure. The panel on energetic flux extremes noted that this degradation and failure is a key barrier to achieving more efficient energy generation systems and limits the lifetime of materials used in photovoltaics, solar collectors, nuclear reactors, optics, electronics and other energy and security systems used in extreme flux environments. The panel concluded that the ability to prevent this degradation from extreme fluxes is critically dependent on being able to elucidate the atomic- and molecular-level mechanisms of defect production and damage evolution triggered by single and multiple energetic particles and photons interacting with materials. Advances in characterization and computational tools have the potential to provide an unprecedented opportunity to elucidate these key mechanisms. In particular, ultrafast and ultra-high spatial resolution characterization tools will allow the initial atomic-scale damage events to be observed. Further, advanced computational capabilities have the potential to capture multiscale damage evolution from atomic to macroscopic dimensions. Elucidation of these mechanisms would allow the complex pathways of damage evolution from the atomic to the macroscopic scale to be understood. This knowledge would ultimately allow atomic and molecular structures to be manipulated in a predictable manner to create new materials that have extraordinary tolerance and can function within an extreme environment without property degradation. Further, it would provide revolutionary capabilities for synthesizing materials with novel structures or, alternatively, for forcing chemical reactions that normally result in damage to proceed along selected pathways that are either benign or self-repairing.
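A drastically simplified, mean-field rate-theory sketch of the defect production and annihilation processes described above is given below; the production, recombination, and sink-absorption coefficients are placeholder values chosen only to make the toy model run, not numbers taken from the report.

# Toy mean-field model: point-defect concentrations under irradiation.
# dC/dt = production - recombination - absorption at fixed sinks.
G = 1e-6        # defect-pair production rate per site per second (assumed)
K_iv = 1e2      # interstitial-vacancy recombination coefficient (assumed)
K_sink = 1.0    # absorption rate at dislocations and boundaries (assumed)

dt, steps = 1e-3, 200_000
Ci = Cv = 0.0   # interstitial and vacancy concentrations per atomic site
for _ in range(steps):
    recomb = K_iv * Ci * Cv
    Ci += dt * (G - recomb - K_sink * Ci)
    Cv += dt * (G - recomb - K_sink * Cv)

print(f"quasi-steady concentrations: Ci = {Ci:.2e}, Cv = {Cv:.2e}")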
Chemically reactive extreme environments are found in many advanced energy systems, including fuel cells, nuclear reactors, and batteries, among others. These conditions include aqueous and non-aqueous liquids (such as mineral acids, alcohols, and ionic liquids) and gaseous environments (such as hydrogen, ammonia, and steam). The panel evaluating extreme chemical environments concluded there is a lack of fundamental understanding of thermodynamic and kinetic processes that occur at the atomic level under these important reactive environments. The chemically induced degradation of materials is initiated at the interface of a material with its environment. Chemical stability in these environments is often controlled by protective surfaces, either by self-healing, stable films that form on a surface (such as oxides) or by coatings that are applied to a surface. Besides providing surface stability, these films must also prevent facile mass transport of reactive species into the bulk of the material. While some films can have long lifetimes, increasing severity of environments can cause the films to break down, leading to costly materials failure. A major challenge therefore is to develop a new generation of surface layers that are extremely robust under aggressive chemical conditions. Before this can be accomplished, however, it is critical to understand the equilibrium and non-equilibrium thermodynamics and reaction kinetics that occur at the atomic level at the interface of the protective film with its environment. The stability of the film can be further complicated by differences in the material's morphology, structure, and defects. It is critical that these complex and interrelated chemical and physical processes be understood at the nanoscale using new capabilities in materials characterization and theory, modeling, and simulation. Armed with this information, it will be possible to develop a new generation of robust surface films to protect materials in extreme chemical environments. Further, this understanding will provide insight into developing films that can self-heal and into synthesizing new classes of materials that have unimaginable stability to aggressive chemical environments. The need for materials that can withstand thermomechanical extremes—high pressure and stress, strain and strain rate, and high and low temperature—is found across a broad range of energy technologies, such as efficient steam turbines and heat exchangers, fuel-efficient vehicles, and strong wind turbine blades. Failures of materials under thermomechanical extremes can be catastrophic and costly. The panel on thermomechanical extremes concluded that designing new materials with properties specifically tailored to withstand thermomechanical extremes must begin with understanding the fundamental chemical and physical processes involved in materials failure, extending from the nanoscale to the collective behavior at the macroscale. Further, the behavior of materials must be understood under static, quasistatic, and dynamic thermomechanical extremes. This requires learning how atoms and electrons move within a material under extremes to provide insight into defect production and eventual evolution into microstructural components, such as dislocations, voids, and grain boundaries. This will require advanced analytical tools that can study materials in situ as these defects originate and evolve.
Once these processes are understood, it will be possible to predict responses of materials under thermomechanical extremes using advanced computation tools. Further, this fundamental knowledge will open new avenues for designing and synthesizing materials with unique properties. Using these thermomechanical extremes will allow the very nature of chemical bonds to be tuned to produce revolutionary new materials, such as ultrahard materials. As electrical energy demand grows, perhaps by greater than 70% over the next 50 years, so does the need to develop materials capable of operating at extreme electric and magnetic fields. To develop future electrical energy technologies, new materials are needed for magnets capable of operating at higher fields in generators and motors, insulators resistant to higher electric fields and field gradients, and conductors/superconductors capable of carrying higher current at lower voltage. The panel on electromagnetic extremes concluded that the discovery and understanding of this broad range of new materials requires revealing and controlling the defects that occur at the nanoscale. Defects are responsible for breakdown of insulators, yet defects are needed within local structures of superconductors to trap magnetic vortices. The ability to observe these defects as materials interact with electromagnetic extremes is just becoming available with advances in characterization tools with increased spatial and time resolution. Understanding how these nanoscale defects evolve to affect the macroscale behavior of materials is a grand challenge, and advances in multiscale modeling are required to understand the behavior of materials under these extremes. Once the behavior of defects in materials is understood, then materials could be designed to prevent dielectric breakdown or to enhance magnetic behavior. For example, composite materials having appropriate structures and properties could be tailored using nanoscale self-assembly techniques. The panel projected that understanding how electric and magnetic fields affect materials at the atomic and molecular level could lead to the ability to control materials properties and synthesis. Such control would lead to a new generation of materials that is just emerging today—such as electrooptic materials that can be switched between transparency and opacity through application of electric fields. Beyond energy applications, these tailored materials could have enormous importance in security, computing, electronics, and other applications. During the course of the workshop, four recurring science issues emerged as important themes: (1) Achieving the Limits of Performance; (2) Exploiting Extreme Environments for Materials Design and Synthesis; (3) Characterization on the Scale of Fundamental Interactions; and (4) Predicting and Modeling Materials Performance. All four of the workshop panels identified the need to understand the complex and interrelated physical and chemical processes that control the various performance limits of materials subjected to extreme conditions as the major technical bottleneck in meeting future energy needs. Most of these processes involve understanding the cascade of events that is initiated at atomic-level defects and progresses through macroscopic materials properties. 
By understanding various mechanisms by which materials fail, for example, it may be possible to increase the performance and lifetime limits of materials by an order of magnitude or more and thereby achieve the true limits of materials performance. Understanding the atomic and molecular basis of the interaction of extreme environments with materials provides an exciting and unique opportunity to produce entirely new classes of materials. Today materials are made primarily by changing temperature, composition, and, sometimes, pressure. The panels concluded that extreme conditions—in the form of high temperatures, pressures, strain rate, radiation fluxes, or external fields, alone or in combination—can potentially be used as new "knobs" that can be manipulated for the synthesis of revolutionary new materials. All four of the extreme environments offer new strategies for controlling the atomic- and molecular-level structure in unprecedented ways to produce materials with tailored functionalities. Achieving the breakthroughs needed to understand the atomic and molecular processes that occur within the bulk and at surfaces of materials in extreme environments will require advances in the final two cross-cutting areas, characterization and computation. Elucidating changes in structure and dynamics over broad timescales (femtoseconds to many seconds) and length scales (nanoscale to macroscale) is critical to realizing the revolutionary materials required for future energy technologies. Advances in characterization tools, including diffraction, scattering, spectroscopy, microscopy, and imaging, can provide this critical information. Of particular importance is the need to combine two or more of these characterization tools to permit so-called "multi-dimensional" analysis of materials and surfaces in situ. These advances will enable the elucidation of fundamental chemical and physical mechanisms that are at the heart of materials performance (and failure) and catalyze the discovery of new materials required for the next generation of energy technologies. Complementing these characterization techniques are computational techniques required for modeling and predicting materials behavior under extreme conditions. Recent advances in theory and algorithms, coupled with enormous and growing computational power and ever more sophisticated experimental methods, are opening up exciting new possibilities for taking advantage of predictive theory and simulation to design and predict the properties and performance of new materials required for extreme environments. New theoretical tools are needed to describe new phenomena and processes that occur under extreme conditions. These various tools need to be integrated across broad length scales—atomic to macroscopic—to model and predict the properties of real materials in response to extreme environments. Together with advanced synthesis and characterization techniques, these new capabilities in theory and modeling offer exciting opportunities to accelerate scientific discovery and shorten the development cycle from discovery to application. In concluding the workshop, the panelists were confident that today's gaps in materials performance under extreme conditions could be bridged if the physical and chemical changes that occur in bulk materials and at the interface with the extreme environment could be understood from the atomic to macroscopic scale.
These complex and interrelated phenomena can be unraveled as advances are realized in characterization and computational tools. These advances will allow structural changes, including defects, to be observed in real time and then modeled so the response of materials can be predicted. The concept of exploiting these extreme environments to create revolutionary new materials was viewed to be particularly exciting. Adding these parameters to the toolkit of materials synthesis opens unimaginable possibilities for developing materials with tailored properties. The knowledge needed for bridging these technology gaps requires significant investment in basic research, and this research needs to be coupled closely with the applied research and technology communities and industry that will drive future energy technologies. These investments in fundamental research of materials under extreme conditions will have a major impact on the development of technologies that can meet future requirements for abundant, affordable, and clean energy. Moreover, this research will enable the development of materials that will have a much broader impact in other applications that are critical to the security and economy of this nation.

Basic Research Needs: Catalysis for Energy

This report is based on a BES Workshop on Basic Research Needs in Catalysis for Energy Applications, August 6-8, 2007, to identify research needs and opportunities for catalysis to meet the nation's energy needs, provide an assessment of where the science and technology now stand, and recommend the directions for fundamental research that should be pursued to meet the goals described. The United States continues to rely on petroleum and natural gas as its primary sources of fuels. As the domestic reserves of these feedstocks decline, the volumes of imported fuels grow, and the environmental impacts resulting from fossil fuel combustion become severe, we as a nation must earnestly reassess our energy future. Catalysis—the essential technology for accelerating and directing chemical transformation—is the key to realizing environmentally friendly, economical processes for the conversion of fossil energy feedstocks. Catalysis also is the key to developing new technologies for converting alternative feedstocks, such as biomass, carbon dioxide, and water. With the declining availability of light petroleum feedstocks that are high in hydrogen and low in sulfur and nitrogen, energy producers are turning to ever-heavier fossil feedstocks, including heavy oils, tar sands, shale oil, and coal. Unfortunately, the heavy feedstocks yield less fuel than light petroleum and contain more sulfur and nitrogen. To meet the demands for fuels, a deep understanding of the chemistry of complex fossil-energy feedstocks will be required, together with an understanding of how to design catalysts for processing these feedstocks. The United States has the capacity to grow and convert enough biomass to replace nearly a third of the nation's current gasoline use. Building on catalysis for petroleum conversion, researchers have identified potential catalytic routes for biomass. However, biomass differs so much in composition and reactivity from fossil fuels that this starting point is inadequate. The technology for economically converting biomass into widely usable fuels does not exist, and the science underpinning its development is only now starting to emerge.
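The "nearly a third" figure quoted above can be sanity-checked on an energy basis. The numbers in the sketch below (U.S. gasoline consumption of roughly 140 billion gallons per year circa 2007, standard heating values for ethanol and gasoline, and an assumed ethanol yield) are assumptions for illustration rather than figures from the report.

# Back-of-envelope check of biomass-derived ethanol versus gasoline use.
ethanol_gal = 60e9            # assumed ethanol output from a large biomass harvest, gal/yr
gasoline_use_gal = 140e9      # rough U.S. gasoline consumption, gal/yr (assumed)
energy_ratio = 76e3 / 115e3   # ethanol vs gasoline heating value per gallon, Btu (assumed)

gasoline_equivalent = ethanol_gal * energy_ratio
print(f"~{gasoline_equivalent/1e9:.0f} billion gasoline-equivalent gallons "
      f"= {gasoline_equivalent/gasoline_use_gal:.0%} of gasoline use")
# roughly 40 billion gallons, i.e. on the order of 30% of gasoline use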
The challenge is to understand the chemistry by which cellulose- and lignin-derived molecules are converted to fuels and to use this knowledge as a basis for identifying the needed catalysts. To obtain energy densities similar to those of currently used fuels, the products of biomass conversion must have oxygen contents lower than that of biomass. Oxygen must be removed by using hydrogen derived from biomass or other sources in a manner that minimizes the yield of carbon dioxide as a byproduct. Catalytic conversion of carbon dioxide into liquid fuels using solar and electrical energy would enable the carbon in carbon dioxide to be recycled into fuels, thereby reducing its contribution to atmospheric warming. Likewise, the catalytic generation of hydrogen from water could provide a carbon-free source of hydrogen for fuel and for processing of fossil and biomass feedstocks. The underlying science is far from sufficient for design of efficient catalysts and economical processes.

Grand Challenges

To realize the full potential of catalysis for energy applications, scientists must develop a profound understanding of catalytic transformations so that they can design and build effective catalysts with atom-by-atom precision and convert reactants to products with molecular precision. Moreover, they must build tools to make real-time, spatially resolved measurements of operating catalysts. Ultimately, scientists must use these tools to achieve a fundamental understanding of catalytic processes occurring in multiscale, multiphase environments. The first grand challenge identified in this report centers on understanding mechanisms and dynamics of catalyzed reactions. Catalysis involves chemical transformations that must be understood at the atomic scale because catalytic reactions present an intricate dance of chemical bond-breaking and bond-forming steps. Structures of solid catalyst surfaces, where the reactions occur on only a few isolated sites and in the presence of highly complex mixtures of molecules interacting with the surface in myriad ways, are extremely difficult to describe. To discover new knowledge about mechanisms and dynamics of catalyzed reactions, scientists need to image surfaces at the atomic scale and probe the structures and energetics of the reacting molecules on varying time and length scales. They also need to apply theory to validate the results. The difficulties of developing a clear understanding of the mechanisms and dynamics of catalyzed reactions are magnified by the high temperatures and pressures at which the reactions occur and the influence of the molecules undergoing transformation on the catalyst. The catalyst structure changes as the reacting molecules become part of it en route to forming products. Although the scientific challenge of understanding catalyst structure and function is great, recent advances in characterization science and facilities provide the means for meeting it in the long term. The second grand challenge in the report centers on design and controlled synthesis of catalyst structures. Fundamental investigations of catalyst structures and the mechanisms of catalytic reactions provide the necessary foundation for the synthesis of improved catalysts. Theory can serve as a predictive design tool, guiding synthetic approaches for construction of materials with precisely designed catalytic surface structures at the nano and atomic scales.
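The exponential stakes behind this mechanistic understanding can be seen from the Arrhenius relation, k = A exp(-Ea/RT): a modest reduction in activation barrier translates into a large rate enhancement. The barrier heights, temperature, and prefactor in the sketch below are assumed, generic values, not data from the report.

import math

R, T, A = 8.314, 500.0, 1e13   # gas constant J/(mol K); assumed temperature, K; assumed prefactor, 1/s

def k(Ea_kJ_per_mol):
    # Arrhenius rate constant for an activation barrier given in kJ/mol
    return A * math.exp(-Ea_kJ_per_mol * 1e3 / (R * T))

enhancement = k(80.0) / k(100.0)   # catalyst assumed to lower the barrier by 20 kJ/mol
print(f"rate enhancement ~ {enhancement:.0f}x")   # roughly 120x at 500 K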
Success in the design and controlled synthesis of catalytic structures requires an interplay between (1) characterization of catalysts as they function, including evaluation of their performance under technologically realistic conditions, and (2) synthesis of catalyst structures to achieve high activity and product selectivity.

Priority Research Directions

The workshop process identified three priority research directions for advancing catalysis science for energy applications:

Advanced catalysts for the conversion of heavy fossil energy feedstocks

The depletion of light, sweet crude oil has caused increasing use of heavy oils and other heavy feedstocks. The complicated nature of the molecules in these feedstocks, as well as their high heteroatom contents, requires catalysts and processing routes entirely different from those used in today's petroleum refineries. To advance catalytic technologies for converting heavy feedstocks, scientists must (1) identify and quantify the heavy molecules (now possible with methods such as high-resolution mass spectrometry) and (2) determine data to represent the reactivities of the molecules in the presence of the countless other kinds of molecules interacting with the catalysts. Methods for determining reactivities of individual compounds within complex feedstocks reacting under industrial conditions soon will be available. Reactivity data, when combined with fundamental understanding of how the reactants interact with the catalysts, will facilitate the selection of new catalysts for heavy feedstocks and the prediction of properties of the fuels produced.

Understanding the chemistry of lignocellulosic biomass deconstruction and conversion to fuels

The United States potentially could harvest 1.3 billion tons of biomass annually. Converting this resource to ethanol would produce more than 60 billion gallons/year, enough to replace 30 percent of the nation's current gasoline use. Scientists must develop fundamental understanding of biomass deconstruction, either through high-temperature pyrolysis or low-temperature catalytic conversion, before engineers can create commercial biomass conversion technologies. Pyrolysis generates gases and liquids for processing into fuels or blending with existing petroleum refinery streams. Low-temperature deconstruction produces sugars and lignin for conversion into molecules with higher energy densities than the parent biomass. Scientists also must discover and develop new catalysts for targeted transformations of these biomass-derived molecules into fuels. Developing a molecular-scale understanding of deconstruction and conversion of biomass products to fuels would contribute to the development of optimal processes for particular biomass sources. Knowledge of how catalyst structure and composition affect the kinetics of individual processes could lead to new catalysts with properties adjusted for maximum activity and selectivity for high- and low-temperature processing of biomass.

Photo- and electro-driven conversions of carbon dioxide and water

Catalytic conversion of carbon dioxide to liquid fuels facilitated by the input of solar or electrical energy presents an immense opportunity for new sources of energy. Furthermore, the catalytic generation of hydrogen from water could provide a carbon-free source of hydrogen for fuel and for processing of fossil and biomass feedstocks.
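For the water-splitting route just mentioned, the thermodynamic floor is easy to state: the standard Gibbs energy of the reaction fixes a minimum cell voltage of about 1.23 V, while practical electrolyzers run well above it. A minimal sketch, using standard textbook values quoted here as assumptions:

dG = 237.1e3      # standard Gibbs energy to split one mole of liquid water, J/mol (assumed)
n, F = 2, 96485   # electrons transferred per H2 molecule; Faraday constant, C/mol

E_min = dG / (n * F)
print(f"thermodynamic minimum cell voltage ~ {E_min:.2f} V")   # ~1.23 V; real cells need substantially more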
Although these electrolytic processes are possible, they are not now economical, because they depend on expensive and rare materials, such as platinum, and require significantly more energy than the minimum dictated by thermodynamics. Scientists have explored the use of photons to drive thermodynamically uphill reactions, but the efficiencies of the best-known processes are very low. To dramatically increase efficiencies, we need to understand the elementary processes by which photocatalysts and electrocatalysts operate and the phenomena that limit their effectiveness. This knowledge would guide the search for more efficient catalysts. To address the challenge of increased efficiency, scientists must develop fundamental understanding on the basis of novel spectroscopic methods to probe the surfaces of photocatalysts and electrocatalysts in the presence of liquid electrolytes. New catalysts will have to involve multiple-site structures and be able to drive the multiple-electron and hydrogen transfer reactions required to produce fuels from carbon dioxide and water. Theoretical investigations also are needed to understand the manifold processes occurring on photocatalysts and electrocatalysts, many of which are unique to the conditions of their use. Basic research to address these challenges will result in fundamental knowledge and expertise crucial for developing efficient, durable, and scalable catalysts.

Crosscutting Research Issues

Two broad issues cut across the grand challenges and the priority research directions for development of efficient, economical, and environmentally friendly catalytic processes for energy applications:

Experimental characterization of catalysts as they function is a theme common to all the processes mentioned here—ranging from heavy feedstock refining to carbon dioxide conversion to fuels. The scientific community needs a fundamental understanding of catalyst structures and catalytic reaction mechanisms to design and prepare improved catalysts and processes for energy conversion. Attainment of this understanding requires development of new techniques and facilities for investigating catalysts as they function in the presence of complex, real feedstocks at high temperatures and pressures. The community also needs improved methods for characterizing the feedstocks and products—to the point of identifying individual compounds in these complex mixtures. The dearth of information characterizing biomass-derived feedstocks and the growing complexity of the available heavy fossil feedstocks, as well as the intrinsic complexity of catalyst surfaces, magnify the difficulty of this challenge. Implied in the need for better characterization is the need for advanced methods and instrument hardware and software far beyond today's capabilities. Improved spectroscopic and microscopic capabilities, specifically including synchrotron-based equipment and methods, will provide significantly enhanced temporal, spatial, and energy resolution of catalysts and new opportunities for elucidating their performance under realistic reaction conditions. Achieving these crosscutting goals for better catalyst characterization will require breakthrough developments in techniques and much improved methodologies for combining multiple complementary techniques.

Advances in theory and computation are also required to significantly advance catalysis for energy applications.
A major challenge is to understand the mechanisms and dynamics of catalyzed transformations, enabling rational design of catalysts. Molecular-level understanding is essential to "tune" a catalyst to produce the right products with minimal energy consumption and environmental impact. Applications of computational chemistry and methods derived from advanced chemical theory are crucial to the development of fundamental understanding of catalytic processes and ultimately to first-principles catalyst design. Development of this understanding requires breakthroughs in theoretical and computational methods to allow treatment of the complexity of the molecular reactants and condensed-phase and interfacial catalysts needed to convert new energy feedstocks to useful products. Computation, when combined with advanced experimental techniques, is already leading to broad new insights into catalyst behavior and the design of new materials. The development of new theories and computational tools that accurately predict thermodynamic properties, dynamical behavior, and coupled kinetics of complex condensed-phase and interfacial processes is a crosscutting priority research direction to address the grand challenges of catalysis science, especially in the area of advanced energy technologies.

Scientific and Technological Impact

The urgent need for fuels in an era of declining resources and pressing environmental concerns demands a resurgence in catalysis science, requiring a massive commitment of programmatic leadership and improved experimental and theoretical methods. These elements will make it possible to follow, in real time, catalytic reactions on an atomic scale on surfaces that are nonuniform and laden with large molecules undergoing complex competing processes. The understanding that will emerge promises to engender technology for economical catalytic processing of ever more challenging fossil feedstocks and for breakthroughs needed to create an industry for energy production from biomass. These new technologies are needed for a sustainable supply of energy from domestic sources and mitigation of the problem of greenhouse gas emissions.

Future Science Needs and Opportunities for Electron Scattering: Next-Generation Instrumentation and Beyond

This report is based on a BES Workshop entitled "Future Science Needs and Opportunities for Electron Scattering: Next-Generation Instrumentation and Beyond," March 1-2, 2007, to identify emerging basic science and engineering research needs and opportunities that will require major advances in electron-scattering theory, technology, and instrumentation. The workshop was organized to help define the scientific context and strategic priorities for the U.S. Department of Energy's Office of Basic Energy Sciences (DOE-BES) electron-scattering development for materials characterization over the next decade and beyond. Attendees represented university, national laboratory, and commercial research organizations from the United States and around the world. The workshop comprised plenary sessions, breakout groups, and joint open discussion summary sessions. In the last 40 years, advances in instrumentation have gradually increased the resolution capabilities of commercial electron microscopes.
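For orientation, the ultimate length scale available to any electron-optical instrument is the electron wavelength, which at common accelerating voltages is already far below atomic spacings; lens aberrations, not wavelength, have historically set the practical resolution limit. A short sketch, with accelerating voltages chosen for illustration:

import math

h, m0, e, c = 6.626e-34, 9.109e-31, 1.602e-19, 2.998e8   # Planck constant, electron mass, charge, speed of light (SI)

def wavelength_pm(volts):
    # relativistic de Broglie wavelength of an electron accelerated through `volts`
    eV = e * volts
    p = math.sqrt(2 * m0 * eV * (1 + eV / (2 * m0 * c ** 2)))
    return h / p * 1e12

for kv in (100, 200, 300):
    print(f"{kv} kV: {wavelength_pm(kv * 1e3):.2f} pm")
# ~3.70, 2.51, and 1.97 pm respectively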
Within the last decade, however, a revolution has occurred, facilitating 1-nm resolution in the scanning electron microscope and sub-Ångstrom resolution in the transmission electron microscope. This revolution was a direct result of decades-long research efforts concentrating on electron optics, both theoretically and in practice, leading to implementation of aberration correctors that employ multi-pole electron lenses. While this improvement has been a remarkable achievement, it has also inspired the scientific community to ask what other capabilities are required beyond "image resolution" to more fully address the scientific problems of today's technologically complex materials. During this workshop, a number of scientific challenges requiring breakthroughs in electron scattering and/or instrumentation for characterization of materials were identified. Although the individual scientific problems identified in the workshop were wide-ranging, they are well represented by seven major scientific challenges. These are listed in Table 1, together with their associated application areas as proposed by workshop attendees. Addressing these challenges will require dedicated long-term developmental efforts similar to those that have been applied to the electron optics revolution. This report summarizes the scientific challenges identified by attendees and then outlines the technological issues that need to be addressed by a long-term research and development (R&D) effort to overcome these challenges. A recurring message voiced during the meeting was that, while improved image resolution in commercially available tools is significant, this is only the first of many breakthroughs required to answer today's most challenging problems. The major technological issues that were identified appear in Table 2. These issues require not only the development of innovative instrumentation but also new analytical procedures that connect experiment, theory, and modeling.

Table 1. Scientific Challenges and Application Areas Identified during the Workshop (theme: application areas)
1. The nanoscale origin of macroscopic properties: High-performance 21st century materials in both structural engineering and electronic applications
2. The role of individual atoms, point defects, and dopants in materials: Semiconductors, catalysts, quantum phenomena and confinement, fracture, embrittlement, solar energy, nuclear power, radiation damage
3. Characterization of interfaces at arbitrary orientations: Semiconductors, three-dimensional geometries for nanostructures, grain-boundary-dominated processes, hydrogen storage
4. The interface between ordered and disordered materials: Dynamic behavior of the liquid-solid interface, organic/inorganic interfaces, friction/wear, grain boundaries, welding, polymer/metal/oxide composites, self-assembly
5. Mapping of electromagnetic (EM) fields in and around nanoscale matter: Ferroelectric/magnetic structures, switching, tunneling and transport, quantum confinement/proximity, superconductivity
6. Probing structures in their native environments: Catalysis, fuel cells, organic/inorganic interfaces, functionalized nanoparticles for health care, polymers, biomolecular processes, biomaterials, soft-condensed matter, non-vacuum environments
7. The behavior of matter far from equilibrium: High radiation, high-pressure and high-temperature environments, dynamic/transient behavior, nuclear and fusion energy, outer space, nucleation, growth and synthesis in solution, corrosion, phase transformations

Table 2. Functionality Required to Address Challenges in Table 1
1. In-situ environments permitting observation of processes under conditions that replicate real-world/real-time conditions (temperature, pressure, atmosphere, EM fields, fluids) with minimal loss of image and/or spectral resolution
2. Detectors that enhance by more than an order of magnitude the temporal, spatial, and/or collection efficiency of existing technologies for electrons, photons, and/or X-rays
3. Higher temporal resolution instruments for dynamic studies with a continuous range of operating conditions from microseconds to femtoseconds
4. Sources having higher brightness, temporal resolution, and polarization
5. Electron-optical configurations designed to study complex interactions of nanoscale objects under multiple excitation processes (photons, fields, …)
6. Virtual instruments operating in connection with experimental tools, allowing real-time quantitative data analysis or simulation, and community software tools for routine and robust data analysis

Some research efforts have already begun to address these topics. However, a dedicated and coordinated approach is needed to address these challenges more rapidly. For example, the principles of aberration correction for electron-optical lenses were established theoretically by Scherzer (Zeitschrift für Physik 101(9-10), 593-603) in 1936, but practical implementation was not realized until 1997 (a 61-year development cycle). Reducing development time to less than a decade is essential in addressing the scientific issues in the ever-growing nanoscale materials world. To accomplish this, DOE should make a concerted effort to revise how it funds advanced resources and R&D for electron beam instrumentation across its programs.

Basic Research Needs for Electrical Energy Storage

This report is based on a BES Workshop on Basic Research Needs for Electrical Energy Storage (EES), April 2-4, 2007, to identify basic research needs and opportunities underlying batteries, capacitors, and related EES technologies, with a focus on new or emerging science challenges with potential for significant long-term impact on the efficient storage and release of electrical energy. The projected doubling of world energy consumption within the next 50 years, coupled with the growing demand for low- or even zero-emission sources of energy, has brought increasing awareness of the need for efficient, clean, and renewable energy sources. Energy based on electricity that can be generated from renewable sources, such as solar or wind, offers enormous potential for meeting future energy demands. However, the use of electricity generated from these intermittent, renewable sources requires efficient electrical energy storage. For commercial and residential grid applications, electricity must be reliably available 24 hours a day; even second-to-second fluctuations cause major disruptions with costs estimated to be tens of billions of dollars annually.
Thus, for large-scale solar- or wind-based electrical generation to be practical, the development of new EES systems will be critical to meeting continuous energy demands and effectively leveling the cyclic nature of these energy sources. In addition, greatly improved EES systems are needed to progress from today's hybrid electric vehicles to plug-in hybrids or all-electric vehicles. Improvements in EES reliability and safety are also needed to prevent premature, and sometimes catastrophic, device failure. Chemical energy storage devices (batteries) and electrochemical capacitors (ECs) are among the leading EES technologies today. Both are based on electrochemistry, and the fundamental difference between them is that batteries store energy in chemical reactants capable of generating charge, whereas electrochemical capacitors store energy directly as charge. The performance of current EES technologies falls well short of requirements for using electrical energy efficiently in transportation, commercial, and residential applications. For example, EES devices with substantially higher energy and power densities and faster recharge times are needed if all-electric/plug-in hybrid vehicles are to be deployed broadly as replacements for gasoline-powered vehicles. Although EES devices have been available for many decades, there are many fundamental gaps in understanding the atomic- and molecular-level processes that govern their operation, performance limitations, and failure. Fundamental research is critically needed to uncover the underlying principles that govern these complex and interrelated processes. With a full understanding of these processes, new concepts can be formulated for addressing present EES technology gaps and meeting future energy storage requirements. BES worked closely with the DOE Office of Energy Efficiency and Renewable Energy and the DOE Office of Electricity Delivery and Energy Reliability to clearly define future requirements for EES from the perspective of applications relevant to transportation and electricity distribution, respectively, and to identify critical technology gaps. In addition, leaders in EES industrial and applied research laboratories were recruited to prepare a technology resource document, Technology and Applied R&D Needs for Electrical Energy Storage, which provided the groundwork for the workshop and informed the attendees' discussions of basic research needs. The invited workshop attendees, numbering more than 130, included representatives from universities, national laboratories, and industry, including a significant number of scientists from Japan and Europe. A plenary session at the beginning of the workshop captured the present state of the art in research and development and the technology needs for future EES. The workshop participants were asked to identify key priority research directions that hold particular promise for providing needed advances that will, in turn, revolutionize the performance of EES. Participants were divided between two panels focusing on the major types of EES, chemical energy storage and capacitive energy storage. A third panel focused on cross-cutting research that will be critical to achieving the technical breakthroughs required to meet future EES needs. A closing plenary session summarized the most urgent research needs that were identified for both chemical and capacitive energy storage.
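A crude numerical comparison makes the battery-versus-capacitor distinction drawn above concrete. The device parameters below are generic assumed values, not data from the workshop:

# Stored energy: a small battery cell versus a large electrochemical capacitor.
battery_Ah, battery_V = 3.0, 3.6     # assumed cell capacity (Ah) and average discharge voltage (V)
cap_F, cap_V = 3000.0, 2.7           # assumed capacitance (F) and rated voltage (V)

E_battery = battery_Ah * 3600 * battery_V      # charge x voltage, joules
E_cap = 0.5 * cap_F * cap_V ** 2               # (1/2) C V^2, joules
print(f"battery ~{E_battery/3600:.1f} Wh, capacitor ~{E_cap/3600:.1f} Wh")
# roughly 11 Wh versus 3 Wh for devices of broadly similar size; the capacitor
# delivers its energy far faster but stores considerably less of it.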
The research directions identified by the panelists are presented in this report in three sections corresponding to the findings of the three workshop panels. The panel on chemical energy storage acknowledged that progressing to the higher energy and power densities required for future batteries will push materials to the edge of stability; yet these devices must be safe and reliable through thousands of rapid charge-discharge cycles. A major challenge for chemical energy storage is developing the ability to store more energy while maintaining stable electrode-electrolyte interfaces. The need to mitigate the volume and structural changes to the active electrode sites accompanying the charge-discharge cycle encourages exploration of nanoscale structures. Recent developments in nanostructured and multifunctional materials were singled out as having the potential to dramatically increase energy capacity and power densities. However, an understanding of nanoscale phenomena is needed to take full advantage of the unique chemistry and physics that can occur at the nanoscale. Further, there is an urgent need to develop a fundamental understanding of the interdependence of the electrolyte and electrode materials, especially with respect to controlling charge transfer from the electrode to the electrolyte. Combining the power of new computational capabilities and in situ analytical tools could open up entirely new avenues for designing novel multifunctional nanomaterials with the desired physical and chemical properties, leading to greatly enhanced performance. The panel on capacitive storage recognized that, in general, ECs have higher power densities than batteries, as well as sub-second response times. However, energy storage densities are currently lower than they are for batteries and are insufficient for many applications. As with batteries, the need for higher energy densities requires new materials. Similarly, advances in electrolytes are needed to increase voltage and conductivity while ensuring stability. Understanding how materials store and transport charge at electrode-electrolyte interfaces is critically important and will require a fundamental understanding of charge transfer and transport mechanisms. The capability to synthesize nanostructured electrodes with tailored, high-surface-area architectures offers the potential for storing multiple charges at a single site, increasing charge density. The addition of surface functionalities could also contribute to high and reproducible charge storage capabilities, as well as rapid charge-discharge functions. The design of new materials with tailored architectures optimized for effective capacitive charge storage will be catalyzed by new computational and analytical tools that can provide the needed foundation for the rational design of these multifunctional materials. These tools will also provide the molecular-level insights required to establish the physical and chemical criteria for attaining higher voltages, higher ionic conductivity, and wide electrochemical and thermal stability in electrolytes. The third panel identified four cross-cutting research directions that were considered to be critical for meeting future technology needs in EES:
1. Advances in Characterization
2. Nanostructured Materials
3. Innovations in Electrolytes
4. Theory, Modeling, and Simulation
Exceptional insight into the physical and chemical phenomena that underlie the operation of energy storage devices can be afforded by a new generation of analytical tools.
This information will catalyze the development of new materials and processes required for future EES systems. New in situ photon- and particle-based microscopic, spectroscopic, and scattering techniques with time resolution down to the femtosecond range and spatial resolution spanning the atomic and mesoscopic scales are needed to meet the challenge of developing future EES systems. These measurements are critical to achieving the ability to design EES systems rationally, including materials and novel architectures that exhibit optimal performance. This information will help identify the underlying reasons behind failure modes and afford directions for mitigating them. The performance of energy storage systems is limited by the performance of the constituent materials—including active materials, conductors, and inert additives. Recent research suggests that synthetic control of material architectures (including pore size, structure, and composition; particle size and composition; and electrode structure down to nanoscale dimensions) could lead to transformational breakthroughs in key energy storage parameters such as capacity, power, charge-discharge rates, and lifetimes. Investigation of model systems of irreducible complexity will require the close coupling of theory and experiment in conjunction with well-defined structures to elucidate fundamental materials properties. Novel approaches are needed to develop multifunctional materials that are self-healing, self-regulating, failure-tolerant, impurity-sequestering, and sustainable. Advances in nanoscience offer particularly exciting possibilities for the development of revolutionary three-dimensional architectures that simultaneously optimize ion and electron transport and capacity. The design of EES systems with long cycle lifetimes and high energy-storage capacities will require a fundamental understanding of charge transfer and transport processes. The interfaces of electrodes with electrolytes are astonishingly complex and dynamic. The dynamic structures of interfaces need to be characterized so that the paths of electrons and attendant trafficking of ions may be directed with exquisite fidelity. New capabilities are needed to "observe" the dynamic composition and structure at an electrode surface, in real time, during charge transport and transfer processes. With this underpinning knowledge, wholly new concepts in materials design can be developed for producing materials that are capable of storing higher energy densities and have long cycle lifetimes. A characteristic common to chemical and capacitive energy storage devices is that the electrolyte transfers ions/charge between electrodes during charge and discharge cycles. An ideal electrolyte provides high conductivity over a broad temperature range, is chemically and electrochemically inert at the electrode, and is inherently safe. Too often the electrolyte is the weak link in the energy storage system, limiting both performance and reliability of EES. At present, the myriad interactions that occur in electrolyte systems—ion-ion, ion-solvent, and ion-electrode—are poorly understood. Fundamental research will provide the knowledge that will permit the formulation of novel designed electrolytes, such as ionic liquids and nanocomposite polymer electrolytes, that will enhance the performance and lifetimes of electrolytes. 
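One way to see why the high-surface-area architectures discussed above matter for capacitive storage: double-layer capacitance scales with the electrode surface area that the electrolyte can actually reach. The areal capacitance and specific surface area in the sketch below are assumed, typical-order-of-magnitude values:

areal_cap = 10e-6        # double-layer capacitance per unit area, F/cm^2 (assumed)
surface_area = 1500.0    # specific surface area of a nanoporous carbon, m^2/g (assumed)

specific_cap = areal_cap * surface_area * 1e4   # 1 m^2 = 1e4 cm^2
print(f"~{specific_cap:.0f} F per gram of electrode")
# ~150 F/g, orders of magnitude above a flat electrode of the same mass,
# provided the ions can reach the internal surface.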
Advances in fundamental theoretical methodologies and computer technologies provide an unparalleled opportunity for understanding the complexities of processes and materials needed to make the groundbreaking discoveries that will lead to the next generation of EES. Theory, modeling, and simulation can effectively complement experimental efforts and can provide insight into mechanisms, predict trends, identify new materials, and guide experiments. Large multiscale computations that integrate methods at different time and length scales have the potential to provide a fundamental understanding of processes such as phase transitions in electrode materials, ion transport in electrolytes, charge transfer at interfaces, and electronic transport in electrodes. Revolutionary breakthroughs in EES have been singled out as perhaps the most crucial need for this nation's secure energy future. The BES Workshop on Basic Research Needs for Electrical Energy Storage concluded that the breakthroughs required for tomorrow's energy storage needs will not be realized with incremental evolutionary improvements in existing technologies. Rather, they will be realized only with fundamental research to understand the underlying processes involved in EES, which will in turn enable the development of novel EES concepts that incorporate revolutionary new materials and chemical processes. Recent advances have provided the ability to synthesize novel nanoscale materials with architectures tailored for specific performance; to characterize materials and dynamic chemical processes at the atomic and molecular level; and to simulate and predict structural and functional relationships using modern computational tools. Together, these new capabilities provide unprecedented potential for addressing technology and performance gaps in EES devices.

Basic Research Needs for Geosciences: Facilitating 21st Century Energy Systems

This report is based on a BES Workshop on Basic Research Needs for Geosciences: Facilitating 21st Century Energy Systems, February 21-23, 2007, to identify research areas in geosciences, such as behavior of multiphase fluid-solid systems on a variety of scales, chemical migration processes in geologic media, characterization of geologic systems, and modeling and simulation of geologic systems, needed for improved energy systems. Serious challenges must be faced in this century as the world seeks to meet global energy needs and at the same time reduce emissions of greenhouse gases to the atmosphere. Even with a growing energy supply from alternative sources, fossil carbon resources will remain in heavy use and will generate large volumes of carbon dioxide (CO2). To reduce the atmospheric impact of this fossil energy use, it is necessary to capture and sequester a substantial fraction of the produced CO2. Subsurface geologic formations offer a potential location for long-term storage of the requisite large volumes of CO2. Nuclear energy resources could also reduce use of carbon-based fuels and CO2 generation, especially if nuclear energy capacity is greatly increased. Nuclear power generation results in spent nuclear fuel and other radioactive materials that also must be sequestered underground.
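The phrase "large volumes of CO2" can be given a rough number. The density, porosity, and sweep-efficiency values in the sketch below are assumptions chosen only to indicate scale:

# Rough scale of storing one gigatonne of CO2 in a deep saline formation.
mass = 1e12          # kg of CO2 (one gigatonne)
rho = 650.0          # supercritical CO2 density at reservoir conditions, kg/m^3 (assumed)
porosity = 0.2       # rock porosity (assumed)
sweep = 0.05         # fraction of pore space actually occupied by CO2 (assumed)

fluid_km3 = mass / rho / 1e9
rock_km3 = fluid_km3 / (porosity * sweep)
print(f"fluid volume ~{fluid_km3:.1f} km^3, bulk rock volume ~{rock_km3:.0f} km^3")
# roughly 1.5 km^3 of fluid, spread through on the order of 150 km^3 of rock, per gigatonne stored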
Hence, regardless of technology choices, there will be major increases in the demand to store materials underground in large quantities, for long times, and with increasing efficiency and safety margins. Rock formations are composed of complex natural materials and were not designed by nature as storage vaults. If new energy technologies are to be developed in a timely fashion while ensuring public safety, fundamental improvements are needed in our understanding of how these rock formations will perform as storage systems. This report describes the scientific challenges associated with geologic sequestration of large volumes of carbon dioxide for hundreds of years, and also addresses the geoscientific aspects of safely storing nuclear waste materials for thousands to hundreds of thousands of years. The fundamental crosscutting challenge is to understand the properties and processes associated with complex and heterogeneous subsurface mineral assemblages comprising porous rock formations, and the equally complex fluids that may reside within and flow through those formations. The relevant physical and chemical interactions occur on spatial scales that range from those of atoms, molecules, and mineral surfaces, up to tens of kilometers, and time scales that range from picoseconds to millennia and longer. To predict with confidence the transport and fate of either CO2 or the various components of stored nuclear materials, we need to learn to better describe fundamental atomic, molecular, and biological processes, and to translate those microscale descriptions into macroscopic properties of materials and fluids. We also need fundamental advances in the ability to simulate multiscale systems as they are perturbed during sequestration activities and for very long times afterward, and to monitor those systems in real time with increasing spatial and temporal resolution. The ultimate objective is to predict accurately the performance of the subsurface fluid-rock storage systems, and to verify enough of the predicted performance with direct observations to build confidence that the systems will meet their design targets as well as environmental protection goals. The report summarizes the results and conclusions of a Workshop on Basic Research Needs for Geosciences held in February 2007. Five panels met, resulting in four Panel Reports, three Grand Challenges, six Priority Research Directions, and three Crosscutting Research Issues. The Grand Challenges differ from the Priority Research Directions in that the former describe broader, long-term objectives while the latter are more focused. Computational thermodynamics of complex fluids and solids. Predictions of geochemical transport in natural materials must start with detailed knowledge of the chemical properties of multicomponent fluids and solids. New modeling strategies for geochemical systems based on first-principles methods are required, as well as reliable tools for translating atomic- and molecular-scale descriptions to the many orders of magnitude larger scales of subsurface geologic systems. Specific challenges include calculation of equilibrium constants and kinetics of heterogeneous reactions, descriptions of adsorption and other mineral surface processes, properties of transuranic elements and compounds, and mixing and transport properties for multicomponent liquid, solid and supercritical solutions.
Significant advances are required in calculations based on the electronic Schrödinger equation, scaling of solution methods, and representation in terms of Equations of State. Calibration of models with a new generation of experiments will be critical. Integrated characterization, modeling, and monitoring of geologic systems. Characterization of the subsurface is inextricably linked to the modeling and monitoring of processes occurring there. More accurate descriptions of the behavior of subsurface storage systems will require that the diverse, independent approaches currently used for characterizing, modeling and monitoring be linked in a revolutionary and comprehensive way and carried out simultaneously. The challenges arise from the inaccessibility and complexity of the subsurface, the wide range of scales of variability, and the potential role of coupled nonlinear processes. Progress in subsurface simulation requires advances in the application of geological process knowledge for determining model structure and the effective integration of geochemical and high-resolution geophysical measurements into model development and parameterization. To fully integrate characterization and modeling will require advances in methods for joint inversion of coupled process models that effectively represent nonlinearities, scale effects, and uncertainties. Simulation of multiscale geologic systems for ultra-long times. Anthropogenic perturbations of subsurface storage systems will occur over decades, but predictions of storage performance will be needed that span hundreds to many thousands of years, time scales that reach far beyond standard engineering practice. Achieving this simulation capability requires a major advance in modeling capability that will accurately couple information across scales, i.e., account for the effects of small-scale processes on larger scales, and the effects of fast processes as well as the ultra-slow evolution on long time scales. Cross-scale modeling of complex dynamic subsurface systems requires the development of new computational and numerical methods of stochastic systems, new multiscale formulations, data integration, improvements in inverse theory, and new methods for optimization. Mineral-water interface complexity and dynamics. Natural materials are structurally complex, with variable composition, roughness, defect content, and organic and mineral coatings. There is an overarching need to interrogate the complex structure and dynamics at mineral-water interfaces with increasing spatial and temporal resolution using existing and emerging experimental and computational approaches. The fundamental objectives are to translate a molecular-scale description of complex mineral surfaces to thermodynamic quantities for the purpose of linking with macroscopic models, to follow interfacial reactions in real time, and to understand how minerals grow and dissolve and how the mechanisms couple dynamically to changes at the interface. Nanoparticulate and colloid chemistry and physics. Colloidal particles play critical roles in dispersion of contaminants from energy production, use, or waste isolation sites. New advances are needed in characterization of colloids, sampling technologies, and conceptual models for reactivity, fate, and transport of colloidal particles in aqueous environments. Specific advances will be needed in experimental techniques to characterize colloids at the atomic level and to build quantitative models of their properties and reactivity. 
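The equilibrium constants referred to under the computational-thermodynamics challenge above connect to free energies through K = exp(-ΔG°/RT), which is one reason small errors in computed energetics matter so much. A minimal sketch, with an assumed standard reaction Gibbs energy roughly that of calcite dissolution at 25 °C:

import math

R, T = 8.314, 298.15      # gas constant, J/(mol K); temperature, K
dG = 48.4e3               # assumed standard reaction Gibbs energy, J/mol

K = math.exp(-dG / (R * T))
print(f"K ~ {K:.2e} (log10 K ~ {math.log10(K):.1f})")   # about 10^-8.5
# An error of only ~6 kJ/mol in dG shifts log10 K by roughly one unit.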
Dynamic imaging of flow and transport. Improved imaging in the subsurface is needed to allow in situ multiscale measurement of state variables as well as flow, transport, fluid age, and reaction rates. Specific research needs include development of smart tracers, identification of environmental tracers that would allow age dating fluids in the 50-3000 year range, methods for measuring state variables such as pressure and temperature continuously in space and time, and better models for the interactions of physical fields, elastic waves, or electromagnetic perturbations with fluid-filled porous media.

Transport properties and in situ characterization of fluid trapping, isolation, and immobilization. Mechanisms of immobilization of injected CO2 include buoyancy trapping of fluids by geologic seals, capillary trapping of fluid phases as isolated bubbles within rock pores, and sorption of CO2 or radionuclides on solid surfaces. Specific advances will be needed in our ability to understand and represent the interplay of interfacial tension, surface properties, buoyancy, the state of stress, and rock heterogeneity in the subsurface.

Fluid-induced rock deformation. CO2 injection affects the thermal, mechanical, hydrological, and chemical state of large volumes of the subsurface. Accurate forecasting of the effects requires improved understanding of the coupled stress-strain and flow response to injection-induced pressure and hydrologic perturbations in multiphase-fluid saturated systems. Such effects manifest themselves as changes in rock properties at the centimeter scale, mechanical deformation at meter-to-kilometer scales, and modified regional fluid flow at scales up to 100 km. Predicting the hydromechanical properties of rocks over this scale range requires improved models for the coupling of chemical, mechanical, and hydrological effects. Such models could revolutionize our ability to understand shallow crustal deformation related to many other natural processes and engineering applications.

Biogeochemistry in extreme subsurface environments. Microorganisms strongly influence the mineralogy and chemistry of geologic systems. CO2 and nuclear material isolation will perturb the environments for these microorganisms significantly. Major advances are needed to describe how populations of microbes will respond to the extreme environments of temperature, pH, radiation, and chemistry that will be created, so that a much clearer picture of biogenic products, potential for corrosion, and transport or immobilization of contaminants can be assembled.

The microscopic basis of macroscopic complexity. Classical continuum mechanics relies on the assumption of a separation between the length scales of microscopic fluctuations and macroscopic motions. However, in geologic problems this scale separation often does not exist. There are instead fluctuations at all scales, and the resulting macroscopic behavior can then be quite complex. The essential need is to develop a scientific basis of "emergent" phenomena based on the microscopic phenomena.

Highly reactive subsurface materials and environments. The emplacement of energy system byproducts into geological repositories perturbs temperature and pressure, imposes chemical gradients, creates intense radiation fields, and can cause reactions that alter the minerals, pore fluids, and emplaced materials. Strong interactions between the geochemical environment and emplaced materials are expected.
New insight is needed on equilibria in compositionally complex systems, reaction kinetics in concentrated aqueous and other solutions, reaction kinetics under near-equilibrium undersaturated and supersaturated conditions, and transient reaction kinetics.

Thermodynamics of the solute-to-solid continuum. Reactions involving solutes, colloids, particles, and surfaces control the transport of chemical constituents in the subsurface environment. A rigorous structural, kinetic, and thermodynamic description of the complex chemical reality between the molecular and the macroscopic scale is a fundamental scientific challenge. Advanced techniques are needed for characterizing particles in the nanometer-to-micrometer size range, combined with a new description of chemical thermodynamics that does not rely on a sharp distinction between solutes and solids.

The Grand Challenges, Priority Research Directions, and Crosscutting Issues described in this report define a science-based approach to understanding the long-term behavior of subsurface geologic systems in which anthropogenic CO2 and nuclear materials could be stored. The research areas are rich with opportunities to build fundamental knowledge of the physics, chemistry, and materials science of geologic systems that will have impacts well beyond the specific applications. The proposed research is based on development of a new level of understanding—physical, chemical, biological, mathematical, and computational—of processes that happen at the microscopic scale of atoms, molecules and mineral surfaces, and how those processes translate to material behavior over large length scales and on ultra-long time scales. Addressing the basic science issues described would revolutionize our ability to understand, simulate, and monitor all of the subsurface settings in which transport is critical, including the movement of contaminants, the emplacement of minerals, or the management of aquifers. The results of the research will have a wide range of implications, from physics and chemistry to materials science, biology, and earth science.

Basic Research Needs for Clean and Efficient Combustion of 21st Century Transportation Fuels

This report is based on a BES Workshop on Clean and Efficient Combustion of 21st Century Transportation Fuels, October 29-November 1, 2006, to identify basic research needs and opportunities underlying utilization of evolving transportation fuels, with a focus on new or emerging science challenges that have the potential for significant long-term impact on fuel efficiency and emissions.

From the invention of the wheel, advances in transportation have increased the mobility of humankind, enhancing the quality of life and altering our very perception of time and distance. Early carts and wagons driven by human or animal power allowed the movement of people and goods in quantities previously thought impossible. With the rise of steam power, propeller-driven ships and railroad locomotives shrank the world as never before. Ocean crossings were no longer at the whim of the winds, and continental crossings went from grand adventures to routine, scheduled outings. The commercialization of the internal combustion engine at the turn of the twentieth century brought about a new, and very personal, revolution in transportation, particularly in the United States.
Automobiles created an unbelievable freedom of movement: A single person could travel to any point in the country in a matter of days, on a schedule of his or her own choosing. Suburbs were built on the promise of cheap, reliable, personal transportation. American industry grew to depend on internal combustion engines to produce and transport goods, and farmers increased yields and efficiency by employing farm machinery. Airplanes, powered by internal combustion engines, shrank the world to the point where a trip between almost any two points on the globe is now measured not in days or months, but in hours.

Transportation is the second largest consumer of energy in the United States, accounting for nearly 60% of our nation's use of petroleum, an amount equivalent to all of the oil imported into the U.S. The numbers are staggering—the transport of people and goods within the U.S. burns almost one million gallons of petroleum each minute of the day. Our Founding Fathers may not have foreseen freedom of movement as an inalienable right, but Americans now view it as such.

Knowledge is power, a maxim that is literally true for combustion. In our global, just-in-time economy, American competitiveness and innovation require an affordable, diverse, stable, and environmentally acceptable energy supply. Currently 85% of our nation's energy comes from hydrocarbon sources, including natural gas, petroleum, and coal; 97% of transportation energy derives from petroleum, essentially all from combustion in gasoline engines (65%), diesel engines (20%), and jet turbines (12%). The monolithic nature of transportation technologies offers the opportunity for improvements in efficiency of 25-50% through strategic technical investment in advanced fuel/engine concepts and devices. This investment is not a matter of choice but an economic, geopolitical, and environmental necessity. The reality is that the internal combustion engine will remain the primary driver of transport for the next 30-50 years, whether or not one believes that the peak in oil is past or imminent, or that hydrogen-fueled and electric vehicles will power transport in the future, or that geopolitical tensions will ease through international cooperation. Rational evaluation of U.S. energy security must include careful examination of how we achieve optimally efficient and clean combustion of precious transportation fuels in the 21st century.

The Basic Energy Sciences Workshop on Clean and Efficient Combustion of 21st Century Transportation Fuels

Our historic dependence on light, sweet crude oil for our transportation fuels will draw to a close over the coming decades as finite resources are exhausted. New fuel sources, with differing characteristics, are emerging to displace crude oil. As these new fuel streams enter the market, a series of new engine technologies are also under development, promising improved efficiency and cleaner combustion. To date, however, a coordinated strategic effort to match future fuels with evolving engines is lacking. To provide the scientific foundation to enable technology breakthroughs in transportation fuel utilization, the Office of Basic Energy Sciences in the U.S. Department of Energy (DOE) convened the Workshop on Basic Research Needs for Clean and Efficient Combustion of 21st Century Transportation Fuels from October 30 to November 1, 2006. This report is a summary of that Workshop.
It reflects the collective output of the Workshop participants, which included over 80 leading scientists and engineers representing academia, industry, and national laboratories in the United States and Europe. Researchers specializing in basic science and technological applications were well represented, producing a stimulating and engaging forum. Workshop planning and execution involved advance coordination with the DOE Office of Energy Efficiency and Renewable Energy, FreedomCAR and Vehicle Technologies program, which manages applied research and development of transportation technologies.

Priority research directions were identified by three panels, each made up of a subset of the Workshop attendees and interested observers. The first two panels were differentiated by their focus on engines or fuels and were similar in their strategy of working backward from technology drivers to scientific research needs. The first panel focused on Novel Combustion, as embodied in promising new engine technologies. The second panel focused on Fuel Utilization, inspired by the unique (and largely unknown) challenges of the emerging fuel streams entering the market. The third panel explored crosscutting science themes and identified general gaps in our scientific understanding of 21st-century fuel combustion. Subsequent to the Workshop, co-chairs and panel leads distilled the collective output to produce eight distinct, targeted research areas that advance one overarching grand challenge: to develop a validated, predictive, multi-scale combustion modeling capability to optimize the design and operation of evolving fuels in advanced engines for transportation applications.

Fuels and Engines

Transportation fuels for automobile, truck and aircraft engines are currently produced by refining petroleum-based sweet crude oil, from which gasoline, diesel fuel and jet fuel are each made with specific physical and chemical characteristics dictated by the type of engine in which they are to be burned. Standardized fuel properties and restricted engine operating domains couple to provide reliable performance. As new fuels derived from oil sands, oil shale, coal, and bio-feedstocks emerge as replacements for light, sweet crude oil, both uncertainties and strategic opportunities arise. Rather than pursue energy-intensive refining of these qualitatively different emerging fuels to match current fuel formulations, we must strive to achieve a "dual revolution" by interdependently advancing both fuel and engine technologies.

Spark-ignited gasoline engines equipped with catalytic after-treatment operate cleanly but well below optimal efficiency due to low compression ratios and throttle-plate losses used to control air intake. Diesel engines operate more efficiently at higher compression ratios but sample broad realms of fuel/air ratio, thereby producing soot and NOx for which burnout and/or removal can prove problematic. A number of new engine technologies are attempting to overcome these efficiency and emissions compromises. Direct injection gasoline engines operate without throttle plates, increasing efficiency, while retaining the use of a catalytic converter. Ultra-lean, high-pressure, low-temperature diesel combustion seeks to avoid the conditions that form pollutants, while maintaining very high efficiency. A new form of combustion, homogeneous charge compression ignition (HCCI), seeks to combine the best of diesel and gasoline engines.
HCCI employs a premixed fuel-air charge that is ignited by compression, with the ignition timing controlled by in-cylinder fuel chemistry (a minimal numerical sketch of this ignition-timing idea appears at the end of this subsection). Each of these advanced combustion strategies must permit and even exploit fuel flexibility as the 21st-century fuel stream matures. The opportunity presented by new fuel sources and advanced engine concepts offers such an overwhelming design and operation parameter space that only those technologies that build upon a predictive science capability will likely mature to a product within a useful timeframe.

Research Directions

The Workshop identified a single, overarching grand challenge: the development of a validated, predictive, multi-scale combustion modeling capability to optimize the design and operation of evolving fuels in advanced engines for transportation applications. A broad array of discovery research and scientific inquiry that integrates experiment, theory, modeling and simulation will be required. This predictive capability, if attained, will change fundamentally the process for fuels research and engine development by establishing a scientific understanding of sufficient depth and flexibility to facilitate realistic simulation of fuel combustion in existing and proposed engines. Similar understanding in aeronautics has produced the beautiful and efficient complex curves of modern aircraft wings. These designs could never have been realized through cut-and-try engineering, but rather rely on the prediction and optimization of complex air flows.

An analogous experimentally validated, predictive capability for combustion is a daunting challenge for numerous reasons:

(1) spatial scales of importance range from the dimensions of the atom up to that of an engine piston;

(2) the combustion chemistry of 21st-century fuels is astonishingly complex, with hundreds of different fuel molecules and many thousands of possible reactions contributing to the oxidative release of energy stored in chemical bonds—chemical details also dictate emissions profiles, engine knock conditions and, for HCCI, ignition timing;

(3) evolving engine designs will operate under dilute conditions at very high pressures and compression ratios—we possess neither sufficient concepts nor experimental tools to address these new operating conditions;

(4) turbulence, transport, and radiative phenomena have a profound impact on local chemistry in most combustion media but are poorly understood and extremely challenging to characterize;

(5) even assuming optimistic growth in computing power for existing and envisioned architectures, combustion phenomena are and will remain too complex to simulate in their complete detail, and methods that condense information and accurately propagate uncertainties across length and time scales will be required to optimize fuel/engine design and operation.

Eight priority research directions, each of which focuses on crucial elements of the overarching grand challenge, are cited as most critical to the path forward by the Workshop participants. In addition to the unifying grand challenge and specific priority research directions, the Workshop produced a keen sense of urgency and opportunity for the development of revolutionary combustion technology for transportation based upon fundamental combustion science. Internal combustion engines are often viewed as mature technology, developed in an Edisonian fashion over a hundred years.
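The ignition-timing remark above invites a small illustration. The sketch below is ours, not the Workshop's: it uses the classical Livengood-Wu integral together with a generic Arrhenius-type ignition-delay correlation and a crude polytropic compression history, and every numerical constant in it is a placeholder chosen only so the example runs.

```python
# Minimal sketch (not from the report): ignition timing for a
# compression-ignition charge estimated with the classical
# Livengood-Wu integral -- ignition is assumed when the accumulated
# chemical progress, the integral of dt / tau(T, p), reaches 1.
# All constants below are placeholders.

import math

A_CORR = 6.0e-8      # pre-exponential factor of the ignition-delay correlation (placeholder)
N_EXP = 1.0          # pressure exponent (placeholder)
E_OVER_R = 15000.0   # activation temperature E/R in kelvin (placeholder)

T0, P0 = 450.0, 1.5          # charge temperature (K) and pressure (bar) at start of compression
GAMMA, CR = 1.35, 14.0       # polytropic exponent and compression ratio (assumed)
DT, DURATION = 1.0e-5, 0.02  # time step and compression duration in seconds (assumed)

def ignition_delay(temp_k, pres_bar):
    """Global Arrhenius-type correlation tau = A * p**-n * exp(E/(R*T)), in seconds."""
    return A_CORR * pres_bar ** (-N_EXP) * math.exp(E_OVER_R / temp_k)

progress, t = 0.0, 0.0
temp_k, pres_bar = T0, P0
while t < DURATION and progress < 1.0:
    # Crude stand-in for piston kinematics: ramp the compression ratio linearly in time.
    r = 1.0 + (CR - 1.0) * (t / DURATION)
    temp_k = T0 * r ** (GAMMA - 1.0)
    pres_bar = P0 * r ** GAMMA
    progress += DT / ignition_delay(temp_k, pres_bar)
    t += DT

if progress >= 1.0:
    print(f"Predicted ignition at t = {1000.0 * t:.2f} ms (T = {temp_k:.0f} K, p = {pres_bar:.1f} bar)")
else:
    print("No ignition within the compression event for these placeholder constants.")
```

Even this toy model shows why the pressure-temperature history and the fuel chemistry embedded in the delay correlation jointly set the ignition point, and hence why predictive fuel chemistry is central to the grand challenge framed above.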
The participants at the Workshop were unanimous in their view that only through the achievable goal of truly predictive combustion science will the engines of the 21st century realize unparalleled efficiency and cleanliness in the challenging environment of changing fuel streams.

Basic Research Needs for Advanced Nuclear Energy Systems

This report is based on a BES Workshop on Advanced Nuclear Energy Systems, July 31-August 3, 2006, to identify new, emerging, and scientifically challenging areas in materials and chemical sciences that have the potential for significant impact on advanced nuclear energy systems.

The global utilization of nuclear energy has come a long way from its humble beginnings in the first self-sustaining nuclear chain reaction at the University of Chicago in 1942. Today, there are over 440 nuclear reactors in 31 countries producing approximately 16% of the electrical energy used worldwide. In the United States, 104 nuclear reactors currently provide 19% of electrical energy used nationally. The International Atomic Energy Agency projects significant growth in the utilization of nuclear power over the next several decades due to increasing demand for energy and environmental concerns related to emissions from fossil plants. There are 28 new nuclear plants currently under construction, including 10 in China, 8 in India, and 4 in Russia. In the United States, there have been notifications to the Nuclear Regulatory Commission of intentions to apply for combined construction and operating licenses for 27 new units over the next decade.

The projected growth in nuclear power has focused increasing attention on issues related to the permanent disposal of nuclear waste, the proliferation of nuclear weapons technologies and materials, and the sustainability of a once-through nuclear fuel cycle. In addition, the effective utilization of nuclear power will require continued improvements in nuclear technology, particularly related to safety and efficiency. In all of these areas, the performance of materials and chemical processes under extreme conditions is a limiting factor. The related basic research challenges represent some of the most demanding tests of our fundamental understanding of materials science and chemistry, and they provide significant opportunities for advancing basic science with broad impacts for nuclear reactor materials, fuels, waste forms, and separations techniques. Of particular importance is the role that new nanoscale characterization and computational tools can play in addressing these challenges. These tools, which include DOE synchrotron X-ray sources, neutron sources, nanoscale science research centers, and supercomputers, offer the opportunity to transform and accelerate the fundamental materials and chemical sciences that underpin technology development for advanced nuclear energy systems.

The fundamental challenge is to understand and control chemical and physical phenomena in multi-component systems from femtoseconds to millennia, at temperatures to 1000°C, and for radiation doses to hundreds of displacements per atom (dpa). This is a scientific challenge of enormous proportions, with broad implications in the materials science and chemistry of complex systems.
New understanding is required for microstructural evolution and phase stability under relevant chemical and physical conditions, chemistry and structural evolution at interfaces, chemical behavior of actinide and fission-product solutions, and nuclear and thermo-mechanical phenomena in fuels and waste forms. First-principles approaches are needed to describe f-electron systems, design molecules for separations, and explain materials failure mechanisms. Nanoscale synthesis and characterization methods are needed to understand and design materials and interfaces with radiation, temperature, and corrosion resistance. Dynamical measurements are required to understand fundamental physical and chemical phenomena. New multiscale approaches are needed to integrate this knowledge into accurate models of relevant phenomena and complex systems across multiple length and time scales.

The Department of Energy (DOE) Workshop on Basic Research Needs for Advanced Nuclear Energy Systems was convened in July 2006 to identify new, emerging, and scientifically challenging areas in materials and chemical sciences that have the potential for significant impact on advanced nuclear energy systems. Sponsored by the DOE Office of Basic Energy Sciences (BES), the workshop provided recommendations for priority research directions and crosscutting research themes that underpin the development of advanced materials, fuels, waste forms, and separations technologies for the effective utilization of nuclear power. A total of 235 invited experts from 31 universities, 11 national laboratories, 6 industries, 3 government agencies, and 11 foreign countries attended the workshop.

The workshop was the sixth in a series of BES workshops focused on identifying basic research needs to overcome short-term showstoppers and to formulate long-term grand challenges related to energy technologies. These workshops have followed a common format that includes the development of a technology perspectives resource document prior to the workshop, a plenary session including invited presentations from technology and research experts, and topical panels to determine basic research needs and recommended research directions. Reports from the workshops are available on the BES website.

The workshop began with a plenary session of invited presentations from national and international experts on science and technology related to nuclear energy. The presentations included nuclear technology, industry, and international perspectives, and an overview of the Global Nuclear Energy Partnership. Frontier research presentations were given on relevant topics in materials science, chemistry, and computer simulation. Following the plenary session, the workshop divided into six panels: Materials under Extreme Conditions, Chemistry under Extreme Conditions, Separations Science, Advanced Actinide Fuels, Advanced Waste Forms, and Predictive Modeling and Simulation. In addition, there was a crosscut panel that looked for areas of synergy across the six topical panels. The panels were composed of basic research leaders in the relevant fields from universities, national laboratories, and other institutions. In advance of the workshop, panelists were provided with a technology perspectives resource document that described the technology and applied R&D needs for advanced nuclear energy systems.
In addition, technology experts were assigned to each of the panels to ensure that the basic research discussions were informed by a current understanding of technology issues. The panels were charged with defining the state of the art in their topical research area, describing the related basic research challenges that must be overcome to provide breakthrough technology opportunities, and recommending basic research directions to address these challenges. These basic research challenges and recommended research directions were consolidated into Scientific Grand Challenges, Priority Research Directions, and Crosscutting Research Themes. These results are summarized below and described in detail in the full report.

Scientific Grand Challenges

Scientific Grand Challenges represent barriers to fundamental understanding that, if overcome, could transform the related scientific field. Historical examples of scientific grand challenges with far-reaching scientific and technological impacts include the structure of DNA, the understanding of quantum behavior, and the explanation of nuclear fission. Theoretical breakthroughs and new experimental capabilities are often key to addressing these challenges. In advanced nuclear energy systems, scientific grand challenges focus on the fundamental materials and chemical sciences that underpin the performance of materials and processes under extreme conditions of radiation, temperature, and corrosive environments. Addressing these challenges offers the potential of revolutionary new approaches to developing improved materials and processes for nuclear applications. The workshop identified the following three Scientific Grand Challenges.

Resolving the f-electron challenge to master the chemistry and physics of actinides and actinide-bearing materials. The introduction of new actinide-based fuels for advanced nuclear energy systems requires new chemical separations strategies and predictive understanding of fuel and waste-form fabrication and performance. However, current computational electronic-structure approaches are inadequate to describe the electronic behavior of actinide materials, and the multiplicity of chemical forms and oxidation states for these elements complicates their behavior in fuels, solutions, and waste forms. Advances in density functional theory as well as in the treatment of relativistic effects are needed in order to understand and predict the behavior of these strongly correlated electron systems.

Developing a first-principles, multiscale description of material properties in complex materials under extreme conditions. The long-term stability and mechanical integrity of structural materials, fuels, claddings, and waste forms are governed by the kinetics of microstructure and interface evolution under the combined influence of radiation, high temperature, and stress. Controlling the mechanical and chemical properties of materials under these extreme conditions will require the ability to relate phase stability and mechanical behavior to a first-principles understanding of defect production, diffusion, trapping, and interaction. New synthesis techniques based on the nanoscale design of materials offer opportunities for mitigating the effects of radiation damage through the development and control of nanostructured defect sinks. However, a unified, predictive multiscale theory that couples all relevant time and length scales in microstructure evolution and phase stability must be developed.
In addition, fundamental advances are needed in nanoscale characterization, diffusion, thermodynamics, and in situ studies of fracture and deformation.

Understanding and designing new molecular systems to gain unprecedented control of chemical selectivity during processing. Advanced separations technologies for nuclear fuel reprocessing will require unprecedented control of chemical selectivity in complex environments. This control requires the ability to design, synthesize, characterize, and simulate molecular systems that selectively trap and release target molecules and ions with high efficiency under extreme conditions and to understand how mesoscale phenomena such as nanophase behavior and energetics in macromolecular systems impact partitioning. New capabilities in molecular spectroscopy, imaging, and computational modeling offer opportunities for breakthroughs in this area.

Priority Research Directions

Priority Research Directions are areas of basic research that have the highest potential for impact in a specific research or technology area. They represent opportunities that align with scientific grand challenges, emerging research opportunities, and related technology priorities. The workshop identified nine Priority Research Directions for basic research related to advanced nuclear energy systems.

Nanoscale design of materials and interfaces that radically extend performance limits in extreme radiation environments. The fundamental understanding of the interaction of defects with nanostructures offers the potential for the design of materials and interfaces that mitigate radiation damage by controlling defect behavior. New research is needed in the design, synthesis, nanoscale characterization, and time-resolved study of nanostructured materials and interfaces that offer the potential to control defect production, trapping, and interaction under extreme conditions.

Physics and chemistry of actinide-bearing materials and the f-electron challenge. A robust theory of the electronic structure of actinides will provide an improved understanding of their physical and chemical properties and behavior, leading to opportunities for advances in fuels and waste forms. New advances in exchange and correlation functionals in density functional theory as well as in the treatment of relativistic effects and in software implementation on advanced computer architectures are needed to overcome the challenges of adequately treating the behavior of 4f and 5f electrons, namely, strong correlation, spin-orbit coupling, and multiplet complexity, as well as additional relativistic effects. Advances are needed in the application of these new electronic structure methods for f-element-containing molecules and solids to calculate the properties of defects in multi-component systems, and in the fundamental understanding of related chemical and physical properties at high temperature.

Microstructure and property stability under extreme conditions. The predictive understanding of microstructural evolution and property changes under extreme conditions is essential for the rational design of materials for structural, fuels, and waste-form applications. Advances are needed to develop a first-principles understanding of the relationship of defect properties and microstructural evolution to mechanical behavior and phase stability. This will require a closely coupled approach of in situ studies of nanoscale and mechanical behavior with multiscale theory.
Mastering actinide and fission product chemistry under all chemical conditions. A more accurate understanding of the electronic structure of the complexes of actinide and fission products will expand our ability to predict their behavior quantitatively under conditions relevant to all stages in fuel reprocessing (separations, dissolution, and stabilization of waste forms) and in new media that are proposed for advanced processing systems. This knowledge must be supplemented by accurate prediction and manipulation of solvent properties and chemical reactivities in non-traditional separation systems such as modern "tunable" solvent systems. This will require quantitative, fundamental understanding of the mechanisms of solvent tunability, the factors limiting control over solvent properties, the forces driving chemical speciation, and modes of controlling reactions. Basic research needs include f-element electronic structure and bonding, speciation and reactivity, thermodynamics, and solution behavior.

Exploiting organization to achieve selectivity at multiple length scales. Harnessing the complexity of organization that occurs at the mesoscale in solution or at interfaces will lead to new separation systems that provide for greatly increased selectivity in the recovery of target species and reduced formation of secondary waste streams through ligand degradation. Research directions include design of ligands and other selectivity agents, expanding the range of selection/release mechanisms, fundamental understanding of phase phenomena and self-assembly in separations, and separations systems employing aqueous solvents.

Adaptive material-environment interfaces for extreme chemical conditions. Chemistry at interfaces will play a crucial role in the fabrication, performance, and stability of materials in almost every aspect of Advanced Nuclear Energy Systems, from fuel, claddings, and pressure vessels in reactors to fuel reprocessing and separations, and ultimately to long-term waste storage. Revolutionary advances in the understanding of interfacial chemistry of materials through developments in new modeling and in situ experimental techniques offer the ability to design material interfaces capable of providing dynamic, universal stability over a wide range of conditions and with much greater "self-healing" capabilities. Achieving the necessary scientific advances will require moving beyond interfacial chemistry in ultra-high-vacuum environments to the development of in situ techniques for monitoring the chemistry at fluid/solid and solid/solid interfaces under conditions of high pressure and temperature and harsh chemical environments.

Fundamental effects of radiation and radiolysis in chemical processes. The reprocessing of nuclear fuel and the storage of nuclear waste present environments that include substantial radiation fields. A predictive understanding of the chemical processes resulting from intense radiation, high temperatures, and extremes of acidity and redox potential on chemical speciation is required to enhance efficient, targeted separations processes and effective storage of nuclear waste. In particular, the effect of radiation on the chemistries of ligands, ionic liquids, polymers, and molten salts is poorly understood. There is a need for an improved understanding of the fundamental processes that affect the formation of radicals and ultimately control the accumulation of radiation-induced damage to separation systems and waste forms.
Fundamental thermodynamics and kinetic processes in multi-component systems for fuel fabrication and performance. The fabrication and performance of advanced nuclear fuels, particularly those containing the minor actinides, is a significant challenge that requires a fundamental understanding of the thermodynamics, transport, and chemical behavior of complex materials during processing and irradiation. Global thermochemical models of complex phases that are informed by ab initio calculations of materials properties and high-throughput predictive models of complex transport and phase segregation will be required for full fuel fabrication and performance calculations. These models, when coupled with appropriate experimental efforts, will lead to significantly improved fuel performance by creating novel tailored fuel forms.

Predictive multiscale modeling of materials and chemical phenomena in multi-component systems under extreme conditions. The advent of large-scale (petaflop) simulations will significantly enhance the prospect of probing important molecular-level mechanisms underlying the macroscopic phenomena of solution and interfacial chemistry in actinide-bearing systems and of materials and fuels fabrication, performance, and failure under extreme conditions. There is an urgent need to develop multiscale algorithms capable of efficiently treating systems whose time evolution is controlled by activated processes and rare events. Although satisfactory solutions are lacking, there are promising directions, including accelerated molecular dynamics (MD) and adaptive kinetic Monte Carlo methods, which should be pursued. Many fundamental problems in advanced nuclear energy systems will benefit from multi-physics, multiscale simulation methods that can span time scales from picoseconds to seconds and longer, including fission product transport in nuclear fuels, the evolution of microstructure of irradiated materials, the migration of radionuclides in nuclear waste forms, and the behavior of complex separations media.

Crosscutting Research Themes

Crosscutting Research Themes are research directions that transcend a specific research area or discipline, providing a foundation for progress in fundamental science on a broad front. These themes are typically interdisciplinary, leveraging results from multiple fields and approaches to provide new insights and underpinning understanding. Many of the fundamental science issues related to materials, fuels, waste forms, and separations technologies have crosscutting themes and synergies. The workshop identified four crosscutting basic research themes related to materials and chemical processes for advanced nuclear energy systems:

Tailored nanostructures for radiation-resistant functional and structural materials. There is evidence that the design and control of specialized nanostructures and defect complexes can create sinks for radiation-induced defects and impurities, enabling the development of highly radiation-resistant materials. New capabilities in the synthesis and characterization of materials with controlled nanoscale structure offer opportunities for the development of tailored nanostructures for structural applications, fuels, and waste forms. This approach crosscuts advanced materials synthesis and processing, radiation effects, nanoscale characterization, and simulation.
Solution and solid-state chemistry of 4f and 5f electron systems. Advances in the basic science of 4f and 5f electron systems in materials and solutions offer the opportunity to extend condensed matter physics and reaction chemistry on a broad front, including applications that impact the development of nuclear fuels, waste forms, and separations technologies. This is a key enabling science for the fundamental understanding of actinide-bearing materials and solutions.

Physics and chemistry at interfaces and in confined environments. Controlling the structure and composition of interfaces is essential to ensuring the long-term stability of reactor materials, fuels, and waste forms. The fundamental understanding of interface science and related transport and chemical phenomena in extreme environments crosscuts many science and technology areas. New computational and nanoscale structure and dynamics measurement tools offer significant opportunities for advancing interface science with broad impacts on the predictive design of advanced materials and processes for nuclear energy applications.

Physical and chemical complexity in multi-component systems. Advanced fuels, waste forms, and separations technologies are highly interactive, multi-component systems. A fundamental understanding of these complex systems and related structural and phase stability and chemical reactivity under extreme conditions is needed to develop and predict the performance of materials and separations processes in advanced nuclear energy systems. This is a challenging problem in complexity with broad implications across science and technology.

Taken together, these Scientific Grand Challenges, Priority Research Directions, and Crosscutting Research Themes define the landscape for a science-based approach to the development of materials and chemical processes for advanced nuclear energy systems. Building upon new experimental tools and computational capabilities, they presage a renaissance in fundamental science that underpins the development of materials, fuels, waste forms, and separations technologies for nuclear energy applications. Addressing these basic research needs offers the potential to revolutionize the science and technology of advanced nuclear energy systems by enabling new materials, processes, and predictive modeling, with resulting improvements in performance and reduction in development times. The fundamental research outlined in this report offers an outstanding opportunity to advance the materials, chemical, and computational science of complex systems at multiple length and time scales, furthering both fundamental understanding and the technology of advanced nuclear energy systems.

Basic Research Needs for Solid-State Lighting

This report is based on a BES Workshop on Solid-State Lighting (SSL), May 22-24, 2006, to examine the gap separating current state-of-the-art SSL technology from an energy efficient, high-quality, and economical SSL technology suitable for general illumination; and to identify the most significant fundamental scientific challenges and research directions that would enable that gap to be bridged.

Since fire was first harnessed, artificial lighting has gradually broadened the horizons of human civilization.
Each new advance in lighting technology, from fat-burning lamps to candles to gas lamps to the incandescent lamp, has extended our daily work and leisure further past the boundaries of sunlit times and spaces. The incandescent lamp did this so dramatically after its invention in the 1870s that the light bulb became the very symbol of a "good idea." Today, modern civilization as we know it could not function without artificial lighting; artificial lighting is so seamlessly integrated into our daily lives that we tend not to notice it until the lights go out. Our dependence is even enshrined in daily language: an interruption of the electricity supply is commonly called a "blackout."

This ubiquitous resource, however, uses an enormous amount of energy. In 2001, 22% of the nation's electricity, equivalent to 8% of the nation's total energy, was used for artificial light. The cost of this energy to the consumer was roughly $50 billion per year, or approximately $200 per year for every person living in the U.S. The cost of this energy to the environment was approximately 130 million tons of carbon emitted into our atmosphere, or about 7% of all the carbon emitted by the U.S. Our increasingly precious energy resources and the growing threat of climate change demand that we reduce the energy and environmental cost of artificial lighting, an essential and pervasive staple of modern life.

There is ample room for reducing this energy and environmental cost. The artificial lighting we take for granted is extremely inefficient, primarily because all these technologies generate light as a by-product of indirect processes producing heat or plasmas. Incandescent lamps (a heated wire in a vacuum bulb) convert only about 5% of the energy they consume into visible light, with the rest emerging as heat. Fluorescent lamps (a phosphor-coated gas discharge tube, invented in the 1930s) achieve a conversion efficiency of only about 20%. These low efficiencies contrast starkly with the relatively high efficiencies of other common building technologies: heating is typically 70% efficient, and electric motors are typically 85 to 95% efficient. About 1.5 billion light bulbs are sold each year in the U.S. today, each one an engine for converting the earth's precious energy resources mostly into waste heat, pollution, and greenhouse gases.

There is no physical reason why a 21st century lighting technology should not be vastly more efficient, thereby reducing equally vastly our energy consumption. If a 50%-efficient technology were to exist and be extensively adopted, it would reduce energy consumption in the U.S. by about 620 billion kilowatt-hours per year by the year 2025 and eliminate the need for about 70 nuclear plants, each generating a billion watts of power.

Solid-state lighting (SSL) is the direct conversion of electricity to visible white light using semiconductor materials and has the potential to be just such an energy-efficient lighting technology. By avoiding the indirect processes (producing heat or plasmas) characteristic of traditional incandescent and fluorescent lighting, it can work at a far higher efficiency, "taking the heat out of lighting," it might be said. Recently, for example, semiconductor devices emitting infrared light have demonstrated an efficiency of 76%. There is no known fundamental physical barrier to achieving similar (or even higher) efficiencies for visible white light, perhaps approaching 100% efficiency.
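As a rough cross-check of the savings figure just quoted, the following back-of-envelope arithmetic is ours, not the report's, and assumes roughly 8,766 hours of continuous output per year:

```python
# Back-of-envelope check (our own arithmetic): do "about 70 plants of a
# billion watts each" and "about 620 billion kilowatt-hours per year"
# describe roughly the same amount of energy?

plants = 70
watts_per_plant = 1.0e9        # one billion watts, as stated above
hours_per_year = 8766          # average hours in a year, assuming continuous output

energy_kwh_per_year = plants * watts_per_plant * hours_per_year / 1000.0
print(f"{energy_kwh_per_year / 1e9:.0f} billion kWh per year")  # about 614, close to the quoted 620
```

The two figures are mutually consistent, which is part of what makes the potential payoff of a genuinely efficient lighting technology so striking.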
Despite this tantalizing potential, however, SSL suitable for illumination today has an efficiency that falls short of a perfect 100% by a factor of fifteen. Partly because of this inefficiency, the purchase cost of SSL is too high for the average consumer by a factor of ten to a hundred, and SSL suitable for illumination today has a cost of ownership twenty times higher than that expected for a 100% efficient light source.

The reason is that SSL is a dauntingly demanding technology. To generate light near the theoretical efficiency limit, essentially every electron injected into the material must result in a photon emitted from the device. Furthermore, the voltage required to inject and transport the electrons to the light-emitting region of the device must be no more than that corresponding to the energy of the resulting photon. It is insufficient to generate "simple" white light; the distribution of photon wavelengths must match the spectrum perceived by the human eye to render colors accurately, with no emitted photons outside the visible range. Finally, all of these constraints must be achieved in a single device with an operating lifetime of at least a thousand hours (and preferably ten to fifty times longer), at an ownership cost-of-light comparable to, or lower than, that of existing lighting technology.

Where promising demonstrations of higher efficiency exist, they are typically achieved in small devices (to enhance light extraction), at low brightness (to minimize losses), or with low color-rendering quality (overemphasizing yellow and green light, to which the eye is most sensitive). These restrictions lead to a high cost of ownership for high-quality light that would prevent the widespread acceptance of SSL. For example, Cree Research recently (June 2006) demonstrated a 131 lm/W white light device, which translates roughly to 35% efficiency but with relatively low lumen output. With all devices demonstrated to date, a very large gap is apparent between what is achievable today and the 100% (or roughly 375 lm/W) efficiency that should be possible with SSL. Today, we cannot produce white SSL that is simultaneously high in efficiency, low in cost, and high in color-rendering quality. In fact, we cannot get within a factor of ten in either efficiency or cost. Doing so in the foreseeable future will require breakthroughs in technology, stimulated by a fundamental understanding of the science of light-emitting materials.

To accelerate the laying of the scientific foundation that would enable such technology breakthroughs, the Office of Basic Energy Sciences in the U.S. Department of Energy (DOE) convened the Workshop on Basic Research Needs for Solid-State Lighting from May 22 to 24, 2006. This report is a summary of that workshop. It reflects the collective output of the workshop attendees, which included 80 scientists representing academia, national laboratories, and industry in the United States, Europe, and Asia. Workshop planning and execution involved advance coordination with the DOE Office of Energy Efficiency and Renewable Energy, Building Technologies program, which manages applied research and development of SSL technologies and the Next Generation Lighting Initiative.

The Workshop identified two Grand Challenges, seven Priority Research Directions, and five Cross-Cutting Research Directions. These represent the most specific outputs of the workshop.
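The efficiency and efficacy figures above can be tied together with a small conversion sketch of our own, taking the report's roughly 375 lm/W as the efficacy of an ideal 100%-efficient white-light source:

```python
# Rough conversion (ours) between luminous efficacy and wall-plug efficiency,
# taking the ~375 lm/W quoted above as the efficacy of an ideal 100%-efficient
# white source; the exact limit depends on the assumed spectrum.

IDEAL_LM_PER_W = 375.0

def efficiency_from_efficacy(lm_per_w):
    """Approximate wall-plug efficiency relative to the ideal white source."""
    return lm_per_w / IDEAL_LM_PER_W

print(f"131 lm/W corresponds to about {100 * efficiency_from_efficacy(131):.0f}% efficiency")
print(f"A source fifteen times below the ideal delivers about {IDEAL_LM_PER_W / 15:.0f} lm/W,"
      f" or roughly {100.0 / 15:.1f}% efficiency")
```

On this scale the 131 lm/W demonstration sits near 35% efficiency, while illumination-grade SSL at one fifteenth of the ideal corresponds to roughly 25 lm/W, which is the factor-of-fifteen gap described above.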
The Grand Challenges are broad areas of discovery research and scientific inquiry that will lay the groundwork for the future of SSL. The first Grand Challenge aims to change the very paradigm by which SSL structures are designed—moving from serendipitous discovery towards rational design. The second Grand Challenge aims to understand and control the essential roadblock to SSL—the microscopic pathways through which losses occur as electrons produce light.

Rational Design of SSL Structures. Many materials must be combined in order to form a light-emitting device, each individual material working in concert with the others to control the flow of electrons so that all their energy produces light. Today, novel light-emitting and charge-transporting materials tend to be discovered rather than designed "with the end in mind." To approach 100% efficiency, fundamental building blocks should be designed so they work together seamlessly, but such a design process will require much greater insight than we currently possess. Hence, our aim is to understand light-emitting organic and inorganic (and hybrid) materials and nanostructures at a fundamental level to enable the rational design of low-cost, high-color-quality, near-100% efficient SSL structures from the ground up. The anticipated results are tools for rational, informed exploration of technology possibilities, and insights that open the door to as-yet-unimagined ways of creating and using artificial light.

Controlling Losses in the Light-Emission Process. The key to high efficiency SSL is using electrons to produce light but not heat. That this does not occur in today's SSL structures stems from the abundance of decay pathways that compete with light emission for electronic excitations in semiconductors. Hence, our aim is to discover and control the materials and nanostructure properties that mediate the competing conversion of electrons to light and heat, enabling the conversion of every injected electron into useful photons. The anticipated results are ultra-high-efficiency light-emitting materials and nanostructures, and a deep scientific understanding of how light interacts with matter, with broad impact on science and technology areas beyond SSL.

The Priority and Cross-Cutting Research Directions are narrower areas of discovery research and use-inspired basic research targeted at a particular materials set or at a particular area of scientific inquiry believed to be central to one or more roadblocks in the path towards future SSL technology. These Research Directions also support one or both Grand Challenges. The Research Directions were identified by three panels, each of which was composed of a subset of the workshop attendees and interested observers. The first two panels, which identified the Priority Research Directions, were differentiated by choice of materials set. The first, LED Science, focused on inorganic light-emitting materials such as the Group III nitrides, oxides, and novel oxychalcogenides. The second, OLED Science, considered organic materials that are carbon-based molecular, polymeric, or dendrimeric compounds. The third panel, which identified the Cross-Cutting Research Directions, explored cross-cutting and novel materials science and optical physics themes such as light extraction from solids, hybrid organic-inorganic and unconventional materials, and light-matter interactions.

LED Science. Single-color, inorganic, light-emitting diodes (LEDs) are already widely used and are bright, robust, and long-lived.
The challenge is to achieve white-light emission with high efficiency and high color-rendering quality at acceptable cost while maintaining these advantages. The bulk of current research focuses on the Group III-nitride materials. Our understanding of how these materials behave and can be controlled has advanced significantly in the past decade, but significant scientific mysteries remain. These include (1) determining whether there are as-yet undiscovered or undeveloped materials that may offer significant advantages over current materials; (2) understanding and optimizing ways of generating white light from other wavelengths; (3) determining the role of piezoelectric and polar effects throughout the device, but particularly at interfaces; and (4) understanding the basis for some of the peculiarities of the nitrides, the dominant inorganic SSL materials today, such as their apparent tolerance of high defect densities and the difficulty of realizing efficient light emission at all visible wavelengths.

OLED Science. Organic light-emitting devices (OLEDs) based on polymeric or molecular thin films have been under development for about two decades, mostly for applications in flat-panel displays, which are just beginning to achieve commercial success. They have a number of attractive properties for SSL, including ease (and potential affordability) of processing and the ability to tune device properties via chemical modification of the molecular structure of the thin film components. This potential is coupled with challenges that have so far prevented the simultaneous achievement of high brightness at high efficiency and long device lifetime. Organic thin films are often structurally complex, and thin films that were long considered "amorphous" can exhibit order on the molecular (nano) scale. Research areas of particularly high priority include (1) quantifying local order and understanding its role in the charge transport and light-emitting properties of organic thin films, (2) developing the knowledge and expertise to synthesize and characterize organic compounds at a level of purity approaching that of inorganic semiconductors, and understanding the role of various low-level impurities on device properties in order to control materials degradation under SSL-relevant conditions, and (3) understanding the complex interplay of effects among the many individual materials and layers in an OLED to enable an integrated approach to OLED design.

Cross-Cutting and Novel Materials Science and Optical Physics. Some areas of scientific research are relevant to all materials systems. While research on inorganic and organic materials has thus far proceeded independently, the optimal material system and device architecture for SSL may be as yet undiscovered and, furthermore, may require the integration of both classes of materials in a single system.
Research directions that could enable new materials and architectures include (1) the design, synthesis, and integration of novel, nanoscale, heterogeneous building blocks, such as functionalized carbon nanotubes or quantum dots, with properties optimized for SSL, (2) the development of innovative architectures to control the flow of energy in a light emitting material to maximize the efficiency of light extraction, (3) the exploitation of strong coupling between light and matter to increase the quality and efficiency of emitted light, (4) the development of multiscale modeling techniques extending from the atomic or molecular scale to the device and system scale, and (5) the development and use of new experimental, theoretical, and computational tools to probe and understand the fundamental properties of SSL materials at the smallest scales of length and time.

The workshop participants enthusiastically concluded that the time is ripe for new fundamental science to beget a revolution in lighting technology. SSL sources based on organic and inorganic materials have reached a level of efficiency where it is possible to envision their use for general illumination. The research areas articulated in this report are targeted to enable disruptive advances in SSL performance and realization of this dream. Broad penetration of SSL technology into the mass lighting market, accompanied by vast savings in energy usage, requires nothing less. These new "good ideas" will be represented not by light bulbs, but by an entirely new lighting technology for the 21st century and a bright, energy-efficient future indeed.

Basic Research Needs for Superconductivity

This report is based on a BES Workshop on Superconductivity, May 8-10, 2006, to examine the prospects for superconducting grid technology and its potential for significantly increasing grid capacity, reliability, and efficiency to meet the growing demand for electricity over the next century.

As an energy carrier, electricity has no rival with regard to its environmental cleanliness, flexibility in interfacing with multiple production sources and end uses, and efficiency of delivery. In fact, the electric power grid was named "the greatest engineering achievement of the 20th century" by the National Academy of Engineering. This grid, a technological marvel ingeniously knitted together from local networks growing out from cities and rural centers, may be the biggest and most complex artificial system ever built. However, the growing demand for electricity will soon challenge the grid beyond its capability, compromising its reliability through voltage fluctuations that crash digital electronics, brownouts that disable industrial processes and harm electrical equipment, and power failures like the North American blackout in 2003 and subsequent blackouts in London, Scandinavia, and Italy in the same year. The North American blackout affected 50 million people and caused approximately $6 billion in economic damage over the four days of its duration.

Superconductivity offers powerful new opportunities for restoring the reliability of the power grid and increasing its capacity and efficiency. Superconductors are capable of carrying current without loss, making the parts of the grid they replace dramatically more efficient.
Superconducting wires carry up to five times the current carried by copper wires that have the same cross section, thereby providing ample capacity for future expansion while requiring no increase in the number of overhead access lines or underground conduits. Their use is especially attractive in urban areas, where replacing copper with superconductors in power-saturated underground conduits avoids expensive new underground construction. Superconducting transformers cut the volume, weight, and losses of conventional transformers by a factor of two and do not require the contaminating and flammable transformer oils that violate urban safety codes. Unlike traditional grid technology, superconducting fault current limiters are smart: they increase their resistance abruptly in response to overcurrents from faults in the system, thus limiting the overcurrents and protecting the grid from damage. They react fast, both in triggering and in automatically resetting after the overload is cleared, providing a new, self-healing feature that enhances grid reliability. Superconducting reactive power regulators further enhance reliability by instantaneously adjusting reactive power for maximum efficiency and stability in a compact and economic package that is easily sited in urban grids. Not only do superconducting motors and generators cut losses, weight, and volume by a factor of two, but they are also much more tolerant of voltage sag, frequency instabilities, and reactive power fluctuations than their conventional counterparts.

The challenge facing the electricity grid to provide abundant, reliable power will soon grow to crisis proportions. Continuing urbanization remains the dominant historic demographic trend in the United States and in the world. By 2030, nearly 90% of the U.S. population will reside in cities and suburbs, where increasingly strict permitting requirements preclude bringing in additional overhead access lines, underground cables are saturated, and growth in power demand is highest. The power grid has never faced a challenge so great or so critical to our future productivity, economic growth, and quality of life. Incremental advances in existing grid technology are not capable of solving the urban power bottleneck. Revolutionary new solutions are needed — the kind that come only from superconductivity.

The Basic Energy Sciences Workshop on Superconductivity

The Basic Energy Sciences (BES) Workshop on Superconductivity brought together more than 100 leading scientists from universities, industry, and national laboratories in the United States, Europe, and Asia. Basic and applied scientists were both well represented, creating a valuable and rare opportunity for mutual creative stimulation. Advance planning for the workshop involved two U.S. Department of Energy offices: the Office of Electricity Delivery and Energy Reliability, which manages research and development for superconducting technology, and the Office of Basic Energy Sciences, which manages basic research on superconductivity.

Performance of superconductors

The workshop participants found that superconducting technology for wires, power control, and power conversion had already passed the design and demonstration stages. The discovery of copper oxide superconductors in 1986 was a landmark event, bringing forth a new generation of superconducting materials with transition temperatures of 90 K or above, which allow cooling with inexpensive liquid nitrogen or mechanical cryocoolers.
Cables, transformers, and rotating machines using first-generation (1G) wires based on Bi₂Sr₂Ca₂Cu₃Oₓ allowed new design principles and performance standards to be established that enabled superconducting grid technology to compete favorably with traditional copper devices. The early 2000s saw a paradigm shift to second-generation (2G) wires based on YBa₂Cu₃O₇ that use a very different materials architecture; these have the potential for better performance over a larger operating range with respect to temperature and magnetic field. 2G wires have advanced rapidly; their current-carrying ability has increased by a factor of 10, and their usable length has increased to 300 meters, compared with only a few centimeters five years ago. While 2G superconducting wires now considerably outperform copper wires in the capacity and efficiency with which they transport current, significant performance gaps remain. The alternating-current (ac) losses in superconductors are a major source of heat generation and refrigeration costs; these costs decline significantly as the maximum lossless current-carrying capability increases. For the same operating current, a tenfold increase in the maximum current-carrying capability of the wire cuts the heat generated by ac losses by the same factor of 10 (a scaling illustrated in the sketch at the end of this subsection). For transporting current on the grid, an order-of-magnitude increase in current-carrying capability is needed to reduce the operational cost of superconducting lines and cables to competitive levels. Transformers, fault current limiters, and rotating machinery all contain coils of superconducting wire that create magnetic fields essential to their operation. 2G wires carry significantly less current in magnetic fields as small as 0.1 to 0.5 T, which are found in transformers and fault current limiters, and in fields of 3 to 5 T, which are needed for motors and generators. The fundamental factors that limit the current-carrying performance of 2G wires in magnetic fields must be understood and overcome to produce a five- to tenfold increase in their performance rating.

Increasing the current-carrying capability of superconductors requires blocking the motion of "Abrikosov vortices" — nanoscale tubes of magnetic flux that form spontaneously inside superconductors upon exposure to magnetic fields. Vortices are immobilized by artificial defects in the superconducting material that attract the vortices and pin them in place. To pin vortices effectively, an understanding is needed not only of the pinning strength of individual defects for individual vortices but also of the collective effects of many defects interacting with many vortices. The similarities of vortex pinning and flow to glacier flow around rock obstacles, avalanche flow in landslides, and earthquake motion at fault lines are reflected in the colloquial name "vortex matter." To achieve a five- to tenfold increase in vortex pinning and current-carrying ability in superconductors, we must learn how to bridge the scientific gap separating the microscopic behavior of individual vortices and pinning sites in a superconductor from its macroscopic current-carrying ability.
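As promised above, a minimal sketch of the ac-loss scaling. This is our illustration, not the report's: it assumes a Norris-type low-amplitude approximation in which the hysteretic loss per cycle scales as $Q\propto I_{op}^3/I_c$ for an operating current $I_{op}$ well below the critical current $I_c$; the example currents are arbitrary:

```python
# Hedged illustration: assume hysteretic ac loss per cycle Q ~ I_op**3 / I_c
# (a Norris-type low-amplitude approximation; real wires are more complicated).

def relative_ac_loss(i_op: float, i_c: float) -> float:
    """Relative ac loss per cycle, arbitrary units."""
    return i_op**3 / i_c

q_1g = relative_ac_loss(i_op=1_000.0, i_c=2_000.0)   # illustrative currents, A
q_2g = relative_ac_loss(i_op=1_000.0, i_c=20_000.0)  # tenfold higher capability
print(q_2g / q_1g)  # -> 0.1: a 10x rise in capability cuts ac-loss heat 10x
```

Under this assumption, raising the critical current tenfold at fixed operating current reduces the ac-loss heating by exactly the factor of 10 quoted in the report.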
Cost of superconductors

Although superconducting wires perform significantly better than copper wires in transmitting electricity, their cost is still too high. The cost of manufactured superconducting wires must be reduced by a factor of 10 to 100 to make them competitive with copper. Much of the manufacturing cost arises from the complex architecture of 2G wires, which are made up of a flexible metallic substrate (often of a magnetic material) on which up to seven additional layers must be sequentially deposited while a specific crystalline orientation is maintained from layer to layer. Significant advances in materials science are needed to simplify the architecture and the manufacturing process while maintaining crystalline orientation, flexibility, superconductor composition, and protection from excessive heat if there is an accidental loss of superconductivity. Beyond their manufacturing cost, the operating cost of superconductors must be reduced. Copper wires require no active cooling to operate, while superconductors must be cooled to temperatures between 50 and 77 K for most applications. The added cost of refrigeration is a significant factor in superconductor operating cost. Reducing refrigeration costs for future generations of superconducting applications is a major technology driver for the discovery or design of new superconducting materials with higher transition temperatures.

Phenomena of superconductivity

These achievements and challenges in superconducting technology are matched by equally promising achievements and challenges in the fundamental science of superconductivity. Since 1986, new materials discoveries have pushed the superconducting transition temperature in elements from 12 to 20 K (for Li under pressure), in heavy-fermion compounds from 1.5 to 18.5 K (for PuCoGa₅), in noncuprate oxides from 13 to 30 K (for Ba₁₋ₓKₓBiO₃), in binary borides from 6 to 40 K (for MgB₂), and in graphite intercalation compounds from 4.05 to 11.5 K (for CaC₆). In addition, superconductivity has been discovered for the first time in carbon compounds like boron-doped diamond (11 K) and fullerides (up to 40 K for Cs₃C₆₀ under pressure), as well as in borocarbides (up to 16.5 K, with metastable phases up to 23 K). We are finding that superconductivity, formerly thought to be a rare occurrence in special compounds, is a common behavior of correlated electrons or "electron matter" in materials. As of this writing, fully 55 elements display superconductivity at some combination of temperature and pressure; this number is up from 43 in 1986, an increase of 28%. As the number and classes of materials displaying superconductivity have mushroomed, so also has the variety of pairing mechanisms and symmetries of superconductivity. The superconducting state is built of "Cooper pairs" — composite objects composed of two electrons bound by a pairing mechanism. The spatial relationship of the two electrons in a pair is described by its pairing symmetry. Copper oxides are known to have d-wave pairing symmetry, in contrast to the s-wave pairing of conventional superconductors; Sr₂RuO₄ and certain organic superconductors appear to be p-wave. Superconductivity has been found close to magnetic order and can either compete against it or coexist with it, suggesting that spin plays a role in the pairing mechanism. Tantalizing glimpses of superconducting-like states at very high temperatures have been seen in the underdoped phase of yttrium barium copper oxide (YBCO), in the form of pseudogaps and of strong transverse electric fields induced by temperature gradients (the "vortex Nernst effect") that typically imply vortex motion.
The proliferation of new classes of superconducting materials; of record-breaking transition temperatures in the known classes of superconductors; of unconventional pairing mechanisms and symmetries of superconductivity; and of exotic, superconducting-like features well above the superconducting transition temperature all implies that superconducting electron matter is a far richer field than we suspected even 10 years ago. While there are many fundamental puzzles in this profusion of intriguing effects, the central challenge with the biggest impact is to understand the mechanisms of high-temperature superconductivity. This is difficult precisely because the mechanisms are entangled with anomalous normal-state effects, which are noticeably absent in the normal states of conventional superconductors. In the underdoped copper oxides (as in other complex oxides), there are many signs of highly correlated normal states, like the spontaneous formation of stripes and pseudogaps that persist above the superconducting transition temperature. They may be necessary precursors to the high-temperature superconducting state, or perhaps competitors, and it seems clear that an explanation of superconductivity will have to include these correlated normal states in the same framework. For two decades, theorists have struggled and failed to find a solution, even as experimentalists tantalize them with ever more fascinating anomalous features. The more than 50 superconducting compounds in the copper oxide family demonstrate that the mechanism of superconductivity is robust and that it is likely to apply widely in nature among other complex metals with highly correlated normal states. Although finding the mechanism is frustratingly difficult, its value, once found, makes the struggle compelling.

Research directions

The BES Workshop on Superconductivity identified seven "priority research directions" and two "cross-cutting research directions" that capture the promise of revolutionary advances in superconductivity science and technology. The first seven directions set a course for research in superconductivity that will exploit the opportunities uncovered by the workshop panels in materials, phenomena, theory, and applications. These research directions extend the reach of superconductivity to higher transition temperatures and higher current-carrying capabilities, create new families of superconducting materials with novel nanoscale structures, establish fundamental principles for understanding the rich variety of superconducting behavior within a single framework, and develop tools and materials that enable new superconducting technology for the electric power grid that will dramatically improve its capacity, reliability, and efficiency for the coming century. The seven priority research directions identified by the workshop take full advantage of the rapid advances in nanoscale science and technology of the last five years. Superconductivity is ultimately a nanoscale phenomenon. Its two composite building blocks — Cooper pairs mediating the superconducting state and Abrikosov vortices mediating its current-carrying ability — have dimensions ranging from a tenth of a nanometer to a hundred nanometers. Their nanoscale interactions among themselves and with structures of comparable size determine all of their superconducting properties.
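As a toy illustration of the single-vortex physics referred to here (our sketch, not the report's model): treat a vortex as an overdamped particle held by one Gaussian pinning well and pushed by a current-induced driving force; depinning occurs when the drive exceeds the well's maximum restoring force. All parameters are arbitrary units:

```python
import math

# Toy model (illustrative, not from the report): a vortex treated as an
# overdamped particle in one dimension, held by a single Gaussian pinning well
#   U(x) = -f_p * w * exp(-x**2 / (2 * w**2)),
# whose restoring force F(x) = -dU/dx peaks in magnitude at x = w with value
# f_p * exp(-1/2). A current-induced drive above that peak depins the vortex.

def peak_pinning_force(f_p: float) -> float:
    """Maximum restoring force of the Gaussian well (arbitrary units)."""
    return f_p * math.exp(-0.5)

f_p = 1.0  # pinning strength, arbitrary units
for drive in (0.3, 0.5, 0.7):
    pinned = drive < peak_pinning_force(f_p)
    print(f"drive = {drive:.1f}: vortex {'pinned' if pinned else 'moving (dissipative)'}")
```

The scientific gap the report describes is precisely that real wires contain many vortices and many defects whose collective behavior cannot be read off from such a one-vortex, one-well picture.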
The continuing development of powerful nanofabrication techniques, by top-down lithography and bottom-up self-assembly, creates promising new horizons for designer superconducting materials with higher transition temperatures and current-carrying ability. Nanoscale characterization techniques with ever smaller spatial and temporal resolution — including aberration-corrected electron microscopy, nanofocused x-ray beams from high-intensity synchrotrons, scanning probe microscopy, and ultrafast x-ray laser spectroscopy — allow us to track the motion of a single vortex interacting with a single pinning defect or to observe Cooper pair formation and breaking near a magnetic impurity atom. The numerical simulation of superconducting phenomena in confined geometries using computer clusters of a hundred or more nodes allows the interaction of Cooper pairs and Abrikosov vortices with nanoscale boundaries and architectures to be isolated. Understanding these nanoscale interactions with artificial boundaries enables the numerical design of functional superconductors. The promise of nanoscale fabrication, characterization, and simulation for advancing the fundamental science of superconductivity and the rational design of functional superconducting materials for next-generation grid technology has never been higher.

A key outcome of the BES Workshop on Superconductivity has been a strong sense of optimism and awareness of opportunity spanning the community of participants in the basic and applied sciences. In the last decade, enormous strides have been made in understanding the science of high-temperature superconductivity and exploiting it for electricity production, distribution, and use. The promise of developing a smart, self-healing grid based on superconductors that require no cooling is an inspiring "grand energy challenge" that drives the frontiers of basic science and applied technology. Meeting this 21st century challenge would rival the 20th century achievement of providing electricity for everyone at the flick of a switch. The seven priority and two cross-cutting research directions identified by the workshop participants offer the potential for achieving this challenge and creating a transformational impact on our electric power infrastructure.

The Path to Sustainable Nuclear Energy: Basic and Applied Research Opportunities for Advanced Fuel Cycles

This report is based on a small DOE-sponsored workshop held in September 2005 to identify new basic science that will be the foundation for advances in nuclear fuel-cycle technology in the near term, and for changing the nature of fuel cycles and of the nuclear energy industry in the long term. The goals are to enhance the development of nuclear energy, to maximize energy production in nuclear reactor parks, and to minimize radioactive wastes, other environmental impacts, and proliferation risks.

The limitations of the once-through fuel cycle can be overcome by adopting a closed fuel cycle, in which the irradiated fuel is reprocessed and its components are separated into streams that are recycled into a reactor or disposed of in appropriate waste forms. The recycled fuel is irradiated in a reactor, where certain constituents are partially transmuted into heavier isotopes via neutron capture or into lighter isotopes via fission.
Fast reactors are required to complete the transmutation of long-lived isotopes. Closed fuel cycles are encompassed by the Department of Energy's Advanced Fuel Cycle Initiative (AFCI), to which basic scientific research can contribute. Two nuclear reactor system architectures can meet the AFCI objectives: a "single-tier" system or a "dual-tier" system. Both begin with light water reactors and incorporate fast reactors. The "dual-tier" systems transmute some plutonium and neptunium in light water reactors and all remaining transuranic elements (TRUs) in a closed-cycle fast reactor.

Basic science initiatives are needed in two broad areas:

• Near-term impacts that can enhance the development of either "single-tier" or "dual-tier" AFCI systems, primarily within the next 20 years, through basic research. Examples:
  • Dissolution of spent fuel and separation of elements for TRU recycling and transmutation
  • Design, synthesis, and testing of inert-matrix and non-oxide nuclear fuels
  • Invention and development of accurate on-line monitoring systems for chemical and nuclear species in the nuclear fuel cycle
  • Development of advanced tools for designing reactors with reduced margins and lower costs
• Long-term nuclear reactor development requires basic science breakthroughs:
  • Understanding of materials behavior under extreme environmental conditions
  • Creation of new, efficient, environmentally benign chemical separations methods
  • Modeling and simulation to improve nuclear reaction cross-section data, design new materials and separation systems, and propagate uncertainties within the fuel cycle
  • Improvement of proliferation resistance by strengthening safeguards technologies and decreasing the attractiveness of nuclear materials

A series of translational tools is proposed to advance the AFCI objectives and to bring the basic science concepts and processes promptly into the technological sphere. These tools have the potential to revolutionize the approach to nuclear engineering R&D by replacing lengthy experimental campaigns with a rigorous approach based on modeling, key fundamental experiments, and advanced simulations.

Basic Research Needs for Solar Energy Utilization

This report is based on a BES Workshop on Solar Energy Utilization, April 18-21, 2005, to examine the challenges and opportunities for the development of solar energy as a competitive energy source and to identify the technical barriers to large-scale implementation of solar energy and the basic research directions showing promise to overcome them.

World demand for energy is projected to more than double by 2050 and to more than triple by the end of the century. Incremental improvements in existing energy networks will not be adequate to supply this demand sustainably. Finding sufficient supplies of clean energy for the future is one of society's most daunting challenges. Yet, in 2001, solar electricity provided less than 0.1% of the world's electricity, and solar fuel from modern (sustainable) biomass provided less than 1.5% of the world's energy. This report of the Basic Energy Sciences Workshop on Solar Energy Utilization identifies the key scientific challenges and research directions that will enable efficient and economic use of the solar resource to provide a significant fraction of global primary energy by the mid-21st century.
The report reflects the collective output of the workshop attendees, who included 200 scientists representing academia, national laboratories, and industry in the United States and abroad, as well as the U.S. Department of Energy's Office of Basic Energy Sciences and Office of Energy Efficiency and Renewable Energy.

Solar energy conversion systems fall into three categories according to their primary energy product: solar electricity, solar fuels, and solar thermal systems. Each of the three generic approaches to exploiting the solar resource has untapped capability well beyond its present usage. Workshop participants considered the potential of all three approaches, as well as the potential of hybrid systems that integrate key components of individual technologies into novel cross-disciplinary paradigms.

The challenge in converting sunlight to electricity via photovoltaic solar cells is dramatically reducing the cost per watt of delivered solar electricity — by approximately a factor of 5-10 to compete with fossil and nuclear electricity, and by a factor of 25-50 to compete with primary fossil energy. New materials to efficiently absorb sunlight, new techniques to harness the full spectrum of wavelengths in solar radiation, and new approaches based on nanostructured architectures can revolutionize the technology used to produce solar electricity. The technological development and successful commercialization of single-crystal solar cells demonstrate the promise and practicality of photovoltaics, while novel approaches exploiting thin films, organic semiconductors, dye sensitization, and quantum dots offer fascinating new opportunities for cheaper, more efficient, longer-lasting systems. Many of the new approaches outlined by the workshop participants are enabled by (1) remarkable recent advances in the fabrication of nanoscale architectures by novel top-down and bottom-up techniques; (2) advances in nanoscale characterization using electron, neutron, and x-ray scattering and spectroscopy; and (3) sophisticated computer simulations of electronic and molecular behavior in nanoscale semiconductor assemblies using density functional theory. Such advances in the basic science of solar electric conversion, coupled to the new semiconductor materials now available, could drive a revolution in the way that solar cells are conceived, designed, implemented, and manufactured.

The inherent day-night and sunny-cloudy cycles of solar radiation necessitate an effective method of storing the converted solar energy for later dispatch and distribution. The most attractive and economical method of storage is conversion to chemical fuels. The challenge in solar fuel technology is to produce chemical fuels directly from sunlight in a robust, cost-efficient fashion. For millennia, solar fuel in the form of biomass has been the primary energy source on the planet. For the last two centuries, however, energy demand has outpaced biomass supply. The use of existing types of plants requires large land areas to meet a significant portion of primary energy demand: almost all of the arable land on Earth would need to be covered with the fastest-growing known energy crops, such as switchgrass, to produce the amount of energy currently consumed from fossil fuels annually.
Hence, the key research goals are (1) application of the revolutionary advances in biology and biotechnology to the design of plants and organisms that are more efficient energy-conversion "machines," and (2) design of highly efficient, all-artificial, molecular-level energy conversion machines exploiting the principles of natural photosynthesis. A key element in both approaches is the continued elucidation — by means of structural biology, genome sequencing, and proteomics — of the structure and dynamics involved in the biological conversion of solar radiation to sugars and carbohydrates. The revelation of these long-held secrets of natural solar conversion by means of cutting-edge experiment and theory will enable a host of exciting new approaches to direct solar fuel production. Artificial nanoscale assemblies of new organic and inorganic materials and morphologies, replacing natural plants or algae, can now use sunlight to directly produce H₂ by splitting water, and hydrocarbons by reduction of atmospheric CO₂. While these laboratory successes demonstrate the appealing promise of direct solar fuel production by artificial molecular machines, there is an enormous gap between the present state of the art and a deployable technology. The current laboratory systems are unstable over long time periods, too expensive, and too inefficient for practical implementation. Basic research is needed to develop approaches and systems that bridge the gap between the scientific frontier and practical technology.

The key challenge in solar thermal technology is to identify cost-effective methods of converting sunlight into storable, dispatchable thermal energy. Reactors heated by focused, concentrated sunlight in thermal towers reach temperatures exceeding 3,000°C, enabling the efficient chemical production of fuels from raw materials without expensive catalysts. New materials that withstand the high temperatures of solar thermal reactors are needed to drive applications of this technology. New chemical conversion sequences, like those that split water to produce H₂ using the heat from nuclear fission reactors, could be used to convert focused solar thermal energy into chemical fuel with unprecedented efficiency and cost effectiveness. At the lower temperatures of less concentrated sunlight, solar heat can be used to drive turbines that produce electricity mechanically with greater efficiency than the current generation of solar photovoltaics. When combined with solar-driven chemical storage/release cycles, such as those based on the dissociation and synthesis of ammonia, solar engines can produce electricity continuously, 24 h/day. Novel thermal storage materials with an embedded phase transition offer the potential of high thermal storage capacity and long release times, bridging the diurnal cycle. Nanostructured thermoelectric materials, in the form of nanowires or quantum-dot arrays, promise direct electricity production with efficiencies of 20-30% from temperature differentials of a few hundred degrees Celsius; the much larger differentials in solar thermal reactors make even higher efficiencies possible. New low-cost, high-performance reflective materials for the focusing systems are needed to optimize the cost effectiveness of all concentrated solar thermal technologies.
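The advantage of larger temperature differentials follows from elementary thermodynamics: no heat engine, thermoelectric or mechanical, can exceed the Carnot efficiency set by its hot and cold reservoirs. A minimal sketch, with illustrative temperatures rather than the report's figures:

```python
def carnot_limit(t_hot_c: float, t_cold_c: float) -> float:
    """Carnot efficiency bound, with reservoir temperatures in degrees Celsius."""
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15  # convert to kelvin
    return 1.0 - t_cold / t_hot

# A few-hundred-degree differential vs. solar-tower temperatures:
print(f"{carnot_limit(300, 25):.0%}")   # ~48% ceiling for a 300 C hot side
print(f"{carnot_limit(3000, 25):.0%}")  # ~91% ceiling at ~3,000 C
```

Actual devices fall well short of this ceiling, which is one reason the report couples high-temperature solar reactors to the search for materials that can survive them.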
Workshop attendees identified thirteen priority research directions (PRDs) with high potential for producing scientific breakthroughs that could dramatically advance solar energy conversion to electricity, fuels, and thermal end uses. Many of these PRDs address issues of concern to more than one approach or technology. These cross-cutting issues include (1) coaxing cheap materials to perform as well as expensive materials in terms of their electrical, optical, chemical, and physical properties; (2) developing new paradigms for solar cell design that surpass traditional efficiency limits; (3) finding catalysts that enable inexpensive, efficient conversion of solar energy into chemical fuels; (4) identifying novel methods for self-assembly of molecular components into functionally integrated systems; and (5) developing materials for the solar energy conversion infrastructure, such as transparent conductors and robust, inexpensive thermal management materials.

A key outcome of the workshop is the sense of optimism in the cross-disciplinary community of solar energy scientists spanning academia, government, and industry. Although large barriers prevent present technology from producing a significant fraction of our primary energy from sunlight by the mid-21st century, workshop participants identified promising routes for basic research that can bring this goal within reach. Much of this optimism is based on the continuing, rapid worldwide progress in nanoscience. Powerful new methods of nanoscale fabrication, characterization, and simulation — using tools that were not available as little as five years ago — create new opportunities for understanding and manipulating the molecular and electronic pathways of solar energy conversion. Additional optimism arises from impressive strides in genetic sequencing, protein production, and structural biology that will soon bring the secrets of photosynthesis and natural biocatalysis into sharp focus. Understanding these highly effective natural processes in detail will allow us to modify and extend them to molecular reactions that directly produce sunlight-derived fuels that fit seamlessly into our existing energy networks. The rapid advances on the scientific frontiers of nanoscience and molecular biology provide a strong foundation for future breakthroughs in solar energy conversion.

Advanced Computational Materials Science: Application to Fusion and Generation IV Fission Reactors

This report is based on a workshop held March 31-April 2, 2004, to determine the degree to which an increased effort in modeling and simulation could help bridge the gap between the data needed to support the implementation of advanced nuclear technologies and the data that can be obtained in available experimental facilities.

The need to develop materials capable of performing in the severe operating environments expected in fusion and fission (Generation IV) reactors represents a significant challenge in materials science. There is a range of potential Gen-IV fission reactor design concepts, and each concept has its own unique demands. Improved economic performance is a major goal of the Gen-IV designs. As a result, most designs call for significantly higher operating temperatures than the current generation of light water reactors (LWRs) to obtain higher thermal efficiency.
In many cases, the desired operating temperatures rule out the use of the structural alloys employed today. The very high operating temperature (up to 1000°C) associated with the Next Generation Nuclear Plant (NGNP) is a prime example of an attractive new system that will require the development of new structural materials. Fusion power plants represent an even greater challenge to structural materials development and application. The operating temperatures, neutron exposure levels, and thermo-mechanical stresses are comparable to or greater than those for proposed Gen-IV fission reactors. In addition, the transmutation products created in the structural materials by the high-energy neutrons produced in the deuterium-tritium (DT) plasma can profoundly influence the microstructural evolution and mechanical behavior of these materials. Although the workshop addressed issues relevant to both Gen-IV and fusion reactor materials, much of the discussion focused on fusion; the same focus is reflected in this report. Most of the physical models and computational methods presented during the workshop apply equally to both types of nuclear energy systems. The primary factor that differentiates the materials development paths for the two systems is that nearly prototypical irradiation environments for Gen-IV materials can be found or built in existing fission reactors. This is not the case for fusion. The only fusion-relevant, 14 MeV neutron sources ever built (such as the rotating target neutron sources RTNS-I and -II at Lawrence Livermore National Laboratory, LLNL) were relatively low-power, accelerator-based systems. The RTNS-II "high" flux irradiation volume was quite small, less than 1 cm³, and only low doses could be achieved: the maximum dose obtained was much less than 0.1 displacements per atom (dpa). Thus RTNS-II, which last operated in 1986, provided only a limited opportunity for fundamental investigations of the effects of the 14 MeV neutrons characteristic of DT fusion. Historically, both the fusion and fission reactor programs have taken advantage of and built on research carried out by the other program. This leveraging can be expected to continue over the next ten years as both experimental and modeling activities in support of the Gen-IV program grow substantially. The Gen-IV research will augment the fusion studies (and vice versa) in areas where similar materials and exposure conditions are of interest. However, in addition to the concerns that are common to both the fusion and advanced fission reactor programs, designers of a future DT fusion reactor have the unique problem of anticipating the effects of the 14 MeV neutron source term. In particular, the question arises whether irradiation data obtained in a near-prototypic irradiation environment, such as the proposed International Fusion Materials Irradiation Facility (IFMIF), are needed to verify results obtained from computational materials research. The need for a theory and modeling effort to work hand in hand with a complementary experimental program, for the purpose of model development and verification and for the validation of model predictions, was extensively discussed at the workshop. There was a clear consensus that an IFMIF-like irradiation facility is likely to be required to contribute to this research. However, the question of whether IFMIF itself is needed was explored from two different points of view at the workshop. These complementary (and in some cases opposing) points of view can be coarsely characterized as "scientific" and "engineering."
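For reference (standard nuclear physics, not spelled out in the report), the 14 MeV figure is set by the primary DT fusion reaction, whose energy release is split between the alpha particle and the neutron:

• $D + T \rightarrow\ ^4He\,(3.5\,MeV) + n\,(14.1\,MeV)$

It is this 14.1 MeV neutron, far more energetic than the few-MeV neutrons of fission, that drives the transmutation and displacement damage discussed above.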
The recent and anticipated progress in computational materials science presented at the workshop provides some confidence that many of the scientific questions whose answers will underpin the successful use of structural materials in a DT fusion reactor can be addressed in a reasonable time frame, if sufficient resources are devoted to the effort. For example, advances in computing hardware and software should permit improved (and in some cases the first) descriptions of relevant alloy properties based on ab initio calculations. Such calculations could provide the basis for realistic interatomic potentials for alloys, including alloy-helium potentials, that can be applied in classical molecular dynamics simulations. These potentials must describe many-body interactions in more detail than the current generation of potentials, which are generally based on a simple embedding function. In addition, potentials used under fusion reactor conditions (very high primary knock-on atom, or PKA, energies) should account for the effects of local electronic excitation and electronic energy loss. The computational cost of using more complex potentials will also require the next generation of massively parallel computers. New results of ab initio and atomistic calculations can be coupled with ongoing advances in kinetic and phase-field models to dramatically improve predictions of the non-equilibrium, radiation-induced evolution of alloys with unstable microstructures, including phase stability and the effects of helium on each microstructural component. However, for all its promise, computational materials science is still a house under construction, and the current reach of the science is limited. Theory and modeling can be used to develop understanding of known critical physical phenomena, and computer experiments can be, and have been, used to identify new phenomena and mechanisms and to aid in alloy design. However, it is questionable whether the science will be sufficiently mature in the foreseeable future to provide a rigorous scientific basis for predicting critical materials properties or for extrapolating well beyond the available validation database. Two other issues remain even if the scientific questions appear to have been adequately answered: licensing and capital investment. Even a high degree of scientific confidence that a given alloy will perform as needed in a particular Gen-IV or fusion environment is not necessarily transferable to the reactor licensing or capital market regimes. The philosophy, codes, and standards employed for reactor licensing are properly conservative with respect to design data requirements. Experience with the U.S. Nuclear Regulatory Commission suggests that only modeling results that are strongly supported by relevant, prototypical data will have an impact on the licensing process. In a similar way, it is expected that investment on the scale required to build a fusion power plant (several billion dollars) could be obtained only if a very high level of confidence existed that the plant would operate long and safely enough to return the investment. These latter two concerns appear to dictate that an experimental facility capable of generating a sufficient, if limited, body of design data under essentially prototypic conditions (i.e., with ~14 MeV neutrons) will ultimately be required for the commercialization of fusion power.
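For context (a standard form, not quoted from the report), the "simple embedding function" refers to potentials of the embedded-atom type, in which the total energy is a pair term plus an embedding energy that depends on the local electron density each atom sees:

• $E_{tot} = \sum_i F(\bar\rho_i) + \frac{1}{2}\sum_{i\neq j}\phi(r_{ij}),\quad \bar\rho_i = \sum_{j\neq i}\rho(r_{ij})$

Here $\phi$ is a pair potential, $\rho$ a density contribution, and $F$ the embedding function. The report's point is that future potentials will need a more detailed many-body description than this form provides, particularly for alloys with helium and at the high PKA energies of fusion conditions.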
An aggressive theory and modeling effort will reduce the time and experimental investment required to develop the advanced materials that can perform in a DT fusion reactor environment. For example, the quantity of design data required may be reduced to that needed to confirm model predictions for key materials at critical exposure conditions. This will include some data at a substantial fraction of the anticipated end-of-life dose, which raises the issue of when such an experimental facility is required. Long lead times for the construction of complex facilities, coupled with the several years of irradiation needed to reach the highest doses, imply that the decision to build any fusion-relevant irradiation facility must be made on the order of 10 years before the design data are needed. Two related areas of research can be used as reference points for the expressed need to obtain experimental validation of model predictions. Among the lessons learned from the Accelerated Strategic Computing Initiative (ASCI), the importance of code validation and verification was emphasized at the workshop: despite an extensive investment in theory and modeling of the relevant physics, the National Ignition Facility (NIF) is being built at LLNL to verify the performance of the physics codes. Similarly, while the U.S. and international fusion community has invested considerable resources in simulating the behavior of magnetically confined plasmas, a series of experimental devices (e.g., DIII-D, TFTR, JET, NSTX, and NCSX) have been, or will be, built and numerous experiments carried out to validate the predicted plasma performance on the route to ITER and a demonstration fusion power reactor.

Opportunities for Discovery: Theory and Computation in Basic Energy Sciences

This report is based on the deliberations of the BESAC Subcommittee on Theory and Computation following meetings on February 22 and April 16-17, 2004, to obtain testimony and discuss input from the scientific community on research directions for theory and computation to advance the scientific mission of the Office of Basic Energy Sciences (BES).

New scientific frontiers, recent advances in theory, and rapid increases in computational capabilities have created compelling opportunities for theory and computation to advance the science. The prospects for success in the experimental programs of BES will be enhanced by pursuing these opportunities. This report makes the case for an expanded research program in theory and computation in BES. The Subcommittee on Theory and Computation of the Basic Energy Sciences Advisory Committee was charged on October 17, 2003, by the Director, Office of Science, with identifying current and emerging challenges and opportunities for theoretical research within the scientific mission of BES, paying particular attention to how computing will be employed to enable that research. A primary purpose of the Subcommittee was to identify those investments that are necessary to ensure that theoretical research will have maximum impact in the areas of importance to BES, and to assure that BES researchers will be able to exploit the entire spectrum of computational tools, including leadership-class computing facilities. The Subcommittee's findings and recommendations are presented in Section VII of the report.

A confluence of scientific events has enhanced the importance of theory and computation in BES.
After considering both written and verbal testimony from members of the scientific community, the Subcommittee observed that a confluence of developments in scientific research over the past fifteen years has quietly revolutionized both the present role and the future promise of theory and computation in the disciplines that comprise the Basic Energy Sciences. Those developments fall into four broad categories:

1. a set of striking recent scientific successes that demonstrate the increased impact of theory and computation;
2. the appearance of new scientific frontiers in which innovative theory is required to lead inquiry and unravel the mysteries posed by new observations;
3. the development of new experimental capabilities, including large-scale facilities, that provide challenging new data and demand both fundamental and computationally intensive theory to realize their promise;
4. the ongoing increase of computational capability provided by continued improvements in computers and algorithms, which has dramatically amplified the power and applicability of theoretical research.

The sum of these events argues powerfully that now is the time for an increase in the investment by BES in theory and computation, including modeling and simulation.

Emerging themes in the Basic Energy Sciences and nine specific areas of opportunity for scientific discovery. The report identifies nine specific areas of opportunity in which expanded investment in theory and computation holds great promise to enhance discovery in the scientific mission of BES. While this list is not exhaustive, it represents a range of persuasive prospects broadly characterized by the themes of "Complexity" and "Control" that describe much of the BES portfolio. The challenges and promise of theory in each of these nine areas are described in detail.

Connecting theory with experiment. Connecting the BES theory and computation programs with experimental research taking place at existing or planned BES facilities deserves high priority. BES should undertake a major new thrust to significantly augment its theoretical and computational programs coupled to experimental research at its major facilities. We also urge that such a new effort not be limited to research at the facilities but also address the coupling of theory and computation with new capabilities involving "tabletop" experimental science.

The unity of modern theory and computation. For a number of the research problems in BES, we are fortunate to know the equations that must be solved. For this reason, many BES disciplines are presently exploiting high-end computation and are poised to use it at the leadership scale. However, in many other areas of BES, we do not know all the equations, nor do we have all the mathematical and physical insights we need, and therefore we have not yet invented the required algorithms. In an expanded yet balanced theory effort in BES, enhancements in computation must be accompanied by enhancements in the rest of the theoretical endeavor. Conceptual theory and computation are not separate enterprises.

Resources necessary for success in the BES theory enterprise. A successful BES theory effort must provide the full spectrum of computational resources, as well as support the development and maintenance of scientific computer codes as shared scientific instruments.
We find that BES is ready for and requires access to leadership-scale computing to perform calculations that cannot be done elsewhere, but also that a large amount of essential BES computation falls between the leadership and desktop scales. Moreover, BES should provide support for the development and maintenance of shared scientific software to enhance the scientific impact of the BES-supported theory community and to remove a key obstacle to the effective exploitation of high-end computing resources and facilities.

In summary, the Subcommittee finds that there is a compelling need for BES to expand its programs to capture opportunities created by the combination of new capabilities in theory and computation and the opening of new experimental frontiers. Providing the right resources, supporting new styles of theoretical inquiry, and building a properly balanced program are all essential for the success of an expanded effort in theory and computation. The experimental programs of BES will be enhanced by such an effort.

Nanoscience Research for Energy Needs

This report is based upon a BES-cosponsored National Nanotechnology Initiative (NNI) Workshop held March 16-18, 2004, by the Nanoscale Science, Engineering, and Technology (NSET) Subcommittee of the National Science and Technology Council (NSTC) to address the Grand Challenge in Energy Conversion and Storage set out in the NNI. This report was originally released on June 24, 2004, during the Department of Energy NanoSummit. The second edition, provided here, was issued in June 2005.

The world demand for energy is expected to double to 28 terawatts by the year 2050. Compounding the challenge presented by this projection is the growing need to protect our environment by increasing energy efficiency and by developing "clean" energy sources. These are indeed global challenges, and their resolution is vital to our energy security. Recent reports on Basic Research Needs to Assure a Secure Energy Future and Basic Research Needs for the Hydrogen Economy have recognized that scientific breakthroughs and truly revolutionary developments are demanded. Within this context, nanoscience and nanotechnology present exciting and requisite approaches to addressing these challenges. An interagency workshop to identify and articulate the relationship of nanoscale science and technology to the nation's energy future was convened on March 16-18, 2004, in Arlington, Virginia. The meeting was jointly sponsored by the Department of Energy and, through the National Nanotechnology Coordination Office, the other member agencies of the Nanoscale Science, Engineering and Technology Subcommittee of the Committee on Technology, National Science and Technology Council. This report is the outcome of that workshop. The workshop had 63 invited presenters: 32 from universities, 26 from national laboratories, and 5 from industry. This workshop is one in a series intended to provide input from the research community for the next NNI strategic plan, which the NSTC is required to deliver to Congress on the first anniversary of the signing of the 21st Century Nanotechnology R&D Act, December 3, 2003. At the root of the opportunities provided by nanoscience to impact our energy security is the fact that all the elementary steps of energy conversion (charge transfer, molecular rearrangement, chemical reactions, etc.)
take place on the nanoscale. Thus, the development of new nanoscale materials, as well as of methods to characterize, manipulate, and assemble them, creates an entirely new paradigm for developing new and revolutionary energy technologies. The primary outcome of the workshop is the identification of nine research targets in energy-related science and technology in which nanoscience is expected to have the greatest impact:

• Scalable methods to split water with sunlight for hydrogen production
• Highly selective catalysts for clean and energy-efficient manufacturing
• Harvesting of solar energy with 20 percent power efficiency and 100 times lower cost
• Solid-state lighting at 50 percent of the present power consumption
• Super-strong, lightweight materials to improve the efficiency of cars, airplanes, etc.
• Reversible hydrogen storage materials operating at ambient temperatures
• Power transmission lines capable of 1 gigawatt transmission
• Low-cost fuel cells, batteries, thermoelectrics, and ultracapacitors built from nanostructured materials
• Materials synthesis and energy harvesting based on the efficient and selective mechanisms of biology

The report contains descriptions of many examples indicative of outcomes and expected progress in each of these research targets. For successful achievement of these research targets, participants recognized six foundational and vital crosscutting nanoscience research themes:

• Catalysis by nanoscale materials
• Using interfaces to manipulate energy carriers
• Linking structure and function at the nanoscale
• Assembly and architecture of nanoscale structures
• Theory, modeling, and simulation for energy nanoscience
• Scalable synthesis methods

DOE-NSF-NIH Workshop on Opportunities in THz Science

This report is based on a Workshop on Opportunities in Terahertz (THz) Science held February 12-14, 2004, to discuss basic research problems that can be addressed using THz radiation. The workshop did not focus on the wide range of potential applications of THz radiation in engineering, defense and homeland security, or the commercial and government sectors of the economy. The workshop was jointly sponsored by DOE, NSF, and NIH.

The region of the electromagnetic spectrum from 0.3 to 20 THz (10-600 cm⁻¹; 1 mm-15 µm wavelength) is a frontier area for research in physics, chemistry, biology, medicine, and materials sciences. Sources of high-quality radiation in this region have been scarce, but the gap has recently begun to be filled by a wide range of new technologies. Terahertz radiation is now available in both cw and pulsed form, down to single cycles or less, with peak powers up to 10 MW. New sources have led to new science in many areas, as scientists begin to become aware of the opportunities for research progress in their fields using THz radiation.

Science at a Time Scale Frontier: THz-frequency electromagnetic radiation, with a fundamental period of around 1 ps, is uniquely suited to studying and controlling systems of central importance. Electrons in highly excited atomic Rydberg states orbit at THz frequencies. Small molecules rotate at THz frequencies. Collisions between gas-phase molecules at room temperature last about 1 ps. Biologically important collective modes of proteins vibrate at THz frequencies. Frustrated rotations and collective modes cause polar liquids (such as water) to absorb at THz frequencies.
Electrons in semiconductors and their nanostructures resonate at THz frequencies. Superconducting energy gaps are found at THz frequencies. An electron in Intel's THz Transistor races under the gate in ~1 ps. Gaseous and solid-state plasmas oscillate at THz frequencies. Matter at temperatures above 10 K emits black-body radiation at THz frequencies. This report also describes a tremendous array of other studies that will become possible when access to THz sources and detectors is widely available. The opportunities are limitless.

Electromagnetic Transition Region: THz radiation lies above the frequency range of traditional electronics, but below the range of optical and infrared generators. The fact that the THz frequency range lies in the transition region between photonics and electronics has led to unprecedented creativity in source development. Solid-state electronics, vacuum electronics, microwave techniques, ultrafast visible and NIR lasers, single-mode continuous-wave NIR lasers, electron accelerators ranging in size from a few inches to the mile-long linear accelerator at SLAC, and novel materials have been combined to yield a large variety of sources with widely varying output characteristics. For the purposes of this report, sources are divided into four categories according to their (low, high) peak power and their (small, large) instantaneous bandwidth.

THz experiments: Many classes of experiments can be performed using THz electromagnetic radiation. Each will be enabled or optimized by a THz source with a particular set of specifications. For example, some experiments will be enabled by high average and peak power with impulsive half-cycle excitation; such radiation is available only from a new class of sources based on sub-ps electron bunches produced in large accelerators. Some high-resolution spectroscopy experiments will require cw THz sources with kHz linewidths but only a few hundred microwatts of power. Others will require powerful pulses with ≤1% bandwidth, available from free-electron lasers and, very recently, from regeneratively amplified lasers and nonlinear optical materials. Time-domain THz spectroscopy, with its time coherence and extremely broad spectral bandwidth, will continue to expand its reach and range of applications, from spectroscopy of superconductors to subcutaneous imaging of skin cancer.

What is needed

The THz community needs a network: Sources of THz radiation are, at this point, very rare in physics and materials science laboratories and almost non-existent in chemistry, biology, and medical laboratories. The barriers to performing experiments using THz radiation are enormous. One needs not only a THz source, but also an appropriate receiver and an understanding of many experimental details, ranging from the absorption characteristics of the atmosphere and common materials, to where to purchase or construct simple optics components such as polarizers, lenses, and waveplates, to a solid understanding of electromagnetic wave propagation, since diffraction always plays a significant role at THz frequencies. There is also significant expense, in both time and money, in setting up any THz apparatus in one's own lab, even for the type of investigator who enjoys building things. Because of these enormous barriers to entry into THz science, the community of users is presently much smaller than the scientific opportunities could support.
Symposia on medical applications of THz radiation are already attracting overflow crowds at conferences. The size of the community is increasing, with clear growth potential to support a large THz users' network including user facilities. The opportunities are great. The most important thing we can do is lower research barriers. A THz users' network would leverage the large existing investment in THz research and infrastructure to considerably grow the THz research community. The network would inform the scientific community at large of opportunities in THz science, bring together segments of the community of THz researchers who are currently only vaguely aware of one another, and lower the barriers to entry into THz research. Specific ideas for network activities include disseminating information about techniques and opportunities in THz science through the worldwide web, sponsoring sessions about THz technology at scientific conferences, co-locating conferences from different communities within the THz field, providing funding for small-scale user facilities at existing centers of excellence, directing researchers interested in THz science to the most appropriate technology and/or collaborators, encouraging commercialization of critical THz components, outreach to raise public awareness of THz science and technology, and forming teams to work on problems of common interest, such as producing higher peak fields or pulse-shaping schemes.

Interagency support is crucial: NIH, NSF, and DOE will all benefit, and all must be involved. Eventually, the network will provide the best and most efficient path to defining what new facilities may be needed. New users of THz methodology will also find it easier to learn about the field once a network exists.

Defining common goals: During the workshop, the community articulated several common and unmet technical needs. This list is far from exhaustive, and it will grow with the network:

1. Higher peak fields.
2. Coverage to 10 THz (or higher) with coherent broad-band sources.
3. Full pulse-shaping.
4. Excellent stability in sources with the above characteristics.
5. Easy access to components, such as emitters and receivers for time-domain THz spectroscopy.
6. Near-field THz microscopy.
7. Sensitive non-cryogenic detectors.

Basic Research Needs for the Hydrogen Economy

This report is based upon the BES Workshop on Hydrogen Production, Storage, and Use, held May 13-15, 2003, to identify fundamental research needs and opportunities in hydrogen production, storage, and use, with a focus on new, emerging, and scientifically challenging areas that could have significant impact on science and technology.

The coupled challenges of a doubling of the world's energy needs by the year 2050 and the increasing demand for "clean" energy sources that do not add more carbon dioxide and other pollutants to the environment have resulted in increased attention worldwide to the possibilities of a "hydrogen economy" as a long-term solution for a secure energy future. The hydrogen economy offers a grand vision for energy management in the future.
Its benefits are legion, including an ample and sustainable supply, flexible interchange with existing energy media, a diversity of end uses to produce electricity through fuel cells or to produce heat through controlled combustion, convenient storage for load leveling, and a potentially large reduction in harmful environmental pollutants. These benefits provide compelling motivation to mount a major, innovative basic research program in support of a broad effort across the applied research, development, engineering, and industrial communities to enable the use of hydrogen as the fuel of the future. There is an enormous gap between our present capabilities for hydrogen production, storage, and use and those required for a competitive hydrogen economy. To be economically competitive with the present fossil fuel economy, the cost of fuel cells must be lowered by a factor of 10 or more and the cost of producing hydrogen must be lowered by a factor of 4. Moreover, the performance and reliability of hydrogen technology for transportation and other uses must be improved dramatically. Simple incremental advances in the present state of the art cannot bridge this gap. The only hope of narrowing the gap significantly is a comprehensive, long-range program of innovative, high-risk/high-payoff basic research that is intimately coupled to and coordinated with applied programs. The best scientists from universities and national laboratories and the best engineers and scientists from industry must work in interdisciplinary groups to find breakthrough solutions to the fundamental problems of hydrogen production, storage, and use. The objective of such a program must not be evolutionary advances but revolutionary breakthroughs in understanding and in controlling the chemical and physical interactions of hydrogen with materials.

The detailed findings and research directions identified by the three panels are presented in this report. They address the four research challenges for the hydrogen economy outlined by Secretary of Energy Spencer Abraham in his address to the National Hydrogen Association: (1) dramatically lower the cost of fuel cells for transportation, (2) develop a diversity of sources for hydrogen production at energy costs comparable to those of gasoline, (3) find viable methods of onboard storage of hydrogen for transportation uses, and (4) develop a safe and effective infrastructure for seamless delivery of hydrogen from production to storage to use. The essence of this report is captured in six cross-cutting research directions that were identified as being vital for enabling the dramatic breakthroughs to achieve lower costs, higher performance, and greater reliability that are needed for a competitive hydrogen economy:
• Catalysis
• Nanostructured Materials
• Membranes and Separations
• Characterization and Measurement Techniques
• Theory, Modeling, and Simulation
• Safety and Environmental Issues
In addition to these research directions, the panels identified biological and bio-inspired science and technology as richly promising approaches for achieving the revolutionary technical advances required for a hydrogen economy.
Theory and Modeling in Nanoscience

This report is based upon the May 10-11, 2002, workshop conducted jointly by the Basic Energy Sciences Advisory Committee and the Advanced Scientific Computing Advisory Committee to identify challenges and opportunities for theory, modeling, and simulation in nanoscience and nanotechnology and to investigate the growing and promising role of applied mathematics and computer science in meeting those challenges. During the past 15 years, the fundamental techniques of theory, modeling, and simulation have undergone a revolution that parallels the extraordinary experimental advances on which the new field of nanoscience is based. This period has seen the development of density functional algorithms, quantum Monte Carlo techniques, ab initio molecular dynamics, advances in classical Monte Carlo methods and mesoscale methods for soft matter, and fast-multipole and multigrid algorithms. Dramatic new insights have come from the application of these and other new theoretical capabilities. Simultaneously, advances in computing hardware increased computing power by four orders of magnitude. The combination of new theoretical methods together with increased computing power has made it possible to simulate systems with millions of degrees of freedom.

The application of new and extraordinary experimental tools to nanosystems has created an urgent need for a quantitative understanding of matter at the nanoscale. The absence of quantitative models that describe newly observed phenomena increasingly limits progress in the field. A clear consensus emerged at the workshop that without new, robust tools and models for the quantitative description of structure and dynamics at the nanoscale, the research community would miss important scientific opportunities in nanoscience. The absence of such tools would also seriously inhibit widespread applications in fields of nanotechnology ranging from molecular electronics to biomolecular materials. To realize the unmistakable promise of theory, modeling, and simulation in overcoming fundamental challenges in nanoscience requires new human and computer resources.

Fundamental Challenges and Opportunities: With each fundamental intellectual and computational challenge that must be met in nanoscience come opportunities for research and discovery utilizing the approaches of theory, modeling, and simulation. In the broad topical areas of (1) nano building blocks (nanotubes, quantum dots, clusters, and nanoparticles), (2) complex nanostructures and nano-interfaces, and (3) the assembly and growth of nanostructures, the workshop identified a large number of theory, modeling, and simulation challenges and opportunities.
Among them are:
• to bridge electronic through macroscopic length and time scales
• to determine the essential science of transport mechanisms at the nanoscale
• to devise theoretical and simulation approaches to study nano-interfaces, which dominate nanoscale systems and are necessarily highly complex and heterogeneous
• to simulate with reasonable accuracy the optical properties of nanoscale structures and to model nanoscale opto-electronic devices
• to simulate complex nanostructures involving "soft" biologically or organically based structures and "hard" inorganic ones as well as nano-interfaces between hard and soft matter
• to simulate self-assembly and directed self-assembly
• to devise theoretical and simulation approaches to quantum coherence, decoherence, and spintronics
• to develop self-validating and benchmarking methods

The Role of Applied Mathematics: Since mathematics is the language in which theory is expressed and advanced, developments in applied mathematics are central to the success of theory, modeling, and simulation for nanoscience, and the workshop identified important roles for new applied mathematics in the above-mentioned challenges. Novel applied mathematics is required to formulate new theory and to develop new computational algorithms applicable to complex systems at the nanoscale. The discussion of applied mathematics at the workshop focused on three areas that are directly relevant to the central challenges of theory, modeling, and simulation in nanoscience: (1) bridging time and length scales, (2) fast algorithms, and (3) optimization and predictability. Each of these broad areas has a recent track record of developments from the applied mathematics community. Recent advances range from fundamental approaches, like mathematical homogenization (whereby reliable coarse-scale results are made possible without detailed knowledge of finer scales), to new numerical algorithms, like the fast-multipole methods that make very large scale molecular dynamics calculations possible. Some of the mathematics of likely interest (perhaps the most important mathematics of interest) is not fully knowable at present, but it is clear that collaborative efforts between scientists in nanoscience and applied mathematicians can yield significant advances central to a successful national nanoscience initiative.

The Opportunity for a New Investment: The consensus of the workshop is that the country's investment in the national nanoscience initiative will pay greater scientific dividends if it is accelerated by a new investment in theory, modeling, and simulation in nanoscience. Such an investment can stimulate the formation of alliances and teams of experimentalists, theorists, applied mathematicians, and computer and computational scientists to meet the challenge of developing a broad quantitative understanding of structure and dynamics at the nanoscale. The Department of Energy is uniquely situated to build a successful program in theory, modeling, and simulation in nanoscience. Much of the nation's experimental work in nanoscience is already supported by the Department, and new facilities are being built at the DOE national laboratories. The Department also has an internationally regarded program in applied mathematics, and much of the foundational work on mathematical modeling and computation has emerged from DOE activities. Finally, the Department has unique resources and experience in high performance computing and algorithms.
The combination of these areas of expertise makes the Department of Energy a natural home for nanoscience theory, modeling, and simulation.

Opportunities for Catalysis in the 21st Century

This report is based upon a Basic Energy Sciences Advisory Committee subpanel workshop that was held May 14-16, 2002, to identify research directions to better understand how to design catalyst structures to control catalytic activity and selectivity. Chemical catalysis affects our lives in myriad ways. Catalysis provides a means of changing the rates at which chemical bonds are formed and broken and of controlling the yields of chemical reactions to increase the amounts of desirable products from these reactions and reduce the amounts of undesirable ones. Thus, it lies at the heart of our quality of life: The reduced emissions of modern cars, the abundance of fresh food at our stores, and the new pharmaceuticals that improve our health are made possible by chemical reactions controlled by catalysts. Catalysis is also essential to a healthy economy: The petroleum, chemical, and pharmaceutical industries, contributors of $500 billion to the gross national product of the United States, rely on catalysts to produce everything from fuels to "wonder drugs" to paints to cosmetics. Today, our Nation faces a variety of challenges in creating alternative fuels, reducing harmful by-products in manufacturing, cleaning up the environment and preventing future pollution, dealing with the causes of global warming, protecting citizens from the release of toxic substances and infectious agents, and creating safe pharmaceuticals. Catalysts are needed to meet these challenges, but their complexity and diversity demand a revolution in the way catalysts are designed and used. This revolution can become reality through the application of new methods for synthesizing and characterizing molecular and material systems. Opportunities to understand and predict how catalysts work at the atomic scale and the nanoscale are now appearing, made possible by breakthroughs in the last decade in computation, measurement techniques, and imaging and by new developments in catalyst design, synthesis, and evaluation.

A Grand Challenge: In May 2002, a workshop entitled "Opportunities for Catalysis Science in the 21st Century" was conducted in Gaithersburg, Maryland. The impetus for the workshop grew out of a confluence of factors: the continuing importance of catalysis to the Nation's productivity and security, particularly in the production and consumption of energy and the associated environmental consequences, and the emergence of new research tools and concepts associated with nanoscience that can revolutionize the design and use of catalysts in the search for optimal control of chemical transformations. While research opportunities of an extraordinary variety were identified during the workshop, a compelling, unifying, and fundamental challenge became clear. Simply stated, the Grand Challenge for catalysis science in the 21st century is to understand how to design catalyst structures to control catalytic activity and selectivity.
The Present Opportunity: In his address to the 2002 meeting of the American Association for the Advancement of Science, Jack Marburger, the President's Science Advisor, spoke of the revolution that will result from our emerging ability to achieve an atom-by-atom understanding of matter and the subsequent unprecedented ability to design and construct new materials with properties that are not found in nature. "The revolution I am describing," he said, "is one in which the notion that everything is made of atoms finally becomes operational… We can actually see how the machinery of life functions, atom by atom. We can actually build atomic-scale structures that interact with biological or inorganic systems and alter their functions. We can design new tiny objects 'from scratch' that have unprecedented optical, mechanical, electrical, chemical, or biological properties that address needs of human society." Nowhere else can this revolution have such an immediate payoff as in the area of catalysis. By investing now in new methods for design, synthesis, characterization, and modeling of catalytic materials, and by employing the new tools of nanoscience, we will achieve the ability to design and build catalytic materials atom by atom, molecule by molecule, nanounit by nanounit.

The Importance of Catalysis Science to DOE: For the present and foreseeable future, the major source of energy for the Nation is found in chemical bonds. Catalysis affords the means of changing the rates at which chemical bonds are formed and broken. Catalysis also allows chemistry of extreme specificity, making it possible to select a desired product over an undesired one. Materials and materials properties lie at the core of almost every major issue that the U.S. Department of Energy (DOE) faces, including energy, stockpile stewardship, and environmental remediation. Much of the synthesis of new materials is certainly going to happen through catalysis. When scientists and engineers understand how to design catalysts to control catalytic chemistry, the effects on energy production and use and on the creation of exciting new materials will be profound.

A Recommendation for Increased Federal Investment in Catalysis Research: We are approaching a renaissance in catalysis science in this country. With the availability of exciting new laboratory tools for characterization, new designer approaches to synthesis, advanced computational capabilities, and new capabilities at user facilities, we have unparalleled potential for making significant advances in this vital and vibrant field. The convergence of the scientific disciplines that is a growing trend in the catalysis field is spawning new ideas that reach beyond conventional thinking. This revolution unfortunately comes at a time when industry has largely abandoned its support of basic research in catalysis. As the only Federal agency that supports catalysis as a discipline, DOE is uniquely positioned to lead the revolution. Our economy and our quality of life depend on catalytic processes that are efficient, clean, and effective. An increased investment in catalysis science in this country is not only important, it is essential. Successful research ventures in this area will have an impact on all levels of daily life, leading to enhanced energy efficiency for a range of fuels, reductions in harmful emissions, effective synthesis of new and improved drugs, enhanced homeland security and stockpile stewardship, and new materials with tailored properties.
Federal investment is vital for building the scientific workforce needed to address the challenging issues that lie ahead in this field — a workforce that comprises our best and brightest scientists, developing creative new ideas and approaches. This investment is also vital to ensuring that we have the best scientific tools possible for exploiting creative ideas, and that our scientists have ready access to these experimental and computational tools. These tools include both state-of-the-art instrumentation in individual investigator laboratories and unique instrumentation that is only available, because of its size and cost, at DOE's national user facilities.

Biomolecular Materials

This report is based upon the January 13-15, 2002, workshop sponsored by the Basic Energy Sciences Advisory Committee to explore the potential impact of biology on the physical sciences, in particular the materials and chemical sciences. Twenty-two scientists from around the nation and the world met to discuss the way that the molecules, structures, processes and concepts of the biological world could be used or mimicked in designing novel materials, processes or devices of potential practical significance. The emphasis was on basic research, although the long-term goal is, in addition to increased knowledge, the development of applications to further the mission of the Department of Energy. The charge to the workshop was to identify the most important and potentially fruitful areas of research in the field of Biomolecular Materials and to identify challenges that must be overcome to achieve success. This report summarizes the response of the workshop participants to this charge, and provides, by way of example, a description of progress that has been made in selected areas of the field. The participants felt that a DOE program in this area should focus on the development of a greater understanding of the underlying biology, and tools to manipulate biological systems both in vitro and in vivo rather than on the attempted identification of narrowly defined applications or devices. The field is too immature to be subject to arbitrary limitations on research and the exclusion of areas that could have great impact. These limitations aside, the group developed a series of recommendations. Three major areas of research were identified as central to the exploitation of biology for the physical sciences: 1) Self Assembled, Templated and Hierarchical Structures; 2) The Living Cell in Hybrid Materials Systems; and 3) Biomolecular Functional Systems. Workshop participants also discussed the challenges and impediments that stand in the way of our attaining the goal of fully exploiting biology in the physical sciences. Some are cultural, others are scientific and technical. Recommendations from the report are:

Program Relevance. In view of what has recently developed into a generally recognized opinion that biology offers a rich source of structures, functions and inspiration for the development of novel materials, processes and devices, support for this research should be a component of the broad Office of Basic Energy Sciences Program.

Broad Support. The field is in its early stages and is not as well defined as other areas. Thus, although it is recommended that support be focused in the three areas identified in this report, it should be broadly applied.
Good ideas in other areas proposed by investigators with good track records should be supported as well. There should not be an emphasis on "picking winning applications" because it is simply too difficult to reliably identify them at this time.

Support of the Underlying Biology. Basic research focused on understanding the biological structures and processes in areas that show potential for applications supporting the DOE mission should be supported.

Multidisciplinary Teams. Research undertaken by multidisciplinary teams across the spectrum of materials science, physics, chemistry and biology should be encouraged but not artificially arranged.

Training. Research that involves the training of students and postdocs in multiple disciplines, preferably co-advised by two or more senior investigators representing different relevant disciplines, should be encouraged without sacrificing the students' thorough studies within the individual disciplines.

Long-Term Investment. Returns, in terms of functioning materials, processes or devices, should not be expected in the very short term, although it can reasonably be assumed that applications will, as they have already, arise unexpectedly.

Basic Research Needs To Assure A Secure Energy Future

This report is based upon a Basic Energy Sciences Advisory Committee workshop that was held in October 2002 to assess the basic research needs for energy technologies to assure a reliable, economic, and environmentally sound energy supply for the future. The workshop discussions produced a total of 37 proposed research directions. Current projections estimate that the energy needs of the world will more than double by the year 2050. This is coupled with increasing demands for "clean" energy – sources of energy that do not add to the already high levels of carbon dioxide and other pollutants in the environment. These coupled challenges simply cannot be met by existing technologies. Major scientific breakthroughs will be required to provide reliable, economic solutions. The results of the BESAC workshop are a compilation of 37 Proposed Research Directions. At a higher level, these fell into ten general research areas, all of which are multidisciplinary in nature:
• Materials Science to Transcend Energy Barriers
• Energy Biosciences
• Basic Research Towards the Hydrogen Economy
• Innovative Energy Storage
• Novel Membrane Assemblies
• Heterogeneous Catalysis
• Fundamental Approaches to Energy Conversion
• Basic Research for Energy Utilization Efficiency
• Actinide Chemistry and Nuclear Fuel Cycles
• Geosciences
Nanoscale science, engineering, and technology were identified as cross-cutting areas where research may provide solutions and insights to long-standing technical problems and scientific questions. The need for developing quantitative predictive models was also identified in many cases, and this requires better understanding of the underlying fundamental mechanisms of the relevant processes. Often this in turn requires characterization with very high physical, chemical, structural, and temporal precision: DOE's existing world-leading user facilities currently provide these capabilities, and these capabilities must be continuously enhanced and new ones developed. In addition, requirements for theory, modeling, and simulation will demand advanced computational tools, including high-end computer user facilities.
All the participants agreed that the education of the next generation of research scientists is of crucial importance, and this should include making the importance of the energy security issue clear to everyone. It is clear that assuring the security of the energy supply for the U.S. over the next few decades will present major problems. There are a number of reasons for this. The most important of these is the current reliance on fossil fuels for a high proportion of the energy, of which a significant fraction is imported. The developing-world countries will have greatly increased needs for energy, in part because of the expected population increase, and in part because of the increase in their presently very low standards of living. A second problem is related to concerns over the environmental effects of the use of fossil fuels. Third, the peaking of the production of fossil fuels is likely within the next several decades. For these reasons, it is very important that the U.S. undertakes a vigorous research and development program to address the issues identified in this report. There are a number of actions that can help in the nearer term: increased efficiency in the conversion and use of energy; increased conservation; and aggressive environmental control requirements. However, while these may delay the major impact, they will not in the longer run provide the assured energy future that the U.S. requires. It is also clear that there is no single answer to this problem. There are several options that are available at the moment, and many – or indeed all – of them must be pursued. Basic research will make an important contribution to the solution to this problem by providing the basis on which entities, including DOE's applied missions programs, will develop new technological approaches, and by leading to the discovery of new concepts. The time between the basic research and its contribution to new or significantly improved technical solutions that can make major contributions to the future energy supply is often measured in decades. Major new discoveries are needed, and these will largely come from basic research programs. It is clear from the analysis presented in this report that there are a number of opportunities. Essentially all of these are interdisciplinary in character. The Office of Basic Energy Sciences should review its current research portfolio to assess how it is contributing to the research directions proposed by this study. BESAC expects, however, that a much larger effort will be needed than the current BES program. The magnitude of the energy challenge should not be underestimated. With major scientific discoveries and development of the underlying knowledge base, we must enable vast technological changes in the largest industry in the world (energy), and we must do it quickly. If we are successful, we will both assure energy security at home and promote peace and prosperity worldwide.

Recommendation: Considering the urgency of the energy problem, the magnitude of the needed scientific breakthroughs, and the historic rate of scientific discovery, current efforts will likely be too little, too late. Accordingly, BESAC believes that a new national energy research program is essential and must be initiated with the intensity and commitment of the Manhattan Project, and sustained until this problem is solved.
BESAC recommends that BES review its research activities and user facilities to make sure they are optimized for the energy challenge, and develop a strategy for a much more aggressive program in the future.

Basic Research Needs for Countering Terrorism

This report documents the results of the Department of Energy, Office of Basic Energy Sciences (BES) Workshop on Basic Research Needs to Counter Terrorism. This two-day Workshop, held in Gaithersburg, MD, February 28-March 1, 2002, brought together BES research participants and experts familiar with counter-terrorism technologies, strategies, and policies. The purpose of the workshop was to: (1) identify direct connections between technology needs for countering terrorism and the critical, underlying science issues that will impact our ability to address those needs and (2) recommend investment strategies that will increase the impact of basic research on our nation's efforts to counter terrorism. The workshop focused on science and technology challenges associated with our nation's need to detect, prevent, protect against, and respond to terrorist attacks involving Radiological and Nuclear, Chemical, and Biological threats. While the organizers and participants of this workshop recognize that the threat of terrorism is extremely broad, including food and water safety as well as protection of our public infrastructure, we necessarily limited the scope of our discussions to the principal weapons of mass destruction. In order to set the stage for the discussions of critical science and technology challenges, the workshop began with keynote and plenary lectures that provided a realistic context for understanding the broad challenges of countering terrorism. The plenary speakers emphasized the socio-political complexity of terrorism problems, reinforced the need for basic research in addressing these problems, and provided critical advice on how basic research can best contribute to our nation's needs. Their advice highlighted the need to:
• Invest Strategically – Focus on Cross-Cutting Research that has the potential to have an impact on a broad set of technology needs, thereby providing the greatest return on the research investment.
• Build Team Efforts – Countering terrorism will require broad, collaborative teams. The research community should focus on: (1) Research Environments and Infrastructures that encourage and enable cross-disciplinary science and technology teams to explore and integrate new scientific discoveries and (2) Exploring Relationships with Other Programs that will strengthen connections between new scientific advances and those groups responsible for technology development and implementation.
• Consider Dual Use – Identify areas of research that present significant Dual-Use Opportunities for application to countering terrorism and other complementary technology needs.
During the workshop, participants identified several critical technology needs and the underlying science challenges that, if met, can help to reduce the threat of terrorist attacks in the United States.
Some of the key technology needs and limitations that were identified include:
Detection – Nonintrusive, stand-off, and imaging detection systems; sampling from complex backgrounds and environments; inexpensive and field-deployable sensor systems; highly selective and ultra-sensitive detectors; early warning triggers for continuous monitoring
Prevention – Methods and materials to control, track, and reduce the availability of hazardous materials; techniques to rapidly characterize and attribute the source of terrorist threats
Protection – Personal protective equipment; light-weight barrier materials and fabrics; filtration systems; explosive containment structures; methods to protect people, animals, crops, and public spaces
Response – Coupled models and measurements that can predict fate and transport of toxic materials including pre-event background data; pre-symptomatic and point-of-care medical diagnostics; methods to immobilize and neutralize hazardous materials including self-cleaning and self-decontaminating surfaces
The workshop discussions of these technology needs and the underlying science challenges are fully documented in the major sections of this report. The results of these discussions, combined with the broad perspective and advice from our plenary speakers, were used to develop a set of high-level workshop recommendations. The following recommendations are offered to help guide our nation's basic research investments in order to maximize our ability to reduce the threat of terrorism.
• We recommend continuing or increasing funding for a selected set of research directions that are identified in the Workshop Summary and Recommendations (Section 5) of this report. These areas of research underpin many of the technologies that have high probability to impact our nation's ability to counter terrorism.
• New programs should be supported to stimulate the formation of, and provide needed resources for, cross-disciplinary and multi-institutional teams of scientists and technologists that are needed to address these critical problems. An important component of this strategy is investment in DOE national laboratories and user facilities because they can provide an ideal environment to carry out this highly collaborative work.
• Governmental organizations and agencies should explore their complementary goals and capabilities and, where appropriate, work to develop agreements that facilitate the formation of multi-organizational teams and the sharing of research and technology capabilities that will improve our nation's ability to counter the threat of terrorism.
• Increased emphasis should be placed on identifying dual-use applications for key counter-terrorism technologies. Efforts should be focused on building partnerships between government, university, and industry to capitalize on these opportunities.
In summary, this workshop made significant progress in identifying the basic research needs and in outlining a strategy to enhance the research community's ability to impact our nation's counter-terrorism needs. We wish to acknowledge the enthusiasm and hard work of all the workshop participants. Their extraordinary contributions were key to the success of this workshop, and their dedication to this endeavor provides strong evidence that the basic research community is firmly committed to supporting our nation's goal of reducing the threat of terrorism in the United States.
Workshop Presentations - February 28, 2002:
• The Role of Science and Technology in Countering Terrorism, Keynote Lecture by Jay Davis, LLNL
• Welcome and Brief Overview, Walter Stevens, BES
• Introduction and Purpose, Terry Michalske, SNL
• Radiological and Nuclear Threat Area, Michael Anastasio, LLNL
• Chemical Threat Area, Michael Sailor, UC San Diego
• Biological Threat Area, David Franz, Southern Research Institute

Complex Systems: Science for the 21st Century

This report is based upon a BES workshop, March 5-6, 1999, which was designed to help define new scientific directions related to complex systems in order to create new understanding about the nano world and complicated, multicomponent structures. As we look further into this century, we find science and technology at yet another threshold: the study of simplicity will give way to the study of "complexity" as the unifying theme. The triumphs of science in the past century, which improved our lives immeasurably, can be described as elegant solutions to problems reduced to their ultimate simplicity. We discovered and characterized the fundamental particles and the elementary excitations in matter and used them to form the foundation for interpreting the world around us and for building devices to work for us. We learned to design, synthesize, and characterize small, simple molecules and to use them as components of, for example, materials, catalysts, and pharmaceuticals. We developed tools to examine and describe these "simple" phenomena and structures. The new millennium will take us into the world of complexity. Here, simple structures interact to create new phenomena and assemble themselves into devices. Here also, large complicated structures can be designed atom by atom for desired characteristics. With new tools, new understanding, and a developing convergence of the disciplines of physics, chemistry, materials science, and biology, we will build on our 20th century successes and begin to ask and solve questions that were, until the 21st century, the stuff of science fiction. Complexity takes several forms. The workshop participants identified five emerging themes around which research could be organized.

Collective Phenomena — Can we achieve an understanding of collective phenomena to create materials with novel, useful properties? We already see the first examples of materials with properties dominated by collective phenomena — phenomena that emerge from the interactions of the components of the material and whose behavior thus differs significantly from the behavior of those individual components. In some cases collective phenomena can bring about a large response to a small stimulus — as seen with colossal magnetoresistance, the basis of a new generation of recording memory media. Collective phenomena are also at the core of the mysteries of such materials as the high-temperature superconductors.

Materials by Design — Can we design materials having predictable, and yet often unusual, properties? In the past century we discovered materials, frequently by chance, determined their properties, and then discarded those materials that did not meet our needs.
Now we will see the advent of structural and compositional freedoms that will allow the design of materials having specific desired characteristics directly from our knowledge of atomic structure. Of particular interest are "nanostructured" materials, with length scales between 1 and 100 nanometers. In this regime, dimensions "disappear," with zero-dimensional dots or nanocrystals, one-dimensional wires, and two-dimensional films, each with unusual properties distinctly different from those of the same material with "bulk" dimensions. We could design materials for lightweight batteries with high storage densities, for turbine blades that can operate at 2500°C, and perhaps even for quantum computing.

Functional Systems — Can we design and construct multicomponent molecular devices and machines? We have already begun to use designed building blocks to create self-organized structures of previously unimagined complexity. These will form the basis of systems such as nanometer-scale chemical factories, molecular pumps, and sensors. We might even stretch and think of self-assembling electronic/photonic devices.

Nature's Mastery — Can we harness, control, or mimic the exquisite complexity of Nature to create new materials that repair themselves, respond to their environment, and perhaps even evolve? This is, perhaps, the ultimate goal. Nature tells us it can be done and provides us with examples to serve as our models. We learn about Nature's design rules and try to mimic green plants which capture solar energy, or genetic variation as a route to "self-improvement" and optimized function. These concepts may seem fanciful, but with the revolution now taking place in biology, progressing from DNA sequence to structure and function, the possibilities seem endless. Nature has done it. Why can't we?

New Tools — Can we develop the characterization instruments and the theory to help us probe and exploit this world of complexity? Radical enhancement of existing techniques and the development of new ones will be required for the characterization and visualization of structures, properties, and functions — from the atomic, to the molecular, to the nanoscale, to the macroscale. Terascale computing will be necessary for the modeling of these complex systems.

Now is the time. We can now do this research, make these breakthroughs, and enhance our lives as never before imagined. The work of the past few decades has taken us to this point, solving many of the problems that underlie these challenges, teaching us how to approach problems of complexity, giving us the confidence needed to achieve these goals. This work also gave us the ability to compute on our laps with more power than available to the Apollo astronauts on their missions to the moon. It taught us to engineer genes, "superconduct" electricity, visualize individual atoms, build "plastics" ten times stronger than steel, and put lasers on chips for portable CD players. We are ready to take the next steps.

Complexity pays dividends. We think of simple silicon for semiconductors, but our CD players depend on dozens of layers of semiconductors made of aluminum, gallium, and arsenic. Copper conducts electricity and iron is magnetic. Superconductors and giant magnetoresistive materials have eight or more elements, all of which are essential and interact with one another to produce the required properties. Nature, too, shows us the value of complexity.
Hemoglobin, the protein that transports oxygen from the lungs to, for example, the brain, is made up of four protein subunits which interact to vastly increase the efficiency of delivery. As individual subunits, these proteins cannot do the job.

The new program. The very nature of research on complexity makes it a "new millennium" program. Its foundations rest on four pillars: physics, chemistry, materials science, and biology. Success will require an unprecedented level of interdisciplinary collaboration. Universities will need to break down barriers between established departments and encourage the development of teams across disciplinary lines. Interactions between universities and national laboratories will need to be increased, both in the use of the major facilities at the laboratories and also through collaborations among research programs. Finally, understanding the interactions among components depends on understanding the components themselves. Although a great deal has been accomplished in this area in the past few decades, far more remains to be done. A complexity program will complement the existing programs and will ensure the success of both. The benefits are, as they have been at the start of all previous scientific "revolutions," beyond anything we can now foresee.

Nanoscale Science, Engineering and Technology Research Directions

This report illustrates the wide range of research opportunities and challenges in nanoscale science, engineering and technology. It was prepared in 1999 in connection with the interagency national research initiative on nanotechnology. The principal missions of the Department of Energy (DOE) in Energy, Defense, and Environment will benefit greatly from future developments in nanoscale science, engineering and technology. For example, nanoscale synthesis and assembly methods will result in significant improvements in solar energy conversion; more energy-efficient lighting; stronger, lighter materials that will improve efficiency in transportation; greatly improved chemical and biological sensing; use of low-energy chemical pathways to break down toxic substances for environmental remediation and restoration; and better sensors and controls to increase efficiency in manufacturing. The DOE's Office of Science has a strong focus on nanoscience discovery, the development of fundamental scientific understanding, and the conversion of these into useful technological solutions. A key challenge in nanoscience is to understand how deliberate tailoring of materials on the nanoscale can lead to novel and enhanced functionalities. The DOE National Laboratories are already making a broad range of contributions in this area. The enhanced properties of nanocrystals for novel catalysts, tailored light emission and propagation, and supercapacitors are being explored, as are hierarchical nanocomposite structures for chemical separations, adaptive/responsive behavior and impurity gettering. Nanocrystals and layered structures offer unique opportunities for tailoring the optical, magnetic, electronic, mechanical and chemical properties of materials. The Laboratories are currently synthesizing layered structures for electronics/photonics, novel magnets and surfaces with tailored hardness.
This report supplies numerous other examples of new properties and functionalities that can be achieved through nanoscale materials control. These include:
• Nanoscale layered materials that can yield a four-fold increase in the performance of permanent magnets;
• Addition of aluminum oxide nanoparticles that converts aluminum metal into a material with wear resistance equal to that of the best bearing steel;
• New optical properties achieved by fabricating photonic band gap superlattices to guide and switch optical signals with nearly 100% transmission, in very compact architectures;
• Layered quantum well structures to produce highly efficient, low-power light sources and photovoltaic cells;
• Novel optical properties of semiconducting nanocrystals that are used to label and track molecular processes in living cells;
• Novel chemical properties of nanocrystals that show promise as photocatalysts to speed the breakdown of toxic wastes;
• Meso-porous inorganic hosts with self-assembled organic monolayers that are used to trap and remove heavy metals from the environment; and
• Meso-porous structures integrated with micromachined components that are used to produce high-sensitivity and highly selective chip-based detectors of chemical warfare agents.
These and other nanostructures are already recognized as likely key components of 21st century optical communications, printing, computing, chemical sensing and energy conversion technologies. The DOE is well prepared to make major contributions to developing nanoscale scientific understanding, and ultimately nanotechnology, through its materials characterization, synthesis, in situ diagnostic and computing capabilities. The DOE and its National Laboratories maintain a large array of major national user facilities that are ideally suited to nanoscience discovery and to developing a fundamental understanding of nanoscale processes. Synchrotron and neutron sources provide exquisite energy control of radiation sources that are able to probe structure and properties on length scales ranging from Ångstroms to millimeters. Scanning Probe Microscope (SPM) and Electron Microscopy facilities provide unique capabilities for characterizing nanoscale materials and diagnosing processes. DOE also maintains synthesis and prototype manufacturing centers where fundamental and applied research, technology development and prototype fabrication can be pursued simultaneously. Finally, the large computational facilities at the DOE National Laboratories can be key contributors in nanoscience discovery, modeling and understanding. In order to increase the impact of major DOE facilities on the national nanoscience and technology initiative, it is proposed to establish several new Nanomaterials Research Centers. These Centers are intended to exploit and be associated with existing radiation sources and materials characterization and diagnostic facilities at DOE National Laboratories. Each Center would focus on a different area of nanoscale research, such as materials derived from or inspired by nature; hard and crystalline materials, including the structure of macromolecules; magnetic and soft materials, including polymers and ordered structures in fluids; and nanotechnology integration. The Nanomaterials Research Centers will facilitate interdisciplinary research and provide an environment where students, faculty, industrial researchers and national laboratory staff can work together to rapidly advance nanoscience discovery and its application to nanotechnology.
Establishment of these Centers will permit focusing DOE resources on the most important nanoscale science questions and technology needs, and will ensure strong coupling with the national nanoscience initiative. The synergy of these DOE assets in partnership with universities and industry will provide the best opportunity for nanoscience discoveries to be converted rapidly into technological advances that will meet a variety of national needs and enable the United States to reap the benefits of a technological revolution.
Sharp boundary behaviour of solutions to semilinear nonlocal elliptic equations

We investigate quantitative properties of nonnegative solutions to the semilinear diffusion equation , posed in a bounded domain with appropriate homogeneous Dirichlet or outer boundary conditions. The operator may belong to a quite general class of linear operators that include the standard Laplacian, the two most common definitions of the fractional Laplacian () in a bounded domain with zero Dirichlet conditions, and a number of other nonlocal versions. The nonlinearity is increasing and looks like a power function , with . The aim of this paper is to show sharp quantitative boundary estimates based on a new iteration process. We also prove that, in the interior, solutions are Hölder continuous and even classical (when the operator allows for it). In addition, we get Hölder continuity up to the boundary. Particularly interesting is the behaviour of solutions when the number goes below the exponent corresponding to the Hölder regularity of the first eigenfunction . Indeed a change of boundary regularity happens in the different regimes , and in particular a logarithmic correction appears in the “critical” case . For instance, in the case of the spectral fractional Laplacian, this surprising boundary behaviour appears in the range .

Keywords. Nonlocal equations of elliptic type, nonlinear elliptic equations, bounded domains, a priori estimates, positivity, boundary behavior, regularity, Harnack inequalities.

Mathematics Subject Classification. 35B45, 35B65, 35J61, 35K67.

Matteo Bonforte. Departamento de Matemáticas, Universidad Autónoma de Madrid, Campus de Cantoblanco, 28049 Madrid, Spain. e-mail address:
Alessio Figalli. ETH Zürich, Department of Mathematics, Rämistrasse 101, 8092 Zürich, Switzerland. E-mail:
Juan Luis Vázquez. Departamento de Matemáticas, Universidad Autónoma de Madrid,

1 Introduction

In this paper we address the question of obtaining a priori estimates, positivity, upper and lower boundary behaviour, Harnack inequalities, and regularity for nonnegative solutions to Semilinear Elliptic Equations of the form where is a bounded domain with smooth boundary, , is a monotone nondecreasing function with , and is a linear operator, possibly of nonlocal type (the basic examples being the fractional Laplacian operators, but the classical Laplacian operator is also included). Since the problem is posed in a bounded domain we need boundary conditions, or exterior conditions in the nonlocal case, that we assume of Dirichlet type and will be included in the functional definition of the operator . This theory covers a quite large class of local and nonlocal operators and nonlinearities. The operators include the three most common choices of fractional Laplacian operator with Dirichlet conditions but also many other operators that are described in Section 2, see also [9, 6]. In fact, the interest of the theory we develop lies in the wide applicability. The problem is posed in the context of weak dual solutions, which has been proven to be very convenient for the parabolic theory, and is also convenient in the elliptic case. The focus of the paper is obtaining a priori estimates and regularity. The a priori estimates are upper bounds for solutions of both signs and lower bounds for nonnegative solutions.
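In schematic form (notation introduced here for illustration, following the description above: $\mathcal{L}$ for the linear operator, $\Omega\subset\mathbb{R}^N$ for the bounded domain, $f$ for the nonlinearity), the prototype problem under study reads
• $\mathcal{L}u = f(u)$ in $\Omega$, with $u = 0$ on the boundary (or in $\mathbb{R}^N\setminus\Omega$ for nonlocal operators),
where $f$ is monotone nondecreasing, $f(0)=0$, and $f(u)\asymp u^p$ for small $u$ with some $0 < p \le 1$; the case $p=1$ corresponds to the eigenvalue problem and $0<p<1$ to the sublinear problem discussed below.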
A basic principle in the paper is that sharp boundary estimates may depend not only on but also on the behaviour of the nonlinearity near . For this reason we assume that the nonlinearity looks like a power with linear or sublinear growth, namely for some when , and in that case we identify the range of parameters where the more complicated behaviour happens. We point out that, for nonnegative solutions, our quantitative inequalities produce sharp behaviour in the interior and near the boundary, both in the case (the eigenvalue problem) and when , (the sublinear problem). Our upper and lower bounds will be formulated in terms of the first eigenfunction of , that under our assumptions will behave like for a certain characteristic power , cf. Section 2. This constant plays a big role in the theory. Apart from its own interest, the motivation for this paper comes from companion papers, [9, 6]. In [9] a theory for a general class of nonnegative very weak solutions of the parabolic equation is built, while in [6] we address the parabolic regularity theory: positivity, sharp boundary behaviour, Harnack inequalities, sharp Hölder continuity and higher regularity. The proof of such parabolic results relies in part on the elliptic counterparts contained in this paper. In this paper we concentrate our efforts on the study of the sublinear case , since we are motivated by the study of the Porous Medium Equation of the companion paper [6], see also Subsection 6.1.1. The boundary behaviour when is indeed the same as for .

Notation. Let us indicate here some notation of general use. The symbol will always denote . We also use the notation whenever there exist constants such that . We use the symbols and . We will always consider bounded domains with smooth boundary, at least . The question of possible lower regularity of the boundary is not addressed here.

2 Basic assumptions and notation

In view of the close relation of this study with the parabolic problem, most of the assumptions on the class of operators are the same as in [9] and [6]. We list them for definiteness and we refer to the references for comments and explanations.

Basic assumptions on . The linear operator is assumed to be densely defined and sub-Markovian, more precisely satisfying (A1) and (A2) below:
1. is -accretive on ,
2. If then .
The latter can be equivalently written as
1. If is a maximal monotone graph in with , , , , , a.e., then
Such assumptions are the starting hypotheses proposed in the paper [9] in order to deal with the parabolic problem . Further theory depends on finer properties of the representation kernel of , as follows.

Assumptions on . In order to prove our quantitative estimates, we need to be more specific about the operator . Besides satisfying (A1) and (A2), we will assume that it has a left-inverse with a kernel such that and that moreover satisfies at least one of the following estimates, for some :
- There exists a constant such that for a.e. :
- There exist constants , such that for a.e. :
where we adopt the notation . Hypothesis (K2) introduces an exponent , which is a characteristic of the operator and will play a big role in the results. Notice that defining an inverse operator implies that we are taking into account the Dirichlet boundary conditions.
- The lower bound of assumption (K2) is weaker than the best known estimate on the Green function for many examples under consideration; a stronger inequality holds in many cases:

The role of the first eigenfunction of .
Under the assumption (K1) it is possible to show that the operator has a first nonnegative and bounded eigenfunction , satisfying for some , cf. Proposition 5.1. As a consequence of (K2), we show in Proposition 5.3 that the first eigenfunction satisfies hence it encodes the parameter , which takes care of describing the boundary behaviour, as first noticed in [8]. We will also show that all possible eigenfunctions of satisfy the bound , cf. Proposition 5.4. Recall that we are assuming that the boundary of the domain is smooth enough, for instance . In view of (2.1), we can rewrite (K2) and (K4) in the following equivalent forms: There exist constants , such that for a.e. : We keep the labels (K2), (K4), (K3) and (K5) to be consistent with the papers [9, 6].

2.1 Main Examples

The theory applies to a number of operators, mainly nonlocal but also local. We will just list the main cases with some comments, since we have already presented a detailed exposition in [9, 6] that applies here. In all the examples below, the operators satisfy assumptions and and . As far as fractional Laplacians are concerned, there are at least three different and non-equivalent operators when working on bounded domains, that we call the Restricted Fractional Laplacian (RFL), the Spectral Fractional Laplacian (SFL) and the Censored Fractional Laplacian (CFL), see Section 3 of [9] and Section 2.1 of [6]. A good functional setup both for the SFL and the RFL in the framework of fractional Sobolev spaces can be found in [7]. For the application of our results to these cases, it is important to recall that for the RFL , for the CFL and , while for SFL and . There are a number of other operators to which our theory applies: (i) Fractional operators with more general kernels of RFL and CFL type, under some assumptions on the kernel; (ii) Spectral powers of uniformly elliptic operators with coefficients; (iii) Sums of two fractional operators; (iv) Sum of the Laplacian and a nonlocal operator of Lévy-type; (v) Schrödinger equations for non-symmetric diffusions; (vi) Gradient perturbation of restricted fractional Laplacians; (vii) Relativistic stable processes; and many more examples. These examples are presented in detail in Section 3 of [9] and Section 10 of [6]. Finally, it is worth mentioning that our arguments readily extend to operators on manifolds for which the required bounds on hold.

3 Outline of the paper and main results

In this section we give an overview of the results that we obtain in this paper. Although the first two examples (the linear problem and the eigenvalue problem) are easier and rather standard, some of the results proved in these settings are preparatory for the semilinear problem , which is the main focus of this paper. In addition, since we could not find a precise reference for (i) and (ii) below in our generality, we present all the details.

(i) The linear equation. We consider the linear problem with with , and we show that nonnegative solutions behave at the boundary as follows where is the function defined in (4.4), and depends on the value of , while depend only on . See details in Section 4.

(ii) Eigenvalue problem. We prove a set of a priori estimates for the eigenfunctions, i.e. solutions of the Dirichlet problem for the equation . We first prove that, under assumption (K1), eigenfunctions exist and are bounded, see Proposition 5.1 and Lemma 5.2. Then, under assumption (K2), we show the boundary estimates; see Section 5 for more details and results.
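In the notation of this paper (reconstructed here for illustration, writing $\Phi_1$ for the first eigenfunction, $\gamma$ for the characteristic boundary exponent of the operator, and $\delta(x)=\mathrm{dist}(x,\partial\Omega)$), the boundary estimate referred to in (ii) takes the two-sided form
• $c_1\,\delta(x)^{\gamma}\le \Phi_1(x)\le c_2\,\delta(x)^{\gamma}$ for $x\in\Omega$,
with positive constants $c_1,c_2$; this is the sense in which the first eigenfunction "encodes" the parameter $\gamma$ that describes the boundary behaviour.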
Boundary estimates have been proven in various settings, especially for the common fractional operators (RFL and SFL), see for instance [4, 7, 13, 18, 17, 20, 23, 28, 29, 30, 31, 33]. (iii) Semilinear equations. This is the core of the paper, and our main result concerns sharp boundary behaviour. In Section 6 we show that all nonnegative solutions to the semilinear equation (1.1) with , , satisfy the following sharp estimates whenever : Here depend only on , and the exponent is given by Note that (i.e., it is independent of ) whenever . In particular the "exceptional value" appears neither when nor when , nor in the case of the RFL or CFL. See also the survey [27]. When (i.e. in the limit case ) a logarithmic correction appears, and we prove the following sharp estimate: (iv) Regularity. In Section 7 we prove that, both in the linear and semilinear case, solutions are Hölder continuous and even classical in the interior (whenever the operator allows it). In addition we prove that they are Hölder continuous up to the boundary with a sharp exponent. Regularity estimates have been extensively studied: as far as interior Hölder regularity is concerned, see for instance [18, 25, 2, 15, 16, 32, 34]; for boundary regularity see [17, 23, 28, 29, 30, 31]; for interior Schauder estimates, see [3, 22]. Remark. The results apply without changes in dimension when . Method and generality. The usual approach to proving a priori estimates for both linear and semilinear equations relies on the De Giorgi-Nash-Moser technique, exploiting energy estimates and Sobolev and Stroock-Varopoulos inequalities. In addition, extension methods à la Caffarelli-Silvestre [14] turn out to be very useful. However, due to the generality of the class of operators considered here, such an extension is not always possible. Hence, we develop a new approach in which we concentrate on the properties of the Green function of . In particular, once good linear estimates for the Green function are known, we proceed through a delicate iteration process to establish the sharp boundary behaviour of solutions even in a nonlinear setting, see Propositions 5.3 and 6.5, and Lemmata 6.6, 6.7 and 6.8. 4 The linear problem. Potential and boundary estimates. In this section we prove estimates on the boundary behaviour of solutions to the linear elliptic problem with zero Dirichlet boundary conditions The solution to this problem is given by the representation formula whenever with . This representation formula is compatible with the concept of weak dual solution that we shall use in the semilinear problem, see Section 6; this can be easily seen by using the definition of weak dual solution and approximating the Green function by means of admissible test functions, analogously to what is done in Subsection 6.3. In the case of the SFL and/or of powers of elliptic operators with continuous coefficients, boundary estimates were obtained in [17, 18, 33], and for the RFL and CFL see [25, 27] and references therein. See also Section 3.3 of [9] for more examples and references. The main result of this section is the following theorem. Theorem 4.1 Let be the kernel of , and assume . Let be a weak dual solution of the Dirichlet Problem (4.1), corresponding to with . Then there exist positive constants depending on such that the following estimates hold true where , and is defined as follows: Remark on the existence of eigenfunctions.
Under assumption (K1) on the kernel of we have existence of a positive and bounded eigenfunction , see Subsections 5.1 and 5.2; if we further assume (K2) then , cf. Subsection 5.3 for further details. The proof of the theorem is a simple consequence of the following lemma. Lemma 4.2 (Green function estimates I) Let be the kernel of , and assume that holds. Then, for all , there exists a constant such that Moreover, if (K2) holds, then for the same range of there exists a constant such that, for all , where is defined as in (4.4). Finally, for all , implies that The constants , , depend only on , and have an explicit expression given in the proof. Proof of Theorem 4.1. Thanks to (4.5), the formula makes sense for with . Now the lower bound is given in (4.7), while the upper bound follows by (4.6) and the Hölder inequality: Proof of Lemma 4.2. We split the proof into three steps. Step 1. Proof of estimate (4.5). As a consequence of assumption (K1) we obtain where we used that and the notation (recall that by assumption ). Step 2. Proof of estimate (4.6). We first prove the lower bound of inequality (4.6). This follows directly from (K2) (see also the equivalent form (K3)): We next prove the upper bounds of inequality (4.6). Let us fix , and define so that for any we have . Notice that it is not restrictive to assume , since we are focusing here on the boundary behaviour, i.e. when (note that when we already have estimates (4.5)). Recall now the upper part of the (K2) estimates, which can be rewritten in the form so that We consider three cases, depending on whether is positive, negative, or zero. - We first analyze the case when ; recalling that , we have - Next we analyze the case when ; using again that , we get - Finally we analyze the case when ; again since , it holds (note that, since we are assuming , ). The proof of the upper bound (4.6) is now complete. Step 3. Proof of estimates (4.7). For all , and , the lower bound in (K2) implies 5 The eigenvalue problem In this section we will focus our attention on the eigenvalue problem for . It is clear, by standard spectral theory, that the eigenelements of and are the same. We hence focus our study on the "dual" problem for . We are going to prove first that assumption (K1) is sufficient to ensure that the self-adjoint operator is compact, hence it possesses a discrete spectrum. Then we show that eigenfunctions are bounded. Finally, as a consequence of the stronger assumption (K2), we will obtain the sharp boundary behaviour of the first positive eigenfunction , and also optimal boundary estimates for all the other eigenfunctions, namely we prove also that . 5.1 Compactness and existence of eigenfunctions. Let be the kernel of , and assume that holds. Under this assumption we show that is compact, hence it has a discrete spectrum. Proposition 5.1 Assume that satisfies and , and that its inverse satisfies . Then the operator is compact. As a consequence, possesses a discrete spectrum, denoted by , with as . Moreover, there exists a first eigenfunction and a first positive eigenvalue , such that As a consequence, the following Poincaré inequality holds: Proof of Proposition 5.1. The proof is divided into several steps. We first prove that the self-adjoint operator is bounded. Step 1. Boundedness of .
We shall prove the following inequality: there exists a constant such that for all we have For this, we have The last inequality holds because we know that where is the inverse of , the transpose isomorphism of the restriction operator ; we recall that , so that gives the isomorphism We refer to [26] for further details, see also Section 7.7 of [7]. Next we recall the Poincaré inequality that holds for the Restricted Fractional Laplacian, cf. [4, 19, 31] and also [7, 9]. For all such that , we have that there exists a constant such that We apply the above inequality to , to get Combining inequalities (5.5) and (5.7) we obtain (5.4) with . Step 2. The Rayleigh quotient is bounded below: Poincaré inequality. We can compute where we have used the Cauchy-Schwarz inequality and inequality (5.4) of Step 1. The above inequality clearly implies that , and also proves the Poincaré inequality (5.3). Step 3. Compactness. Fix small and set Note that, by (K1), we can bound Thus, for all and all we have Now, by Young's convolution inequality, the last two terms can be bounded by Also, by the Hölder inequality we have Note that, because of , is bounded and therefore it belongs to . Thus, for fixed, it holds that as , therefore Recalling (5.9), this proves that and since is arbitrary we obtain Since is linear, thanks to the Riesz-Fréchet-Kolmogorov theorem we have proved that the image of any ball in is compact in with respect to the strong topology. Hence the operator is compact and has a discrete spectrum. Step 4. The first eigenfunction and the Poincaré inequality. The first eigenfunction exists in view of the previous step. Finally, the minimality property (5.2) follows by standard arguments and implies both the non-negativity of and the Poincaré inequality (5.3). 5.2 Boundedness of eigenfunctions. We now show that, under the only assumption (K1), all the eigenfunctions are bounded, namely there exists , depending only on and , such that Recall that we are considering eigenfunctions normalized in . The key point to obtain such bounds is that the absolute value of an eigenfunction satisfies an integral inequality: Thus satisfies the hypothesis of Lemma 5.2 below. Lemma 5.2 Assume that satisfies and , and that its inverse satisfies . If is nonnegative and satisfies then there exists a constant such that the following sharp upper bound holds true: Proof. The boundedness follows from the Hardy-Littlewood-Sobolev (HLS) inequality, through a finite iteration. The HLS inequality reads , see [26] or [7] and references therein. We will use the HLS inequality in the following iterative form: Indeed, for all , as a consequence of (K1) we have that
Hamilton-Jacobi-Einstein Equation

In general relativity, the Hamilton-Jacobi-Einstein equation (HJEE) or Einstein-Hamilton-Jacobi equation (EHJE) is an equation in the Hamiltonian formulation of geometrodynamics in superspace, cast in the "geometrodynamics era" around the 1960s, by Asher Peres in 1962 and others.[1] It is an attempt to reformulate general relativity in such a way that it resembles quantum theory within a semiclassical approximation, much like the correspondence between quantum mechanics and classical mechanics. It is named for Albert Einstein, Carl Gustav Jacob Jacobi, and William Rowan Hamilton. The EHJE contains as much information as all ten Einstein field equations (EFEs).[2] It is a modification of the Hamilton-Jacobi equation (HJE) from classical mechanics, and can be derived from the Einstein-Hilbert action using the principle of least action in the ADM formalism.

Background and motivation

Correspondence between classical and quantum physics

In classical analytical mechanics, the dynamics of the system is summarized by the action S. In quantum theory, namely non-relativistic quantum mechanics (QM), relativistic quantum mechanics (RQM), as well as quantum field theory (QFT), with varying interpretations and mathematical formalisms in these theories, the behavior of a system is completely contained in a complex-valued probability amplitude ψ (more formally as a quantum state ket - an element of a Hilbert space). Using the polar form of the wave function, so making a Madelung transformation, the phase of ψ is interpreted as the action, and the modulus ρ = |ψ| is interpreted according to the Copenhagen interpretation as the probability density function. The reduced Planck constant ħ is the quantum of angular momentum. Substitution of this into the quantum general Schrödinger equation (SE), and taking the limit ħ → 0, yields the classical HJE, which is one aspect of the correspondence principle.
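The displayed equations did not survive the page extraction here, so the step just described can be restated explicitly. The following is a standard textbook reconstruction (the notation is mine, not necessarily that of the original article): writing the wave function in polar form and substituting it into the Schrödinger equation,

$\psi(\mathbf{x},t) = \rho(\mathbf{x},t)\, e^{iS(\mathbf{x},t)/\hbar}, \qquad i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi ,$

the real part of the resulting equation reads

$\partial_t S + \frac{|\nabla S|^2}{2m} + V = \frac{\hbar^2}{2m}\,\frac{\nabla^2\rho}{\rho},$

and in the limit $\hbar \to 0$ the right-hand side drops out, leaving the classical Hamilton-Jacobi equation $\partial_t S + \frac{|\nabla S|^2}{2m} + V = 0$.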
Shortcomings of four-dimensional spacetime

On the other hand, the transition between quantum theory and general relativity (GR) is difficult to make; one reason is the treatment of space and time in these theories. In non-relativistic QM, space and time are not on equal footing; time is a parameter while position is an operator. In RQM and QFT, position returns to the usual spatial coordinates alongside the time coordinate, although these theories are consistent only with SR in four-dimensional flat Minkowski space, and not curved space nor GR. It is possible to formulate quantum field theory in curved spacetime, yet even this still cannot incorporate GR because gravity is not renormalizable in QFT.[3] Additionally, in GR particles move through curved spacetime with a deterministically known position and momentum at every instant, while in quantum theory, the position and momentum of a particle cannot be exactly known simultaneously; space x and momentum p, and energy E and time t, are pairwise subject to the uncertainty principles, which imply that small intervals in space and time mean large fluctuations in energy and momentum are possible. Since in GR mass-energy and momentum-energy are the source of spacetime curvature, large fluctuations in energy and momentum mean the spacetime "fabric" could potentially become so distorted that it breaks up at sufficiently small scales.[4] There is theoretical and experimental evidence from QFT that the vacuum does have energy, since the motion of electrons in atoms fluctuates; this is related to the Lamb shift.[5] For these reasons and others, at increasingly small scales, space and time are thought to be dynamical up to the Planck length and Planck time scales.[4] In any case, a four-dimensional curved spacetime continuum is a well-defined and central feature of general relativity, but not in quantum mechanics.

One attempt to find an equation governing the dynamics of a system, in as close a way as possible to QM and GR, is to reformulate the HJE in three-dimensional curved space understood to be "dynamic" (changing with time), and not four-dimensional spacetime dynamic in all four dimensions, as the EFEs are. The space has a metric (see metric space for details). The metric tensor in general relativity is an essential object, since proper time, arc length, geodesic motion in curved spacetime, and other things, all depend on the metric. The HJE above is modified to include the metric, although it is only a function of the 3d spatial coordinates r (for example r = (x, y, z) in Cartesian coordinates), without the coordinate time t. In this context gij is referred to as the "metric field" or simply "field".

General equation (free curved space)

For a free particle in curved "empty space" or "free space", i.e. in the absence of matter other than the particle itself, the equation can be written:[6][7][8] where g is the determinant of the metric tensor and R the Ricci scalar curvature of the 3d geometry (not including time), and the "δ" instead of "d" denotes the variational derivative rather than the ordinary derivative. These derivatives correspond to the field momenta "conjugate to the metric field": the rate of change of action with respect to the field coordinates gij(r). The g and π here are analogous to q and p = ∂S/∂q, respectively, in classical Hamiltonian mechanics. See canonical coordinates for more background. The equation describes how wavefronts of constant action propagate in superspace - as the dynamics of matter waves of a free particle unfolds in curved space. Additional source terms are needed to account for the presence of extra influences on the particle, which include the presence of other particles or distributions of matter (which contribute to space curvature), and sources of electromagnetic fields affecting particles with electric charge or spin. Like the Einstein field equations, it is non-linear in the metric because of the products of the metric components, and like the HJE it is non-linear in the action due to the product of variational derivatives in the action. The quantum mechanical concept, that action is the phase of the wavefunction, can be interpreted from this equation as follows. The phase has to satisfy the principle of least action; it must be stationary for a small change in the configuration of the system, in other words for a slight change in the position of the particle, which corresponds to a slight change in the metric components; the slight change in phase is zero (where d³r is the volume element of the volume integral). So the constructive interference of the matter waves is a maximum.
This can be expressed by the superposition principle; applied to many non-localized wavefunctions spread throughout the curved space to form a localized wavefunction: for some coefficients cn, and additionally the action (phase) Sn for each ψn must satisfy: for all n, or equivalently, Regions where ψ is maximal or minimal occur at points where there is a high probability of finding the particle there, and where the action (phase) change is zero. So in the EHJE above, each wavefront of constant action is where the particle could be found. This equation still does not "unify" quantum mechanics and general relativity, because the semiclassical Eikonal approximation in the context of quantum theory and general relativity has been applied to provide a transition between these theories. The equation takes various complicated forms in:

References

1. A. Peres (1962). "On Cauchy's problem in general relativity - II". Nuovo Cimento. 26 (1). Springer. pp. 53-62. doi:10.1007/BF02754342.
2. U.H. Gerlach (1968). "Derivation of the Ten Einstein Field Equations from the Semiclassical Approximation to Quantum Geometrodynamics". Physical Review. 177 (5): 1929-1941. Bibcode:1969PhRv..177.1929G. doi:10.1103/PhysRev.177.1929.
3. A. Shomer (2007). "A pedagogical explanation for the non-renormalizability of gravity". arXiv:0709.3555 [hep-th].
4. R.G. Lerner; G.L. Trigg (1991). Encyclopaedia of Physics (2nd ed.). VHC Publishers. p. 1285. ISBN 978-0-89573-752-6.
5. J.A. Wheeler, C. Misner, K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. p. 1190. ISBN 978-0-7167-0344-0.
6. J.A. Wheeler, C. Misner, K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. p. 1188. ISBN 978-0-7167-0344-0.
7. J. Mehra (1973). The Physicist's Conception of Nature. Springer. p. 224. ISBN 978-90-277-0345-3.
8. J.J. Halliwell; J. Pérez-Mercader; W.H. Zurek (1996). Physical Origins of Time Asymmetry. Cambridge University Press. p. 429. ISBN 978-0-521-56837-1.
LOG#114. Bohr's legacy (II).

Dedicated to Niels Bohr and his atomic model. 2nd part: Electron shells, Quantum Mechanics and The Periodic Table.

Niels Bohr (1923) was the first to propose that the periodicity in the properties of the chemical elements might be explained by the electronic structure of the atom. In fact, his early proposals were based on his own "toy-model" (Bohr atom) for the hydrogen atom, in which the electron shells were orbits at a fixed distance from the nucleus. Bohr's original configurations would seem strange to a present-day chemist: the sulfur atom was given a shell structure of (2,4,4,6) instead of 1s^22s^22p^63s^23p^4, the right structure being (2,8,6). The following year, E.C. Stoner incorporated Sommerfeld's corrections into the electron configuration rules, thus bringing the third quantum number into the description of electron shells, and this correctly predicted the shell structure of sulfur to be the now celebrated (2,8,6). However, neither Bohr's system nor Stoner's could correctly describe the changes in atomic spectra in a magnetic field (known as the Zeeman effect). We had to wait for the complete Quantum Mechanics formalism to arise in order to have a description of this atomic phenomenon and many others (like the Stark effect, the splitting of spectra due to an electric field). Bohr was well aware of all this stuff. Indeed, he had written to his friend Wolfgang Pauli to ask for his help in saving quantum theory (the system now known as "old quantum theory"). Pauli realized that the Zeeman effect could be due only to the outermost electrons of the atom, and was able to reproduce Stoner's shell structure, but with the correct structure of subshells, by his inclusion of a fourth quantum number and his famous exclusion principle (for fermions like the electrons themselves) around 1925. He said:

It should be forbidden for more than one electron with the same value of the main quantum number n to have the same value for the other three quantum numbers k [l], j [ml] and m [ms].

The next step was the Schrödinger equation. First published by E. Schrödinger in 1926, it gave three of the four quantum numbers as a direct consequence of its solution for the hydrogen atom: his solution yields the (quantum mechanical) atomic orbitals which are shown today in textbooks of chemistry (and above). The careful study of atomic spectra allowed the electron configurations of atoms to be determined experimentally, and led to an empirical rule (known as Madelung's rule, 1936) for the order in which atomic orbitals are filled with electrons. Madelung's rule is generally written as a formal sketch (picture):

Shells and subshells versus orbitals

In the picture of the atom given by Quantum Mechanics, the notion of trajectory loses its meaning. The description of electrons in atoms is given by "orbitals". Instead of orbits, orbitals arise as the zones where the probability of finding an electron is "maximum". The classical world seems to vanish into the quantum realm. However, the electron configuration was first conceived of under the Bohr model of the (hydrogen) atom, and it is still common to speak of shells and subshells (imagine an onion!!!) despite the advances in understanding of the quantum-mechanical nature of electrons (both wave and particle, due to the de Broglie hypothesis). Any particle (e.g. an electron) does have wave and particle features.
The de Broglie hypothesis says that to any particle with linear momentum p=mv there corresponds a wavelength (or de Broglie wavelength) given by \lambda=\dfrac{h}{mv}.

Remark: this formula can be easily generalized to the relativistic domain by a simple shift from the classical momentum to the relativistic momentum P=m\gamma v, so \lambda =\dfrac{h\sqrt{1-\beta^2}}{mv} with \beta=v/c.

An electron shell is the set of energetically allowed states that electrons may occupy which share the same principal quantum number n (the number before the letter in the orbital label), and which gives the energy of the shell (or the orbital, in the language of QM). An atom's nth electron shell can accommodate 2n^2 electrons, e.g. the first shell can accommodate 2 electrons, the second shell 8 electrons, and the third shell 18 electrons, the fourth 32, the fifth 50, the sixth 72, the seventh 98, the eighth 128, the ninth 162, the tenth 200, the eleventh 242, the twelfth 288, and so on. This sequence of numbers is well known. In fact, I have to be more precise with the term "magic number". A magic number (in atomic or even nuclear physics), in the shell models of both atomic and nuclear structure, is any of a series of numbers that connote stable structure. The magic numbers for atoms are 2, 10, 18, 36, 54, 86, 118, 168, 218, 290, 362, … They correspond to the total number of electrons in filled electron shells (having ns^2np^6 as outermost electron configuration). Electrons within a shell have very similar energies and are at similar distances from the nucleus; atoms with these closed-shell configurations are the inert gases! The factor of two above arises because the allowed states are doubled due to the electron spin: each atomic orbital admits up to two otherwise identical electrons with opposite spin, one with spin +1/2 (usually noted by an up-arrow) and one with spin −1/2 (with a down-arrow).

An atomic subshell is the set of states defined by a common secondary quantum number, also called the azimuthal quantum number, ℓ, within a shell. The values ℓ = 0, 1, 2, 3 correspond to the spectroscopic labels s, p, d, and f, respectively. The maximum number of electrons which can be placed in a subshell is given by 2(2ℓ + 1). This gives two electrons in an s subshell, six electrons in a p subshell, ten electrons in a d subshell and fourteen electrons in an f subshell. Therefore, subshells "close" after the addition of 2, 6, 10, 14, … electrons. That is, atomic shells close after we reach ns^2np^6, with n>1, i.e., shells close after reaching the inert gas electron configuration. The numbers of electrons that can occupy each shell and each subshell arise from the equations of quantum mechanics, in particular the Pauli exclusion principle: no two electrons in the same atom can have the same values of the four quantum numbers stated above. The energy associated with an electron is that of its orbital. The energy of any electron configuration is often approximated as the sum of the energy of each electron, neglecting the electron-electron interactions. The configuration that corresponds to the lowest electronic energy is called the ground (a.k.a. fundamental) state.

Aufbau principle and Madelung rule

The Aufbau principle (from the German word Aufbau, "building up, construction") was an important part of Bohr's original concept of electron configuration. It may be stated as: a maximum of two electrons are put into orbitals in the order of increasing orbital energy: the lowest-energy orbitals are filled before electrons are placed in higher-energy orbitals.
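A quick numerical check of these counts, and of the filling order spelled out in the next paragraphs, fits in a few lines. This is a sketch of my own (the code and variable names are illustrative, not part of the original post): it reproduces the subshell capacities 2(2ℓ + 1), the shell capacities 2n^2, and, using the Madelung (n+ℓ, n) ordering described below, the noble-gas electron counts.

# Capacity of a subshell with azimuthal quantum number l, and of the n-th shell.
def subshell_capacity(l):
    return 2 * (2 * l + 1)              # s, p, d, f, ... -> 2, 6, 10, 14, ...

def shell_capacity(n):
    return sum(subshell_capacity(l) for l in range(n))   # equals 2*n**2

print([shell_capacity(n) for n in range(1, 8)])           # [2, 8, 18, 32, 50, 72, 98]

# Madelung rule: fill subshells by increasing n + l, ties broken by increasing n.
letters = "spdfghik"
subshells = sorted(((n, l) for n in range(1, 9) for l in range(n)),
                   key=lambda nl: (nl[0] + nl[1], nl[0]))

total, closures, order = 0, [], []
for n, l in subshells:
    total += subshell_capacity(l)
    order.append(f"{n}{letters[l]}")
    if (n, l) == (1, 0) or l == 1:      # helium's 1s^2, or a filled np^6 subshell
        closures.append(total)

print(" ".join(order[:19]))   # 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p 6s 4f 5d 6p 7s 5f 6d 7p
print(closures)               # [2, 10, 18, 36, 54, 86, 118, 168] -- the atomic magic numbers

The printed closure totals match the atomic magic numbers listed above, and the 168 already anticipates the hypothetical eighth period discussed later in the post.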
The approximate order of filling of atomic orbitals follows the sketch given above, reading the arrows from 1s to 7p. After 7p the order includes orbitals outside the range of the diagram, starting with 8s. The principle works very well (for the ground states of the atoms) for the first 18 elements, then decreasingly well for the following 100 elements. The modern form of the Aufbau principle describes an order of orbital energies given by Madelung's rule (also referred to as Klechkowski's rule). This rule was first stated by Charles Janet in 1929, rediscovered by E. Madelung in 1936, and later given a theoretical justification by V.M. Klechkowski. In modern words, it states that:

A) Orbitals are filled in the order of increasing n+l.

B) Where two orbitals have the same value of n+l, they are filled in order of increasing n.

This gives the following order for filling the orbitals: 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, (8s, 5g, 6f, 7d, 8p, and 9s). In this list the orbitals in parentheses are not occupied in the ground state of the heaviest atom now known (circa 2013, July), ununoctium (Uuo), an atom with Z=118 protons in its nucleus and thus 118 electrons in its ground state. The Aufbau principle can be applied, in a modified form, to the protons and neutrons in the atomic nucleus, as in the nuclear shell model. The nuclear shell model predicts the magic numbers at Z,N=2, 8, 20, 28, 50, 82, 126 (and Z,N=184 and 258 for spherical symmetry, but this does not seem to be the case for "deformed" nuclei at high values of Z and N).

Shortcomings of the Aufbau principle

The Aufbau principle rests on a fundamental postulate that the order of orbital energies is fixed, both for a given element and between different elements; neither of these is true (although they are approximately true enough for the principle to be useful). It considers atomic orbitals as "boxes" of fixed energy into which can be placed two electrons and no more. However, the energy of an electron "in" an atomic orbital depends on the energies of all the other electrons of the atom (or ion, or molecule, etc.). There are no "one-electron solutions" for systems of more than one electron, only a set of many-electron solutions which cannot be calculated exactly. The fact that the Aufbau principle is based on an approximation can be seen from the fact that there is an almost-fixed filling order at all, that, within a given shell, the s-orbital is always filled before the p-orbitals. In a hydrogenic (hydrogen-like) atom, which only has one electron, the s-orbital and the p-orbitals of the same shell have exactly the same energy, to a very good approximation in the absence of external electromagnetic fields. (However, in a real hydrogen atom, the energy levels are slightly split by the magnetic field of the nucleus, and by quantum electrodynamic effects like the Lamb shift.)

Exceptions to Madelung's rule

There are several more exceptions to Madelung's rule among the heavier elements, and it is more and more difficult to resort to simple explanations such as the stability of half-filled subshells. It is possible to predict most of the exceptions by Hartree–Fock calculations, which are an approximate method for taking account of the effect of the other electrons on orbital energies. For the heavier elements, it is also necessary to take account of the effects of Special Relativity on the energies of the atomic orbitals, as the inner-shell electrons are moving at speeds approaching the speed of light.
In general, these relativistic effects tend to decrease the energy of the s-orbitals in relation to the other atomic orbitals. The electron-shell configuration of elements beyond rutherfordium (Z=104) has not yet been empirically verified, but they are expected to follow Madelung's rule without exceptions until the element Ubn (Unbinillium, Z=120). Beyond that number, there is no accepted viewpoint (see below my discussion of Pyykkö's model for the extended periodic table).

From the Greeks to Mendeleiev and Seaborg

The concept of the chemical elements has evolved historically, from the Greeks to Mendeleiev. In this section, I am going to give you a visual tour from the "ancient elements" to their current classifications via Periodic Tables (Mendeleiev's being the first one!). Some early elements and periodic tables:

[Gallery of images: the ancient Greek and Chinese elements, a Mendeleiev monument, the elements known to the first humans, and the elements known circa 1800.]

Just for fun, there are also the Feng Shui elements, and you can even find today apps/games with elements as "key" pieces… Gamelogy! LOL…

[Images: the Feng Shui elements from Chinese metaphysics, elements in astrology, elements in games, and the classical five elements.]

Turning back to Chemistry… Or Alchemy (Modern Chemistry is an evolution from Alchemy in which we take the scientific method seriously, don't forget it!). After the chemical revolution in the 18th and 19th century, we also have these pictures (note the evolution of the chemical elements, their geometry and classification):

[Gallery of images: Dalton's 1808 table, Lavoisier's list, early element symbols and notations, a 3d table, Newlands' 1865 law of octaves, the Bayley and Meyer periodic tables, atomic masses circa 1850, Mendeleiev's vertical table and his predictions, Rang's periodic table, metalloids versus metals, and periodic features of the chemical elements.]

Some interesting pictures about "new tables" and geometries of some periodic tables and their "make-up" process:

[Gallery of images: spiral, Schaltenbrand and Mayan periodic tables; a periodic table of geek TV series and movies (just for fun, XD); 3d, elliptical, cylindrical, spherical, arch-shaped and "infinite" periodic tables, including variations with the superactinides; a periodic table showing electron shells; Stowe's periodic table; Lavoisier's complete list.]

Extended periodic tables and the island of stability

Seaborg conjectured that the 8th period elements were an interesting "laboratory" to test quantum mechanical and physical principles from relativity and quantum physics. He claimed that it could be possible that, around some (high) values of Z, N (122, 126 in Z, and about 184 in N), some superheavy elements could be stable enough to be produced. This topic is still controversial for the same reasons I mentioned in the previous post: the finite size of the nucleus, relativistic effects that make the nuclei deformed, and, likely, some novel effects related to nonperturbative issues (like pair creation in strong fields, as Greiner et al. have remarked) should be taken into account.
Anyway, the existence of the so-called island of stability is a hot topic in both theoretical chemistry and experimental chemistry (at the level of the synthesis of superheavy elements). It is also relevant for (quantum and relativistic) physics. However, we will have to wait to be able to find those elements in laboratories or even in outer space! Some extended periodic tables were proposed by theoretical chemists like Seaborg and many others:

[Gallery of images: Seaborg's island-of-stability hypothesis, a "galactic" periodic table, circular and other extended periodic tables including the superactinides and a g-block.]

Pyykkö's model and beyond

The Finnish chemist Pekka Pyykkö has produced a beautiful modern extended periodic table from his numerical calculations. He has discovered that Madelung's law is modified, and that the likely correct Periodic Table including the superheavy elements should be something like this (with Z less than or equal to 172):

[Images: Pyykkö's extended periodic table.]

You can visit P. Pyykkö's homepage here: http://www.chem.helsinki.fi/~pyykko/ I urge you to visit it. He has really cool materials! The abstract of his periodic table paper deserves to be inserted here, and some of his interesting results from it are the modified electron configurations with respect to the normal Madelung's rule (as I remarked above):

[Images: the abstract of Pyykkö's paper and his calculated electron configurations up to elements 140, 149 and 168.]

Indeed, Pyykkö is able to calculate some "simple" and "stable" molecules made of superheavy elements! It is interesting to compare Pyykkö's table with other extended periodic tables out there. His extended periodic table paper can be downloaded here, and you can also watch a periodic table video by the most famous chemist on youtube talking about it here. We have already seen the feynmanium in the last post, but what is its electron configuration? It is not clear, since we have at most theoretical predictions: NO atoms of E137 have been produced yet. Thus, Feynmanium's electron configuration is assumed to be \left[Ms\right] 5g^{17}8s^2, but due to the smearing of the orbitals caused by the small separation between them, the electron configuration is believed to be \left[Ms\right] 5g^{11}6f^{3}7d^18s^28p^2. The hyperphysics web page also discusses this problem. It says: "(…)Dirac showed that there are no stable electron orbits for more than 137 electrons, therefore the last chemical element on the periodic table will be untriseptium (137Uts) also known informally as feynmanium _{137}Fy. It's full electron configuration would be something like … or is it … 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2 4f14 5d10 6p6 7s2 5f14 6d10 7p6 8s1 5g18 ?(…)" What is the right electron configuration? Without a synthesized element, we do not know… Even more, you can have fun with this page and references therein: http://planetstar.wikia.com/wiki/Feynmanium There, you can even find that there are proposals for almost every superheavy element (SHE) name! Let me remark that today, circa 2013, 10th July, we have named every chemical element up to Z=112 (Copernicium), plus Z=114 (Flerovium) and Z=116 (Livermorium), "officially". Feynmanium, neutronium, and any other superheavy element name is not official. The IUPAC recommends using a systematic name until the discoverers have proposed a name and it is "officially" accepted. Thus, feynmanium should be called untriseptium until we can produce it!

More Periodic Table limits?
What about a 0th element with Z=0? Sometimes it is called "neutronium" or "neutrium". More details here. Of course it is a speculative idea or concept. Indeed, in Japanese culture, the void is the 5th element! It is closer to the picture we get from particle physics today, in which "elementary particles" are excitations from some vacuum of a certain (spinorial, scalar, tensor, …) field. We could see the "voidium" (no, it is not the dalekenium! LOL) as the fundamental "element" for particle physics. And yet, only about 5% of the known Universe is "radiation" and "known elements". What a shock!

[Images: the known elements and their weight in our current cosmological models; quintessence elements and cosmic destiny; a final comparison of basic elements, past and now.]

Just for fun, again, the anime Saint Seiya Omega uses 7 fundamental "elements" (yes, I am a geek, I recognize it!).

[Image: the Saint Seiya Omega elements.]

Seaborg's original proposal was something like the next table:

[Image: Seaborg's extended table with the superactinides.]

And you see, it is quite different from the astrological first elements from myths and superstitions:

[Images: Feng Shui elements, elements and geometrical forms, elements and spirit.]

And finally, let me show you the presently known elementary particles again, the smallest "elements" from which matter is believed to be made (till now, of course):

[Image: the Standard Model of elementary particles, 2012.]

Remark: Chemistry is about atoms. High Energy Physics is about elementary particles.

Final questions:

1st. What is your favorite (theoretical or known to exist) chemical element?

2nd. What is your favorite elementary particle (theoretical or known to exist in the Standard Model)?

May The Chemical Elements and the Elementary Particles be with YOU!
-by I, Quantum

Table of Contents
Synopsis – Free Will
1. The Question of Free Will
2. Dualities
3. About Quantum Mechanics and Biology
5. Crisscross Entanglement and the Nonlinear Schrödinger Equation
6. The Mind's Eye and Free Will
7. Making Sense of Experimental Results in Neuroscience
8. What is it like to be an Electron?
9. Predictability

Powerpoint Summary
What if free will is not an illusion? What if the miracle behind evolution is quantum mechanics?
-by I, Quantum

Free Will – Synopsis (CC BY-NC 4.0) I, Quantum

I. The Question of Free Will
II. Dualities
III. About Quantum Mechanics & Biology
[Figure 11: Chain of three entangled electrons.]
V. Crisscross Entanglement and the NonLinear Schrödinger Equation
VI. The Mind's Eye and Free Will

"You can choose a ready guide
In some celestial voice
If you choose not to decide
You still have made a choice
You can choose from phantom fears
And kindness that can kill
I will choose a path that's clear
I will choose free will."
– the song Free Will by Rush (1980)

VII. Making Sense of Experimental Results in Neuroscience
VIII. What is it like to be an Electron?
IX. Predictability
[Figure 23: Quantum Neural Network, from Kinda Altarboush at slideshare.net.]

Self-Awareness and Gödel's Incompleteness Theorem
G(F) = "This statement is false"
G(F) = "I cannot prove this sentence"

"A bit beyond perception's reach
I sometimes believe I see
That life is two locked boxes, each
Containing the other's key"

General Qualia of the Senses
Moral Responsibility
All of the above.

Evolution – Synopsis
What if the Miracle Behind Evolution is Quantum Mechanics? (CC BY-NC 4.0) I, Quantum

"…about forty years ago the Dutchman de Vries discovered that in the offspring even of thoroughly pure-bred stocks, a very small number of individuals, say two or three in tens of thousands, turn up with small but 'jump-like' changes, the expression 'jump-like' not meaning that the change is so very considerable, but that there is a discontinuity inasmuch as there are no intermediate forms between the unchanged and the few changed. De Vries called that a mutation. The significant fact is the discontinuity. It reminds a physicist of quantum theory – no intermediate energies occurring between two neighbouring energy levels. He would be inclined to call de Vries's mutation theory, figuratively, the quantum theory of biology. We shall see later that this is much more than figurative. The mutations are actually due to quantum jumps in the gene molecule. But quantum theory was but two years old when de Vries first published his discovery, in 1902.
Small wonder that it took another generation to discover the intimate connection!” – Erwin Schrödinger, ‘What is Life?‘ (1944) Table of Contents 1. Miracles and Monsters 2. Occam’s Fat-Shattering Razor 3. Complexity is the Key – in Machine Learning and DNA     4. The Protein Folding Problem 7. Quantum Networks – Using Dynamics to Restore and Extend Entanglement 8. Quantum Biology – Noisy, Warm, Dynamical Quantum Systems 9. Quasicrystals & Phasons – Shadows of Life? 10. Holography & The Ultimate Quantum Network – A Living Organism 11. Quantum Mechanics and Evolution 12. Experimental Results in Evolutionary Biology I. Miracles and Monsters What is going on with life? It is utterly amazing all the things these plants and creatures of mother nature do! Their beauty! Their complexity! Their diversity! Their ability to sustain themselves! The symbiotic relationships! Where did it all come from? If evolution is the right idea, how does it work? We’re not talking about the little changes, the gradual changes proposed by Charles Darwin. We understand there is natural selection going on, like pepper colored moths and Darwin’s finches. We’re talking about the big changes – the evolutionary leaps apparently due to mutations affecting gene expression, a process known as saltation. How do these mutations know what will work – shouldn’t there be a bunch of failed abominations everywhere from the gene mutations that screwed up? Shouldn’t a mix up be far more likely than an improvement? Is it possible mutations are adaptive as Jean-Baptiste Lamarck, a predecessor of Darwin, originally proposed? That is, could it be that the environment, rather than random changes, is the primary driver of adaptation? Imagine selecting architectural plans on a two-story house. Suppose we randomly pick from the existing set of millions of blueprints for the upstairs, and separately pick the plans for the downstairs and put them together. How many times would you expect this house to be functional? The plumbing and electrical systems to work? Suppose we start with a blueprint for a house and then select randomly the plans for just the living room and swap that into the original? What are the chances this would produce a final blue print that was workable? Seemingly very small we should say! We expect there should be all these monstrous houses, with leaking plumbing, short circuited electricity, windows looking out at walls, doorways to nowhere, and grotesque in style! Turns out Evolutionary Biologists have been concerned with this problem for a long time. A geneticist named Richard Goldschmidt was the first scientist to coin the term “hopeful monster” in 1933 in reference to these abominations. Goldschmidt’s theory was received with skepticism. Biologists argued: if evolution did produce big changes in a species then how would these mutants find a mate? For most of the 20th century Goldschmidt’s ideas were on the back burner, scientists were focused on gradualism as they uncovered many examples of gradual evolutionary changes in nature, supporting the natural selection hypothesis. But, recent scientific results reveal the environment does, indeed, have a deep impact on the traits of offspring. The adaptations of embryos in experiments are an example: “The past twenty years have vindicated Goldschmidt to some degree. 
With the discovery of the importance of regulatory genes, we realize that he was ahead of his time in focusing on the importance of a few genes controlling big changes in the organisms, not small-scales changes in the entire genome as neo-Darwinians thought. In addition, the hopeful monster problem is not so insurmountable after all. Embryology has shown that if you affect an entire population of developing embryos with a stress (such as a heat shock) it can cause many embryos to go through the same new pathway of embryonic development, and then they all become hopeful monsters when they reach reproductive age.” – Donald R. Prothero in his book Evolution: What the Fossils Say and Why it Matters (2007); via rationalwiki.org. These discoveries prompted Evolutionary Biologist Olivia Judson to write a wonderful article “The Monster is Back, and it’s Hopeful.” (via Wikipedia) Still, we are left wondering: where are all the hopeless monsters? All the embryos either adapt to the stress or keep the status quo – there are no failures. Shouldn’t some suffer crippling mutations? Are epigenetic factors involved? And, perhaps most importantly, even with environmental feedback, how do organisms know how to adapt – i.e. how is the process of adaptation so successful? The puzzle would not be complete, however, without also considering some amazing leaps that have occurred along the tree of life, for example, the mutations that lead to the evolution of the eye. How does life figure out it can construct this extended precisely shaped object – the eyeball – and set up the lens, the muscles to focus it, the photoreceptors and the visual cortex to make sense of the image? It seems like we would need a global plan, a blueprint of an eye, before we start construction! Not only that, but to figure it out independently at least fifty-times-over in different evolutionary branches? Or, how did cells make the leap from RNA to DNA, as is widely believed to be the case, in the early evolution of single celled organisms? Evolutionary biologists puzzle that to make that leap life would need to know the DNA solution would work before it tried it. How should life be so bold – messing with the basic gene structure would seem fraught with danger? How could life know? And, don’t forget, perhaps the most amazing leap of all, where does this amazing human intelligence come from? We humans, who are probing the origins of the Universe, inventing or discovering mathematics, building quantum computers and artificial intelligence, and seeking to understand our very own origin– however it may have happened – how did WE come to be? To frame the problem, let’s talk classical statistics for a second and consider the following situation: suppose we have 100 buckets into which we close our eyes and randomly toss 100 ping pong balls. Any that miss we toss again. When we open our eyes, what distribution should we expect? All in a single cup? Probably not. Scattered over many cups with some cups holding more ball than others? Probably something like that. If we repeat this experiment zillions of times, however, sooner or later, we will find one instance with them all in the same bucket. Is this a miracle? No, of course not. Once in a while amazingly unlikely things do happen. If we tossed the balls repeatedly and each time all landed in the same bucket, now that would feel like a miracle! That’s what’s weird about life – the miracles seem to keep happening again and again along the evolutionary tree. 
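To put a rough number on the bucket analogy above, here is a back-of-the-envelope calculation of my own, under the simplifying assumption that every ball independently lands in a uniformly random bucket:

from math import log10

balls = buckets = 100
# All 100 balls in one *given* bucket: (1/100)**100.
# Any one of the 100 buckets will do, so multiply by 100:
#   P = 100 * (1/100)**100 = (1/100)**99 = 1e-198.
log10_p = log10(buckets) - balls * log10(buckets)
print(f"log10 P(all balls in the same bucket) = {log10_p:.0f}")   # -198

A single such coincidence is already absurdly rare; a long run of them is the puzzle the rest of the essay worries about.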
The ping pong balls appear to bounce lucky for Mother Nature!

II. Occam's Fat-Shattering Razor

The Intelligent Design folks ardently point out the miraculous nature of life despite being labeled as pseudoscientists by the scientific community at large. However, no one can deny that the amazing order we see in biological systems does have the feel of some sort of intelligent design, scientifically true or not. The trouble is that these folks postulate an Intelligent Designer is behind all these miracles. In fact, it is possible that they are correct, but, there is a problem with this kind of hypothesis: it can be used to explain anything! If we ask "how did plants come to use photosynthesis as a source of energy?" we answer: "the Designer designed it that way". And, if we ask "how did the eye come to exist in so many animal species?", again, we can only get "the Designer designed it that way". The essential problem is that this class of hypotheses has infinite complexity.

"It may seem natural to think that, to understand a complex system, one must construct a model incorporating everything that one knows about the system. However sensible this procedure may seem, in biology it has repeatedly turned out to be a sterile exercise. There are two snags with it. The first is that one finishes up with a model so complicated that one cannot understand it: the point of a model is to simplify, not to confuse. The second is that if one constructs a sufficiently complex model one can make it do anything one likes by fiddling with the parameters: a model that can predict anything predicts nothing." – John Maynard Smith and Eörs Szathmáry (Hat tip Gregory Chaitin)

The field of learning theory forms the foundation of machine learning. It contains the secret sauce that is behind many of the amazing artificial intelligence applications today. This list includes achieving image recognition on par with humans, self-driving cars, Jeopardy! champion Watson, and the amazing 9-dan Go program AlphaGo [see Figure 2]. These achievements shocked people all over the world – how far and how fast artificial intelligence had advanced. Half of this secret sauce is a sound mathematical understanding of complexity in computer models (a.k.a. hypotheses) and how to measure it. In effect learning theory has quantified the philosophical principle of Occam's razor, which says that the simplest explanation is the correct one – we can now measure the complexity of explanations. Early discoveries in the 1970's produced the concept of the VC dimension (also known as the "fat-shattering" dimension) named for its discoverers, Vladimir Vapnik and Alexey Chervonenkis. This property of a hypothesis class measures the number of observations that it is guaranteed to be able to explain. Recall that a polynomial with, say, 11 parameters (for instance a polynomial of degree ten) can be fit to any 11 data points [see Figure 1]. This function is said to have a VC dimension of 11. Don't expect this function to find any underlying patterns in the data though! When a function with this level of complexity is fit to an equal number of data points it is likely to over-fit. The key to having a hypothesis generalize well, that is, make predictions that are likely to be correct, is having it explain a much greater number of observations than its complexity.

Figure 1: Noisy (roughly linear) data is fitted to both linear and polynomial functions. Although the polynomial function is a perfect fit, the linear version can be expected to generalize better. In other words, if the two functions were used to extrapolate the data beyond the fit data, the linear function would make better predictions. Image and caption by Ghiles [CC BY-SA 4.0] on Wikimedia.
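The overfitting point can be checked numerically in a few lines. This is an illustrative sketch of my own (the data and the degree-10 fit are invented for the demonstration; they are not the figure's actual data): eleven parameters reproduce eleven noisy points exactly, but extrapolate far worse than a two-parameter line.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 11)                      # 11 observations
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, 11)    # roughly linear data with noise

line = np.polyfit(x, y, deg=1)      # 2 parameters
poly = np.polyfit(x, y, deg=10)     # 11 parameters: passes through every point

x_out = 12.0                        # extrapolate just beyond the data
print("true value   :", 2.0 * x_out + 1.0)
print("linear fit   :", np.polyval(line, x_out))   # stays close to the truth
print("degree-10 fit:", np.polyval(poly, x_out))   # typically wildly off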
Nowadays measures of complexity have become much more acute: the technique of margin-maximization in support vector machines, regularization in neural networks and others have had the effect of reducing the effective explanatory power of a hypothesis class, thereby limiting its complexity, and causing the model to make better predictions. Still, the principle is the same: the key to a hypothesis making accurate predictions is about managing its complexity relative to explaining known observations. This principle applies whether we are trying to learn how to recognize handwritten digits, how to recognize faces, how to play Go, how to drive a car, or how to identify "beautiful" works of art. Further, it applies to all mathematical models that learn inductively, that is, via examples, whether machine or biological. When a model fits the data with a reasonable complexity relative to the number of observations then we are confident it will generalize well. The model has come to "understand" the data in a sense.

Figure 2: The game of Go. The AI application AlphaGo defeated one of the best human Go players, Lee Sedol, 4 games to 1 in March 2016. Image by Goban1 via Wikimedia Commons.

The hypothesis of Intelligent Design, simply put, has infinite VC dimension, and, therefore, can be expected to have no predictive power, and that is what we see – unless, of course, we can query the Designer! :) But, before we jump on Darwin's bandwagon we need to face a very grim fact: the hypothesis class characterized by "we must have learned that during X billion years of evolution" also has the capacity to explain just about anything! Just think of the zillions of times this has been referenced, almost axiom-like, in the journals of scientific research!

III. Complexity is the Key – In Machine Learning and DNA

As early as 1945 a computational device known as a neural network (a.k.a. a multi-layered perceptron network) was invented. It was patterned after the networks formed by neuron cells in animal brains [see figure 3]. In 1975 a technique called backpropagation was developed that significantly advanced the learning capability of these networks. They were "trained" on a sample of input data (observations), then could be used to make predictions about future and/or out-of-sample data. Neurons in the first layer were connected by "synaptic weights" to the data inputs. The inputs could be any number of things, e.g. one pixel in an image, the status of a square on a chessboard, or financial data of a company. These neurons would multiply the input values by the synaptic weights and sum them. If the sum exceeded some threshold value the neuron would fire and take on a value of 1 for neurons in the second layer, otherwise it would not fire and produce a value of 0. Neurons in the second layer were connected to the first via another set of synaptic weights and would fire by the same rules, and so on to the 3rd, 4th, layers etc. until culminating in an output layer. Training examples were fed to the model one at a time. The network's outputs were compared against the known results to evaluate errors. These were used to adjust the weights in the network via the aforementioned backpropagation technique: weights that contributed to the error were reduced while weights contributing to a correct answer were increased. With each example, the network followed the error gradient downhill (gradient descent). The training stopped when no further improvements were made.
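A single neuron of the kind just described fits in a dozen lines. This is a toy sketch of my own (a one-layer, perceptron-style update rather than the multi-layer backpropagation described above), just to make the "weighted sum, threshold, nudge the weights" loop concrete:

import numpy as np

def fire(x, w, threshold=0.0):
    # Weighted sum of the inputs; the neuron outputs 1 only above the threshold.
    return 1 if np.dot(w, x) > threshold else 0

# Tiny training set: learn logical OR (the third input is a constant bias of 1).
data = [([0, 0, 1], 0), ([0, 1, 1], 1), ([1, 0, 1], 1), ([1, 1, 1], 1)]
w, lr = np.zeros(3), 0.1

for epoch in range(20):
    for x, target in data:
        x = np.array(x)
        error = target - fire(x, w)   # +1, 0 or -1
        w += lr * error * x           # strengthen weights that were too weak, weaken the rest

print(w, [fire(np.array(x), w) for x, _ in data])   # learned weights reproduce 0, 1, 1, 1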
Figure 3: A hypothetical neural network with an input layer, 1 hidden layer, and an output layer, by Glosser.ca (CC BY-SA) via Wikimedia Commons.

Neural Networks exploded onto the scene in the 1980's and stunned us with how well they would learn. More than that, they had a "life-like" feel as we could watch the network improve with each additional training sample, then become stuck for several iterations. Suddenly the proverbial "lightbulb would go on" and the network would begin improving again. We could literally watch the weights change as the network learned. In 1984 the movie "The Terminator" was released featuring a fearsome and intelligent cyborg character, played by Arnold Schwarzenegger, with a neural network for a brain. It was sent back from the future where a computerized defense network, Skynet, had "got smart" and virtually annihilated all humanity! The hysteria did not last, however. The trouble was that while neural networks did well on certain problems, on others they failed miserably. Also, they would converge to a locally optimal solution but often not a global one. There they would remain stuck only with random perturbations as a way out – a generally hopeless proposition in a difficult problem. Even when they did well learning the in-sample training set data, they would sometimes generalize poorly. It was not understood why neural nets succeeded at times and failed at others. In the 1990's significant progress was made understanding the mathematics of the model complexity of neural networks and other computer models, and the field of learning theory really emerged. It was realized that most of the challenging problems were highly non-linear, having many minima, and any gradient descent type approach would be vulnerable to becoming stuck in one. So, a new kind of computer model was developed called the support vector machine. This model rendered the learning problem as a convex optimization problem – so that it had only one minimum and a globally optimal result could always be found. There were two keys to the support vector machine's success: first it did something called margin-maximization which reduced overfitting, and, second, it allowed computer scientists to use their familiarity with the problem to choose an appropriate kernel – a function which mapped the data from the input feature space into a smooth, convex space. Like a smooth bowl-shaped valley, one could follow the gradient downhill to a global solution. It was a way of introducing domain knowledge into the model to reduce the amount of twisting and turning the machine had to do to fit the data. Bayesian techniques offered a similar helping hand by allowing their designers to incorporate a "guess", called the prior, of what the model parameters might look like. If the machine only needed to tweak this guess a little bit to come up with a posterior, the model could be interpreted as a simple correction to the prior. If it had to make large changes, that was a complex model, and, would negatively impact expected generalization ability – in a quantifiable way. This latter point was the second half of the secret sauce of machine learning – allowing clever people to incorporate as much domain knowledge as possible into the problem so the learning task was rendered as simple as possible for the machine.
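A hand-rolled example of the kernel idea (my own toy illustration, with a made-up feature map, not the mapping used in the figure that follows): XOR-labelled points cannot be split by any line in the input plane, but after mapping them into three dimensions with phi(x1, x2) = (x1, x2, x1*x2), a single threshold on the new coordinate separates them.

# Four points with XOR labels: not linearly separable in the (x1, x2) plane.
points = [((-1, -1), 0), ((-1, +1), 1), ((+1, -1), 1), ((+1, +1), 0)]

def phi(x1, x2):
    # Feature map into 3-d space; the extra coordinate x1*x2 does all the work.
    return (x1, x2, x1 * x2)

for (x1, x2), label in points:
    z = phi(x1, x2)[2]
    prediction = 0 if z > 0 else 1     # the plane z = 0 separates the two classes
    print((x1, x2), label, prediction) # predictions agree with the labels

The corresponding kernel, k(x, y) = x1*y1 + x2*y2 + (x1*x2)*(y1*y2), lets a support vector machine exploit this geometry without ever computing the feature map explicitly.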
Simpler tasks required less contortion on the part of the machine and resulted in models with lower complexity. SVMs, as they became known, along with Bayesian approaches were all the rage and quickly established machine learning records for predictive accuracy on standard datasets. Indeed, the mantra of machine learning was: "have the computer solve the simplest problem possible".

Figure 4: A kernel function maps data from an input space, where it is difficult to find a function that correctly classifies the red and blue dots, to a feature space where they are easily separable – from StackOverflow.com.

It would not take long before the science of controlling complexity set in with the neural net folks – and the success in learning that came with it. They took the complexity concepts back to the drawing board with neural networks and came out with a new and greatly improved model called a convolutional neural network. It was like the earlier neural nets but had specialized kinds of hidden layers known as convolutional and pooling layers (among others). Convolutional layers significantly reduced the complexity of the network by limiting neurons' connectivity to only a nearby region of inputs, called the "receptive field", while also capturing symmetries in data – like translational invariance. For example, a vertical line in the upper right hand corner of the visual field is still a vertical line if it lies in the lower left corner. The pooling layer neurons could perform functions like "max pooling" on their receptive fields. They simplified the network in the sense that they would only pass along the most likely result downstream to subsequent layers. For example, if one neuron fires weakly, indicating a possible vertical line, but another neuron fires strongly, indicating a definite corner, then only the latter information is passed on to the next layer of the network [see Figure 5].

Figure 5: Illustration of the function of max pooling neurons in a pooling layer of a convolutional neural network. By Aphex34 [CC BY-SA 4.0] via Wikimedia Commons.

The idea for this structure came from studies of the visual cortex of cats and monkeys. Not surprisingly, convolutional neural networks were extremely successful at enabling machines to recognize images. They quickly established many records on standardized datasets for image recognition and to this day continue to be the dominant model of choice for this kind of task. Computer vision is on par with human object recognition ability when the human subject is given a limited amount of time to recognize the image. A mystery that was never solved was: how did the visual cortex figure out its own structure? Interestingly, however, when it comes to more difficult images, humans can perform something called top-down reasoning which computers cannot replicate. Sometimes humans will look at an image, not recognize it immediately, then start using a confluence of contextual information and more to think about what the image might be. When given ample time to exploit this capability, we exhibit superior image recognition. Just think back to the last time we were requested to type in a string of disguised characters to validate that we were, indeed, human! This is the basis for CAPTCHA: Completely Automated Public Turing test to tell Computers and Humans Apart [see Figure 6].

Figure 6: An example of a reCAPTCHA challenge from 2007, containing the words "following finding".
The waviness and horizontal stroke were added to increase the difficulty of breaking the CAPTCHA with a computer program. Image and caption by B Maurer at Wikipedia.

While machine learning was focused on quantifying and managing the complexity of models for learning, the dual concept of Kolmogorov complexity had already been developed in 1965 in the field of information theory. The idea was to find the shortest possible description of a string of data. So, if we generate a random number by selecting digits at random without end, we get an endless, patternless string of digits. An infinite string of digits generated in this manner cannot be abbreviated. That is, there is no simpler description of the number than the infinitely long string itself. The number is said to have infinite Kolmogorov complexity, and is analogous to a machine learning model with infinite VC dimension. On the other hand, another similar looking number, π, extends out to infinity, never terminating and never repeating, yet it can be expressed in a much more compact form. For example, we can write a very simple program to perform a series approximation of π to arbitrary accuracy using the Madhava-Leibniz series (from Wikipedia): \frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots. So, π has a very small Kolmogorov complexity, or minimum description length (MDL). This example illustrates the abstract and far-from-obvious nature of complexity. But it also illustrates a point about understanding: when we understand something, we can describe it in simple terms. We can break it down. The formula, while very compact, acts as a blueprint for constructing a meaningful, infinitely long number. Mathematicians understand π. Similar examples of massive data compression abound, and some, like the Mandelbrot set, may seem biologically inspired [see Figure 7].

Figure 7: This image illustrates part of the Mandelbrot set (fractal). Simply storing the 24-bit color of each pixel in this image would require 1.62 million bits, but a small computer program can reproduce these 1.62 million bits using the definition of the Mandelbrot set and the coordinates of the corners of the image. Thus, the Kolmogorov complexity of the raw file encoding this bitmap is much less than 1.62 million bits in any pragmatic model of computation. Image and caption by Reguiieee via Wikimedia Commons.

Perhaps life, though, has managed to produce the ultimate demonstration of MDL – the DNA molecule itself! Indeed, this molecule, some 3 billion nucleotides (C, A, G, or T) long in humans, encodes an organism of some 3 billion-billion-billion (3 \times 10^{26} ) amino acids – a compression of about a billion-billion to one (10^{18} \colon 1 ). Even including possible epigenetic factors as sources of additional blueprint information (epigenetic tags are thought to affect about 1% of genes in mammals), the amount of compression is mind boggling. John von Neumann pioneered an algorithmic view of DNA, like this, in 1948 in his work on cellular automata. Biologists know, for instance, that the nucleotide sequences "TAG", "TAA", and "TGA" act as stop codons (hat tip Douglas Hofstadter in Gödel, Escher, Bach: An Eternal Golden Braid) in DNA and signal the end of a protein sequence. More recently, the field of Evolutionary Developmental Biology (a.k.a. evo-devo) has encouraged this view. Inspired by von Neumann and the developments of evo-devo, Gregory Chaitin in 2010 published a paper entitled "To a Mathematical Theory of Evolution and Biological Creativity". Chaitin characterized DNA as a software program.
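To make the minimum-description-length point above concrete: the digits of π never repeat, yet the tiny program below generates them to any desired accuracy using the Madhava-Leibniz series just quoted (plain Python, nothing else assumed; the series converges slowly, but the description stays tiny however many digits we want):

```python
def leibniz_pi(n_terms: int) -> float:
    """Approximate pi via pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

print(leibniz_pi(1_000_000))   # approaches 3.14159265... as n_terms grows
```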
He built a toy-model of evolution where computer algorithms would compute the busy beaver problem of mathematics. In this problem, he tries to get the computer program to generate the biggest integer it can. Like children competitively yelling out larger and larger numbers: “I’m a million times stronger than you! Well, I’m a billion times stronger. No, I’m a billion, billion times. That’s nothing, I’m a billion to the billionth power times stronger!” – we get the idea. Simple as that. The program has no concept of infinity and so that’s off limits. There is a subroutine that randomly “mutates” the code at each generation. If the mutated code computes a bigger integer, it becomes the de facto code, otherwise it is thrown out (natural selection). Lots of times the mutated code just doesn’t work, or it enters a loop that never halts. So, an oracle is needed to supervise the development of the fledgling algorithms. It is a very interesting first look at DNA as an ancient programming language and an evolving algorithm. See his book, Proving Darwin: Making Biology Mathematical, for more. Figure 8: Graph of variation in estimated genome sizes in base pairs (bp). Graph and caption by Abizar at Wikipedia One thing is for certain: the incredible compactness of DNA molecules implies it has learned an enormous amount of information about the construction of biological organisms. Physicist Richard Feynman famously said “what I cannot create, I do not understand.” Inferring from Feynman: since DNA can create life (maybe “build” is a better word), it therefore understands it. This is certainly part of the miracle of biological evolution – understanding the impact of genetic changes on the organism. The simple description of the organism embedded in DNA allows life to predictably estimate the consequences of genetic changes – it is the key to generalizing well. It is why adaptive mutations are so successful. It is why the hopeless monsters are missing! When embryos adapt to stress so successfully, it’s because life knows what it is doing. The information is embedded in the genetic code! Figure 9: Video of an Octopus camouflaging itself. A dramatic demonstration of how DNA understands how to build organisms – it gives the Octopus this amazing toolkit! Turns out it has an MDL of only 3 basic textures and the chromatophores come in only 3 basic colors! – by SciFri with marine biologist Roger Hanlon In terms of house blueprints, it means life is so well ordered that living “houses” are all modular. The rooms have such symmetry to them that the plumbing always goes in the same corner, the electrical wiring always lines up, the windows and doors work, even though the “houses” are incredibly complex! You can swap out the upstairs, replace it with the plans from another and everything will work. Change living rooms if you want, it will all work, total plug-and-play modular design. It is all because of this remarkably organized, simple MDL blueprint. The trouble is: how did this understanding come to be in the first place? And, even understanding what mutations might successfully lead to adaptation to a stress, how does life initiate and coordinate the change among the billions of impacted molecules throughout the organism? Half of the secret sauce of machine learning was quantifying complexity and the other half was allowing creative intelligent beings, such as ourselves, to inject our domain knowledge into the learning algorithm. DNA should have no such benefit, or should it? 
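The mutate-and-select loop described above can be caricatured in a few lines (a toy sketch in the spirit of Chaitin's model, not his actual construction; the operations, program length, and cap are invented for illustration – the cap plays the role of the oracle that weeds out runaway programs):

```python
import random

random.seed(0)
OPS = [("+1", lambda n: n + 1), ("*2", lambda n: n * 2), ("^2", lambda n: n * n)]

def run(program, cap=10**100):
    n = 1
    for _, op in program:
        n = op(n)
        if n > cap:            # "oracle": reject programs that blow up past the cap
            return None
    return n

program = [random.choice(OPS) for _ in range(10)]
best = run(program) or 1

for generation in range(200):
    mutant = list(program)
    mutant[random.randrange(len(mutant))] = random.choice(OPS)   # random mutation
    value = run(mutant)
    if value is not None and value > best:                       # natural selection
        program, best = mutant, value

print("evolved program:", [name for name, _ in program], "-> names", best)
```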
Not only that, but recent evidence suggests the role of epigenetic factors, such as methylation of DNA, is significant in heredity. How does DNA understand the impact of methylation? Where is this information stored? Seemingly not in the DNA, but if not, then where? IV. The Protein Folding Problem “Perhaps the most remarkable features of the molecule are its complexity and its lack of symmetry. The arrangement seems to be almost totally lacking in the kind of regularities which one instinctively anticipates, and it is more complicated than has been predicated by any theory of protein structure. Though the detailed principles of construction do not yet emerge, we may hope that they will do so at a later stage of the analysis.” – John Kendrew et al. upon seeing the structure of the protein myoglobin under an electron microscope for the first time, via “The Protein Folding Problem, 50 Years On” by Ken Dill DNA exists in every cell in every living organism. Not only is it some 3 billion nucleotides long, but it encodes 33,000 genes which express over 1 million proteins. There are several kinds of processes that ‘repeat’ or copy the nucleotides sequences in DNA: 1.) DNA is replicated into additional DNA for cell division (mitosis) 2.) DNA is transcribed into RNA for transport outside the nucleus 3.) RNA is translated into protein molecules in the cytoplasm of the cell – by NobelPrize.org Furthermore, RNA does not only play a role in protein synthesis. Many types of RNA are catalytic – they act like enzymes to help reactions proceed faster. Also, many other types of RNA play complex regulatory roles in cells (see this for more: the central dogma of molecular biology). Genes act as recipes for protein molecules. Proteins are long chains of amino acids that become biologically active only after they fold. While often depicted as messy squiggly strands lacking any symmetry, they ultimately fold very specifically into beautifully organized highly complex 3-dimensional shapes such as micro pumps, bi-pedaled walkers called kinesins, whip-like flagella that propel the cell, enzymes and other micro-machinery. The proteins that are created ultimately determine the function of the cell. Figure 10: This TEDx video by Ken Dill gives an excellent introduction to the protein folding problem and shows the amazing dynamical forms these proteins take. The protein folding problem has been one of the great puzzles in science for 50 years. The questions it poses are: 1. “How does the amino acid sequence influence the folding to form a 3-D structure? 2. There are a nearly infinite number of ways a protein can fold, how can proteins fold to the correct structure so fast (nanoseconds for some)? 3. Can we simulate proteins with computers?” – from The Protein-Folding Problem, 50 Years On by Ken Dill Nowadays scientists understand a great number of proteins, but several questions remain unanswered. For example, Anfinsen’s dogma is the postulate that the amino acid sequence alone determines the folded structure of the protein – we do not know if this is true. We also know that molecular chaperones help other proteins to fold, but are thought not to influence the protein’s final folded structure. We can produce computer simulations of how proteins fold. However, this is only possible in special cases of simple proteins where there is an energy gradient leading the protein downhill to a global configuration of minimal energy [see figure 11]. 
Even in these cases, the simulations do not accurately predict protein stabilities or thermodynamic properties. Figure 11: This graph shows the energy landscape for some proteins. When the landscape is reasonably smoothly downhill like this, protein folding can be simulated. Graph By Thomas Splettstoesser (www.scistyle.com) via Wikimedia Commons Figure 12: A TED Video (short) by David Bolinsky showing the complexity of the protein micro-machinery working away inside the cell. Despite all this complexity, organization, and beauty, little is understood about how proteins fold to form these amazing machines. Protein folding generally happens in a fraction of a second (nanoseconds in some cases), which is mind boggling given the number of ways it could fold. This is known as Levinthal’s paradox, posited in 1969: “To put this in perspective, a relatively small protein of only 100 amino acids can take some 10^{100} different configurations. If it tried these shapes at the rate of 100 billion a second, it would take longer than the age of the universe to find the correct one. Just how these molecules do the job in nanoseconds, nobody knows.” – Technology Review.com, “Physicists discover quantum law of protein folding”     The Arrhenius equation is used to estimate chemical reaction rates as a function of temperature. Turns out the application of this equation to protein folding misses badly. In 2011, L. Luo and J. Lu published a paper entitled “Temperature Dependence of Protein Folding Deduced from Quantum Transition“. They show that quantum mechanics can be used to correctly predict the proper temperature dependence of protein folding rates (hat tip chemistry.stackexchange.com). Further, globular proteins (not the structural or enzymatic kind) are known to be marginally stable, meaning that there is very little energy difference between the folded, native state, and the unfolded state. This kind of energy landscape may open the door to a host of quantum properties.      V. The Nature of Quantum Mechanics – Infinite, Non-Local, Computing Capacity “It is impossible that the same thing belong and not belong to the same thing at the same time and in the same respect.”; “No one can believe that the same thing can (at the same time) be and not be.”; “The most certain of all basic principles is that contradictory propositions are not true simultaneously.” – Aristotle’s Law of Non-Contradiction, “Metaphysics (circa 350 B.C.) Via Wikipedia Max Planck in 1900, in order to solve the blackbody radiation problem, and Albert Einstein in 1905, to explain the photoelectric effect, postulated that light itself was made of individual “energy quanta” and so began the theory of quantum mechanics. In the early 20th century many titans of physics would contribute to this strange theory, but a rare, rather intuitive, discovery occurred in 1948 when Richard Feynman invented a tool called the path integral. When physicists wanted to calculate the probability that, say, an electron, travels from A to B they used the path integral. The path integral appears as a complex exponential function like e^{-i\Phi(x)} in physics equations, but this can be conceptually understood simply as a two-dimensional wave because: The real component represents one direction (e.g. horizontal-axis), while the other, “imaginary”, component another (e.g. vertical-axis). This complex function in the path integral, and in quantum mechanics in general, just means the wave is two-dimensional, not one. 
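A few lines of arithmetic make the two-component picture explicit (purely illustrative): sampling the complex exponential along x shows its real and imaginary parts tracing the two perpendicular displacements of a corkscrew.

```python
import numpy as np

x = np.linspace(0, 4 * np.pi, 9)
wave = np.exp(1j * x)          # the complex exponential appearing in the path integral
for xi, w in zip(x, wave):
    print(f"x = {xi:5.2f}   horizontal = {w.real:+.2f}   vertical = {w.imag:+.2f}")
```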
Think of a rope with one person holding each end. A vertical flick by one person sends a vertical wave propagating along the rope toward the other – this is not the path integral of quantum mechanics. Neither is a horizontal flick. Instead, imagine making a twisting flick, both vertical and horizontal. A corkscrew-shaped wave propagates down the rope. This two-dimensional wave captures the nature of quantum mechanics and the path integral, but the wave is not known to be something physical like the wave on the rope. It is, rather, a wave of probability (a.k.a. a quantum wave function).

Figure 13: The titans of quantum physics – 1927 Solvay Conference on Quantum Mechanics. By Benjamin Couprie via Wikimedia Commons.

The path integral formulation of quantum mechanics is mathematically equivalent to the Schrödinger equation – it's just another way of formulating the same physics. The idea for the electron is to sum (integrate) over all the possible ways it can go from A to B, summing all the 2-D waves (a.k.a. amplitudes) together. To get the right answer – the one that agrees with experiment – we must also consider very exotic paths. The tools that help us do this are Feynman diagrams, which illustrate all the particle physics interactions allowed along the way. So, a wave propagates from A to B via every possible path it can take in space and time, and at every point therein it considers all the allowed Feynman diagrams (great intro to Feynman diagrams here). The more vertices there are in the diagram, the smaller that particular diagram's contribution – each additional vertex suppresses the contribution by a factor of roughly 1/137. The frequency and wavelength of the waves change with the action (a function of the energy of the particle). At B, all the amplitudes from every path are summed, some interfering constructively, some destructively, and the resultant amplitude squared is the probability of the electron going from A to B. But going from A to B is not the only thing that path integrals are good for. If we want to calculate the probability that A scatters off of B then interacts with C, or that A emits or absorbs B, or the cross-section of A interacting with D, or whatever, the path integral is the tool to do the calculation. For more information on path integrals see these introductory yet advanced gold-standard lectures by Feynman on Quantum Electrodynamics: part 1, 2, 3 and 4.

Figure 14: In this Feynman diagram, an electron and a positron annihilate, producing a photon that becomes a quark-antiquark pair, after which the antiquark radiates a gluon (represented by the green helix). Note: the arrows are not the direction of motion of the particle; they represent the flow of electric charge. Time always moves forward from left to right. Image and caption by Joel Holdsworth [GFDL, CC-BY-SA-3.0], via Wikimedia Commons.

Path integrals apply to every photon of light, every particle, every atom, every molecule, every system of molecules, everywhere, all the time, in the observable universe. All the known forces of nature appear in the path integral, with the peculiar exception of gravity. Constant, instantaneous, non-local, wave-like calculation of infinitely many possibilities interfering all at once is the nature of this universe when we look really closely at it. The computing power of even the tiniest subset is infinite. So, when we fire a photon, an electron, or even bucky-balls (molecules of 60 carbon atoms!) for that matter, at a two-slit interferometer, on the other side we will see an interference pattern.
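Here is a toy version of that amplitude sum, keeping only the two routes through a double slit (geometry and units are arbitrary, chosen for illustration): add the complex amplitudes for the two paths, then square the magnitude to get a relative detection probability on the screen.

```python
import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength
slit_separation = 5.0
screen_distance = 100.0

def intensity(y_screen):
    """Relative detection probability at height y_screen on the screen."""
    r1 = np.hypot(screen_distance, y_screen - slit_separation / 2)   # path via slit 1
    r2 = np.hypot(screen_distance, y_screen + slit_separation / 2)   # path via slit 2
    amplitude = np.exp(1j * k * r1) + np.exp(1j * k * r2)            # sum of amplitudes
    return abs(amplitude) ** 2                                       # probability ~ |amplitude|^2

for y in np.linspace(0, 30, 7):
    print(f"y = {y:5.1f}   relative intensity = {intensity(y):.3f}")
```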
Even if fired one at a time, the universe will sum infinitely many amplitudes and a statistical pattern will slowly emerge that reveals the wave-like interference effects. The larger the projectile the shorter it’s wavelength. The path integrals still must be summed over all the round-about paths, but the ones that are indirect tend to cancel out (destructively interfere) making the interference pattern much more narrow. Hence, interference effects are undetectable in something as large as a baseball, but still theoretically there. Figure 15: Results from the Double slit experiment: Pattern from a single slit vs. a double slit.By Jordgette [CC BY-SA 3.0 ] via Wikimedia Commons Feynman was the first to see the enormous potential in tapping into the infinite computing power of the universe. He said, back in 1981: “We can’t even hope to describe the state of a few hundred qubits in terms of classical bits. Might a computer that operates on qubits rather than bits (a quantum computer) be able to perform tasks that are beyond the capability of any conceivable classical computer?” – Richard Feynman [Hat tip John Preskill] Quantum computers are here now and they do use qubits instead of bits. The difference is that, while a classical 5-bit computer can be in only one state at any given time, such as “01001”, a 5-qubit quantum computer can be in all possible 5-qubit states (2^5 ) at once: “00000”, “00001”, “00010”, “00011”, …, “11111”. Each state, k, has a coefficient, \alpha_k , that, when squared, indicates the probability the computer will be in that state when we measure it. An 80-qubit quantum computer can be in 2^{80} states at once – more than the number of atoms in the observable universe! The key to unlocking the quantum computer‘s power involves two strange traits of quantum mechanics: quantum superposition and quantum entanglement. Each qubit can be placed into a superposition of states, so it can be both “0” and “1” at the same time. Then, it can be entangled with other qubits. When two or more qubits become entangled they act as “one system” of qubits. Two qubits can then be in four states at once, three qubits in eight, four qubits in 16 and so on. This is what enables the quantum computer to be in so many states at the same time. This letter from Schrödinger to Einstein in 1935 sums it up: “Another way of expressing the peculiar situation is: the best possible knowledge of a whole does not necessarily include the best possible knowledge of its parts…I would not call that one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought…” – Erwin Schrödinger, Proceedings of the Cambridge Philosophical Society, submitted Aug 14, 1935. [Hat tip to John Preskill] We can imagine starting a 5-qubit system in the ground state, all qubits initialized to “0”. The computer is in the state “00000”, no different than a classical computer so far. With the first tick of the clock (less than a nanosecond), we can place the 1st qubit into a superposition of states, state 1 = “00000” and state 2 = “10000”, with coefficients \alpha_1  and \alpha_2 indicating the probability of finding the system in each state respectively upon measurement. Now we have, in a sense, two computers operating at once. On the 2nd tick of the clock, we place the 2nd bit into a superposition too. 
Now our computer is in four states at once: "00000", "10000", "01000", and "11000" with amplitudes \alpha_1 , \alpha_2 , \alpha_3 , and \alpha_4 , respectively (their squares giving the probabilities). And so on. In a handful of nanoseconds our computer could be in thirty-two states at once. If we had more qubits to work with, there is no theoretical limit to how many states the quantum computer can be in at once. Other quantum operations allow us to entangle two or more qubits in any number of ways. For example, we can entangle qubit #1 and qubit #2 such that if qubit #1 has the value "0", then qubit #2 must be "1". Or, we can entangle qubits #3, #4, and #5 so that they must all have the same value: all zeros, "000", or all ones, "111" (an entanglement known as a GHZ state). Once the qubits of the system are entangled, the states of the system can be made to interfere with each other, conceptually like the interference in the two-slit experiment. The right quantum algorithm of constructive and destructive interference unleashes the universe's infinite quantum computational power.

In 1994 Peter Shor invented an algorithm, known as Shor's algorithm (a tutorial is here), for factorizing integers on a quantum computer. Factorizing is a really hard problem, and that hardness is why factoring-based schemes are used to encrypt substantially all of the information we send over the internet (RSA public key cryptography). For example, the problem of factoring a 500-digit integer takes 10^{12} CPU years on a conventional computer – longer than the age of the universe. A quantum computer with the same clock speed (a reasonable assumption) would take two seconds! [Hat tip to John Preskill for the stats] Factoring of integers lies in the complexity class NP and is widely believed to be intractable for classical computers, although – unlike the hardest problems in that class – it is not believed to be NP-hard. The best known classical algorithms take a running time that grows nearly exponentially with the number of digits of the integer N (that no fast classical algorithm exists is a conjecture, not a proven fact – see P=NP? for the related, broader question). On a quantum computer running Shor's algorithm, the running time grows only polynomially with the number of digits, proportional to (log N)^3 . That is a HUGE difference! That means, for instance, that quantum computers will trivially break all current public key encryption schemes! All the traffic on the internet will be visible to anyone that has access to a quantum computer! And still, quantum algorithms and quantum computing are very much in their infancy. We have a long way to go before we understand and can harness the full potential of quantum computing power!

Figure 16: Quantum subroutine for order finding in Shor's algorithm by Bender2k14 [CC BY-SA 4.0], via Wikimedia Commons. Based on Figure 1 in Circuit for Shor's algorithm using 2n+3 qubits by Stephane Beauregard.

There are many ways to implement a quantum computer. It is possible to make qubits out of electron spins: a spin pointing up represents a value of "1", and down a value of "0". A measured electron spin is always either up or down, i.e. it is quantized, but the electron can exist in a superposition of both. Electron spins can also be entangled together. Other implementations involve photons, nuclear spins, trapped ions, topological qubits, and more. While there are many different approaches, and still a lot to learn, all of today's approaches do have something in common: they try to isolate the qubits in a very cold (near absolute zero), dark, noiseless, vibration-free, static environment.
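Before moving on, the superposition-and-entanglement walkthrough above can be made concrete with a few lines of state-vector bookkeeping (ordinary linear algebra; no quantum hardware or special library assumed): a Hadamard gate puts one qubit into superposition, and two CNOT gates tie two more qubits to it, producing a three-qubit GHZ state.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard: |0> -> (|0>+|1>)/sqrt(2)
I = np.eye(2)

# Three qubits, all initialized to |000> (an 8-component state vector).
state = np.zeros(8)
state[0] = 1.0

# Put qubit 1 into superposition: apply H to the first qubit only.
state = np.kron(np.kron(H, I), I) @ state

# CNOTs copy qubit 1's value onto qubits 2 and 3, producing the GHZ state
# (|000> + |111>)/sqrt(2): the three qubits are now entangled.
CNOT_12 = np.eye(8)[[0, 1, 2, 3, 6, 7, 4, 5]]      # control qubit 1, target qubit 2
CNOT_13 = np.eye(8)[[0, 1, 2, 3, 5, 4, 7, 6]]      # control qubit 1, target qubit 3
state = CNOT_13 @ (CNOT_12 @ state)

for basis in range(8):
    if abs(state[basis]) > 1e-12:
        print(f"|{basis:03b}>  amplitude = {state[basis]:+.3f}")   # only |000> and |111> survive
```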
Nothing is allowed to interact with the qubits, nor are new qubits allowed to be added or removed during the quantum computation. We have a fraction of a second to finish the program and measure the qubits before decoherence sets in and all quantum information in the qubits is lost to the environment. Researchers are constantly trying to find more stable qubits that will resist decoherence for longer periods. Indeed, there is no criterion that says a quantum computer must be digital at all – it could be an analog-style quantum computer and do away with qubits altogether. IBM has a 5-qubit quantum computer online right now that anyone can access. They have online tutorials that teach how to use it too. The best way for us to develop an intuition for quantum mechanics is to get our hands dirty and write some quantum programs, called "quantum scores" – like a musical score. It really is not hard to learn, just counter-intuitive at first. Soon, intuition for this kind of programming will develop and it will feel natural. Another company, D-Wave, is working on an alternative approach to quantum computing called quantum annealing. A quantum annealer does not allow us to write quantum programs; instead, it is specifically designed to find global solutions (a global minimum) to specific kinds of mathematical optimization problems (here is a tutorial from D-Wave). This process takes advantage of yet another strange property of quantum mechanics called quantum tunneling. Quantum tunneling allows the computer to tunnel from one local minimum to another, in a superposition of many different paths at once, until a global minimum is found. While they do have a 1,000+ qubit commercial quantum annealer available, some physicists remain skeptical of D-Wave's results.

VII. Quantum Networks – Using Dynamics to Restore and Extend Entanglement

Quantum networks use a continual dynamical sequence of entanglement to teleport a quantum state for purposes of communication. It works like this: suppose A, B, C, and D are qubits and we entangle A with B in one location, and C with D in another (most laboratory quantum networks have used entangled photons from an EPR source for qubits). The two locations are 200 km apart. Suppose the farthest we can send B or C without losing their quantum information to decoherence is 100 km. So, we send B and C to a quantum repeater halfway in between. At the repeater station B and C are entangled (by performing a Bell state measurement, e.g. passing B and C through a partially transparent mirror). Instantaneously, A and D will become entangled! Even if some decoherence sets in with B and C, when they interact at the repeater station full entanglement is restored. After that it does not matter what happens to B or C. They may remain entangled, be measured, or completely decohere – A and D will remain entangled 200 km apart! This process can be repeated with N quantum repeaters to connect arbitrarily far away locations and to continually restore entanglement. It can also be applied in a multi-party setting (3 or more network endpoints). We could potentially have a vast number of locations entangled together at a distance – a whole quantum internet!
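The repeater step just described – entanglement swapping – can be checked with the same kind of state-vector bookkeeping (a sketch using the qubit labels from the text; the Bell outcome shown is one of four, each occurring with probability 1/4):

```python
import numpy as np

bell = np.array([[1, 0], [0, 1]]) / np.sqrt(2)      # amplitudes of (|00> + |11>)/sqrt(2)

# Joint state of qubits A,B,C,D: (A,B entangled) x (C,D entangled), as a 2x2x2x2 tensor.
psi = np.einsum("ab,cd->abcd", bell, bell)

# Bell measurement on B and C: project them onto (|00>+|11>)/sqrt(2).
# What remains is the (unnormalized) state of A and D.
psi_AD = np.einsum("bc,abcd->ad", bell.conj(), psi)

prob = np.sum(np.abs(psi_AD) ** 2)                  # this particular outcome occurs 1/4 of the time
psi_AD = psi_AD / np.sqrt(prob)

print("probability of this Bell outcome:", prob)                  # 0.25
print("state of A and D after the measurement:\n", psi_AD)        # (|00>+|11>)/sqrt(2): A and D are entangled
```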
When we are ready to teleport a quantum state, \left|\phi\right> , (which could be any number of qubits, for instance) over the network, we entangle \left|\phi\right> with A in the first location and then D will instantaneously be entangled in a superposition of states at the second location – one of which will be the state \left|\phi\right> ! In a multi-party setting, every endpoint of network receives the state \left|\phi\right> instantaneously! Classical bits of information must be sent from A to D to tell which one of the superposition is the intended state. This classical communication prevents information from traveling faster than the speed of light – as required by Einstein‘s special theory of relativity. Researchers further demonstrated experimentally that macroscopic atomic systems can be entangled (and a quantum network established) by transfer of light (the EM field) between the two systems (“Quantum teleportation between light and matter” – J. Sherson et al., 2006). In this case the atomic system was a spin-polarized gas sample of a thousand-billion (10^{12} ) cesium atoms at room temperature and the distance over which they were entangled was about \frac{1}{2} meter. VIII. Quantum Biology – Noisy, Warm, Dynamical Quantum Systems Quantum Biology is a field that has come out of nowhere to be at the forefront of pioneering science. But 20 years ago, virtually no one thought quantum mechanics had anything to do with biological organisms. On the scale of living things quantum effects just didn’t matter. Nowadays quantum effects seem to appear all over biological systems. The book “Life on the Edge: The Coming Age of Quantum Biology” by J. McFadden and J. Al-Khalili (2014) is a New York Times bestseller and gives a great comprehensive introduction. Another, slightly more technical introduction, is this paper “Quantum physics meets biology” by M. Ardnt, T. Juffmann, and V. Vedral (2009), and more recently this paper Quantum biology” (2013) by N. Lambert et al. A summary of the major research follows: Photosynthesis: Photosynthesis represents probably the most well studied of quantum biological phenomenon. The FMO complex (Fenna-Mathews-Olsen) of green-sulphur bacteria is a large complex making it readily accessible. Light-harvesting antennae in plants and certain bacteria absorb photons creating an electronic excitation. This excitation travels to a reaction center where it is converted to chemical energy. It is an amazing reaction achieving near 100% efficiency – nearly every single photon makes its way to the reaction center with virtually no energy wasted as heat. Also, it is an ultrafast reaction taking only about 100 femtoseconds. Quantum coherence was observed for the first time in “Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems” by Engel et al. (2007). The energy transfer seems to involve quantum exciton delocalization that is assisted by quantum phonon states and environmental noise. It is believed that coherent interference may guide the excitations to the reaction centers. This paper proves unequivocally that photosynthesis uses quantum processes – something that there is surprisingly strong resistance to by classicists. Enzyme Catalysis: Enzymes catalyze reactions speeding up reactions rates by enormous amounts. Classical factors can only explain a small fraction of this. Quantum tunneling of hydrogen seems to play an important role. 
Enzymes are vibrating all the time and it is unclear what coherence and superposition effects may also contribute to reaction-rate speed-ups. Avian Compass: Several bird species, including robins and pigeons, are believed to use a quantum radical-pair mechanism to sense the Earth's magnetic field for migratory purposes (the avian compass). The radical pairs form in the protein cryptochrome, which resides in the bird's eye. Olfactory sense: Traditional theories of olfaction describe a "lock & key" method where molecules (the key) are detected if they fit into a specific geometric configuration (the lock). We have about 400 differently shaped smell receptors, but recognize 100,000 different smells. For example, the human nose can distinguish ferrocene and nickelocene, which both have similar geometry. It has been proposed that the olfactory sense uses quantum electron tunneling to detect the vibrational spectra of molecules. Vision receptors: One of the key molecules involved in animal vision is called retinal. The retinal molecule undergoes a conformational change upon absorption of a photon. This allows humans to detect even just a handful of emitted photons. The protein rhodopsin (which contains retinal), active in octopi in the dark ocean depths, may be able to detect single photons. Consciousness: R. Penrose was the first to propose that quantum mechanics had a role in consciousness in his book "The Emperor's New Mind" (1989). Together with S. Hameroff, he developed a theory known as Orch-OR (orchestrated objective reduction) which has received much attention. While the theory remains highly controversial, it has been instrumental in jump-starting research into possible relationships between quantum mechanics and consciousness. The compelling notion behind this has to do with quantum mechanics' departure from determinism – determinism being, so to speak, the "annihilation operator" of free will – i.e. quantum probabilities could potentially allow free will to enter the picture. Generally, the thinking is that wave function collapse has something to do with conscious choice. The conversation about consciousness is a deeply fascinating subject unto itself and we will address this in a subsequent supposition. Mutation: In 1953, shortly after working out the structure of DNA, J. Watson and F. Crick proposed that mutation may occur through a process called tautomerization. The DNA sequence is composed of the nucleotides cytosine, adenine, guanine and thymine. Each base can shift into a rare tautomeric form that differs from the normal form only in the location of a hydrogen atom in the molecular structure. Tautomerization is the process by which that hydrogen atom quantum tunnels to its alternate position; a base caught in its rare form then mispairs – enol-guanine pairs with thymine instead of cytosine, and imino-adenine pairs with cytosine instead of thymine – and the mispairing can become a mutation when the DNA is copied. Only recently have quantum simulations become sophisticated enough to test this hypothesis. This paper "UV-Induced Proton Transfer between DNA Strands" by Y. Zhang et al. (2015) shows experimental evidence that ultraviolet (UV) photons can induce tautomerization. This is a very important mechanism we will return to later. Even with the growth and success of quantum biology, and the advances in sustaining quantum entanglement (e.g. 10 billion ions entangled for 39 minutes at room temperature – 2013), some scientists look at the warm, wet environment of living organisms and conclude there is no way "to keep decoherence at bay" in such an environment. Such arguments are formidable in the context of static quantum systems – like those used for developing present-day quantum computers.
But, biological systems tend to be dynamical, operating far from thermal equilibrium, with lots of noise and many accessible quantum rotational, vibrational, torsional and quasiparticle states. Moreover, we have discussed the importance of managing complexity in machine learning (chapter II and III), science has had a lot of success with classical molecular chemistry (balls and sticks), and, classical calculations are much simpler than quantum calculations. Shouldn’t we cling to this simpler approach until it is utterly falsified? Maybe so, but while quantum mechanical calculations are certainly more computationally intensive, they may not be more complex as a theory. More importantly, classical science is simply struggling to correctly predict observed results all over biological systems. A thorough study of quantum biological processes is deservedly well underway. In 2009 J. Cai, S. Popescu, and H. Briegel published a paper entitled “Dynamic entanglement in oscillating molecules and potential biological implications” (follow-up enhancements in 2012 are here) which has shown that entanglement can continually recur in biological molecules in a hot noisy environment in which no static entanglement can survive. Conformational change is ubiquitous in biological systems – this is the shape changing that many proteins rely on to function. Conformational change induced by noisy, thermal energy in the environment repetitively pushes two sites of the bio-molecule together entangling them. When the two sites come together, they “measure” each other. That means that their spins must either line up together, or be opposite. The system will sit in a superposition of both, with each spin dependent upon the other, i.e. entangled, during at least a portion of the oscillation cycle. If the conformational recurrence time is less than the decoherence time, entanglement may be preserved indefinitely. Entanglement can be continually restored even in the presence of very intense noise. Even when all entanglement is temporarily lost, it will be restored cyclically. We wonder if there were not only two sites, but a string of sites, could a wave of entanglement spread, via this method, throughout the system? Followed by a wave of decoherence. In such a circumstance, perhaps an “envelope” of entanglement might cascade through the system (as we discussed in chapter VI). Such a question could be addressed in the context of quantum dynamical models as in the solution to the quantum measurement problem. Figure 19: “Conformational changes of a bio-molecule, induced, for example, by the interaction with some other chemical, can lead to a time-dependent interaction between different sites (blue) of the molecule.” – from “Dynamic entanglement in oscillating molecules and potential biological implications” by J. Cai, S. Popescu, and H. Briegel (2009) IX. Quasicrystals & Phasons – Shadows of Life? “A small molecule might be called ‘the germ of a solid’. Starting from such a small solid germ, there seem to be two different ways of building up larger and larger associations. One is the comparatively dull way of repeating the same structure in three directions again and again. That is the way followed in a growing crystal. Once the periodicity is established, there is no definite limit to the size of the aggregate. The other way is that of building up a more and more extended aggregate without the dull device of repetition. 
That is the case of the more and more complicated organic molecule in which every atom, and every group of atoms, plays an individual role, not entirely equivalent to that of many others (as is the case in a periodic structure). We might quite properly call that an aperiodic crystal or solid and express our hypothesis by saying: ‘We believe a gene – or perhaps the whole chromosome fibre’ – to be an aperiodic solid.” – Erwin Schrödinger, What is Life? (1944) chapter entitled ‘The Aperiodic Solid’ Crystals are structures that derive their unique properties (optical transparency, strength, etc.) from the tight packing, symmetric structure of the atoms that comprise them – like quartz, ice, or diamonds. There are only so many ways atoms can be packed together in a periodic pattern to form a two-dimensional crystal: rectangles and parallelograms (i.e. 2-fold symmetry), triangles (3-fold), squares (4-fold), or hexagons like snowflakes or honeycombs (6-fold). These shapes can be connected tightly to one another leaving no gaps in between. Moreover, there is no limit on how extensive crystals can be since attaching more atoms is just a matter of repeating the pattern. Mathematically, we can tessellate an infinite plane with these shapes. Other shapes, like pentagons, don’t work. There are always gaps. In fact, mathematicians have proven no other symmetries are allowed in crystals! These symmetries were “forbidden” in nature and crystallographers never expected to see them. But, in 1982, Dan Shechtman did! When studying the structure of a lab-created alloy of aluminum and manganese (Al_6Mn ) using an electron microscope, he saw a 5-fold symmetric diffraction pattern (Bragg Diffraction) [see Figure 20]. Most crystallographers were skeptical. Shechtman spent two years scrutinizing his work, and, after ruling out all other possible explanations, published his findings in 1984. Turns out, what he discovered was a quasicrystal. In 2011 he was awarded the Nobel Prize in chemistry for his discovery. Figure 20: Electron diffraction pattern of an icosahedral Zn-Mg-Ho quasicrystal by Materialscientist (Own work) [CC BY-SA 3.0 or GFDL], via Wikimedia Commons Quasicrystals were not supposed to exist in nature because they were thought to require long-range forces to develop. The forces that were thought to guide atomic assembly of crystals, electromagnetic Coulomb forces, are dominated by local (nearest neighbor) interactions. Still, today, we can make dozens of different quasicrystals in the lab, and, they have been found a handful of times in nature. Physicists have postulated that the non-local effects of quantum mechanics are involved and this is what enables quasicrystals to exist. Figure 21: Example of 5-fold symmetry may be indicative of biological quasicrystals. (First) flower depicting 5-fold symmetry from “Lotsa Splainin’ 2 Do”, (second) plant with 5-fold symmetric spiral from www.digitalsynopsis.com, (third) starfish from www.quora.com, (last) Leonardo Da Vinci’s “The Vitruvian Man” (1485) via Wikipeida There is evidence of quasicrystals in biological systems as well: protein locations in the bovine papilloma virus appear to show dodecahedral symmetry [see figure 22], the Boerdijk-Coxeter helix (which forms the core of collagen) packs extremely densely and is proposed to have a quasicrystalline structure, pentameric symmetry of neurotransmitters may be indicative of quasicrystals, and general five-fold symmetries in nature [see figure 21] may also be indicative of their presence. 
Also, the golden ratio, which appears frequently in biological systems, is implicit in quasicrystal geometry.

Figure 22: Protein locations in a capsid of bovine papilloma virus. (a) Experimental protein density map. (b) Superimposition of the protein density map with a dodecahedral tessellation of the sphere. (c) The idealized quasilattice of protein density maxima. Kosnetsova, O.V., Rochal, S.B., Lorman, V.L., "Quasicrystalline Order and Dodecahedron Geometry in Exceptional Family of Viruses", Phys. Rev. Lett., Jan. 2012. Hat tip to Prescribed Evolution.

Aperiodic tilings give a mathematical description of quasicrystals. We can trace the history of such tilings back to Johannes Kepler in the 1600s. The most well-known examples are Penrose tilings [see figure 23], discovered by Roger Penrose in 1974. Penrose worked out that a 2-D infinite plane could, indeed, be perfectly tessellated in a non-periodic way – first using six different shapes, and later with only two. Even knowing what two shapes to use, it is not easy to construct a tiling that will cover the entire plane (a perfect Penrose tiling). More likely is that an arrangement will be chosen that leads to an incomplete tiling with gaps [see figure 23]. For example, in certain two-tile systems, only 7 of 54 combinations at each vertex will lead to a successful quasicrystal. Selected randomly, the chance of successfully building a quasicrystal quickly goes to zero as the number of vertices grows. Still, it has been shown that in certain cases it is possible to construct Penrose tilings with only local rules (e.g. see "Growing Perfect Quasicrystals", Onoda et al., 1988). However, this is not possible in all cases, e.g. quasicrystals that implement a one-dimensional Fibonacci sequence.

Figure 23: (Left) A failed Penrose tiling. (Right) A successful Penrose tiling. Both are from Paul Steinhardt's "Introduction to Quasicrystals" here.

Phasons are a kind of dynamic structural macro-rearrangement of particles. Like phonons, they are a quasiparticle. Several particles in the quasicrystal can simultaneously restructure themselves to phase out of one arrangement and into another [see Figure 24-right]. This paper from 2009 entitled "A phason disordered two-dimensional quantum antiferromagnet" studied a theoretical quasicrystal of ultracold atomic gases in optical lattices after undergoing phason distortions. The authors show that delocalized quantum effects grow stronger with the level of disorder in the quasicrystal. One can see how phason-flips disorder the perfect quasicrystalline pattern [see Figure 24-left].

Figure 24: (Left) The difference between an ordered and disordered quasicrystal after several phason-flips, from "A phason disordered two-dimensional quantum antiferromagnet" by A. Szallas and A. Jagannathan. (Right) HBS tilings of d-AlCoNi: (a) boat upright, (b) boat flipped. Atomic positions are indicated as Al = white, Co = blue, Ni = black. Large/small circles indicate vertical position. Tile edge length is 6.5 Å. Caption and image from "Discussion of phasons in quasicrystals and their dynamics" by M. Widom.

Figure 25: Physical examples of quasicrystals created in the lab. Both are from Paul Steinhardt's "Introduction to Quasicrystals".

In 2015 K. Edagawa et al. captured video via electron microscopy of a quasicrystal, Al_{70.8}Ni_{19.7}Co_{9.5} , growing. They published their observations here: "Experimental Observation of Quasicrystal Growth". This write-up, "Viewpoint: Watching Quasicrystals Grow" by J.
Jaszczak, provides an excellent summary of Edagawa’s findings and we will follow it here: certain quasicrystals, like this one, produce one-dimensional Fibonacci chains. A Fibonacci chain can be generated by starting with the sequence “WN” (W for wide, N for narrow referring to layers of the quasicrystal) and then use the following substitution rules: replace “W” with “WN” and replace “N” with “‘W”. Applying the substitutions one time transforms “WN” into “WNW”. Subsequent application expands the Fibonacci sequence: “WNWWN”, “WNWWNWNW”, “WNWWNWNWWNWWN”, and so on. The continued expansion of the sequence cannot be done without knowledge of the whole one-dimensional chain. Turns out that when new layers of atoms are added to the quasicrystal, they are usually added incorrectly leaving numerous gaps [see Figure 26]. This creates “phason-strain” in the quasicrystal. There may be, in fact, several erroneous layers added before the atoms undergo a “phason-flip” into a correct arrangement with no gaps. Figure 26: Portion of an ideal Penrose tiling illustrating part of a Fibonacci sequence of wide (W) and narrow (N) rows of tiles (green). The W and N layers are separated by rows of other tiles (light blue) that have edges perpendicular to the average orientation of the tiling’s growth front. The N layers have pinch points (red dots) where the separator layers touch, whereas the W layers keep the divider layers fully separated. An ideal tiling would require the next layer to be W as the growth front advances. However, Edagawa and colleagues observed a system in which the newly grown layer would sometimes start as an N layer, until a temperature-dependent time later upon which it would transition through tile flipping to become a W layer. (graph and caption are from Jaszczak, J.A. APS Physics) How does nature do this? Non-local quantum mechanical effects may be the answer. Is the quasicrystal momentarily entangled together so that it not only may be determined what sort of layer, N or W, goes next, but also, so that the action of several atoms may be coordinated together in one coordinated phason-flip? One cannot help but wonder, does quantum mechanics understand the Fibonacci sequence? In other words, has it figured out that it could start with “WN” and then follow the two simple substitution rules outlined above? This would represent a rather simple description (MDL) of the quasicrystal. And, if so, where does this understanding reside, i.e. where is the quasicrystal’s DNA? Suffice it to say, it has, at the very least, figured out something equivalent. In other words, whether it has understood the Fibonacci sequence or not, whether it has understood the substitution rules or not, it has developed the equivalent to an understanding as it can extend the sequence! So, even if quantum mechanics did not keep some sort of log, or blueprint of how to construct the Fibonacci quasicrystal, it certainly has the information to do so! X. Holography & The Ultimate Quantum Network – A Living Organism DNA is a remarkable molecule. Not just because it contains the whole genetic blueprint of the organism distilled in such a simple manner, but also because it can vibrate, rotate, and excite in so many ways. DNA is not natively static. It’s vibrating at superfast frequencies (like nanoseconds and femtoseconds)! Where does all this vibrational energy come from? One would think this energy would dissipate into the surrounding environment. Also puzzling is: why is there a full copy of DNA in every single cell? 
Isn’t that overkill? This paper, “Is it possible to predict electromagnetic resonances in proteins, DNA and RNA?” by I. Cosic, D. Cosic, and K. Lazar (2016), shows the incredible range of resonant frequencies in DNA. And, not only that, they also show that there is substantial overlap with other biomolecules like proteins and RNA. Perhaps DNA has some deeper purpose. Is it possible DNA is some sort of quantum repeater (chapter VII)? To do so, DNA would need to provide a source of entangled particles (like the EPR photon source in a laboratory quantum network). This paper “Quantum entanglement between the electron clouds of nucleic acids in DNA” (2010) by E. Rieper, J. Anders, and V. Vedral has shown that entanglement between the electron clouds of neighboring nucleotides plays a critical role in holding DNA together. They oscillate, like springs, between the nucleotides, and occupy a superposition of states: to balance each other out laterally, and to synchronize oscillations (harmonics) along the chain. The former prevents lateral strain on the molecule, and the latter is more rhythmically stable. Both kinds of superpositions exist because they stabilize and lower the overall energy configuration of the molecule! The entanglement is in its ground state at biological temperatures so the molecule will remain entangled even in thermal equilibrium. Furthermore, because the electron clouds act like spacers between the planar nucleotides they are coupled to their vibrations (phonons). If the electron clouds are in a superposition of states, then the phonons will be also. Figure 27: The structure of the DNA double helix. The atoms in the structure are colour-coded by element and the detailed structure of two base pairs (nucleotides) are shown in the bottom right. The nucleotides are planar molecules primarily aligned perpendicular to the direction of the helix. From Wikipedia. So, DNA’s electron clouds could provide the entanglement, but where does the energy come from? It could, for instance, come from the absorption of ultraviolet light (UV radiation). While we’re all mindful of the harmful aspect of UV radiation, DNA is actually able to dissipate this energy superfast and super efficiently 99.9% of the time. When DNA does absorb UV radiation, the absorption has been shown to be spread out non-locally along the nucleotide chain and follows a process known as internal conversion where it is thought to be thermalized (i.e. turned into heat). Could UV photons be down-converted and then radiated as photons at THz frequencies instead? One UV photon has the energy to make a thousand THz photons, for instance. We have seen such highly efficient and coherent quantum conversions of energy before in photosynthesis (chapter VIII). Could this be a way of connecting the quantum network via the overlapping resonant frequencies to neighboring DNA, RNA, and proteins? The photons would need to be coherent to entangle the network. Also, we can’t always count on UV radiation, e.g. at night or indoors. If this is to work, there must be another source of energy driving the vibrations of DNA also. A paper published in 2013 by A. Bolan et al. showed experimental evidence that THz radiation affected the expression of genes in the stem cells of mice suggesting that the THz spectrum is particularly important for gene expression. Phonon modes have been observed in DNA for some time, but not under physiological conditions (e.g. in the presence of water) until now. 
This paper entitled “Observation of coherent delocalized phonon-like modes in DNA under physiological conditions” (2016) by M. González-Jiménez, et al. gives experimental evidence of coherent quantum phonons states even in the presence of water. These phonons span the length of the DNA sequence, expand and contract the distance between nucleotides, and are thought to play a role in breaking the hydrogen bonds that connect the two DNA strands. They are in the THz regime and allow the strands to open forming a transcription bubble which enables access to the nucleotide sequence for replication. This is sometimes referred to as “DNA breathing“. Hence, it’s plausible these phonon modes can control gene expression, and, possibly exist in a complex superposition with the other states of the DNA molecule. They also are coherent which is critical for extending the quantum network, but, is there any evidence proteins could be entangled too? In 2015 I. Lundholm, et al. published this paper “Terahertz radiation induces non-thermal structural changes associated with Fröhlich condensation in a protein crystal” showing that they could create something called a Fröhlich condensate when they exposed a collection of protein molecules to a THz laser. Herbert Fröhlich proposed the idea back in 1968 and since then it has been the subject of much debate. Now, finally, we have direct evidence these states can be induced in biological systems. These condensates are special because they involve a macroscopic collection of molecules condensing into a single non-local quantum state that only exists under the right conditions. There are many ways a Fröhlich condensate can form, but, in this case, it involves compression of the atomic helical structure of the proteins. Upon compression, the electrons of millions of proteins in crystalline form align and form a collective vibrational state, oscillating together coherently. This conformational change in the protein is critical to controlling its functioning – something generally true of proteins, e.g. as in enzyme catalysis, and protein-protein interactions (hat tip here for the examples). In the laboratory, the condensate state will last micro- to milli- seconds after exposure to the THz radiation, a long time in biomolecular timescales. Of course, that’s upon exposure to a THz laser. Could DNA THz photon emissions perform the same feat and carry the coherent information on from DNA and entangle proteins in the quantum network as well? Could a whole quantum network involving DNA, RNA, and a vast slew of proteins throughout the organism be entangled together via continuous coherent interaction with the EM field (at THz and other frequencies)? If so, it would give the organism an identity as “One” thing, and, it would connect the proteins which are interacting with the environment with the DNA that encodes them. This would open a possible connection between the tautomerization mutation mechanism (chapter VIII) and environmental stress! In other words, a method by which mutations are adaptive would be feasible, and not just that, but a method which could use quantum computational power to determine how to adapt! But, then there is the question of energy. Where does the continual energy supply come from to support this network and can it supply it without disrupting coherence? In this paper, “Fröhlich Systems in Cellular Physiology” by F. 
Šrobár (2012), the author describes the details of a pumping source providing energy to the Fröhlich condensate via ATP, or GTP-producing mitochondria. Could the organism’s own metabolism be the sustaining energy source behind the organism’s coherent quantum network? In the presence of so much coherence, is it possible dynamical interference patterns, using the EM field, could be directed very precisely by the organism – very much like a hologram? Not a visual hologram but rather, images in the EM field relevant to controlling biomolecular processes (e.g. the KHz, MHz, GHz, and THz domains)? A hologram is a 3-D image captured on a 2-D surface using a laser. The holographic plate is special in that it not only records brightness and color, but also the phase of incident coherent light. When the same frequency of coherent light is shined upon it, it reproduces the 3-D image through interference. The surface does not need to be a 2-D sheet, however. Coherently vibrating systems of molecules throughout the organism could create the interference. Not only that, but if the biological quantum network is in a superposition of many states at once, could it conceivably create a superposition of multiple interference patterns in the 3-D EM field at many different frequencies simultaneously (e.g. 20 MHz, 100 GHz, 1 THz, etc.)? With these interference effects, perhaps the organism directly controls, for instance, microtubule growth in specific regions as shown in this paper “Live visualizations of single isolated tubulin protein self-assembly via tunneling current: effect of electromagnetic pumping during spontaneous growth of microtubule” (2014) by S. Sahu, S. Ghosh, D. Fujita, and A. Bandyopadhyay? The paper shows that when the EM field is turned on, at a frequency that coincides with mechanical vibrational frequency of the tubulin protein molecule, the microtubules may be induced to grow, or, stop growing if the EM field is turned off. Microtubules are structural proteins that help form the cytoskeleton of all cells throughout the organism. Perhaps, more generally, organisms use holographic like interference effects to induce or halt growth, induce conformational changes (with the right frequency), manipulate Fröhlich effects, and generally control protein function throughout themselves? Indeed, it may not only be the case of “DNA directing its own transcription” as many biologists believe, but the organism as One whole directing many aspects of its own development. Figure 28: (Left) Two photographs of a single hologram taken from different viewpoints, via Wikipedia. (Right) Rainbow hologram showing the change in colour in the vertical direction via Wikipedia. This process would be more analogous to the growth of a quasicrystal (chapter IX) than a bunch of individual molecules trying to find their way. In the process of growth, mistakes along the way happen, such as misfolded proteins. Because quantum mechanics is probabilistic, some mistakes are inevitable. They become like the phason-strain in the quasicrystal – the quantum network corrects the arrangement through non-local phason-shifts, directed holographically. Rearrangement is not like reallocating balls and sticks as in classical molecular chemistry, but more like phasing out of one configuration of quantum wave functions and into another. 
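Before going further with the hologram analogy, here is a minimal numerical sketch of the two-beam recording and reconstruction described above (plain Python with numpy; the fields and wavelength are arbitrary illustrative choices, not biological values). The recorded intensity stores the object beam’s phase relative to the reference, and re-illuminating the recording with the reference beam regenerates the object wave up to a constant:

import numpy as np

x = np.linspace(-1.0, 1.0, 2000)        # transverse coordinate (arbitrary units)
k = 2 * np.pi / 0.05                    # wavenumber for a 0.05-unit wavelength

R = np.exp(1j * k * 0.3 * x)            # reference beam: tilted plane wave
O = 0.2 * np.exp(1j * k * x**2)         # "object" beam: weak, curved wavefront

I = np.abs(R + O) ** 2                  # recorded plate: brightness AND relative phase

U = I * R                               # re-illuminate the plate with the reference beam
# U = (|R|^2 + |O|^2) R  +  R^2 conj(O)  +  |R|^2 O
twin = R**2 * np.conj(O)                # the unwanted twin (conjugate) image term
rest = U - (np.abs(R)**2 + np.abs(O)**2) * R - twin

print(np.allclose(rest, np.abs(R)**2 * O))   # True: the object wave is recovered

This is only the textbook identity behind optical holography; whether anything like it is implemented by coherently vibrating biomolecules is exactly the speculation being entertained here.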
Perhaps the quantum computing power of vast superpositions through holographic interference effects, not unlike Shor’s algorithm (chapter V), is the key to solving the highly non-linear, probably NP-hard problems of organic growth. Construction of the eye, a process requiring global spatial information and coordination, could be envisioned holographically by the quantum organism in the same way that quantum mechanics understood the Fibonacci sequence. Imagine the holographic image of the “Death Star” in “Star Wars” acting as a 3-D blueprint guiding its own assembly (as opposed to destroying it!). The hologram of the eye, originating from the quantum network of the organism, is like a guiding pattern – a pattern resulting from coherent interfering amplitudes – guiding its own construction. It’s the same concept as how quantum mechanics can project forward the Fibonacci sequence and then build it in a quasicrystal, just scaled up many-fold in complexity. Growth of the eye could be the result of deliberate control of the organism’s coherent EM field focused through the holographic lens of DNA and the entangled biomolecules of the organism’s quantum network. Figure 29: (Left) Diagram of the human eye via Wikipedia. (Right) Close-up photograph of the human eye by Twisted Sifter. The growth of the organism could quite possibly be related to our own experience of feeling, through intuition, that the solution to a problem is out there. Maybe we haven’t put all the parts together yet, we haven’t found a tangible approach yet, we may not know all the details, but there is a guiding intuition there. We feel it. Perhaps that is the feeling of creativity, the feeling of quantum interference, the feeling of holographic effects. The building of an organism is like layers of the quasicrystals phasing together, capturing abstract complex relationships and dependencies, to make a successful quasicrystal. Each layer is a milestone on the way to that distant clever solution – a fully functional organism! Maybe humans do not have a monopoly on creative intelligence; maybe it is a power central to the Universe! Life moved it beyond quasicrystalline structures, highly advanced organisms moved it beyond the space of biomolecules, but the raw creative power could be intrinsic. Moreover, all life would be the very special result of immense problem solving, creativity and quantum computational power! That certainly feels good, doesn’t it?

XI. Quantum Mechanics and Evolution

“We are the cosmos made conscious and life is the means by which the universe understands itself.” – Brian Cox (~2011), television show “Wonders of the Universe – Messengers”

Attempts to describe evolution in quantum mechanical terms run into difficulties because quantum mechanics does not care about ‘fitness’ or ‘survival’ – it only cares about energy states. Some states are higher energy, some are lower, some are more or less stable. As in the solution of the quantum measurement problem (chapter VI), we may not need anything outside our present understanding of quantum mechanics to understand evolution. The key is recognizing that quantum entanglement itself factors into the energy of biological quantum states. Just like quantum entanglement in the electron clouds of DNA allows the electrons to pack more densely in their orbits in a cooperative quantum superposition, thereby achieving a more stable energy configuration, we expect entanglement throughout the organism to lead to lower, more stable energy states.
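To make the energy argument concrete, here is a toy two-spin calculation (a minimal sketch assuming nothing about DNA or proteins – just two spins with a Heisenberg-type coupling J, using numpy): the entangled singlet ground state sits below the best unentangled product state, and its entropy of entanglement is maximal.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

J = 1.0
H = (J / 4) * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))

ground_energy = np.linalg.eigvalsh(H).min()            # -0.75 J: the entangled singlet

# The best an unentangled (product) state can do: spins anti-aligned along z
up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
product = np.kron(up, down)
product_energy = (product.conj() @ H @ product).real   # -0.25 J

# Entropy of entanglement of the ground state (how entangled it is, in bits)
w, v = np.linalg.eigh(H)
psi = v[:, 0].reshape(2, 2)                            # ground-state amplitudes as a 2x2 matrix
rho_A = psi @ psi.conj().T                             # reduced state of spin A
p = np.linalg.eigvalsh(rho_A)
S = -sum(pi * np.log2(pi) for pi in p if pi > 1e-12)

print(ground_energy, product_energy, S)                # -0.75  -0.25  1.0

In this toy model the entangled state really is the lower-energy, thermally preferred state, which is the same logic the Rieper, Anders and Vedral paper applies to the stacked electron clouds in DNA; whether that argument scales up to a whole organism, as suggested here, is an open speculation.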
Coherence spans the whole system: DNA oscillating coherently together, coherent with RNA, coherent with protein vibrations, in sync with the EM field – all coherent and entangled together. All that entanglement affects the energy of the system and allows for a more stable energy state for the whole organism. Moreover, it incentivizes life to evolve to organisms of increasing quantum entanglement – because it is a more stable energy state. Increasing entanglement means increasing quantum computational horsepower, which, in turn, means more ability to find even more stable energy states in the vast space of potential biological organisms. This, as opposed to natural selection, may be the key reason for bias in evolution toward more complex creatures. Natural selection may be the side show. Very important, yes, absolutely a part of the evolutionary landscape, yes, but not the main theme. That is much deeper! Recall our example of fullerene (a.k.a. buckyballs) fired through a two-slit interferometer. When this experiment is performed in a vacuum, a clear interference pattern emerges. As we allow gas particulates into the vacuum, the interference fringes grow fuzzier and eventually disappear (hat tip “Quantum physics meets biology” for the example). The gas molecules disrupt the interference pattern. They are like the stresses in the environment – heat stress, oxidative stress, lack of food, …whatever. They all muddle the interference pattern. There is no interferometer per se in a living organism, but there are holographic effects throughout the organism and every entangled part of the organism can feel it (this feeling can be quantified mathematically as the entropy of entanglement through something called an entanglement witness). The stresses erode the coherence of the organism and induce instability in the energy state. The organism will probabilistically adapt by undergoing a quantum transition to a more stable energy state – clarifying the interference pattern, clarifying the organism’s internal holography. All within the mathematical framework of dynamical quantum mechanics. This could mean an epigenetic change, a simple change to the genetic nucleotide sequence or a complex rearrangement. The whole of DNA (and the epigenetic feedback system) is entangled together, so these complex transitions are possible, and made so by quantum computational power. In J. McFadden’s book “Quantum Evolution” (2000) he describes one of the preeminent challenges of molecular evolutionary biology: to explain the evolution of Adenosine monophosphate (AMP). AMP is a nucleotide in RNA and a cousin of the more well-known ATP (Adenosine triphosphate) energy molecule. Its creation involves a sequence of thirteen steps involving twelve different enzymes, none of which has any use other than making AMP, and each one is absolutely essential to AMP creation (see here for a detailed description). If a single enzyme is missing, no AMP is made. Furthermore, there is no evidence of simpler systems in any biological species. No process of natural selection could seemingly account for this since there is no advantage to having any one of the enzymes, much less all twelve. In other words, it would seem, somehow, evolution had this hugely important AMP molecule in mind and evolved the enzymes to make it. Such an evolutionary leap has no explanation in the classical picture, but we can make sense of this in the same way that quantum mechanics envisioned completion of the Fibonacci quasicrystal.
The twelve enzymes represent quasicrystal layers along the way that must be completed as intermediate steps. In holographic terms, organisms, prior to having AMP, saw via far-reaching path integrals a distant holographic plan of the molecule comprised of many frequencies of EM interference: a faint glow corresponding to the stable energy configuration of the AMP molecule, a hologram formed from the intersection of the amplitudes of infinitely many path integrals at many relevant biological frequencies. A hint of a clever idea toward a more stable energy configuration. The enzymes needed for its development were holographic interference peaks along the way. Development of each enzyme occurred not by accident, but with the grand vision of the AMP molecule all along. This is the same conceptual process that we as human beings execute all the time – having a distant vision of a solution to a problem, like Roger Penrose’s intuition of the Penrose tiles, Feynman’s intuition of the quantum computer, or Schrödinger’s vision of quantum genes. Intuition guides us. We know from learning theory (chapters II & III) that learning is mathematical in nature, whether executed by the machine, by the mind, or by DNA. The difference is the persistent quantum entanglement that is life, that is “Oneness”, and the holographic quantum computational power that goes with it. Because the entire organism is connected as one vast quantum entangled network, mutation via UV photon induced tautomerization (chapter VIII) can be viewed as a quantum transition between the energy states of the unified organism. So, when the organism is faced with an environmental stress, it is in an unstable energy state. Just like a hydrogen atom absorbing an incident photon to excite it to the next energy level, the organism absorbs the UV photon (or photons) and phason-shifts the genetic code and the entire entangled organism. Isomerization of the nucleotide bases occurs. This is made possible in part by the marginal stability of proteins (chapter IV) – it takes very little energy to transition from one protein to another. In other words, a change to one or more nucleotides in the DNA sequence instantaneously and simultaneously shifts the nucleotide sequence in other DNA, RNA, and the amino acid sequences of proteins. Evolutionary adaptations of the organism are quantum transitions to more stable energy configurations. In chapters II and III we talked about the importance of simplicity (MDL) in the genetic code, the importance of Occam’s Razor. Simplicity is important for generalization, so that DNA can understand the process of building organisms in the simplest terms. Thereby it can generalize well; that is, when it attempts to adapt an organism to its environment, it has a sense of how to do it. The question then arises: how does this principle of Occam’s razor manifest itself in the context of quantum holograms? A lens, like that of the eye, is a very beautiful object with great symmetry, and must be perfectly convex to focus light properly. If we start making random changes to it, the image will no longer be in focus. The blueprint of the lens must be kept simple to ensure it is constructed and functions properly. Moreover, the muscles around the lens of the eye that flex and relax to adjust its focal length must do so in a precise, choreographed way. Random deformations of its shape will render the focused image blurry. The same concept applies to the genetic code. DNA serves as a holographic focal lens for many EM frequencies simultaneously.
We cannot just randomly perturb its shape; that could damage it and leave the organism’s guiding hologram out of focus, unstable. The changes must be made very carefully to preserve order. This is a factor in the quantum calculus of mutation: it is not simply a local question of whether the UV photon interacts with a nucleotide and tautomerizes it. Rather, it must be non-local, involving the whole organism and connecting to the stress in the environment, while also keeping the DNA code very organized and simple. If a DNA mutation occurs that does not preserve a high state of order in the blueprint, i.e. does not preserve a short MDL, it could be disastrous for the organism.

XII. Experimental Results in Evolutionary Biology

So, how does all this contrast with biological studies of evolution? It turns out Lamarck was correct: there is growing evidence that mutations are indeed adaptive – mutation rates increase when organisms are exposed to stress (heat, oxidative, starvation, etc.), and they resist mutation when not stressed. This has been studied now in many forms of yeast, bacteria, and human cancer cells across many types of stress and under many circumstances. Moreover, there are many kinds of mutations in the genetic code ranging from small changes affecting a few nucleotides, to deletions and insertions, to gross genetic rearrangements. This paper “Mutation as a Stress Response and the Regulation of Evolvability” (2007) by R. Galhardo, P. Hastings, and S. Rosenberg sums it up: “Stress-induced genomic instability has been studied in a variety of strains, organisms, stress conditions and circumstances, in various bacteria, yeast, and human cancer cells. Many kinds of genetic changes have been observed, including small (1 to few nucleotide) changes, deletions and insertions, gross chromosomal rearrangements and copy-number variations, and movement of mobile elements, all induced by stresses. Similarly, diversity is seen in the genetic and protein requirements, and other aspects of the molecular mechanisms of the stress-induced mutagenesis pathways.” – “Mutation as a Stress Response and the Regulation of Evolvability” (2007) by R. Galhardo, P. Hastings, and S. Rosenberg

What does the fossil record say about evolution? The fossil record paints a mixed picture of gradualism and saltation. The main theme of the fossil record is one of stasis – fossils exhibit basically no evolutionary change for long periods of time, millions of years in some cases. There are clear instances where the geological record is well preserved and still we see stasis, e.g. the fossil record of Lake Turkana, Kenya. Sometimes, there are gaps in the fossil record. Sometimes long periods of stasis follow abrupt periods of change in fossils – an evolutionary theory known as punctuated equilibria. Other times, the fossil record clearly shows a continuous gradual rate of evolution (e.g. the fossil record of marine plankton) – a contrasting evolutionary theory known as phyletic gradualism. This paper “Speciation and the Fossil Record” by M. Benton and P. Pearson (2001) provides an excellent summary. Neither theory – punctuated equilibria nor phyletic gradualism – seems to apply in every case. If we allow ourselves to be open to the idea of quantum mechanics in evolution, it would seem Schrödinger was right. On the fossil record, we could see quantum evolution as compatible with both the punctuated equilibria and the phyletic gradualism theories of evolution, as changes are induced by stress with quantum randomness.
On the biological evidence for adaptive mutation it would seem quantum evolution nails it. We have talked about the fundamental physical character of quantum mechanics and evolution. Three aspects emerge as central to the theme: quantum entanglement via a quantum network, generalization (or adaptation) through holographic quantum computing, and complexity management via the MDL principle in DNA. These three themes are all connected as a natural result of the dynamics of quantum mechanics. Sometimes, though, it can be useful to see things through a personal, first-person perspective. Perhaps entanglement is like “love”, connecting things to become One, generalization through holographic projection like “creativity”, and MDL complexity like “understanding”. Now suppose, if just for a moment, that these three traits – love, creativity, and understanding – which define the essence of the human experience, are not just three high-level traits selected for during “X billion years of evolution” but characterize life and the universe itself from its very beginnings.

The End

Creative Commons BY-NC 4.0: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/).
Vol. 62, No. 6 (2013) A class of asymptotic solution of sea-air time delay oscillator for the El Niño-southern oscillation mechanism Ouyang Cheng, Lin Wan-Tao, Cheng Rong-Jun, Mo Jia-Qi 2013, 62 (6): 060201. doi: 10.7498/aps.62.060201 Abstract + A class of coupled system of the El Niño-southern oscillation mechanism is studied. Using the singular perturbation theory and method, the outer solution and the initial layer corrective term of the model are solved. And then, the asymptotic expansion of the solution for the problem is obtained and the asymptotic behavior of solution is considered. Numerical modeling of the signal transmission by cables and electromagnetic coupling for logging while drilling Zhu Ke-Bin, Nie Zai-Ping, Sun Xiang-Yang 2013, 62 (6): 060202. doi: 10.7498/aps.62.060202 Abstract + Lack of efficiency in transmitting logging signals has long been one of the crucial problems for the development of logging while drilling. This study aims to address this issue by using the advanced scheme for logging while drilling signal transmission, proposed by NovatekTM. The main points of the study focus on electromagnetic coupling between two adjacent pipes and the signal transmission in coaxial cables imbedded in drilling pipes. According to the axial symmetry of the electromagnetic coupling structure, the numerical mode matching is used to establish the numerical model for it. Through simulation analysis which is based on the numerical modeling of the electromagnetic coupling structure, we analyze how various parameters of the structure influence the coupling, obtain some significant conclusions, and optimize the coupling structure. The conclusions can be used to guide optimization design of coupling structure between the drill-pipe for signal transmission in logging while drilling. In addition, the rectangular transmission line whose characteristic impedance is 50 Ω is designed for the cable imbedded in the drill pipe, and the attenuation is calculated. Finally, simulation and experiment are performed for one unit of the pipeline. The results are in agreement with each other, thereby showing the good transmission performance. Critical data processing technology for spectral image inversion in a static computational spectral imager Liu Yang-Yang, Lü Qun-Bo, Zeng Xiao-Ru, Huang Min, Xiang Li-Bin 2013, 62 (6): 060203. doi: 10.7498/aps.62.060203 Abstract + To carry out spectral image inversion in a static computational spectral imager is a crucial step for accomplishing its theoretical advantages, so the data processing technology for spectral image inversion will determine the final spectral image achieved. Focusing on the spectral image inversion, we have investigated various algorithms such as image reconstruction, image compressed sensing and spectral image inversion theories, and compared them carefully. By taking into account the data transmission link of the system and the error in the engineering development process, a comprehensive simulation is carried out. The key issue of spectral image inversion, and also how to use the inversion algorithms to achieve its optimized routes are pointed out. So a detailed analysis for realizing the theoretical advantages and ensuring instrument technology development is provided. Entangled quantum heat engines based on two-qubit XXZ model with Dzyaloshinski-Mariya interaction Wang Tao, Huang Xiao-Li, Liu Yang, Xu Huan 2013, 62 (6): 060301. 
doi: 10.7498/aps.62.060301 Abstract + We construct an entangled quantum heat engine based on a two-coupled-qubit XXZ model with Dzyaloshinskii-Moriya interaction. The work done and the heat transfer are discussed according to the definition first given by Kieu. The relations between the entanglement and heat transfer, work output and efficiency are analyzed for different anisotropic parameters. The results show that the second law of thermodynamics holds in entangled systems and the isolines for the efficiency are looped curves. When the anisotropic parameter Δ is small enough, the heat engine can operate in both C1 > C2 and C1 < C2; however, when Δ is large, the heat engine operates in C1 > C2 only. Simulating dynamical Casimir effect at finite temperature with magnons in spin chain within an optical lattice Zhao Xu, Zhao Xing-Dong, Jing Hui 2013, 62 (6): 060302. doi: 10.7498/aps.62.060302 Abstract + In this paper, we study the dynamical characteristics of magnons generated by the static magnetic dipole-dipole interaction and the external-laser induced dipole-dipole interaction in a spin chain within an optical lattice. Specifically, we choose a blue-detuned optical lattice and define an effective temperature for the system. We make a comparison between the generation process of magnons and that of photons in an optical vibration cavity. The results show that by suitably choosing the system parameters, the dynamical Casimir effect at finite temperature in the magnon system can be reproduced. Hawking radiation from the dynamical spherically symmetric Einstein-Yang-Mills-Chern-Simons black hole Yang Shu-Zheng, Lin Kai 2013, 62 (6): 060401. doi: 10.7498/aps.62.060401 Abstract + Using the Hamilton-Jacobi method, the Hawking tunneling radiation and temperature are investigated near the event horizon of the Einstein-Yang-Mills-Chern-Simons black hole. The results show that the temperature and tunneling rate depend on the charge and horizon of black holes, and the conclusion is significant for investigating other dynamical black holes. What is more, we also prove that this method can be used to study Hawking radiation in the scalar, vector, Dirac field and gravitational wave cases. Cellular automaton simulation of multi-lane traffic flow including emergency vehicle Zhao Han-Tao, Mao Hong-Yan 2013, 62 (6): 060501. doi: 10.7498/aps.62.060501 Abstract + Based on the analysis of urban road traffic flow affected by an emergency vehicle, a multi-lane cellular automaton model is established. Three characteristic variables are introduced to modify the lane change rules, including the give-way state variable, the affected areas of the police siren and the safe distance for mandatory lane change. Numerical simulation results indicate that lane number and hybrid vehicle scale factor have a great influence on vehicle speed and lane change number in the low-density range. And the parameter setting for the affected areas of the police siren changes the lane change number within a certain range. Meanwhile, the parameter of safe distance for mandatory lane change mainly affects emergency vehicle speed and lane change number. The study indicates that the appearance of an emergency vehicle obviously interferes with traffic flow of lower density, and the proposed parameters make the cellular automaton model closer to the actual traffic scenarios under emergency conditions. Chaotic forecasting of natural circulation flow instabilities under rolling motion based on Lyapunov exponents Zhang Wen-Chao, Tan Si-Chao, Gao Pu-Zhen 2013, 62 (6): 060502.
doi: 10.7498/aps.62.060502 Abstract + The chaotic forecasting of irregular complex flow oscillation of natural circulation flow instabilities under rolling motion condition based on the largest Lyapunov exponents is performed. The correlation dimension, Kolmogorov entropy and the largest Lyapunov exponent are determined based on the phase space reconstruction theory of experimental data. On the premise that the irregular complex flow oscillation is confirmed to own chaos characteristic, the chaotic forecasting of the irregular complex flow oscillation is carried out by calculating the largest Lyapunov exponent. A comparisons between the prediction results and experimental data indicates that the chaotic forecasting based on the largest Lyapunov exponent is an effective way of producing those two-phase natural circulation flow instabilities. Meanwhile, the maximum predictable scale of chaotic flow instability is determined and a way of dynamic forecasting to monitor flow oscillation is presented. The method employed here provides a new method of studying the complex two-phase flow instabilities. Femtosecond laser fine machining of energetic materials Wang Wen-Ting, Hu Bing, Wang Ming-Wei 2013, 62 (6): 060601. doi: 10.7498/aps.62.060601 Abstract + In this article, the characteristics of femtosecond laser pulses and the interaction mechanism between them and materials are described, and the characteristics and advantages of femtosecond laser micromachining of energetic materials are discussed. The technology and development of the femtosecond laser machining of energetic materials are reviewed. The experimental and theoretical research of femtosecond laser machining of energetic materials and the corresponding research scheme and key techniques for further development are discussed. Theoretical design and experiment study of sub-wavelength antireflective micropyramid structures on THz emitters Hu Xiao-Kun, Li Jiang, Li Xian, Chen Yun-Hui, Li Yan-Feng, Chai Lu, Wang Qing-Yue 2013, 62 (6): 060701. doi: 10.7498/aps.62.060701 Abstract + Nonlinear crystals commonly used in optical rectification for the generation of terahertz (THz) radiation have high refractive indices in the THz frequency range, and thus Fresnel reflection at the crystal-air output surface causes a large part of the generated THz wave to be reflected back into the crystals. Here we report on the design and experimental study of sub-wavelength antireflective micropyramid structures on GaP crystals. Effective medium theory is used to demonstrate the enhancement of THz output by the antireflective micropyramid structures, and further to design the antireflective structures at different frequencies. Several micropyramid structures are fabricated on the output surface of GaP crystals by micromachining, and the correlation between the THz output enhancement and the structure parameters is verified. The agreement between theory and experiment shows that our methodology is applicable to other THz emitters based on optical rectification. The research of polarized information detection for photo-elastic modulator-based imaging spectropolarimeter Chen You-Hua, Wang Zhao-Ba, Wang Zhi-Bin, Zhang Rui, Wang Yan-Chao, Wang Guan-Jun 2013, 62 (6): 060702. doi: 10.7498/aps.62.060702 Abstract + A new method of polarization modulation based triple-photoelastic-modulator (triple-PEM) is proposed as an key component of photo-elastic modulator-based imaging spectro-polarimeter (PEM-ISP) combined with acousto optic tunable filter. 
The basic principles of PEM-ISP and triple-PEM-based differential frequency polarization modulation are described, that is, the tandem PEMs are operated as an electro-optic circular retardance modulator in a high-performance reflective imaging system. Operating the PEMs at slightly different resonant frequencies generates a differential signal that modulates the polarized component of the incident light at a much lower heterodyne frequency. Then the basic equations for polarization measurement is derived by analyzing and calculating its Muller matrix. The simulation and experiments verify the feasibility and accuracy of polarization measurement by triple-PEM-based differential frequency polarization modulation. Finally, we analyze the influences of the setting of integral step and sampling interval of the detector polarization measurement, and a preliminary error analyses of field angle, phase retardation amplitude etc are also be carried out. The result shows that the measurement error of DoLP is less than 0.6% when the phase retardation error is 1%. This work provides the necessary theoretical basis for remote sensing of new PEM-ISP and for engineering implementation of Stokes parametric inversion. Study of Au-Ag alloy film based infrared surface plasmon resonance sensors Zhang Zhe, Liu Qian, Qi Zhi-Mei 2013, 62 (6): 060703. doi: 10.7498/aps.62.060703 Abstract + Au-Ag alloy films deposited on the glass substrates are used, for the first time, as a wavelength-interrogated near infrared surface plasmon resonance (SPR) sensor. The values of resonance wavelength (λR) of the sensor at different angles of incidence are determined by absorptiometry and its refractive-index (RI) sensitivity is investigated using aqueous glucose solutions as the standard RI samples. As the incident angle increases from 7.5° to 9.5°, the SPR absorption peak shifts from λR = 1215 nm to 767.7 nm, the full width at half magnitude (FWHM) of the peak reduces from 292.8 nm to 131.4 nm, and the RI sensitivity decreases from 35648.3 nm/RIU down to 9363.6 nm/RIU. At the same initial λR, the SPR sensor with the Au-Ag alloy film shows a higher sensitivity than that with the pure Au film (S = 29793.9 nm/RIU at λR=1215 nm with a pure Au film). Adsorption of bovine serum album molecules from the aqueous solution of 1 μmol/L protein results in a redshift of ΔλR = 12.1 nm with the Au-Ag alloy film and ΔλR=9.5 nm with the pure Au film. The experimental data also indicate that the FWHM of the SPR absorption peak with the Au-Ag alloy film is larger than that at the same λR with the pure Au film, leading to a lower spectral resolution than that of the latter. Laser detection method of ship wake bubbles based on multiple scattering intensity and polarization characteristics Liang Shan-Yong, Wang Jiang-An, Zong Si-Guang, Wu Rong-Hua, Ma Zhi-Guo, Wang Xiao-Yu, Wang Le-Dong 2013, 62 (6): 060704. doi: 10.7498/aps.62.060704 Abstract + It is the research foundation of ship wake detection by laser and new-generation optical homing torpedo to investigate the influence of multiple scattering effect on light scattering intensity and polarization characteristics of the ship wake bubbles. The simulation model of laser back-scattering detection by ship wake bubbles is based on vector Monte Carlo method, and the multiple scattering mechanism is studied. The influences of multiple scattering effect and the bubble density in ship wake on the light scattering intensity and polarization characteristics of echo signal are analyzed. 
The echo photon polarization contribution reception method and the echo signal polarization statistical method are proposed to solve the problem that the low photon return probability cannot form the echo energy in the system with small receiver field of view. These methods are based on the basic idea of the particle collision importance sampling and the traditional energy receiving method. The polarization detection experimental platform for the simulated wake bubbles is built and the accuracy of the simulation results is verified in experiment. The consistence of the experimental and simulation results shows that the bubble distance and density information can be characterized by echo intensity, polarization information and the echo signal intensity, and the polarization characteristics can be used to detect and distinguish the ship wake bubbles, or even a low density wake bubbles with high precision. A space audio cummunication system based on X-ray Deng Ning-Qin, Zhao Bao-Sheng, Sheng Li-Zhi, Yan Qiu, Yang Hao, Liu Duo 2013, 62 (6): 060705. doi: 10.7498/aps.62.060705 Abstract + In this paper, an X-ray communication program, which consists of a sender of grid controlled X-ray source and a receiver of X-ray single-photon detector based on micro-channel plate, is presented. With the detailed information about the signal modulation transmitter, the micro-channel-based X-ray single-photon detector as well as the signal receiving demodulator, a space audio communication system based on X-ray is built. The communication rate of more than 20 kbit/s is realized. According to the preliminary test result analyses of the X-ray space audio communication system test, the X-ray emission success rate restricts the communication speed by the influence of different X-ray intensities, signal shaping time and threshold settings respectively. Therefore, a scheme for further increasing X-ray communication performance is suggested. First-principles study of LuI3 scintillator Deng Jiao-Jiao, Liu Bo, Gu Mu 2013, 62 (6): 063101. doi: 10.7498/aps.62.063101 Abstract + We use first-principles calculation with pseudo-potential and plane wave method to study the electronic structure of LuI3. Exchange and correlation are treated in the local density approximation based on the density functional theory. The results show that the narrow bands with a width of about 0.2 eV near -4.4 eV are dominated by the 4f bands of Lu. Valence bands are located between -3.55 eV and 0 eV and mainly from p bands of I. Conduction bands are located between 2.44 eV and 12.35 eV and mainly from d bands of Lu, as well as from s bands of Lu. The peaks which appear at -3.46 eV of the s states of Lu, f states of Lu and p states of I show the strong interaction between the Lu and I. Study on ro-vibrational excitation cross sections of Ne-HF Xu Mei, Wang Xiao-Lu, Linghu Rong-Feng, Yang Xiang-Dong 2013, 62 (6): 063102. doi: 10.7498/aps.62.063102 Abstract + In this paper, the QCISD(T) method and aug-cc-pVTZ basic set are used to calculate the interactional potential of Ne atom and halogen hydride molecule HF, in which Boys and Bernardi's full counterpoise method is employed to eliminate the basis set superposition error. After obtaining the interactional potential energy data in eleven directions for He-HF, the symmetric potential V0 and the anisotropic potentials V1, V2, V3, etc. of the system are derived, by using Huxley function fitting, so as to describe well the He-HF potential energy surface. 
Finally, the close-coupling method is used to calculate the total collision excitation cross section, the elastic partial wave cross section and the inelastic partial wave cross section. Molecular dynamics simulation on mechanical properties of gold nanotubes Su Jin-Fang, Song Hai-Yang, An Min-Rong 2013, 62 (6): 063103. doi: 10.7498/aps.62.063103 Abstract + The tensile and compressive mechanical properties of gold nanotubes in different crystal orientations, as well as the tensile mechanical properties of gold nanotubes of the same thickness with different radii, are investigated using the molecular dynamics simulation method. In the simulation, we select the embedded atom method as the interatomic potential function. The result shows that mechanical properties in the tensile and compressive process in different crystallographic orientations are dramatically different from each other, where the yield strength of the direction is the highest and the yield strength and the Young's modulus in the direction are less than in the and crystal orientation. The yield strength shows no major change when the radius is less than 3.0 nm, but it obviously decreases with the increase of the radius when the radius is larger than 3.0 nm. A two-dimensional magneto-optical trap for a cesium fountain clock Wu Chang-Jiang, Ruan Jun, Chen Jiang, Zhang Hui, Zhang Shou-Gang 2013, 62 (6): 063201. doi: 10.7498/aps.62.063201 Abstract + To study the relationship of atomic beam flow with cooling intensity, laser detuning, and magnetic field gradient, a numerical simulation is performed and a two-dimensional magneto-optical trap setup is built. A low-velocity atomic beam flow is generated with a total flux of 2.1×10^9/s. Theoretical analysis and experimental results are in good agreement. Optimal detuning and magnetic field gradient can produce the largest atomic beam flow. X-ray spectrum emitted by the impact of 152Eu20+ of near Bohr velocity on Au surface Liang Chang-Hui, Zhang Xiao-An, Li Yao-Zong, Zhao Yong-Tao, Mei Ce-Xiang, Cheng Rui, Zhou Xian-Ming, Lei Yu, Wang Xing, Sun Yuan-Bo, Xiao Guo-Qing 2013, 62 (6): 063202. doi: 10.7498/aps.62.063202 Abstract + The characteristic X-ray spectra produced by the impact of highly charged ions of 152Eu20+ with energies from 2.0 to 6.0 MeV on an Au surface are measured. It is found that highly charged ions could excite both the characteristic X-ray spectra of Mζ, Mα and Mδ of Au and the characteristic X-ray spectra of Mα of Eu. The total X-ray yield increases with the ion kinetic energy increasing. The total production cross section of Au induced by Eu20+ is measured and compared with those obtained from the binary encounter approximation, plane-wave Born approximation, and the energy-loss Coulomb deflection perturbed stationary state relativistic theoretical models. Ab initio calculation on the potential energy curves and spectroscopic properties of the low-lying excited states of BCl Yu Kun, Zhang Xiao-Mei, Liu Yu-Fang 2013, 62 (6): 063301. doi: 10.7498/aps.62.063301 Abstract + The high-level quantum chemistry ab initio multi-reference configuration interaction method with reasonably large aug-cc-pVQZ basis sets is used to calculate the potential energy curves of 14 Λ-S states of the BCl+ radical correlated to the dissociation limits B+(1Sg)+Cl(2Pu) and B(2Pu)+Cl+(3Pg). In order to get better potential energy curves, the Davidson correction and scalar relativistic effect are taken into consideration.
The spin-orbit interaction is first considered, which makes the lowest 4 Λ-S states split into 7 states. The calculational results show that the avoided crossing rule exists between the states of the same symmetry. The analyses of the electronic structures of the Λ-S states determine the electronic transition of each state and demonstrate that the Λ-S electronic states are multi-configurational in nature. Then the spectroscopic constants of the bound Λ-S and Ω states are obtained by solving the radial Schrödinger equation. By comparison with experimental results, the spectroscopic constants of the ground states are in good agreement with the observed values. The remaining computational results are reported for the first time. First-principles study on the piezoelectric properties of hydrogen modified graphene nanoribbons Liu Yuan, Yao Jie, Chen Chi, Miao Ling, Jiang Jian-Jun 2013, 62 (6): 063601. doi: 10.7498/aps.62.063601 Abstract + This paper focuses on the piezoelectric properties of zigzag graphene nanoribbons with selective hydrogen modifications by first-principles calculations. The structures of hydrogen modified graphene nanoribbons are optimized and the calculated hydrogen binding energies indicate that these structures are very stable. Owing to the selective adsorption of hydrogen atoms, the adjacent carbon atoms have different charge states, breaking the inversion symmetry of nonpiezoelectric graphene. So, the positive charge centers and the negative charge centers of the hexatomic carbon ring in these structures separate from each other under uniaxial tensile strain, inducing a macroscopic electric polarization. Furthermore, the gradient of the strain-induced dipole moment density is related to ribbon width, i.e., the wider the ribbon, the better the piezoelectric property is. Besides, the dipole moment density of hydrogen selectively modified graphene nanoribbons without strain can be controlled effectively by changing the edge modification configuration of the hydrogen atoms. Tunable split ring resonators in terahertz band Dai Yu-Han, Chen Xiao-Lang, Zhao Qiang, Zhang Ji-Hua, Chen Hong-Wei, Yang Chuan-Ren 2013, 62 (6): 064101. doi: 10.7498/aps.62.064101 Abstract + Split ring resonators (SRRs) can be used as a negative permeability medium near their magnetic resonance frequency. In this paper, a new type of THz-band magnetic resonance structure is proposed by introducing metal wires into traditional SRRs, and the effect of the metal wires on the transmission characteristics of SRRs is numerically investigated. The results show that the resonant frequency of SRRs significantly decreases with the number of metal wires increasing. The parameters of the metal wires, such as the length Lx, width Wx and distance Gx, also have influence on the resonant frequency of SRRs. Meanwhile, the results verify that the introduction of metal wires plays an important role in reducing the size of the device and is not affected by the presence of the dielectric substrate. The new magnetic resonance structure proposed in this paper provides a reference for the design and practical applications of metamaterials in the future. Reflection and transmission characteristics of electromagnetic waves by the uniaxially anisotropic chiral slab Dong Jian-Feng, Li Jie 2013, 62 (6): 064102. doi: 10.7498/aps.62.064102 Abstract + The reflection and transmission characteristics of electromagnetic waves by the uniaxially anisotropic chiral slab with the optical axis parallel to the interface are investigated.
Formulas of the reflection and transmission coefficients (power) are derived. The curves of powers of the reflected and transmitted electromagnetic waves are presented for four cases of dielectric constants according to their signs. The effects of chirality parameter on the reflection and transmission are discussed. Especially, the dependences of pseudo-Brewster angles on the chirality parameter are plotted. Design of low-radar cross section microstrip antenna based on metamaterial absorber Yang Huan-Huan, Cao Xiang-Yu, Gao Jun, Liu Tao, Ma Jia-Jun, Yao Xu, Li Wen-Qiang 2013, 62 (6): 064103. doi: 10.7498/aps.62.064103 Abstract + A metamaterial absorber with high absorptivity, wide incident angle and no surface ullage layer is designed and applied to microstrip antenna to reduce its radar cross section (RCS). The results show that the absorber can exhibit an absorption of 99.9% with a thickness of 0.3 mm. Compared with the conventional microstrip antenna, the proposed antenna has an RCS reduction of more than 3 dB in the boresight direction in the working frequency band, and the largest reduction can reach 16.7 dB, the monostatic and bistatic RCS reduction are over 3 dB from -30° to +30° and -90° to +90° respectively, while the radiation performance is kept, which proves that the absorber has an excellent absorptivity and could be applied to microstrip antennas to achieve in-band stealth. Cognitive physics-based method for image edge representation and extraction with uncertainty Wu Tao, Jin Yi-Fu, Hou Rui, Yang Jun-Jie 2013, 62 (6): 064201. doi: 10.7498/aps.62.064201 Abstract + Image edge detection is an important tool of image processing, in which edge representation and extraction with uncertainty is one of key issues. Based on the physics-like methods for image edge representation and extraction, a novel cognitive physics-based method with uncertainty is proposed. The method uses data field to discover the global information from the image and then to map it from grayscale space to the appropriate potential space. From the point of view of the field theory, the method establishes an extensible theoretical framework and unifies the existing physics-like methods. On the other hand, the method defines the ascending half-cloud to construct the internal relationship between the range of cloud uncertainty degree and the edge representation and extraction. Finally, the method achieves image edge representation and extraction with uncertainty using the cognitive physics. The time complexity of the proposed algorithm is approximately linear in the size of the original image. It is indicated by the quantitative and qualitative experiments that the proposed method yields accurate and robust result, and is reasonable and effective. Splitting of electromagnetically induced transparency window and appearing of gain due to radio frequency field Li Xiao-Li, Shang Ya-Xuan, Sun Jiang 2013, 62 (6): 064202. doi: 10.7498/aps.62.064202 Abstract + Two resonant radio frequency fields are added to lambda three-level system in this paper. By discussing the behaviors of probing field absorption profiles under the effect of different Rabi frequencies of two radio frequency fields, the splitting of electromagnetically induced transparency (EIT) can be seen and the overlapping between EIT and gain can be obtained. The results show that the two radio frequency fields have different control functions on the system. 
The radio frequency field which interacts with hyperfine levels of ground state plays a role in the splitting of EIT, but the radio frequency field which interacts with hyperfine levels of excited state does not work on it. In addition only when the Rabi frequency of radio frequency field interacting with hyperfine levels of ground state is greater than with hyperfine levels of excited state, can the new features about the overlapping between EIT and gain be obtained. High power bessel ultrashort pulses directly output from a fiber laser system Xie Chen, Hu Ming-Lie, Xu Zong-Wei, Wu Wei, Gao Hai-Feng, Zhang Da-Peng, Qin Peng, Wang Yi-Sen, Wang Qing-Yue 2013, 62 (6): 064203. doi: 10.7498/aps.62.064203 Abstract + High power Bessel pulses directly output from a fiber-based amplifier system are demonstrated. A compact solution based on the inverse micro-axicon (IMAX) on fiber end is proposed for the conventional ultrashort pulse fiber laser system to enable the direct generation of high power Bessel pulses from lasers without any additional exhausting alignments. The IMAX is fabricated on one facet of a ytterbium-doped large mode area fiber by focusing ion beam technique and constitutes an integrated beam shaper in combination with an inherent collimating lens in the fiber laser system. The experimental results accord qualitatively with the simulations. The system can directly generate chirped Bessel pulses with diffraction-free propagation in meter-scaled free space. The highest average power of such a wavepacket can reach 10.1 W, correspongding to 178 nJ, and the pulse duration can be dechirped to 140 fs. Design of incident angle-independent color filter based on subwavelength two-dimensional gratings Hong Liang, Yang Chen-Ying, Shen Wei-Dong, Ye Hui, Zhang Yue-Guang, Liu Xu 2013, 62 (6): 064204. doi: 10.7498/aps.62.064204 Abstract + A novel design of reflective color filters based on a two-dimensional subwavelength grating structure is proposed, which exhibits an incident angle independent property with unpolarized incident light in the visible range. By using rigorous coupled-wave analysis method, the effects of the grating period, the groove depth and the size of the structure on the reflectance spectrum are investigated in detail. The structural parameters of the gratings are optimized, and a color filter with high angular tolerance is achieved. Simulation result shows that the maximal reflectance is 56% at 424 nm with a bandwidth of 45 nm, and that the grating can almost keep its reflectance, bandwidth and the peak position at the incident angle up to about 60° under unpolarized incident light. The peak position of the color filter can be tuned from 400 nm to 520 nm by changing structural parameters of the gratings, and keep its incident angle-independent property. Multi-level authentication based on two-beam interference He Wen-Qi, Peng Xiang, Meng Xiang-Feng, Liu Xiao-Li 2013, 62 (6): 064205. doi: 10.7498/aps.62.064205 Abstract + A method of multi-level authentication based on two-beam interference is proposed. By verifying the "password" and "phase key" of one user simultaneously, the system can thus achieve the two-factor authentication on the user's identity. This scheme can not only check the legality of one user, but also verify his identity level as an authorized user and then grant the user the corresponding permissions to access the system resources. 
While operating the authentication process, which largely depends on an optical setup based on interference, a "phase key" and a password-controlled "phase lock" are firstly loaded on two spatial light modulators (SLMs), separately. Then two coherent beams are respectively, modulated by the two SLMs and then interfere with each other, leading to an interference pattern in the output plane. It is recorded and transmitted to the computer to finish the last step of the authentication process: comparing the interference pattern with the standard verification images in the database of the system to verify whether it is an authorized user. When it turns to the system designing process for a user, which involves an iterative algorithm to acquire an estimated solution of an inverse problem, we need to determine the "phase key" according to a modified phase retrieval iterative algorithm under the condition of an arbitrarily given "phase lock" and a previously determined identity level (corresponding to a certain standard verification image). The theoretical analysis and simulation experiments both validate the feasibility and effectiveness of the proposed scheme. Research on the key parameters of illuminating beam for imaging via ptychography in visible light band Wang Ya-Li, Shi Yi-Shi, Li Tuo, Gao Qian-Kun, Xiao Jun, Zhang San-Guo 2013, 62 (6): 064206. doi: 10.7498/aps.62.064206 Abstract + Some key parameters of illuminating beam and the influence on imaging quality are investigated via ptychography in visible light band. The influences of overlap ratio, size and shape of illuminating beam on imaging quality and their relationship are studied using ptychographical iterative engine algorithm. The simulation results show that the overlap ratio of illuminating beam is a main factor influencing imaging quality. Shape of illuminating beam mainly influences the convergence of ptychography. And the size of illuminating beam less influences directly the imaging quality and convergence. Therefore, the simulation results play an important theoretic guiding role in optimizing the beam parameters in visible light, the X-ray and electronic band and other bands. Generation of continuous-variable entanglement in a two-mode four-level single-atom driven by microwave Song Ming-Yu, Wu Yao-De 2013, 62 (6): 064207. doi: 10.7498/aps.62.064207 Abstract + In this paper, we discuss the generation and evolution of continuous-variable entanglement in a two-mode single-atom laser, where the atomic coherence is induced by two classical microwave fields, which drive the corresponding fine atomic transitions. The results show that the intensity of the microwave field can influence effectively the entanglement properties of the cavity field. In addition, our numerical results also show that the intensity and the period of entanglement between the two cavity modes as well as the total mean photon number of the cavity field can be increased synchronously by adjusting the corresponding frequency detuning. Physical modeling and caculation method of laser pulse superposition in multi-pass amplification process Zhang Ying, Liu Lan-Qin, Wang Wen-Yi, Huang Wan-Qing, Xie Xu-Dong, Zhu Qi-Hua 2013, 62 (6): 064208. doi: 10.7498/aps.62.064208 Abstract + Physical model and caculation method are established to describe the laser pulse superpositon in multi-pass amplification process. In this model, the inversion pupulation density is consumed by the pulse leading edge and tailing edge simultaneously. 
It is demonstrated that this model cannot solve the problem of laser superposition amplification in the time-delay coordination. The superposition amplification is solved by building a new time-space coordination. Based on the physical model and calculation method, computer simulation is performed and the pulse shape distortion is discussed at different cavity mirror positions in the two-pass amplification process. Generation of ultra-wideband signals by directly current-modulating distributed feedback laser diode subjected to optical feedback Liu Ming, Zhang Ming-Jiang, Wang An-Bang, Wang Long-Sheng, Ji Yong-Ning, Ma Zhe 2013, 62 (6): 064209. doi: 10.7498/aps.62.064209 Abstract + Chaotic ultra-wideband (UWB) pulse signals are generated by directly modulating a semiconductor laser subjected to optical feedback. We simulate how the -10 dB bandwidth and the central frequency of the RF spectrum of the chaotic UWB signals are influenced by the bias current and feedback strength. The research results demonstrate that the -10 dB bandwidth of the RF spectrum of the UWB signals increases with the increases of the bias current of the semiconductor laser and the feedback; the central frequency also increases with the increases of the bias current and the feedback. In our experiments, chaotic UWB signals with a steerable and flattened power spectrum are generated by directly modulating a DFB-LD subjected to optical feedback. The power spectrum of the UWB signals is fully compliant with the FCC indoor mask, while a large fractional bandwidth of 133% and a central frequency of 6.6 GHz are achieved. The central frequency and -10 dB bandwidth of the chaotic UWB signals are tunable over a large range by adjusting the bias current and feedback power. In addition, the chaotic UWB signals transmit through a 34.08 km single mode fiber and the power spectrum does not have any discrete spectral line. Experimental research of high performance fiber and fiber laser at 1018 nm Wang Yi-Bo, Chen Gui, Xie Lu, Jiang Zuo-Wen, Li Jin-Yan 2013, 62 (6): 064210. doi: 10.7498/aps.62.064210 Abstract + The effects of codopants and the deposition parameters, such as gas flow rate and pressure in the tube, during the fabrication of fiber preforms are studied. It is found that the fluorescence spectrum of Yb3+ can shift when codoping other elements, according to which a double cladding Yb3+-doped fiber that is beneficial for a 1018 nm laser is fabricated for the first time. When the fiber length is 7 m, an output of 22.8 W at 1018 nm is obtained. The optical-optical conversion efficiency is approximately 70%, and there is neither spontaneous radiation nor saturation. Self-diffraction based self-reference spectral interferometry Li Fang-Jia, Liu Jun, Li Ru-Xin 2013, 62 (6): 064211. doi: 10.7498/aps.62.064211 Abstract + A new method of characterizing femtosecond pulses is proposed based on the self-diffraction process in a thin transparent bulk medium and self-reference spectral interferometry. A simple device is designed based on this technique and is successful in characterizing a ~40 fs pulse at 800 nm central wavelength. The result is in accordance with that measured by a commercial self-reference spectral phase interferometry device for direct electric-field reconstruction. Pulses in a spectral range from deep UV to middle IR are expected to be measured by this new method and the corresponding simple device. Analysis of features of the microdisk cavity perpendicular coupler Shu Fang-Jie 2013, 62 (6): 064212.
doi: 10.7498/aps.62.064212
Abstract: The use of a waveguide perpendicular to the boundary of a microdisk cavity is a newly developed coupling technique. We make a detailed analysis of the adaptation to cavity size, the adaptation to wavelength, and the expandability of the ports. The results confirm that the perpendicular coupler is valid for large cavities and over multiple bands. It is shown that multiple perpendicular couplers can exchange energy with a microdisk cavity and work as filters, beam splitters, and crossings of the optical path. The use of perpendicular couplers in integrated optical circuits with microcavity components will make the selection of materials and the arrangement of the optical path more flexible.

Numerical study of long-range interaction between two beams in (1+2)-dimensional thermal nonlocal media. Lu Da-Quan, Qi Ling-Min, Yang Zhen-Jun, Zhang Chao, Hu Wei. 2013, 62 (6): 064213. doi: 10.7498/aps.62.064213
Abstract: According to the nonlinear Schrödinger equation and the Poisson equation of thermal diffusion, we investigate the interaction of two beams in a (1+2)-dimensional thermal nonlocal medium, using the split-step Fourier algorithm and the multi-grid method. The results show that the two beams intertwine with each other during propagation. If the power and the tilt parameter are appropriate, the projections of the trajectories of the beams onto the (X, Y) plane are approximately circular, even if the incident distance between the beams is changed. Because of the strongly nonlocal property of the thermal medium, the influences of the boundaries and of the initial transverse momentum can be felt even when the beams are far from the boundaries; there is an oscillatory propagation when the center of mass of the input field deviates from the sample center or the initial transverse momentum is nonzero.

Characterization and comparison of 7-core and 19-core large-mode-area few-mode fibers. Lin Zhen, Zheng Si-Wen, Ren Guo-Bin, Jian Shui-Sheng. 2013, 62 (6): 064214. doi: 10.7498/aps.62.064214
Abstract: A novel multi-core large-mode-area few-mode fiber (MC-LMA-FMF) is proposed in this paper. The special structure of the air holes makes it operate in few modes (HE11 and HE21 modes only). Numerical analysis shows that the 7-core LMA-FMF can maintain stable dual-mode operation and the effective area of the fundamental mode can reach 866.54 μm². How the fiber structure parameters affect the mode characteristics and the effective area is investigated, and the similarities and differences brought about by increasing the number of cores are also analyzed. The advanced 19-core LMA-FMF inherits the few-mode characteristic; meanwhile, the effective area of the fundamental mode can be as high as 3617.55 μm². Compared with previously reported few-mode fibers, the MC-LMA-FMF achieves a large effective area and good bending characteristics. These advantages make this new type of fiber a potential candidate for high-speed large-capacity optical fiber transmission systems and for high power fiber amplifiers and lasers.

Generation of visible and infrared broadband dispersive waves in photonic crystal fiber cladding. Zhao Xing-Tao, Zheng Yi, Han Ying, Zhou Gui-Yao, Hou Zhi-Yun, Shen Jian-Ping, Wang Chun, Hou Lan-Tian. 2013, 62 (6): 064215. doi: 10.7498/aps.62.064215
Abstract: The optical properties of the photonic crystal fiber cladding knot among three air holes are analyzed. The mode area, nonlinear coefficient and dispersion characteristics of the core and the cladding knot are contrasted.
The cladding knot of the photonic crystal fiber has a small core and highly nonlinear characteristics. For larger cladding air holes, dispersion curves with two zero-dispersion wavelengths are obtained. From the dispersion curve, the phase-matching conditions for dispersive wave generation are analyzed, and the variation of the central wavelength of the dispersive wave with pump power and wavelength is obtained. The designed photonic crystal fiber is fabricated, and visible and infrared broadband dispersive waves above 300 nm are obtained in experiment. The experimental and theoretical results are fully consistent with each other. These results lay a foundation for wavelength conversion and supercontinuum broadband light sources.

Influence of monitoring point wavelength on axial strain sensitivity of high-birefringence fiber loop mirror. Jiang Ying, Liang Da-Kai, Zeng Jie, Ni Xiao-Yu. 2013, 62 (6): 064216. doi: 10.7498/aps.62.064216
Abstract: The influence of the monitoring point wavelength on the axial strain sensitivity of a high-birefringence fiber loop mirror is investigated. A theoretical expression for the axial strain sensitivity of the high-birefringence fiber loop mirror is developed. The results show that, for a given high-birefringence fiber material, the sensitivity increases with increasing monitoring point wavelength, and that for a fixed monitoring point the sensitivity is constant and the wavelength shift is linear in the strain. The axial strain sensitivities at different wave peaks are monitored in experiment. The fitted experimental results are in good agreement with the theoretical ones. These results help to improve the strain sensitivity, the temperature sensitivity, etc., of high-birefringence fiber loop mirrors.

Beam propagation in one-dimensional separation-modulated photonic lattices. Qi Xin-Yuan, Cao Zheng, Bai Jin-Tao. 2013, 62 (6): 064217. doi: 10.7498/aps.62.064217
Abstract: We numerically study the propagation of Gaussian beams in four types of separation-modulated photonic lattices. The results show that the potential wells between double positive hyperbolic-secant and rectangular potential barriers, and between potential barriers in the form of double negative hyperbolic-secant and rectangular functions, can both support localized linear modes. Moreover, the coupling between two linear modes in the potential barriers can be used to realize an all-optical switch. Furthermore, nonlinear localization can also be observed at high power. Our results supply new ideas for all-optical switching, light control and manipulation in photonic lattices.

Analysis of underwater sound absorption of visco-elastic composite coating containing micro-spherical glass shells. Yu Li-Gang, Li Zhao-Hui, Wang Ren-Qian, Ma Li-Li. 2013, 62 (6): 064301. doi: 10.7498/aps.62.064301
Abstract: Underwater sound absorption coatings are significant for the stealth of a submarine, so they attract a lot of attention. The underwater sound absorption of a visco-elastic composite coating containing micro-spherical glass shells was investigated theoretically. The mechanical and acoustic properties of the composite as functions of the volume fraction of the micro-spherical glass shells were analyzed by the effective-parameter method. The sound absorption of a single-layer composite coating containing different volume fractions of micro-spherical glass shells was calculated with a one-dimensional model of sound propagation in multi-layer media.
The calculated results show that the sound absorption at low frequencies can be promoted by increasing the volume fraction of micro-spherical glass shells, but the sound absorption at high frequencies is depressed. The volume distribution of the micro-spherical glass shells across the thickness of the coating was optimized by a genetic algorithm. The optimal multi-layer structure can promote the sound absorption at low frequencies while keeping the sound absorption coefficient above a set value (0.7) at high frequencies. The optimal multi-layer composite coating can work at high pressure since it does not contain hollow macro-structures. Its structure is simple, so its fabrication should not be complicated. The theoretical method developed in this paper can be applied to the design of underwater sound absorption coatings.

Orthogonal code shift keying spread spectrum underwater acoustic communication. Yu Yang, Zhou Feng, Qiao Gang. 2013, 62 (6): 064302. doi: 10.7498/aps.62.064302
Abstract: Code shift keying (CSK), a generalized M-ary spread spectrum technology, is generally used to overcome the spreading-gain versus data-rate limitation in underwater acoustic (UWA) communication. The concept of orthogonal CSK is introduced into UWA communication to achieve a higher rate, mitigate crosstalk from the other channel, and fully utilize the redundant information of CSK. In this paper, we propose a new scheme for orthogonal dual-channel CSK spread spectrum UWA communication that also utilizes code phase information. First, the symbol integration output of the proposed method is deduced. Furthermore, the orthogonality property of CSK is analyzed and its bit error rate is compared with conventional CSK and dual-channel CSK via simulation. Finally, the validity of the simulation comparison is verified in experiment. A data rate of 580.6 bps is efficiently realized by the proposed scheme with a data volume of 10⁴ bits in a 4 kHz bandwidth. It is shown through analysis, simulation and testing that the proposed method provides significantly improved communication performance.

The measure of environmental sensitivity in detection performance degradation. Liu Zong-Wei, Sun Chao, Du Jin-Yan. 2013, 62 (6): 064303. doi: 10.7498/aps.62.064303
Abstract: Existing detection methods have a mismatch problem when applied to the real, uncertain ocean, which leads to detection performance degradation. However, there has been little work on defining practical quantitative measures of environmental sensitivity. In this article we define a measure of environmental sensitivity for the target detection performance loss in an uncertain ocean with realistic uncertainties in various environmental parameters (water-column sound speed profile and seabed geoacoustic properties). A Monte Carlo approach is used to propagate the environmental uncertainty through the forward problem and quantify the resulting variability in the detection performance loss. The computer simulation is based on the Malta Plateau, a well-studied shallow-water region of the Mediterranean Sea. The simulation results show that 1) the sensitivity is range and depth dependent, and in the sound channel the sensitivity is much smaller than in other regions of the ocean; 2) the sound speed profile and the upper seabed layer are the most sensitive parameters for the detection performance loss; 3) the sensitivity is frequency dependent.
The seabed layer properties such as sediment thickness, density and attenuation coefficient have less influence on the detection as the frequency increases.

Linear wave propagation in bubbly liquids. Wang Yong, Lin Shu-Yu, Zhang Xiao-Li. 2013, 62 (6): 064304. doi: 10.7498/aps.62.064304
Abstract: In order to determine how a bubbly liquid influences acoustic wave propagation, linear wave propagation in a bubbly liquid is studied. The influence of the bubbles is taken into account when the acoustic model of the bubbly liquid is established, and a corrected bubble oscillation equation is obtained by including the interaction of bubbles in Keller's model. The acoustic attenuation coefficient and the sound speed of the bubbly liquid are obtained by solving the linearized wave propagation equation of the bubbly liquid together with the bubble oscillation equation when (ωR₀)/c ≪ 1. From the numerical analysis we find that, at constant driving frequency, the acoustic attenuation coefficient increases and the sound speed decreases as the number of bubbles increases and the bubbles get smaller; when the driving frequency is far below the bubble resonance frequency and both the volume fraction and the size of the bubbles are kept constant, the sound speed changes with the driving frequency in the opposite way; the influence of the bubble interaction on the acoustic attenuation coefficient and the sound speed is not evident. Finally, we conclude that the volume concentration, the bubble size and the driving frequency of the sound field are the important parameters that determine the deviations of the sound speed and the attenuation from those of bubble-free water.

On the first integrals of linear damped oscillators. Ding Guang-Tao. 2013, 62 (6): 064501. doi: 10.7498/aps.62.064501
Abstract: By introducing fundamental integrals of one-dimensional linear damped oscillators, the other first integrals can be constructed, including time-independent integrals. The method is extended to multidimensional systems in order to construct different integrals of two-dimensional and n-dimensional linear damped oscillators. It is proved that there are three independent time-independent integrals for all kinds of two-dimensional linear damped oscillators, and 2n-1 independent time-independent integrals for n-dimensional linear damped oscillators. Using a transformation of variables, the first integrals of the linear damped oscillator are transformed into those of the harmonic oscillator.

A study on the first integrals of harmonic oscillators. Ding Guang-Tao. 2013, 62 (6): 064502. doi: 10.7498/aps.62.064502
Abstract: The concept of fundamental integrals of the one-dimensional harmonic oscillator is presented, and other integrals can be constructed from the fundamental integrals. The concept and method are extended to multidimensional harmonic oscillators. By directly constructing other integrals from the fundamental integrals, it is proved that there are three independent time-independent integrals for all kinds of two-dimensional harmonic oscillators, and 2n-1 independent time-independent integrals for n-dimensional harmonic oscillators. The characteristics of the anisotropic two-dimensional harmonic oscillator are discussed for the cases where the ratio between the two frequencies is rational or irrational.
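To make the counting in these two abstracts concrete, consider the simplest special case, the isotropic two-dimensional harmonic oscillator $\ddot x = -\omega^2 x$, $\ddot y = -\omega^2 y$ (a standard textbook example, not taken from the papers themselves). Three independent time-independent first integrals are

$$I_1 = \tfrac{1}{2}\left(\dot x^2 + \omega^2 x^2\right), \qquad I_2 = \tfrac{1}{2}\left(\dot y^2 + \omega^2 y^2\right), \qquad I_3 = x\dot y - y\dot x,$$

the two partial energies and the angular momentum; a direct check gives $\dot I_3 = x\ddot y - y\ddot x = -\omega^2 xy + \omega^2 yx = 0$, consistent with the claim of $2n-1 = 3$ time-independent integrals for $n = 2$.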
Hydrodynamic characteristics of a near-wall circular cylinder oscillating in the cross-flow direction in a steady current. Chen Ying, Fu Shi-Xiao, Xu Yu-Wang, Zhou Qing, Fan Di-Xia. 2013, 62 (6): 064701. doi: 10.7498/aps.62.064701
Abstract: The hydrodynamic characteristics of a near-wall circular cylinder oscillating in the direction perpendicular to a steady current are experimentally investigated at a Reynolds number of 2×10⁵. The forces in both the in-line and cross-flow directions are measured by three-dimensional force transducers. The effects of gap ratio, oscillation frequency and amplitude on the hydrodynamic characteristics of the cylinder are studied. The experimental results indicate that 1) the mean drag reduces rapidly when the gap ratio decreases from 0.7 to 0.3; 2) for an oscillating cylinder, the critical gap ratio for vortex shedding suppression is smaller than that for a still cylinder; 3) the presence of the nearby wall significantly influences the energy transfer between the structure and the fluid, which means that hydrodynamic coefficients based on a wall-free cylinder may not be suitable for predicting the vortex-induced vibration of pipelines; 4) for an oscillating cylinder, the added mass is not constant except within a certain range of oscillation frequency, and its absolute value increases with decreasing gap ratio in the low frequency range; 5) the mean drag coefficient, oscillating drag coefficient and oscillating lift coefficient all increase with increasing oscillation amplitude.

A numerical analysis of drop impact on solid surfaces by using the smoothed particle hydrodynamics method. Su Tie-Xiong, Ma Li-Qiang, Liu Mou-Bin, Chang Jian-Zhong. 2013, 62 (6): 064702. doi: 10.7498/aps.62.064702
Abstract: In this paper, we present a numerical simulation of a single liquid drop impacting a solid surface with smoothed particle hydrodynamics (SPH). SPH is a Lagrangian, meshfree particle method, attractive for dealing with free surfaces, moving interfaces and deformable boundaries. The SPH model includes an improved approximation scheme with corrections to the kernel gradient and the density to improve computational accuracy. A Riemann solver is adopted to solve the equations of fluid motion. A new inter-particle interaction force is used for modeling surface tension effects, and the modified SPH method is used to investigate liquid drops impacting solid surfaces. It is demonstrated that the inter-particle interaction force can effectively simulate the effect of surface tension, and that the method describes well the dynamic evolution of the drop morphology and of the pressure field, with accurate and stable results. The spread factor increases with increasing initial Weber number. The numerical results are in good agreement with theoretical and experimental results in the literature.

Experimental research on bubble dynamics near a circular hole in a plate. Wang Shi-Ping, Zhang A-Man, Liu Yun-Long, Wu Chao. 2013, 62 (6): 064703. doi: 10.7498/aps.62.064703
Abstract: Traditional studies on bubble dynamics near solid boundaries mainly focus on the pulsation and jet features near an intact plate. A hole will be formed when a warship is attacked by an underwater weapon, and the ship may be subjected to a subsequent attack by a charge explosion; the hole in the plate would then affect the damaging effect of a nearby underwater explosion bubble. To study the bubble pulsation and jet features near a plate with a hole in the middle, a series of experiments is carried out using a spark bubble generator and a high-speed camera.
We find that when a bubble is generated concentrically near the hole, a cavity-attraction effect arises due to the hole, and opposite jets can then form. The influences of the dimensionless standoff distance and the hole size are then analyzed. Finally, the dynamic behavior of a bubble generated off-center near the hole is studied, showing that the damaging effect of the bubble increases as the off-center distance increases.

Experimental investigations on the propagation characteristics of internal solitary waves over a gentle slope. Du Hui, Wei Gang, Zhang Yuan-Ming, Xu Xiao-Hui. 2013, 62 (6): 064704. doi: 10.7498/aps.62.064704
Abstract: In a stratified fluid tank, experiments on the propagation, shoaling and breaking of internal solitary waves over a gentle slope, similar to the topography in the northeast of the South China Sea, are conducted. A qualitative analysis of the evolution of the internal solitary waves is accomplished by means of the dye-tracing technique, and quantitative measurement is carried out using multi-channel conductivity-probe arrays. It is shown that due to the shoaling effect the internal solitary waves with large amplitude are restrained, while the waves with small amplitude are amplified. The shoaling effect also leads to a decrease of the propagation velocity of the internal solitary waves. Further, the shoaling effect brings about strong shear flow instability and then causes the internal solitary wave to break. The breaking results in the fission of one large-amplitude wave into several small-amplitude waves of the same polarity. By means of Miles' stability theory, the location of instability of the internal solitary wave over the gentle slope can be characterized by the Richardson number. The experimental results accord well with the theoretical analyses.

Dissipative particle dynamics simulation of multiphase flow through a mesoscopic channel. Liu Han-Tao, Liu Mou-Bin, Chang Jian-Zhong, Su Tie-Xiong. 2013, 62 (6): 064705. doi: 10.7498/aps.62.064705
Abstract: A new conservative interaction potential with short-range repulsion and long-range attraction is constructed from a quartic function. The multiphase flow through a cross-shaped mesoscopic channel is simulated by dissipative particle dynamics with this new potential function. The results show that the new method is capable of simulating the flow process and the flow pattern.

Inactivation of HeLa cancer cells by an atmospheric pressure cold plasma jet. Huang Jun, Chen Wei, Li Hui, Wang Peng-Ye, Yang Si-Ze. 2013, 62 (6): 065201. doi: 10.7498/aps.62.065201
Abstract: A study of the inactivation mechanism of HeLa cancer cells by an atmospheric pressure cold plasma jet is presented. Cell morphology is observed under an inverted microscope after plasma treatment. The neutral red uptake assay provides quantitative evaluations of cell viability under different conditions. The inactivation efficiency for HeLa cancer cells in argon (900 mL/min) with different amounts of added oxygen (1%, 2%, 4%, 8%) in the atmospheric pressure cold plasma jet is discussed at a fixed power of 18 W. The results show that 2% O2 addition provides the best inactivation efficiency, and the survival rate can be reduced to 7% after 180 s of treatment. When the oxygen addition exceeds 2%, the inactivation efficiency gradually weakens.
The effect is not as good as that in pure argon plasma when the oxygen addition reaches 8%. From the emission spectrum of the plasma, it is concluded that the reactive oxygen species in the plasma play a key role in the cancer cell inactivation process.

A method to strengthen and toughen sapphire by codoping of Fe/Ti ions. Hu Ke-Yan, Xu Jun, Tang Hui-Li, Li Hong-Jun, Zou Yu-Qi, Su Liang-Bi, Chen Wei-Chao, Yu Hai-Ou, Yang Qiu-Hong. 2013, 62 (6): 066201. doi: 10.7498/aps.62.066201
Abstract: The mechanical properties of titanium and iron codoped sapphire crystal are studied for the first time at room temperature. A large (ø180 × 280 mm³ in dimension, 30 kg in weight) titanium and iron codoped sapphire single crystal is grown by the Kyropoulos technique. It is shown that the fracture strength, surface hardness and fracture toughness of the as-grown crystals are significantly improved, and the visible-infrared optical properties are not adversely affected, by titanium and iron codoping combined with a suitable heat treatment. The Fe3+ from the doped Fe2O3 substitutes for Al3+, leading to an increased internal stress in the crystal. The Ti4+ from the doped TiO2 crystallizes as second-phase needle crystals under the heat treatment and brings in a toughening effect. As a consequence, the mechanical properties of the as-grown sapphire are improved at room temperature. The present work has practical significance for developing sapphires with excellent mechanical properties.

Carrier transport characteristics in CdSe/CdS/thioglycolic-acid-ligand quantum dots with a core-shell structure. Xue Zhen-Jie, Li Kui-Ying, Sun Zhen-Ping. 2013, 62 (6): 066801. doi: 10.7498/aps.62.066801
Abstract: In the present paper, we synthesize CdSe quantum dots (QDs) stabilized by thioglycolic acid using water-phase synthesis. The X-ray diffraction and HRTEM results confirm that the prepared samples each possess a sphalerite structure. The EDS and FT-IR spectra of the samples show that a core-shell structure is formed between the CdSe nanoparticles and the ligand. The fine band structures, and the characteristics of the surface states connected with these structures, are identified from the surface photovoltage (SPV) spectrum of the samples. Two SPV response peaks, located at 475 nm (2.61 eV) and 400 nm (3.1 eV), are closely related to the band-band transitions of the CdSe core and the CdS shell, respectively; the SPV response at 370 nm (3.35 eV) is correlated with the n → π* transition between the hydroxyl and sulfhydryl (or hydroxyl). Because of an obvious quantum size effect in the samples, both the PL line broadens and the SPV response intensity increases with decreasing grain size. The trend of the surface photoacoustic signal intensity is contrary to that of the SPV response intensity for samples synthesized at varying pH. Moreover, the fine band structures at the surfaces and grain boundaries of the prepared CdSe QDs are probed by the SPV spectra of the samples at varying pH values. The relationship between the grain size and the photo-generated carrier transport behavior is discussed on the basis of the measured EFISPV results of the QDs.

Friction and wear performance of the 0.5Ba(Ti0.8Zr0.2)O3-0.5(Ba0.7Ca0.3)TiO3 piezoelectric film. Zhang Yan, Wang Zeng-Mei, Chen Yun-Fei, Guo Xin-Li, Sun Wei, Yuan Guo-Liang, Yin Jiang, Liu Zhi-Guo. 2013, 62 (6): 066802.
doi: 10.7498/aps.62.066802
Abstract: As a lead-free piezoelectric material with potential applications, 0.5Ba(Ti0.8Zr0.2)O3-0.5(Ba0.7Ca0.3)TiO3 (BZT-0.5BCT) ceramic, which has a morphotropic phase boundary composition, deserves much attention due to its excellent ferroelectric and piezoelectric properties. A BZT-0.5BCT lead-free piezoelectric film has been synthesized on a Si (100) substrate by a sol-gel process. The topography of the film, measured using an atomic force microscope and a scanning electron microscope, shows that the surface of the prepared film is smooth and that the grains are hemispherical with diameters of 80-100 nm. The film is 1.7 μm thick, with pores inside. Friction experiments show that the friction between the tip and the piezoelectric film is much larger than that between the tip and the SiO2 substrate because of the electrostatic force between the film and the silicon tip; however, the friction coefficients obtained are approximately equal. Nano-scratch experiments show that the BZT-0.5BCT film has a high normal load-carrying capacity but a poor tangential wear resistance. The average elastic modulus of the film is 23.64 ± 5 GPa and its hardness is 2.7-4 GPa, both slightly lower than the bulk values of PZT ceramics.

Electronic states of zigzag graphene nanoribbons. Deng Wei-Yin, Zhu Rui, Deng Wen-Ji. 2013, 62 (6): 067301. doi: 10.7498/aps.62.067301
Abstract: Based on the tight-binding model, the electronic states and bands of zigzag graphene nanoribbons are given analytically by a new method. The results show that there are only two kinds of electronic states, i.e., standing wave states and edge states. For the standing wave state, the wave function is a sine function and the wave vector is real; for the edge state, the wave function is a hyperbolic sine function and the wave vector is complex, with real part 0 or π/2. The energy band is composed of the energies of the standing wave states and of the edge states. The exact ranges of the wave vector along the infinite direction and of the energy of the edge states are deduced. We then discuss the transition point between the edge states and the standing wave states, and find that at the transition point the two kinds of electronic states tend, in different ways, to a linear dependence on the carbon lattice site. When the width between the two confining boundaries goes to infinity, the results for finite graphene tend to the infinite case.

Strain control of the leakage current of ferroelectric thin films. Wen Juan-Hui, Yang Qiong, Cao Jue-Xian, Zhou Yi-Chun. 2013, 62 (6): 067701. doi: 10.7498/aps.62.067701
Abstract: Combining nonequilibrium Green's functions and first-principles quantum transport calculations within density-functional theory, we investigate the effect of biaxial strain on the leakage current of a BaTiO3 ferroelectric thin film. The results show that compressive strain can effectively reduce the leakage current of the ferroelectric thin film; in particular, at a compressive strain of 4%, the leakage current is reduced by nearly a factor of ten relative to the strain-free case. By calculating the transmission coefficient and the density of states, we find that the transmission probability of the ferroelectric tunnel junction under compressive strain is smaller than that under tensile strain.
Moreover, we find that the valence band shifts toward lower energy while the conduction band moves toward higher energy, which enlarges the energy band gap and thereby reduces the leakage current. Our study suggests a practical way to reduce the leakage current of ferroelectric thin films and improve the performance of ferroelectric thin films and the associated ferroelectric memories.

Positioning performance evaluation of magnetic contour matching under different sensor accuracies. Zhao Long, Yan Ting-Jun. 2013, 62 (6): 067702. doi: 10.7498/aps.62.067702
Abstract: The accuracies of the geomagnetic reference map, the geomagnetic sensor and the inertial navigation system are the key factors affecting the reliability of a magnetic contour matching (MAGCOM) system. In order to improve the reliability of MAGCOM, the influence of sensor accuracy on the matching success rate is studied. For different geomagnetic reference maps, the factors affecting the matching success rate and their physical mechanisms are analyzed; the factors are the geomagnetic sensor error, the velocity error and the heading angle error. The error ranges of the sensors are determined by simulations based on a practical system, and a simulation experiment is implemented using the geomagnetic reference map. The simulation results show that MAGCOM can tolerate a velocity error of 0.14 m/s, a heading angle error of 0.6 and a geomagnetic sensor noise standard deviation of 11 nT when the matching success rate is 90%.

The influence of the exciton recombination zone on the organic magnetic-field effect. Li Dong-Mei, Wang Guan-Yong, Zhang Qiao-Ming, You Yin-Tao, Xiong Zu-Hong. 2013, 62 (6): 067801. doi: 10.7498/aps.62.067801
Abstract: In this work we explore the influence of the exciton recombination zone (RZ) on the magnetic-field effect in tris-(8-hydroxyquinolinato) aluminum (Alq3) based organic light-emitting diodes by changing the thickness of Alq3. The magneto-electroluminescence and magneto-conductance (MC) of these devices are investigated at various temperatures and bias voltages. It is found that at 50 K the sign of the MC changes from positive to negative, and then back to positive, as the thickness of Alq3 is reduced. The observed phenomenon is ascribed to the change of the exciton density in the exciton RZ. Based on mechanisms including hyperfine mixing, the triplet-charge interaction, and interfacial dissociation or quenching of excitons, the observed results are explained qualitatively.

Photoluminescence studies of the neutral vacancy defect known as the GR1 centre in diamond. Wang Kai-Yue, Li Zhi-Hong, Tian Yu-Ming, Zhu Yu-Mei, Zhao Yuan-Yuan, Chai Yue-Sheng. 2013, 62 (6): 067802. doi: 10.7498/aps.62.067802
Abstract: The single isolated vacancy in diamond exists in three charge states: neutral, negative and positive; many complicated defects, such as di-vacancies and impurity-vacancy complexes, can also be formed in diamond. In this paper, we investigate the optical properties of the irradiation-induced neutral vacancy in diamond by low-temperature micro-photoluminescence, which will play a significant guiding role in further studies of the complex defects in diamond.

Sub-diffraction-limit fabrication of 6H-SiC with femtosecond laser. Yun Zhi-Qiang, Wei Ru-Sheng, Li Wei, Luo Wei-Wei, Wu Qiang, Xu Xian-Gang, Zhang Xin-Zheng. 2013, 62 (6): 068101.
doi: 10.7498/aps.62.068101
Abstract: Sub-diffraction-limit fabrication of 6H-SiC is investigated with a femtosecond laser direct-write setup. Micro/nano-fabrication on 6H-SiC is studied with a home-made micro/nano-fabrication platform, which integrates a fluorescence microscope and a Ti:sapphire laser with a central wavelength of 800 nm and a pulse duration of 130 fs. The micro/nano-structures are characterized with a scanning electron microscope. It is found that the spatial resolution improves with decreasing laser power and increasing scanning velocity. The smallest feature size achieved is 125 nm, and a line array with a line width of 240 nm and a period of 1 μm is fabricated. This work paves a new way for integrated micro-electro-mechanical systems devices.

Thermal transport of graphene nanoribbons embedding linear defects. Yao Hai-Feng, Xie Yue-E, Ouyang Tao, Chen Yuan-Ping. 2013, 62 (6): 068102. doi: 10.7498/aps.62.068102
Abstract: Using the nonequilibrium Green's function method, the thermal transport properties of zigzag graphene nanoribbons (ZGNRs) embedding a finite, semi-infinite or infinite linear defect are investigated in this paper. The results show that the defect type and the defect length have a significant influence on the thermal conductance of a ZGNR. When the embedded linear defects have the same length, the thermal conductance of a ZGNR embedding a t5t7 defect is lower than that of a ZGNR embedding a Stone-Wales defect. For ZGNRs embedding finite defects of the same type, the thermal conductance decreases as the defect length increases; however, once the linear defect is long enough, the thermal conductance becomes insensitive to changes of length. Comparing ZGNRs embedding finite, semi-infinite and infinite defects, we find that the thermal conductance of a ZGNR embedding an infinite defect is higher than that of a ZGNR embedding a semi-infinite defect, while the latter is higher than that of a ZGNR embedding a finite defect. This is because different structures possess different numbers of scattering interfaces in the phonon transmission direction: the more scattering interfaces, the lower the thermal conductance. These thermal transport phenomena are explained by analyzing the transmission coefficient and the local density of states. The results indicate that linear defects can efficiently tune the thermal transport properties of ZGNRs.

Analysis of gas isolation by a prominent O-ring on the mold in compressional gas cushion press nanoimprint lithography. Li Tian-Hao, Zheng Guo-Heng, Liu Chao-Ran, Xia Wei-Wei, Li Dong-Xue, Duan Zhi-Yong. 2013, 62 (6): 068103. doi: 10.7498/aps.62.068103
Abstract: Nanoimprint lithography has the advantages of low cost, high throughput and ultrahigh resolution, which could make it one of the next-generation lithography technologies. However, the bubble defect is a persistent problem that may damage the duplicated patterns, so effective solutions are urgently needed. A novel method, suitable for compressional gas cushion press nanoimprint lithography in a gas atmosphere, which can prevent gas from entering the gap between the mold and the substrate, is presented here. The annular capillary gap formed between the smooth substrate and the prominent O-ring, produced by etching the original mold, is filled with the fluid medium. The capillary liquid bridge between the O-ring and the substrate produces a closed cavity.
The stiction induced by the adhesion force and the capillary force induced by the air-liquid surface tension can resist the compressed gas and avoid the bubble defect. The effective widths of the prominent O-ring, which differ for fluids with different surface properties, are deduced by theoretical analysis. The results of the analysis provide a theoretical basis for the preparation of the mold.

Friction phenomena in the two-dimensional Frenkel-Kontorova model with a hexagonally symmetric lattice. Jia Ru-Juan, Wang Cang-Long, Yang Yang, Gou Xue-Qiang, Chen Jian-Min, Duan Wen-Shan. 2013, 62 (6): 068104. doi: 10.7498/aps.62.068104
Abstract: The locked-to-sliding phase transition is studied in this paper on the basis of the two-dimensional Frenkel-Kontorova model, using molecular dynamics simulation. The dependence of the static friction force on the system parameters is investigated numerically when the upper-layer atoms have a hexagonally symmetric structure.

First-principles studies of the structural and thermodynamic properties of TiAl3 under high pressure. Wang Hai-Yan, Li Chang-Yun, Gao Jie, Hu Qian-Ku, Mi Guo-Fa. 2013, 62 (6): 068105. doi: 10.7498/aps.62.068105
Abstract: In this paper, the structural properties of the TiAl3 intermetallic are investigated by the plane-wave pseudopotential density functional theory method. The calculated results are consistent with experimental and other theoretical ones. Through the quasi-harmonic Debye model we calculate the thermodynamic properties and obtain the dependence of the relative volume V/V0 on the pressure P and temperature T, as well as the thermal expansion coefficient and the specific heat at different temperatures and pressures. Comparing with TiAl, we find that the thermal expansion coefficient of TiAl increases faster with temperature than that of TiAl3, and that the effect of temperature weakens with increasing pressure. The specific heat of TiAl3 is nearly twice that of TiAl.

Control of electron localization in the dissociation of H2+ using attosecond and two-color femtosecond pulses. Xu Tian-Yu, He Feng. 2013, 62 (6): 068201. doi: 10.7498/aps.62.068201
Abstract: We study the control of electron localization in the dissociation of H2+ using three laser pulses, by numerically solving the time-dependent Schrödinger equation. First, an attosecond pulse is used to excite the wave packet of H2+ from 1sσg to 2pσu. Then two-color femtosecond pulses (800 nm + 400 nm) are used to control the dissociation of H2+. By manipulating the phases of the two femtosecond pulses, the electron localization can be controlled effectively. For suitable laser parameters, the maximal probability that the electron is located at the selected nucleus reaches 90%. This theoretical scheme can be realized with state-of-the-art laser technology.

A new ocean surface wind field retrieval method from C-band airborne synthetic aperture radar. Ai Wei-Hua, Yan Wei, Zhao Xian-Bin, Liu Wen-Jun, Ma Shuo. 2013, 62 (6): 068401. doi: 10.7498/aps.62.068401
Abstract: The dependence of wind direction retrieval on other background sources, e.g., visible wind-induced streaks, numerical weather prediction model data, scatterometer data and buoy data, is the key problem in ocean wind field retrieval from airborne synthetic aperture radar (SAR) data based on a geophysical model function, and it limits the accuracies of wind speed and direction retrieval.
To solve this problem, a new ocean wind field retrieval method is proposed, in which the wind speed and direction are estimated simultaneously by using the normalized radar cross sections (NRCSs) at different incidence angles together with the geophysical model function, in accordance with the sounding characteristics of airborne SAR. To evaluate the retrieval errors and the performance, simulated data and C-band airborne SAR data are used to obtain the wind speed and direction with the proposed method. The verification results show that the method is suited to retrieving highly accurate wind speed and direction from airborne SAR sounding data without other background sources. The major error can be explained by insufficient accuracy in the calibration of the NRCS used for the wind speed and direction retrieval. The wind speed error increases with wind speed, and at high wind speeds exceeding 18 m/s the error increases distinctly. The value of the wind speed has no obvious influence on the wind direction retrieval accuracy.

The damage effect and mechanism of bipolar transistors induced by injection of an electromagnetic pulse from the base. Ren Xing-Rong, Chai Chang-Chun, Ma Zhen-Yang, Yang Yin-Tang, Qiao Li-Ping, Shi Chun-Lei. 2013, 62 (6): 068501. doi: 10.7498/aps.62.068501
Abstract: A two-dimensional electrothermal model of the bipolar junction transistor (BJT) is established, and the transient behaviors of a BJT originally in the forward-active region are simulated with the injection of an electromagnetic pulse from the base. The results show that the damage location in the BJT shifts with the amplitude of the pulse. At low pulse amplitude, the burnout of the BJT is caused by avalanche breakdown of the emitter-base junction, and the damage location lies in the cylindrical region of this junction. At high pulse amplitude, the damage first occurs at the edge of the base closer to the emitter, due to the second breakdown of the p-n-n+ structure composed of the base, the epitaxial layer and the substrate. The burnout time increases with increasing pulse amplitude, while the damage energy varies with it in a decrease-increase-decrease pattern, thus producing both a minimum and a maximum of the damage energy. A comparison between simulation and experimental results shows that the transistor model presented in the paper can not only predict the damage location in the BJT under intense electromagnetic pulses but also yield the damage energy.

Rapid identification of the consistency of the failure mechanism in constant temperature stress accelerated testing. Guo Chun-Sheng, Wan Ning, Ma Wei-Dong, Zhang Yan-Feng, Xiong Cong, Feng Shi-Wei. 2013, 62 (6): 068502. doi: 10.7498/aps.62.068502
Abstract: To avoid invalid acceleration experiments caused by changes of the failure mechanism, the relationship between the consistency of the failure mechanism and the parameters of the early-stage degradation data distribution under different accelerated stress levels is derived. Conditions for judging the consistency of the failure mechanism are given as follows: firstly, the shape parameters of the failure distribution are identical, m_i = m (i = 1, 2, 3, ...); secondly, the scale parameter η_i follows the equation η_i = AF_i·η.
A method to rapidly discriminate the consistency of the failure mechanism under different experimental stresses in the early stage is thereby obtained, and invalid acceleration experiments caused by changes of the failure mechanism can be avoided. Finally, theoretical early-stage degradation data of the accelerated test and initial degradation data of an MCM thick-film resistor are used to estimate the Weibull distribution parameters, and the consistency of the failure mechanism is judged.

Study of the inhomogeneous tissue equivalent water thickness correction method in proton therapy. Xie Zhao, Zou Lian, Hou Qing, Zheng Xia. 2013, 62 (6): 068701. doi: 10.7498/aps.62.068701
Abstract: The equivalent water thickness correction method for inhomogeneous tissue is an important part of research in proton radiotherapy. In this paper, we simulate the transport of a high-energy proton beam injected into water and other materials using the Monte Carlo multi-particle transport code Fluka, and from the energy deposition distribution we obtain the depth of the Bragg peak for protons injected into different materials. We then fit an analytic formula, R = αE₀^p, to the relationship between the initial proton energy and the depth of the proton Bragg peak in different materials. It is found that for proton beams of different energies injected into inhomogeneous tissue, the difference between the Bragg peak depth from the fit and the Bragg peak depth from the Fluka program is less than 1 mm. If a database of the ratio of the Bragg peak depth in a medium to that in water, as a function of electron density, can be established, the equivalent water thickness correction method will be applicable to dose calculation for inhomogeneous media in proton therapy.

Theoretical and experimental study of a two-phase-stepping approach for hard X-ray differential phase contrast imaging. Du Yang, Lei Yao-Hu, Liu Xin, Guo Jin-Chuan, Niu Han-Ben. 2013, 62 (6): 068702. doi: 10.7498/aps.62.068702
Abstract: To satisfy the need for low dose and high speed in practical applications of hard X-ray differential phase contrast imaging, and based on theoretical analysis and the optimal design of the parameters of the experimental system, we propose a two-step phase-stepping algorithm to retrieve the object phase information. The method can effectively reduce the radiation dose and substantially improve the speed of phase retrieval, which lays a foundation for X-ray phase contrast imaging in medical and industrial applications.

Fiber Fabry-Perot tunable filter based Fourier domain mode locking swept laser source. Chen Ming-Hui, Ding Zhi-Hua, Wang Cheng, Song Cheng-Li. 2013, 62 (6): 068703. doi: 10.7498/aps.62.068703
Abstract: An all-fiber Fourier domain mode locking (FDML) swept laser source at 1300 nm for swept source optical coherence tomography is reported. The swept laser source is realized with power amplification and a laser resonator that includes a gain medium, a tunable filter and a dispersion-managed delay line. The FDML swept laser can realize high-speed tuning, and its phase is stable owing to the highly stable mode-locking operation. The tuning range of the fiber Fabry-Perot tunable filter (FFP-TF) based FDML swept laser is 130 nm, and the 3 dB bandwidth is 70 nm with an average output power of 11 mW. The tuning speed of the FDML laser is 48.12 kHz, compared with 8 kHz for a short-cavity FFP-TF based swept laser.
The axial resolution in OCT imaging with the FDML swept laser is 7.8 μm (in tissue), an improvement of 1.9 μm over that of the short-cavity swept laser.

Coupling analysis of multivariate bioelectricity signals based on symbolic partial mutual information. Zhang Mei, Cui Chao, Ma Qian-Li, Gan Zong-Liang, Wang Jun. 2013, 62 (6): 068704. doi: 10.7498/aps.62.068704
Abstract: Symbolic partial mutual information, based on partial mutual information, is proposed in this paper. This algorithm can be used to analyse the coupling between multivariate time series. We use this method to process and analyse multivariate bioelectricity signals (MBS) recorded during sleep and wakefulness; it turns out that the coupling of the waking MBS is clearly larger than that of the sleeping MBS. Finally, hypothesis testing is performed to show that the method works and that the average energy dissipation can be used as a parameter to detect nonequilibrium.

"Cumulative Effect" of torrential rain in the middle and lower reaches of the Yangtze River. Zhang Shi-Xuan, Feng Guo-Lin, Zhao Jun-Hu. 2013, 62 (6): 069201. doi: 10.7498/aps.62.069201
Abstract: To extend torrential rain, a meso- and micro-scale weather process, to a longer time scale, in this paper we choose the middle and lower reaches of the Yangtze River (MLRYZ) as a sample region and propose the concept of the "Cumulative Effect" of torrential rain (CETR), using daily precipitation observations from 740 stations in China. Based on the statistical analysis of observations, we define CETR as the accumulation or superposition of many torrential rain processes, and introduce three indexes, namely the duration (Ld), the control area (Ar) and the precipitation contribution rate (Qs), to explain the concept of CETR. Taking these three indexes into consideration, we then establish an intensity index of CETR (BQDI) and study the relationship between the BQDI and the summer precipitation in the MLRYZ. The results show that the interannual and interdecadal variations of the BQDI are similar to those of the summer precipitation in the MLRYZ. The distribution of the correlation coefficient between the BQDI and the summer precipitation in eastern China, and the composite analysis of representative years of the BQDI, show a large positive-correlation area in the MLRYZ (significant at the 95% level) and two large negative-correlation areas in North and South China (significant at the 95% level), which reveals that the variations of the BQDI not only correspond to the variations of summer precipitation in the MLRYZ but also correlate to some extent with the distribution of summer precipitation in eastern China. Besides, an empirical orthogonal function analysis is performed on the frequency of torrential rain in the MLRYZ, and we find that the four major spatial modes of torrential rain are also similar to those of the summer precipitation in the MLRYZ. In conclusion, the precipitation caused by the CETR greatly influences, and even determines, the amount and distribution of summer rainfall, which is worth further investigation.

Severe hail identification model based on saliency characteristics. Wang Ping, Pan Yue. 2013, 62 (6): 069202. doi: 10.7498/aps.62.069202
Abstract: There are always high false alarm ratios when warning of severe hail with the severe hail index (SHI) supplied by digital weather radar systems.
To solve this problem, an extraction algorithm for several novel features, such as the "overhang", is designed and realized; these features describe the severe hail conceptual model from different aspects. We then take short-time heavy rainfall cells, which are easily confused with severe hail cells, as counter-examples and perform a statistical analysis of these features and of the SHI. The test results show that the features differ significantly between the two kinds of samples, so each of them reflects one characteristic aspect of severe hail cells. A severe hail recognition model, a support vector machine with a radial basis kernel function, is then learned. Finally, the normalized distance between the sample to be recognized and the optimal separating hyperplane is taken as a new SHI for severe hail warning. Experimental results show that the method proposed in this paper achieves a higher severe hail hit ratio than the SHI currently in use, while the false alarm ratio is reduced substantially.

Summer precipitation response to the length of the preceding winter over the Yangtze-Huaihe River valley. Su Tao, Zhang Shi-Xuan, Zhi Rong, Chen Li-Juan. 2013, 62 (6): 069203. doi: 10.7498/aps.62.069203
Abstract: Using NCEP/NCAR reanalysis datasets, the length of the preceding winter (LPW) in the Yangtze-Huaihe River valley (YHRV) from 1961 to 2011 is derived. We investigate the variation of the LPW and the relationship between the LPW and the following summer precipitation. The results indicate that the LPW clearly displays interannual and decadal changes over the period 1961-2011. The variation of the LPW is closely related to temperature, pressure and meridional wind speed; statistical analysis indicates that a longer LPW corresponds to lower temperature, higher pressure and stronger meridional wind, which shows that temperature, pressure and meridional wind are probably the key factors adjusting the LPW. These characteristics also vary from region to region. There is a significantly positive correlation between the summer precipitation and the LPW: the longer (shorter) the LPW, the more (less) the summer precipitation in the YHRV. A comprehensive analysis of the circulation field indicates that when the LPW is significantly longer than the climatic mean, a blocking situation easily forms in the region of the Ural Mountains and the Sea of Okhotsk in summer, which affects the summer rainfall in the YHRV. Using the singular value decomposition method, it is found that the relationship between the summer precipitation and the LPW is also very significant.

Pulsar signal denoising method based on empirical mode decomposition mode cell proportion shrinking. Wang Wen-Bo, Zhang Xiao-Dong, Wang Xiang-Li. 2013, 62 (6): 069701. doi: 10.7498/aps.62.069701
Abstract: In order to improve the denoising quality of pulsar signals, an empirical mode decomposition (EMD) based pulsar signal denoising method using mode cell proportion shrinking is proposed. Firstly, the pulsar signal is decomposed into a series of intrinsic mode functions (IMFs), and the part between two adjacent zero-crossings within an IMF is defined as a mode cell. Then, an optimal proportional shrinking factor is constructed, treating the mode cell as the basic unit of analysis. Finally, all mode cells within each IMF are denoised by proportional shrinking, and the mode cell proportion shrinking denoising model is established.
The experimental results show that, compared with the two EMD denoising algorithms based on a coefficient threshold and on a mode cell threshold, the proposed method removes the pulsar signal noise more effectively while better preserving the useful detail information of the original signal.
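The mode-cell construction described in this abstract is easy to prototype. The sketch below is a minimal illustration under stated assumptions, not the authors' algorithm: it assumes the IMFs have already been computed by some EMD implementation, splits each IMF at zero crossings into mode cells, and scales every cell by a simple energy-based factor; the rule `lam` is a hypothetical stand-in for the optimal shrinking factor derived in the paper.

```python
import numpy as np

def shrink_imf(imf, noise_scale):
    """Split one IMF at zero crossings into mode cells and shrink each cell.

    `noise_scale` is a per-IMF noise-amplitude estimate; the proportional
    factor `lam` below is an illustrative energy-based rule, not the
    optimal factor of the paper.
    """
    out = imf.copy()
    # boundaries of mode cells: indices where the sign of the IMF changes
    zc = np.where(np.signbit(imf[1:]) != np.signbit(imf[:-1]))[0] + 1
    bounds = np.concatenate(([0], zc, [imf.size]))
    for a, b in zip(bounds[:-1], bounds[1:]):
        cell = imf[a:b]
        amp = np.abs(cell).max()
        # shrink weak (noise-dominated) cells strongly, keep strong cells
        lam = max(0.0, 1.0 - (noise_scale / amp) ** 2) if amp > 0 else 0.0
        out[a:b] = lam * cell
    return out

def denoise(imfs, noise_scales):
    """Denoise a stack of IMFs (n_imfs x n_samples) and reconstruct the signal."""
    return np.sum([shrink_imf(m, s) for m, s in zip(imfs, noise_scales)], axis=0)
```

Treating whole cells, rather than individual samples, as the unit of shrinkage is what preserves the local waveform shape between zero crossings.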
Ladder Operators

Mathematically, a ladder operator is defined as an operator which, when applied to a state, creates a new state with a raised or lowered eigenvalue [1]. Their utility in quantum mechanics follows from their ability to describe the energy spectrum and associated wavefunctions in a more manageable way, without solving differential equations. We will discuss the most prominent example of the use of these operators: the quantum harmonic oscillator. Their use does not end there, however, as the mathematics of ladder operators extends easily to more complicated problems, including angular momentum and many-body problems. In the latter case, the operators serve as creation and annihilation operators, adding to or subtracting from the number of particles in a given state.

Quantum Harmonic Oscillator

[Figure: energy levels and wavefunctions for the harmonic oscillator potential; image taken from ref. [2].]

The one-dimensional harmonic oscillator appears frequently in quantum mechanical calculations, since many systems can be approximated by that potential close to an equilibrium point [4]. As we know, for the harmonic oscillator the potential is given by

$$V(x) = \tfrac{1}{2}m\omega^2x^2.$$

In class, we discussed the energy spectrum and solutions of the time-independent Schrödinger equation, which in this case reads

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + \tfrac{1}{2}m\omega^2x^2\,\psi = E\psi.$$

In this formulation, our operators are defined in the coordinate basis. Notice that the first term represents the kinetic energy $\frac{P^2}{2m}$, while the second represents the potential. Accordingly, we have operators for momentum and position as follows:

$$P = -i\hbar\frac{d}{dx}, \qquad X = x.$$

Of course, other bases exist, including the momentum basis or the energy basis, in which the expression of these operators might be different. The true beauty of the ladder operator method is that we can define the Hamiltonian in the energy basis without specifying the form of the operators. All that is needed is knowledge of their commutator, which is independent of basis. We will return to this idea later. For the moment, we can continue by rewriting the above Schrödinger equation to show explicitly the operation on $\psi$:

$$H\psi = \frac{1}{2m}\left[P^2 + (m\omega X)^2\right]\psi = E\psi.$$

The ladder operator method is sometimes referred to as the "method of factorization" because the next step involves factoring the term in brackets [3]. If we were dealing with numbers rather than operators, it would be clear that

$$c^2 + d^2 = (c + id)(c - id).$$

In the case of operators, we cannot assume that $cd = dc$. However, we can continue the examination by defining two new operators, corresponding to the two sets of parentheses above. In terms of the previously defined position and momentum operators,

$$a = \frac{1}{\sqrt{2m}}\left(m\omega X + iP\right), \qquad a^+ = \frac{1}{\sqrt{2m}}\left(m\omega X - iP\right).$$

These are our ladder operators. To facilitate their use, we need to determine their commutation relation. We can easily show $[X,P] = i\hbar$. Using the definition of the commutator, $[X,P] = XP - PX$, applied to a test function $f(x)$:

$$[X,P]f = -i\hbar\,x\frac{df}{dx} + i\hbar\frac{d}{dx}(xf) = i\hbar f.$$

Dropping our test function $f(x)$, we see the commutator is indeed $i\hbar$. Now we compute the products of our ladder operators:

$$aa^+ = \frac{1}{2m}\left[P^2 + (m\omega X)^2\right] + \frac{\hbar\omega}{2} = H + \frac{\hbar\omega}{2}, \qquad a^+a = H - \frac{\hbar\omega}{2}.$$

Notice that the first term is simply the sum of the energies, $H$. Also, from the above,

$$[a, a^+] = aa^+ - a^+a = \hbar\omega,$$

and therefore

$$H = a^+a + \frac{\hbar\omega}{2} = aa^+ - \frac{\hbar\omega}{2}.$$

Oftentimes the ladder operators are each defined with an extra factor of $1/\sqrt{\hbar\omega}$ so as to make this commutator equal to one and describe the energies in units of $\hbar\omega$ [3]. We will continue with the present definition for the moment.

Schrödinger Equation in terms of ladder operators

Note the Schrödinger equation becomes

$$\left(a^+a + \frac{\hbar\omega}{2}\right)\psi = E\psi.$$

Here is where the ladder operators become especially useful.
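Because the whole construction rests only on the commutator, it can be sanity-checked numerically before we use it. The sketch below is a quick illustration with numpy, using the rescaled dimensionless operators just mentioned (units $\hbar = \omega = 1$, so that $[a, a^+] = 1$ and $H = a^+a + 1/2$) in a truncated number basis:

```python
import numpy as np

N = 12                                        # truncated Hilbert space: |0>, ..., |N-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation: a|n> = sqrt(n)|n-1>
adag = a.T                                    # creation operator (real matrix, so .T is the dagger)

# [a, a+] = 1 holds exactly except in the last row/column,
# an artifact of truncating the infinite-dimensional basis.
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))   # True

# Hamiltonian in units hbar = omega = 1: H = a+ a + 1/2
H = adag @ a + 0.5 * np.eye(N)
print(np.diag(H)[:5])                # [0.5 1.5 2.5 3.5 4.5], i.e. E_n = n + 1/2

# The creation operator raises: acting on |0> gives |1>
ground = np.zeros(N)
ground[0] = 1.0
print(adag @ ground)                 # 1.0 in the n = 1 slot, zero elsewhere
```

The raising and lowering behavior derived next can be read off directly from these matrices.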
If $\psi$ is a solution of the equation, we can demonstrate that $a^+\psi$ is also a solution. Keeping the commutator in mind,

$$H(a^+\psi) = \left(a^+a + \tfrac{\hbar\omega}{2}\right)a^+\psi = a^+\left(aa^+ + \tfrac{\hbar\omega}{2}\right)\psi = a^+(H + \hbar\omega)\psi = (E + \hbar\omega)(a^+\psi).$$

In the same manner,

$$H(a\psi) = (E - \hbar\omega)(a\psi).$$

So $(a^+\psi)$ is an eigenvector with an energy one unit $\hbar\omega$ greater than $\psi$, and $(a\psi)$ is a solution of the Hamiltonian with one $\hbar\omega$ less energy than $\psi$. The operators can be said to have created or annihilated one quantum of energy equal to $\hbar\omega$. For this reason they are also termed creation ($a^+$) and annihilation ($a$) operators [5]. Furthermore, starting with any solution, we can simply apply the ladder operators successively to generate any other solution.

We know the harmonic oscillator contains a ground state with minimum energy, below which no state exists. Then, if we apply the annihilation operator to that state, we must get 0 as the result. In other words, a "lowest rung" must exist on our ladder of allowed energies and states [4]:

$$a\psi_0 = 0.$$

Inserting our first definition of the lowering operator, we can solve for $\psi_0$:

$$\frac{1}{\sqrt{2m}}\left(m\omega x + \hbar\frac{d}{dx}\right)\psi_0 = 0.$$

This can be solved with simple integration:

$$\frac{d\psi_0}{\psi_0} = -\frac{m\omega}{\hbar}\,x\,dx \quad\Longrightarrow\quad \psi_0(x) = A_0\,e^{-m\omega x^2/2\hbar},$$

where $A_0$ is a normalization constant, in this case $\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}$. So, assuming a lowest state allowed us to infer its form, once we chose a basis in which to express the operator. Even without specifying a formulation, we can find the energy of that level [3], which clearly should not depend on the basis:

$$H\psi_0 = \left(a^+a + \tfrac{\hbar\omega}{2}\right)\psi_0 = \tfrac{\hbar\omega}{2}\,\psi_0.$$

We can define the number operator as

$$N = \frac{a^+a}{\hbar\omega},$$

where, again, many formulations of ladder operators incorporate the divisor into the operators themselves. The number operator, when acting on a state, simply returns the number of the current energy level, $N\psi_n = n\psi_n$. Using ladder operators, then, we have completely defined the harmonic oscillator states and energy levels:

$$\psi_n \propto (a^+)^n\psi_0, \qquad E_n = \left(n + \tfrac{1}{2}\right)\hbar\omega, \qquad n = 0, 1, 2, \ldots$$

Ladder operators are seen in many facets of quantum mechanics. Earlier, we defined the ladder operators in terms of the momentum and position operators. With little effort, we could equally define $X$ and $P$ as linear combinations of the ladder operators. Because many of the potentials we are concerned with are functions of position only, ladder operators for other systems can be defined in a similar way. These formulations offer a method of working with such problems without solving the differential equations.

In the theory of quantum fields, the momentum and potential of a region are simultaneously described in space-time by a state field. The mechanism of creation and annihilation operators is essential in this case, allowing us to describe the state as a combination of these operators, thus quantizing the field [6].

We have seen that ladder operators and their commutator relationship are all that are needed to completely solve the quantum harmonic oscillator. We were able to do so without ever actually addressing the choice of basis or solving the differential equations (though I did both in order to write a recognizable form of the ground state; we could leave our work in terms of the operators). Were this the only case where ladder operators proved useful, they would still merit much study. Fortunately, they find wide use in other applications of quantum theory, and often make calculations much easier.

References
3. Shankar, R., Principles of Quantum Mechanics, 2nd ed., ch. 7. New Haven: Plenum Press, 1994.
4. Griffiths, D., Introduction to Quantum Mechanics, ch. 2. NJ: Prentice Hall, 1995.
6. Schiff, L., Quantum Mechanics, 3rd ed., ch. 14. New York: McGraw-Hill, 1968.
Led By An Equation

The Schrödinger equation of quantum mechanics is probably one of the most beautiful ever conceived, because it can describe a Universe that we will never be able to see in its entirety. We did not invent that equation; we merely discovered it, peering at the Universe through the keyhole. There is a perfection that pre-exists us: scientists do not invent the Universe.

This equation describes the "wave function", whose purpose is not to determine the exact coordinates of an elementary particle, such as an electron, but rather to define a volume of space within which that particle is most likely to be found. This volume, which represents what is called a "probability cloud", is technically represented by an "orbital", and is associated with the energy level of the particle, which is quantized, the particle itself being described by a "wave packet". When the scientist makes a measurement the particle is suddenly found in only one place, but between one measurement and another the particle dissolves into a superposition of "probability waves" and is potentially present in many different places inside that same orbital. As soon as the measurement is made the wave function collapses instantly, and the probability cloud materializes as a localized particle.

All this means that the wave function behaves like an iridescent soap bubble that hides inside it something we are not allowed to see. The wave function is like the surface of that bubble, and the moment we pierce it with a pin it disappears, leaving only a drop of water. The moment we observe an elementary particle – using the scientific procedure of measurement – we suddenly transform that soap bubble into a drop of water. This is what happens when the wave function collapses. All this means that, in the world of elementary particles, the observer – the one who makes a measurement – inexorably influences what is observed. But it also means that the integral structure of the Universe, seen from the perspective of elementary particles, is invisible to us, even if it is mathematically representable through a probability function.

But particles aggregate with each other to form atoms, molecules and objects, up to the galaxies. Here the space-time reality unfolds before our eyes in all its splendor. Yet as we move inward, at some point there is a barrier that prevents us from seeing what is in fact an "integral reality". Erwin Schrödinger found the mathematical code of this, but it was and remains like Braille for the blind, who read with their hands but not with their eyes. Twenty years later Hugh Everett Jr. mathematically took a Pindaric flight, imagining a multi-dimensional Universe that gathers all the possible realities contained in the probability cloud represented by the Schrödinger equation.

The photo above shows on the left the face of Erwin Schrödinger, while the one on the right shows the back of a girl with a tattoo of the Schrödinger equation, which does not show her face: in fact the integral reality of the Universe has no face, only captivating, elusive curves.
General relativity

Slow motion computer simulation of the black hole binary system GW150914 as seen by a nearby observer, during 0.33 s of its final inspiral, merge, and ringdown. The star field behind the black holes is being heavily distorted and appears to rotate and move, due to extreme gravitational lensing, as spacetime itself is distorted and dragged around by the rotating black holes.[1]

General relativity (GR), also known as the general theory of relativity (GTR), is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. General relativity generalizes special relativity and refines Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of partial differential equations. Some predictions of general relativity differ significantly from those of classical physics, especially concerning the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light. Examples of such differences include gravitational time dilation, gravitational lensing, the gravitational redshift of light, and the gravitational time delay. The predictions of general relativity in relation to classical physics have been confirmed in all observations and experiments to date. Although general relativity is not the only relativistic theory of gravity, it is the simplest theory that is consistent with experimental data. However, unanswered questions remain, the most fundamental being how general relativity can be reconciled with the laws of quantum physics to produce a complete and self-consistent theory of quantum gravity. Einstein's theory has important astrophysical implications. For example, it implies the existence of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape—as an end-state for massive stars. There is ample evidence that the intense radiation emitted by certain kinds of astronomical objects is due to black holes. For example, microquasars and active galactic nuclei result from the presence of stellar black holes and supermassive black holes, respectively. The bending of light by gravity can lead to the phenomenon of gravitational lensing, in which multiple images of the same distant astronomical object are visible in the sky. General relativity also predicts the existence of gravitational waves, which have since been observed directly by the physics collaboration LIGO. In addition, general relativity is the basis of current cosmological models of a consistently expanding universe. Widely acknowledged as a theory of extraordinary beauty, general relativity has often been described as the most beautiful of all existing physical theories.[2] Soon after publishing the special theory of relativity in 1905, Einstein started thinking about how to incorporate gravity into his new relativistic framework. In 1907, beginning with a simple thought experiment involving an observer in free fall, he embarked on what would be an eight-year search for a relativistic theory of gravity.
After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations.[3] These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present, and form the core of Einstein's general theory of relativity.[4] The 19th century mathematician Bernhard Riemann's non-Euclidean geometry, called Riemannian Geometry, provided the key mathematical framework which Einstein fit his physical ideas of gravity on, and enabled him to develop general relativity.[5] The Einstein field equations are nonlinear and very difficult to solve. Einstein used approximation methods in working out initial predictions of the theory. But as early as 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse, and the objects known today as black holes. In the same year, the first steps towards generalizing Schwarzschild's solution to electrically charged objects were taken, which eventually resulted in the Reissner–Nordström solution, now associated with electrically charged black holes.[6] In 1917, Einstein applied his theory to the universe as a whole, initiating the field of relativistic cosmology. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations—the cosmological constant—to match that observational presumption.[7] By 1929, however, the work of Hubble and others had shown that our universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state.[8] Einstein later declared the cosmological constant the biggest blunder of his life.[9] During that period, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein himself had shown in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors").[10] Similarly, a 1919 expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of May 29, 1919,[11] making Einstein instantly famous.[12] Yet the theory entered the mainstream of theoretical physics and astrophysics only with the developments between approximately 1960 and 1975, now known as the golden age of general relativity.[13] Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations.[14] Ever more precise solar system tests confirmed the theory's predictive power,[15] and relativistic cosmology, too, became amenable to direct observational tests.[16] Over the years, general relativity has acquired a reputation as a theory of extraordinary beauty.[2][17][18] Subrahmanyan Chandrasekhar has noted that at multiple levels, general relativity exhibits what Francis Bacon has termed, a "strangeness in the proportion" (i.e. 
elements that excite wonderment and surprise). It juxtaposes fundamental concepts (space and time versus matter and motion) which had previously been considered as entirely independent. Chandrasekhar also noted that Einstein's only guides in his search for an exact theory were the principle of equivalence and his sense that a proper description of gravity should be geometrical at its basis, so that there was an "element of revelation" in the manner in which Einstein arrived at his theory.[19] Other elements of beauty associated with the general theory of relativity are its simplicity, symmetry, the manner in which it incorporates invariance and unification, and its perfect logical consistency.[20]

From classical mechanics to general relativity

Geometry of Newtonian gravity

Relativistic generalization

As intriguing as geometric Newtonian gravity may be, its basis, classical mechanics, is merely a limiting case of (special) relativistic mechanics.[28] In the language of symmetry: where gravity can be neglected, physics is Lorentz invariant as in special relativity rather than Galilei invariant as in classical mechanics. (The defining symmetry of special relativity is the Poincaré group, which includes translations, rotations and boosts.) The differences between the two become significant when dealing with speeds approaching the speed of light, and with high-energy phenomena.[29] With Lorentz symmetry, additional structures come into play. They are defined by the set of light cones (see image). The light-cones define a causal structure: for each event A, there is a set of events that can, in principle, either influence or be influenced by A via signals or interactions that do not need to travel faster than light (such as event B in the image), and a set of events for which such an influence is impossible (such as event C in the image). These sets are observer-independent.[30] In conjunction with the world-lines of freely falling particles, the light-cones can be used to reconstruct the space–time's semi-Riemannian metric, at least up to a positive scalar factor. In mathematical terms, this defines a conformal structure[31] or conformal geometry.

Einstein's equations

Einstein's field equations read

$G_{\mu\nu}\equiv R_{\mu\nu}-\tfrac{1}{2}R\,g_{\mu\nu}=\kappa T_{\mu\nu}.$

On the left-hand side is the Einstein tensor $G_{\mu\nu}$, a specific divergence-free combination of the Ricci tensor $R_{\mu\nu}$ and the metric $g_{\mu\nu}$, which is symmetric. On the right-hand side, $T_{\mu\nu}$ is the energy–momentum tensor. All tensors are written in abstract index notation.[37] Matching the theory's prediction to observational results for planetary orbits or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics, the proportionality constant can be fixed as $\kappa=8\pi G/c^4$, where $G$ is the gravitational constant and $c$ the speed of light in vacuum.[38] When there is no matter present, so that the energy–momentum tensor vanishes, the results are the vacuum Einstein equations,

$R_{\mu\nu}=0.$

In general relativity, the world line of a particle free from all external, non-gravitational force is a particular type of geodesic in curved spacetime. In other words, a freely moving or falling particle always moves along a geodesic. The geodesic equation is

$\frac{d^2x^\mu}{ds^2}+\Gamma^\mu_{\alpha\beta}\frac{dx^\alpha}{ds}\frac{dx^\beta}{ds}=0,$

where $s$ is a scalar parameter of motion (e.g. the proper time), and $\Gamma^\mu_{\alpha\beta}$ are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients), symmetric in the two lower indices. Greek indices may take the values 0, 1, 2, 3, and the summation convention is used for the repeated indices $\alpha$ and $\beta$. A small symbolic sketch of how the $\Gamma^\mu_{\alpha\beta}$ are computed from a metric follows below.
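As flagged above, here is a minimal symbolic sketch (assuming SymPy; the coordinate and variable names are my own choice, not part of the article) that computes the Christoffel symbols entering the geodesic equation from a given metric, using the Schwarzschild metric as the example:

```python
# Christoffel symbols Gamma^mu_{alpha beta} = 1/2 g^{mu nu} (d_alpha g_{nu beta}
# + d_beta g_{nu alpha} - d_nu g_{alpha beta}), evaluated for the Schwarzschild metric.
import sympy as sp

t, r, theta, phi = sp.symbols('t r theta phi')
G, M, c = sp.symbols('G M c', positive=True)
coords = [t, r, theta, phi]

f = 1 - 2 * G * M / (c**2 * r)
g = sp.diag(-f * c**2, 1 / f, r**2, r**2 * sp.sin(theta)**2)   # metric g_{mu nu}
ginv = g.inv()

def christoffel(mu, alpha, beta):
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[mu, nu] *
        (sp.diff(g[nu, alpha], coords[beta]) +
         sp.diff(g[nu, beta], coords[alpha]) -
         sp.diff(g[alpha, beta], coords[nu]))
        for nu in range(4)))

# e.g. Gamma^r_{tt}, the component responsible for the radial pull
print(christoffel(1, 0, 0))
```

For large $r$ the printed component reduces to $GM/r^2$, which is how the geodesic equation recovers the familiar Newtonian acceleration in the weak-field limit.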
The quantity on the left-hand side of the geodesic equation is the acceleration of a particle, and so this equation is analogous to Newton's laws of motion, which likewise provide formulae for the acceleration of a particle. This equation of motion employs the Einstein notation, meaning that repeated indices are summed (i.e. from zero to three). The Christoffel symbols are functions of the four space-time coordinates, and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation.

Alternatives to general relativity

There are alternatives to general relativity built upon the same premises, which include additional rules and/or constraints, leading to different field equations. Examples are Whitehead's theory, Brans–Dicke theory, teleparallelism, f(R) gravity and Einstein–Cartan theory.[39]

Definition and basic applications

Definition and basic properties

General relativity is a metric theory of gravitation. At its core are Einstein's equations, which describe the relation between the geometry of a four-dimensional pseudo-Riemannian manifold representing spacetime, and the energy–momentum contained in that spacetime.[40] Phenomena that in classical mechanics are ascribed to the action of the force of gravity (such as free-fall, orbital motion, and spacecraft trajectories), correspond to inertial motion within a curved geometry of spacetime in general relativity; there is no gravitational force deflecting objects from their natural, straight paths. Instead, gravity corresponds to changes in the properties of space and time, which in turn changes the straightest-possible paths that objects will naturally follow.[41] The curvature is, in turn, caused by the energy–momentum of matter. Paraphrasing the relativist John Archibald Wheeler, spacetime tells matter how to move; matter tells spacetime how to curve.[42] As it is constructed using tensors, general relativity exhibits general covariance: its laws—and further laws formulated within the general relativistic framework—take on the same form in all coordinate systems.[44] Furthermore, the theory does not contain any invariant geometric background structures, i.e. it is background independent.
It thus satisfies a more stringent general principle of relativity, namely that the laws of physics are the same for all observers.[45] Locally, as expressed in the equivalence principle, spacetime is Minkowskian, and the laws of physics exhibit local Lorentz invariance.[46] Einstein's equations are nonlinear partial differential equations and, as such, difficult to solve exactly.[48] Nevertheless, a number of exact solutions are known, although only a few have direct physical applications.[49] The best-known exact solutions, and also those most interesting from a physics point of view, are the Schwarzschild solution, the Reissner–Nordström solution and the Kerr metric, each corresponding to a certain type of black hole in an otherwise empty universe,[50] and the Friedmann–Lemaître–Robertson–Walker and de Sitter universes, each describing an expanding cosmos.[51] Exact solutions of great theoretical interest include the Gödel universe (which opens up the intriguing possibility of time travel in curved spacetimes), the Taub-NUT solution (a model universe that is homogeneous, but anisotropic), and anti-de Sitter space (which has recently come to prominence in the context of what is called the Maldacena conjecture).[52]

Consequences of Einstein's theory

Gravitational time dilation and frequency shift

Gravitational redshift has been measured in the laboratory[59] and using astronomical observations.[60] Gravitational time dilation in the Earth's gravitational field has been measured numerous times using atomic clocks,[61] while ongoing validation is provided as a side effect of the operation of the Global Positioning System (GPS).[62] Tests in stronger gravitational fields are provided by the observation of binary pulsars.[63] All results are in agreement with general relativity.[64] However, at the current level of accuracy, these observations cannot distinguish between general relativity and other theories in which the equivalence principle is valid.[65]

Light deflection and gravitational time delay

General relativity predicts that the path of light will follow the curvature of spacetime as it passes near a star. This effect was initially confirmed by observing the light of stars or distant quasars being deflected as it passes the Sun.[66] This and related predictions follow from the fact that light follows what is called a light-like or null geodesic—a generalization of the straight lines along which light travels in classical physics. Such geodesics are the generalization of the invariance of lightspeed in special relativity.[67] As one examines suitable model spacetimes (either the exterior Schwarzschild solution or, for more than a single mass, the post-Newtonian expansion),[68] several effects of gravity on light propagation emerge. Although the bending of light can also be derived by extending the universality of free fall to light,[69] the angle of deflection resulting from such calculations is only half the value given by general relativity.[70]

Gravitational waves

Ring of test particles deformed by a passing (linearized, amplified for better visibility) gravitational wave

Predicted in 1916[73][74] by Albert Einstein, there are gravitational waves: ripples in the metric of spacetime that propagate at the speed of light. These are one of several analogies between weak-field gravity and electromagnetism in that they are analogous to electromagnetic waves.
On February 11, 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a pair of black holes merging.[75][76][77] Some exact solutions describe gravitational waves without any approximation, e.g., a wave train traveling through empty space[80] or Gowdy universes, varieties of an expanding cosmos filled with gravitational waves.[81] But for gravitational waves produced in astrophysically relevant situations, such as the merger of two black holes, numerical methods are presently the only way to construct appropriate models.[82]

Orbital effects and the relativity of direction

Precession of apsides

In general relativity, the apsides of any orbit (the point of the orbiting body's closest approach to the system's center of mass) will precess; the orbit is not an ellipse, but akin to an ellipse that rotates on its focus, resulting in a rose curve-like shape (see image). Einstein first derived this result by using an approximate metric representing the Newtonian limit and treating the orbiting body as a test particle. For him, the fact that his theory gave a straightforward explanation of Mercury's anomalous perihelion shift, discovered earlier by Urbain Le Verrier in 1859, was important evidence that he had at last identified the correct form of the gravitational field equations.[83] The effect can also be derived by using either the exact Schwarzschild metric (describing spacetime around a spherical mass)[84] or the much more general post-Newtonian formalism.[85] It is due to the influence of gravity on the geometry of space and to the contribution of self-energy to a body's gravity (encoded in the nonlinearity of Einstein's equations).[86] Relativistic precession has been observed for all planets that allow for accurate precession measurements (Mercury, Venus, and Earth),[87] as well as in binary pulsar systems, where it is larger by five orders of magnitude.[88] In general relativity the perihelion shift $\sigma$, expressed in radians per revolution, is approximately given by[89]

$\sigma=\frac{24\pi^3L^2}{T^2c^2(1-e^2)},$

where $L$ is the semi-major axis of the orbit, $T$ its orbital period, $e$ its orbital eccentricity, and $c$ the speed of light; for Mercury this works out to the famous 43 arcseconds per century.

Orbital decay

Orbital decay for PSR1913+16: time shift in seconds, tracked over three decades.[90]

Geodetic precession and frame-dragging

Several relativistic effects are directly related to the relativity of direction.[94] One is geodetic precession: the axis direction of a gyroscope in free fall in curved spacetime will change when compared, for instance, with the direction of light received from distant stars—even though such a gyroscope represents the way of keeping a direction as stable as possible ("parallel transport").[95] For the Moon–Earth system, this effect has been measured with the help of lunar laser ranging.[96] More recently, it has been measured for test masses aboard the satellite Gravity Probe B to a precision of better than 0.3%.[97][98] Near a rotating mass, there are gravitomagnetic or frame-dragging effects. A distant observer will determine that objects close to the mass get "dragged around".
This is most extreme for rotating black holes where, for any object entering a zone known as the ergosphere, rotation is inevitable.[99] Such effects can again be tested through their influence on the orientation of gyroscopes in free fall.[100] Somewhat controversial tests have been performed using the LAGEOS satellites, confirming the relativistic prediction.[101] Also the Mars Global Surveyor probe around Mars has been used.[102][103]

Astrophysical applications

Gravitational lensing

The deflection of light by gravity is responsible for a new class of astronomical phenomena. If a massive object is situated between the astronomer and a distant target object with appropriate mass and relative distances, the astronomer will see multiple distorted images of the target. Such effects are known as gravitational lensing.[104] Depending on the configuration, scale, and mass distribution, there can be two or more images, a bright ring known as an Einstein ring, or partial rings called arcs.[105] The earliest example was discovered in 1979;[106] since then, more than a hundred gravitational lenses have been observed.[107] Even if the multiple images are too close to each other to be resolved, the effect can still be measured, e.g., as an overall brightening of the target object; a number of such "microlensing events" have been observed.[108]

Gravitational wave astronomy

Artist's impression of the space-borne gravitational wave detector LISA

Observations of binary pulsars provide strong indirect evidence for the existence of gravitational waves (see Orbital decay, above). Detection of these waves is a major goal of current relativity-related research.[110] Several land-based gravitational wave detectors are currently in operation, most notably the interferometric detectors GEO 600, LIGO (two detectors), TAMA 300 and VIRGO.[111] Various pulsar timing arrays are using millisecond pulsars to detect gravitational waves in the 10⁻⁹ to 10⁻⁶ Hz frequency range, which originate from binary supermassive black holes.[112] A European space-based detector, eLISA / NGO, is currently under development,[113] with a precursor mission (LISA Pathfinder) having launched in December 2015.[114] Observations of gravitational waves promise to complement observations in the electromagnetic spectrum.[115] They are expected to yield information about black holes and other dense objects such as neutron stars and white dwarfs, about certain kinds of supernova implosions, and about processes in the very early universe, including the signature of certain types of hypothetical cosmic string.[116] In February 2016, the Advanced LIGO team announced that they had detected gravitational waves from a black hole merger.[75][76][77]

Black holes and other compact objects

Whenever the ratio of an object's mass to its radius becomes sufficiently large, general relativity predicts the formation of a black hole, a region of space from which nothing, not even light, can escape.
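As a rough numerical aside to the mass-to-radius criterion just mentioned (a sketch of mine, not part of the article), the relevant length scale is the Schwarzschild radius $r_s = 2GM/c^2$: an object of mass $M$ compressed below this radius is, in general relativity, a black hole.

```python
# Schwarzschild radius r_s = 2GM/c^2 for a few representative masses.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(M_sun))          # ~2.95e3 m: the Sun squeezed into ~3 km
print(schwarzschild_radius(4.3e6 * M_sun))  # ~1.3e10 m: a Sagittarius A*-like mass, ~0.09 AU
```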
In the currently accepted models of stellar evolution, neutron stars of around 1.4 solar masses, and stellar black holes with a few to a few dozen solar masses, are thought to be the final state for the evolution of massive stars.[117] Usually a galaxy has one supermassive black hole with a few million to a few billion solar masses in its center,[118] and its presence is thought to have played an important role in the formation of the galaxy and larger cosmic structures.[119] Astronomically, the most important property of compact objects is that they provide a supremely efficient mechanism for converting gravitational energy into electromagnetic radiation.[120] Accretion, the falling of dust or gaseous matter onto stellar or supermassive black holes, is thought to be responsible for some spectacularly luminous astronomical objects, notably diverse kinds of active galactic nuclei on galactic scales and stellar-size objects such as microquasars.[121] In particular, accretion can lead to relativistic jets, focused beams of highly energetic particles that are being flung into space at almost light speed.[122] General relativity plays a central role in modelling all these phenomena,[123] and observations provide strong evidence for the existence of black holes with the properties predicted by the theory.[124]

Cosmology

The current models of cosmology are based on Einstein's field equations, which include the cosmological constant $\Lambda$ since it has important influence on the large-scale dynamics of the cosmos,

$R_{\mu\nu}-\tfrac{1}{2}R\,g_{\mu\nu}+\Lambda g_{\mu\nu}=\kappa T_{\mu\nu},$

where $g_{\mu\nu}$ is the spacetime metric.[127] Isotropic and homogeneous solutions of these enhanced equations, the Friedmann–Lemaître–Robertson–Walker solutions,[128] allow physicists to model a universe that has evolved over the past 14 billion years from a hot, early Big Bang phase.[129] Once a small number of parameters (for example the universe's mean matter density) have been fixed by astronomical observation,[130] further observational data can be used to put the models to the test.[131] Predictions, all successful, include the initial abundance of chemical elements formed in a period of primordial nucleosynthesis,[132] the large-scale structure of the universe,[133] and the existence and properties of a "thermal echo" from the early cosmos, the cosmic background radiation.[134] Astronomical observations of the cosmological expansion rate allow the total amount of matter in the universe to be estimated, although the nature of that matter remains mysterious in part.
About 90% of all matter appears to be dark matter, which has mass (or, equivalently, gravitational influence), but does not interact electromagnetically and, hence, cannot be observed directly.[135] There is no generally accepted description of this new kind of matter, within the framework of known particle physics[136] or otherwise.[137] Observational evidence from redshift surveys of distant supernovae and measurements of the cosmic background radiation also show that the evolution of our universe is significantly influenced by a cosmological constant resulting in an acceleration of cosmic expansion or, equivalently, by a form of energy with an unusual equation of state, known as dark energy, the nature of which remains unclear.[138] An inflationary phase,[139] an additional phase of strongly accelerated expansion at cosmic times of around 10⁻³³ seconds, was hypothesized in 1980 to account for several puzzling observations that were unexplained by classical cosmological models, such as the nearly perfect homogeneity of the cosmic background radiation.[140] Recent measurements of the cosmic background radiation have resulted in the first evidence for this scenario.[141] However, there is a bewildering variety of possible inflationary scenarios, which cannot be restricted by current observations.[142] An even larger question is the physics of the earliest universe, prior to the inflationary phase and close to where the classical models predict the big bang singularity. An authoritative answer would require a complete theory of quantum gravity, which has not yet been developed[143] (cf. the section on quantum gravity, below).

Time travel

Kurt Gödel showed[144] that solutions to Einstein's equations exist that contain closed timelike curves (CTCs), which allow for loops in time. The solutions require extreme physical conditions unlikely ever to occur in practice, and it remains an open question whether further laws of physics will eliminate them completely. Since then, other—similarly impractical—GR solutions containing CTCs have been found, such as the Tipler cylinder and traversable wormholes.

Advanced concepts

Causal structure and global geometry

Penrose–Carter diagram of an infinite Minkowski universe

Early studies of black holes relied on explicit solutions of Einstein's equations, notably the spherically symmetric Schwarzschild solution (used to describe a static black hole) and the axisymmetric Kerr solution (used to describe a rotating, stationary black hole, and introducing interesting features such as the ergosphere). Using global geometry, later studies have revealed more general properties of black holes. With time they become rather simple objects characterized by eleven parameters specifying: electric charge, mass-energy, linear momentum, angular momentum, and location at a specified time. This is stated by the black hole uniqueness theorem: "black holes have no hair", that is, no distinguishing marks like the hairstyles of humans. Irrespective of the complexity of a gravitating object collapsing to form a black hole, the object that results (having emitted gravitational waves) is very simple.[149]

There are other types of horizons.
In an expanding universe, an observer may find that some regions of the past cannot be observed ("particle horizon"), and some regions of the future cannot be influenced (event horizon).[153] Even in flat Minkowski space, when described by an accelerated observer (Rindler space), there will be horizons associated with a semi-classical radiation known as Unruh radiation.[154] Another general feature of general relativity is the appearance of spacetime boundaries known as singularities. Spacetime can be explored by following up on timelike and lightlike geodesics—all possible ways that light and particles in free fall can travel. But some solutions of Einstein's equations have "ragged edges"—regions known as spacetime singularities, where the paths of light and falling particles come to an abrupt end, and geometry becomes ill-defined. In the more interesting cases, these are "curvature singularities", where geometrical quantities characterizing spacetime curvature, such as the Ricci scalar, take on infinite values.[155] Well-known examples of spacetimes with future singularities—where worldlines end—are the Schwarzschild solution, which describes a singularity inside an eternal static black hole,[156] or the Kerr solution with its ring-shaped singularity inside an eternal rotating black hole.[157] The Friedmann–Lemaître–Robertson–Walker solutions and other spacetimes describing universes have past singularities on which worldlines begin, namely Big Bang singularities, and some have future singularities (Big Crunch) as well.[158] Given that these examples are all highly symmetric—and thus simplified—it is tempting to conclude that the occurrence of singularities is an artifact of idealization.[159] The famous singularity theorems, proved using the methods of global geometry, say otherwise: singularities are a generic feature of general relativity, and unavoidable once the collapse of an object with realistic matter properties has proceeded beyond a certain stage[160] and also at the beginning of a wide class of expanding universes.[161] However, the theorems say little about the properties of singularities, and much of current research is devoted to characterizing these entities' generic structure (hypothesized e.g. by the BKL conjecture).[162] The cosmic censorship hypothesis states that all realistic future singularities (no perfect symmetries, matter with realistic properties) are safely hidden away behind a horizon, and thus invisible to all distant observers. While no formal proof yet exists, numerical simulations offer supporting evidence of its validity.[163]

Evolution equations

To understand Einstein's equations as partial differential equations, it is helpful to formulate them in a way that describes the evolution of the universe over time. This is done in "3+1" formulations, where spacetime is split into three space dimensions and one time dimension.
The best-known example is the ADM formalism.[165] These decompositions show that the spacetime evolution equations of general relativity are well-behaved: solutions always exist, and are uniquely defined, once suitable initial conditions have been specified.[166] Such formulations of Einstein's field equations are the basis of numerical relativity.[167]

Global and quasi-local quantities

Nevertheless, there are possibilities to define a system's total mass, either using a hypothetical "infinitely distant observer" (ADM mass)[169] or suitable symmetries (Komar mass).[170] If one excludes from the system's total mass the energy being carried away to infinity by gravitational waves, the result is the Bondi mass at null infinity.[171] Just as in classical physics, it can be shown that these masses are positive.[172] Corresponding global definitions exist for momentum and angular momentum.[173] There have also been a number of attempts to define quasi-local quantities, such as the mass of an isolated system formulated using only quantities defined within a finite region of space containing that system. The hope is to obtain a quantity useful for general statements about isolated systems, such as a more precise formulation of the hoop conjecture.[174]

Relationship with quantum theory

If general relativity were considered to be one of the two pillars of modern physics, then quantum theory, the basis of understanding matter from elementary particles to solid state physics, would be the other.[175] However, how to reconcile quantum theory with general relativity is still an open question.

Quantum field theory in curved spacetime

Ordinary quantum field theories, which form the basis of modern elementary particle physics, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth.[176] In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime.
These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime.[177] Using this formalism, it can be shown that black holes emit a blackbody spectrum of particles known as Hawking radiation, leading to the possibility that they evaporate over time.[178] As briefly mentioned above, this radiation plays an important role for the thermodynamics of black holes.[179]

Quantum gravity

The demand for consistency between a quantum description of matter and a geometric description of spacetime,[180] as well as the appearance of singularities (where curvature length scales become microscopic), indicate the need for a full theory of quantum gravity: for an adequate description of the interior of black holes, and of the very early universe, a theory is required in which gravity and the associated geometry of spacetime are described in the language of quantum physics.[181] Despite major efforts, no complete and consistent theory of quantum gravity is currently known, even though a number of promising candidates exist.[182][183] Attempts to generalize ordinary quantum field theories, used in elementary particle physics to describe fundamental interactions, so as to include gravity have led to serious problems.[184] Some have argued that at low energies, this approach proves successful, in that it results in an acceptable effective (quantum) field theory of gravity.[185] At very high energies, however, the perturbative results are badly divergent and lead to models devoid of predictive power ("perturbative non-renormalizability").[186]

Simple spin network of the type used in loop quantum gravity.

One attempt to overcome these limitations is string theory, a quantum theory not of point particles, but of minute one-dimensional extended objects.[187] The theory promises to be a unified description of all particles and interactions, including gravity;[188] the price to pay is unusual features such as six extra dimensions of space in addition to the usual three.[189] In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity[190] form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity.[191] Another approach starts with the canonical quantization procedures of quantum theory. Using the initial-value formulation of general relativity (cf. evolution equations above), the result is the Wheeler–DeWitt equation (an analogue of the Schrödinger equation) which, regrettably, turns out to be ill-defined without a proper ultraviolet (lattice) cutoff.[192] However, with the introduction of what are now known as Ashtekar variables,[193] this leads to a promising model known as loop quantum gravity.
Space is represented by a web-like structure called a spin network, evolving over time in discrete steps.[194] Depending on which features of general relativity and quantum theory are accepted unchanged, and on what level changes are introduced,[195] there are numerous other attempts to arrive at a viable theory of quantum gravity, some examples being the lattice theory of gravity based on the Feynman Path Integral approach and Regge Calculus,[182] dynamical triangulations,[196] causal sets,[197] twistor models[198] or the path integral based models of quantum cosmology.[199]

Current status

Observation of gravitational waves from binary black hole merger GW150914.

General relativity has emerged as a highly successful model of gravitation and cosmology, which has so far passed many unambiguous observational and experimental tests. However, there are strong indications the theory is incomplete.[201] The problem of quantum gravity and the question of the reality of spacetime singularities remain open.[202] Observational data that is taken as evidence for dark energy and dark matter could indicate the need for new physics.[203] Even taken as is, general relativity is rich with possibilities for further exploration. Mathematical relativists seek to understand the nature of singularities and the fundamental properties of Einstein's equations,[204] while numerical relativists run increasingly powerful computer simulations (such as those describing merging black holes).[205] In February 2016, it was announced that the existence of gravitational waves was directly detected by the Advanced LIGO team on September 14, 2015.[77][206][207] A century after its introduction, general relativity remains a highly active area of research.[208]

References

1. ^ "GW150914: LIGO Detects Gravitational Waves". Retrieved 18 April 2016. 2. ^ a b Landau & Lifshitz 1975, p. 228 "...the general theory of relativity...was established by Einstein, and represents probably the most beautiful of all existing physical theories." 4. ^ Pais 1982, ch. 9 to 15, Janssen 2005; an up-to-date collection of current research, including reprints of many of the original articles, is Renn 2007; an accessible overview can be found in Renn 2005, pp. 110ff. Einstein's original papers are found in Digital Einstein, volumes 4 and 6. An early key article is Einstein 1907, cf. Pais 1982, ch. 9. The publication featuring the field equations is Einstein 1915, cf. Pais 1982, ch. 11–15 5. ^ Moshe Carmeli (2008). Relativity: Modern Large-Scale Structures of the Cosmos. pp. 92, 93. World Scientific Publishing 7. ^ Einstein 1917, cf. Pais 1982, ch. 15e 10. ^ Pais 1982, pp. 253–254 11. ^ Kennefick 2005, Kennefick 2007 12. ^ Pais 1982, ch. 16 13. ^ Thorne, Kip (2003). The future of theoretical physics and cosmology: celebrating Stephen Hawking's 60th birthday. Cambridge University Press. p. 74. ISBN 978-0-521-82081-3. Extract of page 74 16. ^ Section Cosmology and references therein; the historical development is in Overbye 1999 17. ^ Wald 1984, p. 3 18. ^ Rovelli 2015, pp. 1–6 "General relativity is not just an extraordinarily beautiful physical theory providing the best description of the gravitational interaction we have so far. It is more." 19. ^ Chandrasekhar 1984, p. 6 20. ^ Engler 2002 21. ^ The following exposition re-traces that of Ehlers 1973, sec. 1 22. ^ Arnold 1989, ch. 1 23. ^ Ehlers 1973, pp. 5f 24. ^ Will 1993, sec. 2.4, Will 2006, sec. 2 25. ^ Wheeler 1990, ch. 2 27. ^ Ehlers 1973, pp. 10f 29.
^ An in-depth comparison between the two symmetry groups can be found in Giulini 2006 31. ^ Ehlers 1973, sec. 2.3 32. ^ Ehlers 1973, sec. 1.4, Schutz 1985, sec. 5.1 38. ^ Kenyon 1990, sec. 7.4 41. ^ At least approximately, cf. Poisson 2004 42. ^ Wheeler 1990, p. xi 43. ^ Wald 1984, sec. 4.4 44. ^ Wald 1984, sec. 4.1 46. ^ section 5 in ch. 12 of Weinberg 1972 47. ^ Introductory chapters of Stephani et al. 2003 50. ^ Chandrasekhar 1983, ch. 3,5,6 51. ^ Narlikar 1993, ch. 4, sec. 3.3 53. ^ Lehner 2002 54. ^ For instance Wald 1984, sec. 4.4 55. ^ Will 1993, sec. 4.1 and 4.2 56. ^ Will 2006, sec. 3.2, Will 1993, ch. 4 63. ^ Stairs 2003 and Kramer 2004 65. ^ Ohanian & Ruffini 1994, pp. 164–172 66. ^ Cf. Kennefick 2005 for the classic early measurements by Arthur Eddington's expeditions. For an overview of more recent measurements, see Ohanian & Ruffini 1994, ch. 4.3. For the most precise direct modern observations using quasars, cf. Shapiro et al. 2004 68. ^ Blanchet 2006, sec. 1.3 72. ^ Will 1993, sec. 7.1 and 7.2 77. ^ a b c "Gravitational waves detected 100 years after Einstein's prediction". NSF - National Science Foundation. 11 February 2016. 79. ^ For example Jaranowski & Królak 2005 80. ^ Rindler 2001, ch. 13 81. ^ Gowdy 1971, Gowdy 1974 84. ^ Rindler 2001, sec. 11.9 85. ^ Will 1993, pp. 177–181 88. ^ Kramer et al. 2006 89. ^ Dediu, Adrian-Horia; Magdalena, Luis; Martín-Vide, Carlos (2015). Theory and Practice of Natural Computing: Fourth International Conference, TPNC 2015, Mieres, Spain, December 15–16, 2015. Proceedings (illustrated ed.). Springer. p. 141. ISBN 978-3-319-26841-5. Extract of page 141 93. ^ Kramer 2004 96. ^ Bertotti, Ciufolini & Bender 1987, Nordtvedt 2003 97. ^ Kahn 2007 101. ^ Ciufolini & Pavlis 2004, Ciufolini, Pavlis & Peron 2006, Iorio 2009 102. ^ Iorio L. (August 2006), "COMMENTS, REPLIES AND NOTES: A note on the evidence of the gravitomagnetic field of Mars", Classical and Quantum Gravity, 23 (17): 5451–5454, arXiv:gr-qc/0606092, Bibcode:2006CQGra..23.5451I, doi:10.1088/0264-9381/23/17/N01 103. ^ Iorio L. (June 2010), "On the Lense–Thirring test with the Mars Global Surveyor in the gravitational field of Mars", Central European Journal of Physics, 8 (3): 509–513, arXiv:gr-qc/0701146, Bibcode:2010CEJPh...8..509I, doi:10.2478/s11534-009-0117-6 106. ^ Walsh, Carswell & Weymann 1979 108. ^ Roulet & Mollerach 1997 109. ^ Narayan & Bartelmann 1997, sec. 3.7 110. ^ Barish 2005, Bartusiak 2000, Blair & McNamara 1997 111. ^ Hough & Rowan 2000 112. ^ Hobbs, George; Archibald, A.; Arzoumanian, Z.; Backer, D.; Bailes, M.; Bhat, N. D. R.; Burgay, M.; Burke-Spolaor, S.; et al. (2010), "The international pulsar timing array project: using pulsars as a gravitational wave detector", Classical and Quantum Gravity, 27 (8): 084013, arXiv:0911.5206, Bibcode:2010CQGra..27h4013H, doi:10.1088/0264-9381/27/8/084013 113. ^ Danzmann & Rüdiger 2003 115. ^ Thorne 1995 116. ^ Cutler & Thorne 2002 117. ^ Miller 2002, lectures 19 and 21 118. ^ Celotti, Miller & Sciama 1999, sec. 3 119. ^ Springel et al. 2005 and the accompanying summary Gnedin 2005 120. ^ Blandford 1987, sec. 8.2.4 125. ^ Dalal et al. 2006 126. ^ Barack & Cutler 2004 127. ^ Originally Einstein 1917; cf. Pais 1982, pp. 285–288 128. ^ Carroll 2001, ch. 2 130. ^ E.g. with WMAP data, see Spergel et al. 2003 133. ^ Lahav & Suto 2004, Bertschinger 1998, Springel et al. 2005 139. ^ A good introduction is Linde 2005; for a more recent review, see Linde 2006 141. ^ Spergel et al. 2007, sec. 5,6 143. ^ Brandenberger 2008, sec. 2 144. 
^ Gödel 1949 151. ^ Bekenstein 1973, Bekenstein 1974 153. ^ Narlikar 1993, sec. 4.4.4, 4.4.5 160. ^ Namely when there are trapped null surfaces, cf. Penrose 1965 161. ^ Hawking 1966 164. ^ Hawking & Ellis 1973, sec. 7.1 168. ^ Misner, Thorne & Wheeler 1973, §20.4 169. ^ Arnowitt, Deser & Misner 1962 171. ^ For a pedagogical introduction, see Wald 1984, sec. 11.2 173. ^ Townsend 1997, ch. 5 177. ^ Wald 1994, Birrell & Davies 1984 179. ^ Wald 2001, ch. 3 181. ^ Schutz 2003, p. 407 182. ^ a b Hamber 2009 183. ^ A timeline and overview can be found in Rovelli 2000 184. ^ 't Hooft & Veltman 1974 185. ^ Donoghue 1995 186. ^ In particular, a perturbative technique known as renormalization, an integral part of deriving predictions which take into account higher-energy contributions, cf. Weinberg 1996, ch. 17, 18, fails in this case; cf. Veltman 1975, Goroff & Sagnotti 1985; for a recent comprehensive review of the failure of perturbative renormalizability for quantum gravity see Hamber 2009 189. ^ Green, Schwarz & Witten 1987, sec. 4.2 190. ^ Weinberg 2000, ch. 31 191. ^ Townsend 1996, Duff 1996 192. ^ Kuchař 1973, sec. 3 194. ^ For a review, see Thiemann 2007; more extensive accounts can be found in Rovelli 1998, Ashtekar & Lewandowski 2004 as well as in the lecture notes Thiemann 2003 195. ^ Isham 1994, Sorkin 1997 196. ^ Loll 1998 197. ^ Sorkin 2005 198. ^ Penrose 2004, ch. 33 and refs therein 199. ^ Hawking 1987 200. ^ Ashtekar 2007, Schwarz 2007 202. ^ section Quantum gravity, above 203. ^ section Cosmology, above 204. ^ Friedrich 2005 206. ^ See Bartusiak 2000 for an account up to that year; up-to-date news can be found on the websites of major detector collaborations such as GEO 600 Archived 2007-02-18 at the Wayback Machine and LIGO 207. ^ For the most recent papers on gravitational wave polarizations of inspiralling compact binaries, see Blanchet et al. 2008, and Arun et al. 2008; for a review of work on compact binaries, see Blanchet 2006 and Futamase & Itoh 2006; for a general review of experimental tests of general relativity, see Will 2006 208. ^ See, e.g., the electronic review journal Living Reviews in Relativity
I know how to calculate them and such stuff, but I wanted to know what they actually signify. I have a vague idea that they have something to do with an electron's position in an atom but what do all of them mean? Any help would be greatly appreciated!

Quantum numbers give information about the location of an electron or set of electrons. A full set of quantum numbers describes a unique electron for a particular atom. Think about it as the mailing address to your house. It allows one to pinpoint your exact location out of a set of $n$ locations you could possibly be in. We can narrow the scope of this analogy even further. Consider your daily routine. You may begin your day at your home address but if you have an office job, you can be found at a different address during the work week. Therefore we could say that you can be found in either of these locations depending on the time of day. The same goes for electrons. Electrons reside in atomic orbitals (which are very well defined 'locations'). When an atom is in the ground state, these electrons will reside in the lowest energy orbitals possible (e.g. 1$s^2$ 2$s^2$ and 2$p^2$ for carbon). We can write out the physical 'address' of these electrons in a ground-state configuration using quantum numbers as well as the location(s) of these electrons when in some non-ground (i.e. excited) state. You could describe your home location any number of ways (GPS coordinates, qualitatively describing your surroundings, etc.) but we've adopted a particular formalism in how we describe it (at least in the case of mailing addresses). The quantum numbers have been laid out in the same way. We could communicate with each other that an electron is "located in the lowest energy, spherical atomic orbital" but it is much easier to say a spin-up electron in the 1$s$ orbital instead. The four quantum numbers allow us to communicate this information numerically without any need for a wordy description. Of course carbon is not always going to be in the ground state. Given a wavelength of light for example, one can excite carbon in any number of ways. Where will the electron(s) go? Regardless of what wavelength of light we use, we know that we can describe the final location(s) using the four quantum numbers. You can do this by writing out all the possible permutations of the four quantum numbers. Of course, with a little more effort, you could predict the exact location where the electron goes but in my example above, you know for a fact you could describe it using the quantum number formalism. The quantum numbers also come with a set of restrictions which inherently gives you useful information about where electrons will NOT be. For instance, you could never have the following set of quantum numbers for an atom:

$n$=1; $l$=0; $m_l$=0; $m_s$=1/2
$n$=1; $l$=0; $m_l$=0; $m_s$=-1/2
$n$=1; $l$=0; $m_l$=0; $m_s$=1/2

This set of quantum numbers would indicate that three electrons reside in the 1$s$ orbital, which is impossible: the third entry simply repeats the first. As Jan stated in his post, these quantum numbers are derived from the solutions to the Schrödinger equation for the hydrogen atom (or a 1-e$^-$ system). There are any number of solutions to this equation that relate to the possible energy levels of the hydrogen atom. Remember, energy is QUANTIZED (as postulated by Max Planck). That means that an energy level may exist (arbitrarily) at 0 and 1 but NEVER in between. There is a discrete 'jump' in energy levels and not some gradient between them.
From these solutions a formalism was constructed to communicate the solutions in a very easy, numerical way, just as mailing addresses are purposefully formatted in such a way that anyone can understand them with minimal effort. In summary, the quantum numbers not only tell you where electrons will be (ground state) and can be (excited state), but also tell you where electrons cannot be in an atom (due to the restrictions for each quantum number).

Principal quantum number ($n$) - indicates the orbital size. Electrons in atoms reside in atomic orbitals. These are referred to as $s,p,d,f...$ type orbitals. A $1s$ orbital is smaller than a $2s$ orbital. A $2p$ orbital is smaller than a $3p$ orbital. This is because orbitals with a larger $n$ value are getting larger due to the fact that they are further away from the nucleus. The principal quantum number is an integer value where $n$ = 1,2,3... .

Angular quantum number ($l$) - indicates the shape of the orbital. Each type of orbital ($s,p,d,f..$) has a characteristic shape associated with it. $s$-type orbitals are spherical while $p$-type orbitals have 'dumbbell' orientations. The orbitals described by $l$=0,1,2,3... are $s,p,d,f...$ orbitals, respectively. The angular quantum number ranges from 0 to $n$-1. Therefore, if $n$ = 3, then the possible values of $l$ are 0, 1, 2.

Magnetic quantum number ($m_l$) - indicates the orientation of a particular orbital in space. Consider the $p$ orbitals. This is a set of orbitals consisting of three $p$-orbitals that have a unique orientation in space. In Cartesian space, each orbital would lie along an axis (x, y, or z) and would be centered on the origin. While each orbital is indeed a $p$-orbital, we can describe each orbital uniquely by assigning this third quantum number to indicate its position in space. Therefore, for a set of $p$-orbitals, there would be three $m_l$, each uniquely describing one of these orbitals. The magnetic quantum number can have values of $-l$ to $l$. Therefore, in our example above, for $l$ = 2 the possible values of $m_l$ would be -2, -1, 0, 1, 2.

Spin quantum number ($m_s$) - indicates the 'spin' of the electron residing in some atomic orbital. Thus far we have introduced three quantum numbers that localize a position to an orbital of a particular size, shape and orientation. We now introduce the fourth quantum number that describes the type of electron that can be in that orbital. Recall that two electrons can reside inside one atomic orbital. We can define each one uniquely by indicating the electron's spin. According to the Pauli exclusion principle, no two electrons can have the exact same four quantum numbers. This means that two electrons in one atomic orbital cannot have the same 'spin'. We generally denote 'spin-up' as $m_s$ =1/2 and spin-down as $m_s$=-1/2.

• This is quite helpful, but do you think a little more on the significance of these numbers might be even more helpful? Such as, the energy levels for the principal quantum number or the bonding implications of the angular quantum number? (I don't know enough about the implications of the last two to generalize that much). Also, I feel somewhat like the OP. I can calculate these numbers and I understand that they give us a way to annotate 3D info for an electron, but what does that enable us to do as a result? Why are quantum numbers important for chemistry?
– Cohen_the_Librarian May 19 '15 at 16:19

• @Cohen_the_Librarian I've extensively edited my post to try and address your questions/suggestions. – LordStryker May 19 '15 at 17:58

• Consider l = 1 (i.e. p orbitals), do the px, py and pz orbitals correspond to ml = -1, 0 and 1 respectively? Is there any correspondence that can be done for the d and f orbitals as well? I understand that it is a matter of perspective but is there a particular convention to assign each value of ml to a particular orbital, be it px or dx-y or what not. – Tan Yong Boon Feb 17 '18 at 3:28

The Schrödinger equation for most systems has many solutions $\hat{H}\Psi_i=E_i\Psi_i$, where $i=1,2,3,..$. In the case of the hydrogen atom the solutions have a specific notation, which is where the quantum numbers come from. In the case of the H atom the principal quantum number $n$ refers to solutions with different energy. For $n>1$ there are several solutions with the same energy, which come in different shapes ($s$, $p$, etc. with different angular quantum numbers $l$) that can point in different directions ($p_x$, $p_y$, etc. with different magnetic quantum numbers $m$). These quantum numbers are also applied to multi-electron atoms within the AO approximation. So the quantum numbers are a way to count (label) the solutions to the Schrödinger equation.
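A small illustration of the counting rules described in the answers above: the sketch below (plain Python; the function name is mine, not from the thread) enumerates every allowed $(n, l, m_l, m_s)$ "address" for a given shell and reproduces the familiar $2n^2$ electrons per shell.

```python
# Enumerate all allowed quantum-number combinations for a given principal number n.
from fractions import Fraction

def allowed_quantum_numbers(n):
    states = []
    for l in range(n):                      # l = 0, 1, ..., n-1
        for m_l in range(-l, l + 1):        # m_l = -l, ..., +l
            for m_s in (Fraction(1, 2), Fraction(-1, 2)):
                states.append((n, l, m_l, m_s))
    return states

for n in (1, 2, 3):
    print(n, len(allowed_quantum_numbers(n)))   # -> 2, 8, 18  (i.e. 2n^2)

# any attempt to place a third electron in the 1s orbital must repeat one of these
# two addresses, which is exactly what the Pauli exclusion principle forbids
print(allowed_quantum_numbers(1))
```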
Is it possible to formulate the Schrödinger equation (SE) in terms of a differential equation involving only the probability density instead of the wave function? If not, why not? We can take the time independent SE as an example: $$-\frac{\hbar ^{2}}{2m}\nabla ^{2}\psi (\mathbf {r} )+V(\mathbf {r} )\psi (\mathbf {r} )=E\psi (\mathbf {r} )$$ Any solution will yield a probability density $p(\mathbf {r}) = \psi^*(\mathbf {r})\psi(\mathbf {r})$ and the question if an equation can be found of which $p$ is the solution if $\psi$ is a solution of the SE. I assume not since it would have been widely known but I have not seen the arguments why this would be impossible. I understand the wave function contains more information than the probability density (e.g. the phase of $\psi$ which is relevant in QM drops out of $p$) but I do not see that as sufficient reason against the existence of such an equation. • $\begingroup$ I think if you want to write down how the probability density changes in time, you basically get the continuity equation. As commented in the answer below, $\rho$ does not give all information of the quantum state. $\endgroup$ – Zheng Liu Dec 7 '17 at 7:42 • $\begingroup$ @Zheng Liu I'm not so worried not having all information in $\psi$ if you do not need it to find solutions for $\rho$. But even so, following AFT's response you can express the complex phase of $\psi$ in $\rho$ though it is a functional form and cumbersome. So all information in the quantum state can still be found if you want it. $\endgroup$ – Jan Bos Dec 9 '17 at 2:19 • $\begingroup$ Right. It can be recovered if you know the complex phase. $\endgroup$ – Zheng Liu Dec 14 '17 at 3:42 No, you can't. The function $\psi\in\mathbb C$ has two real degrees of freedom; they are coupled and dynamical (non-gauge). On the other hand, the function $\rho\in\mathbb R$ has one real degree of freedom. It is impossible to reduce the dynamics of the system from two variables to one variable without losing information in the process. (But, in a formal sense: Yes, you can) Let $\psi=\sqrt{\rho}\mathrm e^{iS}$, with $\rho,S$ a pair of real variables. You may write the Schrödinger equation directly in terms of $\rho,S$ as (cf. Madelung or Bohm) \begin{equation} \begin{aligned} \frac{\partial\sqrt{\rho}}{\partial t}&=-\frac{1}{2m}\left(\sqrt{\rho}\nabla^2S+2\nabla\sqrt{\rho}\cdot\nabla S\right)\\ \frac{\partial S}{\partial t}&=-\left(\frac{|\nabla S|^2}{2m}+V-\frac{\hbar^2}{2m}\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}}\right) \end{aligned} \end{equation} As you can see, you cannot write an equation for $\rho$ alone, because its equation is coupled to a second unknown, $S$. Two real degrees of freedom, not one. Formally speaking, you may solve the equation for $S$ as a functional of $\rho$, and plug the result into the equation for $\rho$, thus obtaining an equation for $\rho$ alone. This is impractical because it is not really possible to solve for $S=S[\rho]$ in general terms, and even if we could, the functional would be highly non-local so the resulting equation for $\rho$ would be impossible to work with. The Schrödinger equation, written in terms of $\psi$, even if complicated, is as simple as it gets. Any other reformulation is way more cumbersome to use. • $\begingroup$ Shouldn't the answer then be along the lines of "Yes you can but the equation involves a complicated functional of $\rho$ and is not practical to use". 
In fact, the 2nd part of your answer seems to contradict the first part since you showed that the degrees of freedom are coupled albeit in a complicated way. It is interesting that the relative simple probability density of say a 1s electron in hydrogen is a solution of this very tedious equation. $\endgroup$ – Jan Bos Dec 7 '17 at 12:55 • $\begingroup$ Appreciate the link to the Quantum Hamilton-Jacobi Equation (the 2nd differential equation in the couples set). There seems to be some work done on that but it seems not straightforward. $\endgroup$ – Jan Bos Dec 7 '17 at 14:22 • $\begingroup$ any books on this? It's more mathematically cumbersome but I like this so much better conceptually. $\endgroup$ – Mike Flynn Feb 8 '18 at 5:26 • $\begingroup$ @MikeFlynn Bohm wrote several books himself, so you should definitely check them out. I've heard they are quite good (even by those that dislike Bohm's interpretation). $\endgroup$ – AccidentalFourierTransform Feb 8 '18 at 14:51 We have $\psi^\ast\nabla^2\psi=\dfrac{2m}{\hbar^2}(V-E)\rho$ so by complex conjugation $\psi\nabla^2\psi^\ast=\dfrac{2m}{\hbar^2}(V-E)\rho$. Hence $$\nabla^2 \rho=\psi\nabla^2\psi^\ast+\psi^\ast\nabla^2\psi+2\boldsymbol{\nabla}\psi^\ast\cdot\boldsymbol{\nabla}\psi=\dfrac{4m}{\hbar^2}(V-E)\rho+2\boldsymbol{\nabla}\psi^\ast\cdot\boldsymbol{\nabla}\psi.$$It's that last term that gets in the way. There's more quantum-mechanical information in $\psi$ than in $\rho$, so we can't in general rewrite everything in terms of $\rho$ alone. • $\begingroup$ You can write an equation for $\rho$ and $J$ (the probability current) though. $\endgroup$ – Mauricio Dec 7 '17 at 9:59 • $\begingroup$ Mauricio Yes, but $\mathbf{j}$ is $\psi$-dependent $\endgroup$ – J.G. Dec 7 '17 at 10:01 • $\begingroup$ Yeah, but if I remember correctly you can solve certain scattering introductory problems using continuity equation only and without using $\psi$. $\endgroup$ – Mauricio Dec 7 '17 at 10:06 • $\begingroup$ @Mauricio If you come up with an example, you should probably mention it in an answer here, even if it's only "a long comment". $\endgroup$ – J.G. Dec 7 '17 at 12:44 • 1 $\begingroup$ Your answer does not prove that such an equation does not exist. It just gives an example of one that does not work. Per my comment under my question in response to Zheng Liu I also am not convinced on your statement that there is more information in $\psi$ than in $\rho$. You probably could say at most that the state at point $x = x_1$ contains more information than $\rho$ at the single point $x = x_1$. $\endgroup$ – Jan Bos Dec 9 '17 at 2:27 The probability density isn't a great point of comparison, because it has absolutely no information about the momentum properties of the state. This goes a bit further in that the correct classical point of comparison for any quantum-mechanical formalism isn't really a single-trajectory Newtonian perspective; instead, it is the Liouville mechanics of the phase-space density $\rho(x,p)$ of a particle which obeys classical hamiltonian mechanics but whose state is only known down to a probability distribution on phase space, and whose density then obeys the Liouville equation $$ \frac{\partial\rho}{\partial t}=-\{\,\rho,H\,\}. 
$$ Once you do that, then there is a quantum analogue of the Liouville equation, given in this answer by Qmechanic, where you need to change the standard function multiplications for a $\hbar$-dependent Moyal product; the dynamical equation then reads $$ \frac{d\rho}{dt} = \frac{1}{i\hbar} [\rho\stackrel{\star}{,}H]. $$ I've never seen this used in anger, but that might just be because I've never looked at the places that do use it. • $\begingroup$ Thanks for your reference to the quantum analogue of the Liouville equation. It looks like something I was looking for. It must be related to the equation in the response of AccidentalFourierTransform. About your 1st paragraph, I'm not so worried about it. On a higher level I was wondering if QM can be done without talking about states. Let's say you take all the probability densities $\rho_{nlm}$ of the hydrogen atom; there must be some way to connect $m$ to the angular momentum without reference to states, but that could be another question. $\endgroup$ – Jan Bos Dec 8 '17 at 1:00
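To see concretely why the density alone cannot carry the dynamics, the point made in the answers above, here is a small self-contained numerical sketch (an illustration added here, not taken from the thread). Two free-particle wave packets with identical $|\psi|^2$ at $t=0$ but different phases (one carrying momentum, one not) are propagated with the exact free-particle spectral propagator, and their densities separate almost immediately.

```python
import numpy as np

# Grid and units (hbar = m = 1 for simplicity)
N, L = 1024, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
dt, steps = 0.01, 500   # total time t = 5

def evolve_free(psi):
    """Free-particle evolution: apply exp(-i k^2 dt / 2) in Fourier space."""
    for _ in range(steps):
        psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))
    return psi

# Same |psi|^2 at t = 0, different phase: psi2 carries momentum k0 = 2
envelope = np.exp(-x**2 / 4.0)
psi1 = envelope.astype(complex)
psi2 = envelope * np.exp(1j * 2.0 * x)   # extra phase, invisible in rho at t = 0

rho1 = np.abs(evolve_free(psi1))**2
rho2 = np.abs(evolve_free(psi2))**2

print("max |rho1 - rho2| at t=0 :", np.max(np.abs(np.abs(psi1)**2 - np.abs(psi2)**2)))
print("max |rho1 - rho2| at t=5 :", np.max(np.abs(rho1 - rho2)))
```

The difference is zero at $t=0$ and of order one at $t=5$: the phase $S$, which $\rho$ knows nothing about, is what determines where the density goes next.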
JAM 2012 Exam Syllabus - PH (Physics)

Syllabus: Joint Admission Test For M.Sc. 2012 Physics (PH)

Mechanics and General Properties of Matter: Newton's laws of motion and applications, Velocity and acceleration in Cartesian, polar and cylindrical coordinate systems, uniformly rotating frame, centrifugal and Coriolis forces, Motion under a central force, Kepler's laws, Gravitational Law and field, Conservative and non-conservative forces. System of particles, Centre of mass, equation of motion of the CM, conservation of linear and angular momentum, conservation of energy, variable mass systems. Elastic and inelastic collisions. Rigid body motion, fixed axis rotations, rotation and translation, moments of inertia and products of inertia. Principal moments and axes. Kinematics of moving fluids, equation of continuity, Euler's equation, Bernoulli's theorem.

Oscillations, Waves and Optics: Differential equation for simple harmonic oscillator and its general solution. Superposition of two or more simple harmonic oscillators. Lissajous figures. Damped and forced oscillators, resonance. Wave equation, traveling and standing waves in one dimension. Energy density and energy transmission in waves. Group velocity and phase velocity. Sound waves in media. Doppler effect. Fermat's principle. General theory of image formation. Thick lens, thin lens and lens combinations. Interference of light, optical path retardation. Fraunhofer diffraction. Rayleigh criterion and resolving power. Diffraction gratings. Polarization: linear, circular and elliptic polarization. Double refraction and optical rotation.

Modern Physics: Inertial frames and Galilean invariance. Postulates of special relativity. Lorentz transformations. Length contraction, time dilation. Relativistic velocity addition theorem, mass-energy equivalence. Blackbody radiation, photoelectric effect, Compton effect, Bohr's atomic model, X-rays. Wave-particle duality, Uncertainty principle, Schrödinger equation and its solution for one-, two- and three-dimensional boxes. Reflection and transmission at a step potential, Pauli exclusion principle. Structure of atomic nucleus, mass and binding energy. Radioactivity and its applications. Laws of radioactive decay.

Solid State Physics, Devices and Electronics: Crystal structure, Bravais lattices and basis. Miller indices. X-ray diffraction and Bragg's law. Intrinsic and extrinsic semiconductors. Fermi level. p-n junctions, transistors. Transistor circuits in CB, CE, CC modes. Amplifier circuits with transistors. Operational amplifiers. OR, AND, NOR and NAND gates.

Courtesy: iitb.ac.in
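The syllabus item on the Schrödinger equation for one-, two- and three-dimensional boxes boils down to the standard formula $E = \frac{h^2}{8mL^2}(n_x^2 + n_y^2 + n_z^2)$. The short Python sketch below (an illustration added here, not part of the syllabus; the 1 nm box and electron mass are arbitrary choices) lists the lowest levels of a cubic box and their degeneracies.

```python
from itertools import product
from collections import Counter

h = 6.626e-34      # Planck constant, J s
m = 9.109e-31      # electron mass, kg
L = 1e-9           # box side, 1 nm (arbitrary illustrative value)

def energy(nx, ny, nz):
    """Particle-in-a-3D-box energy E = h^2 (nx^2+ny^2+nz^2) / (8 m L^2), in joules."""
    return h**2 * (nx**2 + ny**2 + nz**2) / (8 * m * L**2)

# Collect levels for quantum numbers 1..4 and count degeneracies (in eV)
levels = Counter(round(energy(*n) / 1.602e-19, 3) for n in product(range(1, 5), repeat=3))
for E_eV, g in sorted(levels.items())[:6]:
    print(f"E = {E_eV:6.3f} eV, degeneracy = {g}")
```

The first excited level, for example, comes out three-fold degenerate because (2,1,1), (1,2,1) and (1,1,2) share the same energy.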
Wavelet and Multiscale Library

This Wavelet and Multiscale Library is, in essence, a map whose aim is to provide a platform for scholars, tutors, and researchers. The focus of the library is basis-oriented algorithms that use ansatz systems with multiscale structure, together with their mathematical background at research level. It aims to help students get in contact with the topic, so that they may write their thesis in this subject. By providing demo software and examples, it aims to support teachers in their lectures. Last but not least, it aims to enable researchers to communicate their findings and to present them to a broad audience.

References in zbMATH (referenced in 15 articles)

1. Dahlke, Stephan; Döhring, Nicolas; Kinzel, Stefan: On the construction of stochastic fields with prescribed regularity by wavelet expansions (2018)
2. Plaskota, Leszek: On linear versus nonlinear approximation in the average case setting (2018)
3. Prömel, David J.; Trabs, Mathias: Rough differential equations driven by signals in Besov spaces (2016)
4. Belomestny, Denis; Schoenmakers, John; Dickmann, Fabian: Multilevel dual approach for pricing American style derivatives (2013)
5. Dahlke, Stephan; Oswald, Peter; Raasch, Thorsten: A note on quarkonial systems and multilevel partition of unity methods (2013)
6. Dahlke, Stephan; Sickel, Winfried: On Besov regularity of solutions to nonlinear elliptic partial differential equations (2013)
7. Dereich, Steffen; Scheutzow, Michael; Schottstedt, Reik: Constructive quantization: approximation by empirical measures (2013)
8. Hinrichs, Aicke; Novak, Erich; Woźniakowski, Henryk: Discontinuous information in the worst case and randomized settings (2013)
9. Rohwedder, Thorsten: The continuous coupled cluster formulation for the electronic Schrödinger equation (2013)
10. Cioica, Petru A.; Dahlke, Stephan: Spatial Besov regularity for semilinear stochastic partial differential equations on bounded Lipschitz domains (2012)
11. Cioica, Petru A.; Dahlke, Stephan; Döhring, Nicolas; Kinzel, Stefan; Lindner, Felix; Raasch, Thorsten; Ritter, Klaus; Schilling, René L.: Adaptive wavelet methods for the stochastic Poisson equation (2012)
12. Cohen, Albert; Dahmen, Wolfgang; Welper, Gerrit: Adaptivity and variational stabilization for convection-diffusion equations (2012)
13. Görner, Torsten; Hielscher, Ralf; Kunis, Stefan: Efficient and accurate computation of spherical mean values at scattered center points (2012)
14. Heinen, Dennis; Plonka, Gerlind: Wavelet shrinkage on paths for denoising of scattered data (2012)
15. Jahnke, Tobias; Kreim, Michael: Error bound for piecewise deterministic processes modeling stochastic reaction systems (2012)
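As a taste of the kind of basis-oriented multiscale algorithms the library is concerned with, here is a minimal single-level Haar wavelet decomposition and reconstruction in Python (a generic textbook illustration written for this page, not code taken from the library itself).

```python
import numpy as np

def haar_decompose(signal):
    """One level of the orthonormal Haar transform: averages and details."""
    s = np.asarray(signal, dtype=float)
    assert len(s) % 2 == 0, "signal length must be even"
    averages = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    details  = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return averages, details

def haar_reconstruct(averages, details):
    """Invert one level of the Haar transform."""
    s = np.empty(2 * len(averages))
    s[0::2] = (averages + details) / np.sqrt(2.0)
    s[1::2] = (averages - details) / np.sqrt(2.0)
    return s

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_decompose(signal)
print("coarse part  :", a)   # smooth 'average' coefficients at the coarser scale
print("detail part  :", d)   # local differences at the finest scale
print("reconstructed:", haar_reconstruct(a, d))  # matches the original signal
```

Iterating the decomposition on the coarse part yields the multiscale hierarchy that adaptive wavelet methods, such as those cited above, exploit.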
Thursday, February 20, 2014 Experimental evidence for sterile neutrino? Many physicists are somewhat disappointed to the results from LHC: the expected discovery of Higgs has been seen as the main achievement of LHC hitherto. Much more was expected. To my opinion there is no reason for disappointment. The exclusion of the standard SUSY at expected energy scale is very far reaching negative result. Also the fact that Higgs mass is too small to be stable without fine tuning is of great theoretical importance. The negative results concerning heavy dark matter candidates are precious guidelines for theoreticians. The non-QCD like behavior in heavy ion collisions and proton-ion collisions is bypassed my mentioning something about AdS/CFT correspondence and non-perturbative QCD effects. I tend to see these effects as direct evidence for M89 hadron physics (see this). In any case, something interesting has emerged quite recently. Resonaances tells that the recent analysis of X-ray spectrum of galactic clusters claims the presence of monochromatic 3.5 keV photon line. The proposed interpretation is as a decay product of sterile 7 keV neutrino transforming first to a left-handed neutrino and then decaying to photon and neutrino via a loop involving W boson and electron. This is of course only one of the many interpretations. Even the existence of line is highly questionable. One of the poorly understood aspects of TGD is right-handed neutrino, which is obviously the TGD counterpart of the inert neutrino. 1. The old idea is that covariantly constant right handed neutrino could generate N=2 super-symmetry in TGD Universe. In fact, all modes of induced spinor field would generate super-conformal symmetries but electroweak interactions would break these symmetries for the modes carrying non-vanishing electroweak quantum numbers: they vanish for νR. This picture is now well-established at the level of WCW geometry (see this): super-conformal generators are labelled angular momentum and color representations plus two conformal weights: the conformal weight assignable to the light-like radial coordinate of light-cone boundary and the conformal weight assignable to string coordinate. It seems that these conformal weights are independent. The third integer labelling the states would label genuinely Yangian generators: it would tell the poly-locality of the generator with locus defined by partonic 2-surface: generators acting on single partonic 2-surface, 2 partonic 2-surfaces, ... 2. It would seem that even the SUSY generated by νR must be badly broken unless one is able to invent dramatically different interpretation of SUSY. The scale of SUSY breaking and thus the value of the mass of right-handed neutrino remains open also in TGD. In lack of better one could of course argue that the mass scale must be CP2 mass scale because right-handed neutrino mixes considerably with the left-handed neutrino (and thus becomes massive) only in this scale. But why this argument does not apply also to left handed neutrino which must also mix with the right-handed one! 3. One can of course criticize the proposed notion of SUSY: wonder whether fermion + extremely weakly interacting νR at same wormhole throat (or interior of 3-surface) can behave as single coherent entity as far spin is considered (see this)? 4. 
The condition that the modes of induced spinor field have a well-defined electromagnetic charge eigenvalue (see this) requires that they are localized at 2-D string world sheets or partonic 2-surfaces: without this condition classical W boson fields would mix the em charged and neutral modes with each other. Right-handed neutrino is an exception since it has no electroweak couplings. Unless right-handed neutrino is covariantly constant, the modified gamma matrices can however mix the right-handed neutrino with the left handed one and this can induce transformation to charged mode. This does not happen if each modified gamma matrix can be written as a linear combination of either M4 or CP2 gamma matrices and modified Dirac equation is satisfied separately by M4 and CP2 parts of the modified Dirac equation. 5. Is the localization of the modes other than covariantly constant neutrino to string world sheets a consequence of dynamics or should one assume this as a separate condition? If one wants similar localization in space-time regions of Euclidian signature - for which CP2 type vacuum extremal is a good representative - one must assume it as a separate condition. In number theoretic formulation string world sheets/partonic 2-surfaces would be commutative/co-commutative sub-manifolds of space-time surfaces which in turn would be associative or co-associative sub-manifolds of imbedding space possessing (hyper-)octonionic tangent space structure. For this option also right-handed neutrino would be localized to string world sheets. Right-handed neutrino would be covariantly constant only in 2-D sense. One can consider the possibility that νR is de-localized to the entire 4-D space-time sheet. This would certainly modify the interpretation of SUSY since the number of degrees of freedom would be reduced for νR. 6. Non-covariantly constant right-handed neutrinos could mix with left-handed neutrinos but not with charged leptons if the localization to string world sheets is assumed for modes carrying non-vanishing electroweak quantum numbers. This would make possible the decay of right-handed to neutrino plus photon, and one cannot exclude the possibility that νR has mass 7 keV. Could this imply that particles and their spartners differ by this mass only? Could it be possible that practically unbroken SUSY could be there and we would not have observed it? Could one imagine that sfermions have annihilated leaving only states consisting of fundamental fermions? But shouldn't the total rate for the annihilation of photons to hadrons be two times the observed one? This option does not sound plausible. What if one assumes that given sparticle is charactrized by the same p-adic prime as corresponding particle but is dark in the sense that it corresponds to non-standard value of Planck constant. In this case sfermions would not appear in the same vertex with fermions and one could escape the most obvious contradictions with experimental facts. This leads to the notion of shadron: shadrons would be (see this) obtained by replacing quarks with dark squarks with nearly identical masses. I have asked whether so called X and Y bosons having no natural place in standard model of hadron could be this kind of creatures. The interpretation of 3.5 keV photons as decay products of right-handed neutrinos is of course totally ad hoc. Another TGD inspired interpretation would be as photons resulting from the decays of excited nuclei to their ground state. 1. 
Nuclear string model (see this) predicts that nuclei are string like objects formed from nucleons connected by color magnetic flux tubes having quark and antiquark at their ends. These flux tubes are long and define the "magnetic body" of the nucleus. Quark and antiquark have opposite em charges for ordinary nuclei. When they have different charges one obtains an exotic state: this predicts an entire spectrum of exotic nuclei for which the statistics is different from what the proton and neutron numbers deduced from em charge and atomic weight would suggest. Exotic nuclei and large values of Planck constant could also make possible cold fusion (see this).
2. What the mass difference between these states is, is of course not obvious. There is however an experimental finding (see Analysis of Gamma Radiation from a Radon Source: Indications of a Solar Influence) that nuclear decay rates oscillate with a period of one year and that the rates correlate with the distance from the Sun. A possible explanation is that the gamma rays from the Sun in the few keV range excite the exotic nuclear states with different decay rate so that the average decay rate oscillates. Note that nuclear excitation energies in the keV range would also make possible interaction of nuclei with atoms and molecules (see this).
3. This allows to consider the possibility that the decays of exotic nuclei in galactic clusters generate the 3.5 keV photons. The obvious question is why the spectrum would be concentrated at 3.5 keV in this case (a second question is whether the energy is really concentrated at 3.5 keV: a lot of theory is involved in the analysis of the experiments). Do the energies of excited states depend on the color bond only, so that they would be essentially the same for all nuclei? Or does a single excitation dominate the spectrum? Or is this due to the fact that the thermal radiation leaking from the core of stars excites predominantly a single state? Could E = 3.5 keV correspond to the maximum intensity for thermal radiation in the stellar core? If so, the temperature of the exciting radiation would be about T ≈ E/3 ≈ 1.2 × 10^7 K. This is the temperature around which the formation of Helium by nuclear fusion has begun: the temperature at the solar core is around 1.57 × 10^7 K.

For background see the chapter SUSY in TGD Universe of "p-Adic Length Scale Hypothesis".

Sunday, February 09, 2014

Class field theory and TGD: does TGD reduce to number theory?

The intriguing general result of class field theory - something extremely abstract for a physicist's brain - is that the maximal Abelian extension for rationals is homomorphic with the multiplicative group of ideles. This correspondence plays a key role in the Langlands correspondence (see this, this, this, and this). Does this mean that it is not absolutely necessary to introduce p-adic numbers? This is actually not so. The Galois group of the maximal Abelian extension is a rather complex object (the absolute Galois group, AGG, defined as the Galois group of algebraic numbers, is even more complex!). The ring Z of adeles, defining the group of ideles as its invertible elements homeomorphic to the Galois group of the maximal Abelian extension, is a profinite group. This means that it is a totally disconnected space, as also p-adic integers and numbers are. What is intriguing is that p-adic integers are however a continuous structure in the sense that differential calculus is possible. A concrete example is provided by 2-adic units consisting of bit sequences which can have literally an infinite number of non-vanishing bits.
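To make the p-adic picture a little more tangible, here is a small Python sketch (an illustration added here, using the usual textbook definitions rather than anything from the post) computing the p-adic valuation and the p-adic norm $|x|_p = p^{-v_p(x)}$ of a few rationals: the higher the power of p dividing a number, the smaller its p-adic norm.

```python
from fractions import Fraction

def p_adic_valuation(x, p):
    """v_p(x) for a nonzero rational x: the net power of p in its factorization."""
    x = Fraction(x)
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def p_adic_norm(x, p):
    """|x|_p = p**(-v_p(x)); by convention |0|_p = 0."""
    if Fraction(x) == 0:
        return 0.0
    return float(p) ** (-p_adic_valuation(x, p))

for x in [1, 2, 4, 1024, Fraction(1, 8), 3]:
    print(f"|{x}|_2 = {p_adic_norm(x, 2)}")   # high powers of 2 are 2-adically small
```

In this metric a 2-adic integer with arbitrarily many high-order bits is still perfectly well behaved: the higher bits simply contribute less and less.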
This space is formally discrete but one can construct differential calculus since the situation is not democratic. The higher the pinary digit in the expansion is, the less significant it is, and p-adic norm approaching to zero expresses the reduction of the insignificance. 1. Could TGD based physics reduce to a representation theory for the Galois groups of quaternions and octonions? Number theoretical vision about TGD raises questions about whether adeles and ideles could be helpful in the formulation of TGD. I have already earlier considered the idea that quantum TGD could reduce to a representation theory of appropriate Galois groups. I proceed to make questions. 1. Could real physics and various p-adic physics on one hand, and number theoretic physics based on maximal Abelian extension of rational octonions and quaternions on one hand, define equivalent formulations of physics? 2. Besides various p-adic physics all classical number fields (reals, complex numbers, quaternions, and octonions) are central in the number theoretical vision about TGD. The technical problem is that p-adic quaternions and octonions exist only as a ring unless one poses some additional conditions. Is it possible to pose such conditions so that one could define what might be called quaternionic and octonionic adeles and ideles? It will be found that this is the case: p-adic quaternions/octonions would be products of rational quaternions/octonions with a p-adic unit. This definition applies also to algebraic extensions of rationals and makes it possible to define the notion of derivative for corresponding adeles. Furthermore, the rational quaternions define non-commutative automorphisms of quaternions and rational octonions at least formally define a non-associative analog of group of octonionic automorphisms (see this). 3. I have already earlier considered the idea about Galois group as the ultimate symmetry group of physics. The representations of Galois group of maximal Abelian extension (or even that for algebraic numbers) would define the quantum states. The representation space could be group algebra of the Galois group and in Abelian case equivalently the group algebra of ideles or adeles. One would have wave functions in the space of ideles. The Galois group of maximal Abelian extension would be the Cartan subgroup of the absolute Galois group of algebraic numbers associated with given extension of rationals and it would be natural to classify the quantum states by the corresponding quantum numbers (number theoretic observables). If octonionic and quaternionic (associative) adeles make sense, the associativity condition would reduce the analogs of wave functions to those at 4-dimensional associative sub-manifolds of octonionic adeles identifable as space-time surfaces so that also space-time physics in various number fields would result as representations of Galois group in the maximal Abelian Galois group of rational octonions/quaternions. TGD would reduce to classical number theory! 4. Absolute Galois group is the Galois group of the maximal algebraic extension and as such a poorly defined concept. One can however consider the hierarchy of all finite-dimensional algebraic extensions (including non-Abelian ones) and maximal Abelian extensions associated with these and obtain in this manner a hierarchy of physics defined as representations of these Galois groups homomorphic with the corresponding idele groups. 5. 
In this approach the symmetries of the theory would have automatically adelic representations and one might hope about connection with Langlands program. 2. Adelic variant of space-time dynamics and spinorial dynamics? As an innocent novice I can continue to pose stupid questions. Now about adelic variant of the space-time dynamics based on the generalization of Kähler action discussed already earlier but without mentioning adeles (see this). 1. Could one think that adeles or ideles could extend reals in the formulation of the theory: note that reals are included as Cartesian factor to adeles. Could one speak about adelic or even idelic space-time surfaces endowed with adelic or idelic coordinates? Could one formulate variational principle in terms of adeles so that exponent of action would be product of actions exponents associated with various factors with Neper number replaced by p for Zp. The minimal interpretation would be that in adelic picture one collects under the same umbrella real physics and various p-adic physics. 2. Number theoretic vision suggests that 4:th/8:th Cartesian powers of adeles have interpretation as adelic variants of quaternions/ octonions. If so, one can ask whether adelic quaternions and octonions could have some number theretical meaning. Note that adelic quaternions and octonions are not number fields without additional assumptions since the moduli squared for a p-adic analog of quaternion and octonion can vanish so that the inverse fails to exist. If one can pose a condition guaranteing the existence of inverse, one could define the multiplicative group of ideles for quaternions. For octonions one would obtain non-associative analog of the multiplicative group. If this kind of structures exist then four-dimensional associative/co-associative sub-manifolds in the space of non-associative ideles define associative/co-associative ideles and one would end up with ideles formed by associative and co-associative space-time surfaces. 3. What about equations for space-time surfaces. Do field equations reduce to separate field equations for each factor? Can one pose as an additional condition the constraint that p-adic surfaces provide in some sense cognitive representations of real space-time surfaces: this idea is formulated more precisely in terms of p-adic manifold concept (see this). Or is this correspondence an outcome of evolution? Physical intuition would suggest that in most p-adic factors space-time surface corresponds to a point, or at least to a vacuum extremal. One can consider also the possibility that same algebraic equation describes the surface in various factors of the adele. Could this hold true in the intersection of real and p-adic worlds for which rationals appear in the polynomials defining the preferred extremals. 4. To define field equations one must have the notion of derivative. Derivative is an operation involving division and can be tricky since adeles are not number field. If one can guarantee that the p-adic variants of octonions and quaternions are number fields, there are good hopes about well-defined derivative. Derivative as limiting value df/dx= lim ( f(x+dx)-f(x))/dx for a function decomposing to Cartesian product of real function f(x) and p-adic valued functions fp(xp) would require that fp(x) is non-constant only for a finite number of primes: this is in accordance with the physical picture that only finite number of p-adic primes are active and define "cognitive representations" of real space-time surface. 
The second condition is that dx is proportional to product dx × ∏ dxp of differentials dx and dxp, which are rational numbers. dx goes to xero as a real number but not p-adically for any of the primes involved. dxp in turn goes to zero p-adically only for Qp. 5. The idea about rationals as points commont to all number fields is central in number theoretical vision. This vision is realized for adeles in the minimal sense that the action of rationals is well-defined in all Cartesian factors of the adeles. Number theoretical vision allows also to talk about common rational points of real and various p-adic space-time surfaces in preferred coordinate choices made possible by symmetries of the imbedding space, and one ends up to the vision about life as something residing in the intersection of real and p-adic number fields. It is not clear whether and how adeles could allow to formulate this idea. 6. For adelic variants of imbedding space spinors Cartesian product of real and p-adc variants of imbedding spaces is mapped to their tensor product. This gives justification for the physical vision that various p-adic physics appear as tensor factors. Does this mean that the generalized induced spinors are infinite tensor products of real and various p-adic spinors and Clifford algebra generated by induced gamma matrices is obtained by tensor product construction? Does the generalization of massless Dirac equation reduce to a sum of d'Alembertians for the factors? Does each of them annihilate the appropriate spinor? If only finite number of Cartesian factors corresponds to a space-time surface which is not vacuum extremal vanishing induced Kähler form, Kähler Dirac equation is non-trivial only in finite number of adelic factors. 3. Objections The basic idea is that apporopriately defined invertible quaternionic/octonionic adeles can be regarded as elements of Galois group assignable to quaternions/octonions. The best manner to proceed is to invent objections against this idea. 1. The first objection is that p-adic quaternions and octonions do not make sense since p-adic variants of quaternions and octonions do not exist in general. The reason is that the p-adic norm squared ∑ xi2 for p-adic variant of quaternion, octonion, or even complex number can vanish so that its inverse does not exist. 2. Second objection is that automorphisms of the ring of quaternions (octonions) in the maximal Abelian extension are products of transformations of the subgroup of SO(3) (G2) represented by matrices with elements in the extension and in the Galois group of the extension itself. Ideles separate out as 1-dimensional Cartesian factor from this group so that one does not obtain 4-field (8-fold) Cartesian power of this Galois group. If the p-adic variants of quaternions/octonions are be rational quaternions/octonions multiplied by p-adic number, these objections can be circumvented. 1. This condition indeed allows to construct the inverse of p-adic quaternion/octonion as a product of inverses for rational quaternion/octonion and p-adic number! The reason is that the solutions to ∑ xi2=0 involve always p-adic numbers with an infinite number of pinary digits - at least one and the identification excludes this possibility. 2. This restriction would give a rather precise content for the idea of rational physics since all p-adic space-time surfaces would have a rational backbone in well-defined sense. 3. One can interpret also the quaternionicity/octonionicity in terms of Galois group. 
The 7-dimensional non-associative counterparts for octonionic automorphisms act as transformations x→ gxg-1. Therefore octonions represent this group like structure and the p-adic octonions would have interpretation as combination of octonionic automorphisms with those of rationals. Adelic variants of of octonions would represent a generalization of these transformations so that they would act in all number fields. Quaternionic 4-surfaces would define associative local sub-groups of this group-like structure. Thus a generalization of symmetry concept reducing for solutions of field equations to the standard one would allow to realize the vision about the reduction of physics to number theory. For background see the chapter About Absolute Galois group of "TGD as Generalized Number Theory". Friday, February 07, 2014 Why TGD? Hamed kindly reminded me about article "Why TGD?" that I wrote recently: why not mention it in blog article. The article is as an attempt to provide a popular summary about TGD, its motivations, and basic implications. This is of course mission impossible as such since TGD is something at the top of centuries of evolution which has led from Newton to standard model. This means that there is a background of highly refined conceptual thinking about Universe so that even the best computer graphics and animations do not help much. One can still try - at least to create some inspiring impressions. The artice approaches the challenge by answering the most frequently asked questions. Why TGD? How TGD could help to solve the problems of recent day theoretical physics? What are the basic principles of TGD? What are the basic guidelines in the construction of TGD? These are examples of this kind of questions which I try to answer in the article using the only language that I can talk. This language is a dialect used by elementary particle physicists, quantum field theorists, and other people applying modern physics. At the level of practice involves technically heavy mathematics but since it relies on very beautiful and simple basic concepts, one can do with a minimum of formulas, and reader can always to to Wikipedia if it seems that more details are needed. I hope that reader could catch the basic idea: technical details are not important, it is principles and concepts which really matter. And I almost forgot: problems! TGD itself and almost every new idea in the development of TGD has been inspired by a problem. Why TGD? The first question is "Why TGD?". The attempt to answer this question requires overall view about the recent state of theoretical physics. Obviously standard physics plagued by some problems. These problems are deeply rooted in basic philosophical - one might even say ideological - assumptions which boil down to -isms like reductionism, materialism, determinism, and locality. Thermodynamics, special relativity, and general relativity involve also postulates, which can be questioned. In thermodynamics second law in its recent form and the assumption about fixed arrow of thermodynamical time can be questions since it is hard to understand biological evolution in this framework. Clearly, the relationship between the geometric time of physics and experienced time is poorly understood. In general relativity the beautiful symmetries of special relativity are in principle lost and by Noether's theorem this means also the loss of classical conservation laws, even the definitions of energy and momentum are in principle lost. 
In quantum physics the basic problem is that the non-determinism of quantum measurement theory is in conflict with the determinism of Schrödinger equation. Standard model is believed to summarize the recent understanding of physics. The attempts to extrapolate physics beyond standard model are based on naive length scale reductionism and have products Grand Unified Theories (GUTs), supersymmetric gauge theories (SUSYs). The attempts to include gravitation under same theoretical umbrella with electroweak and strong interactions has led to super-string models and M-theory. These programs have not been successful, and the recent dead end culminating in the landscape problem of super string theories and M-theory could have its origins in the basic ontological assumptions about the nature of space-time and quantum. How could TGD help? The second question is "Could TGD provide a way out of the dead alley and how?". The claim is that is the case. The new view about space-time as 4-D surface in certain fixed 8-D space-time is the starting point motivated by the energy problem of general relativity and means in certain sense fusion of the basic ideas of special and general relativities. This basic idea has gradually led to several other ideas. Consider only the identification of dark matter as phases of ordinary matter characterized by non-standard value of Planck constant, extension of physics by including physics in p-adic number fields and assumed to describe correlates of cognition and intentionality, and zero energy ontology (ZEO) in which quantum states are identified as counterparts of physical events. These new elements generalize considerably the view about space-time and quantum and give good hopes about possibility to understand living systems and consciousness in the framework of physics. Two basic visions about TGD There are two basic visions about TGD as a mathematical theory. The first vision is a generalization of Einstein's geometrization program from space-time level to the level of "world of classical worlds" identified as space of 4-surfaces. There are good reasons to expect that the mere mathematical existence of this infinite-dimensional geometry fixes it highly uniquely and therefore also physics. This hope inspired also string model enthusiasts before the landscape problem forcing to give up hopes about predictability. Second vision corresponds to a vision about TGD as a generalized number theory having three separate threads. 1. The inspiration for the first thread came from the need to fuse various p-adic physics and real physics to single coherent whole in terms of principle that might be called number theoretical universality. 2. Second thread was based on the observation that classical number fields (reals, complex numbers, quaternions, and octonions) have dimensions which correspond to those appearing in TGD. This led to the vision that basic laws of both classical and quantum physics could reduce to the requirements of associativity and commutativity. 3. Third thread emerged from the observation that the notion of prime (and integer, rational, and algebraic number) can be generalized so that infinite primes are possible. One ends up to a construction principle allowing to construct infinite hierarchy of infinite primes using the primes of the previous level as building bricks at new level. 
Rather surprisingly, this procedure is structurally identical with a repeated second quantization of supersymmetric arithmetic quantum field theory for which elementary bosons and fermions are labelled by primes. Besides free many-particle states also the analogs of bound states are obtained and this means the situation really fascinating since it raises the hope that the really hard part of quantum field theories - understanding of bound states - could have number theoretical solution. It is not yet clear whether both great visions are needed or whether either of them is in principle enough. In any case their combination has provided a lot of insights about what quantum TGD could be. Guidelines in the construction of TGD The construction of new physical theory is slow and painful task but leads gradually to an identification of basic guiding principles helping to make quicker progress. There are many such guiding principles. • "Physics is uniquely determined by the existence of WCW" is is a conjecture but motivates highly interesting questions. For instance: "Why M4× CP2 a unique choice for the imbedding space?", "Why space-time dimension must be 4?", etc... • Number theoretical Universality is a guiding principle in attempts to realize number theoretical vision, in particular the fusion of real physics and various p-adic physics to single structure. • The construction of physical theories is nowadays to a high degree guesses about the symmetries of the theory and deduction of consequences. The very notion of symmetry has been generalized in this process. Super-conformal symmetries play even more powerful role in TGD than in super-string models and gigantic symmetries of WCW in fact guarantee its existence. • Quantum classical correspondence is of special importance in TGD. The reason is that where classical theory is not anymore an approximation but in well-defined sense exact part of quantum theory. There are also more technical guidelines. • Strong form of General Coordinate invariance (GCI) is very strong assumption. Already GCI leads to the assumption that Kähler function is Kähler action for a preferred extremal defining the counterpart of Bohr orbit. Even in a form allowing the failure of strict determinism this assumption is very powerful. Strong form of general coordinate invariance requires that the light-like 3-surfaces representing partonic orbits and space-like 3-surfaces at the ends of causal diamonds are physically equivalent. This implies effective 2-dimensionality: the intersections of these two kinds of 3-surfaces and 4-D tangent space data at them should code for quantum states. • Quantum criticality states that Universe is analogous to a critical system meaning that it has maximal structural richness. One could also say that Universe is at the boundary line between chaos and order. The original motivation was that quantum criticality fixes the basic coupling constant dictating quantum dynamics essentially uniquely. • The notion of finite measurement resolution has also become an important guide-line. Usually this notion is regarded as ugly duckling of theoretical physics which must be tolerated but the mathematics of von Neumann algebras seems to raise its status to that of beautiful swan. • What I have used to call weak form of electric-magnetic duality is a TGD version of electric-magnetic duality discovered by Olive and Montonen. It makes it possible to realize strong form of holography implied actually by strong for of General Coordinate Invariance. 
Weak form of electric magnetic duality in turn encourages the conjecture that TGD reduces to almost topological QFT. This would mean enormous mathematical simplification. • TGD leads to a realization of counterparts of Feynman diagrams at the level of space-time geometry and topology: I talk about generalized Feynman diagrams. The highly non-trivial challenge is to give them precise mathematical content. Twistor revolution has made possible a considerable progress in this respect and led to a vision about twistor Grassmannian description of stringy variants of Feynman diagrams. In TGD context string like objects are not something emerging in Planck length scale but already in scales of elementary particle physics. The irony is that although TGD is not string theory, string like objects and genuine string world sheets emerge naturally from TGD in all length scales. Even TGD view about nuclear physics predicts string like objects. For details see the new article Why TGD?.
Saturday, May 20, 2017 Cosmo : supremely relaxing fishing video The Seychelles are an angler’s paradise – if you can actually get to them. Follow the crew of the Alphonse Fishing Co. as they wade the flats of the Cosmoledo Atoll, hoping for a shot at Giant Trevally.  see the story Cosmoledo island with the GeoGarage platform Friday, May 19, 2017 Terrifying 20m-tall 'rogue waves' are actually real The Wave painting by Ivan Aivazovsky From BBC by Nic Fleming For centuries sailors told stories of enormous waves tens of metres tall. They were dismissed as tall tales, but in fact they are alarmingly common TEN-storey high, near-vertical walls of frothing water. Smashed portholes and flooded cabins on the upper decks. Thirty-metre behemoths that rise up from nowhere to throw ships about like corks, only to slip back beneath the depths moments later. Evocative descriptions of abnormally large "rogue waves" that appear out of the blue have been shared among sailors for centuries. With little or no hard evidence, and the size of the waves often growing with each telling, there is little surprise that scientists long dismissed them as tall tales. Until around half a century ago, this scepticism chimed with the scientific evidence. According to scientists' best understanding of how waves are generated, a 30m wave might be expected once every 30,000 years. Rogue waves could safely be classified alongside mermaids and sea monsters. However, we now know that they are no maritime myths. A wave is a disturbance that moves energy between two points. The most familiar waves occur in water, but there are plenty of other kinds, such as radio waves that travel invisibly through the air. Although a wave rolling across the Atlantic is not the same as a radio wave, they both work according to the same principles, and the same equations can be used to describe them. A rogue wave is one that is at least twice the "significant wave height", which refers to the average of the third highest waves in a given period of time. According to satellite-based measurements, rogue waves do not only exist, they are relatively frequent. The sceptics had got their sums wrong, and what was once folklore is now fact. This led scientists to altogether more difficult questions. Given that they exist, what causes rogue waves? More importantly for people who work at sea, can they be predicted? Until the 1990s, scientists' ideas about how waves form at sea were heavily influenced by the work of British mathematician and oceanographer Michael Selwyn Longuet-Higgins. In work published from the 1950s onwards, he stated that, when two or more waves collide, they can combine to create a larger wave through a process called "constructive interference". According to the principle of "linear superposition", the height of the new wave should simply be the total of the heights of the original waves. A rogue wave can only form if enough waves come together at the same point according to this view. However, during the 1960s evidence emerged that things might not be so simple. The key player was mathematician and physicist Thomas Brooke Benjamin, who studied the dynamics of waves in a long tank of shallow water at the University of Cambridge. With his student Jim Feir, Benjamin noticed that while waves might start out with constant frequencies and wavelengths, they would change unexpectedly shortly after being generated. Those with longer wavelengths were catching those with shorter ones. 
This meant that a lot of the energy ended up being concentrated in large, short-lived waves. At first Benjamin and Feir assumed there was a problem with their equipment. However, the same thing happened when they repeated the experiments in a larger tank at the UK National Physical Laboratory near London. What's more, other scientists got the same results. For many years, most scientists believed that this "Benjamin-Feir instability" only occurred in laboratory-generated waves travelling in the same direction: a rather artificial situation. However, this assumption became increasingly untenable in the face of real-life evidence. At 3am on 12 December 1978, a German cargo ship called The München sent out a mayday message from the mid-Atlantic. Despite extensive rescue efforts, she vanished never to be found, with the loss of 27 lives. A lifeboat was recovered. Despite having been stowed 66ft (20m) above the water line and showing no signs of having been purposefully lowered, the lifeboat seemed to have been hit by an extreme force. However, what really turned the field upside down was a wave that crashed into the Draupner oil platform off the coast of Norway shortly after 3.20pm on New Year's Day 1995. Hurricane winds were blowing and 39ft (12m) waves were hitting the rig, so the workers had been ordered indoors. No-one saw the wave, but it was recorded by a laser-based rangefinder and measured 85ft (26m) from trough to peak. The significant wave height was 35.4ft (10.8m). According to existing assumptions, such a wave was possible only once every 10,000 years. The Draupner giant brought with it a new chapter in the science of giant waves. When scientists from the European Union's MAXWAVE project analysed 30,000 satellite images covering a three-week period during 2003, they found 10 waves around the globe had reached 25 metres or more. "Satellite measurements have shown there are many more rogue waves in the oceans than linear theory predicts," says Amin Chabchoub of Aalto University in Finland. "There must be another mechanism involved." In the last 20 years or so, researchers like Chabchoub have sought to explain why rogue waves are so much more common than they ought to be. Instead of being linear, as Longuet-Higgins had argued, they propose that rogue waves are an example of a non-linear system. A non-linear equation is one in which a change in output is not proportional to the change in input. If waves interact in a non-linear way, it might not be possible to calculate the height of a new wave by adding the originals together. Instead, one wave in a group might grow rapidly at the expense of others. When physicists want to study how microscopic systems like atoms behave over time, they often use a mathematical tool called the Schrödinger equation. It turns out that certain non-linear version of the Schrödinger equation can be used to help explain rogue wave formation. The basic idea is that, when waves become unstable, they can grow quickly by "stealing" energy from each other. Researchers have shown that the non-linear Schrödinger equation can explain how statistical models of ocean waves can suddenly grow to extreme heights, through this focusing of energy. In a 2016 study, Chabchoub applied the same models to more realistic, irregular sea-state data, and found rogue waves could still develop. "We are now able to generate realistic rogue waves in the laboratory environment, in conditions which are similar to those in the oceans," says Chabchoub. 
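The operational definition quoted earlier, a rogue wave being one at least twice the significant wave height (conventionally the mean height of the highest one-third of waves in a record), is easy to state in code. The sketch below is an illustration added here, using synthetic wave heights loosely inspired by the Draupner numbers, not data from any of the studies mentioned.

```python
import numpy as np

def significant_wave_height(heights):
    """Hs: the mean of the highest one-third of the recorded wave heights."""
    h = np.sort(np.asarray(heights, dtype=float))[::-1]
    top_third = h[: max(1, len(h) // 3)]
    return top_third.mean()

def rogue_waves(heights, factor=2.0):
    """Return Hs and the wave heights exceeding factor * Hs (the usual rogue criterion)."""
    hs = significant_wave_height(heights)
    return hs, [h for h in heights if h >= factor * hs]

# Synthetic storm record (metres) plus one Draupner-sized outlier
record = list(np.random.default_rng(1).rayleigh(scale=5.0, size=500)) + [25.6]
hs, rogues = rogue_waves(record)
print(f"significant wave height Hs = {hs:.1f} m, rogue candidates: {[round(h, 1) for h in rogues]}")
```

Spotting such an outlier after the fact is trivial; the hard question, which the linear and non-linear explanations below disagree on, is how it gets there in the first place.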
"Having the design criteria of offshore platforms and ships being based on linear theory is no good if a non-linear system can generate rogue waves they can't cope with." Still, not everyone is convinced that Chabchoub has found the explanation. "Chabchoub was examining isolated waves, without allowing for interference with other waves," says optical physicist Günter Steinmeyer of the Max Born Institute in Berlin. "It's hard to see how such interference can be avoided in real-world oceans." Instead, Steinmeyer and his colleague Simon Birkholz looked at real-world data from different types of rogue waves. They looked at wave heights just before the 1995 rogue at the Draupner oil platform, as well as unusually bright flashes in laser beams shot into fibre optic cables, and laser beams that suddenly intensified as they exited a container of gas. Their aim was to find out whether these rogue waves were at all predictable. The pair divided their data into short segments of time, and looked for correlations between nearby segments. In other words, they tried to predict what might happen in one period of time by looking at what happened in the periods immediately before. They then compared the strengths of these correlations with those they obtained when they randomly shuffled the segments. The results, which they published in 2015, came as a surprise to Steinmeyer and Birkholz. It turned out, contrary to their expectations, that the three systems were not equally predictable. They found oceanic rogue waves were predictable to some degree: the correlations were stronger in the real-life time sequence than in the shuffled ones. There was also predictability in the anomalies observed in the laser beams in gas, but at a different level, and none in the fibre optic cables. However, the predictability they found will be little comfort to ship captains who find themselves nervously eyeing the horizon as the winds pick up. "In principle, it is possible to predict an ocean rogue wave, but our estimate of the reliable forecast time needed is some tens of seconds, perhaps a minute at most," says Steinmeyer. "Given that two waves in a severe North Sea storm could be separated by 10 seconds, to those who say they can build a useful device collecting data from just one point on a ship or oil platform, I'd say it's already been invented. It's called a window." However, others believe we could foresee rogue waves a little further ahead. The complexity of waves at sea is the result of the winds that create them. While ocean waves are chaotic in origin, they often organise themselves into packs or groups that stay together. In 2015 Themis Sapsis and Will Cousins of MIT in Cambridge, Massachusetts, used mathematical models to show how energy can be passed between waves within the same group, potentially leading to the formation of rogue waves. The following year, they used data from ocean buoys and mathematical modelling to generate an algorithm capable of identifying wave groups likely to form rogues. Most other attempts to predict rogue waves have attempted to model all the waves in a body of water and how they interact. This is an extremely complex and slow process, requiring immense computational power. Instead, Sapsis and Cousins found they could accurately predict the focusing of energy that can cause rogues, using only the measurements of the distance from the first to last waves in a group, and the height of the tallest wave in the pack. 
"Instead of looking at individual waves and trying to solve their dynamics, we can use groups of waves and work out which ones will undergo instabilities," says Sapsis. He thinks his approach could allow for much better predictions. If the algorithm was combined with data from LIDAR scanning technology, Sapsis says, it could give ships and oil platforms 2-3 minutes of warning before a rogue wave formed. Others believe the emphasis on waves' ability to catch other waves and steal their energy – which is technically called "modulation instability" – has been a red herring. "These modulation instability mechanisms have only been tested in laboratory wave tanks in which you focus the energy in one direction," says Francesco Fedele of Georgia Tech in Atlanta. "There is no such thing as a uni-directional stormy sea. In real-life, oceans' energy can spread laterally in a broad range of directions." In a 2016 study, Fedele and his colleagues argued that more straightforward linear explanations can account for rogue waves after all. They used historic weather forecast data to simulate the spread of energy and ocean surface heights in the run up to the Draupner, Andrea and Killard rogue waves, which struck respectively in 1995, 2007 and 2014. Their models matched the measurements, but only when they factored in the irregular shapes of ocean waves. Because of the pull of gravity, real waves have rounded troughs and sharp peaks – unlike the perfectly smooth wave shapes used in many models. Once this was factored in, interfering waves could gain an extra 15-20% in height, Fedele found. "When you account for the lack of symmetry between crest and trough, and add it to constructive interference, there is an enhancement of the crest amplitudes that allows you to predict the occurrence observed in the ocean," says Fedele. What's more, previous estimates of the chances of simple linear interference generating rogue waves only looked at single points in time and space, when in fact ships and oil rigs occupy large areas and are in the water for long periods. This point was highlighted in a 2016 report from the US National Transportation Safety Board, written by a group overseen by Fedele, into the sinking of an American cargo ship, the SS El Faro, on 1 October 2015, in which 33 people died. "If you account for the space-time effect properly, then the probability of encountering a rogue wave is larger," Fedele says. Also in 2016, Steinmeyer proposed that linear interference can explain how often rogue waves are likely to form. As an alternative approach to the problem, he developed a way to calculate the complexity of ocean surface dynamics at a given location, which he calls the "effective" number of waves. "Predicting an individual rogue wave event might be hopeless or non-practical, because it requires too much data and computing power. But what if we could do a forecast in the meteorological sense?" says Steinmeyer. "Perhaps there are particular weather conditions that we can foresee that are more prone to rogue wave emergence." Steinmeyer's group found that rogue waves are more likely when low pressure leads to converging winds; when waves heading in different directions cross each other; when the wind changes direction over a wide range; and when certain coastal shapes and subsea topographies push waves together. They concluded that rogue waves could only occur when these and other factors combined to produce an effective number of waves of 10 or more. 
Steinmeyer also downplays the idea that anything other than simple interference is required for rogue wave formation, and agrees that wave shape plays a role. However, he disagrees with Fedele's view that sharp peaks can have a significant impact on wave height. "Non-linearities have a role, but it's a minor one," he says. "Their main role is that ocean waves are not perfect sine waves, but have more spikey crests and depressed troughs. However, what we calculated for the Draupner wave is that the effect of non-linearities on wave height was in the order of a few tens of centimetres." In fact, Steinmeyer thinks that Longuet-Higgins had it pretty much right 60 years ago, when he emphasised basic linear interference as the driver of large waves, rogue or otherwise. But not everyone agrees. In fact, the argument over exactly why rogue waves form seems set to rumble on for some time. Part of the issue is that several kinds of scientists are studying them – experimentalists and theoreticians, specialists in optical waves and fluid dynamics – and they have not as yet done a good job of integrating their different approaches. There is no sign that a consensus is developing. But it is an important question to solve, because we will only be able to predict these deadly waves when we understand them. For anyone sitting on an isolated oil rig or ship, watching the swell of the waves under a stormy sky, those few minutes of warning could prove crucial. Links : Thursday, May 18, 2017 North Sea wind power hub: A giant wind farm to power all of north Europe North Sea Infrastructure The future development of a North Sea energy system up to approx. 2050 will require a rollout, coordinated at European level, of interlinked offshore interconnectors, i.e. a so-called interconnection hub, combined with large-scale wind power. Any surplus wind power could be converted into other forms of energy, or stored. Situating this interconnection hub on a modularly constructed island in a relatively shallow part of the North Sea would result in significant cost savings. These are the starting points for a proposed efficient, affordable and reliable energy system on the North Sea, which will contribute to European objectives being met. This vision does not preclude the option of providing renewably generated power from the wind farms to nearby oil and gas platforms to reduce Europe's CO2 emissions. From Ars Technica by William Steel The harnessing of energy has never been without projects of monolithic scale. From the Hoover Dam to the Three Gorges—the world's largest power station—engineers the world over have recognised that with size comes advantages. The trend is clear within the wind power industry too, where the tallest wind turbines now tower up to 220m, with rotors spinning through an area greater than that of the London Eye, generating electricity for wind farms that can power whole cities. While the forecast for offshore wind farms of the future is for ever-larger projects featuring ever-larger wind turbines, an unprecedented plan from electricity grid operators in the Netherlands, Germany, and Denmark aims to rewrite the rulebook on offshore wind development. 
A proposed North Sea power link island, as conceived by TenneT with a map of the North Sea, with the location of the Dogger Bank and the possible interconnectors highlighted The proposal is relatively straightforward: build an artificial island in the middle of the North Sea to serve as a cost-saving base of operations for thousands of wind turbines, while at the same time doubling up as a hub that connects the electricity grids of countries bordering the North Sea, including the UK. In time, more islands may be built too; daisy-chained via underwater cables to create a super-sized array of wind farms tapping some of the best wind resources in the world. “Don’t be mistaken, this is really a very large, very ambitious project—there’s nothing like it anywhere in the world. We’re taking offshore wind to the next level,” Jeroen Brouwers, spokesperson for the organisation that first proposed the plan, Dutch-German transmission system operator (TSO) TenneT, tells Ars Technica. “As we see it, each island could facilitate approximately 30 gigawatts (GW) of offshore wind energy; but the concept is modular, so we could establish multiple interconnected islands, potentially supporting up to 70 to 100GW.” The London Array To add some context to those figures, consider that the world’s largest offshore wind farm in operation today, the London Array, has a max capacity of 630MW (0.63GW), and that all the wind turbines installed in European waters to date amount to a little over 12.6GW. The Danish TSO Energinet says 70GW could supply power for some 80 million Europeans. Undoubtedly ambitious, the North Sea Wind Power Hub—as the project is titled—is nevertheless being taken seriously by key stakeholders. The project was the centre of attention at the seminal North Seas Energy Forum held in Brussels at the end of March. There, the consortium behind the project (Dutch-German TSO TenneT, alongside the Danish TSO Energinet) took the opportunity to sign a memorandum of understanding (MoU) that will drive the project forward over the coming decades. Dagmara Koska, a member of the cabinet of the EU vice-president in charge of the Energy Union (Maroš Šefčovič), tells Ars Technica: “We’re incredibly supportive of the project and welcome the MoU. The agreement demonstrates commitment to a very exciting prospect; one that stands to create a lot of synergies to benefit the growth of renewable energy in northern Europe.” On the intentions of the Wind Power Hub, Koska says: “From our perspective, the project fully reflects the spirit of the North Seas Energy Cooperation—the political agreement signed last year to facilitate deployment of offshore renewable energy alongside interconnection capacity across the region. As Maroš Šefčovič said at the signing, it’s an ingenious solution.” The London Array wind farm is the largest in operation with 175 wind turbines generating enough power for close to half a million UK homes annually. A paradigm shift The North Sea Wind Power Hub represents a fundamentally new approach to the development of offshore wind; one that tackles multiple challenges faced by the wind industry head on and capitalises on economies of scale in a bid to deliver access to the wind resources of the North Sea at reduced costs. Something of a case of necessity being the mother of invention, Brouwers explains that the Wind Power Hub concept is a response to a looming problem faced by the wind industry: “At the moment, offshore wind is focused on sites relatively close to shore where development costs are lower. 
The problem is that there’s not space for the 150GW of offshore wind power that the EU has called for. There are other industrial and economic interests in those near-shore regions—fishing, shipping lanes, military areas and so on. "This pushes things farther out to sea, but the costs can rapidly rise as you move to deeper waters. The solution? Create near-shore costs, or even lower, out at sea.” Construction of offshore wind farms is a highly complex logistical and engineering operation So how would the Wind Power Hub deliver on this objective? Well, the wind farms envisioned by the project wouldn’t be dissimilar from those we see today, but their proximity and connection to artificial "power link islands" represents a substantial departure from the conventional model for offshore wind. “The idea is that islands as large as six square kilometres would feature a harbour, a small airstrip, transmission infrastructure, and all equipment necessary to maintain the surrounding wind farms, alongside accommodation and workshops for staff,” Brouwers says. London Array construction These novel features would open up a lot of possibilities for wind power developers and operators. With a base of operations out at sea—complemented with storage of components, assembly lines, and other logistical assets—the installation of wind turbines would be more convenient, efficient, and ultimately cheaper than is achieved by today’s methods which rely on specialised ships journeying out from ports. Savings on installation would be coupled with reduced expenditure over the twenty-year lifetime of wind turbines, too. Operations and maintenance of offshore wind turbines—a crucial, albeit expensive, affair that stands to be transformed with a base of operations located out at sea. Onshore wind farms require a lot of support. But in harsh marine environments, that need is paramount. Operations and maintenance, or O&M, is key to ensuring turbines avoid downtime and remain productive. By convention (and presently also by necessity) offshore O&M run out of ports; it's logistically complex and pricey, easily representing some 20% of a wind turbine's levellised cost of energy (LCOE), and increasing with distance from shore. O&M is a permanent fixture on the wind industry’s list of areas within which it aims to lower expenditure, and highlighted as such by the International Renewable Energy Agency, which reports: “It is clear that reducing O&M costs for offshore wind farms remains a key challenge and one that will help improve the economics of offshore wind.” “In contrast to what we see today,” says Brouwers, “operating from an island on the doorstep of the wind farms would be a game-changer in terms of reducing costs and simplification of O&M activities.” Subsea DC cables would not only export power from the wind farms, but will serve as interconnectors between countries bordering the North Sea. High Voltage Direct Current Alongside savings on installation and reductions on O&M, a third major cost saving feature of the Wind Power Hub concerns grid connections—the electrical infrastructure that links wind farms with electricity grids. Typically, grid connection is a significant cost component in offshore wind, representing between 15 to 30% of the capital costs for an offshore wind farm, with costs creeping higher the farther from shore you go. Like O&M, grid connection is a cost component that holds potential for improvement. 
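To see how the cost shares quoted above stack up, here is a deliberately rough back-of-envelope sketch. The roughly 20% O&M share of a turbine's levellised cost and the 15-30% grid-connection share of capital cost come from the figures above; the baseline cost and the assumed savings are invented purely for illustration, and for simplicity the grid-connection share is treated as if it were a share of the levellised cost as well.

# Rough illustration of how the cost shares quoted above could translate into
# savings. Baseline LCOE and the assumed saving fractions are invented numbers.
baseline_lcoe = 100.0      # EUR/MWh, illustrative baseline for a far-offshore farm
om_share = 0.20            # O&M at roughly 20% of LCOE (figure quoted above)
grid_share = 0.20          # grid connection 15-30% of capex; a mid value, treated
                           # here (simplistically) as a share of LCOE as well
om_saving = 0.30           # assumption: island-based O&M cuts those costs by 30%
grid_saving = 0.25         # assumption: shared DC export cuts connection costs by 25%

new_lcoe = baseline_lcoe * (1.0 - om_share * om_saving - grid_share * grid_saving)
saving_pct = 100.0 * (1.0 - new_lcoe / baseline_lcoe)
print(f"illustrative LCOE: {baseline_lcoe:.0f} -> {new_lcoe:.1f} EUR/MWh ({saving_pct:.1f}% lower)")

Even with these modest assumed savings, the two components together shave roughly a tenth off the illustrative cost of energy, which is the kind of margin the consortium is counting on.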
With the Wind Power Hub, instead of alternate current (AC) cables taking electricity from a wind farm to grids onshore—the typical arrangement we see today—the output of multiple wind farms would be directed to a power link island. There, electricity would be aggregated, conditioned for transmission, and then dispatched to onshore grids of the North Sea countries. It’s a setup that would reduce the amount of export cables running to individual wind farms, and enable cost-effective use of high-voltage direct current (DC) transmission that boasts the added benefit of reduced losses compared to AC transmission. International electricity interconnections are the set of lines and substations that allow the exchange of energy between neighbouring countries and generate a number of advantages in connected countries. North Sea Super Grid: The key to sustainable energy in Europe As significant as the North Sea Wind Power Hub would be terms of clean energy production and cost reduction of offshore wind power, the broader proposition for the concept goes beyond island-building and supporting wind farms. It would provide a solution to one of the central challenges in transitioning to a sustainable future. As Brouwers says: “When we talk about the transition towards 100% sustainable energy production, it’s simply not possible from a national point of view. We need to consider things on a European level, and we need the infrastructure to transport the renewable electricity to where it is needed.” The inherent difficulty with renewable energy is its intermittency: power generation relies on variable resources like the Sun and wind that we cannot control. It’s an immutable characteristic of renewables, and one that creates problems for grids trying to balance supply and demand, and ensure efficient use of generated electricity. At least part of the solution is interconnectors—cables that function as long distance energy conduits across and between electricity grids. Interconnectors allow for electricity generated in one region to be transmitted to another, and allow countries to import and export electricity. The UK, for example, has interconnectors with France (2GW), the Netherlands (1GW), Northern Ireland (500MW), and the Republic of Ireland (500MW). “Without interconnectors we’re not able to balance supply and demand and that’s crucial for the energy transition. It’s absolutely key,” explains the EU Energy Union’s Koska. “We have cables between some North Sea countries already, but considering the amount of renewables coming online in the region, it’s not enough if we are to optimise use of resources available.” The imperative and current efforts to establish a European super grid are part of another story for another day, but the significance of interconnectors is neatly outlined in the YouTube video above from the Spanish TSO Red Eléctrica. In this matter of interconnectors and energy distribution, the Wind Power Hub would serve an extraordinarily valuable purpose; one Koska describes as “a clear response to needs of the European grid, and the goals set by the European Union that would contribute to a crucial part of the energy transition.” As noted earlier, undersea cables would transmit electricity from islands to countries bordering the North Sea, but the same DC cables would also function as interconnectors between those nations. 
Something similar is already under development in the Baltic Sea, where the Combined Grid Solution will connect Danish and German electrical grids via the Kriegers Flak wind farm. The Wind Power Hub applies a similar logic, albeit connecting via islands and not wind farms, and on a much grander scale. The Netherlands, Denmark, Germany, the UK, Norway and Belgium are all potential players in this new North Sea grid.  Construction of Mischief island by China has resulted in some 1,379 acres of land. Specialized ships involved in the construction process can be seen in this image. The dark lines seen connected to ships are floating pipes that pump sediment to be deposited. photo : CSIS Asia Maritime Transparency Initiative /Digital Globe Building islands Construction of islands is nothing new. Prominent examples of the practice come from China and Dubai. Although motivated by radically different intentions (in the former instance, to establish a military presence in waters of the South China Sea; in the latter, to support luxurious hotels and residences) both nations have demonstrated the validity of creating artificial islands to varying specifications. In the simplest of terms, island-building involves dumping a huge amount of rock and sediment on the seabed until an island emerges. In reality, a little more finesse and a significant amount of engineering skill goes into the process. Acumen here means that islands may be built to survive waves, storms, and erosion, as well ensure that the newly minted land can physically support whatever is destined to be built on the island. Expertise will be especially critical for islands of the North Sea Wind Hub where the northerly climate and rough waters of the North Sea offer up considerable challenges. Still, with the Netherlands party to the project, there will be no shortage of world-class engineers on hand to deliver solutions. The Dutch have a long history in land reclamation and have been at the helm of some of the most prominent examples of island building around the world, including those of Dubai.  A European wind power infographic produced by WindEurope in 2016. The task ahead The North Sea Wind Power Hub is a vast, multinational project that won't just pop up overnight. Brouwers notes that the consortium imagines a first island could be realised by 2035. Project literature frames the project as one providing a vision for joint European collaboration out to 2050. “It’s a long-term project, but it’s important to begin now and that the industry knows what on the horizon,” says Brouwers. For its part, numerous bodies within the European wind industry have acknowledged and expressed optimism about the project. Andrew Ho, senior offshore wind analyst of the wind power trade association Wind Europe, tells Ars Technica: “Setting out a long term ambition for offshore wind provides a great signal to the wind sector. It’s not governments that are behind the target yet, it’s TSOs laying out the vision—but it’s still important to know that they see a big role for offshore wind in the future of European energy. "The reality is we need a lot more clean energy if we’re going to decarbonise and really commit to the actions of COP21. 
For that, we need the technologies that can deliver vast amounts of clean power with relatively stable output—and that’s what offshore wind gets you.The wind industry would certainly be ready to deliver the volume of offshore wind envisioned by the Wind Power Hub.” Ho emphasised that the wind industry’s activities over the forthcoming decade will lay the groundwork for the Wind Power Hub's success: “The project would give us a pathway from 2030 to 2050, but we’re missing policy targets for 2023 to 2030. To explore the project’s full potential we need to support development through the next decade to ensure we’re fully cost competitive with other sources of energy in the period leading up to 2030.” As the industry works towards reducing costs, the consortium will busy itself with more practical matters. Brouwers explains: “The next steps involve feasibility studies. We’re also underway in collaborating with environmental groups about the construction of the islands and in talks with infrastructure companies beyond the energy sector, of the sort that would provide critical insight on the project. There’s certainly a lot of work ahead of us.” The North Sea Wind Power Hub is an unquestionably mammoth project. But in so being it aptly reflects the enormity of challenges we face in tackling climate change. Many would contend that we already have the technologies necessary for transitioning to a sustainable energy system. The Wind Power Hub project reminds us that boldly pursuing the extraordinary, and resolving to commit to collaborative solutions, are traits that will serve us well in application of those technologies. Links : Wednesday, May 17, 2017 How an uninhabited island got the world’s highest density of trash The beaches of one of the world’s most remote islands have been found to be polluted with the highest density of plastic debris reported anywhere on the planet. From National Geographic by Laura Parker Henderson Island lies in the South Pacific, halfway between New Zealand and Chile. No one lives there. It is about as far away from anywhere and anyone on Earth. Yet, on Henderson’s white sandy beaches, you can find articles from Russia, the United States, Europe, South America, Japan, and China. All of it is trash, most of it plastic. It bobbed across global seas until it was swept into the South Pacific gyre, a circular ocean current that functions like a conveyor belt, collecting plastic trash and depositing it onto tiny Henderson’s shore at a rate of about 3,500 pieces a day.  One researcher claims that a hermit crab that has made its home in a blue Avon cosmetics pot is a 'common sight' on the island. The plastic is very old and toxic, and is damaging to much of the island's diverse wildlife Jennifer Lavers, co-author of a new study of this 38-million-piece accumulation, told the Associated Press she found the quantity “truly alarming.” Much of the trash consists of fishing nets and floats, water bottles, helmets, and large, rectangular pieces. Two-thirds of it was invisible at first because it was buried about four inches (10 cm) deep on the beach. “Although alarming, these values underestimate the true amount of debris, because items buried 10 cm below the surface and particles less than 2 mm and debris along cliff areas and rocky coastlines could not be sampled,” Lavers and a colleague wrote in their study, published Tuesday in the scientific journal, Proceedings of the National Academy of Sciences. 
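The study's headline numbers can be combined in a quick sanity check. The quantities below (roughly 38 million items, about 3,500 new items arriving per day, some 55,000 items physically collected, of which around 100 were traceable to a country) are the figures reported above; the arithmetic is only illustrative.

# Quick arithmetic on the Henderson Island figures quoted above.
total_pieces = 38_000_000        # estimated debris items on the island
arrival_per_day = 3_500          # new items washing up per day
collected = 55_000               # items actually picked up and counted
traceable = 100                  # items that could be traced to a country of origin

years_to_accumulate = total_pieces / arrival_per_day / 365.25
share_collected = collected / total_pieces
share_traceable = traceable / collected

print(f"at ~3,500 items/day, 38 million items is roughly "
      f"{years_to_accumulate:.0f} years of arrivals")
print(f"the survey physically handled about {share_collected:.2%} of the estimated total")
print(f"only about {share_traceable:.2%} of collected items could be traced to an origin")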
The accumulation is even more disturbing when considering that Henderson is also a United Nations World Heritage site and one of the world’s biggest marine reserves. The UNESCO website describes Henderson as “a gem” and “one of the world’s best remaining examples of a coral atoll,” that is “practically untouched by human presence.” Henderson Island, with the GeoGarage platform a coral atoll in the south Pacific, is just 14.5 square miles (37.5 square km), and the nearest cities are some 3,000 miles (4,800 km) away Henderson is one of the four-island Pitcairn Group, a cluster of small islands whose namesake is famed as the home to the descendants of the HMS Bounty’s mutineers. Pitcairn’s population, which has dwindled to 42 people, uses Henderson as an idyllic get-away from the day-to-day life on Pitcairn. But aside from the neighboring Pitcairners, the occasional scientist or boatload of tourists making the two-day sail from the Gambier Islands, Henderson supports only four kinds of land birds, ten kinds of plants, and a large colony of seabirds. Lavers, a scientist at Australia’s University of Tasmania, and her co-author, Alexander Bond, a conservation biologist, arrived on Henderson in 2015 for a three-month stay. They measured the density of debris and collected nearly 55,000 pieces of trash, of which about 100 could be traced back to their country of origin. The duo’s analysis concluded that nearly 18 tons of plastic had piled up on the island—giving Henderson the highest density of plastic debris recorded anywhere in the world—at least so far.  Henderson Island has the highest density of plastic debris in the world, with 3,570 new pieces of litter washing up on its beaches every day. Jenna Jambeck, a University of Georgia environmental engineering professor, who was one of the first scientists to quantify ocean trash on a global scale, was not surprised that Lavers and Bond discovered plastic in such abundance on Henderson. Jambeck’s 2015 study concluded that 8 million tons of trash flow into the ocean every year, enough to fill five grocery store shopping bags for every foot of coastline on Earth. “One of the most striking moments to me while working in the field was when I was in the Canary Islands, watching microplastic being brought onto the shore with each wave,” she says. “There was an overwhelming moment of ‘what are we doing?’ It’s like the ocean is spitting this plastic back at us. So I understand when you’re there on the beach on Henderson, it’s shocking to see.” The Henderson research ranks with earlier discoveries of microplastics in places so remote, such as embedded in the deep ocean floor or in Arctic sea ice, that finding plastic in such abundance touched a nerve. “People are always surprised to find trash in what’s supposed to be an uninhabited paradise island. It does not fit our mental paradigms, and this might be the reason why it continues to be shocking,” says Enric Sala,a marine scientist who led a National Geographic Pristine Seas expedition to the Pitcairn Islands, including Henderson, in 2012. “There are no remote islands anymore. We have turned the ocean into a plastic soup.” Links : Tuesday, May 16, 2017 The incredible 'x-ray' map of the world's oceans that reveals the damage mankind has done to them Darker colors, which can be seen in the East China and the North Seas, for example, show just where the ocean has been hit hardest. Source: NGM Maps, 'Spatial and Temporal Changes in Cumulative Human Impacts on the World's Ocean,' Ben S. 
Halpern and others, Nature Communications; UNEP-WCMC World Database on Protected Areas (2016 From DailyMail by Cheyenne MacDonald • Study used satellite images and modelling software, to compare cumulative impact in 2008 and 2013 • Over this span of time, researchers found that nearly two-thirds of the ocean shows increased impact • These impacts stem from fishing, shipping, or climate change – and some areas are experiencing all three The stunning map comes from the April 2017 issue of National Geographic magazine, based on data from a recent study published to Nature Communications, and the World Database on Protected Areas. Darker colors, which can be seen in East China and the North Seas, for example, show just where the ocean has been hit hardest. ‘The ocean is crowded with human uses,’ the authors explain in the paper. ‘As human populations continue to grow and migrate to the coasts, demand for ocean space and resources is expanding, increasing the individual and cumulative pressures from a range of human activities. ‘Marine species and habitats have long experienced detrimental impacts from human stressors, and these stressors are generally increasing globally.’ Using satellite images and modelling software, the researchers calculated the cumulative impact of 19 different types of human-caused stress on the ocean, comparing the effects seen in 2008 with those occurring five years later. The map above reveals the cumulative human impact to marine ecosystems as of 2013, based on 19 anthropogenic stressors. Shades of red indicate higher impact scores, while blue shows lower scores This revealed that nearly two-thirds (66 percent) of the ocean, and more than three-quarters (77 percent) of coastal areas experienced increased human impact, which the researchers note are ‘driven mostly by climate change pressures.’ ‘A lot of the ocean is getting worse, and climate change in particular is driving a lot of those changes,’ lead author Ben Halpern told National Geographic. While the Southern Ocean was found to be subjected to a ‘patchy mix’ of increases and decreases, the researchers found that other areas, especially the French territorial holdings in the Indian Ocean, Tanzania, and the Seychelles, saw major increases. Just 13 percent of the ocean saw a decrease in human impact over the years included in the study. These regions were concentrated in the Northeast and Central pacific, along with the Eastern Atlantic, according to the researchers. In a comprehensive study analyzing changes over a five-year period, researchers found that nearly two-thirds of the ocean shows increased impact.’The graphic shows (a) the difference from 2013 to 2008, with shades of red indicating an increase, while blue shows decrease. It also reveals (b) the 'extreme combinations of cumulative impact and impact trend' Links : Monday, May 15, 2017 Netherlands NLHO layer update in the GeoGarage platform 1 new inset added see GeoGarage news Changes to Traffic Separation Scheme TSS to be implemented on 1st June, 2017  Prelimary notice of changes of the shipping routing Southern North Sea (Belgium and Netherlands).  changes in charts (see NTMs Berichten aan Zeevarenden week 17 / 15 / 09) New Zealand Linz layer update in the GeoGarage platform 7 nautical raster charts updated China revises mapping law to bolster claims over South China Sea land, Taiwan China claims they aren't military bases, but their actions say otherwise.  
From JapanTimes China’s National People’s Congress Standing Committee, a top law-making body, passed a revised version of China’s surveying and mapping law intended to safeguard the security of China’s geographic information, lawmakers told reporters in Beijing. Hefty new penalties were attached to “intimidate” foreigners who carry out surveying work without permission. President Xi Jinping has overseen a raft of new legislature in the name of safeguarding China’s national security by upgrading and adding to already broad laws governing state secrets and security. Laws include placing management of foreign nongovernmental organizations under the Security Ministry and a cybersecurity law requiring that businesses store important business data in China, among others. Overseas critics say that these laws give the state extensive powers to shut foreign companies out of sectors deemed “critical” or to crack down on dissent at home. The revision to the mapping law aims to raise understanding of China’s national territory education and promotion among the Chinese people, He Shaoren, head spokesman for the NPC Standing Committee, said, according to the official China News Service. When asked about maps that “incorrectly draw the countries boundaries” by labeling Taiwan a country or not recognizing China’s claims in the South China Sea, He said, “These problems objectively damage the completeness of our national territory.” China claims almost all the South China Sea and regards neighboring self-ruled Taiwan as a breakaway province. The new law increases oversight of online mapping services to clarify that anyone who publishes or distributes national maps must do so in line with relevant national mapping standards, He said. The rise of technology companies which use their own mapping technology to underpin ride-hailing and bike-sharing services made the need for revision pressing, the official Xinhua News Agency said Tuesday. Foreign organizations that wish to carry out mapping or surveying work within China must make clear that they will not touch upon state secrets or endanger state security, according to Song Chaozhi, deputy head of the State Bureau of Surveying and Mapping. Foreign individuals or groups who break the law could be fined up to 1 million yuan ($145,000), an amount chosen to “intimidate,” according to Yue Zhongming, deputy head of the NPC Standing Committee’s legislation planning body.  According to MoT, China cleared the wreckage of stranded fishing boat on Scarborough Shoal to ensure the security of navigation. China’s Southeast Asian neighbors are hoping to finalize a code of conduct in the South China Sea, but those working out the terms remain unconvinced of Beijing’s sincerity. Signing China up to a legally binding and enforceable code for the strategic waterway has long been a goal for claimant members of the Association of Southeast Asian Nations. But given the continued building and arming of its artificial islands in the South China Sea, Beijing’s recently expressed desire to work with ASEAN to complete a framework this year has been met with skepticism and suspicion. 
The framework seeks to advance a 2002 Declaration of Conduct (DOC) of Parties in the South China Sea, which commits to following the United Nations Convention on the Law of the Sea (UNCLOS), ensuring freedom of navigation and overflight, and “refraining from action of inhabiting on the presently uninhabited islands, reefs, shoals, cays, and other features.” The South China Sea Dispute – An Update, Lecture Delivered on April 23, 2015 at a forum sponsored by the Bureau of Treasury and the Asian Institute of Journalism and Communications at the Ayuntamiento de Manila. But the DOC was not stuck to, especially by China, which has built seven islands in the Spratly archipelago.It is now capable of deploying combat planes on three reclaimed reefs, where radars and surface-to-air missile systems have also been installed, according to the Asia Maritime Transparency Initiative think tank. Beijing insists its activities are for defense purposes in its waters. Malaysia, Taiwan, Brunei, Vietnam and the Philippines, however, all claim some or all of the resource-rich waterway and its myriad of shoals, reefs and islands. Finalizing the framework would be a feather in the cap for the Philippines, which chairs ASEAN this year. Manila has reversed its stance on the South China Sea, from advocating a unified front and challenging Beijing’s unilateralism, to putting disputes aside to create warm ties. Philippine President Rodrigo Duterte has opted not to press China to abide by an international arbitration decision last year that ruled in Manila’s favor and invalidated Beijing’s sweeping South China Sea claims. There will be no mention of the Hague ruling in an ASEAN leaders’ statement at a summit in Manila on Saturday, nor will there be any reference to concerns about island-building or militarization that appeared in last year’s text, according to excerpts of a draft. The map’s most valuable and relevant feature is found on the upper left section where a cluster of land mass called “Bajo de Masinloc” and “Panacot” – now known as Panatag or Scarborough Shoal – located west of the Luzon coastline  (see YouTube : An ancient map is reinforcing Manila's arbitration victory against China on the disputed South China Sea.) Duterte said Thursday that he sees no need to gather support from his neighbors about the July 2016 landmark decision. His predecessor, Benigno Aquino III, brought the territorial disputes to the Permanent Court of Arbitration in The Hague in 2013 amid China’s aggressive assertion of its claims in the South China Sea by seizing control of Scarborough Shoal located less than about 300 km (200 miles) from the Philippines’ Luzon island, and harassment of Philippine energy surveillance groups near the Reed Bank, among others. While the arbitration case was heard, China completed a number of reclamation projects on some of the disputed features and fortified them with structures, including those military in nature. China did not participate in the arbitration hearing, and does not honor the award, insisting it only seeks to settle the matter bilaterally with the Philippines. Duterte had said he will confront China with the arbitral award at a proper time during his administration, which ends in 2022, especially when Beijing starts to extract mineral and gas deposits. 
He rejected the view that China can be pressed by way of international opinion, saying, “You are just dreaming.” The Philippines, meanwhile, has completed an 18-day scientific survey in the South China Sea to assess the condition of coral reefs and draw a nautical map of disputed areas. Two survey ships, including an advanced research vessel acquired from the United States, conducted surveys around Scarborough Shoal and on three islands, including Thitu, in the Spratly group, National Security Adviser Hermogenes Esperon said Thursday. “This purely scientific and environmental undertaking was pursued in line with Philippine responsibilities under the U.N. Convention of the Law of the Sea to protect the marine biodiversity and ensure the safety of navigation within the Philippines’ EEZ,” Esperon said in a statement. He gave no details of the findings from the reef assessments and nautical mapping of the area, which was carried out between April 7 to 25. Links : Sunday, May 14, 2017 Rock and Roll in the Roaring Forties - Dagmar Aaen of Arved Fuchs expeditions Dagmar Aaen on her "Ocean Change" Expeditions by Arved Fuchs. Here on the way from Ushuaia Argentina to Piriapolis in Uruguay. Footage by Arved Fuchs, Felix Hellmann and Heimir Harðarson. The Dagmar Aaen was built as a fishing cutter in 1931 in the Danish city of Esbjerg at the N. P. Jensen shipyard and was given the registration number E 510. The hull was built out of six cm oak planks and oak frames. The space between the single frames is sometimes so small, that a fist can hardly fit between them. Because of this and due to the addition of extra waterproof bulkheads, the hull was given a remarkably high strength. The ship was often used in the Greenland region because of its solid built and its choice building materials. Journeys through ice-fields and months of overwintering in frozen fjords and bays meant daily routine to a ship of this type. The famous Greenland explorer Knut Rasmussen chose just such a ship for one of his expeditions in the Arctic regions. The Dagmar Aaen was employed for the fishing industry until 1977. Niels Bach purchased her in 1988 together with the Peters shipyard in Wewelsfleth Germany and the Skibs & Bædebyggeri shipyard, owned by Christian Jonsson in Egernsund Denmark, built her into expedition ship with ice reinforcements. Since this time there have been many repairs and changes done at the shipyard, in order to adapt the ship to the different conditions on each expedition.
Chemistry matters. Join us to get the news you need. Physical Chemistry Quantum computing goes beyond hydrogen and helium IBM system calculates ground states of lithium hydride and beryllium hydride by Stu Borman September 13, 2017 | APPEARED IN VOLUME 95, ISSUE 37 Credit: IBM Micrograph of quantum computing processor with seven qubits (dark squares). Scale bar = 1 mm. Quantum computers could be the future of computational chemistry if they can calculate properties of molecules that conventional digital computers can’t handle. Today’s quantum systems are a long way from reaching that goal. But a quantum computer at IBM just passed a milestone: It performed the first calculations involving molecules containing more than just hydrogen and helium. Digital computers crunch numbers to describe properties such as a molecule’s ground-state energy by using the Schrödinger equation to calculate mathematical parameters called wave functions. However, digital computers can solve such problems exactly only for elementary molecules because of the great complexity of the many interactions of the multiple subatomic particles found in larger compounds. With digital computers, “exact solutions rapidly become unfeasible, even for the fastest computers working over the entire lifetime of the universe,” says theoretical chemist Donald Truhlar of the University of Minnesota, who was not involved in the new study. “Quantum computers do not require exponentially increasing time to solve larger and larger systems, so they do not suffer the same limitations.” Credit: Connie Zhou/IBM A peek through the window of the IBM Q lab at the Thomas J. Watson Research Center. Instead of the digital 1s and 0s in digital computers, quantum computers calculate wave functions with qubits—typically sensitive magnetic detectors called superconducting quantum interference devices. Because qubits are quantum systems themselves, they can represent the quantum chemistry of molecules directly, which digital bits cannot do. But technical factors currently limit the number of qubits quantum computers can use and their ability to correct qubit errors. As a result, quantum computers so far have been limited to calculating properties of very simple molecules: dihydrogen and helium hydride, using 2-qubit processors. Abhinav Kandala, Antonio Mezzacapo, and coworkers at IBM Thomas J. Watson Research Center have now used six of the qubits on a 7-qubit quantum computer processor to calculate the ground-state energies of lithium hydride and beryllium hydride (Nature 2017, DOI: 10.1038/nature23879). They achieved this advance by using a processor with more qubits than in previous quantum chemistry studies and by optimizing a previously developed algorithm to reduce the number of qubits and quantum operations needed to simulate larger molecules. The work puts IBM temporarily at the head of the pack in chemistry applications of quantum computing, comments quantum computing expert Alán Aspuru-Guzik of Harvard University. However, he says, the achievement could soon be leapfrogged by the company’s major quantum computing competitors, Google and Microsoft. Potential chemistry applications of quantum computing include discovering small-molecule drugs and determining the properties of new materials, Aspuru-Guzik says. IBM promotes its quantum chemistry program, called IBM Q, by giving chemists free access to an online quantum computer that carries out ground-state energy calculations on molecules such as H2 and LiH. 
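The article does not spell out the algorithm, but calculations of this kind rest on the variational principle: prepare a parameterised trial state, measure its energy, and let a classical optimiser adjust the parameters until the energy stops falling. The toy below runs that loop entirely classically on a made-up 2x2 Hamiltonian; it is not IBM's method, just a sketch of the principle such hybrid quantum-classical solvers rely on.

import numpy as np
from scipy.optimize import minimize_scalar

# A made-up two-level Hamiltonian (Hermitian), standing in for a tiny molecule.
H = np.array([[-1.0, 0.4],
              [ 0.4, 0.6]])

def trial_energy(theta: float) -> float:
    """Energy expectation value of the parameterised trial state |psi(theta)>."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return float(psi @ H @ psi)

# A classical optimiser plays the role of the outer loop in a variational solver.
result = minimize_scalar(trial_energy, bounds=(0.0, np.pi), method="bounded")

exact_ground = np.linalg.eigvalsh(H)[0]
print(f"variational estimate: {result.fun:.6f}")
print(f"exact ground state:   {exact_ground:.6f}")

For a real molecule the Hamiltonian acts on many interacting electrons, which is where the qubits come in: the trial state lives on the quantum processor, while the parameter optimisation stays classical.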
Quasiparticle self-consistent GW method; a basis for the independent-particle approximation Takao Kotani School of Materials, Arizona State University, Tempe, AZ, 85284    Mark van Schilfgaarde School of Materials, Arizona State University, Tempe, AZ, 85284    Sergey V. Faleev Sandia National Laboratories, Livermore, CA 94551 June 11, 2020 We have developed a new type of self-consistent scheme within the GW approximation, which we call quasiparticle self-consistent GW (QSGW). We have shown that QSGW describes energy bands for a wide range of materials rather well, including many where the local-density approximation fails. QSGW contains physical effects found in other theories such as LDA+U, SIC and GW in a satisfactory manner without many of their drawbacks (partitioning of itinerant and localized electrons, adjustable parameters, ambiguities in double-counting, etc.). We present some theoretical discussion concerning the formulation of QSGW, including a prescription for calculating the total energy. We also address several key methodological points needed for implementation. We then show convergence checks and some representative results in a variety of materials. In the 1980’s, algorithmic developments and faster computers made it possible to apply Hedin’s GW approximation (GWA) Hedin (1965) to real materials Strinati et al. (1980); Pickett and Wang (1984). Especially, Hybertsen and Louie Hybertsen and Louie (1986) first implemented the GWA within an ab-initio framework in a satisfactory manner. Theirs was a perturbation treatment starting from the Kohn-Sham eigenfunctions and eigenvalues given in the local density approximation (LDA) to density functional theory (DFT) Hohenberg and Kohn (1964); Kohn and Sham (1965). We will denote this approach here as 1shot-GW. Until now 1shot-GW has been applied to a variety of materials, usually in conjunction with the pseudopotential (PP) approximation. Quasiparticle (QP) energies so obtained are in significantly better agreement with experiments than the LDA Kohn-Sham eigenvalues Aryasetiawan and Gunnarsson (1998). However, we have recently shown that 1shot-GW has many significant failings. Even in simple semiconductors it systematically underestimates optical gaps Kotani and van Schilfgaarde (2002); Usuda et al. (2004); Fleszar and Hanke (2005); van Schilfgaarde et al. In general, the quality of the results is closely tied to the quality of the LDA starting point. For more complicated cases where the LDA eigenfunctions are poor, 1shot-GW can fail even qualitatively van Schilfgaarde et al. A possible way to overcome this difficulty is to determine the starting point self-consistently. The effects of eigenvalue-only self-consistency (keeping the eigenfunctions as given in LDA) were discussed by Surh, Louie, and Cohen Surh et al. (1991). Recently, Luo, Ismail-Beigi, Cohen, and Louie Luo et al. (2002) applied it to ZnS and ZnSe, where they showed that the 1shot-GW band gaps of 3.19 eV and 2.32 eV for ZnS and ZnSe are increased to 3.64 eV and 2.41 eV by the eigenvalue-only self-consistency (see Table 4 also). The differences suggest the importance of this self-consistency. Furthermore, for ZnSe, the value 2.41 eV changes to 2.69 eV when they use eigenfunctions given by the generalized gradient approximation (GGA). This difference suggests that we may need to look for a means to determine optimum eigenfunctions for the GWA. Aryasetiawan and Gunnarsson applied another kind of self-consistent scheme to NiO Aryasetiawan and Gunnarsson (1995). 
They introduced a parameter for the non-local potential which affects the unoccupied level, and made it self-consistent. They showed that the band gap of 1shot- is 1 eV, and that it is improved to 5.5 eV by the self-consistency. Based on these self-consistency ideas, we have developed a new ab-initio approach to Faleev et al. (2004); van Schilfgaarde et al. (2006); Chantis et al. (2006a, b), which we now call “quasiparticle self-consistent ” (QSGW) method. QSGW is a first-principles method that stays within the framework of Hedin’s A, that is, QSGW is a perturbation theory built around some noninteracting Hamiltonian. It does not depend on the LDA anymore but rather determines the optimum noninteracting Hamiltonian in a self-consistent manner. We have shown that QSGW satisfactorily describes QP energies for a wide range of materials. Bruneval, Vast and Reining Bruneval et al. (2006a) implemented it in the pseudopotential scheme, and gave some kinds of analysis including the comparison with the Hartree-Fock method and with the Coulomb-hole and Screened exchange (COHSEX) methods. The present paper begins with a derivation of the fundamental equation of QSGW, and some theoretical discussion concerning it (Sec. I). The fundamental equation is derived from the idea of a self-consistent perturbation. We also present a means for computing the total energy through the adiabatic connection formalism. Next, we detail a number of key methodological points (Sec. II). The present implementation is unique in that it makes no pseudopotential or shape approximation to the potential, and it uses a mixed basis for the response function, Coulomb interaction, and self-energy, which enables us to properly treat core states. The A methodology is presented along with some additional points particular to self-consistency. In Sec. III, we show some convergence checks, using GaAs as a representative system. Then we show how QSGW works by comparing it to other kinds of A for compounds representative of different materials classes: semiconductors C, Si, SiC, GaAs, ZnS, and ZnSe; oxide semiconductors ZnO and CuO; transition metal monoxides MnO and NiO; transition metals Fe and Ni. I Theory i.1 A Let us summarize the Hedin (1965); Hybertsen and Louie (1986) for later discussion. Here we omit spin index for simplicity. Generally speaking, we can perform A from some given one-body Hamiltonian written as The one-particle effective potential can be non-local, though it is local, i.e. when generated by the usual Kohn-Sham construction. determines the set of eigenvalues and eigenfunctions . From them we can construct the non-interacting Green’s function as where is for occupied states, and for unoccupied states. Within the RPA (random-phase approximation), the screened Coulomb interaction is where is the proper polarization function, and is the bare Coulomb interaction. denotes the dielectric function. As seen in e.g., works by Alouani and co-workers Alouani and Wills (1996); Arnaud and Alouani (2001), calculated from a reasonable should be in good agreement with experiments, even if does not satisfy the -sum rule because is non-local Alouani and Wills (1996) (because of the so-called scissors operator). 
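As an aside, the construction of the non-interacting Green's function from the eigenpairs of the one-body Hamiltonian described above can be illustrated on a toy model. The sketch below assumes the standard form $G^0(\omega)=\sum_n \psi_n\psi_n^*/(\omega-\varepsilon_n \pm i\delta)$; the model Hamiltonian, occupations and broadening are fabricated for illustration only.

import numpy as np

# Toy non-interacting (time-ordered) Green's function built from the eigenpairs
# of a small model H0:
#   G0(omega) = sum_n |psi_n><psi_n| / (omega - eps_n + i*delta*sign_n),
# with sign_n = -1 for occupied and +1 for unoccupied states.
H0 = np.array([[0.0, 0.5],
               [0.5, 1.5]])
eps, psi = np.linalg.eigh(H0)        # columns of psi are the eigenvectors
occupied = np.array([True, False])   # assume only the lower state is filled
delta = 0.1                          # artificial broadening for the sketch

def g0(omega: float) -> np.ndarray:
    g = np.zeros((2, 2), dtype=complex)
    for n in range(len(eps)):
        sign = -1.0 if occupied[n] else 1.0
        g += np.outer(psi[:, n], psi[:, n]) / (omega - eps[n] + 1j * sign * delta)
    return g

# The spectral function A(omega) = |Im Tr G0(omega)| / pi peaks at the eigenvalues.
for w in np.linspace(-1.0, 2.5, 15):
    a = abs(np.trace(g0(w)).imag) / np.pi
    print(f"omega = {w:+.2f}   A = {a:7.3f}")
print("eigenvalues:", np.round(eps, 3))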
Hedin’s A gives the self-energy as From this self-energy, the external potential from the nuclei, and the Hartree potential which is calculated from the electron density through , we obtain an -dependent one-body effective potential : Note that is detemined from the density which is calculated for the non-interacting system specified by . For simplicity we omit arguments . Then the one-body Green function is given as . and are local and -independent potentials. Thus the A maps to . In other words, the A generates a perturbative correction to the one-particle potential , written as and can be regarded as functionals of (or ). In the standard 1shot- with generated by the LDA, is the LDA Kohn-Sham Hamiltonian. Neglecting off-diagonal terms, the QP energy (QPE) is where is the QP renormalization factor: Subscripts label the wave vector and band index . We will write them later as a compound index, . Eq. (7) is the customary way QPEs are calculated in . However, as we discussed in Ref. van Schilfgaarde et al. , using =1 instead of Eq. (8) is usually a better approximation; see also Sec. III. Chapter 7 of Ref.Mahan (1990) presents another analysis where =1 is shown to be a better approximation, in the context of the Frölich Hamiltonian. In any case, we have to calculate matrix elements as accurately and as efficiently as possible (off-diagonal elements are necessary in the QSGW case, as explained below). As we showed in Ref. van Schilfgaarde et al. , generated by LDA is not necessarily a good approximation. [Even the for “true Kohn-Sham” Hamiltonian in DFT can be a poor descriptor of QP excitation energies Kotani (1998).] For example, time-reversal symmetry is automatically enforced because is local (and thus real). This symmetry is strongly violated in open -shell systems Chantis et al. (2006b). The bandgap of a relatively simple III-V semiconductor, InN, is close to zero Kotani and van Schilfgaarde (2002); Usuda et al. (2004); also the QP spectrum of NiO is little improved over LDA Faleev et al. (2004). A variety of other examples could be cited where A starting from is a poor approximation. (In contrast, see Sec. III and Ref. van Schilfgaarde et al. (2006) to see how QSGW gives consistently good agreement with experiment.) i.2 Quasiparticle self-consistent QSGW is a formalism which determines (or ) self-consistently within the A, without depending on LDA or DFT. If we have a mapping procedure , we can close the equation to determine , i.e. determine self-consistently by . The main idea to determine the mapping is grounded in the concept of the QP. Roughly speaking, is determined so as to reproduce the QP generated from . In the following, we explain how to determine this , and derive the fundamental QSGW equation Faleev et al. (2004); van Schilfgaarde et al. (2006); Chantis et al. (2006a, b). Based on Landau’s QP picture, there are fundamental one-particle-like excitations denoted as quasiparticles (QP), at least around the Fermi energy . The QPEs and QP eigenfunctions (QPeigs), , are given as Hedin (1965) We refer to the states characterized by these and as the dressed QP. Here means just take the hermitian part of so is real for . This is irrelevant around  because the anti-hermitian part of goes to zero as . On the other hand, we have another one-particle picture described by ; we name these QPs as bare QPs, and refer to the QPEs and eigenfunctions corresponding to as . Let us consider the difference and the relation of these two kinds of QP. 
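Before turning to the relation between the two kinds of QP, the quasiparticle-energy expressions above can be made concrete. The toy below evaluates the standard forms $E^{QP}=\varepsilon + Z\langle\psi|\Sigma(\varepsilon)-V^{xc}|\psi\rangle$ with $Z=[1-\partial\langle\Sigma\rangle/\partial\omega]^{-1}$, in the spirit of Eqs. (7)-(8), and compares them with the $Z=1$ prescription discussed in the text; the linear self-energy model and all numbers are fabricated purely for illustration.

# Toy illustration of the quasiparticle-energy correction; all numbers invented.
eps0 = 1.00     # eV, eigenvalue of H0 for the state considered
vxc  = -10.00   # eV, matrix element <psi|Vxc|psi>

def sigma(omega):
    """Fabricated diagonal self-energy matrix element <psi|Sigma(omega)|psi> (eV)."""
    return -10.50 - 0.25 * (omega - eps0)

# Renormalization factor Z = [1 - dSigma/domega]^(-1), evaluated at eps0.
h = 1e-5
dsig = (sigma(eps0 + h) - sigma(eps0 - h)) / (2.0 * h)
Z = 1.0 / (1.0 - dsig)

corr = sigma(eps0) - vxc
print(f"Z                 = {Z:.3f}")
print(f"E_QP with Z       = {eps0 + Z * corr:.3f} eV")
print(f"E_QP with Z = 1   = {eps0 + corr:.3f} eV")

# For comparison, the self-consistent solution of omega = eps0 + Sigma(omega) - Vxc;
# for a self-energy that is linear in omega it coincides with the Z-corrected value.
omega = eps0
for _ in range(200):
    omega = eps0 + sigma(omega) - vxc
print(f"fixed-point E_QP  = {omega:.3f} eV")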
The bare QP is essentially consistent with the Landau-Silin QP picture, discussed by, e.g., Pines and Nozieres in Sec. 3.3, Ref. Pines and Nozieres (1966). The bare QP interact with each other via the bare Coulomb interaction. The bare QP given by evolve into the dressed QP when the interaction is turned on adiabatically. Here is the total Hamiltonian (See Eq. (12)); and the hat signifies that is written in second quantized form. and are equivalent. The dressed QP consists of the central bare QP and an induced polarization cloud consisting of other bare QP ; this view is compatible with the way interactions are treated in the A. generating the bare QPs represents a virtual reference system just for theoretical convenience. There is an ambiguity in how to determine ; in principle, any can be used if could be completely included. However, as we evaluate the difference in some perturbation method like A, we must utilize some optimum (or best) : should be chosen so that the perturbative contribution is as small as possible. A key point remains in how to define a measure of the size of the perturbation. We can classify our QSGW method as a self-consistent perturbation method which self-consistently determines the optinum division of into the main part and the residual part . There are various possible choices for the measure; however, here we take a simple way, by requiring that the two kinds of QPs discussed in the previous paragraphs correspond as closely as possible. We choose so as to repoduce the dressed QPs. In other words, we assign the difference of the QPeig (and also the QPE) between the bare QP and the dressed QP as the measure, and then we minimize it. From the physical point of view, this means that the motion of the central electron of the dressed QP is not changed by . Note that contains two kinds of contributions: not only the Coulomb interaction but also the one-body term . The latter gives a counter contribution that cancel changes caused by the Coulomb interaction. We now explain how to obtain an expression in practice. Suppose that self-consistency has been somehow attained. Then we have around . is a complete set because they come from some , though the are not. Then we can expand () in as where . Then we introduce an energy-independent operator defined as which satisfies . Thus we can use this instead of in Eq. (9); however, is not hermitian thus we take only the hermitian part of as ; for the calculation of () in Eq. (9). Thus we have obtained a mapping : for given we can calculate in Eq. (10) through in the A. With this together with , which is calculated from the density for (or ), we have a new . The QSGW cycle determines all , and self-consistently. As shown in Sec. III and also in Refs.Faleev et al. (2004); van Schilfgaarde et al. (2006); Chantis et al. (2006a), QSGW systematically overestimates semiconductor band gaps a little, while the dielectric constant is slightly too small van Schilfgaarde et al. (2006). It is possible to derive Eq. (10) in a straightforward manner from a norm-functional formalism. We first define a positive-definite norm functional to measure the size of pertubative contribution. Here the weight function defines the measure; is for space, spin and . For fixed , this is treated as a functional of because determines through Eq. (6) in the A. As , we can show its minimum occurs when Eq. (10) is satisfied in a straightforward manner. 
This minimization formalism clearly shows that QSGW determines for a given ; in addition, it will be useful for formal discussions of conservation laws and so on. The discussion in this paragraph is similar to that given in Ref. van Schilfgaarde et al., 2006, though we use a slightly different . Eq. (10) is derived from the requirement so that around . This condition does not necessarily determine uniquely. It is instructive to evaluate how results change when alternative ways are used to determine . In Ref. Faleev et al. (2004) we tested the following: In this form (which we denote as ‘mode-B’), the off-diagonal elements are evaluated at . The diagonal parts of Eq. (11) and Eq. (10) are the same. As noted in Ref. Faleev et al. (2004), and as discussed in Sec. III, Eqs. (10) and (11) yield rather similar results, though we have found that mode-A results compare to experiment in the most systematic way. As the self-consistency through Eq. (10) (or Eq. (11)) results in , we can attribute physical meaning to bare QP: we can use the bare QP in the independent-particle approximation Ashcroft and Mermin (1976), when, for example, modeling transport within the Boltzmann-equation Fischetti and Laux (1988). It will be possible to calculate scattering rates between bare QP given by , through calculation of various matrix elements (electron-electron, electron-phonon, and so on). The adiabatic connection path from to used in QSGW is better than the path in the Kohn-Sham theory where the eigenfunction of (Kohn-Sham Hamiltonian) evolves into the dressed QP. Physical quantities along the path starting from may not be very stable. For example, the band gap can change very much along the path (it can change from metal to insulator in some cases, e.g. in Ge and InN van Schilfgaarde et al. ; QSGW is free from this problem van Schilfgaarde et al. (2006)), even if it keeps the density along the path. [Note: Pines and Nozieres (Ref. Pines and Nozieres (1966), Sec. 1.6) use the terms ‘bare QP’ and ‘dressed QP’ differently than what is meant here. They refer to eigenstates of as ‘bare QP,’ and spatially localized QP as ‘dressed QP’ in the neutral Fermi liquid.] From a theoretical point of view, the fully sc Schöne and Eguiluz (1998); Ku and Eguiluz (2002) looks reasonable because it is derived from the Luttinger-Ward functional . This apparently keeps the symmetry of , that is, where denotes some mapping (any symmetry in Hamiltonian, e.g. time translation and gauge transformation); this clearly results in the conservation laws for external perturbations Baym and Kadanoff (1961) because of Noether’s theorem (exactly speaking, we need to start from the effective action formalism for the dynamics of Fukuda et al. (1994)). However, it contains serious problems in practice. For example, fully sc uses from ; this includes electron-hole excitations in its intermediate states with the weight of the product of renormalization factors . This is inconsistent with the expectation of the Landau-Silin QP picture Faleev et al. (2004); Bechstedt et al. (1997). In fact, as we discuss in Appendix A, the effects of factor included in are well canceled because of the contribution from the vertex; Bechstedt et al. showed the -factor cancellation by a practical calculation at the lowest order Bechstedt et al. (1997). In principle, such a deficiency should be recovered by the inclusion of the contribution from the vertex; however, we expect that such expansion series should be not efficient. 
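Returning to the QSGW cycle itself, its structure can be shown schematically. The sketch below iterates an Eq. (10)-style ("mode-A") update on a fabricated two-level model: diagonalize the current one-body Hamiltonian, evaluate an invented $\omega$-dependent self-energy at the resulting eigenvalues, build a static hermitian exchange-correlation potential from the symmetrized matrix elements, and repeat until it stops changing. A real QSGW calculation recomputes $\Sigma$ from $G$ and $W$ at every step; here everything, including the self-energy model, is fabricated for illustration.

import numpy as np

# Fabricated two-orbital model: fixed one-body part H0 and a toy Sigma(omega).
H0 = np.array([[0.0, 0.3],
               [0.3, 2.0]])

def sigma(omega: float) -> np.ndarray:
    """Invented omega-dependent, Hermitian self-energy in the original basis."""
    return np.array([[-0.4, 0.1],
                     [ 0.1, -0.8]]) - 0.05 * omega * np.eye(2)

V = np.zeros_like(H0)          # start the cycle with no xc potential
for it in range(200):
    eps, U = np.linalg.eigh(H0 + V)            # bare QP energies/eigenfunctions

    # Matrix elements <psi_i|Sigma(omega)|psi_j>, with omega at eps_i and eps_j.
    sig_at = [U.T @ sigma(e) @ U for e in eps]

    # "Mode-A" construction: average the two evaluation energies for each (i, j),
    # keep the Hermitian part, then transform back to the original basis.
    Vtilde = np.array([[0.5 * (sig_at[i][i, j] + sig_at[j][i, j])
                        for j in range(2)] for i in range(2)])
    Vtilde = 0.5 * (Vtilde + Vtilde.T)
    V_new = U @ Vtilde @ U.T

    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = 0.5 * V + 0.5 * V_new                  # simple linear mixing

print(f"converged after {it + 1} iterations")
print("bare QP energies:", np.round(eps, 4))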
Generally speaking, perturbation theories in the dressed Green’s function (as in Luttinger-Ward functional) can be very problematic because contains two different kinds of physical quantities to intermediate states: the QP part (suppressed by the factor ) and the incoherent part (e.g. plasmon-related satellites). Including the sum of ladder diagrams into via the Bethe-Salpeter equation should be a poorer approximation if is used instead of , because the one-particle part is suppressed by factors; also the incoherent part can give physically unclear contributions. The same can be said about the -matrix treatment Springer et al. (1998). Such methods have clear physical interpretation in a QP framework, i.e. when the expansion is through . A similar problem is encountered in theories such as “dynamical mean field theory”+ Biermann et al. (2003), where the local part of the proper polarization function is replaced with a “better” function which is obtained with the Anderson impurity model. This question, whether the perturbation should be based on , or on , also appeared when Hedin obtained an equation to determine the Landau QP parameters; see Eq. (26.12) in Ref. Hedin (1965). As we will show in Sec. III (see Ref. van Schilfgaarde et al. (2006) also), QSGW systematically overestimates band gaps, consistent with systematic underestimation of . This looks reasonable because does not include the electron-hole correlation within the RPA. Its inclusion would effectively reduce the pair excitation energy in its intermediate states. If we do include such correlations for at the level of the Bethe-Salpeter equation, we will have an improved version of QSGW. However, the QPE obtained from with such a corresponds to the approximation, from the perspective of the approximation, as used by Mahan and Sernelius Mahan and Sernelius (1989); the contribution from is neglected. In order to include the contribution properly, we need to use the self-energy derived from the functional derivative of as shown in Eq. (21) in the next section, where we need to include the proper polarization which includes such Bethe-Salpeter contributions; then we can include the corresponding . It looks complicated, but it will be relatively easy to evaluate just the shift of QPE while neglecting the change of QPeig; we just have to evaluate the change of numerically, when we add (or remove) an electron to . However, numerical evaluation of these contributions is demanding, and beyond the scope of this paper.

i.3 Total energy

Once is given, we can calculate the total energy based on the adiabatic connection formalism Fuchs and Gonze (2002); Miyake et al. (2002); Kotani (1998); Fukuda et al. (1994). Let us imagine an adiabatic connection path where the one-body Hamiltonian evolves into the total Hamiltonian , which is written as is also defined with instead of in Eq. (14). We use standard notation for the field operators , spin index , and external potential . We omit spin indexes below for simplicity. A path of adiabatic connection can be parametrized by as . Then the total energy is written as where is the ground state for . We define . This path is different from the path used in DFT, where we take a path starting from to while keeping the given density fixed. Along the path of the adiabatic connection, the Green’s function changes from to . Because of our minimum-perturbation construction, Eq. (10), the QP parts (QPeig and QPE) contained in are well kept by .
If along the path is almost the same as , plus the second term in the RHS of Eq. (16) is reduced to (this is used in below). The last term on the RHS of Eq. (16) is given as , where Here we used . We define the 1st-order energy as the total energy neglecting : where subscript 0 means that we use instead of (and same for ) in the definition of , and ; . This is the HF-like total energy, but with the QPeig given by . is written as where is the proper polarization function for the ground state of . The RPA makes the approximation ( is simply expressed as below). The integral over is then trivial, and denotes the RPA total energy. is given by the product of non-interacting Green’s functions , where is calculated from . Thus we have obtained the total energy expression for QSGW. As we have the smooth adiabatic connection from to in QSGW(from bare QP to dressed QP) as discussed in previous section, we can expect that we will have better total energy than where we use the KS eigenfunction and eigenvalues (where the band gap can change much from bare QP to dressed QP). will have characteristics missing in the LDA, e.g. physical effects owing to charge fluctuations such as the van der Waals interaction, the mirror force on metal surfaces, the activation energy, and so on. However, the calculation of is numerically very difficult, because so many unoccupied states are needed. Also deeper states can couple to rather high-energy bands in the calculation of . Few calculations have been carried out to date Fuchs and Gonze (2002); Miyake et al. (2002); Aryasetiawan et al. (2002); Marini et al. (2006). As far as we tested within our implementation, avoiding systematic errors is rather difficult. In principle, the expression is basis-independent; however, it is not so easy to avoid the dependence; for example, when we change the lattice constant in a solid, artificially changes just because of the changes in the basis sets. From the beginning, very high-level numerical accuracy for required; very slight changes of results in non-negligible error when the bonding originates from weak interactions such as the van der Waals interaction. These are general problems in calculating the RPA-level of correlation energy, even when evaluated from Kohn-Sham eigenfunctions. QSGW with Eq. (10) or Eq. (11) can result in multiple self-consistent solutions for in some cases. This situation can occur even in HF theory. For any solution that satisfies the self-consistency as Eq. (10) or Eq. (11), we expect that it corresponds to some metastable solution. Then it is natural to identify the lowest energy solution as the ground state, that is, we introduce a new assumption that “the ground state is the solution with the lowest total energy among all solutions”. In other words, the QSGW method may be regarded as a construction that determines by minimizing under the constraint of Eq. (10) (or Eq. (11)). This discussion shows how QSGW is connected to a variational principle. The true ground state is perturbatively constructed from the corresponding . However, total energy minimization is not necessary in all cases, as shown in Sec. III. We obtain unique solutions (no multiple solutions) just with Eq. (10) or Eq. (11) (Exactly speaking, we can not prove that multiple solutions do not exist because we can not examine all the possibilities. However, we made some checks to confirm that the results are not affected by initial conditions). In the cases we studied so far, multiple solutions have been found, e.g. 
in GdN, YH and Ce Chantis et al. (2006b); Sakuma et al. (2006). These cases are related to the metal-insulator transition, as we will detail elsewhere. As a possibility, we can propose an extension of QSGW, namely to add a local static one-particle potential as a correction to Eq. (10). The potential is controlled to minimize . This is a kind of hybridization of QSGW with the optimized effective potential method Kotani (1998). See Appendix B for further discussion as to why the total energy minimization as functional of is not a suitable way to determine . Finally, we discuss an inconsistency in the construction of the electron density within the QSGW method. The density used for the construction of in the self-consistency cycle is written as , which is the density of the non-interacting system with Hamiltonian . On the other hand, the density can be calculated from by the functional derivative with respect to . Since is a functional of , we write it as ; its derivative gives the density . The difference in these two densities is given as where is the static non-local potential defined in Eq. (10) or Eq. (11). This difference indicates the size of inconsistency in our treatment; from the view of the force theorem (Hellman-Feynman theorem), we need to identify as the true density, and for as the QP density. We have not evaluated the difference yet. Ii methodological details ii.1 Overview In the full-potential linear muffin-tin orbital method (FP-LMTO) and its generalizations, eigenfunctions are expanded in linear combinations of Bloch summed muffin-tin orbitals (MTO) of wave vector as is the band index; is defined by the (eigenvector) coefficients and the shape of the . The MTO we use here is a generalization of the usual LMTO basis, and is detailed in Refs.van Schilfgaarde et al. ; Methfessel et al. (2000). identifies the site where the MTO is centered within the primitive cell, identifies the angular momentum of the site. There can be multiple orbitals per ; these are labeled by . Inside a MT centered at , the radial part of is spanned by radial functions (, or , , ) at that site. Here is the solution of the radial Schrödinger equation at some energy (usually, for channels with some occupancy, this is chosen to be at the center of gravity for occupied states). denotes the energy-derivative of ; denotes local orbitals, which are solutions to the radial wave equation at energies well above or well below . We usually use two or three MTOs for each for valence electrons (we use just one MTO for high channels with almost zero occupancy). In any case these radial functions are represented in a compact notation . is a compound index labeling and one of the , , triplet. The interstitial is comprised of linear combinations of envelope functions consisting of smooth Hankel functions, which can be expanded in terms of plane waves Bott et al. (1998). Thus in Eq. (24) can be written as a sum of augmentation and interstitial parts where the interstitial plane wave (IPW) is defined as and are Bloch sums of T and G are lattice translation vectors in real and reciprocal space, respectively. Eq. (25) is equally valid in a LMTO or LAPW framework, and eigenfunctions from both types of methods have been used in this scheme Usuda et al. (2002); Friedrich et al. (2006). Here we restrict ourselves to (generalized) LMTO basis functions, based on smooth Hankel functions. Throughout this paper, we will designate eigenfunctions constructed from MTOs as VAL. 
Below them are the core eigenfunctions which we designate as CORE. There are two fundamental distinctions between VAL and CORE: first, the latter are constructed independently by integration of the spherical part of the LDA potential, and they are not included in the secular matrix. Second, the CORE eigenfunctions are confined to MT spheres cor . CORE eigenfunctions are also expanded using Eq. (25) in a trivial manner ( and only one of is nonzero); thus the discussion below applies to all eigenfunctions, VAL and CORE. In order to obtain CORE eigenfunctions, we calculate the LDA Kohn-Sham potential for the density given by , and then solve the radial Schrödinger equation. In other words, we substitute the nonlocal potential with its LDA counterpart to calculate CORE. More details of the core treatment are given in Sec. II.2. We need a basis set (referred to as the mixed basis) which encompasses any product of eigenfunctions. It is required for the expansion of the Coulomb interaction (and also the screened interaction ) because it connects the products as . Through Eq. (25), products can be expanded by in the interstitial region because . Within sphere , products of eigenfunctions can be expanded by , which is the Bloch sum of the product basis (PB) , which in turn is constructed from the set of products . For the latter we adapted and improved the procedure of Aryasetiawan Aryasetiawan and Gunnarsson (1994). As detailed in Sec. II.3, we define the mixed basis , where the index classifies the members of the basis. By construction, is a virtually complete basis, and efficient one for the expansion of products. Complete information to generate the A self-energy are matrix elements , the eigenvalues , the Coulomb matrix , and the overlap matrix . (The IPW overlap matrix is necessary because for .) The Coulomb interaction is expanded as where we define and the polarization function shown below are expanded in the same manner. The exchange part of is written in the mixed basis as It is necessary to treat carefully the Brillouin zone (BZ) summation in Eq. (31) and also Eq. (34) because of the divergent character of at . It is explained in Sec. II.5. The screened Coulomb interaction is calculated through Eq. (3), where the polarization function is written as When time-reversal symmetry is assumed, can be simplified to read We developed two kinds of tetrahedron method for the Brillouin zone (BZ) summation entering into . One follows the technique of Rath and Freeman Rath and Freeman (1975). The other, which we now mainly use, first calculates the imaginary part (more precisely the anti-hermitian part) of , and determines the real part via a Hilbert transformation (Kramers-Krönig relation); see Sec. II.4. The Hilbert transformation approach significantly reduces the computational time needed to calculate when a wide range of is needed. A similar method was developed by Miyake and Aryasetiawan Miyake and Aryasetiawan (2000). The correlation part of is where ( must be used for occupied states, for unoccupied states). Sec. II.6 explains how the -integration is performed. ii.2 Core treatment Contributions from core (or semi-core) eigenfunctions require special cares. In our , CORE is divided into groups, CORE1 and CORE2. Further, VAL can be divided into “core” and “val”. Thus all eigenfunctions are divided into the following groups: VAL states are computed by the diagonalization of a secular matrix for MTOs; thus they are completely orthogonal to each other. 
VAL can contain core eigenfunctions we denote as “core”. For example, we can treat the Si 2 core as “core”. Such states are reliably determined by using local orbitals, tailored to these states van Schilfgaarde et al. . CORE1 is for deep core eigenfunctions. Their screening is small, and thus can be treated as exchange-only core. The deep cores are rigid with little freedom to be deformed; in addition, CORE2+VAL is not included in these cores. Thus we expect they give little contribution to and to for CORE2+VAL. Based on the division of CORE according to Eq. (35), we evaluate as (We only calculate the matrix elements , where and belongs to CORE2+VAL, not to CORE1.) We need to generate two kinds of PB; one for , the other for and . As explained in Sec. II.3, these PB should be chosen taking into account what combination of eigenfunction products are important. States CORE2+VAL are usually included in , which determines . Core eigenfunctions sufficiently deep (more than 2 Ry below ), are well-localized within their MT spheres. For such core eigenfunctions, we confirmed that results are little affected by the kind of core treatments (CORE1, CORE2, and “core” are essentially equivalent); see Ref. van Schilfgaarde et al. . As concerns their inclusion in the generation of and , Eq. (36) means that not only VAL but also CORE2 are treated on the same footing as “val”. However, we have found that it is not always possible to reliably treat shallow cores (within 2 Ry below ) as CORE2. Because CORE  eigenfunctions are solved separately, the orthogonality to VAL is not perfect; this results in a small but uncontrollable error. The nonorthogonality problem is clearly seen in as : cancellation between denominator and numerator becomes imperfect. (We also implemented a procedure that enforced orthogonalization to VAL states, but it would sometimes produce unphysical shapes in the core eigenfunctions.) Even in LDA calculations, MT spheres can be often too small to fully contain a shallow core’s eigenfunction. Thus we now usually do not use CORE2; for such shallow cores, we usually treat it as “core” VAL; or as CORE1 when they are deep enough. We have carefully checked and confirmed the size of contributions from cores by changing such grouping, and also intentional cutoff of the core contribution to and so on; see Ref. van Schilfgaarde et al. . ii.3 Mixed basis for the expansion of A unique feature of our implementation is its mixed basis set. This basis, which is virtually complete for the expansion of the products , is central for the efficient expansion of products of relatively localized functions, and essential for proper treatment of very localized states such as core states or systems. Products within a MT sphere are expanded by the PB procedure originally developed by Aryasetiawan Aryasetiawan and Gunnarsson (1994). We use an improved version explained here. For the PB construction we start from the set of radial functions , which are used for the augmentation for in a MT site. is the principal angular momentum, is the other index (e.g. we need for , and for in addition to for local orbitals and cores). The products can be re-ordered by the total angular momentum . Then the possible products of the radial functions are arranged by . To make the computation efficient, we need to reduce the dimension of the radial products as follows: • Restrict the choice of possible combinations and . 
In the calculation of , one is used for occupied states, the other for unoccupied states Aryasetiawan and Gunnarsson (1994). In the calculation of , appears, with coming from . Thus all possible products can appear; however, we expect the important contributions come from low energy parts. Thus, we define two sets and as the subset of . includes mainly for occupied states (or a little larger sets), and is plus some for unoccupied states (thus ). Then we take all possible products of for and . Following Aryasetiawan Aryasetiawan and Gunnarsson (1994), we usually do not include -kinds of radial functions in these sets (we have checked in a number of cases that their inclusion contributes little). • Restrict to be less than some cutoff . removing expensive product basis with high . In our experience, we need (maximum with non-zero (or not too small) electron occupancy) is sufficient to predict band gaps to eV, e.g. we need to take for transition metal atoms. • Reduce linear dependency in the radial product basis. For each , we have several radial product functions. We calculate the overlap matrix, make orthogonalized radial functions from them, and omit the subspace whose overlap eigenvalues are smaller than some specified tolerance. The tolerance for each can be different, and typically tolerances for higher can be coarser than for lower . This procedure yields a product basis, to functions for a transition metal atom, and less for simple atoms (see Sec. III.1 for GaAs). There are two kinds of cutoffs in the IPW part of the mixed basis: for eigenfunctions Eq. (25), and for the mixed basis in the expansion of . In principle, must be to span the Hilbert space of products. However, it is too expensive. The computational time is strongly controlled by the size of the mixed basis. Thus we usually take small , rather smaller than (the computational time is much less strongly controlled by ). As we illustrate in Sec. III.1, 0.1 eV level accuracy can be realized for cutoffs substantially below . For the exchange part of CORE1, we need to construct another PB. It should include products of CORE1 and VAL. We construct it from , where
The Victor2.0 library (Virtual Construction Toolkit for Proteins) is an open-source project dedicated to providing a C++ implementation of tools for analyzing and manipulating protein structures. Victor is composed of four main modules:
• Biopool - BIOPolymer Object Oriented Library. Generates the protein object and provides useful methods to manipulate the structure.
• Align - ALIGNment generation and analysis.
• Energy - A library to calculate statistical potentials from protein structures.
• Lobo - LOop Build-up and Optimization. Ab initio prediction of missing loop conformations in protein models.
The Biopool class implementation follows the composite design pattern, and for a complete description of the class hierarchy we recommend the Doxygen documentation. Without going into implementation details, a Protein object is just a container for vectors representing chains. Each vector has 2 elements: the Spacer and the LigandSet. The Spacer is the container for AminoAcid objects whereas the LigandSet is a container for all other molecules and ions, including DNA/RNA chains. Ultimately all molecules, both in the Spacer and in the LigandSet, are collections of Atom objects. The main feature in Biopool is that each AminoAcid object in the Spacer is connected to its neighbours by means of one rotational vector plus one translational vector. This implementation makes it easy to modify the protein structure, and many functions are provided to modify/perturb/transform the relative positions of residues efficiently through these rotation and translation vectors. [Figure: vector representation connecting neighbouring amino acids.] For more detail on how to use it, see the Biopool features section.
The package comes with several options. The necessary data files (e.g. substitution matrices) are provided. The most important feature of the package is the modular object oriented design, which should allow a moderately experienced C++ programmer to rapidly implement and test new features for sequence alignment. Inside this package, you can use different weighting schemes, scoring functions, ways to penalize gaps, and types of structural information. The Align library was designed to be modular and easy to expand. There are four basic components which are needed to use the alignment methods. The four main components are:
• AlignmentData - Stores information on sequence (SequenceData) and, when needed, secondary structure (SecSequenceData).
• ScoringScheme - Stores information on how a single position shall be scored in the alignment (it requires both AlignmentData and Blosum objects to be initialized); possible specializations of this class are:
  • ScoringS2S - sequence-to-sequence
  • ScoringP2S - profile-to-sequence
  • ScoringP2P - profile-to-profile
• Align - The alignment algorithm. It requires both AlignmentData and ScoringScheme objects, and can be specialized in:
  • SWAlign - local (Smith-Waterman)
  • NWAlign - global (Needleman-Wunsch)
  • FSAlign - glocal/overlap (Free-Shift).
• Blosum - The substitution matrix.
If P2S or P2P scoring is used, the class Profile stores the necessary information to generate the profile from a multiple sequence alignment. Two advanced options, which may be useful in certain circumstances, are supported by the software:
1. ReverseScoring - This allows the estimation of the statistical significance of the raw alignment score by testing it against an ensemble of alignments based on the reversed sequence, in the form of a Z-score.
2. Suboptimal alignments - Rather than generating a single solution, the user may decide on a number of different, alternative, suboptimal alignments to be generated.
[Figure: diagram of the complete set of Align classes.] For more detail on how to use it, see the Align features section.
Energy functions are used in a variety of roles in protein modelling. An energy function precise enough to always discriminate the native protein structure from all possible decoys would not only simplify the protein structure prediction problem considerably, it would also increase our understanding of the protein folding process itself. If feasible, one would like to use quantum mechanical models, being the most detailed representation, to calculate the energy of a protein. It can theoretically be done by solving the Schrödinger equation. This equation can be solved exactly for the hydrogen atom, but is no longer trivial for three or more particles. In recent years it has become possible to approximately solve the Schrödinger equation for systems of up to a hundred atoms with the Hartree-Fock or self-consistent field approximations. Their main idea is that the many-body interactions are reduced to several two-body interactions. Energy functions are important to all aspects of protein structure prediction, as they give a measure of confidence for optimization. An ideal energy function would also explain the process of protein folding. The most detailed way to calculate energies is with quantum mechanical methods. These are, to date, still overly time consuming and impractical. Two alternative classes of functions have been developed: force fields and knowledge-based potentials. Force fields (e.g. AMBER) are empirical models approximating the energy of a protein with bonded and non-bonded interactions, attempting to describe all contributions to the total energy. They tend to be very detailed and are prone to yield many erroneous local minima. An alternative is given by knowledge-based potentials, where the “energy” is derived from the probability of a structure being similar to interaction patterns found in the database of known structures. This approach is very popular for fold recognition, as it produces a smoother “global” energy surface, allowing the detection of a general trend. Abstraction levels for knowledge-based potentials vary greatly, and several functional forms have been proposed. The energy functions presented in the package are designed for use in optimization procedures. Their main feature is applicability in the context of the protein classes implemented in the package. It should be possible to invoke the energy calculation with any structure from all programs. At the same time the parameters of the energy models had to be stored externally to allow their rapid modification. With these considerations in mind, the package Energy was designed to collect the classes and programs dealing with energy calculation. The main design decision was to use the “strategy” design pattern from Gamma et al. The abstract class Potential was defined to provide a common interface for energy calculation. It contains the necessary methods to load the energy parameters during initialization of an object. Computing the energy value for objects of the Atom and Spacer classes as well as a combination of both is allowed.
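As an illustration of the “strategy” design pattern just described — not the actual Victor C++ interfaces; the class and method names here are invented for the sketch and it is written in Python for brevity:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
import math

@dataclass
class Residue:
    """Stand-in for a Biopool-style amino acid; only what the toy term below needs."""
    name: str
    x: float
    y: float
    z: float
    def distance_to(self, other):
        return math.dist((self.x, self.y, self.z), (other.x, other.y, other.z))

class Potential(ABC):
    """'Strategy' interface: parameters are loaded once, any structure can be scored."""
    def __init__(self, params=None):
        self.params = params or {}      # in Victor the parameters are stored externally
    @abstractmethod
    def energy(self, chain):
        ...

class ContactPotential(Potential):
    """Toy knowledge-based term: reward non-bonded residue pairs that are in contact."""
    def energy(self, chain, cutoff=6.5):
        e = 0.0
        for i, a in enumerate(chain):
            for b in chain[i + 2:]:                     # skip bonded neighbours
                if a.distance_to(b) < cutoff:
                    e += self.params.get((a.name, b.name), -0.1)
        return e

chain = [Residue("ALA", 0.0, 0.0, 3.0 * i) for i in range(5)]
print(ContactPotential().energy(chain))                 # any Potential can score any chain
```

The point of the pattern is the last line: scoring code only sees the abstract interface, so new energy terms can be swapped in without touching the structure classes.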
For more detail on how to use it, see the Energy features section.
Current database methods using solely experimentally determined loop fragments do not cover all possible loop conformations, especially for longer fragments. On the other hand it is not feasible to use a combinatorial search of all possible torsion angle combinations. For an algorithm to be efficient, a compromise has to be found. One improvement in ab initio loop modelling is the use of look-up tables (LUTs) to avoid the repetitive calculation of loop fragments. LUTs can be generated once and stored, only requiring loading during loop modelling. Using a set of LUTs reduces the computational time significantly. The next problem is how to best explore the conformational space. Especially for longer loops, it is useful to generate a set of different candidate loops to exclude improbable ones by ranking. The method should therefore be able to select different loops by global exploration of the conformational space independently of starting conditions. Methods building the loop stepwise from one anchor residue to the other bias the solutions depending on choices made for the conformation of the first few residues. Rather, a global approach to the optimization is required. This criterion is fulfilled by the divide & conquer algorithm, which is recursively described by the following steps:
1. if start = end, compute the result;
2. else use the algorithm for: (a) start to end/2, (b) end/2 to end;
3. combine the partial solutions into the full result.
Applied to loop modelling, the basic idea of a divide & conquer approach is to divide the loop into two segments of half the original length, choosing a good central position. The segments can be recursively divided and transformed, until the problem is small enough to be solved analytically (conquered). The positions of main-chain atoms for segments of a single amino acid can be calculated analytically, using the vector representation. Longer loop segments can be stored in LUTs and their coordinates extracted by geometrically transforming the coordinates for single amino acids back into the context of the initial problem. To this end we need to define an unambiguous way to represent the conformation of any given residue along the chain and a set of operations to concatenate and decompose loop segments. For more detail on how to use Lobo, see the Lobo features section.
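A purely structural sketch of the divide & conquer recursion and of the role of the look-up table follows; it is a toy in Python that operates on residue indices only, whereas Lobo works on atomic coordinates and vector transforms.

```python
from functools import lru_cache

@lru_cache(maxsize=None)             # plays the role of the LUT: each sub-fragment is solved once
def build_segment(start, end):
    """Toy divide & conquer: nested decomposition of a loop spanning residues start..end."""
    if end - start <= 1:
        return (start, end)          # base case: a single residue, placed analytically
    mid = (start + end) // 2         # choose a central split position
    return (build_segment(start, mid), build_segment(mid, end))

print(build_segment(0, 8))
# ((((0, 1), (1, 2)), ((2, 3), (3, 4))), (((4, 5), (5, 6)), ((6, 7), (7, 8))))
```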
Jun 07, 2019 Nuclear-physics and quantum-chemistry theoreticians join forces to accurately predict the properties of the atomic nucleus Predicting properties of, e.g., molecules or atomic nuclei from first principles requires to solve the Schrödinger equation with high accuracy. The computing cost to find exact solutions of the Schrödinger equation scales exponentially with the number of particles constituting the system. Thus, with nuclei composed of tens or hundreds of nucleons, it necessitates accurate approximate methods of lower computing cost. However, such methods can be applied to a limited number of systems: the weakly correlated ones. Consequently, a universally applicable method is still missing. Employing a novel formalism recently developed at Irfu/DPhN [1], highly accurate solutions of the Schrödinger equation – in the context of the exactly solvable Richardson model - have been obtained, independently of the weakly- to strongly-correlated character of the system. This work has been performed in collaboration with ab initio quantum chemists from Rice University. This exciting new achievement, paving the way for precise ab initio computations of molecular or nuclear properties of a large number of systems, was recently published in Physical Review C [2] and highlighted as the Editor’s suggestion. Molecules and atomic nuclei are finite mesoscopic quantum systems, i.e. they are neither microscopic systems made out of a few particles nor macroscopic systems made out of a very large number of them, which makes their description particularly arduous. While the degrees of freedom at play in the former are A electrons interacting via the Coulomb force, the latter are effectively modelled in terms of A strongly interacting neutrons and protons. While properties of these systems strongly depend on the nature of their constituents and interactions, the physics of the system is governed by the same dynamical equation, i.e. the A-body Schrodinger equation. As a result, the difficulty to make accurate predictions is mostly independent of the nature of the particles and of their interactions. Figure 1: Schematic representation of the last occupied and first empty shell in the reference state obtained via a zeroth-order description of the system. Arrows embody the processes by which correlations are captured beyond that zeroth-order description. Weak versus strong correlations The fact that a molecule or an atomic nucleus displays weak or strong correlations is empirically related to the picture at play in the zeroth-order description of the system. In this description, the A particles are placed on the A lowest quantum levels whose degree of degeneracy, i.e. the number of fermions (electrons or nucleons) that can sit on a given level or “shell”, is related to the symmetries displayed by the physical system. If A is such that all levels including (but) the highest one are fully occupied, the system is said to be of “closed-shell” (“open-shell”) character and typically displays weak (strong) correlations. These two situations are schematically illustrated in Fig. 1. As a result of the two-fold degeneracy of electronic shells, most molecules are closed-shell in their ground state but do transition through an open-shell-like state when breaking chemical bonds. Contrarily, rotational symmetry induces a larger degree of degeneracy for nuclear shells, making the large majority of nuclear ground states to be of open-shell character. 
Furthermore, only the nuclei exhibiting both neutron and proton closed shells, i.e. so-called doubly-magic nuclei, are weakly correlated. The dominance of open-shell nuclei is illustrated in Fig. 2. One key manifestation of strong correlations in open-shell nuclei relates to their superfluid character.

Expanding the exact solution

1) From exponential numerical scaling to polynomial numerical scaling

Solving the A-body Schrödinger equation, whose naïve cost grows exponentially with A, constitutes a highly non-trivial problem from the formal and computational perspectives. So-called “brute force” methods attacking this cost directly are thus limited to systems with A ≤ 15. To describe heavier systems, one must resort to approximate methods whose cost is polynomial with A. Whenever a system is of closed-shell character, polynomially-scaling methods can be designed by expanding the exact solution of the A-body Schrödinger equation around a Slater determinant, i.e. the zeroth-order reference state discussed above. Indeed, the fact that the reference state is not degenerate with respect to elementary fermion excitations denoted by red arrows in Fig. 1, i.e. the promotion of nucleons from occupied to unoccupied shells in the reference state, makes it possible to capture weak correlations in a controlled and meaningful way. All polynomially-scaling methods are based on this principle. Over the years, several polynomially-scaling methods delivering highly accurate results for weakly-correlated systems have been designed and applied. One typical example, constituting the gold standard in ab initio quantum chemistry and now flourishing in nuclear physics, is the so-called coupled cluster (CC) formalism. However, these methods typically fail in the presence of strong correlations, such that research activities currently focus on the design of novel formalisms that can be universally applied to A-body systems with 2 ≤ A ≤ a few hundred, independently of whether they are weakly or strongly correlated.

Figure 2: Low portion of the Segrè chart representing known and predicted-to-exist atomic nuclei below the Barium element (Z=56). N (Z) denotes the neutron (proton) number of a given nucleus. Vertical (horizontal) dashed lines locate the neutron (proton) closed shells (“magic numbers”). Bullets at the crossing of these lines embody doubly closed-shell (doubly “magic”) nuclei that are typically weakly correlated. The remaining nuclei are of singly or doubly open-shell character and are thus typically strongly correlated.

Figure 3: Schematic shell sequence and shell filling for an open-shell nucleus in a symmetry-conserving (left) and a symmetry-breaking (right) zeroth-order description.

2) From weakly to strongly correlated

Whenever the system is open-shell, the possibility to promote fermions from occupied to unoccupied levels at no energy cost (see Fig. 1) makes the methods ill-defined and incapable of capturing the associated strong correlations. A powerful way to bypass this impediment while maintaining the polynomial cost of the method is to authorize one or several symmetries of the system to spontaneously break in the zeroth-order description. Doing so lifts the degeneracy of elementary excitations such that the system effectively acquires a closed-shell character as schematically illustrated in Fig. 3.
Starting from the newly obtained reference state, the solution of the Schrödinger equation can be safely expanded and strong correlations captured both via the new reference state and the expansion around it. The case of present interest consists of authorizing the so-called U(1) global gauge symmetry, an abstract symmetry associated with the fact that nuclei contain specific numbers of protons and neutrons, to break in order to capture the superfluid character of open-shell nuclei via the use of a so-called Bogoliubov reference state. That this reference state breaks U(1) symmetry means that it does not contain a sharp number of neutrons and/or protons but rather mixes several neighboring values.

Restoring the symmetry

In recent years, theoreticians from Irfu/DPhN have developed three different expansion methods allowing U(1) symmetry to break, thus producing the first systematic ab initio calculations of mid-mass singly open-shell nuclei [3]. One of these three methods, coined as Bogoliubov CC (BCC) theory [4], generalizes standard CC theory applicable to closed-shell systems. While the exact solution necessarily displays a well-defined number of nucleons, approximate solutions obtained through these expansion methods do not, as mentioned above. Though such a breaking can be real in macroscopic systems, it is only fictitious in finite quantum systems, such that obtaining an accurate solution eventually requires restoring the broken symmetry, i.e. the specific number of nucleons in the present case. This task, a long-term challenge in many-body theory, was recently achieved by theoreticians from Irfu/DPhN via the formulation of the so-called particle-number-projected Bogoliubov CC (PBCC) theory [1]. In this formalism, U(1) symmetry is broken to authorize a meaningful expansion and further restored to obtain a solution with the correct symmetry that fully captures strong, i.e. superfluid, correlations.

Richardson model

The best way to first gauge the performance of a novel many-body method is to use it to describe a model problem whose exact solutions are known. This was achieved for PBCC in collaboration with quantum chemists from Rice University [2] on the basis of the celebrated Bardeen-Cooper-Schrieffer (BCS) pairing problem, whose exact solutions were provided long ago by Richardson [5]. This solvable problem is perfectly suited given that its solutions transition, when tuning the interaction strength between the particles in the model, from a normal (weakly correlated) to a superfluid (strongly correlated) state. As illustrated in Fig. 4, the newly designed PBCC method (coined as Projected BCS-CC on the figure) delivers essentially exact ground-state energies for all interaction strengths, improving decisively over the unprojected BCC method (coined as BCS-CC) as well as over the unprojected or projected zeroth-order descriptions (respectively coined as BCS and Projected BCS). PBCC provides these results while retaining a polynomial cost not too much larger than the unprojected BCC method.

Figure 4: Ground-state energy errors (arbitrary units) against the exact solution of the Richardson many-body problem as a function of the interaction strength. The critical point denotes the phase transition between the normal and the superfluid phases. Polynomial methods that do not break U(1) symmetry (results not reproduced here) cannot deliver any sensible results in the superfluid phase.
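For readers who want to see the model concretely, the following is a minimal brute-force illustration of the reduced BCS (Richardson) pairing problem referred to above, diagonalized exactly in the pair (seniority-zero) sector. It is not the PBCC machinery of Refs. [1,2]; the picket-fence level spacing, the number of pairs and the coupling strengths are arbitrary choices made for the sketch.

```python
import itertools
import numpy as np

def richardson_exact(levels, n_pairs, g):
    """Ground-state energy of H = sum_p 2 e_p b_p^+ b_p - g sum_{pq} b_p^+ b_q
    (reduced BCS pairing) by dense diagonalization in the pair basis."""
    basis = list(itertools.combinations(range(len(levels)), n_pairs))
    index = {occ: i for i, occ in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for i, occ in enumerate(basis):
        H[i, i] = 2.0 * sum(levels[p] for p in occ) - g * n_pairs
        for q in occ:                                   # scatter a pair q -> p
            for p in range(len(levels)):
                if p not in occ:
                    new = tuple(sorted(set(occ) - {q} | {p}))
                    H[index[new], i] += -g
    return np.linalg.eigvalsh(H)[0]

levels = np.arange(8, dtype=float)                      # picket-fence spectrum e_p = p
for g in (0.1, 0.5, 1.0):                               # weak to strong pairing
    e_exact = richardson_exact(levels, n_pairs=4, g=g)
    e_ref = 2.0 * levels[:4].sum() - g * 4              # uncorrelated reference state
    print(f"g = {g:4.2f}:  E_exact = {e_exact:8.4f}   E_reference = {e_ref:8.4f}")
```

For weak g the exact energy stays close to the uncorrelated reference; as g grows the two separate, which is the weak-to-strong correlation crossover the projected methods are designed to handle.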
The combined benefit of (i) breaking the symmetry before (ii) capturing correlations beyond the zeroth-order description while (iii) restoring the symmetry is clearly illustrated in Fig. 4. In summary, the theoreticians from Irfu/DPhN have designed and tested a novel many-body formalism delivering highly accurate solutions of the exactly solvable Richardson many-body problem in all situations ranging from weak to strong correlations. The next step consists in implementing the many-body method to compute ab initio properties of molecules and atomic nuclei independently of their weakly or strongly correlated character.

[1] T. Duguet, A. Signoracci, J. Phys. G: Nucl. Part. Phys. 44 (2016) 015103
[2] Y. Qiu, T. M. Henderson, T. Duguet, G. E. Scuseria, Phys. Rev. C99 (2019) 044301
[3] V. Somà, C. Barbieri, T. Duguet, Phys. Rev. C87 (2013) 011303(R)
[4] A. Signoracci, T. Duguet, G. Hagen, G. R. Jansen, Phys. Rev. C91 (2015) 064320
[5] R. W. Richardson, Phys. Lett. 3 (1963) 277; Phys. Rev. 141 (1966) 949

Thomas DUGUET
lördag 29 juni 2013 The Linear Scalar MultiD Schrödinger Equation as Pseudo-Science If we are still going to put up with these damn quantum jumps, I am sorry that I ever had anything to do with quantum theory. (Erwin Schrödinger) The pillars of modern physics are quantum mechanics and relativity theory, which both however are generally acknowledged to be fundamentally mysterious and incomprehensible to even the sharpest minds and thus gives modern physics a shaky foundation. The mystery is so deep that it has been twisted into a virtue with the hype of string theory representing maximal mystery. The basic trouble with quantum mechanics is its multi-dimensional wave function solution depending on 3N space coordinates for an atom with N electrons,  as solution to the linear scalar multi-dimensional Schrödinger equation, which cannot be given a real physical meaning because reality has only 3 space coordinates. The way out to save the linear scalar multidimensional Schrödinger equation, which was viewed to be a gift by God and as such was untouchable, was to give the multidimensional wave function an  interpretation as the probability of the N-particle configuration given by the 3N coordinates. Quantum mechanics based on the linear scalar Schrödinger equation was thus rescued at the cost of making the microscopic atomistic world into a game of roulette asking for microscopics of microscopics as contradictory reduction in absurdum. But God does not write down the equations describing the physics of His Creation, only human minds and if insistence on a linear scalar (multidimensional) Schrödinger wave equation leads to contradiction, the only rational scientific attitude would be to search for an alternative, most naturally as a system of non-linear wave equations in 3 space dimensions, which can be given a deterministic physical meaning. There are many possibilities and one of them is explored as Many-Minds Quantum Mechanics in the spirit of Hartree. It is well known that macroscopic mechanics including planetary mechanics is not linear, and there is no reason to expect that atomistic physics is linear and allows superposition. There is no rational reason to view the linear scalar multiD Schrödinger equation as the basis of atomistic physics (other than as a gift by God which cannot be questioned), and physics without rational reason is unreasonable and thus may represent pseudo-science. The linear scalar multiD Schrödinger equation with an incredibly rich space of solutions beyond reason, requires drastic restrictions to represent anything like real physics.  Seemingly out of the blue, physicists  have come to agree that God can play only with fully symmetric (bosons) or antisymmetric (fermions) wave functions with the Pauli Exclusion Principle as a further restriction. But nobody has been able to come with any rational reasons for the restrictions to symmetry, antisymmetry and exclusion. According to Leibniz Principle of Sufficient Reason, this makes these restrictions into ad hoc pseudo-science.   måndag 17 juni 2013 Welcome Back Reality: Many-Minds Quantum Mechanics The new book Farewell to Reality by Jim Baggott gets a positive reception on Not Even Wrong (and accordingly a negative by Lubos). The main message of the book is that modern physics (SUSY, GUTS, Superstring/M-theory, the multiverse) is no longer connected to reality in the sense that experimental support is no longer possible and therefore is not considered to even be needed. 
But science without connection to reality is pseudo-science, and so how can it be that physics classically considered to be the model of all sciences, in modern times seems to have evolved into pseudo-science? Let's take a look back and see if we can find an answer: My view is that the departure from reality started in the 1920s with the introduction of the multi-dimensional wave function as solution to a linear scalar Schrödinger equation, with 3N space dimensions for an atom with N electrons. Such a wave function does not describe real physics, since reality has only 3 space dimensions and the only way out insisting on the truth of the linear Schrödinger equation as given by God,  was to give the wave function a statistical interpretation. But that meant a non-physical and non-real interpretation, since there is no reason to believe that real physics can operate like an insurance company filled with experts doing statistics, in Einstein's words expressed as "God does not play dice".  The statistical interpretation was so disgusting to Schrödinger that he gave up further exploration of the quantum mechanics he had invented.  Schrödinger believed that the wave function had a physical meaning as a description of the electron distribution around a positive kernel of an atom. A non-linear variant of the Schrödinger equation in the form of a system of N equations in 3 space dimensions for an N-electron atom was early on suggested by Hartree as a method to compute approximate solutions of the multi-dimensional Schrödinger, an equation which cannot be solved, and the corresponding wave function can be given a physical meaning as required by Schrödinger. I have explored this idea a little bit in the form of Many-Minds Quantum Mechanics (MMQM) as an analog of Many-Minds Relativity. MMQM seems to deliver a ground state of Helium corresponding to the observed minimal energy E = - 2.904,  with the 2 electrons of Helium distributed basically as two half-spherical shells (blue and green patches) filling a full shell around the kernel (red) as illustrated in the left picture. This configuration is to be compared with the spherically symmetric distributions of Parahelium 1s1s (hydrogenic orbital) in the middle with E = -2.75 and Ortohelium 1s2s with even bigger energy to the right: Classical quantum mechanics based on a multi-dimenisonal wave function satisfying the linear Schrödinger equation (QM) presents Parahelium as the ground state of Helium with the two electrons sharing a common spherically symmetric orbit in accordance with the Pauli Exclusion principle (PEP).  But the energy E = -2.75 of Parahelium is greater than the observed E = -2.904 and so Parahelium cannot be the ground state. QM with PEP thus does not describe even Helium correctly, a fact which is hidden in text books, while the non-spherical distribution of MMQM appears to give the correct energy. MMQM does not require any PEP and suggests a different explanation of the electronic shell structure of an atom with the numbers of 2, 8, 8, 18, 18... of electrons in each shell arising as 2 x n x n, with n=1,  2, 2, 3, 3, and the factor 2 reflecting the structure of the innermost shell as that of Helium, and n x n the two-dimensional aspect of a shell. The Farewell to Reality from modern physics was thus initiated with the introduction of the multi-dimensional wave function of the linear Schrödinger equation of QM in the 1920s, and the distance to Reality has only increased since then.  
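To make the orbital picture concrete, here is a minimal spherically symmetric Hartree-type self-consistent calculation for Helium, in the spirit of the Hartree system of equations mentioned above: two electrons sharing one radial 1s orbital, each moving in the Coulomb field of the nucleus plus the charge cloud of the other. It is only a sketch, not an implementation of MMQM; this standard spherical mean-field treatment lands at roughly E = -2.86, between the hydrogenic estimate -2.75 and the observed -2.904 quoted above. Grid size, box radius and mixing factor are arbitrary choices.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# radial grid in atomic units; u(r) = r*R(r) with u(0) = u(rmax) = 0
N, rmax = 2000, 20.0
h = rmax / (N + 1)
r = h * np.arange(1, N + 1)

def solve_orbital(v):
    """Lowest eigenpair of -1/2 u'' + v(r) u = eps u by finite differences."""
    diag = 1.0 / h**2 + v
    off = -0.5 / h**2 * np.ones(N - 1)
    eps, vec = eigh_tridiagonal(diag, off, select='i', select_range=(0, 0))
    u = vec[:, 0]
    u /= np.sqrt(h * np.sum(u**2))              # normalize: int u^2 dr = 1
    return eps[0], u

def hartree_potential(u):
    """V_H(r) = int u(r')^2 / max(r, r') dr', the field of the other electron."""
    rho = u**2
    inner = np.cumsum(rho) * h                        # int_0^r rho dr'
    outer = (np.cumsum((rho / r)[::-1]) * h)[::-1]    # int_r^rmax rho/r' dr'
    return inner / r + outer

v_h = np.zeros(N)
for _ in range(50):                             # self-consistency loop with simple mixing
    eps, u = solve_orbital(-2.0 / r + v_h)      # nuclear charge Z = 2 plus mean field
    v_new = hartree_potential(u)
    if np.max(np.abs(v_new - v_h)) < 1e-8:
        break
    v_h = 0.5 * v_h + 0.5 * v_new

E = 2.0 * eps - h * np.sum(v_h * u**2)          # remove double-counted e-e repulsion
print(f"eps_1s = {eps:.4f},  E_total = {E:.4f} (Hartree units; observed about -2.904)")
```

The self-consistent value, about -2.86, is the best a single spherically symmetric orbital shared by both electrons can do; the remaining gap down to the observed -2.904 is what the post attributes to a non-spherical two-shell configuration.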
Once the connection to Reality is given up there is no limit to how far you can go with your favorite theory. QM is cut in stone as the linear multidimensional Schrödinger equation with wave function solution being either symmetric or antisymmetric and satisfying PEP, but QM in this form lacks real physical interpretation. The exploration of non-linear Schrödinger equations in 3 space dimensions with obvious possibilities of physical interpretation has been pursued only as a way to compute approximate solutions to the multi-dimensional linear Schrödinger equation, but may merit attention also as true models of physical reality.

söndag 16 juni 2013

The Gomorron Sverige Case at Academic Rights Watch

Academic Rights Watch has now taken up my case "Gomorron Sverige", which will shortly be heard by the Administrative Court of Appeal (Kammarrätten) in Stockholm. The larger question is whether or not KTH has violated the Fundamental Law on Freedom of Expression (Yttrandefrihetsgrundlagen).

Essence of Dynamics 1

Computed turbulent flow around an airplane represents Case 3. below. The dynamics of a physical system can typically be described as an initial value problem of finding a vector function U(t) depending on time t such that
• dU/dt + A(U) = F for t > 0 with U(0) = G,
where A(U) is a given vector function of U, F(t) is a given forcing and G is a given initial value at t = 0. In the basic case A(U) = A*U is linear with A = A(t) a matrix depending on time, which is also the linearized form of the system describing growth/decay of perturbations characterizing stable/unstable dynamics. An essential aspect of the dynamics is the perturbation dynamics described by the linearized system, which is determined by the eigenvalues of its linearization matrix A, assuming for simplicity that A is diagonalizable and independent of time:
1. Positive eigenvalues: Stable in forward time; unstable in backward time.
2. Negative eigenvalues: Unstable in forward time; stable in backward time.
3. Both positive and negative eigenvalues: Both unstable and stable in both forward and backward time.
4. Imaginary eigenvalues: Wave solutions marginally stable in both forward and backward time.
5. Complex eigenvalues: Combinations of 1. - 4.
Here Case 1. represents a dissipative system with exponential decay of perturbations in forward time making long time prediction possible, but backward time reconstruction difficult because of exponential growth of perturbations. This is the dynamics of a diffusion process, e.g. the spreading of a contaminant by diffusion or heat conduction. Case 2. is the reverse with forward prediction difficult but backward reconstruction possible. This is the dynamics of a Big Bang explosion. Case 3. represents turbulent flow with both exponential growth and decay giving rise to complex dynamics without explosion, with mean-value but not point value predictability in forward time. The picture above shows the turbulent flow around an airplane with mean-value quantities like drag and lift being predictable (in forward time). This case represents the basic unsolved problem of classical mechanics which is now being uncovered by computational methods including revelation of the secret of flight (hidden in the above picture). Case 4 represents wave propagation with possibilities of both forward prediction and backward reconstruction, with the harmonic oscillator as basic case.
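A minimal numerical check of Cases 1.-4. (not part of the original post; the matrices and the time window are arbitrary choices) measures the size of the perturbation propagator exp(-At) in forward and backward time:

```python
import numpy as np
from scipy.linalg import expm

def growth(A, t):
    """Size of the perturbation propagator for dU/dt + A U = 0, i.e. |exp(-A t)|."""
    return np.linalg.norm(expm(-A * t), 2)

cases = {
    "1. positive eigenvalues":            np.diag([1.0, 2.0]),
    "2. negative eigenvalues":            np.diag([-1.0, -2.0]),
    "3. mixed signs (turbulence-like)":   np.diag([1.0, -1.0]),
    "4. imaginary (harmonic oscillator)": np.array([[0.0, -1.0], [1.0, 0.0]]),
}
for name, A in cases.items():
    print(f"{name:36s} forward |exp(-5A)| = {growth(A, 5.0):10.3e}"
          f"   backward |exp(+5A)| = {growth(A, -5.0):10.3e}")
```

Case 1. decays forward and grows backward, Case 2. the reverse, Case 3. grows in both directions, and Case 4. stays of unit size both ways, matching the classification above.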
There is a further limit case with A non-diagonalizable with an incomplete set of eigenvectors for a multiple zero eigenvalue, with possibly algebraic growth of perturbations, a case arising in transition to turbulence in parallel flow.

onsdag 12 juni 2013

The Dog and the Tail: Global Temperature vs CO2, continuation.

This is a continuation of the previous post. Consider the following special case with T(t) = T_0 for t < 1970, T(t) increasing linearly for 1970 < t < 1998 to the value T_1 with T(t) = T_1 for t > 1998. The corresponding solution C(t) of the equation dC/dt = T increases linearly for t < 1970, quadratically for 1970 < t < 1998 and again linearly for t > 1998, as sketched by the solid lines in the following graph. We see that after 1998 the temperature stays constant while the CO2 increases linearly. The solid lines could picture reality. On the other hand, if you want to create a fiction of CO2 alarmism, you would argue as follows: Look at the solid lines before 1998 representing recorded reality and simply make an extrapolation until 2020 of the simultaneous increase of both T and C during the period 1970 - 1998, to get the dotted red line as a predicted alarming global warming in 2020 resulting from a continued increase of CO2. The extrapolation would then correspond to using a connection between T and C of the form T ~ C with T determined by C, instead of, as in the above model, dC/dt = T with C determined by T. This shows the entirely different global warming scenarios obtained using the model T ~ C with T determined by C, and the model dC/dt = T with C determined by T.

tisdag 11 juni 2013

The Dog and the Tail: Global Temperature vs CO2

måndag 10 juni 2013

Need of Education in Mathematics - IT

Images des Math issued by CNRS reports on a proposal by L'Academie des Sciences to strengthen school education in Information Science and Technology (IT), and expresses the concern that while the proposal identifies the strong impact of IT in physics, chemistry, biology, economy and social sciences, the connection between IT and mathematics is less visible. The reason L'Academie des Sciences forgets the fundamental connection between mathematics and IT is that school mathematics is focussed on a tradition of analytical mathematics, where the IT-revolution of computational mathematics is not visible. This connects to my proposal of a reform of school mathematics into a new school subject named Mathematics - IT combining analytical and computational mathematics with the world of apps as the world of applications of mathematics using IT. Without such a reform school mathematics will follow the fate of classical Latin and Greek, once at the center of the curriculum but now gone. This is not understood by mathematicians paralyzed by the world of apps based on computational mathematics. The strength of the (aristocratic) tradition of analytical mathematics is preventing a marriage with (newly rich) computational mathematics, which would serve as the adequate school mathematics of the IT age. As often, a strength can be turned into a fatal weakness when conditions change but strong tradition resists reform.

lördag 1 juni 2013

Milestone: Direct Fem-Simulation of Airflow around Complete Airplane

The first direct computational simulation of the flow of air around a complete airplane (DLR F11 high-lift configuration) has been performed by the CTLab group at KTH led by Johan Hoffman in the form of Direct Fem-Simulation (DFS).
The simulation gives support to the new theory of flight developed by Hoffman and myself now under review by Journal of Mathematical Fluid Mechanics after initial rejection by AIAA. The milestone will be presented at 2nd AIAA CFD High Lift Prediction Workshop, San Diego, June 22-23 2013. DFS is performed by computational solution using an adaptive  residual-stabilized finite element method for the Navier-Stokes equations with a slip boundary condition modeling the small skin friction of air flow. DFS opens for the first time the possibility of constructing a realistic flight simulator allowing flight training under extreme dynamics, beyond the vision of AIAA limited by classical flight theory. For more details browse my upcoming talk at ADMOS 2013.
söndag 16 oktober 2011

False-SB Violates the 2nd Law

To understand the difference between the two versions of Stefan-Boltzmann's radiation law (SB) under discussion, True-SB and False-SB, let us inspect the proof of SB from Planck's radiation law in its normalized form presented in Computational Blackbody Radiation:
• $R(f ,T) =\gamma Tf^2$ for $f\le T$
• $R(f,T) = 0$ for $f > T$
where $R(f ,T)$ is the radiance of frequency $f$ from a blackbody of temperature $T$, $\gamma$ is a constant and a simplified high-frequency cut-off is used (as compared to Planck's exponential cut-off). The total radiative transfer $R_{True}$ to a blackbody 1 of temperature $T_1$ from a blackbody 2 of temperature $T_2>T_1$ is given by integration over frequencies as follows:
• $R_{True} =\int_{T_1}^{T_2}\gamma T_2 f^2 df + \int_0^{T_1}\gamma (T_2 - T_1) f^2df \equiv I_1 + I_2$,
where the first integral $I_1$ is the heating effect from 2 above the cut-off of 1 and the second integral the net heating from 2 below cut-off. We see that $R_{True}$ expressing True-SB is the sum of two integrals with positive integrands, that is, $R_{True}$ is the sum of many small positive contributions all with transfer of heat from 2 to 1. We shall now see that False-SB arises by rewriting $I_1$ as follows:
• $I_1 = \int_0^{T_2}\gamma T_2f^2df - \int_0^{T_1}\gamma T_2f^2df =\sigma T_2^4 - \sigma T_2T_1^3$
where $\sigma =\frac{\gamma}{3}$, which since $I_2 = \sigma (T_2-T_1)T_1^3$ gives False-SB in the form
• $R_{False} = \sigma T_2^4 - \sigma T_1^4$
expressing the transfer of energy from 2 to 1 as the difference of two gross flows with different signs. We see that False-SB arises by rewriting an integral with positive integrand as the difference of two integrals of different signs as follows:
• $\int_{T_1}^{T_2} f^2 df = \int_0^{T_2}f^2df - \int_0^{T_1}f^2df$,
where the lower integration limit $0$ could be replaced by any positive number smaller than $T_1$. False-SB arises when giving this formal mathematical manipulation a physical meaning stating that one-way net flow is the difference of two-way gross flows, with the flow from 1 to 2 violating the 2nd law of thermodynamics and involving an arbitrary constant. False-SB thus arises by an ad hoc translation to physics of a mathematical operation which results in a violation of the 2nd law of thermodynamics. Accordingly False-SB is not found in physics literature, but has appeared outside physics as an ad hoc free invention by climate scientists for the purpose of selling CO2 alarm.

55 comments:
1. "a simplified high-frequency cut-off is used" - that contradicts all available observational data. Show me a black body spectrum that has a high frequency cut-off, and then please explain why all the black body spectra that have ever been measured that don't show such a thing are somehow wrong.
2. Come on, Wien's displacement law is the high-frequency cut-off. Read Planck and look at the curves and see the high frequency cut-off avoiding the ultraviolet catastrophe.
3. No, that is not what Wien's displacement law is. There is no cut-off in a black body spectrum. According to your equations, a black body spectrum should contain a discontinuity where the flux suddenly drops to zero. Show me the spectrum that contains such a thing.
4. Read Planck's law and see that it has an exponential term with cut-off of high frequency. Just read!
5. No, it doesn't. There is no f for which R=0, in Planck's law.
As usual you can't bring yourself to answer simple questions so I will repeat it: according to your equations, a black body spectrum should contain a discontinuity where the flux suddenly drops to zero. Show me the spectrum that contains such a thing. 6. It is just a simplification of an exponential drop to zero, nothing to get upset about. 7. But doesn't R_true turn out to be the same as R_false? Isn't this just a complicated way of saying what you have already said. Karl Popper, after having had fruitless conversations with his father, concluded that you should never argue about the definition of concepts (such as "backradiation"), it is the scope of a theory's predictive power that should be evaluated. I'm afraid that you are on a very fruitless track at the moment. 8. It is fruitless if nobody is capable of listening and thinking. 9. "It is just a simplification of an exponential drop to zero, nothing to get upset about." Starting with something incorrect renders everything that follows invalid. You are incapable of providing a black body spectrum with a discontinuity in it, so why are you using equations that predict one? 10. As I said, it is only a simplification of the exponential cut-off in Planck's law. The essence is the same: cut-off of high frequencies. 11. Planck's law does not cut off high frequencies. 12. Yes it does, this is how the ultraviolet catastrophe is avoided and why Planck got the Nobel Prize, and not Rayleigh-Jeans for their radiation law without cut-off. 13. Do you speak English? A decline is not the same as a cut-off. Planck's law does not cut off high frequencies. There is no value of $\nu$ for which I=0. 14. No, but is very small, practically and physically zero compared to the contributions below cut-off. 15. Anyone who thinks about these points will realise there is no greenhouse effect ... (1) The direction of net radiative energy flow can be the opposite of the direction of heat transfer. If you have a warmer object (say 310 K) with low emissivity (say 0.2) and a cooler object (say 300 K) with much higher emissivity (say 0.9) then net radiative energy flow is from the cooler to the warmer object. Yet the Second Law says heat transfer is from hot to cold. So, there is no warming of the warmer body by any of the (net) radiative energy going into it. (2) Any warming of a warmer surface by radiation from a cooler atmosphere violates the Second Law of Thermodynamics. Consider the situation when the surface is being warmed by the Sun at 11am somewhere. Its temperature is rising and net radiative energy flow is into the surface. How could additional thermal energy transfer from the cooler atmosphere to make the surface warm at a faster rate? Clearly radiation from a cooler atmosphere cannot add thermal energy to a warmer surface. The surface molecules scatter radiation which has a peak frequency lower than the peak frequency of their own emission, and so no radiative energy is converted to thermal energy. (This was proved in Johnson's Computational Blackbody Radiation.), So the atmospheric radiative greenhouse effect is a physical impossibility. 16. Oh Doug, you know you are referring to the The Imaginary Second Law of Thermodynamics now don't you? 17. Just out of curiosity, Claes, do you agree with the comment by mr Cotton? 18. Good, then I want you to show that radiation exchange between a hotter and a colder body results in a decrease in entropy and hence violates the second law. 
To say that it violate Clausius weak formulation does not suffice since it's to weak, so please show us the calculation using a stronger statement that proves your statement that the second law is violated. 19. Are you capable of showing this violation? You do claim that there is a violation, and from that I see two options. Either you have shown this, or you are guessing. If you are guessing or can't show it, please say so. Radiation was not understood when Clausius formulated his statement so it's ambiguous in relation to radiation, therefore use another equivalent formulation that is better suited. 20. Nobody knows what entropy is so forget it. But nobody has ever observed a cold body heating a hot body without extra work, and so there is no reason to believe that this can happen. It is the same with ghosts. Nobody has ever seen a real ghost, and thus there is no reason to believe that there are ghosts. To directly prove that there are no ghosts is impossible and is a non-issue. 21. Do you call this answering a scientific question in a serious way? How on earth am I gonna take you serious if you do not want to give a serious answer? Are you a man of double standards? You demand a serious answer from others but refuse to act such yourself. Judging from you post regarding your discussion with Roy Spencer. Frankly I feel a bit offended by this. 22. Of course I want a scientific answer to why (2). Have you calculated it or not? If not, are you guessing? Since the Clausius statement is ambiguous when it comes to radiation you should not use it. So how, scientifically speaking, is the second law violated? You can not know, and not know, that is very unscientific. 23. It is also very strange that you say that nobody knows what entropy is. It's very clearly defined in statistical physics. If you find the meaning of this abstruse shouldn't be anybody else's problem, that is far from being scientific. You are trying to answer a question by not answer the question. 24. I think you should clarify what you mean with warming to. 25. What 2nd law are you referring to? 26. In an isolated system, DS.ge.0 For state x, S = k log V, V is the sub-volume of the phase space that contains x and is a sub set of the total coarse grained phase space. Preferably the coarse graining is done using quantum states to avoid ambiguity. Considering how much time you seem to have spent on the subject I feel surprised that I should tell you this. Does (2) violate this 2nd law? 27. It is impossible to relate this statistical 2nd law to anything physically meaningful, and so I cannot tell if it is vilolated or not. If you read what I say you should be able to decide yourself, since you are so well informed. So what is your answer to your question? 28. Why do say that it isn't meaningful, you need to specify why then. 29. With this kind of attitude I don't see the point in discussing. You act very unscientific since it seems you made up your mind and adapt what is acceptable from this a priori idealization of "your version" of science. How do you think anybody will take your ideas serious?? (Unless others who seem to have made up their minds). I have now read the conversation between you and one Tomas Milanovic at Judith Curry's blog. From that discussion I get the impression that you go so far as acting directly disrespectful of your fellow scientists that has thought long and hard on these topics. How will that help you spreading your ideas in a serious manner?? 
It makes me wonder what your motivation for doing science is in the first place. What is it that you want to accomplish. Now it only looks like you are only out quarrel and spread sophistry. 30. Because it is statistics. Physics is not statistics. 31. You don't understand my arguments. These are scientific arguments and has nothing to do with disrespectfulness. Science is about scientific truth, not about respect or disrespect. As you say yourself, our discussion is not meaningful and I think we should stop here. Best regards, Claes. 32. Physics is the science of study and describe nature. Do you mean that one can not use statistics to study and describe nature? 33. Maybe it is possible to use statistics to describe anything, including physics. But I am interested in physics as systems interacting according to some form of deterministic laws in some form of analog computation of finite precision as in digital computation with finite number of decimals. The indeterminancy thus comes from chopping decimals and not by throwing dice. In particular, I cannot see that real physics computes ensemble mean values and thus has little to do with statistics. I am interested in understanding how Nature works without human observers, rather than understanding human observers using statistics when studying Nature which to me is more like psychology. 34. What do you think that physics is? How do you define physics? 35. I just said that. Read the comment. 36. I just did, and it seems as if you identify physics with nature. That is just plain strange. Do you really mean that physics isn't the science of studying and describing nature? 37. Yes it is describing physical nature but nature does not compute ensembles mean values, while human beings in the form of statisticians compute ensemble mean values as statistics. So I do not see statistics as basic physics, rather as an activity of human beings. Is that so strange? 38. Yes, that is strange. You mix up reality (nature or what you want to call it) with models of reality. 39. I think that the modern interpretation of science is to accumulate knowledge and formulate testable hypotheses that you test with experiments in accordance with the scientific method. From this view science is connected with reality but it is not the same as the reality. Physics is a scientific discipline, it is the discipline of observing and hypothesize about energy and matter. What is your thoughts about analog computation that makes you think that it would be deeper than standard physics? I do have a faint memory that analog computations is equivalent with a standard Turing machine since it can never be noise free. Never the less, you are perfectly allowed to view your computations as extra deep and meaningful, but then you must remember that when discussing these things with others they probably will not see it in the same light, and they will probably be discussing from completely different premises than you concerning what physics is. 40. Sure, I don't view my thoughts as standard and this is why I am pursuing this line of thought. My view is closer to the view of physicists in the late 19th century, before the deceptions of relativity and statistics, than that of modern string theorists. 41. Well I applaud your bravery, it's not a really a minor successful science you are competing against. But what has string theory to do with the current discussion? 
The people who are working with string theory are a minute minority of all physicists, and they are working within a highly speculative branch of physics called mathematical physics. It's implicitly understood that their endeavor is a playground for more or less crazy ideas, that what Kuhn called extraordinary science. The ordinary scientific method doesn't apply in that realm. Most physicist works in more applied fields, the dominant I think is condensed matter physics. Interestingly enough that is a field where you really need both statistical mechanics and quantum physics. Never the less, what is it about analog computing that you think is so revolutionizing? Shouldn't there be the same need of verification and validation that you need for instance when writing an ordinary CFD code for example? It sounds as if you are aiming more at engineering applications than fundamental physics. 42. Are your hypothesis that the physical world is a simulation done with analog computing? 43. The physical world is the real thing, not a simulation, except cameleonts. 44. So, what so special about your type of simulations? Are you aiming at revolutionize the knowledge about Chamaeleo Calyptratus? No, but seriously. You seem to avoid the question by dragging red herrings all over the place. What so special about your type of simulation? 45. The special thing is that a concept of finite precision computation replaces the statistics of statistical mechanics and quantum mechanics. Atoms don't throw dice but they chop decimals in their analog computation which is the world. Think about and you will understand that this opens new possibilities of simulation and understanding. 46. "Atoms don't throw dice but they chop decimals in their analog computation which is the world." How do you prove this? 47. Atoms do not have the capacity of throwing dice because that requires microscopic games of dice and thus microscopics upon microscopics. So dice throwing is impossible, unphysical. Likewise atoms do not have the capacity of infinite precision, and so the only possibility left is finite precision computation similar to chopping decimals. 48. Maybe you didn't understand my question so I give it again. How do you prove this? Now you are defining qualities that an atom should have according to your belief. 49. It is not a belief, it is rational thinking based on Schrodingers wave mechanics, which has shown to describe atomistic mechanics. The ultimate nature of atoms is beyond human understanding, because we consist of atoms and atoms cannot fully understand atoms. I notice that your just seeking objections without constructive aspects and I am getting tired of explaining. 50. Try to see it from my viewpoint. You say that there are rational arguments for introducing this ontological status to the atom; based on wave mechanics. But, the ontological status of the wave function in the Schrödinger equation is far from settled so I don't see how this natural would lead to rational thinking. If you have some rational deduction that proves your position it is up to this point so abstruse or fully implicit so I don't see it. You need to be much clearer why this is so rational. I also wonder why you get so defensive when asking for an explanation from you? Isn't it in your full interest that these arguments should be fully lucid? I also have one other issue with this reasoning of yours, but maybe it's good to take one thing at the time. 51. I wonder where my comment went? Did you misplace it? 
And also I must say that it feels strange that you write that you are getting tired of explaining, when nothing really has been explained. It is far from obvious that it is possible to draw such fundamental conclusions as you seem to wish from the Schrödinger equation, since it isn't that fundamental in nature. So, if I miss the obvious, please inform me! I do wish to understand if you have a deeper point concerning this matter.
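Returning to the integrals in the post above: as a quick numerical sanity check (a minimal sketch of my own, not part of the original post, using the simplified cut-off model and arbitrary unit constants), $R_{True}$ and $R_{False}$ agree as numbers; the dispute in the post concerns their physical interpretation, not their value.

```python
import numpy as np
from scipy.integrate import quad

gamma = 1.0          # model constant in R(f,T) = gamma*T*f^2 for f <= T
sigma = gamma / 3.0
T1, T2 = 1.0, 2.0    # temperatures of the cooler (1) and warmer (2) body

# True-SB: sum of two integrals with positive integrands
I1, _ = quad(lambda f: gamma * T2 * f**2, T1, T2)
I2, _ = quad(lambda f: gamma * (T2 - T1) * f**2, 0.0, T1)
R_true = I1 + I2

# False-SB: difference of two gross flows
R_false = sigma * T2**4 - sigma * T1**4

print(R_true, R_false)   # both evaluate to 5.0 (up to rounding) for these parameters
```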
bb1520653d48c93e
Abraham Meets Abraham from a Parallel Universe

And he [Abraham] lifted up his eyes and looked, and, lo, three men stood over against him… (Gen. 18:2)

On this blog, we often discuss a collapse of the wavefunction as the result of a measurement. This phenomenon is called by some physicists the “measurement problem.” There are several reasons why the collapse of the wavefunction—part and parcel of the Copenhagen interpretation of quantum mechanics—is called a problem. Firstly, it does not follow from the Schrödinger equation and is added ad hoc. Secondly, nobody knows how it happens or how long it takes to collapse the wavefunction. This is not to mention that any notion that the collapse of the wavefunction is caused by human consciousness leading to Cartesian dualism is anathema to physicists. It is a problem, no matter how you [...]
cfea71b73ba440c0
Agronomy and Horticulture Department

Published in PHYSICAL REVIEW A 79, 023403 (2009). Copyright ©2009 The American Physical Society. Used by permission.

Three alternative forms of harmonic spectra, based on the dipole moment, dipole velocity, and dipole acceleration, are compared by a numerical solution of the Schrödinger equation for a hydrogen atom interacting with a linearly polarized laser pulse, whose electric field is given by $E(t) = E_0 f(t)\cos(\omega_0 t + \eta)$ with Gaussian carrier envelope $f(t) = \exp(-t^2/\delta^2)$. The carrier frequency $\omega_0$ is fixed to correspond to a wavelength of 800 nm. Spectra for a selection of pulses, for which the intensity $I_0 = c\epsilon_0 E_0^2$, duration $T \propto \delta$, and carrier-envelope phase $\eta$ are systematically varied, show that, depending on $\eta$, all three forms are in good agreement for “weak” pulses with $I_0 < I_b$, the over-barrier ionization threshold, but that marked differences among the three appear as the pulse becomes shorter and stronger ($I_0 > I_b$). Except for scalings by powers of the harmonic frequency, the three forms differ from one another only by “limit contributions” proportional to the expectation values of the dipole moment $\langle z(t_f)\rangle$ or dipole velocity $\langle \dot z(t_f)\rangle$ at the end ($t_f$) of the pulse. For long, weak pulses the limit contributions are negligible, whereas for short, strong ones they are not. In the short, strong limit, where $\langle \dot z(t_f)\rangle \ne 0$ and therefore $\langle z(t)\rangle$ may increase without bound (i.e., the atom may ionize), depending on $\eta$, an “infinite-time” spectrum based on the acceleration form provides a convenient computational pathway to the corresponding infinite-time dipole-velocity spectrum, which is related directly to the experimentally measured “harmonic photon number spectrum” (HPNS). For short, intense pulses the HPNS is quite sensitive to $\eta$ and exhibits not only the usual odd harmonics but also even ones. The analysis also reveals that most of the harmonic photons are emitted during the passage of the pulse. Because of the divergence of $\langle z(t)\rangle$ the dipole-moment form does not provide a numerically reliable route to the harmonic spectrum for very short (few-cycle), very intense laser pulses.
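To illustrate the relation between the three spectral forms discussed in the abstract, here is a toy numerical sketch (my own, with an artificial damped-oscillation dipole signal standing in for $\langle z(t)\rangle$, not the hydrogen calculation of the paper): the length, velocity, and acceleration spectra are built from $\langle z\rangle$, $\langle \dot z\rangle$, $\langle \ddot z\rangle$ and, apart from boundary ("limit") terms, differ only by powers of the harmonic frequency $\omega$.

```python
import numpy as np

# Artificial dipole signal (stand-in for <z(t)>): Gaussian envelope times a carrier.
t = np.linspace(0.0, 200.0, 4096)
dt = t[1] - t[0]
z = np.exp(-((t - 100.0) / 30.0) ** 2) * np.cos(1.5 * t)

zdot = np.gradient(z, dt)          # dipole velocity
zddot = np.gradient(zdot, dt)      # dipole acceleration

omega = 2 * np.pi * np.fft.rfftfreq(len(t), dt)
S_len = np.abs(np.fft.rfft(z)) ** 2       # length (dipole-moment) form
S_vel = np.abs(np.fft.rfft(zdot)) ** 2    # velocity form
S_acc = np.abs(np.fft.rfft(zddot)) ** 2   # acceleration form

# Where the spectrum is significant, the forms agree after rescaling by omega powers.
mask = (S_len > 1e-6 * S_len.max()) & (omega > 0.5)
print(np.allclose(S_vel[mask], omega[mask] ** 2 * S_len[mask], rtol=0.05))  # True
print(np.allclose(S_acc[mask], omega[mask] ** 4 * S_len[mask], rtol=0.05))  # True
```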
318ead0c708b433e
Tuesday, November 27, 2012

An arguable misnomer in physics: the term “quantum mechanics”

Analog or digital, uninterrupted or pixelated, continuity or discontinuity, field or particle? These pairs of opposing adjectives and nouns often occur in texts and discussions about physical reality and theoretical modeling. The periodic table of discrete chemical elements with their characteristic numbers and spectra directs scientists towards a quantum view of matter. As an example: the solution of Schrödinger's equation for the hydrogen atom provides us with quantum numbers (n, l, and m), which are integers [1]. Important: these integers come forth from solving an equation formulated with continuous variables for physical quantities that encode electron movement and potential. Thus, quantum mechanics models reality on the basis of continuity. Discrete values result from the approach in which the theoretical model is treated and solved, but they may not be nature-inherent. Intrigued by cosmological challenges and debates over the fundamental laws of the physical world, David Tong—a theoretical physicist at the University of Cambridge—is giving the continuity-discontinuity interrelation a closer look. He writes that the term “quantum mechanics” could be said to be a misnomer for a theory that formulates its equations in terms of continuous quantities [2]. He cites Leopold Kronecker's proclamation “God made the integers, all else is the work of man.” and counters with “God did not make the integers. He made continuous numbers, and the rest is the work of the Schrödinger equation.” [3]. Tong explains the latter in detail: Integers are not inputs of the [quantum] theory, as Bohr thought [Danish physicist Niels Bohr “implemented” discreteness at the atomic scale]. They are outputs. The integers are an example of what physicists call an emergent quantity. In this view, the term “quantum mechanics” is a misnomer. Deep down, the theory is not quantum. In systems such as the hydrogen atom, the processes described by the theory mold discreteness from underlying continuity. Quantum phenomena are these days demonstrated and animated in educational as well as entertaining videos. The Zeitgeist-driven perception: what I simulate and animate is what I see and believe in. Yet, living in a digital age does not automatically imply living in a digital universe.

Keywords: physics, philosophy, quantum theory, physical world, pointillist universe, emergent integers.

References and more to explore
[1] Quantum Mechanics: Solving Schrödinger's equation [users.aber.ac.uk/ruw/teach/237/hatom.php].
[2] David Tong: The Unquantum Quantum. Scientific American, December 2012, 307 (6), pp. 46-49 [www.nature.com/scientificamerican/journal/v307/n6/full/scientificamerican1212-46.html].
[3] Quoted at axeleratio.tumblr.com: axeleratio.tumblr.com/post/36680758289/god-did-not-make-the-integers-he-made-continuous.

1. Somebody seems to have forgotten that it's not just the theory. One of the first indications that the universe may not be continuous was the finding that measuring the excitation spectrum of hydrogen found a series of discrete emission lines, an experimental (empirical) result. A theory was then needed to explain that experimental result. Several were proposed. Other experiments were conducted in different areas of physics, that also could not be explained by a model of the universe that only contained continuous phenomena, the photoelectric effect, for example.
Another was the impossibility of explaining blackbody radiation from any model that assumed that light was continuous.
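Tong's point that the integers are outputs rather than inputs can be seen in a small numerical experiment (my own illustration, not from the post): discretizing the continuous radial Schrödinger equation for hydrogen (atomic units, $\ell = 0$) and diagonalizing the resulting matrix yields the discrete Bohr levels $E_n = -1/(2n^2)$ without any integers being put in by hand.

```python
import numpy as np

# Radial Schrödinger equation for hydrogen, l = 0, atomic units:
#   -(1/2) u''(r) - u(r)/r = E u(r),   with u(0) = u(r_max) = 0
N, r_max = 2000, 100.0
h = r_max / (N + 1)
r = h * np.arange(1, N + 1)

# Finite-difference Hamiltonian (tridiagonal, built densely for simplicity)
H = np.zeros((N, N))
np.fill_diagonal(H, 1.0 / h**2 - 1.0 / r)
idx = np.arange(N - 1)
H[idx, idx + 1] = H[idx + 1, idx] = -0.5 / h**2

E = np.linalg.eigvalsh(H)[:3]
print(E)                                  # approx. -0.500, -0.125, -0.056
print([-0.5 / n**2 for n in (1, 2, 3)])   # exact Bohr values for n = 1, 2, 3
```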
d21c9dfb4b65b4c4
Quantum chemistry

From Wikipedia, the free encyclopedia

Quantum chemistry, also called molecular quantum mechanics, is a branch of chemistry focused on the application of quantum mechanics to chemical systems. Understanding electronic structure and molecular dynamics using the Schrödinger equation are central topics in quantum chemistry. Chemists rely heavily on spectroscopy through which information regarding the quantization of energy on a molecular scale can be obtained. Common methods are infra-red (IR) spectroscopy, nuclear magnetic resonance (NMR) spectroscopy, and scanning probe microscopy. Quantum chemistry studies the ground state of individual atoms and molecules, the excited states, and the transition states that occur during chemical reactions. Some view the birth of quantum chemistry as starting with the discovery of the Schrödinger equation and its application to the hydrogen atom in 1926.[citation needed] However, the 1927 article of Walter Heitler (1904–1981) and Fritz London is often recognized as the first milestone in the history of quantum chemistry. This is the first application of quantum mechanics to the diatomic hydrogen molecule, and thus to the phenomenon of the chemical bond. In the following years much progress was accomplished by Robert S. Mulliken, Max Born, J. Robert Oppenheimer, Linus Pauling, Erich Hückel, Douglas Hartree, and Vladimir Fock, to cite a few. The history of quantum chemistry also goes through the 1838 discovery of cathode rays by Michael Faraday, the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system could be discrete, and the 1900 quantum hypothesis by Max Planck that any energy radiating atomic system can theoretically be divided into a number of discrete energy elements ε such that each of these energy elements is proportional to the frequency ν with which they each individually radiate energy and a numerical value called Planck's constant. Then, in 1905, to explain the photoelectric effect (1839), i.e., that shining light on certain materials can function to eject electrons from the material, Albert Einstein postulated, based on Planck's quantum hypothesis, that light itself consists of individual quantum particles, which later came to be called photons (1926). In the years to follow, this theoretical basis slowly began to be applied to chemical structure, reactivity, and bonding. Probably the greatest contribution to the field was made by Linus Pauling.[citation needed]

Electronic structure

The first step in solving a quantum chemical problem is usually solving the Schrödinger equation (or Dirac equation in relativistic quantum chemistry) with the electronic molecular Hamiltonian. This is called determining the electronic structure of the molecule. It can be said that the electronic structure of a molecule or crystal implies essentially its chemical properties. An exact solution for the Schrödinger equation can only be obtained for the hydrogen atom (though exact solutions for the bound state energies of the hydrogen molecular ion have been identified in terms of the generalized Lambert W function). Since all other atomic or molecular systems involve the motions of three or more "particles", their Schrödinger equations cannot be solved exactly and so approximate solutions must be sought.
Valence bond

Although the mathematical basis of quantum chemistry had been laid by Schrödinger in 1926, it is generally accepted that the first true calculation in quantum chemistry was that of the German physicists Walter Heitler and Fritz London on the hydrogen (H2) molecule in 1927.[citation needed] Heitler and London's method was extended by the American theoretical physicist John C. Slater and the American theoretical chemist Linus Pauling to become the valence-bond (VB) [or Heitler–London–Slater–Pauling (HLSP)] method. In this method, attention is primarily devoted to the pairwise interactions between atoms, and this method therefore correlates closely with classical chemists' drawings of bonds. It focuses on how the atomic orbitals of an atom combine to give individual chemical bonds when a molecule is formed, incorporating the two key concepts of orbital hybridization and resonance.

Molecular orbital

(Figure: an anti-bonding molecular orbital of butadiene.)

An alternative approach was developed in 1929 by Friedrich Hund and Robert S. Mulliken, in which electrons are described by mathematical functions delocalized over an entire molecule. The Hund–Mulliken approach or molecular orbital (MO) method is less intuitive to chemists, but has turned out capable of predicting spectroscopic properties better than the VB method. This approach is the conceptual basis of the Hartree–Fock method and further post-Hartree–Fock methods.

Density functional theory

The Thomas–Fermi model was developed independently by Thomas and Fermi in 1927. This was the first attempt to describe many-electron systems on the basis of electronic density instead of wave functions, although it was not very successful in the treatment of entire molecules. The method did provide the basis for what is now known as density functional theory (DFT). Modern day DFT uses the Kohn–Sham method, where the density functional is split into four terms: the Kohn–Sham kinetic energy, an external potential, and exchange and correlation energies. A large part of the focus on developing DFT is on improving the exchange and correlation terms. Though this method is less developed than post-Hartree–Fock methods, its significantly lower computational requirements (scaling typically no worse than $n^3$ with respect to $n$ basis functions, for the pure functionals) allow it to tackle larger polyatomic molecules and even macromolecules. This computational affordability and often comparable accuracy to MP2 and CCSD(T) (post-Hartree–Fock methods) has made it one of the most popular methods in computational chemistry.

Chemical dynamics

Adiabatic chemical dynamics

Non-adiabatic chemical dynamics

Non-adiabatic dynamics consists of taking the interaction between several coupled potential energy surfaces (corresponding to different electronic quantum states of the molecule). The coupling terms are called vibronic couplings. The pioneering work in this field was done by Stueckelberg, Landau, and Zener in the 1930s, in their work on what is now known as the Landau–Zener transition. Their formula allows the transition probability between two diabatic potential curves in the neighborhood of an avoided crossing to be calculated. Spin-forbidden reactions are one type of non-adiabatic reactions where at least one change in spin state occurs when progressing from reactant to product.
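For reference, the Landau–Zener formula mentioned above has the standard textbook form (quoted here for convenience; the symbols are the usual ones, not notation from this article). The probability of a diabatic passage through the avoided crossing is

$P_{\rm diabatic} = \exp\left(-\frac{2\pi\,|H_{12}|^2}{\hbar\,\left|\frac{d}{dt}(E_1 - E_2)\right|}\right),$

where $H_{12}$ is the coupling between the two diabatic states and $E_{1,2}(t)$ are their linearly crossing diabatic energies; the probability of remaining on the adiabatic surface is $1 - P_{\rm diabatic}$.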
0f98477eeab1228c
Tuesday, August 19, 2014 Maldacena's bound on statistical significance JM: Geometry and Quantum Mechanics, Maldacena reminds us of the obvious and old observation that the spacetime inside the black hole interior (i.e. the lifetime and the Lebensraum of the poor infalling observers) is limited which inevitably seems to affect the accuracy and reliability of the experiments. Such limitations are often described in terms of the usual uncertainty relations. Inside the hole, you can't measure the energy more accurately than with the \[ \Delta E = \frac\hbar{2 \Delta t} \] error margin and similarly for the momentum, and so on. But Juan chose to phrase his speculative ideas about the universal bound in a more invariant and more novel way, using the notion of entropy. A person who is falling into a black hole and wants to make a measurement must be sufficiently different from the vacuum. But after she is torn apart, hung by her balls, and destroyed (note that I am politically correct and "extra" nice to the women so I have used "she"), the space she has once occupied is turned into the vacuum. The vacuum inside a black hole of a fixed mass is more generic so the "emptying" means that the total entropy goes up. Juan says that the relative entropy\[ S(\rho|\rho_{\rm vac}) = \Delta K - \Delta S \geq 0 \] Because we know that once she's destroyed at the singularity, the entropy jumps at least by her entropy, it is logical – and Juan is tempted – to interpret the life and measurements inside the black hole, and not just the fatal end, as a process in which she approaches the equilibrium. So it's not possible to perform a sophisticated, accurate, and/or reliable experiment without sending something in. And if we send something in, the entropy will increase. An explicit inequality that Maldacena conjectured is the following inequality for the statistical significance:\[ p \gt \exp(-S) \] That's a formula written in the convention where the \(p\)-value is close to zero. If you prefer to talk about "\(P=\)99% certainty", you would write the same thing as\[ P \lt 1-\exp(-S) \] The certainty is less certain than 100% minus the exponential of the negative entropy and I suppose that by \(S\), Juan only means the entropy of the object. It's still huge which means that the statement above is very weak. The entropy of a human being exceeds \(10^{26}\) (in the dimensional units nats or, almost equivalently, in the less natural but more well-known bits) so the deviation from 100% is just \(\exp(-10^{26})\) which is a really small number morally closer to the inverse googolplex than the inverse googol. There may be stronger inequalities like that. And I also suspect that many such inequalities could be applicable generally – outside the context of black hole interiors. Have you ever encountered such inequalities or proved them? Note that the \(p\)-value encoding the statistical significance is the probability of a false positive. If we're constrained to live in a finite-dimensional Hilbert space where all basis vectors get ultimately mixed up with each other or something else, it's probably impossible to be any certain than your microstate isn't a particular one. But there are just \(\exp(S)\) basis vectors in the relevant Hilbert space and one of them may be right even if the "null hypothesis" holds, whatever it is. I am essentially trying to say that \(\exp(-S)\) is the minimum probability of a false positive. 
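For readers who want the definitions behind the inequality above, a standard quantum-information reminder (textbook conventions, nothing specific to Maldacena's talk): the relative entropy of a state \(\rho\) with respect to the reduced vacuum state \(\rho_{\rm vac}\) is\[ S(\rho|\rho_{\rm vac}) = {\rm Tr}\,\rho\ln\rho - {\rm Tr}\,\rho\ln\rho_{\rm vac} = \Delta\langle K\rangle - \Delta S \geq 0, \] where \(K = -\ln\rho_{\rm vac}\) is the modular Hamiltonian of the vacuum, \(\Delta\langle K\rangle = {\rm Tr}\,\rho K - {\rm Tr}\,\rho_{\rm vac}K\), and \(\Delta S = S(\rho) - S(\rho_{\rm vac})\) with \(S(\rho) = -{\rm Tr}\,\rho\ln\rho\). The positivity is just the general non-negativity of relative entropy.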
If someone thinks that she can formulate such comments more clearly or construct some evidence if not a conclusive proof (or proofs to the contrary), I will be very curious. If you allow me to return to the black hole interior issues: It seems to me that these "bounds on accuracy or significance" haven't played an important role in the recent firewall wars. But they're still likely to be a part of any complete picture of the black hole interior. For example, it's rather plausible that all the arguments (and instincts) directed against the state dependence violate these bounds. Juan tends to say that the rules of quantum mechanics may become approximate or inaccurate or emergent inside the black hole, and so on. He even says that "because the time is emergent inside, so is probably the whole quantum mechanics". Well, the answer may depend on which rule of quantum mechanics we exactly talk about. But quite generally, I don't believe that there can be any modification of quantum mechanics, even in the mysterious black hole interiors. In particular, the inequalities sketched by Maldacena himself might be derivable from orthodox quantum mechanics itself. And I would be repeating myself if I were arguing that ideas like ER-EPR and state dependence agree with all the postulates of quantum mechanics. Also, if we sacrifice the exact definition of time as a variable that state vectors or operators depend on – and we do so e.g. in the S-matrix description of string theory – it doesn't really mean that we deform quantum mechanics, does it? If we lose time, we no longer describe the evolution from one moment to another and we get rid of the explicit form of the Heisenberg or Schrödinger equations. But the "true core" of quantum mechanics – linearity and Hermiticity of operators, unitarity of transformation operators, and Born's rule – remain valid. What breaks down inside the black hole is the idea that exactly local degrees of freedom capture the nature of all the phenomena. But unlike locality, quantum mechanics doesn't break down. I should perhaps emphasize that even locality is only broken "spontaneously" – because the black hole geometry doesn't allow us to use the Minkowski spacetime as an approximation for the questions we want to be answered. 1. They're government workers. Of course 80% are going to require an operating system that was designed for mental defectives! Frankly, I'm surprised that the number is not even higher. I guess that's an indication of the partial success that the LiMux developers had in dumbing down the system to government-worker level -- a difficult task. I guess that Munich, in anticipation of the change, is transferring the budget for hiring a competent IT staff to purchasing third-party virus-protection software. "Penguins belong to the South Pole, not to European or American buildings." Except, apparently, Google datacenters. You do know that Google Web Server (which feeds this blog) runs on Linux, don't you? 2. Linux is fast and tight, Windows is pretty. I did a 3-month calculation of a growing crystal lattice. Knoppix (boot from CD) ran 30% faster than Windows, AMD ran 30% faster than Intel. Knoppix in AMD still ran three months - but the log-log plot of the output was longer, Past 32 A radius ran in blades. Theoretical slope is -2. The fun is in the intercept (smaller is better) and the bandwidth. Unix is not unfriendly, but it is selective about who its friends are. "the Linux solution is very expensive because it requires lots of custom programming." Bespoke vs. 
off the rack. 3. Nope, I am using Linux since over 20 years, and I am in trouble only whenever I have to use a computer with Windows installed :-) 4. This is silly. Germany is (unlike Greece and others) a very well functioning country with a healthy equilibrium between the commercial and government sector. So the people who work for the government are in principle the very same kind of people who work in the private sector, too. The government sector has a different way how it's funded - it's stealing money from the productive citizens via the so-called "taxes" - but that doesn't really affect the work that the employees are doing there. I think that the Google web server running this server should be moved to the South Pole, too. ;-) 5. I just cannot envision any modification of quantum mechanics whatsoever. I’ll bet that lubos is correct here. 6. "Time" is a whore concept. No reason to believe QM depends on its survival. 7. Interesting point that one cannot perform a measurement absent a source and a sink. If everything is at equilibrium, one can build a thermometer and read it, but not calibrate it to assign the output meaning. 8. Sadly, Windows taught people that (1) Computers should be pretty and should be so easy a 3 year old could use them and (2) Computers should crash all the time. People expect lousy performance and don't care, as long as Facebook and Twitter come up most of the time. I don't use Windows at all now. I use open source software. I fully admit that most people have not the training nor the ambition to do this. I pay nothing for my software and my computer works the way I want it to. I find Windows too confining. On the other hand, for those who want pretty, sparkly screens, and no thought required, Windows is the way to go. 9. OK but having used Linux for 20 years should be classified as a medical disorder. ;-) 10. It's only strange because the "technical people" have been penetrated by anti-market zealots who suppress everyone else. It's much stranger to be a fan of such a thing. Unix is a system from the 1960s that should be as obsolete today as the cars or music from the 1960s. But it's not obsolete especially because its modern clones have been promoted by a political movement. Unix, like Fortran and other things, should share the fate of Algol, Cobol, Commodore 64 OS, and many other things, and go to the dumping ground of the history where it has belonged for quite some time. 11. There is nothing wrong for a system to be usable by a 3-year-old. Coffee machines, toasters, and vacuum cleaners have the same property. Kids are ultimately the best honest benchmarks to judge whether software is constructed naturally. When kids may learn it, it really means that an adult is spending less energy with things that could also be made unnecessarily complicated, and it's a good thing. My Windows 7 laptop hasn't crashed for a year since I stopped downloading new and new graphics drivers etc. I had freezes due to Mathematica's insane swapping to the disk - when it should say "I give up" instead - but that's a different thing. 12. "So the people who work for the government are in principle the very same kind of people who work in the private sector, too." Ah ... so can you show me the private sector equivalent, in principle, of the Potsdam Institute for Climate Impact Research? ;-) The United States also is a very well-functioning country with a healthy equilibrium between the commercial and government sector. 
(In fact, I would argue that the US is less socialist than Germany.) Surely, during your time in the US you must have been forced to deal with the New Jersey or Massachusetts DMV? (Here I use the generic term -- in New Jersey it's called the MVC, while in Massachusetts it's the RMV.) If not, consider yourself very fortunate. There's a little bit of Greece in every government bureaucracy. (In the US, we have to tell them not to defecate in the hall -- http://www.newser.com/story/189036/epa-to-workers-stop-pooping-in-the-hall.html -- yeah.) These are the folks who prefer a platform that is better suited for gaming, entertainment, and viruses than getting quality work done. Hence, I agree with you, I think that Munich is leaning toward making the right decision. 13. Sure, I can. The commercial sector is literally drowning in similar šit, too. Try e.g. 14. Your taking of COBOL out to the dumping ground of history may be a bit premature. It's still actively being used in bluechip industries such as banking, insurance, and telecommunications. As far as new development goes it's rarely (if ever) used in GUI type applications but remains popular for high volume backend transaction processing in the bluechip industries. My guess is that your recent Bank of America transactions were touched by COBOL at some point, most likely in the mission critical application of updating your account. Not that I don't agree with your sentiment, it's just that it's incredibly difficult to get rid of. The business case for replacing existing backend systems with a more modern platform are usually weak. 15. Keyboards and mice should theoretically be obsolete too, but after playing with tablets for a couple of years, many people are moving back to laptops and even desktops for "real work". Linux having its origins in the 1960's is not an argument at all against it. 16. LOL, right, it surely feels like the two debit cards were attempted to be sent to me by a COBOL robot. ;-) I understand it's hard to get rid of things when lots of stuff has been written in an old framework. 17. Eelco HoogendoornAug 19, 2014, 10:57:00 PM 'What I am really stunned by is the unbelievably complicated culture of installing things on Linux.' Indeed. The only thing such accomplishes is making people feel clever because they haxxored their computer with 1337 compilars. In the real world of people trying to get stuff done, such nonsense is known as a lack of encapsulation, which is simply objectively bad software design. 18. Wow, what a highly emotional and non-factual piece. I come here for science news, but the credibility of the blog just plummeted. So three year old user friendliness is the main criterion for municipal desktop operating systems? Where did this criterion come from? If valid, there are several Linux distributions dedicated to three year olds. Dou Dou, for example. Come on Lubos you can de better. Where is the meat (facts)? 19. Have people who struggled with Linux run Windows computers for a long time before switching to a different operative system? Are there people who have always run Linux machines and never used Windows, but still feel unhappy about the Linux user experience. Just wondering because my mother started using computers when she was 60 yo, and she always found it pretty straightforward to use. Only time she tried to use Windows she found it pretty disgusting and user-unfriendly. 20. Lubos is a theorist. 
All theorists use Windows, while most all experimentalists use Linux (Scientific Linux is the official OS of Fermilab and CERN). I'll let someone else explain the reasons. 21. I think I get it already. Theorists tax the Operating System as lightly as a three-year-old, whereas experimentalists need the system for real work. 22. Dear Eelco, thanks for making these observations clear with some adult terminology! ;-) 23. I think it is true to some extent and there is nothing to be ashamed of. Of course that theorists often use computers in similar ways as writers (of literature), not really to compute, and they don't want to waste their time by forcing computers to do elementary things because computers are supposed to make things simpler, not harder. Experimenters do lots of complicated things with computers so they may sacrifice some friendliness without increasing the amount of wasted time by too high a percentage. For the Kaggle contest, I had to recreate an Ubuntu virtual machine because it seemed like the most plausible if not only way to install software that helps one produce competitive scores. By now, someone has ported it to Windows. I would probably prefer it but my experience with things like Visual Studio etc. is really non-existent, due to my Linux training, so the Linux path could have been easier for me due to the historical coincidences, too. 24. "it's been my point for years that the movement to spread Linux on desktop is an ideological movement" The reverse is true. Computing in the free world is subject to market forces. Linux has won hands down everywhere except for the Desktop where MS Office addicted persons obstruct innovation. Political and objective reasoning has placed Linux everywhere except the desktop. Grandmothers, children and some theorists have been well served on Desktop Linux for a decade or more. I invite you to drill down to the objective reasons why that is. We will probably never know the truth about Munich IT management decisions, but the wider market tells a clear and dramatic story in favour of open (but profit making) systems. If you find being called out for lack of meat obnoxious then I am sorry. This article happens to be the the first protein lacking I have seen by you, Thank you for the Reference Frame. 25. Desktop - and increasingly more often, mobile platforms - are the places where the actual work is being done and where the actual relevant features of operating systems are being tested. It's unambiguously clear that for the operating systems to do their work well, they should be profit-driven, company-protected systems. Whether the source is open or closed isn't too important. What's important is that a company has a financial interest to make it work. So Apple is doing the same thing for iOS and Google for Android that Microsoft is doing for Windows. The underlying mechanisms that make all these things usable are completely analogous and they require capitalism. 26. You call the sharing of IT ideas, architecture and open core modules "socialism". By the same token you are a rabid socialist for openly discussing your physics theories. By all means let Apple and Microsoft tinker with buttons and pixels to accommodate the increasingly dumbed down populations, but let the core architecture be defined by the Open Source world. This massively benefits the corporate world as well as the rest of humanity, which is why the corporate world all use Open solutions in one way or another. 27. 
Yes, I am an insane socialist donating intellectual assets of multi-million values to others for free. But that's less unethical than to be forcing others to use unusable products. 28. It may be several hundred thousand generations behind the most obsolete flying saucer dimensional transfer management system in the galaxy, but .NET is the greatest thing in the known universe for sure. Do the Linux bug dwellers have anything remotely like this? I don't know since I haven't looked but I seriously doubt it. Congratulations to the officials of Munich city who have belatedly achieved common sense. 29. Hmm, think you have been brainwashed by microsoft, Lubos---there are plenty of uses for Linux...even Google uses a lightly morphed version, as does Android, etc...here is a partial list of surprising adopters from Wikipedia: --lots of free compilers as well for developers and programmers. 30. I have never communicated with Microsoft or read any of its opinions - unfortunately, I would say - so I couldn't have been "brainwashed by Microsoft". I am not saying that people aren't using all kinds of other products, and so am I. Concerning mobile OSes, I have devices with iOS, Android, as well as Windows Phone, and Android is the most expensive one. I am just warning against the political movement that is trying to force different systems upon desktop users whose majority clearly and voluntarily prefers Microsoft Windows as the market conditions unambiguously show. 31. Unlike benchtop chemistry and biology, physics can be mostly taught online, with engineers later being hired to do experiments. I sure would like Lubos to join an online university to create video lectures, at both advanced and entry level physics. 32. Honest question: What's so great about it? Can you explain or give an example? Thanks. 33. I have to say that I fail to see the Linux world as some sort of sinister kabal that is forcing innocents to use unusable systems. Look at the Linux desktop market share, and you can at least say that they have failed. Windows is great for Microsoft-style word processing and spreadsheets. Perhaps it's even OK for TeX/LaTeX, if there's a decent and easy to install distribution for it (I know there is one for OSX, not sure about Windows). Linux seems popular for scientific computing, and where such users want a more polished and easy to use system for their work laptop/desktop, they choose OSX, which gives you Unix underneath and a polished user interface on top. That's why a progressive household would have all three operating systems on their computers. I know mine does. :) 34. OT: Which reminds me ... I'm feeling nostalgic. It's many decades since every other word in those horrible computer trade magazines seemed to be about the 'goto' statement and 'spaghetti code'. Now all is silent — as far as I know anyway. Oh, how I miss the tedium of it all! Anyone care to rekindle the exquisite ennui? Hey, how about a discussion on punched cards versus paper tape? :) Incidentally, as far as operating systems go, I mostly use Windows simply because, reluctantly, that was all that was made available to me at one point (more accurately it started with that awful DOS), but I got used to it and I can do all I need to do with it. But most of all I use it these days because I'm buggered if I'm going to spend any time looking up the kind of stuff that I lost interest in and forgot about years ago just to make a change for the sake of Greater F#cking Spartan. 
Also VBA behind Excel can be very handy for a quickie, a little like a fast shag behind the bicycle shed. Just the ticket sometimes. :) P.S. Many years ago, but again long past my interest date, I surprised myself by reading Bjarne Stroustrup's book on the genesis of C++ (I forget the title) and found it fascinating. I'm pretty sure I'm fully cured now though. :) 35. I just noticed that Microsoft is currently in the process of shifting its German operational center to - München, Schwabing. Now that they are becoming a big tax payer over there, it seems inconvenient for the municipal government to run on Linux. After all, Linux won't finance any pleasure ('amigo') trips for the local politicians, Microsoft perhaps does ... 36. Absent a source and a sink of time... everything happens? Or nothing happens? The event horizon is when happening stops? Can entropy be static? 37. "Suggestions the council has decided to back away from Linux are wrong, according to council spokesman Stefan Hauf." Some meat: 38. Dear FlanObrien, the committee to review the computing in the city was probably built by the executive power in the city which is why one should also respect the interpretation of the executive power, and not the council, why it was done. 39. Believing the world should run on the level of three-year-olds is really very disturbing. It may also explain why social has become more and juvenile over time. I figure if you need pretty pictures and shiny baubles, you're not really looking for a computer. More like an electronic playmate. It's interesting that your Vista computer worked so well. Mine crashed, despised the peripherals (all of which I replaced) and drove me to buy an Apple to escape the Microsoft curse. Maybe I just really use my computer more than most and expect it to function like I want it, not like a three-year-old wants it. I'm a grown-up now. I want a grown-up computer.
087a484f605eddac
Wednesday 9 October 2013

Nobel Prize in Chemistry Awarded for Not Solving Schrödinger's Equation

(Picture from the presentation of the Nobel Prize in Chemistry 2013: Multiscale Models.)

Quantum mechanics based on Schrödinger's equation, as the core of modern physics, is presented as an almost perfect mathematical model of the atoms and molecules making up the world. But there is one hook: Schrödinger's equation can be solved analytically only for the Hydrogen atom with one electron and computationally only for atoms with few electrons; already Helium with two electrons poses difficulties. The reason is that the Schrödinger equation is formulated using 3N spatial dimensions for an atom with N electrons, with each electron demanding its own three-dimensional space. If each dimension takes 10 pixels to be resolved (very low accuracy), $10^{99}$ pixels, essentially a googol ($10^{100}$, 1 followed by 100 zeros), would be required for the Arsenic atom with 33 electrons, which is deadly poison for any thinkable computer, since the required number of pixels would be much larger than the number of atomic particles in the Universe! (See the short calculation after the comments below.) Schrödinger's equation thus has to be replaced by an equation which is simpler to solve, and this is what computational chemistry is about. The first Nobel Prize in this field was given in 1998 to Kohn and Pople for so-called density functional theory, which reduces Schrödinger's equation to computable three spatial dimensions with say 100 pixels in each dimension. The next one was given today to Karplus, Levitt and Warshel for
• the development of multiscale models for complex chemical systems:
• Karplus, Levitt and Warshel...managed to make Newton’s classical physics work side-by-side with the fundamentally different quantum physics.
These prizes were thus awarded for not solving the Schrödinger equation, while the very formulation of the equation (and the similar Dirac equation) was awarded in 1933. In any case the prize was awarded to Computational Mathematics as an expression of The World as Computation.

2 comments:
1. I would call this mathematical (or calculational) heuristics, and it is NOT physical science. It is guessing, and guessing by computer at that. They might as well admit they are using fudge factors, and have done with the "progress in science" propaganda, because it is degeneration of science, not progression.
2. Well, a man can only do what a man can do, and if solving the Schrödinger equation is beyond human capability, then you have to solve some other equation and that is not necessarily degeneration. It could be just realism, but it could also be fake science.
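As a back-of-the-envelope check of the scaling argument in the post (my own arithmetic, using the post's assumption of 10 grid points per dimension):
• A wave function in $3N$ dimensions with 10 points per dimension needs $10^{3N}$ grid points; for Arsenic with $N = 33$ this is $10^{99}$, essentially one googol. A density functional description in 3 dimensions with 100 points per dimension needs only $100^3 = 10^6$ points, a reduction by some 93 orders of magnitude.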
bc00e4fb98b9598d
Advances in High Energy Physics / 2013 / Article / Special Issue: New Developments in Cosmology and Gravitation from Extended Theories of General Relativity

Research Article | Open Access | Volume 2013 | Article ID 214172

Paul S. Wesson, James M. Overduin, "Scaling Relations for the Cosmological “Constant” in Five-Dimensional Relativity", Advances in High Energy Physics, vol. 2013, Article ID 214172, 6 pages, 2013. Academic Editor: Jose Edgar Madriz Aguilar. Received 06 Sep 2013; Accepted 08 Oct 2013; Published 30 Oct 2013.

When the cosmological “constant” is derived from modern five-dimensional relativity, exact solutions imply that for small systems it scales in proportion to the square of the mass. However, a duality transformation implies that for large systems it scales as the inverse square of the mass.

1. Introduction

The cosmological “constant” as it appears in Einstein’s general relativity has several puzzling aspects, and it is a serious problem to understand why its value as inferred from cosmology is much smaller than its magnitude as implied by particle physics. However, it has been known for a long time that the cosmological “constant” appears more naturally when the world is taken to be five-dimensional [1], and recently there has been intense work on the modern versions of 5D relativity where the extra dimension is not compactified [2–4]. The purpose of the present paper is to draw together various results in the literature which indicate that there may be simple scaling relations between the values of the cosmological “constant” and the mass of the system concerned. Tentatively, we identify $\Lambda \propto m^2$ for small systems and $\Lambda \propto 1/m^2$ for large, gravitationally-dominated systems. While these relations cannot be rigorously established with our present level of understanding, we believe that it is useful to point them out as guides for future research. The subjects which indicate possible relations are diverse and include the embedding of $\Lambda$-dominated solutions of 4D general relativity in the so-called 5D canonical metric [5–8]; the embeddings which lead to variable values of $\Lambda$ [9–13]; the equations of motion for canonical and related metrics [14–20]; conformal transformations which affect $\Lambda$ and possibly $m$ [21, 22]; the vacuum and gauge fields associated with elementary particles [23, 24]; and the wave-particle duality connected with certain $\Lambda$-dominated 5D metrics [25–27]. Most of our results are in Section 2. There we will reexamine the meaning of $\Lambda$, reinterpret two classes of known solutions, and present a new class with interesting properties. Section 3 is a conclusion. To streamline the work, we will often absorb the speed of light $c$, the gravitational constant $G$, and the quantum of action $\hbar$, except in places where they are made explicit to aid in understanding. As usual, uppercase Latin letters run 0–4 for time, space and the extra dimension. We label the last $x^4 = l$ to avoid confusion. Lowercase Greek letters run 0–3. Other notation is standard.

2. The Cosmological “Constant” and Possible Scaling Relations

In this section, we will examine certain subjects which involve the cosmological “constant” of a spacetime and the mass of a test particle moving in it. That these parameters may be linked can be appreciated by noting that 5D relativity is broader than Einstein’s 4D theory, being in general an account of gravity, electromagnetism, and a scalar field, where the last is widely believed to be concerned with how particles acquire mass [2–4].
However, in 5D neither $\Lambda$ nor $m$ are in general constants. Rather, they depend on the field equations and solutions of them. It is common to take the field equations to be given in terms of the Ricci tensor by
$R_{AB} = 0, \quad A, B = 0,\dots,4.$    (1)
These apparently empty 5D equations actually contain Einstein’s 4D equations with a finite energy-momentum tensor, a result guaranteed by Campbell’s embedding theorem [5–7]. This means that the 4D theory is smoothly contained in the 5D one and that the latter can be brought into agreement with observations at some level. In Einstein’s theory, the cosmological “constant” is usually introduced by adding a term $\Lambda g_{\alpha\beta}$ to the field equations:
$G_{\alpha\beta} + \Lambda g_{\alpha\beta} = \frac{8\pi G}{c^4}\, T_{\alpha\beta}.$    (2)
Here, $g_{\alpha\beta}$ is the metric tensor, whose covariant derivative is zero, hence the acceptability of the noted term. We recognize that the $\Lambda g_{\alpha\beta}$ term is a kind of a gauge term. It is sometimes moved to the right-hand side of Einstein’s equations, where it can be viewed as a vacuum fluid with density $\rho_v = \Lambda c^2/8\pi G$ and equation of state $p_v = -\rho_v c^2$. However, it should be recalled that the coupling constant between the left-hand (or geometrical) side of the Einstein equations and the right-hand (or matter) side is $8\pi G/c^4$. This, therefore, cancels the similar coefficient of the vacuum density, leading us back to the realization that $\Lambda$ is really a stand-alone parameter insofar as general relativity is concerned (this is in line with the fact that its physical dimensions or units are length$^{-2}$, matching those of the rest of the field equations, which involve the second derivatives of the dimensionless metric coefficients with respect to the coordinates). An implication of this is that when $\Lambda$ is derived from a 5D as opposed to a 4D theory, it may be connected not with gravity but with the scalar field, a possibility we will return to later.

The quantum vacuum, as opposed to the classical one, is frequently attributed an energy density which is calculated in terms of many simple harmonic oscillators and expressed in terms of an effective value of $\Lambda$ [23]. This energy density is formally divergent, unless it is cut off by introducing a minimum wavelength or equivalently a maximum wave number $k_{\max}$. With this being understood, there results an energy density of order $\hbar c\, k_{\max}^4$. If the cutoff in $k$ is chosen to be the inverse of the Planck length, this has a size of roughly $10^{112}$ erg cm$^{-3}$. For comparison, the cosmologically determined value of $\Lambda$ ($\sim 10^{-56}$ cm$^{-2}$) corresponds to an energy density of order $10^{-8}$ erg cm$^{-3}$. The discrepancy, of order $10^{120}$, is the crux of the cosmological-constant problem.

An alternative interpretation of the result in the preceding paragraph is to imagine that the quantum vacuum does not spread through ordinary 3D space but is concentrated in particles of mass $m$. It is reasonable to suppose that the stuff of each particle occupies a volume whose size is given by the Compton wavelength, $\lambda_C = \hbar/mc$. Then, the average density is approximately
$\rho \approx \frac{m}{\lambda_C^3} = \frac{m^4 c^3}{\hbar^3}.$    (3)
This expression is formally identical to the one above. But the high-density vacuum is now confined to the particle, as expected if it is the product of a scalar field which couples to matter (see below). There is no conflict between (3) and the all-pervasive cosmological vacuum discussed above, so the cosmological-constant problem is avoided.

The best way to incorporate a scalar field into physics is to take its potential to be the extra, diagonal element of an extended 5D metric tensor. Then, following Kaluza the extra, nondiagonal elements can be identified with the potentials of electromagnetism, while the 4D block remains as a description of the 4D Einsteinian gravity.
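As a rough numerical cross-check of the orders of magnitude quoted above (an illustrative sketch, not part of the paper; only the exponents are meaningful, and the conventional $1/16\pi^2$ prefactor of the zero-point integration is included so the numbers land near the familiar figures):

```python
import math

# CGS constants
hbar = 1.055e-27   # erg s
c    = 2.998e10    # cm / s
G    = 6.674e-8    # cm^3 g^-1 s^-2

l_planck  = math.sqrt(hbar * G / c**3)                         # ~1.6e-33 cm
rho_vac   = hbar * c / (16 * math.pi**2 * l_planck**4)         # erg/cm^3, Planck-cutoff estimate
rho_cosmo = 1e-8                                               # erg/cm^3, value quoted in the text

print(f"Planck length         ~ {l_planck:.1e} cm")
print(f"Planck-cutoff density ~ {rho_vac:.1e} erg/cm^3")
print(f"Discrepancy           ~ 10^{math.log10(rho_vac / rho_cosmo):.0f}")
```

This reproduces a vacuum energy density of order $10^{112}$ erg cm$^{-3}$ and the famous discrepancy of order $10^{120}$.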
Since we are here mainly interested in the scalar field, we can eliminate the electromagnetic potentials by a suitable use of the coordinate degrees of freedom of the metric, so the interval for the gravitational and scalar fields is
$dS^2 = g_{\alpha\beta}(x,l)\,dx^\alpha dx^\beta + \varepsilon\,\Phi^2(x,l)\,dl^2.$    (4)
Here $g_{\alpha\beta}$ and $\Phi$ depend in general on both the coordinates of spacetime ($x^\gamma$) and the extra dimension ($l$). The symbol $\varepsilon = \pm 1$ indicates whether the extra dimension is spacelike or timelike, both being allowed in modern 5D theory (the extra dimension does not have the physical nature of an extra time, so for $\varepsilon = +1$ there is no problem with closed timelike paths). Many solutions are known of the field equations (1) for the metric (4) [2–4]. It transpires that the easiest way to approach the field equations is by splitting the 4D part of the metric into two functions; thus,
$g_{\alpha\beta}(x,l) = f(l)\,\bar g_{\alpha\beta}(x).$    (5)
Here, $f(l)$ is a gauge function which determines the behavior in $l$, while $\bar g_{\alpha\beta}$ depends only on the spacetime coordinates $x^\gamma$. While the form (5) provides a mathematical advantage, it involves a physical quandary: does an observer experience the whole 4D space $f(l)\,\bar g_{\alpha\beta}$ or only the spacetime-dependent subspace $\bar g_{\alpha\beta}$? This question is akin to the argument for the so-called Jordan frame versus the Einstein frame in old 4D scalar-tensor theory, where a scalar function was applied to the 4D metric with no fifth dimension. It did not find a definitive answer then and has not done so today. There is a difference in the physics between the two frames, but so long as the gauge function is slowly varying, this will be minor. Cosmological observations may one day reveal the difference between the two frames, but for now we proceed with the view that they yield complementary physics.

An instructive case of the metric (5) has $f(l) = l^2/L^2$ and $\Phi = 1$, where $\bar g_{\alpha\beta}$ is any solution of the Einstein equations without ordinary matter but with a vacuum fluid whose density is measured by $\Lambda$. This is known as the (pure) canonical metric. There is a large literature on this case (see [8] for a review). It includes the Schwarzschild-de Sitter metric for the sun and the solar system and the de Sitter metric for the universe in its inflationary stage. It turns out that the equations of motion for a test particle in the 5D metric (5) are the same as those in the 4D theory, a result which enforces agreement with the classical tests of relativity [28, 29]. The dynamics may be obtained either by using the 5D geodesic equation or by putting $dS = 0$ in (5). The latter is based on the fact that null paths in 5D with $dS^2 = 0$ reproduce the timelike paths of massive particles in 4D with $ds^2 > 0$, as well as the paths of photons with $ds^2 = 0$. The definition of dynamics and causality by $dS^2 = 0$ matches the null nature of the field equations (1). It turns out that the nature of the motion in the extra dimension depends on the choice of $f(l)$ in the metric (5), as does the sign of $\Lambda$. Thus introducing a constant $L$, we find
$l = l_0\,\exp\!\big[\pm (s - s_0)/L\big],$    (6a)
$l = l_0\,\exp\!\big[\pm i (s - s_0)/L\big].$    (6b)
The second of these equations is of particular interest, because it is the same as the expression for the wave function in old wave mechanics. In fact, it may be shown that the 5D geodesic equation for the (pure) canonical metric reproduces the Klein-Gordon equation with $l$ in place of $\psi$ and $1/L$ in place of $m$ [25–27]. We will meet the Klein-Gordon equation again below. Here, we note that the (pure) canonical metric suggests the possibility that
$\Lambda = \frac{3}{L^2} = 3\left(\frac{mc}{\hbar}\right)^2.$    (7)
Here, $L$ has been written in terms of the Compton wavelength $\hbar/mc$. This identification presupposes that the observer experiences the 4D spacetime $\bar g_{\alpha\beta}$ in (5) rather than the composite spacetime defined by $f(l)\,\bar g_{\alpha\beta}$. This is a subtle issue, as noted above, and we will return to it below.
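Putting (4) and (5) together with the canonical choices just described, the pure canonical line element can be written out in one piece (done here only for convenience; the $-dl^2$ sign corresponds to a spacelike extra dimension, $\varepsilon = -1$):

$$dS^2 \;=\; \frac{l^2}{L^2}\,\bar g_{\alpha\beta}(x^\gamma)\,dx^\alpha dx^\beta \;-\; dl^2 .$$

The constant length $L$ in this expression is the single parameter that both fixes the effective cosmological term and, via the identification above, the test-particle mass.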
The next most simple case of (5) is when a shift is applied to the extra coordinate in the canonical metric. This may appear to be close to trivial, but it is not, because of the way in which the 4D Ricci scalar transforms, and with it $\Lambda$ [9, 10, 21, 22]. The equations of motion and the mass of a test particle for the shifted canonical metric were worked out by Ponce de Leon [16–20]. He used the principle of the least action and the eikonal equation for massive and massless particles, as opposed to the geodesic equation used by Mashhoon et al. [14, 15]. As before, the behavior differs for a spacelike extra dimension ($\varepsilon = -1$) and for a timelike one ($\varepsilon = +1$). The metric and the expressions for $\Lambda$ and $m$ are (8a) and (8b). The second of these requires lengthy calculations for $\Lambda$ and $m$ [9, 10, 16–20], so the fact that we again find $\Lambda \propto m^2$ is significant.

The third case we present is more complicated than the canonical metrics studied in the two preceding paragraphs. In (5), we take the gauge function $f(l)$ to be exponential in $l$. This may be shown to satisfy the field equations (1), which break down into three sets: ten relations which determine the energy-momentum tensor necessary to balance Einstein’s equations; four conservation-type relations which fix a 4-tensor that has an associated scalar; and one wave equation for the scalar field $\Phi$. The work is tedious (see [2–4]; indices are raised and lowered using $g_{\alpha\beta}$ of (5)). The metric and the final results of the field equations are (9a), (9b), (9c), and (9d). Here, a comma denotes the partial derivative and a semicolon denotes the (4D) covariant derivative. There are scalar quantities associated with the above which are of physical interest. For example, the trace of the energy-momentum tensor can be obtained by contracting (9b) and using (9d) to simplify it; the scalar given by the contraction of (9c) is a conserved quantity; and the (4D) Ricci or curvature scalar can be expressed in its general form and in the special form it takes for the metric (9a). Thus we obtain the scalars (10a), (10b), and (10c). These relations and (9a), (9b), (9c), and (9d) can be given physical interpretations along the lines of what has been done for other solutions in the literature [2–4]. The energy-momentum tensor (9b) shows that the source consists of the scalar field plus a term which, because of its proportionality to $g_{\alpha\beta}$, would usually be attributed to a vacuum fluid with cosmological constant $\Lambda$. The conserved tensor of (9c) obeys a conservation law by the field equations, and its scalar has in other works been linked to the rest mass of a test particle [25–27]. This is confirmed by the wave equation (9d), which deserves some discussion.

Relation (9d), depending on the sign involved, is known either as the Helmholtz equation or as the Klein-Gordon equation. Many solutions to it are known, with applications to problems in atomic physics (like diffusion) and elementary particle physics (like wave mechanics). There are different modes of behavior, depending on the sign, which correspond to the monotonic and oscillatory modes (6a) and (6b) of the canonical metric discussed before. For the present metric (9a), the scalar field may be real or complex, and in the latter case the wave equation (9d) is identical to the Klein-Gordon equation, with the constant length involved being the Compton wavelength of the test particle. This is similar to a previous interpretation based on the shifted-canonical metric [25–27]. (In (9d), the oscillation is in the spacetime coordinates, whereas in the corresponding equation of [25–27] it is in $l$, because in the canonical metric the physical behavior is moved from one parameter to the other. In (9a), the problem can be made explicitly complex by an appropriate rewriting, if so desired.)
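For reference (standard background, not a reconstruction of (9d) itself), the Klein-Gordon equation to which the text compares the wave equation reads, with $\lambda_C = \hbar/mc$ the Compton wavelength,

$$\Big(\Box + \frac{1}{\lambda_C^2}\Big)\phi \;=\; \Big(\Box + \frac{m^2c^2}{\hbar^2}\Big)\phi \;=\; 0 ,$$

so a constant length appearing in this position in a wave equation is naturally read as an inverse particle mass, which is precisely the identification being made here.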
It may seem strange that a classical field theory yields an equation typical of (old) quantum theory, but it should be recalled that the wave equation (9d) comes from the extra component of the field equations (1), which does not exist in standard general relativity. In fact, the present interpretation of the metric (9a) is fully consistent with the approach to noncompactified 5D relativity known as Space-Time-Matter theory, where matter on the macroscopic and microscopic scales is taken to be the result of higher-dimensional geometry [2–4]. By contrast, while the metric (9a) may resemble the warp metric of the alternative approach to 5D relativity known as the Membrane theory, in that approach the corresponding factor in the exponent of the 4D part of the metric is absent, which means that the metric does not satisfy the field equations in the simple form (1). Our view is that (9a), (9b), (9c), and (9d) show the wave-mechanical properties of matter. The scalars (10a), (10b), and (10c) associated with the solution bear this out. With conventional units restored, the conserved quantity is inversely proportional to the Compton wavelength of a test particle moving in the spacetime. Viewed as a wave which couples to matter, we expect that the Compton wavelength should be consistent with the radius of curvature of the spacetime, and this is confirmed by the relation for $\Lambda$. Lastly, we note that the aforementioned relation shows once again that $\Lambda \propto m^2$.

This relation is common to the three classes of solutions examined above, which come from the different choices of the gauge function $f(l)$ in (5). They involve the pure canonical gauge, which gives (6a) and (6b); the shifted canonical gauge, which gives (8a) and (8b); and the exponential gauge, which gives (9a), (9b), (9c), and (9d). By comparison with known physics, we infer that the constant length $L$ is inversely proportional to the particle mass $m$, which we can write in terms of the Compton wavelength as $L = \hbar/mc$. The exponential gauge, in particular, leads from the field equation to the Klein-Gordon equation, which is the basic relation in wave mechanics (its low-energy limit is the Schrödinger equation which underlies the physics of the hydrogen atom). The implication is that the scalar field of 5D relativity is connected to the mass of a particle, and with the phenomenon of wave-particle duality ([25–27]; the Klein-Gordon equation can have real or complex forms). These comments are in accordance with the longstanding view that theories of Kaluza-Klein type provide a way of unifying the interactions of particles with gravity.

What, however, of the latter interaction? It is natural to wonder if there is not a complementary relation to what we have found above, but for macroscopic gravity-dominated systems. This subject will require detailed analysis, but some comments of a preliminary type may be made. It is useful, in this context, to reconsider the traditional distinction between inertial mass ($m_i$) and gravitational mass ($M_g$). The Kaluza-Klein equation involves the former, so our previous considerations have concerned the inertial mass and $\Lambda \propto m_i^2$ as the scaling relation for the cosmological “constant”. It is clear that this scaling rule cannot persist to arbitrarily large masses without leading to excessive curvature of empty spacetime. We expect, therefore, that it might pass over to some other scaling relation for large gravitational masses. Such a relation is actually implicit in certain works on the canonical metric [2–4, 8–22]. We recall that the 4D part of the 5D canonical metric involves the combination $(l/L)\,ds$. This can be compared to the element of action for classical mechanics, $m\,ds$.
Two obvious identifications are possible: $l \leftrightarrow \hbar/mc$ (the Compton wavelength of an inertial mass) and $l \leftrightarrow GM/c^2$ (the Schwarzschild radius of a gravitational mass). We have already explored the former, so attention is focused on the latter. In fact the possibility has been considered, mainly in relation to cosmology, and cannot be ruled out [2–4, 11–13]. As regards $\Lambda$, we note that its behavior depends on the coordinate frame experienced by an observer (see above). To illustrate this, consider a vacuum spacetime with the (pure) canonical metric, where the 4D part of the interval is $(l^2/L^2)\,\bar g_{\alpha\beta}\,dx^\alpha dx^\beta$. The effective value of $\Lambda$ can be obtained from either the Ricci scalar or the Einstein tensor and depends on whether the observer experiences only $\bar g_{\alpha\beta}$ or the full $(l^2/L^2)\,\bar g_{\alpha\beta}$. The results are, respectively, $\Lambda = 3/L^2$ and $\Lambda = 3/l^2$, and both appear in the literature. Let us take the second alternative and combine it with the physical identification noted above. The obvious parameter with which to geometrize the gravitational mass is $GM/c^2$, the Schwarzschild radius. Then we find that, in total, $\Lambda$ goes as the inverse square of $GM/c^2$. That is, for large gravitationally dominated systems we expect $\Lambda$ to scale as the inverse square of the mass.

The argument of the preceding paragraph is tentative, but can be checked by combining it with the more detailed work concerning the inertial mass which went before. For simplicity, we take the numerical factors to be those of the canonical case and consider a proton (inertial mass $m_p$) and the observable part of the universe (gravitational mass $M_u$). Then, the scaling relations for the cosmological “constant” read $\Lambda_p \approx 3\,(m_p c/\hbar)^2$ and $\Lambda_u \approx 3\,(c^2/G M_u)^2$. These can be combined to give the number of baryons in the observable universe as (11). In this, we substitute the particle-scale value of $\Lambda$ (of order $10^{28}$ cm$^{-2}$ for the proton) and the cosmological value of $\Lambda \sim 10^{-56}$ cm$^{-2}$ (obtained from $\Lambda = 3\,\Omega_\Lambda H_0^2/c^2$, together with current observational data giving $H_0 \simeq 70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_\Lambda \simeq 0.7$). The result is $N \sim 10^{80}$, which is in agreement with conventional estimates.

The two scaling relations considered in this section should be regarded as complementary. The first is better based on theory than the second, since it can be examined in three gauges rather than one. However, there is in principle no conflict between them, and in practice we expect the first to grade into the second. The rule $\Lambda \propto m^2$ should be dominant on the particle scale ($\sim 10^{-13}$ cm), and the rule $\Lambda \propto 1/M^2$ should be dominant on the cosmological scale ($\sim 10^{28}$ cm). Theoretically, they should be comparable on scales of order 100 km, which in practice is roughly where quantum interactions and solid-state forces are superseded by the effects of gravity.
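The combination leading to (11) can be checked numerically. The following is an order-of-magnitude sketch only (not the paper's equation; the exact prefactors are not reproduced, constants are in CGS units, and $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_\Lambda = 0.7$ as in the text):

```python
import math

# CGS constants
hbar = 1.055e-27   # erg s
c    = 2.998e10    # cm / s
G    = 6.674e-8    # cm^3 g^-1 s^-2
m_p  = 1.673e-24   # g, proton mass

# Particle-scale Lambda from the Compton wavelength: Lambda_p ~ 3 (m_p c / hbar)^2
lam_p = 3.0 * (m_p * c / hbar) ** 2                  # cm^-2, ~1e28

# Cosmological Lambda from Lambda_u = 3 * Omega_Lambda * (H0 / c)^2
H0    = 70.0 * 1.0e5 / 3.086e24                      # s^-1
lam_u = 3.0 * 0.7 * (H0 / c) ** 2                    # cm^-2, ~1e-56

# Eliminating M_u between Lambda_p ~ 3 (m_p c / hbar)^2 and Lambda_u ~ 3 (c^2 / G M_u)^2
# gives N = M_u / m_p ~ (hbar c / G m_p^2) * sqrt(Lambda_p / Lambda_u).
N = (hbar * c / (G * m_p**2)) * math.sqrt(lam_p / lam_u)

print(f"Lambda (proton scale)    ~ {lam_p:.1e} cm^-2")
print(f"Lambda (cosmological)    ~ {lam_u:.1e} cm^-2")
print(f"Baryon number estimate N ~ 10^{math.log10(N):.0f}")
```

The output, $N \sim 10^{80}$, matches the conventional estimate cited in the text.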
3. Conclusion

We have seen in the preceding section that the cosmological constant is open to reinterpretation, particularly as a measure of the energy density of the vacuum fields of particles. It is somewhat better understood in cosmology, where its theoretical status is relatively clear in Einstein’s equations, and where observations establish its approximate value. Unfortunately, there is a very large mismatch between the microscopic and the macroscopic domains. This can in principle be alleviated by using a five-dimensional theory, of the kind indicated by unification, where in general $\Lambda$ is not a universal constant but a variable. This is shown most clearly by the 5D canonical gauge, where $\Lambda$ scales according to the size of the potential well ($L$) or the value of the extra coordinate ($l$). Since the mass ($m$) of a test particle also depends on these parameters, we are tentatively led to suggest scaling relations of the form $\Lambda = \Lambda(m)$. For the canonical gauge in its pure and shifted forms, the scaling relation for small systems has the form $\Lambda \propto m^2$. This is also the form derived from the exponential gauge, which has the advantage of showing that the extra field equation resembles the Klein-Gordon equation of wave mechanics, implying that the scalar field is connected with particle mass. There is, however, an alternative interpretation of the canonical gauge and others like it. The 4D part of this involves a term $(l/L)\,ds$, and to match the classical action $m\,ds$, it is possible to use the gravitational mass, geometrized by the Schwarzschild radius $GM/c^2$, rather than the inertial mass, geometrized by the Compton wavelength $\hbar/mc$. The implication is that when gravity is dominant, for large $M$, there is a scaling relation of the form $\Lambda \propto 1/M^2$. This macroscopic relation should be viewed as complementary to the microscopic one, the changeover occurring at a length scale of order 100 km. When the two relations are combined, it is possible to obtain an expression for the number of baryons in the observable universe. This result (11) agrees with conventional estimates, which may be seen as provisional support for the idea that the cosmological “constant” varies with scale.

Thanks for comments are due to members of the Space-Time-Matter group (

References

1. V. A. Rubakov and M. E. Shaposhnikov, “Extra space-time dimensions: towards a solution to the cosmological constant problem,” Physics Letters B, vol. 125, no. 2-3, pp. 139–143, 1983.
2. J. M. Overduin and P. S. Wesson, “Kaluza-Klein gravity,” Physics Reports, vol. 283, no. 5-6, pp. 303–378, 1997.
3. P. S. Wesson, Five-Dimensional Relativity, World Scientific, Singapore, 2006.
4. P. S. Wesson, “The geometrical unification of gravity with its source,” General Relativity and Gravitation, vol. 40, no. 6, pp. 1353–1365, 2008.
5. S. Rippl, C. Romero, and R. Tavakol, “D-dimensional gravity from (D + 1) dimensions,” Classical and Quantum Gravity, vol. 12, no. 10, pp. 2411–2421, 1995.
6. C. Romero, R. Tavakol, and R. Zalaletdinov, “The embedding of general relativity in five dimensions,” General Relativity and Gravitation, vol. 28, no. 3, pp. 365–376, 1996.
7. J. E. Lidsey, C. Romero, R. Tavakol, and S. Rippl, “On applications of Campbell's embedding theorem,” Classical and Quantum Gravity, vol. 14, no. 4, pp. 865–879, 1997.
8. P. S. Wesson, “The embedding of general relativity in five-dimensional canonical space: a short history and a review of recent physical progress.”
9. B. Mashhoon and P. S. Wesson, “Gauge-dependent cosmological ‘constant’,” Classical and Quantum Gravity, vol. 21, no. 14, pp. 3611–3620, 2004.
10. B. Mashhoon and P. Wesson, “An embedding for general relativity and its implications for new physics,” General Relativity and Gravitation, vol. 39, no. 9, pp. 1403–1412, 2007.
11. J. M. Overduin, “Nonsingular models with a variable cosmological term,” Astrophysical Journal Letters, vol. 517, no. 1, pp. L1–L4, 1999.
12. J. M. Overduin, P. S. Wesson, and B. Mashhoon, “Decaying dark energy in higher-dimensional gravity,” Astronomy and Astrophysics, vol. 473, no. 3, pp. 727–731, 2007.
13. P. S. Wesson, B. Mashhoon, and J. M. Overduin, “Cosmology with decaying dark energy and cosmological ‘constant’,” International Journal of Modern Physics D, vol. 17, no. 13-14, pp. 2527–2533, 2008.
14. B. Mashhoon, H. Liu, and P. S. Wesson, “Particle masses and the cosmological constant in Kaluza-Klein theory,” Physics Letters B, vol. 331, pp. 305–312, 1994.
15. B. Mashhoon, P. Wesson, and H. Liu, “Dynamics in Kaluza-Klein gravity and a fifth force,” General Relativity and Gravitation, vol. 30, no. 4, pp. 555–571, 1998.
16. J. Ponce de Leon, “Equations of motion in Kaluza-Klein gravity reexamined,” Gravitation & Cosmology, vol. 8, no. 4, pp. 272–284, 2002.
17. J. Ponce de Leon, “Mass and charge in brane-world and non-compact Kaluza-Klein theories in 5 dim,” General Relativity and Gravitation, vol. 35, no. 8, pp. 1365–1384, 2003.
18. J. Ponce de Leon, “Invariant definition of rest mass and dynamics of particles in 4D from bulk geodesics in brane-world and non-compact Kaluza-Klein theories,” International Journal of Modern Physics D, vol. 12, p. 757, 2003.
19. J. Ponce de Leon, “The principle of least action for test particles in a four-dimensional spacetime embedded in 5D,” Modern Physics Letters A, vol. 23, p. 249, 2008.
20. J. Ponce de Leon, “Embeddings for 4D Einstein equations with a cosmological constant,” Gravitation and Cosmology, vol. 14, pp. 241–247, 2008.
21. R. M. Wald, General Relativity, University of Chicago Press, Chicago, Ill, USA, 1984.
22. P. S. Wesson, “Particle masses and the cosmological ‘constant’ in five dimensions.”
23. S. M. Carroll, Spacetime and Geometry, Addison-Wesley, San Francisco, Calif, USA, 2004.
24. Y. Hosotani, “Gauge-Higgs unification: stable Higgs bosons as cold dark matter,” International Journal of Modern Physics A, vol. 25, p. 5068, 2010.
25. P. S. Wesson, “General relativity and quantum mechanics in five dimensions,” Physics Letters B, vol. 701, no. 4, pp. 379–383, 2011.
26. P. S. Wesson, “The cosmological ‘constant’ and quantization in five dimensions,” Physics Letters B, vol. 706, no. 1, pp. 1–5, 2011.
27. P. S. Wesson, “Vacuum waves,” Physics Letters B, vol. 722, pp. 1–4, 2013.
28. H. Liu and P. S. Wesson, “The motion of a spinning object in a higher-dimensional spacetime,” Classical and Quantum Gravity, vol. 13, no. 8, p. 2311, 1996.
29. J. M. Overduin, R. D. Everett, and P. S. Wesson, “Constraints on Kaluza-Klein gravity from Gravity Probe B,” General Relativity and Gravitation, vol. 45, no. 9, pp. 1723–1731, 2013.

Copyright © 2013 Paul S. Wesson and James M. Overduin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Laetus in Praesens — 19 April 2021 | Draft

Meta-pattern via Engendering and Navigating "Pantheons" of Belief?
Exploration of three-dimensional patterns inspired by mathematical experience of interrelationship

Contents:
Emergence of a pantheon -- cognitive or otherwise
Pantheons as patterns of cognitive N-foldness
Pantheon dynamics in a globalized semi-secular civilization?
Mathematical theology enabling the quest for a meta-pattern?
Complex equations forming "pantheons" of mathematical experience?
Prime number and curvature implications for global governance?
Polyhedra suggestive of arrays of requisite variety of pantheons in 3D
Configuring the 64 subjects of mathematics as a 64-edged drilled truncated cube
Exploring potential dynamics within a pantheon?
Engendering and navigating pantheons -- "angelic" and "demonic"?
Pantheon as a psychosocial "O-ring" -- speculatively understood?

Conventionally a pantheon is the particular set of all gods of any individual polytheistic religion, mythology, or tradition. There are an estimated 4,200 different religions in the world, although these may be variously clustered (Stephen Prothero, God Is Not One: the eight rival religions that run the world -- and why their differences matter, 2010). However, in an extensively secularized global civilization of considerable complexity, "pantheon" may in practice have other meanings -- as with "religion" and "god". Religion may then be extended to mean a pattern of fundamental beliefs. Any such religion may then be recognized as having one or more gods -- and perhaps many.

Framed in this way, it could be asked whether science can be recognized as a pantheon -- whether this is to be understood in terms of fundamental concepts or extends to the many specific disciplines which cultivate them. A similar question could be asked of the arts. Such a pattern is evident in relation to the media and its celebrities -- and to sports. In each case the focus is on a pattern of belief, how it is cultivated, and the integrative focal points it engenders.

The question here is how a pattern of belief emerges and how some form of pantheon is then engendered within it or by it. The situation is obviously relatively dynamic in that the pattern for an individual or a group typically develops and evolves over time -- most obviously in response to events and shifts in fashion. In the case of science this may be recognized in terms of paradigm shifts and revolutions (Thomas Kuhn, The Structure of Scientific Revolutions, 1962/2012).

The focus here is however on what an individual cultivates as a pantheon of "gods" to be honoured in some way -- whether as a child, an adolescent, or an adult. Clearly the pantheon at any particular time is susceptible to development. New gods are recognized or engendered and the pantheon as a whole may be reconfigured and transformed. There may then be a challenge to navigating from one pantheon to another -- to the extent that the relation to the earlier gods can be easily abandoned, and especially if the emerging gods are only partially or dimly understood. As is only too obvious, pantheons and their gods may effectively compete for the belief of an individual -- with each having a tendency to deprecate or demonise the other.
Following engagement with such a succession and variety of pantheons, the concern might then be framed as to whether the process offers insight into the nature of any "meta-pattern", what form that might take, and how engagement with it might be cultivated. One insight in that regard is offered by Gregory Bateson: The pattern which connects is a meta-pattern. It is a pattern of patterns. It is that meta-pattern which defines the vast generalization that, indeed, it is patterns which connect. (Mind and Nature: a necessary unity, 1979) And it is from this perspective that he warned in a much-cited phrase: Break the pattern which connects the items of learning and you necessarily destroy all quality. There is of course the irony that each pantheon has a natural tendency to cultivate the assumption that it is itself that meta-pattern -- or that its array of (secondary and dependent) deities is indicative of its more fundamental and transcendent nature. All else is then necessarily illusion and potentially dangerous as such. The difficulty in the current global civilization is that any such preoccupation is necessarily naive from the perspective of a given pantheon -- other than that believed to be primary. Framing alternative worldviews as fundamentally irrelevant or problematic establishes the claim that there is no fundamental difficulty to be addressed. For each pantheon the truth is already at hand -- or is a natural consequence of its further development, if not to its commitment to some form of global hegemony. In practice the situation gives rise to institutional arenas in which a degree of token discourse with "others" is tolerated at best. Most evident are legislative assemblies, but the dynamic is also evident in interdisciplinary, intersectoral and interfaith gatherings. The situation is further complicated by the degree to which iconic figures in religion, science, and other domains may be experienced and labelled (if only nicknamed) as "gods" or having "god-like" attributes. Eminent professors may be known by such labels (Gods of Science: Stephen Hawking and Brian Cox discuss mind over matter, The Guardian, 11 September 2010; Jerry Klinger, The Coronavirus Hysteria and the Gods of Science, Times of Israel, 10 March 2020). Leaders of countries may be referred to as deities, or may so consider themselves (Pierre Briançon, Macron’s 'Jupiter' model unlikely to stand test of time, Politico, 16 June  2017; William Drozdiak, After Decade in Power, Mitterrand still 'Dieu', The Washington Post, May 11, 1991): Two-thirds of the French public consider him a superb statesman, and his reverential nickname, "Dieu" (God), attests to the imperial demeanor that many French voters admire in a head of state. Comparable allusions are made regarding the heads of commercial enterprises and finance, notably through their presentation as "Masters of the Universe" (Davos as the "crowning experience" for the "Masters of the Universe", "Mistresses of the Universe"? 2009). The relations between such deities -- if any -- may well recall those evident in myths regarding traditional pantheons. The pantheons of religion have given rise to lists of the deities associated with them (List of deities; List of demigods). However Wikipedia also offers an extensive List of people who have been considered deities. Surprisingly this includes George Washington and Prince Philip -- and more recently Prince Charles on the death, of the former. 
Those acknowledged as the "gods" of other pantheons are not similarly recognized however, except through devices such as the many Lists of Celebrities, the Forbes Celebrities 100, and Orders of Precedence for purposes of protocol (List of heads of state by diplomatic precedence; Order of precedence in the Catholic Church). The Lists of academic ranks by country are naturally subject to interpretation in terms of the Academic Ranking of World Universities. Of some relevance to the following argument are references to a "personal pantheon", namely one freely composed independently of any particular belief system. One example -- My Personal Pantheon -- has been extensively, but anonymously, developed. This bears comparison with that titled Pantheon of Atheists -- again extensively developed, but with a degree of humour. Conventionally a pantheon is typically the result of a degree of anthropomorphism and personification through which human characteristics are attributed to the deities arrayed -- notably to facilitate memorable reference to them. A pantheon could however be understood more generally as an array of fundamental distinctions held to be separately meaningful -- into which "supernatural" attributes are somehow imbued, as with values (irrespective of any secular bias). These can be more conventionally recognized as complex memes, or even as memeplexes, namely clusters of memes. A reasonable summary of the controversial matter is offered by Sam Barnett-Cormac (Pantheons and Archetypes, Quaker Openings, 17 October 2017): Some see the figures of the gods of their pantheon as literally existing, as having their own agendas, and as interacting with one another and with the world as we know it; in summary, that they behave as theistic deities. Others see them as embodiments of ideas, or ideals; as archetypes that are useful in their practice. For example, a pagan who believes in practical magic might invoke a deity appropriate to their current working; in doing so, they may literally believe there is a supernatural being that they are inviting to assist them, or they may believe that they better focus their mind and energies but dwelling on the figure – or perhaps both!... Well, the example of modern pagans and other polytheists does show us a key form of conception and usage of pantheons beyond the literal... They are concepts, archetypes, ideas and ideals. In essence, they can fill the same role as stories. We use stories to shape our thoughts and to communicate... The figures of traditional pantheons are not simply a collection of characteristics and areas of dominion. They are also part of intertwined sets of stories Framed in this way, there is then the paradox as to whether a pantheon is most appropriately experienced as a memeplex clustering "god-like" qualities distinguished as memes. For those preferring such conventionally secular terms, any exploration of such memes then evokes the question as to the nature of the experiential "pantheon" -- given any deprecation of the pantheons engendered by religions. This exploration exploits the conventional articulation of mathematics into 64 disciplines as indicative of a pantheon in its own right. So framed it focuses on the fundamental equations deemed by mathematicians to constitute a nexus of beauty and truth -- and potentially to have changed the world, as argued by Ian Stewart (In Pursuit of the Unknown: 17 equations that changed the world, 2012). 
These can be contrasted with the UN's 17 Sustainable Development Goals by which it is currently hoped to change the world -- namely through a pantheon of a different kind. Emergence of a pantheon -- cognitive or otherwise Whether meme or memeplex, there are various indications as to how a pantheon might be experienced as "emerging". These include: In general terms it could be argued that there is a development from a confusing sense of subjective identification with an array of progressively more distinct experiences -- possibly deprecated from other perspectives as inchoate or "mystical'. For those experiencing them, these are only subsequently named and labelled objectively in some way. This enables a later process of ordering and classification -- as a consequence of their progressive reification (as memes). There is considerable irony to the manner in which a traditional pantheon may be effectively reified to the degree that it is embodied in symbolic architecture -- distracting from its intangible significance, as with the Pantheon in Rome (and its many imitations). Any associated personal experience of a pantheon may be understood as evolving through some form of learning, initiation, self-reference or mirroring -- arguably claimed to be of ever higher order. As noted with respect to the psychology of religion, any such pantheon emergence tends to be accompanied by dynamics of disagreement. These are seldom encompassed by those identifying with it as fervent believers or adherents -- other than through processes of deprecation and demonisation of alternatives typical of long-term rivalry (Knowledge Processes Neglected by Science: insights from the crisis of science and belief, 2012). The individual is notably confronted by the need to make whatever sense is possible in an authoritative context potentially experienced as highly confusing. This is the microcosmic version of the current challenge of global sensemaking. Pantheons as patterns of cognitive N-foldness The pantheons of tradition tend to be of a particular size. It is therefore curious to note that other "memes" tend to be arrayed in clusters of a specific size, with little understanding of why this is the case. As one example, humans have an unexplored enthusiasm for 12-fold arrays -- whether or not they are to be recognized as memeplexes (Checklist of 12-fold Principles, Plans, Symbols and Concepts: web resources, 2011). That checklist necessarily includes a number of traditional pantheons. The question that then merits exploration is whether other 12-fold sets of principles, concepts, etc are to be recognized as constituting pantheons in some experiential sense. The checklist is in fact the annex to an exploration of how such a 12-fold pattern might be indicative of an array of systemic functions (Eliciting a 12-fold Pattern of Generic Operational Insights: recognition of memory constraints on collective strategic comprehension, (2011). Why is it considered appropriate to distinguish 12 memes in any such set -- in preference to some other number? Are such distinctions indicative of requisite variety, as might be understood in some cybernetic or systemic sense? A similar exercise can be undertaken with respect to the unexplored enthusiasm for more complex 20-fold patterns (Requisite 20-fold Articulation of Operative Insights? Checklist of web resources on 20 strategies, rules, methods and insights, 2018). For whom do such patterns function as experiential pantheons and why? 
That exercise was provoked by the possibility that in general systems terms some justification for such a pattern was to be found at a fundamental biological level (Memetic Analogue to the 20 Amino Acids as vital to Psychosocial Life? 2015). Just as with the 12-fold and 20-fold arrays, the 8-fold pattern is variously considered of fundamental significance whether or not it is explicitly embodied in a traditional pantheon. There is no lack of reference to some form of 8-fold array, whether by quite distinct religions, as a feature of policy analysis, or as an organizational scheme for a class of subatomic particles. Clearly, given the determining role of the 10 Commandments for the Abrahamic religions, this could also be recognized as the expression of a form of pantheon.

Yet to be determined: is the size of such arrays of memes to be considered arbitrary and coincidental, or is it of particular significance to the organization of meaning -- under some circumstances, and perhaps only credible to some? Why does any such array "work" to the point of being a deeply valued organization of experience -- again, typically only for some? The argument can be taken further, and more generally, through considering the arrays of concepts, methods and insights variously proposed in academic treatises, strategic documents, and in a variety of domains, as explored separately (Patterns of N-foldness: comparison of integrated multi-set concept schemes as forms of presentation, 1980). A wide range of examples was presented in annexes to that exercise (Examples of Integrated, Multi-set Concept Schemes, 1980).

Pantheon dynamics in a globalized semi-secular civilization?

The size of a pantheon (or memeplex) clearly varies. There are obvious preferences for particular sizes, with little explanation justifying the choice. Arguably the size may extend through 20 to 100, although the pantheons of Hinduism allegedly number thousands of deities. There is clearly an unexplored constraint on the number that can be held to be meaningful in experiential terms, especially given constraints on human memory, as separately discussed (Comprehension of Numbers Challenging Global Civilization, 2014). The latter noted a possible upper constraint implied by "Dunbar's number", namely a suggested cognitive limit to the number of people with whom one can maintain stable social relationships (commonly held to be 150). Given the understanding of a pantheon as a set of interrelated stories, it might then be asked how many stories or jokes a raconteur is typically able to recall. At best, what mnemonic aids enable any complex set of memes to be recalled, as highlighted by Frances Yates (The Art of Memory, 1966)?

Especially curious is the extremely limited attention to the relation between whatever distinct meanings are arrayed within any pantheon. This is as evident in the 8-fold, 10-fold, 12-fold, or 20-fold arrays. It is striking that the systemic nature of the pattern of relations between the UN's Sustainable Development Goals is considered of such limited interest -- given their acclaimed fundamental role for a global system in crisis. Little is known about the purported (or assumed) interactions between those goals, although a recent analysis has been published behind a paywall (David Tremblay, et al, Sustainable Development Goal Interactions: an analysis based on the five pillars of the 2030 agenda, Sustainable Development, 28, 2020, 6).
If a pantheon is appropriately understood as a pattern -- potentially indicative of a meta-pattern -- a particular contrast to such systemic negligence is offered by Christopher Alexander's A Pattern Language: towns, buildings, construction (1977). Alexander (and his team) clarified 254 interlinked patterns as providing one such pattern language with that particular focus. Their work was framed by a study of The Timeless Way of Building (1979), as discussed separately (Pattern language: a timeless way of building, 1981). As described there, of relevance to any understanding of a meta-pattern, this noted Alexander's argument. Alexander's focus on building was presented with the suggestion that other pattern languages are indeed possible. As an exploration of that possibility that set of patterns and linkages was "translated" into four other variants of the interlinked pattern of 254 (5-fold Pattern Language, 1984). With respect to any architecture of knowledge or experience, "building" can indeed be understood more generally -- and especially cognitively.

Also of potential relevance are the carefully articulated memeplexes of 64, 72 and 81, which feature in Western and Eastern traditions, with interrelationships most explicit in the Eastern patterns of 64 and 81 (9-fold Magic Square Pattern of Tao Te Ching Insights experimentally associated with the 81 insights of the T'ai Hsüan Ching, 2006). Metaphor is extensively used in the classic Chinese examples to render comprehensible the distinctions and the relationships between them (Transformation Metaphors -- derived experimentally from the Chinese Book of Changes (I Ching) for sustainable dialogue, vision, conferencing, policy, network, community and lifestyle, 1997). The 72-fold distinctions in the Western traditions are controversially embedded in mythological frameworks which undermine their credibility from a conventional perspective -- therefore calling for careful clarification (Variety of System Failures Engendered by Negligent Distinctions: mnemonic clues to 72 modes of viable system failure from a demonic pattern language, 2016; Engaging with Hyperreality through Demonique and Angelique? Mnemonic clues to global governance from mathematical theology and hyperbolic tessellation, 2016).

One obvious reason for preference for patterns of a particular size, and especially in the case of those of a larger size, is the characteristics of those numbers which facilitate memorability. Especially noteworthy are combinations of prime number factors (offering a degree of symmetry) which enable this. Examples include: 12 (as 2^2 x 3), 64 (as 2^6), and 72 (as 2^3 x 3^2). The variety of such patterns is considered separately (Commentary on patterns of N-foldness, 2020), and a small factorization sketch is given below.

It is however remarkable to note the extent to which the relations between the deities of traditional pantheons have figured in the memorable tales which are a feature of myth. There is a considerable degree of irony to the fact that the principal figures of such pantheons in the Western tradition have been appropriated for the iconography of the United Nations Specialized Agencies (Apollo, Ceres, etc). This offers the elusive suggestion that the relations between the functions with which those agencies are associated are implied by the myths of the pantheons with which they were associated.
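The role of "round" factorizations in memorability, noted above, can be made concrete with a small helper (an illustrative sketch only; the claim about memorability is the text's, not the code's):

```python
def prime_factorization(n: int) -> dict[int, int]:
    """Return {prime: exponent} for n, by simple trial division."""
    factors: dict[int, int] = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

for size in (8, 10, 12, 20, 64, 72, 81, 254):
    f = prime_factorization(size)
    pretty = " x ".join(f"{p}^{e}" if e > 1 else f"{p}" for p, e in sorted(f.items()))
    print(f"{size:4d} = {pretty}")
```

The highly composite 12, 64, 72 and 81 contrast with Alexander's 254, which factors only as 2 x 127 and offers no such symmetric subdivision.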
Mathematical theology enabling the quest for a meta-pattern?

Role of number: In the desperate quest for global coherence and harmony, it is strange to note the manner in which number seems to play a central role in seemingly unrelated modes. With respect to the challenges of governance, and to any understanding of the global unity to which reference is so frequently and glibly made, it is then appropriate to ask: how is "unity" to be understood? This is especially the case within a global civilization in which there is also considerable preoccupation with the respect for diversity and its implications for individual and collective identity. Unfortunately it is only too easy to recognize that appeals for unity are simply and naively a disguise for exhortation to agree with "my plan", or "our plan" (Rebekah Koffler, The Words that Undermine Biden's Call for Unity, White House Dossier, 21 January 2021; Exhortation to We the Peoples from the Club of Rome, 2018; Adhering to God's Plan in a Global Society: serious problems framed by the Pope from a transfinite perspective, 2014). It is however remarkable how global consensus has been achieved in response to the pandemic with respect to social distancing (Humanity's Magic Number as 1.5? Dimensionless constant governing civilization and its potential collapse, 2020).

The interrelationship between the distinct modalities of appreciation of number (as listed above) could indeed be explored. The challenge is of course that as distinct modalities they could be held to constitute a pantheon governed by an elusive meta-pattern -- one potentially perceived as alien by each. Those identified with each mode would readily tend to promote their particular relevance to eliciting such a meta-pattern -- to the extent that they recognize the possibility of its existence.

Mathematical theology: Given the primary association of pantheons with religious belief, and despite the secularisation of belief systems, there is a case for exploring the challenge of emergent "unity" (and the nature of any "meta-pattern") through the seemingly improbable discipline of mathematical theology. The possibility is discussed separately in terms of self-reflexive global reframing to enable faith-based governance (Mathematical Theology: future science of confidence in belief, 2011; Bibliography of Relevance to Mathematical Theology, 2011). Relevant commentaries include:

As implied by the above argument, any initiative in quest of a meta-pattern would be expected to engender a pantheon of contrasting modalities and mutually challenging dynamics. Its method, however institutionalised, would indeed be a metaphor of its own preoccupation, as envisaged separately (International Institute of Advanced Studies in Mathematical Theology Enabling Proposal for Faith-based Governance, 2011):
Potential strategic importance of mathematical theology
Reframing mathematical theology in terms of confidence
Imagining the initiative: reframing conventional labels
Institutional and thematic precedents
Organization of the initiative
Examples of research themes for consideration
Integrative thematic organization
Mathematical theology of experience
Comprehension of ignorance, nonsense and craziness
Implication of research on opinion and belief
Symbolic location of the initiative

Self-referential quest? With respect to such a grail-like collective quest for transformative, integrative insight, the initiative might be provocatively enriched by the symbolism of the traditional Sufi tale of The Conference of the Birds (Mantiq al-tair) by Farid al-Din Attar.
In their collective pursuit of that transformative understanding -- a transcendent theory of everything -- each of the 30 birds in that tale has a special significance, and a corresponding didactic fault. In reaching the expected goal -- the land of the mythical Simurgh -- all they see there are each other and their collective reflection in a lake. "Simurgh" actually means "30 birds" in Persian -- potentially to be understood as a dynamic form of pantheon. It might then be asked whether the Sustainable Development Goals of the UN would have been of greater global significance had they taken the form of a 30-fold pattern -- corresponding to the 30-fold articulation of the Universal Declaration of Human Rights.

Pantheon of mathematics? Of peculiar relevance to this argument is the degree to which mathematics can be understood as an extreme form of detachment from personal belief -- in contrast to the preoccupation of theology as the extreme identification with belief. Both extremes pose a challenge with respect to the organization of meaning. Paradoxically, despite vigorous assertions of impersonal objectivity, many mathematical innovations are named after their discoverers -- who have become the icons of that discipline. That seeming contradiction is exemplified in the so-called "folklore" of mathematics by recognition of the Erdős number, namely the "collaborative distance" between mathematician Paul Erdős and another mathematician, as measured by authorship of mathematical papers. The "icons" of mathematics are readily recognized and may even be said to belong to the "pantheon of mathematics" in any non-mathematical description of the psychosocial system of mathematicians -- however irrelevant this may be held to be from a mathematical perspective.

From that perspective it is appropriate to note the existence of an online database named Pantheon. This project uses biographical data to expose patterns of human collective memory. Pantheon has one dataset -- effectively a Pantheon of Mathematicians -- profiling 828 people classified as mathematicians born between 500 BC and 1988. The focus of the dataset is on the geographical associations of the mathematicians (birth, death). Together with the period they were alive, this is the only concern with how they might be considered to be related as a social system. There is no indication of how their mathematical preoccupations might be related in defining mathematics as a system.

Citations as framing an emergent meta-pattern? It might be assumed that the relations between mathematical papers -- through the vast network of citations -- would offer a more systematic understanding of mathematics as a whole. The major difficulty is that there are multiple citation databases with problematic coverage of the literature as a whole (Best citations database, MathOverflow). A major complicating factor for any systemic comprehension is the manner in which ranking of journals and papers is taken into account (Citations of Mathematical Journals). This is most evident with respect to any notion of an impact factor, namely a measure of how many times an academic journal article or book or author is cited by other articles, books or authors -- but potentially biased by the coverage of that citation index and its selection of journals.
Perhaps remarkably, rather than endeavouring to recognize mathematics in systemic terms, the American Mathematical Society frames the practice of mathematics in non-mathematical terms as a "culture" (The Culture of Research and Scholarship in Mathematics: citation and impact in mathematical publications, American Mathematical Society: Committee on the Profession). In that valuable clarification is noted: A scientist's publication record is the basic "statistic" on which promotion, salary and funding decisions are made. In many fields the number of citations to a work, the order of authorship, and impact factor of the journal, are used as proxies for expert evaluation. For a variety of reasons, mathematicians have not embraced the impact factor as a reliable indicator of a journal's quality. Indeed, there are documented cases where unscrupulous editors have dramatically inflated the impact factors of entirely undistinguished journals... Several issues combine to require careful consideration of publication cultures before understanding and using citation statistics in Mathematics... Citations tend to be focused and targeted to specific required results rather than being used as a broad survey of the field.... These citation practices may contribute to the relatively low impact factors of even the most prestigious mathematical journals, as compared to those in other fields.

The degree to which current practice is dissociated from any systemic understanding of mathematics is further clarified by the report of a Joint Committee on Quantitative Assessment of Research from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS): Citation Statistics (2008). The situation is also complicated by differences within subdisciplines of mathematics: Variation of citation counts by subdisciplines within a particular discipline is known but rarely systematically studied. This paper compares citation counts for award-winning mathematicians in different subdisciplines of mathematics.... We find a pattern in which mathematicians working in some subdisciplines have fewer citations than others who won the same award, and this pattern is consistent for all awards. (Lawrence Smolinsky and Aaron Lercher, Citation rates in mathematics: a study of variation by subdiscipline, Scientometrics, 91, 2012)

Further insights are offered by Keith R. Leatham (Observations on Citation Practices in Mathematics Education Research, Journal for Research in Mathematics Education, 46, 2015, 3). One notable factor is the often extreme delays in publication in "high impact" journals compared to the rapidity of publication in other media which may not be covered by citation indexing. Somewhat ironically the coverage by Google Scholar may be deemed more comprehensive than other facilities -- although deprecated as "tainted" by the absence of effective peer review.

Notably missing from a systemic perspective, no distinction is made between citations implying a development of what is cited -- namely supportive of the earlier articulation to some degree -- in contrast with any implication that that articulation is obsolete, misleading, or even dangerously incorrect. Such an omission precludes recognition of how contrasting perspectives might complement each other in enabling the emergence of a more inclusive perspective.
This is especially the case if citations of relevant studies are ignored or omitted for reasons which will prove to be historically questionable. Myths highlight the dynamics of support and opposition between the deities of any pantheon understood in systemic terms.

Complex equations forming "pantheons" of mathematical experience?

Given the sophisticated approach of mathematics to patterns of order, this argument can be developed by considering how an all-connecting "meta-pattern" might be recognized. Could mathematical experience as a whole be fruitfully articulated in some form of "pantheon"? Such questions would follow from much-cited studies of what is indeed referenced by that term (Philip J. Davis and Reuben Hersh, The Mathematical Experience, 1981/1995; The Mathematical Experience, Study Edition, 2012).

Theory of Everything as a meta-pattern? The above argument has focused on the possibility of some form of transcendent meta-pattern. In the realm of physics, a primary focus of mathematics, a Theory of Everything (TOE) is a hypothetical single, all-encompassing, coherent theoretical framework that fully explains and links together all physical aspects of the universe. Finding such a TOE is considered one of the major unsolved problems in physics. String theory and M-theory have been proposed as theories of everything. String theory has a notable feature that requires extra dimensions for mathematical consistency. As currently understood, spacetime is 26-dimensional in bosonic string theory, 10-dimensional in superstring theory, and 11-dimensional in supergravity theory and M-theory.

Of considerable relevance to this argument is the form that such a theory might take, the number of variables required, and the operations through which they would be related. How complex would such a theory need to be to encompass the reality it seeks to embody? To whom would it be comprehensible, and should comprehensibility indeed be a constraint on the formulation of such a theory? The question is highlighted by the most complex form of symmetry discovered by mathematics -- known as the Monster Group, being of order 8 x 10^53 (approximately). The Monster is unusual among simple groups in that there is no known easy way to represent its elements.

However, of particular interest is the assumed restriction of "everything" to what the discipline of physics currently deems relevant -- thereby excluding the problematic dynamics noted separately (Knowledge Processes Neglected by Science: insights from the crisis of science and belief, 2012; Neglected "external" dimensions, 2010). Naively it could be asked whether that discipline would then have any future in the millennia to come -- other than in the provision of "footnotes" to that theory. This would be the case if there was no probability that reality could be understood otherwise (Beyond the Standard Model of Universal Awareness: being not even wrong? 2010; Quest for a "universal constant" of globalization? Questionable insights for the future from physics, 2010). Given the fundamental significance of the Monster Group, its inexplicability would be ironic if it were to be concluded that it was effectively a Theory of Everything. Monstrous moonshine (or moonshine theory) now describes the unexpected connection between the Monster Group and modular functions.
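For concreteness, the order of the Monster group mentioned above is (the standard value from the literature on sporadic groups, quoted here only to justify the figure of roughly 8 x 10^53):

$$|M| = 2^{46}\cdot 3^{20}\cdot 5^{9}\cdot 7^{6}\cdot 11^{2}\cdot 13^{3}\cdot 17\cdot 19\cdot 23\cdot 29\cdot 31\cdot 41\cdot 47\cdot 59\cdot 71 \;\approx\; 8.08\times 10^{53}.$$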
The reference to "moonshine" is an invitation to speculation on the wider implications of any Theory of Everything (Potential Psychosocial Significance of Monstrous Moonshine: an exceptional form of symmetry as a Rosetta stone for cognitive frameworks, 2007). Somewhat intriguing in that respect is the potential correspondence between the 20-fold articulation of the Monster Group and the 20-fold pattern noted above (Memetic Analogue to the 20 Amino Acids as vital to Psychosocial Life? 2015). The Monster Group contains 20 sporadic groups (including itself) as subquotients -- now nicknamed as the "happy family". "Extra dimensions?" The challenge for the future is evident in the meaning to be associated with the so-called extra dimensions required by string theory (Robert Garisto, Curling Up Extra Dimensions in String Theory, Physical Review Focus, 1, 7, 9 April 1998; How can one imagine curled up dimensions? Physics Stack Exchange, 3 April 2012; Would someone please explain the whole "tiny curled up extra dimensions" thing? Reddit; Paul Sutter, How the universe could possibly have more dimensions, Space, 21 February 2020). Clearly any experiential pantheon implies analogous cognitive challenges. One indication of the nature of the cognitive realm neglected to date by physics and mathematics is provided by George Lakoff and Mark Johnson (Philosophy In The Flesh: the embodied mind and its challenge to western thought, 1999) and by George Lakoff and Rafael Núñez (Where Mathematics Comes From: how the embodied mind brings mathematics into being, 2000). The argument is further developed by Mark Johnson (The Meaning of the Body: aesthetics of human understanding, 2007) and Maxine Sheets-Johnstone (The Primacy of Movement, 1999). Provocatively the argument can also be developed in the light of various understandings of psychophysics and sociophysics (Purportedly objective configurations with potentially subjective implications, 2021; Eliciting provocative clues for psychosocial challenges, 2021). Are specifically distinctive cognitive functions to be recognized as experientially associated in some manner with the fundamental equations of mathematical experience, as might emerge from the arguments of Douglas Hofstadter and Emmanuel Sander (Surfaces and Essences: analogy as the fuel and fire of thinking, 2012). Are such possibilities also excluded by current explorations of the "mathematics of mathematics" or "meta-mathematics" (Wolff-Michael Roth, The Mathematics of Mathematics: thinking with the late, Spinozist Vygotsky, 2017; Stephen Cole Kleene, Introduction to Metamathematics, 1952)? Curiously a related argument can be developed with respect to the experience of time (Cognitive Implication of Globality via Temporal Inversion: embodying the future through higher derivatives of time, 2018). Mathematics subject classification? It might be assumed that the focus of mathematics would give rise to an especially sophisticated organization of its subject matter, most obviously in the Mathematics Subject Classification (MSC). Its origins and evolution are reviewed by Craig Fraser (Mathematics in Classification Systems, Encyclopedia of Knowledge Organization, 2019; Mathematics in Library and Review Classification Systems: an historical overview, Knowledge Organization, 47, 2020, 4). This review is introduced by the quotation: The classification of mathematical studies is involved in extraordinary difficulties, and so is the classifying of many mathematical books. 
The relations of the branches are so intricate, so plastic, so recondite, that it is well-nigh impossible to define them or to comprehend them. (Henry E. Bliss, The Organization of Knowledge in Libraries and the Subject-Approach to Books, H.W. Wilson Company, 1935)

The MSC is currently a hierarchical classification scheme, with three levels of structure. This is indeed suggestive of an emergent pantheon as argued above (Dave Rusin, A Gentle Introduction to the Mathematics Subject Classification Scheme, The Mathematical Atlas, 12 May 1999). At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. However, for physics papers the Physics and Astronomy Classification Scheme (PACS) is often used as an alternative. Due to the large overlap between mathematics and physics research, it is quite common to see both PACS and MSC codes on research papers, particularly for multidisciplinary journals and repositories such as the arXiv. The ACM Computing Classification System (CCS) is a similar hierarchical classification scheme for computer science. There is some overlap between the AMS and ACM classification schemes, in subjects related to both mathematics and computer science; however, the two schemes differ in the details of their organization of those topics. The classification scheme used on the arXiv is chosen to reflect the papers submitted. As arXiv is multidisciplinary, its classification scheme does not fit entirely with the MSC, ACM or PACS classification schemes. It is common to see codes from one or more of these schemes on individual papers.

Arguably the obvious challenge is the interrelationship between these systems of organizing math-related themes -- given that each is of potentially fundamental significance to the representation and organization of a singular reality. Somewhat ironically, their questionable relationship is analogous to that of pantheons in other domains, as argued separately (Is the House of Mathematics in Order? Are there vital insights from its design, 2000). In a study of the place of philosophy in modern mathematics, the organization of the MSC of 2010 has been specifically criticized on the ground that it does not reflect the underlying connections that exist between different parts of mathematics (Daniel Parrochia, Mathematics and Philosophy, ISTE/Wiley, 2018). If mathematics is a system, how is that system articulated in systemic terms -- as might be explored in the philosophy of mathematics?

The question raised there was whether this body of knowledge has any structure that emerges from the mathematical insights it endeavours to incorporate. Or, alternatively and in its entirety, is it only to be understood as a tree -- namely one of the simplest structures in mathematical terms -- of some value only to librarians of mathematical institutes? To what extent are such librarians acquiring responsibility for the coherence of the pattern of hyperlinks extending from particular papers, especially to other branches of mathematics through citation indexing? That initial question was later explored separately as: Given the accessibility of relevant techniques, and the degree of familiarity that mathematicians have with them, it might be asked why there is not continuing experimentation with alternative orderings of mathematical subject matter. The contrast with the case of the periodic table of chemical elements is striking (Denis H. Rouvray and R. Bruce King, The Mathematics of the Periodic Table, 2005).
What criteria might be relevant to eliciting more fruitfully meaningful patterns of order from mathematics itself? Why is that possibility of such limited interest to mathematicians, given the significance they attach to their own domain? Is it indeed the case that the architecture of mathematics subject matter as a "pantheon" in its own right has evoked far less interest than that of the Pantheon -- as reviewed by Giangiacomo Martines (The Relationship between Architecture and Mathematics in the Pantheon, Nexus Network Journal, 2, 2000)?

Equations as an implication of fundamental order? In mathematical equations, a variable is a symbol which works as a placeholder for an expression or quantities that may vary or change. It is often used to represent the argument of a function or an arbitrary element of a set. In addition to numbers, variables are commonly used to represent vectors, matrices and functions. If there are fewer equations than variables, the system is called underdetermined. This type of system may have either zero or infinitely many solutions. If there are more equations than variables, then the system is called overdetermined. In general the goal is to solve N equations with N variables -- a determined system -- but there may be more or fewer equations than variables. The presentation of an equation in the form of a theorem gives rise to an extensive List of theorems called fundamental (as provided by Wikipedia). Little effort is seemingly devoted to indicating the relationships between such theorems.

Equations have a curious relationship to any understanding of explanation -- given that the existence of a proven equation may be understood to indicate that a feature of reality has thereby been explained. A difficulty in the psychosocial realm is the complex relationship between equivalence, equality and explanation. The complexities in the construction of an equation are however indicative of the complexities in the form of distinctive cognitive modalities -- as they might be distinguished in a pantheon of experience (and represented by distinctive deities of some kind).

Beautiful equations? The so-called Euler identity (or Euler equation) has been named as the "most beautiful theorem in mathematics" and has tied in a nomination by mathematicians for the "greatest equation ever" (Robert P. Crease, The greatest equations ever, PhysicsWeb, October 2004). It is presented as follows:

Euler identity: e^(iπ) + 1 = 0, where e is Euler's number (the base of natural logarithms), i is the imaginary unit, and π is the ratio of a circle's circumference to its diameter.

As noted by Wikipedia, its mathematical beauty is associated with its use of the three basic arithmetic operations only once each: addition, multiplication, and exponentiation. It also links five fundamental mathematical constants (Five constants tie together multiple branches of mathematics, 2008; Enabling a reconciliation between one and nothing: π and the mysterious Euler identity, 2012). Reflection on the mathematical experience is associated with consideration of what is understood as mathematical beauty -- especially in fundamental equations.
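As a trivial but concrete illustration (added here, not part of the original argument), the identity just presented can be verified numerically to floating-point precision:

```python
# Minimal numerical check of Euler's identity e^(i*pi) + 1 = 0,
# which links the five constants 0, 1, e, i and pi.
import cmath
import math

value = cmath.exp(1j * math.pi) + 1
print(value)                 # a residue of the order of 1e-16, i.e. effectively zero
print(abs(value) < 1e-12)    # True
```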
This follows from the frequently articulated belief of mathematicians in the intimate relationship of mathematics, truth and beauty (Michael Atiyah, Truth, Beauty and Mathematics, The World Academy of Sciences, 22 October 2009; Doris Schattschneider, Beauty and Truth in Mathematics, Mathematics and the Aesthetic, 2006; David Appell, Math = beauty + truth / (really hard), Salon, 5 September 2002; Carlo Cellucci, Mathematical beauty, understanding, and discovery, Foundations of Science, 20, 2015, 4; Clara Moskowitz, Equations Are Art inside a Mathematician's Brain, Scientific American, 4 March 2014). In this light it is therefore relevant to note the various efforts to identify the equations considered most beautiful and/or influential. Techniques of neuroscience have been used experimentally on mathematicians to review a set of 60 mathematical formulas (seemingly not indicated) and to rate these on a scale ranging from minus five (ugly) to plus five (beautiful) (Semir Zeki, et al., The experience of mathematical beauty and its neural correlates, Frontiers in Human Neuroscience, 13 February 2014).

Fundamental equations? Surprisingly the overlap between the (approximately) 10-fold lists above was far less than might be expected -- given the qualifier "most" used by each. Over 40 equations were noted, limited here to 30 by excluding those which appeared to have less in common with the core set. It is important to note that a number of the equations can be variously represented, with the choice made not necessarily consistent with that of others. The purpose of this table is primarily to highlight the contrasting forms which quite distinct fundamental equations may take. No effort has been made to indicate the significance of the variables or operations in each case since that is typically evident (to some) from the hyperlinked commentaries. The point to be stressed is the manner in which these equations are recognized as fundamental to the experience of mathematics. As such they can be understood as constituting the elements of a form of pantheon through which that experience is framed and configured.

30 Fundamental equations as "mathematical deities" of a "pantheon of mathematical experience" (most equations were displayed as images in the original table; those given in text form are retained here):
General relativity; Euler identity; Special relativity; Gaussian integral (normal distribution); Prime-counting function; Euler product formula; Wave equation; Bayes's theorem; Euler-Lagrange equation; Second law of thermodynamics; Fundamental theorem; Fourier transform; Dirac equation; Schrödinger equation; Information entropy; Fibonacci sequence; Newton's second law; Law of gravity; Pythagorean theorem (a^2 + b^2 = c^2); Logarithms and exponents (log xy = log x + log y); Chaos theory (x_{t+1} = k x_t (1 - x_t)); Square root of minus one, the imaginary unit (i^2 = -1); Energy-mass equivalence (E = mc^2); Euler polyhedron characteristic (V - E + F = 2); Minimal surface equation; Black-Scholes equation; Callan-Symanzik equation; Maxwell's equations; Yang-Baxter equation; Navier-Stokes equation.

Pantheons as configurations of arrays of fundamental mathematical significance? Especially intriguing with respect to the "10-fold" lists above is why so many mathematicians focus both on the coherence of checklists of equations and on highlighting such a limited set of equations. Is there no more relevant configuration of the array of equations by which "truth-and-beauty" could be ordered? Is a 10-fold list as good as it gets?
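Two of the entries that survive in text form above lend themselves to a direct check. The following sketch (an added illustration; the vertex, edge and face counts are the standard ones for the polyhedra used in the mappings below) exercises the Euler polyhedron characteristic and the logistic map associated with "chaos theory":

```python
# Euler polyhedron characteristic V - E + F = 2, checked for the polyhedra used in the
# mappings below, and a short orbit of the logistic map x_{t+1} = k * x_t * (1 - x_t).

polyhedra = {                      # name: (vertices, edges, faces)
    "dodecahedron": (20, 30, 12),
    "icosahedron": (12, 30, 20),
    "rhombic triacontahedron": (32, 60, 30),
    "icosidodecahedron": (30, 60, 32),
}
for name, (v, e, f) in polyhedra.items():
    print(f"{name}: V - E + F = {v - e + f}")   # 2 in every case

k, x = 3.9, 0.2                    # k chosen in the chaotic regime
orbit = []
for _ in range(10):
    x = k * x * (1 - x)
    orbit.append(round(x, 4))
print(orbit)                       # an aperiodic-looking sequence
```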
In quest of any comprehension of "mathematics as a system", it is appropriate to note the arguments for the entangled origins of geometry and philosophy (Olivier Keller, Préhistoire de la Géométrie: la gestation d'une science d'après les sources archéologiques et ethnographiques, EHESS, 1996). To those arguments might be added the assertion of Buckminster Fuller that polyhedra are to be understood as systems, with the corollary that all systems may be represented by polyhedra then meriting exploration -- especially in the case of mathematics -- as argued in Fuller's Synergetics: Explorations in the Geometry of Thinking (1975/1979).

As mnemonic aids, some provocative alternatives to checklists are presented experimentally below using polyhedra -- for 12, 20 and 30 equations. Again, no attempt has been made to select or position these to enhance the significance of the experimental mappings. With respect to use of a polyhedron for the 12-fold pattern, this is an intentional shift into 3D -- beyond the conventional tendency to configure those imbued with "god-like" functions at a 2D round table (Clarifying the Unexplored Dynamics of 12-fold Round tables: visualization of patterns of sustainable discourse between 12 systemic archetypes, 2019).

Exploratory mapping of fundamental mathematical equations onto polyhedral faces (arbitrarily selected and positioned): 12 equations on dodecahedral faces; 20 equations on icosahedral faces. Exploratory unfolding of the same mappings from 3D to 2D. Exploratory mapping and animation of 30 fundamental mathematical equations onto polyhedra (arbitrarily selected and positioned): onto the 30 vertices of the rhombic triacontahedron and onto the faces of its dual, with morphing between the rhombic triacontahedron and the dual variant, and unfolding of the rhombic triacontahedron from 3D to a 2D network. Animations developed using Stella: Polyhedron Navigator.

Prime number and curvature implications for global governance? Comprehension of sustainable development: Rather than idle curiosity, the questions evoked by the "fundamental equations" above acquire far greater pertinence through the manner in which the United Nations has engendered a set of Sustainable Development Goals. The set could be understood as the global constitution of a form of secular pantheon in strategic terms. With no explanation as to the systemic nature of that pattern of 16 Goals (with a 17th coordinating Goal), it is the successor to the 8-fold set of Millennium Development Goals. Similar questions could be asked of the latter with regard to the perceived systemic inadequacies which resulted in its replacement.
Intended as they are to change the world (as noted above), it is only a mathematician who could comment on the coherence of a prime number set of 17 goals in changing the world -- in the light of the coherence of a 17-fold set of equations held to have had a similar function, as claimed by Ian Stewart (In Pursuit of the Unknown: 17 equations that changed the world, 2012; Jumping Champions: leaping over the gaps between prime numbers, Scientific American, December 2000).

Wallpaper group: Writing on prime numbers, Stewart's colleague Marcus du Sautoy frames the challenge otherwise (The Music of the Primes, 2003), variously subtitled: Why an Unsolved Problem in Mathematics Matters and Searching to Solve the Greatest Mystery in Mathematics. With a specific mandate to enhance public understanding of science, Marcus du Sautoy has initiated one project, Maths in the City, aiming to highlight the fundamental role that maths plays in society by viewing the urban environment in a mathematical way; another is a BBC Two series, The Code. Both note the unsuspected role of the 17-fold "wallpaper group", as does another study by Ian Stewart (Professor Stewart's Cabinet of Mathematical Curiosities, 2009). Although no such indication is offered, ironically this is seemingly one of the very rare ways in which the 17-fold set of UN Goals might be recognized as coherent (Anna Nelson, et al., 17 Plane Symmetry Groups; Frank A. Farris, Creating Symmetry: the artful mathematics of wallpaper patterns, 2015). Others are variously presented (Prime Curios: 17; Tanya Khovanova, Number Gossip: 17). These include the fact that 17 distinct combinations of regular polygons can be packed around a point (Counting how many regular polygons combinations can form 360 degrees around a point, Math StackExchange, 2019) -- as enumerated in the sketch below. Understood as a tessellation, this is otherwise expressed in terms of the 17 possible ways that a pattern can be used to tile a flat surface around a common single vertex. Used separately, only three of those regular polygons -- the triangle, square and hexagon -- tile the plane on their own, making a total of 3 regular tilings.

The set of 17 wallpaper groups derives from the fact that a graph can be viewed as a polyhedron with faces, edges, and vertices, which can be unfolded to form a possibly infinite set of polygons which tile either the sphere, the plane or the hyperbolic plane. If the Euler characteristic is positive then the graph has an elliptic (spherical) structure; if it is negative it will have a hyperbolic structure; but if it is zero then it has a parabolic structure. When the full set of possible graphs is enumerated it is found that only 17 have Euler characteristic 0, namely the wallpaper groups. As noted by Marcus du Sautoy, the Alhambra palace in Granada contains examples of all 17 patterns. A further lead to any intuited sense of 17-fold coherence in 4 dimensions is offered by the 64 convex uniform 4-polytopes, of which 5 are polyhedral prisms based on the Platonic solids and 13 are polyhedral prisms based on the Archimedean solids. One is however duplicated with the cubic hyperprism (namely a tesseract), reducing the set to 17.

Cognitive implications of tessellation? Of potential interest, in relation to the degree of preference for the coherence of 15-fold strategic articulations, is the recent completion of the list of the 15 types of convex pentagon that tile the plane (Olena Shmahalo, Pentagon Tiling Proof Solves Century-Old Math Problem, Quanta Magazine, 11 July 2017).
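The "17 distinct combinations" claim above can be checked by brute force. The following sketch (an added illustration, not drawn from the cited StackExchange discussion) enumerates every multiset of regular polygons whose interior angles -- 180(n-2)/n degrees for an n-gon -- sum to exactly 360 degrees around a point:

```python
# Enumerate the multisets of regular polygons (by number of sides) that fit exactly
# around a point; exact arithmetic with Fraction avoids floating-point issues.
from fractions import Fraction

def combinations_around_point(max_sides=42):
    """Non-decreasing tuples of polygon side-counts whose interior angles sum to 360."""
    results = []

    def extend(partial, remaining, min_sides):
        if remaining == 0:
            results.append(tuple(partial))
            return
        for n in range(min_sides, max_sides + 1):
            angle = Fraction(180 * (n - 2), n)   # interior angle of a regular n-gon
            if angle > remaining:
                break                            # larger polygons only get wider
            extend(partial + [n], remaining - angle, n)

    extend([], Fraction(360), 3)
    return results

combos = combinations_around_point()
print(len(combos))        # 17
for c in combos:
    print(c)              # e.g. (3, 3, 3, 3, 3, 3), (3, 7, 42), (4, 6, 12), (6, 6, 6), ...
```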
Clustering preferences of other dimensionality are of similar relevance, as variously described elsewhere. If such tiling patterns are indeed a key to comprehending cognitive clustering preferences, this immediately raises the question of whether more appropriate clusters would result from consideration of tilings on a sphere (positive Euler characteristic). Seemingly constrained as it is to planar tilings (zero characteristic), does this suggest that humanity has trapped itself unknowingly in a "flat Earth" strategic perspective rather than exploring "global" or other possibilities (Irresponsible Dependence on a Flat Earth Mentality -- in response to global governance challenges, 2008)?

Negative curvature -- a hyperbolic structure (negative characteristic)? The case for topological complexification in the quest for more fundamental order can be made otherwise in terms of the significance accorded by astrophysicists to recognition of negative curvature and its implications for understanding the shape of the universe, as discussed separately (Eliciting a Universe of Meaning -- within a global information society of fragmenting knowledge and relationships, 2013). Recent research by Stephen Hawking and colleagues (Accelerated Expansion from Negative Lambda, 2012) has shown that the universe may have the same surreal geometry as some of art's most mind-boggling images (Lisa Grossman, Hawking's 'Escher-verse' could be theory of everything, New Scientist, 9 June 2012). This offers a way of reconciling the geometric demands of string theory, a still-hypothetical "theory of everything", with the universe as observed -- through a negatively-curved Escher-like geometry (essentially a hyperbolic space). Arguably, whether discovered by artificial intelligence or otherwise, analogous topological breakthroughs may have significance for connectivity in the ways of knowing, as argued separately in relation to deprecated symbol systems (Engaging with Hyperreality through Demonique and Angelique? Mnemonic clues to global governance from mathematical theology and hyperbolic tessellation, 2016; Quest for a "universal constant" of globalization? Questionable insights for the future from physics, 2010). Might viable global governance require some analogue to negative curvature to render global order coherent?

Sustainable Development Goals and "God's number" of 20? Despite the 17-fold pattern of the UN Goals, it is curious to note the extent to which worldwide enthusiasm for Rubik's Cube has been interpreted in that light (Recognition of Rubik's Cube as a relevant strategic development metaphor, 2017). It is then especially curious from a mathematical perspective, in the light of the 20-fold argument above, that there is considerable focus on the minimal number of moves required to resolve a scrambled Rubik's Cube. As noted by Wikipedia with respect to optimal solutions for Rubik's Cube: There are two common ways to measure the length of a solution to Rubik's Cube. The first is to count the number of quarter turns. The second is to count the number of outer-layer twists, called "face turns". The maximum number of face turns needed to solve any instance of the Rubik's Cube is 20, and the maximum number of quarter turns is 26. These numbers are also the diameters of the corresponding Cayley graphs of the Rubik's Cube group. That diameter is known as "God's Number" (Tomas Rokicki, et al., The Diameter of the Rubik's Cube Group Is Twenty, SIAM Journal on Discrete Mathematics, 2013; God's Number is 20, 14 August 2010).
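For context (an added sketch, not part of the cited material), the group whose Cayley-graph diameter is being measured is itself readily sized; the standard counting argument multiplies corner and edge permutations and orientations, halving the result because the permutation parities are linked:

```python
# Size of the Rubik's Cube group, whose Cayley graph has diameter 20
# ("God's Number") in the face-turn metric.
from math import factorial

corner_permutations = factorial(8)
corner_orientations = 3 ** 7          # the eighth corner's twist is determined
edge_permutations = factorial(12)
edge_orientations = 2 ** 11           # the twelfth edge's flip is determined

positions = (corner_permutations * corner_orientations *
             edge_permutations * edge_orientations) // 2   # linked parities
print(positions)                      # 43252003274489856000, about 4.3 * 10^19
```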
There are many algorithms to solve scrambled Rubik's Cubes. An algorithm that solves a cube in the minimum number of moves is known as God's algorithm. Ironically it is to that capacity that cubing enthusiasts aspire.

Potential relevance of insights of neuroscience? Complementing insights into "polyhedra as systems" are the results of recent neuroscience research, which indicate the remarkable possibility of cognitive processes taking forms of up to 11 dimensions in the light of emergent neuronal connectivity in the human brain:

Using mathematics in a novel way in neuroscience, the Blue Brain Project shows that the brain operates on many dimensions, not just the three dimensions that we are accustomed to. For most people, it is a stretch of the imagination to understand the world in four dimensions but a new study has discovered structures in the brain with up to eleven dimensions - ground-breaking work that is beginning to reveal the brain's deepest architectural secrets... these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object. ... The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner. It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates. (Blue Brain Team Discovers a Multi-Dimensional Universe in Brain Networks, Frontiers Communications in Neuroscience, 12 June 2017) [emphasis added]

Polyhedra suggestive of arrays of requisite variety of pantheons in 3D

As mentioned above, it is curious to note how intangible pantheons of deities in their original sense have been embodied with little question in the architecture of iconic buildings -- but not in knowledge architecture. The experimental animations above are indicative of the possibility of embodying contrasting cognitive modalities in polyhedral arrays -- as a form of knowledge architecture of mnemonic significance (at least). Given the set of symmetrical polyhedra of different degrees of complexity, the simplest polyhedra might then be recognized as suitable for mapping the most fundamental equations (with any "supernatural" functions). Those of lesser import could then be understood as suitable for ordering equations of more secondary function.

Polyhedral sets of "flowers" as a pantheon design metaphor? One design metaphor explored separately derived from polyhedral arrangements of "flowers" (Flowering of Civilization -- Deflowering of Culture: flow as a necessarily complex experiential dynamic, 2014). Following the argument above with regard to polyhedral holding patterns of different complexity, some indicators are offered by extending into three dimensions what might be considered any "2D-flower" pattern, as illustrated by the following images and animations.
Polyhedral arrangement of configurations of elements of a pantheon: schematic of a "4-flower" tetrahedron; schematic of an "8-flower" octahedron; alternative animations of a "12-flower" dodecahedron. Animations above and below developed using Stella: Polyhedron Navigator.

Other examples of use of this metaphor are presented separately (Gallery of Polyhedral Flower Arrangements: engendering sustainable psycho-social systems through metaphor, 2014). Of interest in this mnemonic approach is the representation of the compatibility between the flowers in the "ecosystem" constituted by each case (Arranging the flowers to engender an ecosystem? 2014). This could be indicated by how the directionality of the arrows (clockwise/anti-clockwise, inward/outward) meshes with the neighbouring flowers (or "clashes" with them). A more complex case is offered by the "12-flower" case of the dodecahedron as indicated below. The configuration of 12 "flowers" is consistent with the separate argument developed with regard to the requisite variety of 12-fold patterns of governance (Enabling a 12-fold Pattern of Systemic Dialogue for Governance, 2011; Eliciting a 12-fold Pattern of Generic Operational Insights: recognition of memory constraints on collective strategic comprehension, 2011). In terms of the flower metaphor, what might "gardening knowledge" then imply for a global knowledge-based system (Knowledge Gardening through Music, 2000)? Again the relevance of the song: Where have all the flowers gone? ... Oh, when will they ever learn?

Pantheons as nested configurations? Such an experiment can be extended to other forms of pantheon -- of which the set of Sustainable Development Goals now offers an iconic example. Mathematics has a tendency to distinguish equations which are more or less fundamental-beautiful and enabling of change. It could be asked whether a pantheon might then be understood as nested systems -- nested sub-pantheons -- as implied by the nested "heavens" of some religious traditions. In the case of mathematics there is a certain elegance to exploration of the possibility of nesting levels of the pantheon in nested polyhedra, as presented below. An animation indicative of how connections are activated between disparate parts of a configuration would be useful, especially any distinction between those which were "explicate" (externally on a polyhedron) and the "implicate" (internally across a polyhedron) -- a distinction articulated by David Bohm (Wholeness and the Implicate Order, Routledge, 1980). In forming coherent structures as resonance patterns, this would be suggestive of how collective memories are held, recalled or fade away. The following screen shots are suggestive of the emergence of dominant patterns -- with the animation on the right suggestive of the evanescent nature of any such dominance.
Suggestive of patterns of explicate and implicate coherence nested within a dynamic framework: dominance of the cubic (grey), dodecahedral (blue), icosahedral (red) and tetrahedral (mauve) patterns in a nested configuration of polyhedra, with an animation of the emergence of polyhedral patterns from that nested configuration. Reproduced from Psychosocial Implication in Polyhedral Animations in 3D (2015).

Configuring the 64 disciplines of mathematics as a 64-edged drilled truncated cube

The Mathematics Subject Classification (MSC) at its highest hierarchical level has 64 mathematical disciplines labeled with a unique two-digit number (as noted above). This could indeed be understood as framing the pantheon of mathematical experience. The preference for a pattern of 64 would seem to be as unexplained as that for other checklists, whether more or less fundamental in implication. Perhaps only coincidentally, the 64-fold organization is especially suggestive in mathematical terms of possibilities of experimenting with more appropriate configurations of the disciplines of mathematical experience. The 64-fold pattern is of course fundamental in other respects -- most obviously the 64 hexagrams of the I Ching and the 64 codons of the genetic code (both taken up below). In the quest for polyhedra suitable for mapping a 64-fold set of distinctions, it is therefore somewhat curious to note that the 64-edged drilled truncated cube is unique in enabling such a mapping in 3D (Proof of concept: use of drilled truncated cube as a mapping framework for 64 elements, 2015). Other polyhedra have that characteristic but are either complex compounds of simpler polyhedra or 3D aspects of 4D polytopes -- both posing challenges to their comprehensibility for mapping purposes. The drilled truncated cube can therefore be used in a simple exploration of how the realm of mathematics can be coherently configured in a manner distinct from a checklist of disciplines which does so little to honour the fundamental significance attributed to the mathematical experience. Understood as a pantheon of a particular form, associating the articulated disciplines of mathematics with the features of that form is then helpful in recognizing how other patterns of cognitive modalities of similar complexity could be ordered in this way.

Exploratory mapping of 64 mathematical disciplines onto the 64-edged drilled truncated cube (original discipline names slightly edited to reduce length in order to facilitate mapping): shown with 32 faces rendered transparent, and with only the 4 octagonal faces rendered transparent. Animations developed using Stella: Polyhedron Navigator.

No attempt has been made in this preliminary exercise to position the 64 mathematical disciplines on the polyhedron in a manner which might reflect to a higher degree their relationships. Various visualization techniques could be considered for that purpose, including colour and animation. As noted above with respect to citation links between papers in different disciplines, the framework could be used to explore the connectivity of those disciplines in quest of the nature of a pattern that connects -- potentially in dynamic terms.

Logical implications?
Exploration of meta-mathematics, and of the mathematics of mathematics (as mentioned above), has tended to highlight the role of symbolic logic. In this sense the form of the drilled truncated cube is itself interesting in its resemblance to the structure of the 4D tesseract of significance to the configuration of the sixteen Boolean functions of logic, especially with respect to studies of oppositional geometry -- presumably of relevance to relations between modalities in any pantheon.

Suggestive visual correspondences to configurations of relevance to logical connectivity: the Logic Alphabet Tesseract -- a four-dimensional cube (see coding) -- by Shea Zellweger (diagram by Warren Tschantz, reproduced from the Institute of Figuring); a tesseract animation (by Jason Hise [CC0], via Wikimedia Commons); a topologically faithful 4-statement Venn diagram as the graph of edges of a 4-dimensional cube, as described by Tony Phillips (a vertex is labeled by its coordinates (0 or 1) in the A, B, C and D directions; the 4-cube is drawn as projected into 3-space; edges going off in the 4th dimension are shown in green); and an embedding of the Borromean ring logo of the International Mathematical Union within a drilled truncated cube (see Wolfram Mathematica animation of the logo).

Exploring potential dynamics within a pantheon? Dynamics within a pantheon ordered in 3D as a drilled truncated cube? The point was stressed above that little effort is made to clarify in systemic terms the dynamics within any pantheon. The drilled truncated cube could offer a way of exploring the dynamics within a pantheon of 64-fold complexity. As an exercise to that end, the movement of selected edges between parallel positions offers one design metaphor of mnemonic value, as discussed separately in detail with respect to that form (Decomposition and recomposition of a toroidal polyhedron -- towards vortex stabilization? 2015). This formed part of a discussion of Psychosocial Implication in Polyhedral Animations in 3D: patterns of change suggested by nesting, packing, and transforming symmetrical polyhedra (2015). The following animations developed from that exercise offer contrasting views of what might be understood as the dynamics of a pantheon taking the form of a drilled truncated cube. As a feature of the design choice, the edges switch colour when they reach their parallel position.

Alternative perspectives on the same experimental movement of selected edges of a drilled truncated cube (interactive x3d version).

Dynamics implied by an influential 2D circular configuration: Ironically, far greater consideration of the dynamics of transformative movement within a 64-fold configuration has been given to that between the 64 hexagrams of the I Ching -- usefully recognized as an archetypal pantheon in its own right. The dynamics identified follow from transformations in the systematic encoding of each hexagram which determines the change to an alternative condition. Historically it was this pattern of transformations which was influential in the original insight of Gottfried Leibniz that subsequently gave rise to the binary coding fundamental to modern computing -- as sketched below. In contrast to the more widely known tabular configurations, the circle of Shao Yong (1011-1077), or the I Ching hexagram circle, was an influential feature of the communication to Leibniz in 1701 (James A. Ryan, Leibniz' Binary System and Shao Yong's "Yijing", Philosophy East and West, 46, 1996, 1).
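That correspondence between hexagram encoding and binary numbers is easily made explicit. The following sketch (an added illustration; the yin = 0 / yang = 1 reading and the bottom-up line order are conventions assumed here) generates the 64 hexagrams as 6-bit patterns, under which the Shao Yong sequence amounts to counting from 0 to 63 in binary -- the observation credited with informing Leibniz's account of binary arithmetic:

```python
# The 64 hexagrams as 6-bit patterns: broken line (yin) = 0, unbroken line (yang) = 1.
YIN, YANG = "- -", "---"

def hexagram(n):
    """Render hexagram n (0-63); by convention the bottom line is the least significant bit."""
    bits = [(n >> i) & 1 for i in range(6)]            # bottom line first
    return [YANG if b else YIN for b in bits]

for n in (0, 1, 2, 63):                                # 0 = all yin, 63 = all yang
    top_to_bottom = list(reversed(hexagram(n)))
    print(n, format(n, "06b"), " | ".join(top_to_bottom))
```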
Features of the configuration are discussed separately (Diagram of 384 Relationships between I Ching Hexagrams, 1983; Bagua and the sequence of 64 hexagrams, Shanghai Daily, 20 December 2015). The question here is how to embody more fruitfully the psychosocial dynamics implied by the I Ching encoding patterns. The possibility had been clarified in an earlier study in the light of the alternation between two orientations as shown below centre and right, with commentary adapted to current issues (Alternating between Complementary Conditions -- for sustainable dialogue, vision, conference, policy, network, community and lifestyle, 1983).

Map of transformations encoded by a circle of 64 hexagrams and their relationships: the Shao Yong circle of hexagrams as communicated to Leibniz (1703) (by unknown; Perkins, Franklin, Leibniz and China: a commerce of light, Cambridge UP, 2004, p. 117, Public Domain); map of transformations between global, 'heads-together' networking conditions ('top-in'); and map of transformations between local, 'back-to-back' networking conditions ('top-out'). Reproduced from Alternating between Complementary Conditions (1983).

The internal dynamics, as classically understood, are discussed separately, from which the following images are reproduced (Encompassing the "attraction-harassment" dynamic with a notation of requisite ambiguity? 2017). Such images in 2D are immediately suggestive of projections into polyhedral variants in 3D.

Relational map from a Chinese cultural perspective? Projection of all 64 I Ching relational conditions (hexagrams) onto a circle (use browser facility to view enlarged version for details): original version by Anagarika Govinda (1981), reproduced with his kind permission from The Inner Structure of the I Ching: the Book of Transformations (1981); a version with labels added from Transformation Metaphors -- derived experimentally from the Chinese Book of Changes (I Ching) (1997); and an alternative version with Chinese ideograms (from Transformation Metaphors, 1997) instead of the questionable non-traditional English interpretations.

The question meriting attention is how the coherence of seemingly incommensurable contrasts might be usefully represented with the aid of new technologies. Possibilities are suggested by the following.

Circle of hexagrams surrounded by a circle of codons; examples of the drilled truncated cube of 64 edges as a "pantheon" in 3D, with random attribution of genetic codons and random attribution of hexagram names. Reproduced from Enabling Wisdom Dynamically within Intertwined Tori: requisite resonance in global knowledge architecture (2012).

Ontogeny recapitulating phylogeny? As the systematic organization of life in general -- potentially to be recognized as a pantheon -- systems biology makes use of circular cladograms, dendrograms and phylogenetic trees (as illustrated below).
These techniques could be compared with those used by comparative mythology in the organization of mythomemes. However it does not appear that efforts have been made to explore the possibility of such organization of knowledge in 3D. With respect to the continuing controversy with regard to recapitulation theory, it is noteworthy that its potential relevance to cognitive development is of ongoing interest. From that perspective it could be asked whether the articulation of a pantheon follows some such pattern. From a general systems perspective, one possibility that can be explored is the potential correspondence between fundamental biological processes and globalization (Engendering Invagination and Gastrulation of Globalization: reconstructive insights from the sciences and the humanities, 2010). This includes discussion of:
Isomorphism of globalization and embryogenesis: summary
Invagination as a postmodern "quagmire": methodological preamble
Invagination in psychosocial terms: understandings from web resources
Morphogenesis of globalization: enabling topological transformation
Enactivating "gastrulation" of "globalization"
Engendering holistic integration: Borromean knots and Klein bottles?
From global to helicoidal -- "Shells" of globality

Cognitive bias and "Death Star" pantheons? A related diagrammatic approach has been used in the remarkable organization of 180 cognitive biases in the circular articulation of the Cognitive Bias Codex (Terry Heick, The Cognitive Bias Codex: a visual of 180+ cognitive biases, TeachThought, 3 July 2019).

Phylogenetic tree: highly resolved, automatically generated tree of life, based on completely sequenced genomes (Ivica Letunic; retraced by Mariana Ruiz Villarreal, Public domain, via Wikimedia Commons). Cognitive Bias Codex: design by John Manoogian III; categories and descriptions by Buster Benson (by Jm3 [CC BY-SA 4.0], from Wikimedia Commons; see large scale version).

Arguably it is indeed cognitive biases which are central to comprehension of the global problematique at this time -- and to effective engagement with it. Just as the religious pantheons may refer to the organization of "demons" in "hells" (in addition to the organization of "deities" in "heavens"), there is a case for exploring the set of such biases as a form of demonic pantheon in cognitive terms (Variety of System Failures Engendered by Negligent Distinctions: mnemonic clues to 72 modes of viable system failure from a demonic pattern language, 2016). However, rather than a representation in 2D (as above), it is appropriate to ask whether greater insight could be achieved by the organization of biases in 3D.

Configuration of future blindness biases in 3D: The argument can be developed in terms of the cognitive biases integral to current global institutions (Group of 7 Dwarfs: Future-blind and Warning-deaf: self-righteous immoral imperative enabling future human sacrifice, 2018). This is especially the case in the light of the envisaged Global Reset (Justin Haskins, Introducing the 'Great Reset': world leaders' radical plan to transform the economy, The Hill, 25 June 2020; Klaus Schwab: 'Great Reset' Will Lead to Transhumanism, New World Order Report, 17 November 2020).
The question is the susceptibility of such initiatives to some form of cognitive bias, as may be argued with respect to their collective representation as a "negative pantheon" on polyhedra (Global configuration of cognitive biases: towards mapping G7 susceptibility, 2018). The failure to learn from history, and the assumption of lack of bias, take little account of the resultant unintegrative conflict, as concluded by Nicholas Rescher:

For centuries, most philosophers who have reflected on the matter have been intimidated by the strife of systems. But the time has come to put this behind us -- not the strife, that is, which is ineliminable, but the felt need to somehow end it rather than simply accept it and take it in stride. To reemphasize the salient point: it would be bizarre to think that philosophy is not of value because philosophical positions are bound to reflect the particular values we hold. (The Strife of Systems: an essay on the grounds and implications of philosophical diversity, 1985)

In a quest for insight into "future blindness" it is somewhat extraordinary to note the far greater proportion of references to the "future of blindness" and to "blindness in the future" -- especially given the eventual possibility of enabling the blind to see (Future blindness and the deaf effect as cognitive biases, 2018). Especially interesting therefore is the checklist of 30 forms of future blindness by Morne Mostert (Future Blindness: an index of bias for leaders, University of Stellenbosch, 15 October 2015) and the thesis of Arno Nuijten (Deaf Effect for Risk Warnings: a causal examination applied to Information Systems Projects, Erasmus University Repository, 2012). The set of 30 "future blindness biases" selected by Mostert can be usefully configured in 3D. The rhombic triacontahedron on the right is the dual of the icosidodecahedron in the images on the left.

Animations of tentative mapping of 30 future blindness biases onto polyhedra: onto the vertices of the icosidodecahedron (with triangular faces transparent, and with pentagonal faces transparent), and onto the faces of the rhombic triacontahedron. Animations prepared using Stella Polyhedron Navigator.

Configuration of 180 cognitive biases in 3D: The larger set of cognitive biases can be tentatively configured as follows, necessarily raising the question of how they may be clustered and interrelated in any such mapping.

Animation of tentative mapping of the 180 biases from the Cognitive Bias Codex onto the vertices of the truncated truncated icosahedron (a choice of polyhedron checked in the sketch below), and of the Codex bias clusters onto the 20 faces of the icosahedron. Animations prepared using Stella Polyhedron Navigator.

In popular imagination such configurations could be readily recognizable as corresponding to the design of a Death Star -- the fictional mobile space station and galactic superweapon featured in the Star Wars space-opera franchise. Indeed it is that imagination which currently gives attention in online gaming to the "war of the pantheons" and to the "dimensions of chaos".
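The truncated truncated icosahedron used for the 180-fold mapping above offers exactly 180 vertices, a count which can be checked combinatorially. The following sketch (an added illustration, assuming the standard combinatorial effect of truncation, in which every vertex is replaced by a small face) applies truncation twice to the icosahedron:

```python
# Combinatorial effect of truncating a polyhedron: V' = 2E, E' = 3E, F' = F + V.
def truncate(v, e, f):
    return 2 * e, 3 * e, f + v

icosahedron = (12, 30, 20)
truncated = truncate(*icosahedron)            # (60, 90, 32): the truncated icosahedron
twice_truncated = truncate(*truncated)        # (180, 270, 92)

for name, (v, e, f) in [("icosahedron", icosahedron),
                        ("truncated icosahedron", truncated),
                        ("truncated truncated icosahedron", twice_truncated)]:
    print(f"{name}: V={v}, E={e}, F={f}, V-E+F={v - e + f}")
# The doubly truncated form offers exactly 180 vertices -- one per bias in the Codex.
```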
Just as pantheons have inspired architectural forms, it is of some relevance to recall the Nazi ambitions in relation to Wewelsburg Castle -- used by the SS from 1934 under Heinrich Himmler as a complex to serve as the central SS cult-site and as a so-called "Centre of the World". Both popular imagination and global leadership now anticipate space warfare in quest of full-spectrum dominance of physical space. Curiously, given the religious interpretation of pantheon, this quest is matched by a less evident quest for "full-spectrum dominance of spiritual space". In the case of evangelical Christianity, dominionism is a primary driving force with major political implications (Brian Morris, Dominionism – nothing to see here? Australian Independent Media, 16 April 2021; Katherine Yurica, Conquering by Stealth and Deception: how the dominionists are succeeding in their quest for national control and world power, Rosamond Press, 14 September 2004). This is however consistent with the Great Commission, with its "marching orders for Christians", as "a comprehensive task that aims at developing a worldwide Christian civilization and culture" -- understood to be one of the most significant directives in the Bible (Matthew 28:16-20). Corresponding agendas of mutual dominance may be recognized in the other Abrahamic religions (Chris Farrell, Civilization Jihad: Islam's "Great Commission", Scribd, 2014).

It is therefore appropriate to anticipate the design of cognitive counterparts to any "Death Star" in the drama -- of which configurations of human values are indicative, as argued separately (Dynamic Exploration of Value Configurations: polyhedral animation of conventional value frameworks, 2008).

Polyhedral representation of value configurations -- a challenge to integrative imagination: screen shots of stages in the transformation of the geometry of sets of values -- the 18 Articles of the European Convention on Human Rights displayed on 2 face-types of a rhombicuboctahedron; the 30 Articles of the Universal Declaration of Human Rights displayed on 1 face-type of a rhombicosidodecahedron; and the 53 Articles of the Arab Charter on Human Rights displayed on 2 face-types of a rhombicosidodecahedron. Prepared using Stella Polyhedron Navigator.

As the hypothetical confluence of three Abrahamic pantheons, any integrative comprehension of Jerusalem usefully frames the challenge of the insights with which its dimensions might be fruitfully ordered (Jerusalem as a Symbolic Singularity: comprehending the dynamics of hyperreality as a challenge to conventional two-state reality, 2017). Some "navigational implications" are explored separately (Hyperspace Clues to the Psychology of the Pattern that Connects, 2003). The navigation metaphor can be notably explored in the light of progressive insight into the so-called Pentagramma Mirificum as a spherical polyhedron (Global Psychosocial Implication in the Pentagramma Mirificum: clues from spherical geometry to "getting around" and circumnavigating imaginatively, 2015; Beyond dispute in 5-dimensional space: Pentagramma Mirificum? 2015). Further clues to the possibility of such navigation are considered separately (Time for Provocative Mnemonic Aids to Systemic Connectivity? 2018):
Roman dodecahedron, Chinese puzzle balls and Rubik's Cube?
Interweaving disparate insights?
Inversion of the cube and related forms: configuring discourse otherwise?
Dynamics of discord anticipating the dynamics of concord
Associating significance with a dodecahedron
Increasing the dimensionality of the archetypal Round Table?
Necessity of encompassing a "hole" -- with a dodecameral mind?

Design criteria? Imagination regarding the architectural embodiment of any pantheon has focused primarily on the iconic buildings of that name -- and on archetypal Round Tables around which the deities might be configured. The animations above explore the possibility of extensions into higher dimensional configurations. Possible configurations can be understood more symbolically and allusively, as in the case of the "philosopher's stone" or some other device for interweaving disparate insights. Especially intriguing are the design criteria to enable dynamics vital to the integrity of any such pantheon or meta-pattern (Brian Grimmer, The Meta Model: Beyond A Theory of Everything? 1 August 2014; Martha Senger, The Iconic Revolution). As argued separately:

Criteria for a Rosetta stone as a meta-model? There is a case for repeatedly challenging any elaboration of a Rosetta stone (or a Philosopher's stone) -- or a Theory of Everything -- on the basis of criteria which can already be recognized, and in the light of criteria which may be of relevance. (Insights into Dynamics of any Psychosocial Rosetta Stone, 2018)

Missing from the objectivity by which the quest for such a design is framed are the paradoxical considerations evoked by Douglas Hofstadter (I Am a Strange Loop, 2007) in a sequel to his seminal study (Gödel, Escher, Bach: an Eternal Golden Braid, 1979).

"The O-ring"? In the spirit of the unexpected correspondence explored in moonshine mathematics (noted above), any such paradox invites speculation on the "ring" ironically shared by theology and theorem -- and their philosophical complement (The-O ring: Theory, Theorem, Theology, Theosophy? a playful intercultural quest for fruitful complementarity, 2014). As argued there, curiously the prefix "theo" is effectively central to one of the most divisive debates in the current global civilization, namely that between science and religion. On a mathematical blog an obvious question was asked: Is there some connection between the etymology of "theorem" and words like "theology" or "theist"? (Michael Lugo, Etymology of "theorem", God Plays Dice, 23 November 2008). Some respondents asserted that they are not related, as for Eugene van der Pijll:

There are two different Proto-Indo-European roots here: dheie-, to look, watch, and dhes-, holy, divine. The first evolved into Greek theaomai, "to watch", thea, "spectacle", and theatron, "theater". Together with orao, "to look": thea-oros > theoros, "spectacle watcher"; and theorema, "performance", theoria, "attendance at a spectacle". The other became thesos > theos, god, and thea, goddess. So theorem and theory are related to theater, but not to god.

It might be similarly asserted that "waves" and "particles" are not related -- except from the perspective of quantum mechanics. Appropriate to this playful argument however, it took the perspective of a playful theoretical physicist, Richard Feynman, to show dramatically (to a government committee of inquiry) the vulnerability of the O-ring -- under certain conditions -- as an explanation for the traumatic US Challenger Space Shuttle disaster in 1986.
As a piece of theatre in its own right, and given the etymological argument, that presentation suggests a further extension of this speculation (The-O Ring and The Bull Ring as Spectacular Archetypes: dramatic correlation of theatre, theory, theorem, theology, and theosophy, 2014). If an "O-ring" is indeed emblematic of the pattern that connects -- and of a meta-pattern -- the question is then how it embodies the "strange loop" with which Hofstadter identifies. Is it indicative of the form of a pantheon -- if only as the organizing metaphor for Hofstadter's own identity?

Marrying incommensurables? The earlier speculation concluded with a discussion of Nonsense commensurate with dysfunctionalities of "theo" variants (2014). This quoted the renowned poem, The Owl and the Pussy-Cat (1871) by Edward Lear, known for his various works of nonsense. Curiously it offers a degree of memorable coherence to the improbable relationship between incommensurables fundamental to human society -- more recently expressed as that between the "headless hearts" and the "heartless heads" (Challenge of the "headless hearts" to the "heartless heads"? 2018). Aside from the more obvious sexual connotations, one remarkable no-nonsense commentary on the possible hidden significance of the poem is offered by David Cowles (Owl and Pussycat, Aletheia, 26 March 2014). With respect to the famous ring by which the owl and the cat were finally "married" (obtained from the end of the "Piggy-wig's nose"), Cowles speculates on its deeper significance.

That possibility might be compared with that of one archetypal ring, namely the Ouroboros as the tail-eating snake (or dragon) -- itself a potential candidate for the form of a pantheon. How might the paradoxical "cognitive twist" of the Möbius strip be understood as embodied in the Ouroboros? Some explorations to that end are presented separately (Complementary visual patterns: Ouroboros, Möbius strip, Klein bottle; experimental animations in 3D of the ouroboros pattern, 2017; Enantiodromia: cycling through the 'cognitive twist', 2007). Reference to the "nose ring" of the "Piggy-wig" usefully recalls the function of such a ring in leading domesticated animals -- adapted to the sense of humans being "led by the nose". The reference is especially appropriate in a period in which there is every suspicion that global civilization is being deliberately "dumbed down" via the media and the education system (John Taylor Gatto, Dumbing Us Down: the hidden curriculum of compulsory schooling, 2005; Ivo Mosley, Dumbing Down: culture, politics, and the mass media, 2000; Andrea Halewood, The Silencing of the Lambs, OffGuardian, 16 April 2021; Rosemary Frei, What's up with our fact-checking blind spots? OffGuardian, 10 April 2021). Is the O-ring indicative of a sense in which humanity is being "led by the nose" -- in a manner yet to be recognized?

Singular ring? Speculation can be taken further by addressing the assumption that any ultimate meta-pattern or pantheon is necessarily singular. In the mythopoetic formulation of The Lord of the Rings (1954), of such popular appeal, a singular ring is described in the concluding lines of a poem regarding the 20 "rings of power":

One Ring to rule them all, One Ring to find them,
One Ring to bring them all and in the darkness bind them
In the Land of Mordor where the Shadows lie.
As discussed separately, that ring might well be understood as providing the bearer with the most repressive forms of control over social change and development (The "Dark Riders" of Social Change: a challenge for any Fellowship of the Ring, 2002). Tolkien presents the challenge of the One Ring in terms of the necessity for its destruction to safeguard humanity and the planet. Rather than "destruction", it could be usefully argued that it is the cognitive "deconstruction" of the ring that is the challenge. Unrecognizably embedded within theorem, theology and theosophy, the singular O-ring -- as a "nose ring" -- could indeed be understood as instrumental to "in the darkness bind them". Given their cognitive incommensurability, however, more intriguing is a sense in which the ring is strangely 3-fold, as 3 rings "mystically" intertwined.
Borromean rings? A highly suggestive possibility is offered by the paradoxical 3-ring Borromean ring configuration, of which the 3D variant is presented above as the emblematic logo of the International Mathematical Union. Of particular relevance is the manner in which the 3 rings are mutually orthogonal, as discussed separately (Borromean challenge to comprehension of any trinity? 2018). They then exemplify the challenge of comprehending "unity" from any singular perspective -- and the misleading cognitive closure which may then result. That challenge can be explored through the problematic configuration in 3D of symbols of the Abrahamic religions (Mutually orthogonal Abrahamic symbols from the perspective of projective geometry, 2017). There one or both of the other symbols may be "invisible" from certain perspectives and confusing from other perspectives. Other explorations of global comprehension as a mistaken quest for closure are reproduced below from earlier exercises using Möbius strips in a Borromean ring configuration (Engaging with Elusive Connectivity and Coherence, 2018; Towards a higher order of coherent global strategic organization? 2018; Confusion in Exchanging "Something" for "Nothing", 2015; Encoding meaningful psychosocial complexity otherwise, 2018). In the image on the left, Borromean rings are used to indicate the interlocking of the 3-part Club of Rome report (Come On! Capitalism, Short-termism, Population and the Destruction of the Planet, 2018). The animation on the right uses 3 mutually orthogonal tori with a 3-loop helix moving over each of them.
[Figure captions: Experimental use of three mutually orthogonal rings (3-part strategy) -- Borromean rings used to indicate the interlocking of the 3-part Club of Rome Come On! report; Use of Möbius strips towards a Borromean ring configuration -- Borromean rings formed by 3 orthogonal Möbius strips, with animations of mutually orthogonal Möbius strips and of a cube with 3 Möbius strips, reproduced from the critique (2018); Tori and 3-loop helix -- animation of 3 mutually orthogonal tori with a 3-loop helix moving over each. Interactive versions (x3d, wrl) and videos (mp4) are available.]
The challenge to comprehension is delightfully clarified and illustrated in an extensive analysis of how Dante Alighieri describes the three rings (tre giri) of the Holy Trinity in Paradiso 33 of the Divine Comedy (Arielle Saiber and Aba Mbirika, The Three Giri of Paradiso XXXIII, Dante Studies, 131, 2013, pp. 237-272). As the authors summarize: ... we analyze one particularly suggestive arrangement for the giri: that of three intertwined circles in a triangular format.
Of the many permutations of this figure, we isolate two variations -- a Brunnian link commonly called the Borromean rings and a (3,3)-torus link -- to show how they more than any other possible arrangement offer unique mathematical, aesthetic, and metaphoric properties that resonate with many of the qualities of the Trinity Dante allusively described in Paradiso 33. We propose these as a possible configuration, rich with mystery in themselves, out of a number of Trinitarian models that Dante knew and contemplated. (p. 239)
References
Christopher Alexander:
Sander Bais. The Equations: Icons of Knowledge. Harvard University Press, 2005
Gregory Bateson. Mind and Nature: a necessary unity. Hampton Press, 1979
Gregory Chaitin. Metamaths: the quest for omega. Atlantic Books, 2005
Rich Cochrane. The Secret Life of Equations: the 50 greatest equations and how they work. Firefly Books, 2016
Philip Davis and Reuben Hersh. The Mathematical Experience. Birkhäuser, 1981
Philip Davis, Reuben Hersh and Elena Anne Marchisotto. The Mathematical Experience, Study Edition. Birkhäuser, 2012
John Derbyshire. Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics. Plume, 2004
Marcus du Sautoy:
Graham Farmelo. It Must Be Beautiful: great equations of modern science. Granta Books, 2003
Frank A. Farris. Creating Symmetry: the artful mathematics of wallpaper patterns. Princeton University Press, 2015
Craig Fraser:
R. Buckminster Fuller with E. J. Applewhite:
John Taylor Gatto. Dumbing Us Down: the hidden curriculum of compulsory schooling. New Society Publishers, 2005
Jonathan Haidt. The Righteous Mind: why good people are divided by politics and religion. Vintage, 2013 [summary]
Douglas Hofstadter:
Douglas Hofstadter and Emmanuel Sander. Surfaces and Essences: analogy as the fuel and fire of thinking. Basic Books, 2012 [summary]
Mark Johnson. The Meaning of the Body: aesthetics of human understanding. University of Chicago Press, 2007
Stephen Cole Kleene. Introduction to Metamathematics. 1952 [contents]
Thomas Kuhn. The Structure of Scientific Revolutions. University of Chicago Press, 1962 [summary]
George Lakoff and Mark Johnson. Philosophy In The Flesh: the embodied mind and its challenge to western thought. Basic Books, 1999
Ernest G. McClain. Myth of Invariance: the origins of the gods, mathematics and music from the Rg Veda to Plato. Nicolas-Hays, 1976
Arthur I. Miller:
Ivo Mosley (Ed.). Dumbing Down: culture, politics, and the mass media. Imprint Academic, 2000
Thamer Naouech. Prime Numbers: the Holy Grail of mathematics. Independently published, 2020
Diarmuid O'Murchu. Quantum Theology: spiritual implications of the new physics. Crossroad Publishing Company, 2004
Daniel Parrochia. Mathematics and Philosophy. ISTE/Wiley, 2018 [contents]
John Polkinghorne. Quantum Physics and Theology: an unexpected kinship. Yale University Press, 2008
Helena Rasiowa and Roman Sikorski. The Mathematics of Metamathematics. Polish Scientific Publishers, 1970
Wolff-Michael Roth:
Denis H. Rouvray and R. Bruce King (Eds.):
Maxine Sheets-Johnstone:
Ian Stewart:
Marie-Louise von Franz. Number and Time: reflections leading towards a unification of psychology and physics. Rider, 1974
Sarah Voss. What Number Is God? Metaphors, Metaphysics, Metamathematics, and the Nature of Things. State University of New York Press, 1995 [review]
Michael Witzel. The Origins of the World's Mythologies. Oxford University Press, 2010
Frances Yates. The Art of Memory. Routledge, 1966
High-performance and GPU computing
GPU cluster
Fully exploiting modern computer architectures (such as GPUs) often requires purpose-built algorithms. For development and numerical simulation we maintain a local GPU cluster with 31 GPUs (more details on using the system can be found here; only accessible from within the network of the University of Innsbruck). We have accelerated a range of computer simulations using GPU computing: for example, Vlasov simulations in plasma physics, the Schrödinger equation in quantum mechanics, fluid flow over airfoils, and sonic boom propagation. We are further interested in exploiting specific features of modern computer hardware in order to accelerate computer simulations. For example, the improved performance of single-precision computation on GPUs (especially on the cheaper consumer cards) can be exploited by mixed-precision algorithms.
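To illustrate the mixed-precision idea, the sketch below is a minimal example, not code from the solvers mentioned above; the test matrix, the use of NumPy/SciPy on the CPU, and the fixed number of refinement steps are assumptions for illustration. It follows the classic iterative-refinement pattern: the expensive factorization is done in single precision, while residuals and corrections are accumulated in double precision. On a GPU the same pattern pushes the bulk of the arithmetic onto the fast FP32 units while keeping close to double-precision accuracy in the result.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominant test matrix (assumption)
b = rng.standard_normal(n)

# Expensive step in single precision (on a GPU this would run on the FP32 units).
lu32, piv = lu_factor(A.astype(np.float32))

# Initial solve, then a few cheap refinement steps with double-precision residuals.
x = lu_solve((lu32, piv), b.astype(np.float32)).astype(np.float64)
for _ in range(5):
    r = b - A @ x                                  # residual in double precision
    x += lu_solve((lu32, piv), r.astype(np.float32)).astype(np.float64)

print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))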
Mathematical Physics 1708 Submissions [16] viXra:1708.0445 [pdf] replaced on 2019-04-12 06:04:26 How Many Points Are there in a Line Segment? – a New Answer from Discrete-Cellular Space Viewpoint Authors: Victor Christianto, Florentin Smarandache Comments: 12 Pages. This paper has been published by Octogon Mathematical Magazine, 2018. Your comments are welcome. While it is known that Euclid’s five axioms include a proposition that a line consists of at least two points, modern geometry consistently avoids any discussion of the precise definition of point, line, etc. It is our aim to clarify one of the notorious questions in Euclidean geometry: how many points are there in a line segment? – from the discrete-cellular space (DCS) viewpoint. In retrospect, it may offer an alternative approach to quantum gravity, i.e. by exploring discrete gravitational theories. To elucidate our propositions, in the last section we will discuss some implications of the discrete cellular-space model in several areas of interest: (a) cell biology, (b) cellular computing, (c) Maxwell equations, (d) low energy fusion, and (e) cosmology modelling. Category: Mathematical Physics [15] viXra:1708.0424 [pdf] submitted on 2017-08-29 02:39:21 A Covariant Ricci Flow Authors: Vu B Ho Comments: 5 Pages. In this work, we discuss the possibility of formulating a covariant Ricci flow so that it satisfies the principle of relativity and therefore can be applied to all coordinate systems defined on a Riemannian manifold. Since the investigation may be considered to be in the domain of pure mathematics, which is outside our field of physical investigations, there may therefore be errors in mathematical arguments that we are unable to foresee. Category: Mathematical Physics [14] viXra:1708.0406 [pdf] submitted on 2017-08-28 04:39:26 Nouvelle Ecriture des Equations du Problème de n Corps (A New Formulation of the Equations of the n-Body Problem) Authors: Abdelmajid Ben Hadj Salem Comments: 6 Pages. In French. From the equations of the $n$-body problem, we consider that $t$ is a function of the variables $(x_k,y_k,z_k)_{k=1,n}$. We write a new formulation of the equations of the $n$-body problem. Category: Mathematical Physics [13] viXra:1708.0328 [pdf] replaced on 2017-10-29 11:13:19 Using a Quotient Polynomial to Probe the Solvability of Polynomial Potentials in One-Dimensional Quantum Mechanics Authors: Spiros Konstantogiannis Comments: 17 Pages. Making use of the Bethe ansatz, we introduce a quotient polynomial and we show that the presence of intermediate terms in it, i.e. terms other than the constant and the leading one, constitutes a non-solvability condition of the respective potential. In this context, both the exact solvability of the quantum harmonic oscillator and the quasi-exact solvability of the sextic anharmonic oscillator stem naturally from the quotient polynomial, as in the first case, it is an energy-dependent constant, while in the second case, it is a second-degree binomial with no linear term. In all other cases, the quotient polynomial has at least one intermediate term, the presence of which makes the respective potential non-solvable. Category: Mathematical Physics [12] viXra:1708.0254 [pdf] replaced on 2019-01-25 14:11:13 Double Conformal Space-Time Algebra for General Quadric Surfaces in Space-Time Authors: Robert B. Easter Comments: 27 pages. Extended paper, extending the 10-page conference paper Double Conformal Space-Time Algebra (ICNPAA 2016; DOI:10.1063/1.4972658).
The G(4,8) Double Conformal Space-Time Algebra (DCSTA) is a high-dimensional 12D Geometric Algebra that extends the concepts introduced with the G(8,2) Double Conformal / Darboux Cyclide Geometric Algebra (DCGA) with entities for Darboux cyclides (incl. parabolic and Dupin cyclides, general quadrics, and ring torus) in spacetime with a new boost operator. The base algebra in which spacetime geometry is modeled is the G(1,3) Space-Time Algebra (STA). Two G(2,4) Conformal Space-Time subalgebras (CSTA) provide spacetime entities for points, hypercones, hyperplanes, hyperpseudospheres (and their intersections) and a complete set of versors for their spacetime transformations that includes rotation, translation, isotropic dilation, hyperbolic rotation (boost), planar reflection, and (pseudo)spherical inversion. G(4,8) DCSTA is a doubling product of two orthogonal G(2,4) CSTA subalgebras that inherits doubled CSTA entities and versors from CSTA and adds new 2-vector entities for general (pseudo)quadrics and Darboux (pseudo)cyclides in spacetime that are also transformed by the doubled versors. The "pseudo" surface entities are spacetime surface entities that use the time axis as a pseudospatial dimension. The (pseudo)cyclides are the inversions of (pseudo)quadrics in hyperpseudospheres. An operation for the directed non-uniform scaling (anisotropic dilation) of the 2-vector general quadric entities is defined using the boost operator and a spatial projection. Quadric surface entities can be boosted into moving surfaces with constant velocities that display the Thomas-Wigner rotation and length contraction of special relativity. DCSTA is an algebra for computing with general quadrics and their inversive geometry in spacetime. For applications or testing, G(4,8) DCSTA can be computed using various software packages, such as the symbolic computer algebra system SymPy with the GAlgebra module. Category: Mathematical Physics [11] viXra:1708.0226 [pdf] replaced on 2018-04-20 05:33:43 On the Principle of Least Action Authors: Vu B Ho Comments: 6 Pages. Investigations into the nature of the principle of least action have shown that there is an intrinsic relationship between geometrical and topological methods and the variational principle in classical mechanics. In this work, we follow and extend this kind of mathematical analysis into the domain of quantum mechanics. First, we show that the identification of the momentum of a quantum particle with the de Broglie wavelength in 2-dimensional space would lead to an interesting feature; namely the action principle δS=0 would be satisfied not only by the stationary path, corresponding to the classical motion, but also by any path. Thereupon the Bohr quantum condition possesses a topological character in the sense that the principal quantum number n is identified with the winding number, which is used to represent the fundamental group of paths. We extend our discussions into 3-dimensional space and show that the charge of a particle also possesses a topological character and is quantised and classified by the homotopy group of closed surfaces. We then discuss the possibility to extend our discussions into spaces with higher dimensions and show that there exist physical quantities that can be quantised by the higher homotopy groups. 
Finally we note that if Einstein’s field equations of general relativity are derived from Hilbert’s action through the principle of least action then for the case of n=2 the field equations are satisfied by any metric if the energy-momentum tensor is identified with the metric tensor, similar to the case when the momentum of a particle is identified with the curvature of the particle’s path. Category: Mathematical Physics [10] viXra:1708.0198 [pdf] submitted on 2017-08-17 01:42:13 A Temporal Dynamics: a Generalised Newtonian and Wave Mechanics Authors: Vu B Ho Comments: 24 Pages. In this work we discuss the possibility of reconciling quantum mechanics with classical mechanics by formulating a temporal dynamics, which is a dynamics caused by the rate of change of time with respect to distance. First, we show that a temporal dynamics can be derived from the time dilation formula in Einstein’s theory of special relativity. Then we show that a short-lived time-dependent force derived from a dynamical equation that is obtained from the temporal dynamics in a 1-dimensional temporal manifold can be used to describe Bohr’s postulates of quantum radiation and quantum transition between stable orbits in terms of classical dynamics and differential geometry. We extend our discussions on formulating a temporal dynamics to a 3-dimensional temporal manifold. With this generalisation we are able to demonstrate that a sub-quantum dynamics is a classical dynamics. Category: Mathematical Physics [9] viXra:1708.0197 [pdf] submitted on 2017-08-17 01:50:44 On the Stationary Orbits of a Hydrogen-Like Atom Authors: Vu B Ho Comments: 10 Pages. In this work we discuss the possibility of combining the Coulomb potential with the Yukawa’s potential to form a mixed potential and then investigate whether this combination can be used to explain why the electron does not radiate when it manifests in the form of circular motions around the nucleus. We show that the mixed Coulomb-Yukawa potential can yield stationary orbits with zero net force, therefore if the electron moves around the nucleus in these orbits it will not radiate according to classical electrodynamics. We also show that in these stationary orbits, the kinetic energy of the electron is converted into potential energy, therefore the radiation process of a hydrogen-like atom does not related to the transition of the electron as a classical particle between the energy levels. The radial distribution functions of the wave equation determine the energy density rather than the electron density at a distance r along a given direction from the nucleus. It is shown in the appendix that the mixed potential used in this work can be derived from Einstein’s general theory of relativity by choosing a suitable energy-momentum tensor. Even though such derivation is not essential in our discussions, it shows that there is a possible connection between general relativity and quantum physics at the quantum level. Category: Mathematical Physics [8] viXra:1708.0196 [pdf] submitted on 2017-08-17 01:54:09 A Theory of Temporal Relativity Authors: Vu B Ho Comments: 15 Pages. In this work we develop a theory of temporal relativity, which includes a temporal special relativity and a temporal general relativity, on the basis of a generalised Newtonian temporal dynamics. We then show that a temporal relativity can be used to study the dynamics of quantum radiation of an elementary particle from a quantum system. 
Category: Mathematical Physics [7] viXra:1708.0192 [pdf] replaced on 2018-07-19 18:41:23 Spacetime Structures of Quantum Particles Authors: Vu B Ho Comments: 11 Pages. This paper has been published in International Journal of Physics In this work first we show that the three main formulations of physics, namely, Newton’s second law of motion, Maxwell field equations of electromagnetism and Einstein field equations of gravitation can be formulated in similar covariant forms so that the formulations differ only by the nature of the geometrical objects that represent the corresponding physical entities. We show that Newton’s law can be represented by a scalar, the electromagnetic field by a symmetric affine connection or a dual vector, and the gravitational field by a symmetric metric tensor. Then with the covariant formulation for the gravitational field we can derive differential equations that can be used to construct the spacetime structures for short-lived and stable quantum particles. We show that geometric objects, such as the Ricci scalare curvature and Gaussian curvature, exhibit probabilistic characteristics. In particular, we also show that Schrödinger wavefunctions can be used to construct spacetime structures for the quantum states of a quantum system, such as the hydrogen atom. Even though our discussions in this work are focused on the microscopic objects, the results obtained can be applied equally to the macroscopic phenomena. Category: Mathematical Physics [6] viXra:1708.0184 [pdf] replaced on 2017-09-18 07:37:47 A One Page Derivation of the Theory of Everything Authors: Alexandre Harvey-Tremblay Comments: 11 Pages. In a previous work I have derived the theory of everything (ToE) in a 74 pages paper. To make the theory more accessible, in this work, I derive the equation for the ToE on one page. I then follow the derivation with a few pages of discussion. Category: Mathematical Physics [5] viXra:1708.0166 [pdf] submitted on 2017-08-15 06:49:15 Regular and Singular Rational Extensions of the Harmonic Oscillator with Two Known Eigenstates Authors: Spiros Konstantogiannis Comments: 50 Pages. Exactly solvable rational extensions of the harmonic oscillator have been constructed as supersymmetric partner potentials of the harmonic oscillator [1] as well as using the so-called prepotential approach [2]. In this work, we use the factorization property of the energy eigenfunctions of the harmonic oscillator and a simple integrability condition to construct and examine series of regular and singular rational extensions of the harmonic oscillator with two known eigenstates, one of which is the ground state. Special emphasis is given to the interrelation between the special zeros of the wave function, the poles of the potential, and the excitation of the non-ground state. In the last section, we analyze specific examples. Category: Mathematical Physics [4] viXra:1708.0149 [pdf] replaced on 2017-09-09 17:23:30 On an Entropic Universal Turing Machine Isomorphic to Physics (draft) Authors: Alexandre Harvey-Tremblay Comments: 39 Pages. According to the second law of thermodynamics, a physical system will tend to increase its entropy over time. In this paper, I investigate a universal Turing machine (UTM) running multiple programs in parallel according to a scheduler. I found that if, over the course of the computation, the scheduler adjusts the work done on programs so as to maximize the entropy in the calculation of the halting probability Ω, the system will follow the laws of physics. 
Specifically, I show that the computation will obey algorithmic information theory (AIT) analogues to general relativity, entropic dark energy, the Schrödinger equation, a maximum computation speed analogous to the speed of light, the Lorentz transformation, the light cone, the Dirac equation for relativistic quantum mechanics, spins, polarization, etc. As the universe follows the second law of thermodynamics, these results would seem to suggest an affinity between an "entropic UTM" and the laws of physics. Category: Mathematical Physics [3] viXra:1708.0147 [pdf] replaced on 2017-08-28 07:33:28 Approximation to Higgs Boson Authors: Harry Watson Comments: 2 Pages. Consider the product (4pi)(4pi-1/pi)(4pi-2/pi)(4pi-2/pi)(4pi-4/pi). The product of the first three terms is 1836.15. The product of the last two terms is 134.72. The mass ratio of the proton to the electron is 1836.15. We may sharpen the result by letting the last two terms be (4pi-3/pi)(4pi-4/pi) = 131.13. (A quick numerical check of these products is sketched after this listing.) Category: Mathematical Physics [2] viXra:1708.0031 [pdf] replaced on 2018-05-26 01:00:37 Diamond Operator as a Square Root of D'alembertian for Bosons Authors: Hideki Mutoh Comments: 6 Pages. The Dirac equation includes the 4 x 4 complex differential operator matrix, which is one of the square roots of the d'Alembertian with spin of half integer. We found another 4 x 4 complex differential matrix as a square root of the d'Alembertian for bosons, which we call the diamond operator. The extended Maxwell's equations with charge creation-annihilation field and the linear gravitational field equations with energy creation-annihilation field can be simply written by using the diamond operator. It is shown that the linear gravitational field equations yield Newton's second law of motion, the Klein-Gordon equation, the time-independent Schrödinger equation, and the principles of quantum mechanics. Category: Mathematical Physics [1] viXra:1708.0011 [pdf] replaced on 2019-04-02 07:05:42 General Solutions of Mathematical Physics Equations Authors: Hong Lai Zhu Comments: 53 Pages. In this paper, using three proposed new transformation methods we have obtained general solutions and exact solutions of the problems of definite solutions of the Laplace equation, Poisson equation, Schrödinger equation, the homogeneous and non-homogeneous wave equations, Helmholtz equation and heat equation. In the process of solving, we find that in the more universal case, general solutions of partial differential equations have various forms such as basic general solution, series general solution, transformational general solution, generalized series general solution and so on. Category: Mathematical Physics
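As a purely arithmetical check of the products quoted in submission [3] above, the snippet below only evaluates the stated expressions and takes no position on their physical significance:

import math

pi = math.pi
first_three = 4*pi * (4*pi - 1/pi) * (4*pi - 2/pi)
last_two    = (4*pi - 2/pi) * (4*pi - 4/pi)
sharpened   = (4*pi - 3/pi) * (4*pi - 4/pi)
print(round(first_three, 2))   # 1836.15
print(round(last_two, 2))      # 134.72
print(round(sharpened, 2))     # 131.13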
Search: schrödinger equation 7 results Quantum Mechanics 7   Modeling Atoms, Molecules, and Crystals in One Dimension eigenenergy, eigenstate, schrödinger equation, waves, spin, hemmer Solving the time independent Schrödinger equation in one dimension using matrix diagonalisation for five different potentials.   Solving the Time-Dependent Schrödinger equation eigenenergy, eigenstate, schrödinger equation, tunneling, scattering, ehrenfest's theorem, animation The Time-Dependent Schrödinger equation is solved by expressing the solution as a linear combination of (stationary) solutions of the Time-Independent Schrödinger equation.   One-Dimensional Wave Propagation animation, schrödinger equation, tunneling, scattering A one-dimensional wave-packet is propagated forward in time for various different potentials.   Eigenenergies Through Matrix Diagonalization harmonic oscillator The eigenenergies of a system are found by discretizing the Schrödinger equation and finding the eigenvalues of the resulting matrix.   Numerical Determination of Eigenenergies for an Asymmetric Potential eigenenergy, forward shooting, eigenstate, schrödinger equation Using a forward-shooting method to determine the eigenenergies and eigenfunctions of an asymmetric potential in one dimension.   Hydrogen Molecule Ion schrödinger equation Employing Monte Carlo integration to determine the "shape" of the hydrogen molecule ion.   Band Structures and Newton's Method bloch's theorem, newton's theorem, schrödinger equation Using Newton's method to calculate the band structure for the simple Dirac comb potential in one dimension.
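Several of the entries above rest on the same numerical pattern: discretize the one-dimensional time-independent Schrödinger equation on a grid and diagonalize the resulting matrix. A minimal sketch of that pattern is given below (assuming hbar = m = 1; the harmonic potential and the grid parameters are illustrative choices, not taken from the exercises themselves):

import numpy as np

# Grid and potential (harmonic oscillator V = x^2/2, with hbar = m = 1)
N = 1000
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]
V = 0.5 * x**2

# Finite-difference Hamiltonian: -1/2 d^2/dx^2 + V(x)
main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)
print(E[:4])   # close to the exact eigenenergies 0.5, 1.5, 2.5, 3.5

Each column of psi is an eigenstate sampled on the grid; dividing a column by the square root of dx normalizes it so that the discrete sum of |psi|^2 dx equals one.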
In quantum mechanics, the uncertainty principle (also known as Heisenberg's uncertainty principle) is any of a variety of mathematical inequalities[1] asserting a fundamental limit to the precision with which certain pairs of physical properties of a particle, known as complementary variables or canonically conjugate variables such as position x and momentum p, can be known. First introduced in 1927 by the German physicist Werner Heisenberg, it states that the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa.[2] The formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard[3] later that year and by Hermann Weyl[4] in 1928: σx σp ≥ ħ/2, where ħ is the reduced Planck constant, h/(2π). Historically, the uncertainty principle has been confused[5][6] with a related effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the systems, that is, without changing something in a system. Heisenberg utilized such an observer effect at the quantum level (see below) as a physical "explanation" of quantum uncertainty.[7] It has since become clearer, however, that the uncertainty principle is inherent in the properties of all wave-like systems,[8] and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. Thus, the uncertainty principle actually states a fundamental property of quantum systems and is not a statement about the observational success of current technology.[9] It must be emphasized that measurement does not mean only a process in which a physicist-observer takes part, but rather any interaction between classical and quantum objects regardless of any observer.[10][note 1] Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their main research program. These include, for example, tests of number–phase uncertainty relations in superconducting[12] or quantum optics[13] systems. Applications dependent on the uncertainty principle for their operation include extremely low-noise technology such as that required in gravitational wave interferometers.[14] The uncertainty principle is not readily apparent on the macroscopic scales of everyday experience.[15] So it is helpful to demonstrate how it applies to more easily understood physical situations. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave mechanics picture of the uncertainty principle is more visually intuitive, but the more abstract matrix mechanics picture formulates it in a way that generalizes more easily. In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables is subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value (the eigenvalue). For example, if a measurement of an observable A is performed, then the system is in a particular eigenstate Ψ of that observable.
However, the particular eigenstate of the observable A need not be an eigenstate of another observable B: If so, then it does not have a unique associated measurement for it, as the system is not in an eigenstate of that observable.[16]
Wave mechanics interpretation (Ref [10])
[Figure: Propagation of de Broglie waves in 1d — real part of the complex amplitude is blue, imaginary part is green. The probability (shown as the colour opacity) of finding the particle at a given point x is spread out like a waveform; there is no definite position of the particle. As the amplitude increases above zero the curvature reverses sign, so the amplitude begins to decrease again, and vice versa — the result is an alternating amplitude: a wave.]
According to the de Broglie hypothesis, every object in the universe is a wave, i.e., a situation which gives rise to this phenomenon. The position of the particle is described by a wave function Ψ(x, t). The time-independent wave function of a single-moded plane wave of wavenumber k0 or momentum p0 is ψ(x) ∝ exp(i k0 x) = exp(i p0 x / ħ). The Born rule states that this should be interpreted as a probability density amplitude function in the sense that the probability of finding the particle between a and b is P[a ≤ x ≤ b] = ∫_a^b |ψ(x)|² dx. On the other hand, consider a wave function that is a sum of many waves, which we may write as ψ(x) ∝ Σn An exp(i pn x / ħ), with the coefficients An weighting the contribution of each momentum mode pn.
Matrix mechanics interpretation (Ref [10])
In matrix mechanics, observables such as position and momentum are represented by self-adjoint operators. When considering pairs of observables, an important quantity is the commutator. For a pair of operators Â and B̂, one defines their commutator as [Â, B̂] = ÂB̂ − B̂Â. In the case of position and momentum, the commutator is the canonical commutation relation [x̂, p̂] = iħ Î, where Î is the identity operator. Suppose, for the sake of proof by contradiction, that an eigenstate of position is also a right eigenstate of momentum, with constant eigenvalue p0. If this were true, then the commutator [x̂, p̂] acting on that state would give zero. On the other hand, the above canonical commutation relation requires that [x̂, p̂] act on any state as multiplication by iħ, which is nonzero; no state can therefore be a simultaneous eigenstate of position and momentum.
Robertson–Schrödinger uncertainty relations
The most common general form of the uncertainty principle is the Robertson uncertainty relation.[17] For an arbitrary Hermitian operator Â we can associate a standard deviation σA = (⟨Â²⟩ − ⟨Â⟩²)^(1/2), where the brackets ⟨ ⟩ indicate an expectation value. For a pair of operators Â and B̂, we may define their commutator as [Â, B̂] = ÂB̂ − B̂Â. In this notation, the Robertson uncertainty relation is given by σA σB ≥ |⟨[Â, B̂]⟩| / 2. The Robertson uncertainty relation immediately follows from a slightly stronger inequality, the Schrödinger uncertainty relation,[18] σA² σB² ≥ |⟨{Â, B̂}⟩/2 − ⟨Â⟩⟨B̂⟩|² + |⟨[Â, B̂]⟩/(2i)|², where we have introduced the anticommutator {Â, B̂} = ÂB̂ + B̂Â.
• For position and linear momentum, the canonical commutation relation [x̂, p̂] = iħ implies the Kennard inequality from above: σx σp ≥ ħ/2.
• For two orthogonal components of the total angular momentum of an object, σ(Ji) σ(Jj) ≥ (ħ/2) |⟨Jk⟩|, where i, j, k are distinct, and Ji denotes angular momentum along the xi axis. This relation implies that unless all three components vanish together, only a single component of a system's angular momentum can be defined with arbitrary precision, normally the component parallel to an external (magnetic or electric) field. Moreover, a choice Â = Jx, B̂ = Jy in angular momentum multiplets, ψ = |j, m〉, bounds the Casimir invariant (angular momentum squared, Jx² + Jy² + Jz²) from below and thus yields useful constraints such as j(j + 1) ≥ m(m + 1), and hence j ≥ m, among others.
• In non-relativistic mechanics, time is privileged as an independent variable. Nevertheless, in 1945, L. I. Mandelshtam and I. E.
Tamm derived a non-relativistic time–energy uncertainty relation, as follows.[26][27] For a quantum system in a non-stationary state ψ and an observable B represented by a self-adjoint operator  , the following formula holds: where σE is the standard deviation of the energy operator (Hamiltonian) in the state ψ, σB stands for the standard deviation of B. Although the second factor in the left-hand side has dimension of time, it is different from the time parameter that enters the Schrödinger equation. It is a lifetime of the state ψ with respect to the observable B: In other words, this is the time intervalt) after which the expectation value   changes appreciably. A counterexampleEdit Suppose we consider a quantum particle on a ring, where the wave function depends on an angular variable  , which we may take to lie in the interval  . Define "position" and "momentum" operators   and   by where we impose periodic boundary conditions on  . Note that the definition of   depends on our choice to have   range from 0 to  . These operators satisfy the usual commutation relations for position and momentum operators,  .[31] Now let   be any of the eigenstates of  , which are given by  . Note that these states are normalizable, unlike the eigenstates of the momentum operator on the line. Note also that the operator   is bounded, since   ranges over a bounded interval. Thus, in the state  , the uncertainty of   is zero and the uncertainty of   is finite, so that Although this result appears to violate the Robertson uncertainty principle, the paradox is resolved when we note that   is not in the domain of the operator  , since multiplication by   disrupts the periodic boundary conditions imposed on  .[22] Thus, the derivation of the Robertson relation, which requires   and   to be defined, does not apply. (These also furnish an example of operators satisfying the canonical commutation relations but not the Weyl relations.[32]) For the usual position and momentum operators   and   on the real line, no such counterexamples can occur. As long as   and   are defined in the state  , the Heisenberg uncertainty principle holds, even if   fails to be in the domain of   or of  .[33] (Refs [10][19]) Quantum harmonic oscillator stationary statesEdit the variances may be computed directly, The product of these standard deviations is then In particular, the above Kennard bound[3] is saturated for the ground state n=0, for which the probability density is just the normal distribution. Quantum harmonic oscillator with Gaussian initial conditionEdit Position (blue) and momentum (red) probability densities for an initially Gaussian distribution. From top to bottom, the animations show the cases Ω=ω, Ω=2ω, and Ω=ω/2. Note the tradeoff between the widths of the distributions. where Ω describes the width of the initial state but need not be the same as ω. Through integration over the propagator, we can solve for the full time-dependent solution. After many cancelations, the probability densities reduce to From the relations we can conclude the following: (the right most equality holds only when Ω = ω) . Coherent statesEdit A coherent state is a right eigenstate of the annihilation operator, which may be represented in terms of Fock states as Therefore, every coherent state saturates the Kennard bound with position and momentum each contributing an amount   in a "balanced" way. 
Moreover, every squeezed coherent state also saturates the Kennard bound although the individual contributions of position and momentum need not be balanced in general. Particle in a boxEdit The product of the standard deviations is therefore Constant momentumEdit such that the uncertainty product can only increase with time as Additional uncertainty relationsEdit Mixed statesEdit The Robertson–Schrödinger uncertainty relation may be generalized in a straightforward way to describe mixed states.[34] The Maccone–Pati uncertainty relationsEdit The Robertson–Schrödinger uncertainty relation can be trivial if the state of the system is chosen to be eigenstate of one of the observable. The stronger uncertainty relations proved by Maccone and Pati give non-trivial bounds on the sum of the variances for two incompatible observables.[35] For two non-commuting observables   and   the first stronger uncertainty relation is given by where  ,  ,   is a normalized vector that is orthogonal to the state of the system   and one should choose the sign of   to make this real quantity a positive number. The second stronger uncertainty relation is given by where   is a state orthogonal to  . The form of   implies that the right-hand side of the new uncertainty relation is nonzero unless   is an eigenstate of  . One may note that   can be an eigenstate of   without being an eigenstate of either   or  . However, when   is an eigenstate of one of the two observables the Heisenberg–Schrödinger uncertainty relation becomes trivial. But the lower bound in the new relation is nonzero unless   is an eigenstate of both. Phase spaceEdit Choosing  , we arrive at or, explicitly, after algebraic manipulation, Systematic and statistical errorsEdit The inequalities above focus on the statistical imprecision of observables as quantified by the standard deviation  . Heisenberg's original version, however, was dealing with the systematic error, a disturbance of the quantum system produced by the measuring apparatus, i.e., an observer effect. If we let   represent the error (i.e., inaccuracy) of a measurement of an observable A and   the disturbance produced on a subsequent measurement of the conjugate variable B by the former measurement of A, then the inequality proposed by Ozawa[6] — encompassing both systematic and statistical errors — holds: Heisenberg's uncertainty principle, as originally described in the 1927 formulation, mentions only the first term of Ozawa inequality, regarding the systematic error. Using the notation above to describe the error/disturbance effect of sequential measurements (first A, then B), it could be written as The formal derivation of the Heisenberg relation is possible but far from intuitive. It was not proposed by Heisenberg, but formulated in a mathematically consistent way only in recent years.[37][38] Also, it must be stressed that the Heisenberg formulation is not taking into account the intrinsic statistical errors   and  . There is increasing experimental evidence[8][39][40][41] that the total quantum uncertainty cannot be described by the Heisenberg term alone, but requires the presence of all the three terms of the Ozawa inequality. Using the same formalism,[1] it is also possible to introduce the other kind of physical situation, often confused with the previous one, namely the case of simultaneous measurements (A and B at the same time): The two simultaneous measurements on A and B are necessarily[42] unsharp or weak. 
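Before turning to such combined error relations, it may help to see the basic Robertson bound in action numerically. The sketch below is an illustration only; the spin-1/2 observables are taken to be the Pauli matrices (with ħ absorbed into the units) and the particular state is an arbitrary choice. It evaluates both sides of σA σB ≥ |⟨[Â, B̂]⟩|/2 for A = σx and B = σy:

import numpy as np

# Pauli matrices; spin-1/2 observables with hbar absorbed into the units
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# An arbitrary normalized state (illustrative choice)
psi = np.array([0.8, 0.36 + 0.48j], dtype=complex)

def expect(op):
    return np.vdot(psi, op @ psi)

def stdev(op):
    return np.sqrt(expect(op @ op).real - expect(op).real**2)

lhs = stdev(sx) * stdev(sy)                 # sigma_A * sigma_B
rhs = 0.5 * abs(expect(sx @ sy - sy @ sx))  # |<[A,B]>| / 2, which equals |<sigma_z>| here
print(lhs, ">=", rhs)                       # the Robertson bound holds

For these spin components the right-hand side reduces to |⟨σz⟩|, so pure states with ⟨σx⟩ = 0 or ⟨σy⟩ = 0 saturate the bound, while generic states (such as the one above) satisfy it strictly.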
It is also possible to derive an uncertainty relation that, as the Ozawa's one, combines both the statistical and systematic error components, but keeps a form very close to the Heisenberg original inequality. By adding Robertson[1] and Ozawa relations we obtain The four terms can be written as: as the inaccuracy in the measured values of the variable A and as the resulting fluctuation in the conjugate variable B, Fujikawa[43] established an uncertainty relation similar to the Heisenberg original one, but valid both for systematic and statistical errors: Quantum entropic uncertainty principleEdit For many distributions, the standard deviation is not a particularly natural way of quantifying the structure. For example, uncertainty relations in which one of the observables is an angle has little physical meaning for fluctuations larger than one period.[24][44][45][46] Other examples include highly bimodal distributions, or unimodal distributions with divergent variance. A solution that overcomes these issues is an uncertainty based on entropic uncertainty instead of the product of variances. While formulating the many-worlds interpretation of quantum mechanics in 1957, Hugh Everett III conjectured a stronger extension of the uncertainty principle based on entropic certainty.[47] This conjecture, also studied by Hirschman[48] and proven in 1975 by Beckner[49] and by Iwo Bialynicki-Birula and Jerzy Mycielski[50] is that, for two normalized, dimensionless Fourier transform pairs f(a) and g(b) where the Shannon information entropies are subject to the following constraint, where the logarithms may be in any base. The probability distribution functions associated with the position wave function ψ(x) and the momentum wave function φ(x) have dimensions of inverse length and momentum respectively, but the entropies may be rendered dimensionless by where x0 and p0 are some arbitrarily chosen length and momentum respectively, which render the arguments of the logarithms dimensionless. Note that the entropies will be functions of these chosen parameters. Due to the Fourier transform relation between the position wave function ψ(x) and the momentum wavefunction φ(p), the above constraint can be written for the corresponding entropies as where h is Planck's constant. Depending on one's choice of the x0 p0 product, the expression may be written in many ways. If x0 p0 is chosen to be h, then If, instead, x0 p0 is chosen to be ħ, then If x0 and p0 are chosen to be unity in whatever system of units are being used, then where h is interpreted as a dimensionless number equal to the value of Planck's constant in the chosen system of units. The quantum entropic uncertainty principle is more restrictive than the Heisenberg uncertainty principle. From the inverse logarithmic Sobolev inequalities[51] In other words, the Heisenberg uncertainty principle, is a consequence of the quantum entropic uncertainty principle, but not vice versa. A few remarks on these inequalities. First, the choice of base e is a matter of popular convention in physics. The logarithm can alternatively be in any base, provided that it be consistent on both sides of the inequality. Second, recall the Shannon entropy has been used, not the quantum von Neumann entropy. Finally, the normal distribution saturates the inequality, and it is the only distribution with this property, because it is the maximum entropy probability distribution among those with fixed variance (cf. here for proof). 
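The continuous entropic relation above can also be checked numerically. The sketch below is an illustrative calculation with ħ = 1; the Gaussian wave packet and the grid parameters are arbitrary choices. It builds a position-space wave function, obtains the momentum-space wave function with an FFT, and compares Hx + Hp with ln(e·π·ħ), the bound that a Gaussian saturates; the same grids also verify the Kennard bound.

import numpy as np

hbar = 1.0
N = 4096
L = 80.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 1.7                                            # width of the (arbitrary) Gaussian packet
psi = (2*np.pi*sigma**2)**(-0.25) * np.exp(-x**2 / (4*sigma**2))

# Momentum-space wave function via FFT (the omitted per-mode phases do not affect |phi|^2)
p = 2*np.pi*hbar*np.fft.fftfreq(N, d=dx)
dp = 2*np.pi*hbar / (N*dx)
phi = dx / np.sqrt(2*np.pi*hbar) * np.fft.fft(psi)

rho_x = np.abs(psi)**2
rho_p = np.abs(phi)**2

# Differential Shannon entropies (in nats), approximated by Riemann sums
Hx = -np.sum(rho_x * np.log(rho_x + 1e-300)) * dx
Hp = -np.sum(rho_p * np.log(rho_p + 1e-300)) * dp
print(Hx + Hp, ">=", np.log(np.e * np.pi * hbar))      # Gaussian packets saturate this bound

# Kennard bound sigma_x * sigma_p >= hbar/2 on the same grids
sig_x = np.sqrt(np.sum(x**2 * rho_x) * dx)
sig_p = np.sqrt(np.sum(p**2 * rho_p) * dp)
print(sig_x * sig_p, ">=", hbar / 2)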
A measurement apparatus will have a finite resolution set by the discretization of its possible outputs into bins, with the probability of lying within one of the bins given by the Born rule. We will consider the most common experimental situation, in which the bins are of uniform size. Let δx be a measure of the spatial resolution. We take the zeroth bin to be centered near the origin, with possibly some small constant offset c. The probability of lying within the jth interval of width δx is To account for this discretization, we can define the Shannon entropy of the wave function for a given measurement apparatus as Under the above definition, the entropic uncertainty relation is Here we note that δx δp/h is a typical infinitesimal phase space volume used in the calculation of a partition function. The inequality is also strict and not saturated. Efforts to improve this bound are an active area of research. Harmonic analysisEdit Further mathematical uncertainty inequalities, including the above entropic uncertainty, hold between a function f and its Fourier transform ƒ̂:[52][53][54] Signal processing Edit where   and   are the standard deviations of the time and frequency estimates respectively [55]. Stated alternatively, "One cannot simultaneously sharply localize a signal (function f ) in both the time domain and frequency domain ( ƒ̂, its Fourier transform)". DFT-Uncertainty principleEdit There is an uncertainty principle that uses signal sparsity (or the number of non-zero coefficients).[56] Let   be a sequence of N complex numbers and   its discrete Fourier transform. Denote by   the number of non-zero elements in the time sequence   and by   the number of non-zero elements in the frequency sequence  . Then, Benedicks's theoremEdit Amrein–Berthier[57] and Benedicks's theorem[58] intuitively says that the set of points where f is non-zero and the set of points where ƒ̂ is non-zero cannot both be small. Specifically, it is impossible for a function f in L2(R) and its Fourier transform ƒ̂ to both be supported on sets of finite Lebesgue measure. A more quantitative version is[59][60] One expects that the factor CeC|S||Σ| may be replaced by CeC(|S||Σ|)1/d, which is only known if either S or Σ is convex. Hardy's uncertainty principleEdit The mathematician G. H. Hardy formulated the following uncertainty principle:[61] it is not possible for f and ƒ̂ to both be "very rapidly decreasing". Specifically, if f in   is such that   (  an integer), then, if ab > 1, f = 0, while if ab = 1, then there is a polynomial P of degree N such that This was later improved as follows: if   is such that where P is a polynomial of degree (Nd)/2 and A is a real d×d positive definite matrix. This result was stated in Beurling's complete works without proof and proved in Hörmander[62] (the case  ) and Bonami, Demange, and Jaming[63] for the general case. Note that Hörmander–Beurling's version implies the case ab > 1 in Hardy's Theorem while the version by Bonami–Demange–Jaming covers the full strength of Hardy's Theorem. A different proof of Beurling's theorem based on Liouville's theorem appeared in ref.[64] A full description of the case ab < 1 as well as the following extension to Schwartz class distributions appears in ref.[65] Theorem. If a tempered distribution   is such that for some convenient polynomial P and real positive definite matrix A of type d × d. 
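The DFT form of the uncertainty principle mentioned above (the Donoho–Stark sparsity bound of ref. [56], stating that the number of nonzero time samples times the number of nonzero frequency samples is at least N) is easy to probe numerically. In the sketch below (an illustration; the signal length and spike spacing are arbitrary choices with the spacing dividing N), a "Dirac comb" has a DFT that is again a comb, and the product of the two support sizes equals N, saturating the bound.

import numpy as np

N, k = 240, 12                      # signal length and spike spacing (k divides N)
x = np.zeros(N)
x[::k] = 1.0                        # Dirac comb: N/k nonzero samples
X = np.fft.fft(x)

support_t = np.count_nonzero(x)
support_f = np.count_nonzero(np.abs(X) > 1e-9)   # threshold absorbs floating-point noise
print(support_t, support_f, support_t * support_f, ">=", N)   # 20 * 12 = 240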
Werner Heisenberg formulated the uncertainty principle at Niels Bohr's institute in Copenhagen, while working on the mathematical foundations of quantum mechanics.[66] Werner Heisenberg and Niels Bohr In his celebrated 1927 paper, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik" ("On the Perceptual Content of Quantum Theoretical Kinematics and Mechanics"), Heisenberg established this expression as the minimum amount of unavoidable momentum disturbance caused by any position measurement,[2] but he did not give a precise definition for the uncertainties Δx and Δp. Instead, he gave some plausible estimates in each case separately. In his Chicago lecture[69] he refined his principle: Kennard[3] in 1927 first proved the modern inequality: Terminology and translationEdit Heisenberg's microscopeEdit The principle is quite counter-intuitive, so the early students of quantum theory had to be reassured that naive measurements to violate it were bound always to be unworkable. One way in which Heisenberg originally illustrated the intrinsic impossibility of violating the uncertainty principle is by utilizing the observer effect of an imaginary microscope as a measuring device.[69] He imagines an experimenter trying to measure the position and momentum of an electron by shooting a photon at it.[71]:49–50 Problem 2 – If a large aperture is used for the microscope, the electron's location can be well resolved (see Rayleigh criterion); but by the principle of conservation of momentum, the transverse momentum of the incoming photon affects the electron's beamline momentum and hence, the new momentum of the electron resolves poorly. If a small aperture is used, the accuracy of both resolutions is the other way around. The combination of these trade-offs implies that no matter what photon wavelength and aperture size are used, the product of the uncertainty in measured position and measured momentum is greater than or equal to a lower limit, which is (up to a small numerical factor) equal to Planck's constant.[72] Heisenberg did not care to formulate the uncertainty principle as an exact limit (which is elaborated below), and preferred to use it instead, as a heuristic quantitative statement, correct up to small numerical factors, which makes the radically new noncommutativity of quantum mechanics inevitable. Critical reactionsEdit The ideal of the detached observerEdit Wolfgang Pauli called Einstein's fundamental objection to the uncertainty principle "the ideal of the detached observer" (phrase translated from the German): "Like the moon has a definite position" Einstein said to me last winter, "whether or not we look at the moon, the same must also hold for the atomic objects, as there is no sharp distinction possible between these and macroscopic objects. Observation cannot create an element of reality like a position, there must be something contained in the complete description of physical reality which corresponds to the possibility of observing a position, already before the observation has been actually made." I hope, that I quoted Einstein correctly; it is always difficult to quote somebody out of memory with whom one does not agree. It is precisely this kind of postulate which I call the ideal of the detached observer. 
• Letter from Pauli to Niels Bohr, February 15, 1955[73] Einstein's slitEdit A similar analysis with particles diffracting through multiple slits is given by Richard Feynman.[74] Einstein's boxEdit EPR paradox for entangled particlesEdit But Einstein came to much more far-reaching conclusions from the same thought experiment. He believed the "natural basic assumption" that a complete description of reality would have to predict the results of experiments from "locally changing deterministic quantities" and therefore would have to include more information than the maximum possible allowed by the uncertainty principle. In 1964, John Bell showed that this assumption can be falsified, since it would imply a certain inequality between the probabilities of different experiments. Experimental results confirm the predictions of quantum mechanics, ruling out Einstein's basic assumption that led him to the suggestion of his hidden variables. These hidden variables may be "hidden" because of an illusion that occurs during observations of objects that are too large or too small. This illusion can be likened to rotating fan blades that seem to pop in and out of existence at different locations and sometimes seem to be in the same place at the same time when observed. This same illusion manifests itself in the observation of subatomic particles. Both the fan blades and the subatomic particles are moving so fast that the illusion is seen by the observer. Therefore, it is possible that there would be predictability of the subatomic particles behavior and characteristics to a recording device capable of very high speed tracking....Ironically this fact is one of the best pieces of evidence supporting Karl Popper's philosophy of invalidation of a theory by falsification-experiments. That is to say, here Einstein's "basic assumption" became falsified by experiments based on Bell's inequalities. For the objections of Karl Popper to the Heisenberg inequality itself, see below. Popper's criticismEdit Karl Popper approached the problem of indeterminacy as a logician and metaphysical realist.[81] He disagreed with the application of the uncertainty relations to individual particles rather than to ensembles of identically prepared particles, referring to them as "statistical scatter relations".[81][82] In this statistical interpretation, a particular measurement may be made to arbitrary precision without invalidating the quantum theory. This directly contrasts with the Copenhagen interpretation of quantum mechanics, which is non-deterministic but lacks local hidden variables. In 1934, Popper published Zur Kritik der Ungenauigkeitsrelationen (Critique of the Uncertainty Relations) in Naturwissenschaften,[83] and in the same year Logik der Forschung (translated and updated by the author as The Logic of Scientific Discovery in 1959), outlining his arguments for the statistical interpretation. In 1982, he further developed his theory in Quantum theory and the schism in Physics, writing: Many-worlds uncertaintyEdit Free willEdit Some scientists including Arthur Compton[86] and Martin Heisenberg[87] have suggested that the uncertainty principle, or at least the general probabilistic nature of quantum mechanics, could be evidence for the two-stage model of free will. 
One critique, however, is that apart from the basic role of quantum mechanics as a foundation for chemistry, nontrivial biological mechanisms requiring quantum mechanics are unlikely, due to the rapid decoherence time of quantum systems at room temperature.[88] The standard view, however, is that this decoherence is overcome by both screening and decoherence-free subspaces found in biological cells.[88] The second law of thermodynamicsEdit There is reason to believe that violating the uncertainty principle also strongly implies the violation of the second law of thermodynamics.[89] See alsoEdit 1. ^ N.B. on precision: If   and   are the precisions of position and momentum obtained in an individual measurement and  ,   their standard deviations in an ensemble of individual measurements on similarly prepared systems, then "There are, in principle, no restrictions on the precisions of individual measurements   and  , but the standard deviations will always satisfy  ".[11] 1. ^ a b c Sen, D. (2014). "The Uncertainty relations in quantum mechanics" (PDF). Current Science. 107 (2): 203–218. 2. ^ a b c Heisenberg, W. (1927), "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", Zeitschrift für Physik (in German), 43 (3–4): 172–198, Bibcode:1927ZPhy...43..172H, doi:10.1007/BF01397280.. Annotated pre-publication proof sheet of Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, March 21, 1927. 3. ^ a b c Kennard, E. H. (1927), "Zur Quantenmechanik einfacher Bewegungstypen", Zeitschrift für Physik (in German), 44 (4–5): 326–352, Bibcode:1927ZPhy...44..326K, doi:10.1007/BF01391200. 4. ^ Weyl, H. (1928), Gruppentheorie und Quantenmechanik, Leipzig: Hirzel 5. ^ Furuta, Aya (2012), "One Thing Is Certain: Heisenberg's Uncertainty Principle Is Not Dead", Scientific American 6. ^ a b Ozawa, Masanao (2003), "Universally valid reformulation of the Heisenberg uncertainty principle on noise and disturbance in measurement", Physical Review A, 67 (4): 42105, arXiv:quant-ph/0207121, Bibcode:2003PhRvA..67d2105O, doi:10.1103/PhysRevA.67.042105 7. ^ Werner Heisenberg, The Physical Principles of the Quantum Theory, p. 20 8. ^ a b Rozema, L. A.; Darabi, A.; Mahler, D. H.; Hayat, A.; Soudagar, Y.; Steinberg, A. M. (2012). "Violation of Heisenberg's Measurement–Disturbance Relationship by Weak Measurements". Physical Review Letters. 109 (10): 100404. arXiv:1208.0034v2. Bibcode:2012PhRvL.109j0404R. doi:10.1103/PhysRevLett.109.100404. PMID 23005268. 9. ^ Indian Institute of Technology Madras, Professor V. Balakrishnan, Lecture 1 – Introduction to Quantum Physics; Heisenberg's uncertainty principle, National Programme of Technology Enhanced Learning on YouTube 10. ^ a b c d L.D. Landau, E. M. Lifshitz (1977). Quantum Mechanics: Non-Relativistic Theory. Vol. 3 (3rd ed.). Pergamon Press. ISBN 978-0-08-020940-1. Online copy. 11. ^ Section 3.2 of Ballentine, Leslie E. (1970), "The Statistical Interpretation of Quantum Mechanics", Reviews of Modern Physics, 42 (4): 358–381, Bibcode:1970RvMP...42..358B, doi:10.1103/RevModPhys.42.358. This fact is experimentally well-known for example in quantum optics (see e.g. chap. 2 and Fig. 2.1 Leonhardt, Ulf (1997), Measuring the Quantum State of Light, Cambridge: Cambridge University Press, ISBN 0 521 49730 2 12. ^ Elion, W. J.; M. Matters, U. Geigenmüller & J. E. Mooij; Geigenmüller, U.; Mooij, J. E. 
(1994), "Direct demonstration of Heisenberg's uncertainty principle in a superconductor", Nature, 371 (6498): 594–595, Bibcode:1994Natur.371..594E, doi:10.1038/371594a0 13. ^ Smithey, D. T.; M. Beck, J. Cooper, M. G. Raymer; Cooper, J.; Raymer, M. G. (1993), "Measurement of number–phase uncertainty relations of optical fields", Phys. Rev. A, 48 (4): 3159–3167, Bibcode:1993PhRvA..48.3159S, doi:10.1103/PhysRevA.48.3159, PMID 9909968CS1 maint: Multiple names: authors list (link) 14. ^ Caves, Carlton (1981), "Quantum-mechanical noise in an interferometer", Phys. Rev. D, 23 (8): 1693–1708, Bibcode:1981PhRvD..23.1693C, doi:10.1103/PhysRevD.23.1693 15. ^ Jaeger, Gregg (September 2014). "What in the (quantum) world is macroscopic?". American Journal of Physics. 82 (9): 896–905. Bibcode:2014AmJPh..82..896J. doi:10.1119/1.4878358. 16. ^ Claude Cohen-Tannoudji; Bernard Diu; Franck Laloë (1996), Quantum mechanics, Wiley-Interscience: Wiley, pp. 231–233, ISBN 978-0-471-56952-7 17. ^ a b Robertson, H. P. (1929), "The Uncertainty Principle", Phys. Rev., 34: 163–64, Bibcode:1929PhRv...34..163R, doi:10.1103/PhysRev.34.163 18. ^ a b Schrödinger, E. (1930), "Zum Heisenbergschen Unschärfeprinzip", Sitzungsberichte der Preussischen Akademie der Wissenschaften, Physikalisch-mathematische Klasse, 14: 296–303 19. ^ a b Griffiths, David (2005), Quantum Mechanics, New Jersey: Pearson 20. ^ Riley, K. F.; M. P. Hobson and S. J. Bence (2006), Mathematical Methods for Physics and Engineering, Cambridge, p. 246 21. ^ Davidson, E. R. (1965), "On Derivations of the Uncertainty Principle", J. Chem. Phys., 42 (4): 1461, Bibcode:1965JChPh..42.1461D, doi:10.1063/1.1696139 22. ^ a b c Hall, B. C. (2013), Quantum Theory for Mathematicians, Springer, p. 245 23. ^ Jackiw, Roman (1968), "Minimum Uncertainty Product, Number‐Phase Uncertainty Product, and Coherent States", J. Math. Phys., 9 (3): 339, Bibcode:1968JMP.....9..339J, doi:10.1063/1.1664585 24. ^ a b Carruthers, P.; Nieto, M. M. (1968), "Phase and Angle Variables in Quantum Mechanics", Rev. Mod. Phys., 40 (2): 411–440, Bibcode:1968RvMP...40..411C, doi:10.1103/RevModPhys.40.411 25. ^ Hall, B. C. (2013), Quantum Theory for Mathematicians, Springer 26. ^ L. I. Mandelshtam, I. E. Tamm, The uncertainty relation between energy and time in nonrelativistic quantum mechanics, 1945. 27. ^ Hilgevoord, Jan (1996). "The uncertainty principle for energy and time" (PDF). American Journal of Physics. 64 (12): 1451–1456. Bibcode:1996AmJPh..64.1451H. doi:10.1119/1.18410.; Hilgevoord, Jan (1998). "The uncertainty principle for energy and time. II". American Journal of Physics. 66 (5): 396–402. Bibcode:1998AmJPh..66..396H. doi:10.1119/1.18880. 28. ^ The broad linewidth of fast-decaying states makes it difficult to accurately measure the energy of the state, and researchers have even used detuned microwave cavities to slow down the decay rate, to get sharper peaks. Gabrielse, Gerald; H. Dehmelt (1985), "Observation of Inhibited Spontaneous Emission", Physical Review Letters, 55 (1): 67–70, Bibcode:1985PhRvL..55...67G, doi:10.1103/PhysRevLett.55.67, PMID 10031682 29. ^ Likharev, K. K.; A. B. Zorin (1985), "Theory of Bloch-Wave Oscillations in Small Josephson Junctions", J. Low Temp. Phys., 59 (3/4): 347–382, Bibcode:1985JLTP...59..347L, doi:10.1007/BF00683782 30. ^ Anderson, P. W. (1964), "Special Effects in Superconductivity", in Caianiello, E. R., Lectures on the Many-Body Problem, Vol. 2, New York: Academic Press 31. 
^ More precisely,   whenever both   and   are defined, and the space of such   is a dense subspace of the quantum Hilbert space. See Hall, B. C. (2013), Quantum Theory for Mathematicians, Springer, p. 245 34. ^ Steiger, Nathan. "Quantum Uncertainty and Conservation Law Restrictions on Gate Fidelity". Brigham Young University. Retrieved 19 June 2011. 35. ^ Maccone, Lorenzo; Pati, Arun K. (31 December 2014). "Stronger Uncertainty Relations for All Incompatible Observables". Physical Review Letters. 113 (26): 260401. arXiv:1407.0338. Bibcode:2014PhRvL.113z0401M. doi:10.1103/PhysRevLett.113.260401. 36. ^ Curtright, T.; Zachos, C. (2001). "Negative Probability and Uncertainty Relations". Modern Physics Letters A. 16 (37): 2381–2385. arXiv:hep-th/0105226. Bibcode:2001MPLA...16.2381C. doi:10.1142/S021773230100576X. 37. ^ Busch, P.; Lahti, P.; Werner, R. F. (2013). "Proof of Heisenberg's Error-Disturbance Relation". Physical Review Letters. 111 (16): 160405. arXiv:1306.1565. Bibcode:2013PhRvL.111p0405B. doi:10.1103/PhysRevLett.111.160405. PMID 24182239. 38. ^ Busch, P.; Lahti, P.; Werner, R. F. (2014). "Heisenberg uncertainty for qubit measurements". Physical Review A. 89. arXiv:1311.0837. Bibcode:2014PhRvA..89a2129B. doi:10.1103/PhysRevA.89.012129. 39. ^ Erhart, J.; Sponar, S.; Sulyok, G.; Badurek, G.; Ozawa, M.; Hasegawa, Y. (2012). "Experimental demonstration of a universally valid error-disturbance uncertainty relation in spin measurements". Nature Physics. 8 (3): 185–189. arXiv:1201.1833. Bibcode:2012NatPh...8..185E. doi:10.1038/nphys2194. 40. ^ Baek, S.-Y.; Kaneda, F.; Ozawa, M.; Edamatsu, K. (2013). "Experimental violation and reformulation of the Heisenberg's error-disturbance uncertainty relation". Scientific Reports. 3: 2221. Bibcode:2013NatSR...3E2221B. doi:10.1038/srep02221. PMC 3713528. PMID 23860715. 41. ^ Ringbauer, M.; Biggerstaff, D.N.; Broome, M.A.; Fedrizzi, A.; Branciard, C.; White, A.G. (2014). "Experimental Joint Quantum Measurements with Minimum Uncertainty". Physical Review Letters. 112: 020401. arXiv:1308.5688. Bibcode:2014PhRvL.112b0401R. doi:10.1103/PhysRevLett.112.020401. PMID 24483993. 42. ^ Björk, G.; Söderholm, J.; Trifonov, A.; Tsegaye, T.; Karlsson, A. (1999). "Complementarity and the uncertainty relations". Physical Review. A60: 1878. arXiv:quant-ph/9904069. Bibcode:1999PhRvA..60.1874B. doi:10.1103/PhysRevA.60.1874. 43. ^ Fujikawa, Kazuo (2012). "Universally valid Heisenberg uncertainty relation". Physical Review A. 85 (6). arXiv:1205.1360. Bibcode:2012PhRvA..85f2117F. doi:10.1103/PhysRevA.85.062117. 44. ^ Judge, D. (1964), "On the uncertainty relation for angle variables", Il Nuovo Cimento, 31 (2): 332–340, Bibcode:1964NCim...31..332J, doi:10.1007/BF02733639 45. ^ Bouten, M.; Maene, N.; Van Leuven, P. (1965), "On an uncertainty relation for angle variables", Il Nuovo Cimento, 37 (3): 1119–1125, Bibcode:1965NCim...37.1119B, doi:10.1007/BF02773197 46. ^ Louisell, W. H. (1963), "Amplitude and phase uncertainty relations", Physics Letters, 7 (1): 60–61, Bibcode:1963PhL.....7...60L, doi:10.1016/0031-9163(63)90442-6 47. ^ DeWitt, B. S.; Graham, N. (1973), The Many-Worlds Interpretation of Quantum Mechanics, Princeton: Princeton University Press, pp. 52–53, ISBN 0-691-08126-3 48. ^ Hirschman, I. I., Jr. (1957), "A note on entropy", American Journal of Mathematics, 79 (1): 152–156, doi:10.2307/2372390, JSTOR 2372390. 49. ^ Beckner, W. (1975), "Inequalities in Fourier analysis", Annals of Mathematics, 102 (6): 159–182, doi:10.2307/1970980, JSTOR 1970980. 50. 
^ Bialynicki-Birula, I.; Mycielski, J. (1975), "Uncertainty Relations for Information Entropy in Wave Mechanics", Communications in Mathematical Physics, 44 (2): 129–132, Bibcode:1975CMaPh..44..129B, doi:10.1007/BF01608825 51. ^ Chafaï, D. (2003), Gaussian maximum of entropy and reversed log-Sobolev inequality, pp. 194–200, arXiv:math/0102227, doi:10.1007/978-3-540-36107-7_5, ISBN 978-3-540-00072-3 52. ^ Havin, V.; Jöricke, B. (1994), The Uncertainty Principle in Harmonic Analysis, Springer-Verlag 53. ^ Folland, Gerald; Sitaram, Alladi (May 1997), "The Uncertainty Principle: A Mathematical Survey", Journal of Fourier Analysis and Applications, 3 (3): 207–238, doi:10.1007/BF02649110, MR 1448337 54. ^ Sitaram, A (2001) [1994], "Uncertainty principle, mathematical", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4 55. ^ Matt Hall, "What is the Gabor uncertainty principle?" 56. ^ Donoho, D.L.; Stark, P.B (1989). "Uncertainty principles and signal recovery". SIAM Journal on Applied Mathematics. 49 (3): 906–931. doi:10.1137/0149053. 57. ^ Amrein, W.O.; Berthier, A.M. (1977), "On support properties of Lp-functions and their Fourier transforms", Journal of Functional Analysis, 24 (3): 258–267, doi:10.1016/0022-1236(77)90056-8. 58. ^ Benedicks, M. (1985), "On Fourier transforms of functions supported on sets of finite Lebesgue measure", J. Math. Anal. Appl., 106 (1): 180–183, doi:10.1016/0022-247X(85)90140-4 59. ^ Nazarov, F. (1994), "Local estimates for exponential polynomials and their applications to inequalities of the uncertainty principle type", St. Petersburg Math. J., 5: 663–717 60. ^ Jaming, Ph. (2007), "Nazarov's uncertainty principles in higher dimension", J. Approx. Theory, 149 (1): 30–41, arXiv:math/0612367, doi:10.1016/j.jat.2007.04.005 61. ^ Hardy, G.H. (1933), "A theorem concerning Fourier transforms", Journal of the London Mathematical Society, 8 (3): 227–231, doi:10.1112/jlms/s1-8.3.227 62. ^ Hörmander, L. (1991), "A uniqueness theorem of Beurling for Fourier transform pairs", Ark. Mat., 29: 231–240, Bibcode:1991ArM....29..237H, doi:10.1007/BF02384339 63. ^ Bonami, A.; Demange, B.; Jaming, Ph. (2003), "Hermite functions and uncertainty principles for the Fourier and the windowed Fourier transforms", Rev. Mat. Iberoamericana, 19: 23–55., arXiv:math/0102111, Bibcode:2001math......2111B, doi:10.4171/RMI/337 64. ^ Hedenmalm, H. (2012), "Heisenberg's uncertainty principle in the sense of Beurling", J. Anal. Math., 118 (2): 691–702, arXiv:1203.5222, doi:10.1007/s11854-012-0048-9 65. ^ Demange, Bruno (2009), Uncertainty Principles Associated to Non-degenerate Quadratic Forms, Société Mathématique de France, ISBN 978-2-85629-297-6 66. ^ American Physical Society online exhibit on the Uncertainty Principle 67. ^ Bohr, Niels; Noll, Waldemar (1958), "Atomic Physics and Human Knowledge", American Journal of Physics, New York: Wiley, 26 (8): 38, Bibcode:1958AmJPh..26..596B, doi:10.1119/1.1934707 68. ^ Heisenberg, W., Die Physik der Atomkerne, Taylor & Francis, 1952, p. 30. 69. ^ a b c Heisenberg, W. (1930), Physikalische Prinzipien der Quantentheorie (in German), Leipzig: Hirzel English translation The Physical Principles of Quantum Theory. Chicago: University of Chicago Press, 1930. 70. ^ Cassidy, David; Saperstein, Alvin M. 
(2009), "Beyond Uncertainty: Heisenberg, Quantum Physics, and the Bomb", Physics Today, New York: Bellevue Literary Press, 63: 185, Bibcode:2010PhT....63a..49C, doi:10.1063/1.3293416 71. ^ George Greenstein; Arthur Zajonc (2006). The Quantum Challenge: Modern Research on the Foundations of Quantum Mechanics. Jones & Bartlett Learning. ISBN 978-0-7637-2470-2. 72. ^ Tipler, Paul A.; Llewellyn, Ralph A. (1999), "5–5", Modern Physics (3rd ed.), W. H. Freeman and Co., ISBN 1-57259-164-1 73. ^ Enz, Charles P.; Meyenn, Karl von, eds. (1994). Writings on physics and philosophy by Wolfgang Pauli. Springer-Verlag. p. 43. ISBN 3-540-56859-X; translated by Robert Schlapp 74. ^ Feynman lectures on Physics, vol 3, 2–2 75. ^ a b Gamow, G., The great physicists from Galileo to Einstein, Courier Dover, 1988, p.260. 77. ^ Gamow, G., The great physicists from Galileo to Einstein, Courier Dover, 1988, p. 260–261. 78. ^ Kumar, M., Quantum: Einstein, Bohr and the Great Debate About the Nature of Reality, Icon, 2009, p. 287. 79. ^ Isaacson, Walter (2007), Einstein: His Life and Universe, New York: Simon & Schuster, p. 452, ISBN 978-0-7432-6473-0 80. ^ Gerardus 't Hooft has at times advocated this point of view. 81. ^ a b c Popper, Karl (1959), The Logic of Scientific Discovery, Hutchinson & Co. 82. ^ Jarvie, Ian Charles; Milford, Karl; Miller, David W (2006), Karl Popper: a centenary assessment, 3, Ashgate Publishing, ISBN 978-0-7546-5712-5 83. ^ Popper, Karl; Carl Friedrich von Weizsäcker (1934), "Zur Kritik der Ungenauigkeitsrelationen (Critique of the Uncertainty Relations)", Naturwissenschaften, 22 (48): 807–808, Bibcode:1934NW.....22..807P, doi:10.1007/BF01496543. 84. ^ Popper, K. Quantum theory and the schism in Physics, Unwin Hyman Ltd, 1982, pp. 53–54. 85. ^ Mehra, Jagdish; Rechenberg, Helmut (2001), The Historical Development of Quantum Theory, Springer, ISBN 978-0-387-95086-0 86. ^ Compton, A. H. (1931). "The Uncertainty Principle and Free Will". Science. 74 (1911): 172. Bibcode:1931Sci....74..172C. doi:10.1126/science.74.1911.172. PMID 17808216. 87. ^ Heisenberg, M. (2009). "Is free will an illusion?". Nature. 459 (7244): 164–165. Bibcode:2009Natur.459..164H. doi:10.1038/459164a. 88. ^ a b Davies, P. C. W. (2004). "Does quantum mechanics play a non-trivial role in life?". Biosystems. 78 (1–3): 69–79. doi:10.1016/j.biosystems.2004.07.001. PMID 15555759. 89. ^ E. Hanggi, S. Wehner, A violation of the uncertainty principle also implies the violation of the second law of thermodynamics; 2012, arXiv:1205.6894v1 (quant-phy). External linksEdit
The Megaphragma mymaripenne is the smallest animal with eyes, brain, wings, muscles, guts and genitals. If by some miracle it could be still shrinked, how much smaller could it get before it starts to become largely aware of quantum mechanical effects such as tunneling? By shrinking I mean wasp being made of less atoms but with similar "organs". By affected I mean, if that wasp was by some miracle smart as humans, it would have understanding of quantum mechanical effects same as humans have understanding of naive physics. • 82 $\begingroup$ According to a guy named "Schrödinger", a cat should be small enough to be affected by quantum mechanics. $\endgroup$ – Nolonar Nov 3 '16 at 14:39 • 3 $\begingroup$ One of the theories of olfaction (smell) includes pretty significant quantum effects. If that is true, humans (and most other animals) would fit as well. $\endgroup$ – Alice Nov 3 '16 at 15:37 • 22 $\begingroup$ @Nolonar I don't know who you're talking about, but according to the physicist named Schrödinger, a cat definitely shouldn't be small enough. $\endgroup$ – JiK Nov 3 '16 at 15:58 • 8 $\begingroup$ As it turns out, photosynthesis uses quantum tunneling to move electrons from the surface, deeper into the leaf without generating heat on the way. So some of the biggest living things on the planet are using "quantum mechanics" outside of chemistry. $\endgroup$ – Chris Becke Nov 4 '16 at 6:05 • 7 $\begingroup$ Huh... as for Schrödinger... a lion is a cat... a 250kg cat. And now suddenly, it becomes obvious why sabertooths are extinct and we find their bones in limestone. They got stuck trying to tunnel through walls and starved. $\endgroup$ – Damon Nov 4 '16 at 11:17 12 Answers 12 It is hard to put an exact number on this, but it seems like the answer would be maybe 1000 atoms at most. From Wikipedia, And that is just for superposition in location, not even getting to quantum tunneling like you mention in your question. Observing QM effects in anything larger than that has been notoriously difficult. However, some scientists have been trying to observe a small microbe in a superposition. I can't find anything indicating that the experiment was actually done, just lots of stuff about people trying to do it and thinking it will be done in the next few years. So maybe we will get small bacterium and viruses to experience QM effects relatively soon. That would probably set the upper limit on the size you are asking for. This source claims that even a 100nm microbe would be seriously difficult to observe in a superposition: A recent proposal suggested “piggybacking” a tiny microbe (100 nanometres) on to a slightly less tiny (15 micrometres) aluminium drum, whose motion has been brought to the quantum level. While this experiment is feasible, the separation between the “two places at once” that the bacteria would find itself in is 100m times smaller than the bacterium itself. Edit: Just to clarify my wording, everything always experiences quantum effects, they just become unobservably small as the object gets larger and larger (with rare exceptions, like the black body spectrum of the sun, but that is another matter entirely). • 1 $\begingroup$ I estimate about 2nm for diffraction effects, as a very rough analysis. Superposition is a slippery concept, but it's nice to see we're on about the same scale. $\endgroup$ – spraff Nov 4 '16 at 18:33 • 1 $\begingroup$ The problem for superposition is to keep the system coherent. 
That typically mean a very cold system near the vacuum, definitely not physiological conditions. $\endgroup$ – Davidmh Nov 5 '16 at 15:13 You appear to have a misunderstanding of how physics works. Classical physics (i.e., the thing we generally refer to when discussing how things interact) is merely an approximation of quantum mechanics. There is no boundary that says "Only past this point are you affected by quantum mechanics." But, if you are concerned with how this creature would behave, you would need to make it smaller than an atom, as only then does quantum mechanics predict different behavior than Newtonian physics. Of course, one could simply look up quantum tunneling to see that it applies to particles, and not organisms, which are comprised of lots of particles. There appears to be some concern about my third citation, and I completely agree. The user on Physics has no linked research, low reputation, and low votes. However, I don't pretend to be an expert in the field of quantum mechanics; I rely entirely on some basic ideas of what it is and the expertise of others. To sum up the above (and comments below): quantum mechanics dominates in the smallest scales, while Newtonian physics dominates in the largest scales, and no one knows why or what the tipping point is. • 8 $\begingroup$ Yay! No boundary to QM. My first thought seeing this question. Glad to see your answer. Macroscopic systems don't show quantum behaviour -- usually. Bose-Einstein condensates are one of the exceptions. Although modern electronics runs on QM properties the spooky effects don't flow into our everyday world. $\endgroup$ – a4android Nov 3 '16 at 13:00 • 14 $\begingroup$ You need a tighter definition of "affected". Chemistry happens because of quantum mechanics. Deuterium is an imperfect substitute for hydrogen in biochemistry because of QM. We don't spontaneously ignite because of QM (triplet vs. singlet Oxygen energy levels). Differently, when our eyes are startlight-adapted we see a "grainy" low-resolution image. Your eyes are detecting individual light quanta. $\endgroup$ – nigel222 Nov 3 '16 at 15:25 • 4 $\begingroup$ QM predicts different behavior even at macroscopic scales. See the Ultraviolet Catastrophe. True, statistical approximations could be used instead of raw QM, but the statistical approximation relies on quantization of photons... $\endgroup$ – Yakk Nov 3 '16 at 18:32 • 6 $\begingroup$ Interference has been shown for molecules made of up to 430 atoms, so quite obviously quantum effects don't stop at the atomic scale. $\endgroup$ – celtschk Nov 3 '16 at 18:57 • 4 $\begingroup$ This answer is incorrect. Just because using quantum mechanics on atoms is more correct than Newtonian doesn't make it the limit. Newtonian mechanics start not working on a larger scale, as @celtschk's example shows. Your "source" Isn't a source at all $\endgroup$ – Zach Saucier Nov 3 '16 at 19:36 In the world of processors 5nm was assumed as smallest size before quantum tunneling starts to be a problem. If you shrink your wasp 1000 times it will become 200nm long, since its legs are much smaller they will probably be affected by tunneling. • 26 $\begingroup$ You are misunderstanding this effect. The quantum tunneling becomes more prominent at 5nm because the gate oxide must shrink (but not because the linewidth is 5nm). As this reduces quantum effects will have significantly more effects (on the gate oxide). But this is not what the 5nm refers to (minimum printable linewidth, or minimum drain->source difference). 
$\endgroup$ – jbord39 Nov 3 '16 at 16:03 • 10 $\begingroup$ Not to mention that quantum tunnelling is required for semiconductors to work. The charges simply don't have enough energy to cross the potential boundary - they need to tunnel through. The problem you're talking about is related to unwanted quantum tunnelling - the charges tunnelling through places we don't want them to tunnel through. $\endgroup$ – Luaan Nov 4 '16 at 12:58 My day job is (currently) designing the software/firmware/electronics for nanopositioning systems. With our current best kit, we can reliably and repeatably move something to 70pm accuracy over a 15um range. This is a classical-mechanics chunk of metalwork moving. At that range we have significant challenges with material stiffness and other interesting mechanical effects, but the physics is still very much in the classical domain. So the basic chemistry of the wasp's body isn't something it needs to worry about just yet. Of course quantum tunnelling could be an issue for the wasp's nervous system. Since that relies on electrical signals, it'll have the same issues as shrinking a processor die. • 3 $\begingroup$ My day job is (currently) designing the software/firmware/electronics for nanopositioning systems. Wow... I mean wow. :jaw drops: (Sorry for the comment spam.) $\endgroup$ – mg30rg Nov 4 '16 at 9:33 • 1 $\begingroup$ @mg30rg Someone has to do it :) $\endgroup$ – Roman Nov 4 '16 at 11:50 • 4 $\begingroup$ Of course, the semiconductors used in those electronics only work thanks to quantum electrodynamics, but that's kind of begging the question - classical physics is just an approximation, a model of the underlying reality. Quantum physics is a more accurate model of the underlying reality (and possibly actual reality - but how could we tell? :)). Things don't "start" or "stop" behaving classically - it's just that under different conditions, classical physics can be a better or worse approximation of reality as far as we care. Quantumness doesn't disappear when things get big. $\endgroup$ – Luaan Nov 4 '16 at 12:56 • 3 $\begingroup$ Just to be clear it's not a typo, you can accurately position things to roughly the covalent bonding diameter of a hydrogen atom? $\endgroup$ – BenRW Nov 5 '16 at 21:20 • 2 $\begingroup$ @BenRW With a best specimen of our top-line kit we can measure and position to around 70 picometres resolution. We aim for around 100pm, and we'd reject it if it's worse than about 150pm. Yes, this is insane stuff! There are caveats to this, of course. Temperature and pressure will affect the system if they're not insanely tightly controlled. Also hysteresis is a problem for run-to-run variation - you can't do that kind of fine movement in both directions. $\endgroup$ – Graham Nov 7 '16 at 11:30 Quite large animals are "affected" by quantum mechanics, because even large animals consist of small parts and many mechanisms at the smallest scales of animal bodies rely on quantum mechanics. For example: the reason that geckos' feet stick to glass is because of quantum mechanics (Van der Waals forces to be precise: see here). For other examples see this Wikipedia article about quantum biology. Your question is rather vague, in that you don't specify what you mean by "affected". Quantum mechanics can affect everything at the molecular level. By that logic, even blue whales are affected by quantum mechanics. For example: Vision relies on quantized energy in order to convert light signals to an action potential in a process called phototransduction. 
In phototransduction, a photon interacts with a chromophore in a light receptor. The chromophore absorbs the photon and undergoes photoisomerization. This change in structure induces a change in the structure of the photo receptor and resulting signal transduction pathways lead to a visual signal. However, the photoisomerization reaction occurs at a rapid rate, <200 fs, with high yield. Models suggest the use of quantum effects in shaping the ground state and excited state potentials in order to achieve this efficiency. Other examples on that Wikipedia page include: • Studies show that long distance electron transfers between redox centers through quantum tunneling plays important roles in enzymatic activity of photosynthesis and cellular respiration. • Magnetoreception refers to the ability of animals to navigate using the magnetic field of the earth. A possible explanation for magnetoreception is the radical pair mechanism. • Other examples of quantum phenomena in biological systems include olfaction, the conversion of chemical energy into motion, DNA mutation and brownian motors in many cellular processes. Regarding DNA mutation: • $\begingroup$ Well that's a 200: success answer! +1! $\endgroup$ – RudolfJelin Nov 4 '16 at 19:05 The world we know, macroscopically, would not be without quantum mechanics. Even solid matter wouldn't stay in cohesion without it. The sun wouldn't shine, chemical reactions wouldn't exist etc. You might say: "yeah, but these are things we are used to. They make sense." Exactly. That's the point. We see these things all the time, so they don't sound "quantum", but they are. Quantum mechanics are everywhere, and if some people say they appear only at some microscopic size, that's only because some "unusual" stuff happens then. Of course it is unusual! We are not that small to see them with our own eyes. So the answer to the question: "how small should an animal be to show unusual quantum behaviour" would be:. Smaller than you can see (even with a microscope), because that's the definition of "unusual". It turns out to be of the order of hundreds of atoms. Note that some systems, prepared in "coherent states" can exhibit similar properties because all atoms "beat" at the same rate. Their contributions add up to macroscopic scale. Now, interesting studies suggest the quantum randomness of the world, one of the most amazing things in quantum mechanics, may be the cause of usual randomness (like flipping a coin). This is a big deal in my opinion: • 1 $\begingroup$ Best answer so far IMO. Of that randomness article I think not much though – you don't need to resort to quantum effects to explain e.g. fluctuations in gases, such fluctuations can even be observed in purely classical CFD simulations. Basically, any sufficiently chaotic system looks random if you don't have access to the full parameter space, even if the dynamics are actually completely deterministic. In fact this is even the case for quantum mechanics – the Schrödinger equation is perfectly deterministic and only if you introduce decoherence/measurements does it “cause randomness”. $\endgroup$ – leftaroundabout Nov 3 '16 at 20:53 • $\begingroup$ I don't have a definite opinion about that article. There are effects which are not fully explained through classical thermodynamics, like irreversibility, which might rely fundamentally on quantum randomness. But that's not very clear to me how much we don't know here, and I find the article interesting at least. 
Anyway, this is not really the point of the answer, but just a note. $\endgroup$ – fffred Nov 4 '16 at 14:16 Although it's correct to answer "QM happens at macroscopic scales and it affects humans", I'll try to answer in the spirit of the question. What is a "quantum mechanical effect"? I'll pick one: matter diffraction. How big an animal can be and still diffract through a grating? Larger particles (including composite particles) have smaller de Broglie wavelengths, and diffraction is most evident when the gap is about the same size as the wavelength. So to get the largest admissible animal, use the smallest admissible diffraction gate. The de Broglie wavelength depends on momentum $mv=\frac{h}{\lambda}$ and as a coarse simplification, since we're dealing with small animals, pick $v=1~\mathrm{ms^{-1}}$ so $m=\frac{h}{\lambda}$. Model the "particle" animal as a uniform sphere of "typical" density of $\rho\approx 10~\mathrm{ kg\cdot m^{-3}}$ so $m=\frac{4}{3}\rho\pi r^3\approx 4\rho r^3$ and as we said above, we are looking for $r=\lambda$ so $\frac{h}{r}\approx 4 \rho r^3$ and so... $r \approx 2\times 10^{-9}~\mathrm m$ Animals significantly bigger than this can't produce diffraction patterns at normal animal speeds. This would be a difficult experiment to perform, since animals are not uniform spheres. You would get chaotic effects when legs broke off and such like, adding somewhat a lot of noise to the results. You might be able to get larger animals to diffract successfully if they were moving on a tightly curved section of spacetime (they take up less space if they're stretched into the time direction somewhat) e.g. if their trajectory was the orbit of a small black hole, although I don't know enough GR to analyse this and relativistic velocities would shrink the limiting wavelength/radius further. • $\begingroup$ I think animals the size of buckyball would move at speeds similar to velocities of particles in a gas (whether they want to or not), not “typical animal speeds”. $\endgroup$ – JDługosz Nov 5 '16 at 6:09 • $\begingroup$ Fair enough, but that factor doesn't change it much as an estimate when you take fourth-roots: $v=500ms^{-1}$ gives $4\rho r^3=\frac{h}{500\lambda}$, or $r^4\approx\frac{h}{100}$, $r\approx10^{-9}m$ $\endgroup$ – spraff Nov 6 '16 at 13:14 When I read "If by some miracle it could be still shrinked [sic]..." in the question, I wonder whether you really want to try to conform to "known" physics, especially if you're telling a story. But that said, I haven't noticed the phrase "thermodynamic limit" being used in any answers yet. The reason human-sized object don't suddenly teleport is because along these lines: (1) There's a probability of any given particle "suddenly showing up" anywhere in the known universe, as far as Shrodinger's equation can tell you. (2) When you put multiple particles together, they behave as a "conjunctive event," in probability-speak. The short version is this: imagine you flip a coin. There's a 50% of either side landing, so neither outcome is a surprise. Now suppose you flip 6*10^23 coins and try to predict the outcome. (ex. "All heads!") Your probability of being right is the product of the probabilities of all the events that would make it up. That probability is minuscule enough that the entire lifespan of the universe (by current estimations) could easily elapse before you successfully guessed the outcome of such an event. To get "teleportation," you'd need to probabilistic analogue of guessing such an outcome correctly. 
In other words, we don't see such things happen because the chemistry of the objects that we encounter in daily life (which is a consequence of quantum mechanics) makes is really unlikely for such things to happen during a time-span short enough for a human to observe it. (You'll note that this doesn't rule out such things...it's just says "don't spend your life waiting for it...you'll be bored.") As an example of a a "thermodynamic limit as a conjunctive even of probabilistic events occurring as determined by quantum mechanics," imagine you have 6*10^23 particles, each with a 1% chance of showing up 1 meter away from where you last observed them, then as a "clump" they'll have a 0.01^(6*10^23) probability of appearing there. I don't think your calculator will be able to tell you what that number is....it's way, way too small of a probability. This is the "first semester of quantum mechanics" answer, by the way. The afterword of your quantum mechanics textbook may then say, "So...entanglement plays a role in how this actually works, but that's beyond the scope of this book, and not entirely understood yet anyhow." (I guess my point is, don't expect to get the complete answer to this question without devoting your life to physics.) By the way, if the number 6*10^23 doesn't ring any bells, check out Avogadro's number. (You'll also then have to consider how many multiples of Avogadro's number of molecules make up your lifeform in question.) Let's point one more thing: A standard example in an introductory class on quantum mechanics (called "modern physics" when I took it) is that of radioactivity (in particular, that of alpha particles, I believe it was), and how quantum mechanics gives an explanation for why it can happen at all. (The answer is tunneling, although let's give it the definition of "a particle having a non-zero probability of suddenly existing away from the chemistry of its usual material, so it then continues its existence without being 'held in place' by all the other particles around it.") But radioactivity doesn't happen because your sample of uranium (for example) is small; it's just the chemistry of the material is such that the probability of a tunneling even is high enough that you can observe it over a time-frame that that people would consider pretty short. Switching gears, let's get back to your story (or whatever prompted you to ask about this). Miniaturization, as it sounds like you're describing it, isn't really a real-world thing. The objects we encounter in a day-to-day lives are defined by their chemistry, and chemistry can't simply be 'shrunk.' (As an analogy: Build your dream house with Legos, then say "now I want to shrink this down to doll-house sized." To make that happen, you'd need the individual Legos to shrink. But the protons, neutrons and electrons that make up chemistry don't shrink. In fact, they don't vary in any way. Every electron is flawlessly identical to every other electron in the universe. (A physicists, I think John Wheeler, once made a probably-tongue-in-cheek quip about there only being one electron in the universe, doing the job of every electron we ever think exists. If you've every done object-oriented programming, you may find this reminiscent of defining an "electron" class, then instantiation it once every time for each electron that appears to exist in the universe. From the perspective, you might see why some content that the universe's construction seems oddly akin to a computer program.) 
So, to actually miniaturize something, you construct something that behaves identically to the original object, but with fewer particles. Whether you can actually do this with a biological entity is probably not a question for the physicists anymore, unless they're physicists who do biological modeling. (As an aside, universities that have a medical school may have some biology-oriented classes in the physics department, probably oriented toward pre-med students that do their undergrad degree in physics. You may also find mathematicians doing things like neurological modeling at such universities.) If it's sci-fi you're thinking about, you may want to look towards a couple possibilities: (1) The 'miniaturization' process that you're describing could be more like "nanomachine recreations of biological organisms," which again would means that someone builds a device to try to duplicate the behavior of a given organism. Then you just have to find out a bit more about nanomachines, if you want to try to be accurate within its constraints. (2) Look to the poorly-understand parts of physics for places where you can get creative. Regarding this...keep in mind that someone with a background in a a little chemistry and no physics may only think of three fundamental particles: protons, neutrons and electrons. (I suppose lots of people know about photons, but they overlook the fact that electrons are the "force mediators" for electrons.) That leads us to the place to dig deeper: If you crack open a particle physics textbook (or flip to the 'particle physics' chapter of a modern physics textbook), you'll see that there's a bunch more of these fundamental particles, some of which have been observed, some of which haven't. The "as of yet not understood" is a fertile place to find things you can make some 'informed speculation' for use in science fiction. (And if you're wondering about why the rest of the particles even exist....my not-particularly-informed response is "stars, stuff that comes from stars, 'mediation of physical effects' and then whatever machinery of the universe that we understood well enough to even suppose that it exists, but not well enough to explain it with any clarity.") Granted, I'm not suggesting that you try to make heads or tails of a particle physics textbook without having studied all the pre-requisites (eg. the usual year of calculus-based physics, intro to modern physics, intro to thermodynamics, undergrad Electricity and Magnetism, undergrad Quantum Mechanics; the in the preface to Griffith's Intro the Elementary Particles he suggests that 'most students in such a class' will have taken everything in that last, but he suggests that the last two don't need to be considered a strict prerequisite.) But unless you do, you'll probably have to fall back on 'informed speculation' ....but, of course, the less you know, the less informed your speculation will inevitably be. Final note: If story-telling is your aim, don't forget that the primary device for not getting bogged down in "accuracy" is to simply not bring it up. (How much you can get away with that will depend on the story you're trying to tell, of course.) • $\begingroup$ Sorry in advance for what I'm sure are copious typos. That answer wound up pretty long. (^^; $\endgroup$ – steve_0804 Nov 4 '16 at 18:00 • $\begingroup$ As it stands, it seems to me this is the best answer. 
$\endgroup$ – RudolfJelin Nov 4 '16 at 19:06 Humans are affected by quantum mechanics: some human eyes are able to detect a single quantum of light (a photon). • $\begingroup$ Some human eyes? All human eyes can detect singular photons, since the wavelength of a photon is exactly what we have evolved to process/interpret. $\endgroup$ – Harry David Nov 4 '16 at 1:27 • $\begingroup$ @HarryDavid not native speaker here, lets say it other way - rods are capable to detect single photon at frequencies of visible light. from wiki: A photon is an elementary particle, the quantum of all forms of electromagnetic radiation including light. $\endgroup$ – MolbOrg Nov 4 '16 at 4:21 • $\begingroup$ @MolbOrg That would be "quantum" as "smallest unit", not necessarily as in "quantum physics". $\endgroup$ – a CVn Nov 5 '16 at 13:40 • $\begingroup$ @MichaelKjörling can't claim I understand you sentence in full, is that about quantum mechanics in answer - we(a human body) working because that quantum mechanics exists, and one of the reasons why it (CM) is interesting. $\endgroup$ – MolbOrg Nov 5 '16 at 14:19 • $\begingroup$ @MolbOrg A quantum is a smallest unit of something. Quantum physics is physics as it applies to those smallest units. When Wikipedia states that "a photon is ... the quantum of EM radiation", the claim made is that a photon is the smallest, non-divisible portion of EM radiation. See for example merriam-webster.com/dictionary/quantum. $\endgroup$ – a CVn Nov 5 '16 at 14:31 Proteins are the smallest machines of the cell that can do anything interesting (for some definition of interesting, but I work with proteins and I am biased). They are long chains of hundreds of aminoacids (thousands of atoms) do things like pumping water, nutrients, and waste in and out of the cells, guide chemical reactions, send signals, etc. One of the tools to study them are molecular dynamics simulations. They pretty much use classical mechanics (replacing the atoms with a fancy version of soft balls) with minor numerical tweaks to reproduce quantum behaviour to a very accurate degree. The tweaks are mostly to avoid having to solve the full electrostatic problem of where are the electrons at each time step; but nothing of that would seem strange to a microscopical individual. So, to get generally quantum-weird behaviour you have to go smaller than the basic functional unit of the life as we know it. The actual question is : how big can a system be and still be quantic ? Some theories say that if enough particles are entangled, then the wave function may spontaneously collapse, which means that for example it is not possible to entangle Shrödinger's cat to a decaying atom. The limit for this would be also the size of this animal. Your Answer
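As a numerical footnote to the matter-diffraction estimate given in one of the answers above, the back-of-envelope relation it uses (de Broglie wavelength set equal to the radius of a uniform sphere) can be evaluated directly. This is only a sketch of that answer's own arithmetic, reusing its assumed density of 10 kg/m³ and speed of 1 m/s; neither value is a measured property of any real animal.

```python
# Rough check of the "largest diffracting animal" estimate from the
# matter-diffraction answer above: set the de Broglie wavelength equal to the
# radius of a uniform sphere, h / (m v) = r with m = (4/3) * pi * rho * r^3.
import math

h = 6.626e-34   # Planck constant, J s
rho = 10.0      # density assumed in that answer, kg/m^3
v = 1.0         # "animal speed" assumed in that answer, m/s

# h / r = (4/3) * pi * rho * r^3 * v  =>  r = (3 h / (4 pi rho v)) ** (1/4)
r = (3.0 * h / (4.0 * math.pi * rho * v)) ** 0.25
print(f"limiting radius ~ {r:.1e} m")   # ~ 2e-9 m, i.e. a couple of nanometres
```

It reproduces the quoted figure of roughly 2 nanometres under those assumptions.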
Reactivity (chemistry)

From Wikipedia, the free encyclopedia

Reactivity in chemistry refers to:

• the chemical reactions of a single substance,
• the chemical reactions of two or more substances that interact with each other,
• the systematic study of sets of reactions of these two kinds,
• methodology that applies to the study of reactivity of chemicals of all kinds,
• experimental methods that are used to observe these processes,
• theories to predict and to account for these processes.

The chemical reactivity of a single substance (reactant) covers its behaviour in which it:

• decomposes,
• forms new substances by addition of atoms from another reactant or reactants,
• interacts with two or more other reactants to form two or more products.

The chemical reactivity of a substance can refer to the variety of circumstances (conditions that include temperature, pressure, presence of catalysts) in which it reacts, in combination with the:

• variety of substances with which it reacts,
• equilibrium point of the reaction (i.e., the extent to which all of it reacts),
• rate of the reaction.

The term reactivity is related to the concepts of chemical stability and chemical compatibility.

An alternative point of view

Reactivity is a somewhat vague concept in chemistry. It appears to embody both thermodynamic factors and kinetic factors, i.e., whether or not a substance reacts and how fast it reacts. Both factors are actually distinct, and both commonly depend on temperature. For example, it is commonly asserted that the reactivity of group one metals (Na, K, etc.) increases down the group in the periodic table, or that hydrogen's reactivity is evidenced by its reaction with oxygen. In fact, the rate of reaction of alkali metals (as evidenced by their reaction with water, for example) is a function not only of position within the group but also of particle size. Hydrogen does not react with oxygen, even though the equilibrium constant is very large, unless a flame initiates the radical reaction, which leads to an explosion. Restriction of the term to refer to reaction rates leads to a more consistent view. Reactivity then refers to the rate at which a chemical substance tends to undergo a chemical reaction in time.

In pure compounds, reactivity is regulated by the physical properties of the sample. For instance, grinding a sample to a higher specific surface area increases its reactivity. In impure compounds, the reactivity is also affected by the inclusion of contaminants. In crystalline compounds, the crystalline form can also affect reactivity. However, in all cases, reactivity is primarily due to the sub-atomic properties of the compound.

Although it is commonplace to make statements that substance 'X is reactive', all substances react with some reagents and not others. For example, in making the statement that 'sodium metal is reactive', we are alluding to the fact that sodium reacts with many common reagents (including pure oxygen, chlorine, hydrochloric acid, water) and/or that it reacts rapidly with such materials either at room temperature or in a bunsen flame. 'Stability' should not be confused with reactivity. For example, an isolated oxygen molecule in an electronically excited state spontaneously emits light after a statistically defined period. The half-life of such a species is another manifestation of its stability, but its reactivity can only be ascertained via its reactions with other species.
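One remark above, that grinding a sample to a higher specific surface area increases its reactivity, can be given a simple geometric illustration: for idealised spherical particles the surface area per unit mass scales as 1/r, so halving the particle radius doubles the surface available for reaction. The sketch below shows only this scaling; the density and particle radii are arbitrary example values, and no rate law is implied.

```python
# Specific surface area (surface area per unit mass) of idealised spherical
# particles: A/m = 4*pi*r^2 / ((4/3)*pi*r^3 * rho) = 3 / (rho * r).
rho = 2200.0  # example density in kg/m^3 (arbitrary, roughly mineral-like)

for r in (1e-3, 1e-4, 1e-5, 1e-6):        # particle radius in metres
    ssa = 3.0 / (rho * r)                 # specific surface area, m^2/kg
    print(f"r = {r:.0e} m  ->  specific surface area = {ssa:.1f} m^2/kg")
```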
Causes of reactivity

The second meaning of 'reactivity', that of whether or not a substance reacts, can be rationalised at the atomic and molecular level using older and simpler valence bond theory and also atomic and molecular orbital theory. Thermodynamically, a chemical reaction occurs because the products (taken as a group) are at a lower free energy than the reactants; the lower energy state is referred to as the 'more stable state'. Quantum chemistry provides the most in-depth and exact understanding of the reason this occurs. Generally, electrons exist in orbitals that are the result of solving the Schrödinger equation for specific situations.

All things (values of the n and ml quantum numbers) being equal, the order of stability of electrons in a system, from least to most stable, is: unpaired with no other electrons in similar orbitals; unpaired with all degenerate orbitals half filled; and, most stable of all, a filled set of orbitals. To achieve one of these arrangements, an atom reacts with another atom to stabilize both. For example, a lone hydrogen atom has a single electron in its 1s orbital. It becomes significantly more stable (by as much as 100 kilocalories per mole, or 420 kilojoules per mole) when it reacts to form H₂.

It is for this same reason that carbon almost always forms four bonds. Its ground-state valence configuration is 2s² 2p², half filled. However, the activation energy to go from half-filled to fully filled p orbitals is so small that it is negligible, and as such carbon forms the four bonds almost instantaneously. Meanwhile the process releases a significant amount of energy (it is exothermic). This configuration of four equivalent bonds is called sp³ hybridization.

The above three paragraphs rationalise, albeit very generally, the reactions of some common species, particularly atoms, but chemists have so far been unable to jump from such general considerations to quantitative models of reactivity.

Chemical kinetics: reaction rate as reactivity

The rate of any given reaction,

Reactants → Products,

is governed by the rate law

$\text{rate} = k[\text{A}]$,

where the rate is the change in the molar concentration per second in the rate-determining step of the reaction (the slowest step), $[\text{A}]$ is the product of the molar concentrations of all the reactants raised to the correct order, known as the reaction order, and $k$ is the reaction constant, which is constant for one given set of circumstances (generally temperature and pressure) and independent of concentration. The greater the reactivity of a compound, the higher the value of $k$ and the higher the rate. For instance, for

A + B → C + D,

$\text{rate} = k[\text{A}]^n[\text{B}]^m$,

where $n$ is the reaction order of A, $m$ is the reaction order of B, $n+m$ is the reaction order of the full reaction, and $k$ is the reaction constant.
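To make the rate-law bookkeeping above concrete, here is a minimal numeric sketch that evaluates rate = k[A]^n[B]^m for an assumed reaction that is first order in each reactant; the rate constant and concentrations are invented example values, not data for any particular reaction.

```python
# Evaluate the empirical rate law  rate = k * [A]**n * [B]**m  for A + B -> C + D.
def reaction_rate(k, conc_a, conc_b, n=1, m=1):
    """Rate in mol L^-1 s^-1 given a rate constant and molar concentrations."""
    return k * conc_a**n * conc_b**m

k = 0.05  # example rate constant, L mol^-1 s^-1 (second order overall)
for a, b in [(1.0, 1.0), (0.5, 1.0), (0.5, 0.5)]:
    print(f"[A]={a} M, [B]={b} M -> rate = {reaction_rate(k, a, b):.4f} M/s")
```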
Take the 2-minute tour × If we have a one dimensional system where the potential $$V~=~\begin{cases}\infty & |x|\geq d, \\ a\delta(x) &|x|<d, \end{cases}$$ where $a,d >0$ are positive constants, what then is the corresponding classical case -- the approximate classical case when the quantum number is large/energy is high? share|improve this question What is $V$ when $x \in (-d,0) \cup (0,d)$? –  Siyuan Ren Apr 27 '12 at 9:09 Did you mean "$\infty$ when $|x| > d$"? Also did you mean "$a$ when $x = 0$" i.e. $a\delta(x)$. Finally is $a$ of the order of classical energies or much less? If the latter, the system just looks like a square well with no barrier at classical energies. –  John Rennie Apr 27 '12 at 9:41 Dear @Sys, it's a virtue and necessity, not a bug, that the delta-function is infinite at $x=0$. If it were finite at a single point (i.e. interval of length zero), like in your example, it would have no impact on the particle because zero times finite is zero. So your potential as you wrote it is physically identical to $V=\infty$ for $|x|<d$ and $0$ otherwise which is just a well with the standing wave energy eigenstates. The finite modification of $V$ at one point, by $a$, plays no role at all. A potential with $a\delta(x)$ in it would be another problem. –  Luboš Motl Apr 27 '12 at 10:44 @LubošMotl: Thanks, actually the delta function version instead of V=a at x=0 is the right one. What is the classical limit of that? –  Sys Apr 27 '12 at 11:15 @JohnRennie: I think your comment suggestion was right, that there is a delta function at x=0. –  Sys Apr 27 '12 at 11:17 2 Answers 2 up vote 1 down vote accepted Here we derive the bound state spectrum from scratch. Not surprisingly, the conclusion is that the Dirac delta potential doesn't matter in the semi-classical continuum limit, in accordance with Spot's answer. The time-independent Schrödinger equation reads for positive $E>0$, $$ -\frac{\hbar^2}{2m}\psi^{\prime\prime}(x) ~=~ (E-V(x))\psi(x), \qquad V(x)~:=~V_0\delta(x)+\infty \theta(|x|-d), \qquad V_0~>~0, $$ with the convention that $0\cdot \infty=0$. Define $$v(x) ~:=~ \frac{2mV(x)}{\hbar^2}, \qquad e~:=~\frac{2mE}{\hbar^2}~>~0 \qquad k~:=~\sqrt{e}~>~0\qquad v_0 ~:=~ \frac{2mV_0}{\hbar^2}. $$ $$ \psi^{\prime\prime}(x) ~=~ (v(x)-e)\psi(x). $$ We know that the wave function $\psi$ is continuous with boundary conditions $$\psi(x)~=0 \qquad {\rm for}\qquad |x|\geq d.$$ Also the derivative $\psi^{\prime}$ is continuous for $0<|x|<d$, and possibly has a kink at $x=0$, $${\lim}_{\epsilon\to 0^+}[\psi^{\prime}(x)]^{x=\epsilon}_{x=-\epsilon} ~=~v_0\psi(x=0). $$ We get $$\psi_{\pm}(x)~=~A_{\pm}\sin(k(x\mp d))\qquad {\rm for } \qquad 0 \leq \pm x \leq d.$$ 1. $\underline{\text{Case} ~\psi(x=0)=0}$. Then $$n~:=~\frac{kd}{\pi}~\in~ \mathbb{N}.$$ We get an odd wave function $$\psi_n(x)~\propto~\sin(kx).$$ In particularly, the odd wave functions do not feel the presence of the Dirac delta potential. 2. $\underline{\text{Case} ~\psi(x=0)\neq 0}$. Then continuity at $x=0$ implies that the wave function is even $A_{+}+A_{-}=0$. Phrased equivalently, $$\psi(x)~=~A\sin(k(|x|-d)).$$ The kink condition at $x=0$ becomes $$ v_0A\sin(-kd)~=~2kA \cos(kd), $$ or equivalently, $$ v_0\tan(kd)~=~-2k.$$ In the semiclassical continuum limit $$k \gg \frac{1}{d}, \qquad k \gg v_0,$$ this becomes $$\frac{kd}{\pi}+\frac{1}{2}~\in ~\mathbb{Z}, $$ i.e., in the semiclassical continuum limit the even wave functions do not feel the presence of the Dirac delta potential as well. 
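As a numerical companion to the derivation above, the even-parity condition $v_0\tan(kd)=-2k$ can be solved directly and compared with the unperturbed even levels $kd/\pi = n-\tfrac{1}{2}$, making the approach to the semiclassical limit visible. The values $d=1$ and $v_0=5$ below are arbitrary illustration choices, not part of the original question.

```python
# Solve the even-parity quantization condition v0*tan(k*d) = -2k from the
# answer above and compare with the unperturbed even levels k*d/pi = n - 1/2.
import numpy as np
from scipy.optimize import brentq

d, v0 = 1.0, 5.0   # made-up well half-width and delta strength (v0 = 2*m*V0/hbar^2)

def f(k):
    return v0 * np.tan(k * d) + 2.0 * k

for n in range(1, 7):
    lo = (n - 0.5) * np.pi / d + 1e-9   # just past the pole of tan(k*d)
    hi = n * np.pi / d - 1e-9           # just before the next zero of tan(k*d)
    k = brentq(f, lo, hi)               # f goes from -inf to +2k on this bracket
    print(f"n={n}:  k*d/pi = {k * d / np.pi:.4f}   (unperturbed: {n - 0.5})")
```

The printed values drift back toward the half-integers as $k$ grows, which is the semiclassical statement at the end of the answer.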
share|improve this answer Firstly, it's easy to start off with just the Dirac delta potential and see what that does. Wiki has a nice solution for the Delta fuction potential, and I am lifting off parts of it here. Consider a potential $V(x) = a\delta (x)$ and consider a scattering like configuration, where a plane wave $e^{ikx}$ is incident from the left. $$ \psi(x)=\begin{cases}e^{ikx}+re^{-ikx} & x<0 \\ te^{ikx} & x> 0\end{cases} $$ By matching the boundary conditions, like on the wiki page, you get $$ t = 1+r\\ (1-\alpha)t = 1-r $$ where $$ \alpha = \frac{ 2ma}{ik\hbar^2} $$ characterizes the effect of the delta potential. Solving for $r$ and $t$, $$ t = \frac{1}{1-\alpha/2}\\ r=-\frac{\alpha/2}{1-\alpha/2} $$ Now, it is easy to see that for high incident $k$, the only effect of the dirac delta potential is to write a phase discontinuity on the wavefuction. This is because, as $k$ increases, the transmission $|t|^2=1/(1+|\alpha|^2/4)$ approaches 1, but the transmitted wavefunction gets an extra phase given by $$ \text{Arg}(t) = -\tan^{-1}(|\alpha|/2) $$ Getting back to the problem at hand, for a particle in a box (without the delta function), the allowed $k$ vectors are given by forcing the wavefunction to be zero at the walls at $x=-d$ and $x=d$, which gives us the condition $$ k_n=\frac{\pi n}{2d} $$ If now, we add a delta potential, then for high values of $n$ (or $k$), all the delta function will do is introduce a phase discontinuity at the origin, and consequently what you should expect is that the boundary condition is matched not for $k_n$, but something slightly off $k_n+\delta k_n$, where $\delta k_n$ is a small correction due to the delta function potential. For high values of $n$, this correction would drop, as the phase discontinuity decreases, and for classical like states (very large $n$) you expect to recover 1D box states, as mentioned by John Rennie. share|improve this answer Thank you, Spot! –  Sys Apr 27 '12 at 17:59 Your Answer
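A quick numerical check of the scattering formulas in the second answer: the sketch below evaluates $|t|^2$ and $\mathrm{Arg}(t)$ for increasing $k$ and shows the transmission approaching 1 while the extra phase shrinks, which is the stated high-$k$ behaviour. It works in units where $\hbar=m=1$, and the barrier strength $a$ is an arbitrary example value.

```python
# Transmission amplitude through a delta barrier, t = 1/(1 - alpha/2) with
# alpha = 2*m*a/(i*k*hbar^2), following the formulas in the answer above.
# Units: hbar = m = 1; the barrier strength a is an arbitrary example value.
import numpy as np

a = 2.0                                   # delta-barrier strength (example value)
for k in (0.5, 1.0, 5.0, 20.0, 100.0):
    alpha = 2.0 * a / (1j * k)            # dimensionless in these units
    t = 1.0 / (1.0 - alpha / 2.0)
    print(f"k={k:6.1f}  |t|^2 = {abs(t)**2:.4f}   Arg(t) = {np.angle(t):+.4f} rad")
```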
Take the 2-minute tour × In mathematical physics and other textbooks we find the Legendre polynomials are solutions of Legendre's differential equations. But I didn't understand where we encounter Legendre's differential equations (physical example). What is the basic physical concept behind the Legendre polynomials? How important are they in physics? Please explain simply and give a physical example. share|improve this question en.wikipedia.org/wiki/… –  John Rennie Jan 24 '13 at 13:48 These polynomials are not really physics, they are simply a useful mathematical tool that appear in the solutions to many physical problems with spherical symmetries. I think the question is fine because they do come up a lot. –  dmckee Jan 24 '13 at 14:55 There is a very definite physical notion behind Legendre polynomials: a rank-$l$ Legendre polynomial corresponds to a spin-$l$ representation of the orthogonal group $SO(3)$. These are the usual traceless, symmetric tensors of rank $l$ we use in field theory all the time. If in QM you scatter two spinless particles, you measure the angular distribution and find that it is described by $P_2(\cos \theta)$ (for example), then you can be sure that the particles are exchanging a spin-2 resonance. –  Vibert Jan 25 '13 at 0:01 @Vibert That is a physical notion attached to the polynomials, but the math exists independently of the physics. The distinction here is that physicists must learn the math but mathematicians can know the polynomials while being ignorant of the physics. –  dmckee Jan 25 '13 at 5:02 4 Answers 4 up vote 6 down vote accepted The Legendre polynomials occur whenever you solve a differential equation containing the Laplace operator in spherical coordinates with a separation ansatz (there is extensive literature on all of those keywords on the internet). Since the Laplace operator appears in many important equations (wave equation, Schrödinger equation, electrostatics, heat conductance), the Legendre polynomials are used all over physics. There is no (inarguable) physical concept behind the Legendre polynomials, they are just mathematical objects which form a complete basis between -1 and 1 (as do the Chebyshev polynomials). share|improve this answer I disagree with the last statement (see my comment above). Legendre polynomials correspond to $SO(3)$ (tensor) representations that are well-known, even to undergraduates. Just look up the partial wave expansion in a QM textbook. The Chebyshev polynomials play the same role, but in two dimensions. –  Vibert Jan 25 '13 at 0:03 If you do partial wave expansion, you do that because the Legendre polynomials are eigenfunctions of the $\vartheta$ part of the Laplace operator. The connection to the $SO(3)$ representation is interesting though. –  Rafael Reiter Jan 25 '13 at 0:06 Yes, that is exactly what I mean: Legendre polynomials are eigenfunctions of the Laplacian on $S^2$, and indeed they correspond to representations of $SO(3)$ - that's not an accident. You are free to think that this fact isn't important, but it's the 3D equivalent of classifying the representations of the Lorentz group - you do agree that it's meaningful to talk about scalars, spinors, currents, tensors etc. in particle physics, right? –  Vibert Jan 25 '13 at 0:12 I know too little about particle physics to give a qualified answer. But I think we disagree on the more fundamental question of how "physically" you interpret a mathematical object, which is more of a philosophical question. 
–  Rafael Reiter Jan 25 '13 at 0:19 Here's my 30 seconds hand waving argument for "Why is it that we always encounter new special functions $f_n$ with orthogonality relations??" $$\int f^*_n\cdot f_m=\delta_{mn}$$ Super broadly speaking, in physics we dealing with the dynamics of certain degrees of freedom. These often employ smooth symmetries, that is we're dealing with Lie groups, which are also manifold in themselfs. Take e.g. the Laplacian $\Delta=\nabla\cdot\nabla$ and the associated symmetries $R$ acting as $\nabla\to R\nabla$ in such a way that that $R\nabla\cdot R\nabla=\nabla\cdot\nabla$. Now in case one is dealing with a "rotation" in the broadest sense of the word, one often has a compact manifold, where we can savagely define things like integration on the group, and these symmetry groups also permit pretty unitary matrix representations. That is there are necessarily matrices $U$ with and well, the matrix coefficients $U_{kn}$ must be some complex functions. To put it short, special functions are representation theory magic. @zonk: Yes, it's the default theory. But of course, you only see the direct relation to special functions if you take the abstract Lie group theory and actually sit down and write down the matrices in some base. E.g. for the rotation group matrices $D$, you find $$ \begin{array}{lcl} D^j_{m'm}(\alpha,\beta,\gamma)&=& e^{-im'\alpha } [(j+m')!(j-m')!(j+m)!(j-m)!]^{1/2} \sum\limits_s \left[\frac{(-1)^{m'-m+s}}{(j+m-s)!s!(m'-m+s)!(j-m'-s)!} \right.\\ &&\left. \cdot \left(\cos\frac{\beta}{2}\right)^{2j+m-m'-2s}\left(\sin\frac{\beta}{2}\right)^{m'-m+2s} \right] e^{-i m\gamma} \end{array}.$$ Very sweet, right? Now here you have the Legendre Polynomials $P_\ell^m$ $$ D^{\ell}_{m 0}(\alpha,\beta,0) = \sqrt{\frac{4\pi}{2\ell+1}} Y_{\ell}^{m*} (\beta, \alpha ) = \sqrt{\frac{(\ell-m)!}{(\ell+m)!}} \, P_\ell^m ( \cos{\beta} ) \, e^{-i m \alpha } $$ so that $$ \int_0^{2\pi} d\alpha \int_0^\pi \sin \beta d\beta \int_0^{2\pi} d\gamma \,\, D^{j'}_{m'k'}(\alpha,\beta,\gamma)^\ast D^j_{mk}(\alpha,\beta,\gamma) = \frac{8\pi^2}{2j+1} \delta_{m'm}\delta_{k'k}\delta_{j'j}.$$ share|improve this answer That is a very interesting view/line of argument. Do you find this in standard Lie groups literature? –  Rafael Reiter Jan 24 '13 at 15:47 ...and how do the $f$'s relate to the $U$'s? –  Rafael Reiter Jan 24 '13 at 16:18 If you want to know why computational physicians like Legendre Polynomials, the answer is rather simple. As the other people has already pointed out, the Legendre Polynomials are orthogonal, they can be a very good basis for many applications. For example, if one tries to construct a function which fits the experiment or simulation data within the estimate error-bar and interpolates between the limited number of available data points, the Legendre Polynomials can be a very useful, so does the Chebyshev polynomials. The function constructed from the the Legendre Polynomials does not suffer the Runge's problem. share|improve this answer Rather than thinking about the abstract orthonormal basis of the Legendre polynomials $P_l(x)$, I find it easier to visualize these polynomials by looking at $P_l(\cos\theta)$. These are simply the Spherical Harmonics with azimuthal symmetry: $$ Y_l^{m=0} = n_l P_l(\cos\theta)$$ where $n_l$ is a normalization factor that only depends on $l$. In this beautiful image of the spherical harmonics on Wikipedia by Inigo.quilez, the $P_l(\cos\theta)$ correspond to the center column of the image ($m=0$). Note the symmetry about the $z$-axis. 
[Figure: plot of the spherical harmonics, by Inigo.quilez (Wikipedia)]

These come up very often in physics, for example, while solving the Laplace equation ($\nabla^2\Phi = 0$) with azimuthally symmetric boundary conditions.
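Since the accepted answer above rests on the Legendre polynomials forming an orthogonal basis on $[-1,1]$, here is a small numerical verification of the orthogonality relation $\int_{-1}^{1}P_n(x)P_m(x)\,dx = \frac{2}{2n+1}\delta_{nm}$, done with SciPy's built-in Legendre evaluation; nothing in it comes from the answers themselves.

```python
# Numerical check of the orthogonality of the Legendre polynomials on [-1, 1]:
# integral of P_n(x) * P_m(x) dx = 2/(2n+1) if n == m, and 0 otherwise.
import numpy as np
from scipy.special import eval_legendre

x, w = np.polynomial.legendre.leggauss(50)   # 50-point Gauss-Legendre rule
for n in range(4):
    for m in range(4):
        integral = np.sum(w * eval_legendre(n, x) * eval_legendre(m, x))
        expected = 2.0 / (2 * n + 1) if n == m else 0.0
        assert abs(integral - expected) < 1e-12
print("orthogonality verified for n, m = 0..3")
```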
Take the 2-minute tour × Consider Schrödinger's time-independent equation $$ -\frac{\hbar^2}{2m}\nabla^2\psi+V\psi=E\psi. $$ In typical examples, the potential $V(x)$ has discontinuities, called potential jumps. Outside these discontinuities of the potential, the wave function is required to be twice differentiable in order to solve Schrödinger's equation. In order to control what happens at the discontinuities of $V$ the following assumption seems to be standard (see, for instance, Keith Hannabus' An Introduction to Quantum Theory): Assumption: The wave function and its derivative are continuous at a potential jump. 1) Why is it necessary for a (physically meaningful) solution to fulfill this condition? 2) Why is it, on the other hand, okay to abandon twofold differentiability? Edit: One thing that just became clear to me is that the above assumption garanties for a well-defined probability/particle current. share|improve this question You may want to look at en.wikipedia.org/wiki/Weak_solution –  Willie Wong Jul 26 '10 at 13:11 Thanks Willie. This provides some explanation concerning my second question. –  Rasmus Bentmann Jul 26 '10 at 13:40 @Downvoter: Why did you downvote? –  Rasmus Bentmann Aug 20 '10 at 12:37 3 Answers 3 up vote 8 down vote accepted To answer your first question: Actually the assumption is not that the wave function and its derivative are continuous. That follows from the Schrödinger equation once you make the assumption that the probability amplitude $\langle \psi|\psi\rangle$ remains finite. That is the physical assumption. This is discussed in Chapter 1 of the first volume of Quantum mechanics by Cohen-Tannoudji, Diu and Laloe, for example. (Google books only has the second volume in English, it seems.) More generally, you may have potentials which are distributional, in which case the wave function may still be continuous, but not even once-differentiable. To answer your second question: Once you deduce that the wave function is continuous, the equation itself tells you that the wave function cannot be twice differentiable, since the second derivative is given in terms of the potential, and this is not continuous. share|improve this answer Your first argument is not clear to me - I'll take a look at Cohen-Tannoudji. –  Rasmus Bentmann Jul 26 '10 at 15:23 The idea is the following: suppose that $V$ has isolated discontinuities and let $x_0$ be the location of one such discontinuity. Replace $V$ on $[x_0-\epsilon, x_0+\epsilon]$ with another potential which is continuous and which tends to $V$ as $\epsilon\to 0$. Then you show that the wave-function which solves the Schrödinger equation for this new potential tends in the limit as $\epsilon\to0$ to the wave-function you want and that in this limit the first derivative remains continuous. This is not really proven in Cohen-Tannoudji et al. but only sketched. The details are not hard, though. –  José Figueroa-O'Farrill Jul 26 '10 at 16:56 There is a very clear physical reason why the wavefunction should be continuous: it's derivative is proportional to the momentum of the particle, so discontinuities imply that the state has an infinite-momentum component. –  Jess Riedel Apr 27 '11 at 3:47 Since you talk about 'jump' discontinuities, I guess you are interested in a one dimensional Schroedinger equation, i.e., $x\in\mathbb{R}$. In this situation a nice theory can be developed under the sole assumption that $V\in L^1(\mathbb{R})$ (and real valued of course). 
By a nice theory I mean that the operator $-d^2/dx^2+V(x)$ is self-adjoint, with continuous spectrum equal to the positive real axis, and (possibly) a sequence of negative eigenvalues accumulating at 0. Better behaviour can be produced by requiring that $(1+|x|)^a V(x)$ be integrable (e.g. for $a=1$ the negative eigenvalues are at most finite in number). If you are interested in this point of view, a nice starting point might be the classical paper by Deift and Trubowitz in Comm. Pure Appl. Math. (1979). Notice that the solutions are at least $H^1_{loc}$ (hence continuous) and even something more. A theory for the case $V$ = Dirac delta (or a combination of a finite number of deltas) was developed by Albeverio et al.; the definition of the Schrödinger operator must be tweaked a little to make sense of it. This is probably beyond your interests. Summing up, no differentiability at all is required of the potential to solve the equation in a meaningful way. However, I suspect that this point of view is too mathematical and you are actually more interested in the physical relevance of the assumptions.

Answer: Here is a tangential response to your first question: sometimes these discontinuities do have physical significance and are not just issues of mathematical trickery surrounding pathological cases. Wavefunctions for molecular Hamiltonians become pointy where the atomic nuclei lie, which indicates the places where the 1/r Coulomb operator becomes singular. There are equations like the Kato cusp conditions (T. Kato, Comm. Pure Appl. Math. 10, 151 (1957)) that relate the magnitude of the cusp (the derivative discontinuity) at the nucleus to the size of the nuclear charge. I have heard this explained as a result of requiring the energy (which is the Hamiltonian's eigenvalue) to remain finite everywhere; thus at places where the potential is singular, the kinetic energy operator must also become singular at those places. Since the kinetic energy operator also controls the curvature of the wavefunction, the wavefunction at points of discontinuity must change in a nonsmooth way.
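As an editorial illustration of why those two matching conditions are imposed (not part of the original thread), here is a minimal numerical sketch for the simplest case, a single step potential $V(x)=0$ for $x<0$ and $V(x)=V_0$ for $x>0$ with $E>V_0$: continuity of $\psi$ and $\psi'$ at $x=0$ fixes the reflection and transmission amplitudes, and the resulting probability current is conserved.

```python
import numpy as np

# Minimal sketch (editorial example): scattering off a step potential
#   V(x) = 0  for x < 0,   V(x) = V0  for x > 0,   with E > V0.
# Plane-wave ansatz:
#   psi(x) = exp(i k1 x) + r exp(-i k1 x)   for x < 0
#   psi(x) = t exp(i k2 x)                  for x > 0
# Continuity of psi and psi' at x = 0 gives two equations for r and t.

hbar, m = 1.0, 1.0
E, V0 = 2.0, 1.0                              # arbitrary units, E > V0

k1 = np.sqrt(2 * m * E) / hbar                # wave number for x < 0
k2 = np.sqrt(2 * m * (E - V0)) / hbar         # wave number for x > 0

# psi continuous:   1 + r = t
# psi' continuous:  i k1 (1 - r) = i k2 t
r = (k1 - k2) / (k1 + k2)
t = 2 * k1 / (k1 + k2)

# Reflection and transmission probabilities from the conserved current
R = abs(r) ** 2
T = (k2 / k1) * abs(t) ** 2
print(R, T, R + T)                            # R + T should equal 1
```

Dropping either matching condition would leave $r$ and $t$ underdetermined, and the probability current $j=\frac{\hbar}{m}\,\mathrm{Im}(\psi^*\psi')$ would no longer be continuous across the jump, which is the point made in the question's edit.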
Testing Einstein?

There has recently been a flurry of media stories about experiments searching for gravitational radiation, usually with headlines about “testing Einstein’s theory”. In fact, these experiments are testing our ability to measure gravitational radiation, because there is already compelling proof that this prediction of the general theory of relativity (which is itself exactly 100 years old as I write) is correct. This extract from my book Einstein’s Masterwork (http://www.iconbooks.com/blog/title/einsteins-masterwork/) should make everything clear. But the experiments are still hugely important. If we can detect gravitational radiation directly, we will have a new way to study things like black holes, supernovas — and binary pulsars.

Massive objects, such as the Earth or a star, drag spacetime around with them as they rotate. If they move back and forth, they can also generate waves in the fabric of spacetime, known as gravitational waves, or gravitational radiation, like the ripples you can make in a bowl of water by wiggling your finger about in it. The resulting ripples in the fabric of space are very weak, unless a very large mass is involved in fairly rapid motion. But the waves were predicted by Einstein in a paper published in 1916, where he showed that they should move at the speed of light. Physicists have been trying for decades (as yet unsuccessfully) to detect gravitational radiation using very sensitive detectors here on Earth, and plan to put even more sensitive detectors into space. But meanwhile absolute proof of the accuracy of his prediction has come from observations of compact objects far away in space — the latest, and most precise, of these observations being reported in 2013. The objects involved are compact binary stars, systems in which one star orbits closely around another — or rather, where both stars orbit around their common centre of mass, like a whirling dumbbell or the twirling mace of a drum majorette. The first of these systems extreme enough to test Einstein’s prediction was a “binary pulsar”, studied in the mid-1970s.

A binary pulsar exists when two neutron stars, one of which is a pulsar, are in orbit around one another, forming a binary star system. The term is also used to refer to a pulsar in orbit about any other star, for example, a white dwarf. More than twenty binary pulsars are now known, but astronomers reserve the term “the binary pulsar” for the first one to be discovered, which is also known by its catalog number, PSR 1913+16.

The binary pulsar was discovered in 1974 by Russell Hulse and Joseph Taylor, of the University of Massachusetts, working with the Arecibo radio telescope in Puerto Rico. This pulsar was at the time the most accurate clock yet discovered. What they found that summer was so important that in 1993 the pair received the Nobel Prize for their work on the binary pulsar.

The first hint of the existence of the binary pulsar came on 2 July, when the instruments recorded a very weak signal. Had it been just 4 per cent weaker still, it would have been below the automatic cutoff level built in to the computer program running the search, and would not have been recorded. The source was especially interesting because it had a very short period, only 0.059 seconds, making it the second fastest pulsar known at the time. But it wasn’t until 25 August that Hulse was able to use the Arecibo telescope to take a more detailed look at the object.
Over several days following 25 August, Hulse made a series of observations of the pulsar and found that it varied in a peculiar way. Most pulsars are superbly accurate clocks, beating time with a precise period measured to six or seven decimal places; but this one seemed to have an erratic period which changed by as much as 30 microseconds (a huge “error” for a pulsar) from one day to the next. Early in September 1974, Hulse realised that these variations themselves follow a periodic pattern, and could be explained by the Doppler effect caused by the motion of the pulsar in a tight orbit around a companion star. Taylor flew down to Arecibo to join the investigation, and together he and Hulse found that the orbital period of the pulsar around its companion (its “year”) is 7 hours and 45 minutes, with the pulsar moving at a maximum speed (revealed by the Doppler effect) of 300 kilometers per second, one tenth of one per cent of the speed of light, and an average speed of about 200 km/sec, as it zipped around its companion. The size of the orbit traced out at this astonishing speed in just under 8 hours is about 6 million km, roughly the circumference of the Sun. In other words, the average separation between the pulsar and its companion is about the radius of the Sun, and the entire binary pulsar system would neatly fit inside the Sun.

All pulsars are neutron stars; the orbital parameters showed that in this case the companion star must also be a neutron star. The system was immediately recognised as an almost perfect test bed for the General Theory — and, indeed, for the Special Theory, as well. As I have explained, one of the key tests of the General Theory is the advance of the perihelion of Mercury. The equivalent effect in the binary pulsar (the shift in the “periastron”) would be about a hundred times stronger than for Mercury, and whereas Mercury only orbits the Sun four times a year, the binary pulsar orbits its companion a thousand times a year, giving that much more opportunity to study the effect. It was duly measured, and found to conform exactly with the predictions of Einstein’s theory — the first direct test of the General Theory made using an object outside the Solar System. By feeding back the measurements of the shift into the orbital data for the system, the total mass of the two stars in the system put together was eventually determined to unprecedented accuracy, as 2.8275 times the mass of our Sun.

But this was only the beginning of the use of the binary pulsar as a gravitational laboratory in which to test and use Einstein’s theory. Extended observations over many months showed that, once allowances were made for the regular changes caused by its orbital motion, the pulsar actually kept time very precisely. Its period of 0.05903 seconds increased by only a quarter of a nanosecond (a quarter of a billionth of a second) in a year, equivalent to a clock that lost time at a rate of only 4 per cent in a million years. The numbers became more precise as the observations mounted up. For 1 September 1974, the data were: period, 0.059029995271 sec; rate of increase, 0.253 nanoseconds per year; orbital period, 27906.98163 seconds; rate of change of periastron, 4.2263 degrees of arc per year.
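Two quick consistency checks on the figures just quoted (editorial arithmetic, not from the original article): the maximum orbital speed as a fraction of the speed of light, and the distance covered in one orbit.

```python
# Editorial back-of-the-envelope checks on the PSR 1913+16 numbers above.

c_km_s = 299_792.458          # speed of light in km/s

max_speed_km_s = 300          # quoted maximum orbital speed
avg_speed_km_s = 200          # quoted average orbital speed
orbital_period_s = 27906.98163

speed_fraction = max_speed_km_s / c_km_s            # ~0.001, i.e. about 0.1% of c
orbit_length_km = avg_speed_km_s * orbital_period_s # ~5.6 million km ("about 6 million km")

print(f"v_max / c    ~ {speed_fraction:.4f}")
print(f"orbit length ~ {orbit_length_km:.3e} km")
```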
The accuracy of the observations soon made it possible to carry out more tests and applications of the theory of relativity. One involves the time dilation predicted by the Special Theory of relativity. Because the speed of the pulsar around its companion is a sizeable fraction of the speed of light, the pulsar “clock” is slowed down, according to our observations, by an amount which depends on its speed. Since the speed varies over the course of one orbit (from a maximum of 300 km/sec down to “only” 75 km/sec), this will show up as a regular variation of the pulsar’s period over each orbit. And because the pulsar is moving in an elliptical orbit around its companion, its distance from the second neutron star varies. This means that it moves from regions of relatively high gravitational field to regions of relatively low gravitational field, and that its timekeeping mechanism should be subject to a regularly varying gravitational redshift.

The combination of these two effects produces a maximum measured variation in the pulsar period of 58 nanoseconds over one orbit, and this information can be fed back into the orbital calculations to determine the ratio of the masses of the two stars. Since the periastron shift tells us that the combined mass is 2.8275 solar masses, the addition of these data reveals that the pulsar itself has 1.42 times the mass of our Sun, while its companion has 1.40 solar masses. These were the first precise measurements of the masses of neutron stars.

But the greatest triumph of the investigation of the binary pulsar was still to come. Almost as soon as the discovery of the system had been announced, several relativists pointed out that in theory the binary pulsar should be losing energy as a result of gravitational radiation, generating ripples in the fabric of spacetime that would carry energy away and make the orbital period speed up as the binary pulsar and its companion spiraled closer together as a result. Even in a system as extreme as the binary pulsar, the effect is very small. It would cause the orbital period (about 27,000 seconds) to decrease by only a few tens of a millionth of a second (about 0.0000003 per cent) per year. The theory was straightforward, but the observations would require unprecedented accuracy.

In December 1978, after four years of work, Taylor announced that the effect had been measured, and that it exactly matched the predictions of Einstein’s theory. The precise prediction of that theory was that the orbital period should decrease by 75 millionths of a second per year; by 1983, nine years after the discovery of the binary pulsar, Taylor and his colleagues had measured the change to a precision of 2 millionths of a second per year, quoting the observed value as 76 ± 2 millionths of a second per year. Since then, the observations have been improved further, and show an agreement with Einstein’s theory that has an error of less than 1 per cent. This was a spectacular and comprehensive test of the General Theory, and effectively ruled out any other theory as a good description of the way the Universe works.

But astronomers were not prepared to rest on their laurels, and kept searching for other objects which might be used to test the General Theory. Their latest success involves a neutron star and a white dwarf star orbiting around each other some 7,000 light years from Earth. The neutron star — another pulsar, dubbed PSR J0348+0432 — was discovered by radio astronomers using the Green Bank Telescope, and its companion was soon detected in optical light, with the system being studied using both optical and radio telescopes around the world from late 2011.
The two stars orbit around each other once every 2.46 hours, with the pulsar spinning on its axis once every 39 milliseconds — that is, roughly 25 times per second. The same kind of analysis as that used for the binary pulsar reveals that in this case the neutron star has a mass just over twice that of the Sun, with a diameter of about 20 km, while the white dwarf has a mass a bit less than 20 per cent of the mass of the Sun. The distance between the two stars is about 1.2 times the radius of the Sun, just over half the Sun’s diameter, so once again the whole system would fit inside the Sun. With the measured orbital properties, this implies that gravitational radiation should make the orbit “decay” at a rate of 2.6 × 10⁻¹³ seconds per second; the measured rate is 2.7 × 10⁻¹³ seconds per second, with an uncertainty of ± 0.5. Over a whole year, this amounts to just 8 millionths of a second. This is an even better test of the General Theory, partly because of the larger mass of the pulsar (the most massive neutron star yet discovered) compared with the neutron stars in the original binary pulsar system.

Over the years ahead, continuing observations will provide even more precise tests of the General Theory. But the test is already so precise, and the agreement with the predictions of Einstein’s theory is so good, that the General Theory of relativity can now be regarded as one of the two most securely founded theories in the whole of science, alongside quantum electrodynamics.

A Spooky Review

My latest for the Wall Street Journal: Spooky Action at a Distance, by George Musser.

Albert Einstein used the term “spooky action at a distance” to refer to the way that, according to quantum theory, particles that have once interacted with one another remain in some sense “entangled” even when they are far apart. Poke one particle, in the right quantum-mechanical way, and the other particle jumps, instantly, even if it is on the other side of the Universe. He did not mean the term as a compliment, and did not believe that the effect could be real. Alas for Einstein (but fortunately, perhaps, after his death) experiments based on the theoretical work of the physicist John Bell proved that this entanglement is real.

More precisely, they proved that something called “local reality”, which is a feature of everyday commonsense, is not real. Local reality says that there is a real Universe out there, even when we are not observing it (trees that fall in the woods make a noise even if nobody is there to hear it). The “local” bit of the name says that these real objects can only influence one another by influences that travel at less than or equal to the speed of light. There is no instantaneous linkage. The experiments show that the combination, local reality, does not hold. The simplest explanation is that “locality” is violated – spooky action at a distance. Alternatively, there may not be a real world out there which exists independently of our measurements. In that case, the conceptual problems arise because we are trying to imagine what “particles” are like when they are not being measured; if we discard the idea of such a reality, we can preserve locality. And if you really want a sleepless night, consider that both locality and reality may be violated.

George Musser’s book is likely to provoke such sleepless nights. He starts off with Bell’s work and its implications, building up through a rundown of the history of the development of quantum physics.
This, you should be warned, is the easy bit. Having established that local reality is not a valid description of how the Universe works, Musser takes us out into far deeper waters. Quantum field theory is just the beginning, and introduces another form of non-locality. Then the general theory of relativity and black holes offer up another candidate for this entanglement, now seen as a universal effect, not something that only particle physicists have to worry about. After all, the general theory allows for the existence of so-called “wormholes”, tunnels through space and time that link different parts of spacetime – distant parts of the Universe, or (conceivably, although Musser does not elaborate on this) different universes. A wormhole is intrinsically nonlocal. Some theorists have suggested that mini-wormholes might link entangled particles, and explain their shared properties.

The island of knowledge that we are swimming towards, frantically trying to keep afloat, is that both space and time are illusions. Non-locality is the natural order of things, and space itself is manufactured out of non-local building blocks. “Locality,” says Musser, “becomes the puzzle,” and so does the nature of those building blocks. The analogy he uses to explain this involves water. Individually, the building blocks of water – molecules – are not wet, but collectively they produce the sensation of wetness. Individually, the building blocks of the Universe, yet to be identified, are not spatial, but collectively they produce the sensation of space.

This is tough going, and in spite of the author’s heroic efforts to make difficult concepts comprehensible, he does not always succeed. But the ideas he discusses, such as matrix theory, or the possibility that “our” Universe is a holographic image projected from some higher reality, are at the cutting edge of physics today, and nobody should expect all the loose ends to be neatly tied up.

Indeed, the most powerful message to take from this book is tucked away, almost apologetically, near the end. Science is all about debate, and progress is made by arguing about cherished (but not necessarily correct) ideas, until some consensus emerges. When the consensus is reached, the physicists become bored and move on to something new. The sound of physicists arguing is the sound of science making progress. That is the “sound” of this book. But “yesterday’s drag-out fights are tomorrow’s homework problems”, as Musser succinctly puts it. As this example illustrates, he has a neat turn of phrase which helps to make the difficult ideas described here slightly less difficult to comprehend. But don’t think “less difficult” means “easy.”

Spooky Action at a Distance is an important book which provides insight into key new developments in our understanding of the nature of space, time and the Universe. It will repay careful study, and I am sure it will become a well-thumbed feature of my reference shelf, while the extensive bibliography will help those who want to delve further. But it is not something you can digest in a single reading.

A quantum myth for our times

Adapted from my book Schrödinger’s Kittens.

The central problem that we have to explain, in order to persuade ourselves that we understand the mysteries of the quantum world, is encapsulated in the story of Schrödinger’s kittens that I told in my book.
The experiment is set up in such a way that two kittens have been separated far apart in space, but are each under the influence of a 50:50 probability wave, associated with the collapse of an electron wave function to become a “real” particle in just one or other of their two spacecraft. At the moment when one of the capsules is opened and an observer notices whether or not the electron is inside, the probability wave collapses and the fate of the kitten is determined — and not just the fate of the kitten in that capsule, but also, simultaneously, that of the other kitten in the other capsule, on the other side of the Universe. At least, that is the old-fashioned (and increasingly discredited) Copenhagen Interpretation version of the correlation between the two kittens, and whichever quantum interpretation you favour (there are several!), the Aspect experiment and Bell’s inequality show that once quantum entities are entangled in an interaction then they really do behave, ever afterwards, as if they are parts of a single system under the influence of Einstein’s spooky action at a distance. The whole is greater than the sum of its parts, and the parts of the whole are interconnected by feedbacks — feedbacks which seem to operate instantaneously. This is where we can begin to make a fruitful analogy with living systems. A living system, such as your own body, is certainly greater than the sum of its parts. A human body is made up of millions of cells, but it can do things that a heap of the appropriate number of cells could never do; the cells themselves are alive in their own right, and they can do things that a simple chemical mixture of the elements they contain could not do. In both cases, one of the key reasons why the living cells and living bodies can do such interesting things is that there are feedbacks which convey information — from one side of the cell to another, and from one part of the body to another. At a deep level, inside the cells these feedbacks may involve chemical messengers which convey raw materials to the right places and use them to construct complex molecules of life. At a gross human level, just about every routine action, such as the way my fingers are moving to strike the right keys on my computer keyboard to create this sentence, involves feedbacks in which the brain constantly takes in information from senses such as sight and touch and uses that information to modify the behaviour of the body (in this case, to determine where my fingers will move to next). This really is feedback, a two-way process, not simply an instruction from the brain to tell the fingers where to go. The whole system is involved in assessing where those fingers are now, and how fast (and in what direction) they are moving, checking that the pressure on the keys is just right, going back (very often, in my case!) to correct mistakes, and so on. Even a touch typist is constantly adjusting the exact movements of the fingers in response to such feedbacks, in the same way that you can ride a bicycle by constantly making automatic adjustments in your balance to keep yourself upright. 
If you knew nothing about those feedbacks, and had no idea that the different parts of the body were interconnected by a communications system, it would seem miraculous that the elongated lumps of flesh and bone on the ends of my hands could “create” an intelligent message by poking away at the keyboard — just as it seems miraculous, unless we invoke some form of communication and feedback, that the polarization states of two photons flying out on opposite sides of an atom can be correlated in the way that the Aspect experiment reveals. The one big difference, the hurdle that we have to overcome, is the instantaneous nature of the feedback in the quantum world. But that is explained by the nature of light itself, both in the context of relativity theory and from the right perspective on the quantum nature of electrodynamics. That perspective is the relatively unsung Wheeler-Feynman model of electromagnetic radiation — a model which can also provide striking insights into the way gravity works.

Making the most of mass

Wheeler and Feynman suggested, more than half a century ago, that the behaviour of electromagnetic radiation, and the way in which it interacts with charged particles, could be explained by taking seriously the fact that there are two sets of solutions to Maxwell’s equations, the equations that describe electromagnetic waves moving through space like ripples moving across the surface of a pond. One set of solutions, the “commonsense” solutions, describes waves moving outward from an accelerated charged particle and forwards in time, like ripples spreading from the point where a stone has been dropped into the pond. The second set of solutions, largely ignored even today, describes waves travelling backwards in time and converging onto charged particles, like ripples that start from the edge of the pond and converge onto a point in the middle of the pond. When proper allowance is made for both sets of waves interacting with all the charged particles in the Universe, most of the complexity cancels out, leaving only the familiar commonsense (or “retarded”) waves to carry electromagnetic influences from one charged particle to another.

But as a result of all these interactions, each individual charged particle — including each electron — is instantaneously aware of its position in relation to all the other charged particles in the Universe. The one tangible influence of the waves that travel backwards in time (the “advanced” waves) is that they provide feedback which makes every charged particle an integrated part of the whole electromagnetic web. Poke an electron in a laboratory here on Earth, and in principle every charged particle in, say, the Andromeda galaxy, more than two million light years away, immediately knows what has happened, even though any retarded wave produced by poking the electron here on Earth will take more than two million years to reach the Andromeda galaxy.

Even supporters of the Wheeler-Feynman absorber theory usually stop short of expressing it that way. The conventional version (if anything about the theory can be said to be conventional) says that our electron here on Earth “knows where it is” in relation to the charged particles everywhere else, including those in the Andromeda galaxy. But it is at the very heart of the nature of feedback that it works both ways. If our electron knows where the Andromeda galaxy is, then for sure the Andromeda galaxy knows where our electron is.
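To make those two sets of solutions explicit in the simplest free-space case (a standard textbook result, added editorially; the notation is not Gribbin's): away from a point source at the origin, the wave equation admits both an outgoing “retarded” spherical wave and an ingoing “advanced” one,

$$ \phi_{\rm ret}(r,t)=\frac{f(t-r/c)}{r}, \qquad \phi_{\rm adv}(r,t)=\frac{f(t+r/c)}{r}, $$

and Maxwell’s equations on their own give no reason to keep the first while discarding the second; on the Wheeler-Feynman view it is the interplay of both, summed over all the charges in the Universe, that provides the feedback just described.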
The result of the feedback — the result of the fact that our electron has to be considered not in isolation but as part of a holistic electromagnetic web filling the Universe — is that the electron resists our attempts to push it around, because of the influence of all those charged particles in distant galaxies, even though no information-carrying signal can travel between the galaxies faster than light. Now this explanation of why charged particles experience radiation resistance is rather similar to another puzzle that has long plagued physicists. Why do ordinary lumps of matter resist being pushed around, and how do they know how much resistance to offer when they are pushed? Where does inertia itself come from? Galileo seems to have been the first person to realise that it is not the velocity with which an object moves but its acceleration which reveals the effect of forces acting upon it. On Earth, friction — one of those external forces — is always present, and slows down (decelerates) any moving object, unless you keep pushing it. But without the influence of friction objects would keep moving in straight lines forever, unless they were pushed or pulled by forces. This became one of the cornerstones of Newton’s laws of mechanics. Things moved at constant velocity through empty space (relative to some absolute standard of rest), he argued, unless accelerated by external forces. For an object with a given mass, the acceleration produced by a particular force is given by dividing the force by the mass. One intriguing aspect of this discovery is that the mass which comes into the calculation is the same as the mass involved in gravity. It isn’t immediately obvious that this should be so. Gravitational mass determines the strength of the force which an object extends out into the Universe to tug on other objects; inertial mass, as it is called, determines the strength of the response of an object to being pushed and pulled by outside forces — not just gravity, but any outside forces. And they are the same. The “amount of matter” in an object determines both its influence on the outside world, and its response to the outside world. Don’t be confused by the fact that an object weighs less on the Moon than it does on Earth; this is not because the object itself changes, but because the gravitational force at the surface of the Moon is less than the gravitational force at the surface of the Earth. It is the outside force that is less on the Moon, and the inertial response of the object matches that reduced outside force, so that it “weighs less”. This already looks like a feedback at work, a two-way process linking each object to the Universe at large. But until very recently, nobody had any clear idea how the feedback could work. Newton himself described a neat experiment which seems to show that there really is a preferred frame of reference in the Universe, and later philosophers said that this experiment indicates just what it is that defines the absolute standard of rest. Writing in the Principia in 1686, Newton described what happens if you take a bucket of water hung from a long cord, twist the cord up tightly, and then let go. The bucket, of course, starts to spin as the cord untwists. At first, the surface of the water in the bucket stays level, but as friction gradually transfers the spinning of the bucket to the water itself, the water begins to rotate as well, and its surface takes up a concave shape, as “centrifugal force” pushes water out to the sides of the bucket. 
Now, if you grab the bucket to stop it spinning, the water carries on rotating, with a concave surface, but gradually slows down, becoming flatter and flatter, until it stops moving and has a completely flat surface. Newton pointed out that the concave shape of the surface of the rotating water shows that it “knows” that it is rotating. But what is it rotating relative to? The relative motion of the bucket and water seems completely unimportant. If the bucket and the water are both still, with no relative motion, the water is flat; if the bucket is rotating and the water is not, the surface is still flat even though there is relative motion between the water and the bucket; if the water is rotating and the bucket is not, there is relative motion between the two and the surface is concave; but if the water and the bucket are both rotating, so that once again there is no relative motion between the water and the bucket, the surface is concave. So, Newton reasoned, the water “knows” whether or not it is rotating relative to absolute space.

In the 18th century, the philosopher George Berkeley offered another explanation. He argued that all motion must be measured relative to something tangible, and he pointed out that what seems to be important in the famous bucket experiment is how the water is moving relative to the most distant objects known at the time, the fixed stars. We now know, of course, that the stars are relatively near neighbours of ours in the cosmos, and that beyond the Milky Way there are many millions of other galaxies. But Berkeley’s insight still holds. The surface of a bucket of water will be flat if the water is not rotating relative to the distant galaxies, and it will be curved if the water is rotating relative to the distant galaxies. And acceleration seems also to be measured relative to the distant galaxies — that is, relative to the average distribution of all the matter in the Universe. It is as if, when you try to push something around, it takes stock of its situation relative to all the matter in the Universe, and responds accordingly. It is somehow held in place by gravity, which is why gravitational and inertial mass are the same.

This idea that inertia is indeed produced by the response of a material object to the Universe at large is often known as Mach’s Principle, after the nineteenth century Austrian physicist Ernst Mach, whose name is immortalised in the number used to measure speeds relative to the speed of sound, but who also thought long and hard about the nature of inertia. As I have mentioned, Mach’s ideas, essentially an extension of those of Berkeley, strongly influenced Einstein, who argued that the identity between gravitational and inertial mass does indeed arise because inertial forces are really gravitational in origin, and tried to incorporate Mach’s Principle — the feedback of the entire Universe on any gravitational mass — into his general theory of relativity.

It is fairly easy to make a naive argument along these lines. All the mass in all the distant galaxies (and anything else) reaches out with a gravitational influence to hold on to everything here on Earth (and everywhere else), including, say, the pile of books sitting on my desk. When I try to move one of those books, the amount of effort I have to put into the task is a measure of how strongly the Universe holds that book in its grip. But it is much harder to put all this on a secure scientific footing.
How does the book “know”, instantaneously, just how much it should resist my efforts to move it? One appealing possibility (in the naive picture) is that by poking at an object and changing its motion we make it send some sort of gravitational ripple out into the Universe, and that this ripple disturbs everything else in the Universe, so that a kind of echo comes back, focussing down on the disturbed object and trying to maintain the status quo. But if signals, including gravitational ripples, can only travel at the speed of light, it looks as if it might take just about forever for the echo to get back and for the book to decide just how it ought to respond to being pushed around. Unless, of course, there is some way of incorporating the principle of the time-symmetric Wheeler-Feynman absorber theory into a description of gravity, so that some of the gravitational ripples involved in this feedback travel backwards in time.

But since the Wheeler-Feynman theory of electromagnetic radiation came some thirty years after Einstein’s theory of gravity, and nobody took it very seriously even then, this resolution of the puzzle posed by Mach’s Principle had never been put on even a tentative proper mathematical footing when I started writing my book. I have hankered after such a resolution of Mach’s Principle for years (see my book In Search of the Big Bang, published in 1986), but lacked the skill to do anything more than make vague, hand-waving arguments about the desirability of explaining inertia in this way.

Ever since Einstein came up with his general theory, there has been argument about whether or not it does incorporate Mach’s Principle in a satisfactory way. It does at least go some way towards including Mach’s Principle, because the behaviour of an object at any location in space depends on the curvature of spacetime at that location, which is determined by the combined gravitational influence of all the matter in the Universe. But it still seems to beg the question of how quickly the “signals” that determine the curvature of spacetime get from one place to another. Since those distant galaxies are themselves moving, their influence ought to be constantly changing. Do these changes propagate only at the speed of light, or instantaneously? And if instantaneously, how?

One intriguing aspect of the debate is that Einstein’s equations only produce anything like the right kind of Machian influences if there is enough matter in the Universe to bend spacetime back on itself gravitationally. In an “open” Universe, extending to infinity in all directions, the equations can never be made to balance with a finite amount of inertia. This used to be an argument against claiming that the general theory incorporates Mach’s Principle, because people thought that the Universe was indeed “open”; but all that has changed, and there now seems to be compelling evidence that the Universe is indeed “closed” (just barely closed, but still closed). Which, of course, is one reason why the Wheeler-Feynman absorber theory itself is now taken more seriously.

The philosophical foundations for a similar approach to quantum mechanics were laid by John Cramer, of the University of Washington, Seattle, in a series of largely unsung papers published in the 1980s.
Cramer’s “transactional interpretation” of quantum mechanics uses exactly this approach, and is the interpretation that provides the best all-round picture of how the world works at the quantum level, for anyone who wants to have a single “answer” to the puzzles posed by Bell’s inequality, the Aspect experiment, and the fate of Schrödinger’s kittens. (New readers can find out more about these puzzles in my other blogs and books.)

The simple face of complexity

The original version of the Wheeler-Feynman theory was, strictly speaking, a classical theory, because it did not take account of quantum processes. Nevertheless, by the 1960s researchers had found that there are indeed only two stable situations that result from the complexity of overlapping and interacting waves, some going forwards in time and some backwards in time. Such a system must end up dominated either by retarded radiation (like our Universe) or by advanced radiation (equivalent to a universe in which time ran backward). In the early 1970s, a few cosmologists, intrigued by the puzzle of why there should be an arrow of time in the Universe at all, developed variations on the Wheeler-Feynman theory that did take on board quantum mechanics. In effect, they developed Wheeler-Feynman versions of QED. Fred Hoyle and Jayant Narlikar used a so-called path integral technique, while Paul Davies used an alternative mathematical approach called S-matrix theory. The details of the mathematics do not matter; what does matter is that in each case they found that Wheeler-Feynman absorber theory can be turned into a fully quantum-mechanical model.

The reason for the interest of cosmologists in all this is the suggestion — still no more than a suggestion — that the reason why our Universe should be dominated by retarded waves, and that there should, therefore, be a definite arrow of time, is connected with the fact that the Universe itself shows time asymmetry, with a Big Bang in the past and either ultimate collapse into a Big Crunch or eternal expansion in the future. Wheeler-Feynman theory provides a way for particles here and now to “know” about the past and future states of the Universe — these “boundary conditions” could be what selects out the retarded waves for domination.

But all of this still applied only to electromagnetic radiation. The giant leap taken by John Cramer was to extend these ideas to the wave equations of quantum mechanics — the Schrödinger equation itself, and the related equations describing the probability waves which travel, like photons, at the speed of light. His results appeared in an exhaustive review article published in 1986 (Reviews of Modern Physics, volume 58, page 647) but made very little impact; that is now being rectified (see http://www.amazon.com/Quantum-Handshake-Entanglement-Nonlocality-Transactions/dp/3319246402/ref=sr_1_1?ie=UTF8&qid=1448221522&sr=8-1&keywords=john+g+cramer).

In order to apply the absorber theory ideas to quantum mechanics, you need an equation, like Maxwell’s equations, which yields two solutions, one equivalent to a positive energy wave flowing into the future, and the other describing a negative energy wave flowing into the past. At first sight, Schrödinger’s famous wave equation doesn’t fit the bill, because it only describes a flow in one direction, which (of course) we interpret as from past to future. But as all physicists learn at university (and most promptly forget) the most widely used version of this equation is incomplete.
As the quantum pioneers themselves realised, it does not take account of the requirements of relativity theory. In most cases, this doesn’t matter, which is why physics students, and even most practicing quantum mechanics, happily use the simple version of the equation. But the full version of the wave equation, making proper allowance for relativistic effects, is much more like Maxwell’s equations. In particular, it has two sets of solutions — one corresponding to the familiar simple Schrödinger equation, and the other to a kind of mirror image Schrödinger equation describing the flow of negative energy into the past. This duality shows up most clearly in the calculation of probabilities in the context of quantum mechanics. The properties of a quantum system are described by a mathematical expression, sometimes known as the “state vector” (essentially another term for the wave function), which contains information about the state of a quantum entity — the position, momentum, energy and other properties of the system (which might, for example, simply be an electron wave packet). In general, this state vector includes a mixture of both ordinary (“real”) numbers and imaginary numbers — those numbers involving i, the square root of minus one. Such a mixture is called a complex variable, for obvious reasons; it is written down as a real part plus (or minus) an imaginary part. The probability calculations needed to work out the chance of finding an electron (say) in a particular place at a particular time actually depend on calculating the square of the state vector corresponding to that particular state of the electron. But calculating the square of a complex variable does not simply mean multiplying it by itself. Instead, you have to make another variable, a mirror image version called the complex conjugate, by changing the sign in front of the imaginary part — if it was + it becomes -, and vice versa. The two complex numbers are then multiplied together to give the probability. But for equations that describe how a system changes as time passes, this process of changing the sign of the imaginary part and finding the complex conjugate is equivalent to reversing the direction of time! The basic probability equation, developed by Max Born back in 1926, itself contains an explicit reference to the nature of time, and to the possibility of two kinds of Schrödinger equations, one describing advanced waves and the other representing retarded waves. It should be no surprise, after all this, to learn that the two sets of solutions to the fully relativistic version of the wave equation of quantum mechanics are indeed exactly these complex conjugates. But in time honoured tradition, for almost a century most physicists have largely ignored one of the two sets of solutions because “obviously” it didn’t make sense to talk about waves travelling backwards in time! The remarkable implication is that ever since 1926, every time a physicist has taken the complex conjugate of the simple Schrödinger equation and combined it with this equation to calculate a quantum probability, he or she has actually been taking account of the advanced wave solution to the equations, and the influence of waves that travel backwards in time, without knowing it. There is no problem at all with the mathematics of Cramer’s interpretation of quantum mechanics, because the mathematics, right down to Schrödinger’s equation, is exactly the same as in the Copenhagen interpretation. 
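A worked illustration of the point about complex conjugation (standard textbook material, added here for concreteness; the notation is not Gribbin's): for a stationary state $\psi(x,t)=\varphi(x)\,e^{-iEt/\hbar}$, the Born probability is

$$ P(x)=\psi^*\psi=\varphi^*(x)\,e^{+iEt/\hbar}\,\varphi(x)\,e^{-iEt/\hbar}=|\varphi(x)|^2, $$

and taking the complex conjugate of the Schrödinger equation $i\hbar\,\partial_t\psi=H\psi$ (with $H$ real) gives $-i\hbar\,\partial_t\psi^*=H\psi^*$, which is the same equation with the sense of time reversed. So every Born-rule calculation really does pair a "forwards" factor with a "time-reversed" one, which is the observation Cramer builds on.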
The difference is, literally, only in the interpretation. As Cramer put it in that 1986 paper (page 660), “the field in effect becomes a mathematical convenience for describing action-at-a-distance processes”. So, having (I hope) convinced you that this approach makes sense, let’s look at how it explains away some of the puzzles and paradoxes of the quantum world. Shaking hands with the Universe The way Cramer describes a typical quantum “transaction” is in terms of a particle “shaking hands” with another particle somewhere else in space and time. You can think of this in terms of an electron emitting electromagnetic radiation which is absorbed by another electron, although the description works just as well for the state vector of a quantum entity which starts out in one state and ends up in another state as a result of an interaction — for example, the state vector of a particle emitted from a source on one side of the experiment with two holes (Feynman’s term for Young’s double-slit experiment) and absorbed by a detector on the other side of the experiment. One of the difficulties with any such description in ordinary language is how to treat interactions that are going both ways in time simultaneously, and are therefore occurring instantaneously as far as clocks in the everyday world are concerned. Cramer does this by effectively standing outside of time, and using the semantic device of a description in terms of some kind of pseudotime. This is no more than a semantic device — but it certainly helps to get the picture straight. It works like this. When an electron vibrates, on this picture, it attempts to radiate by producing a field which is a time-symmetric mixture of a retarded wave propagating into the future and an advanced wave propagating into the past. As a first step in getting a picture of what happens, ignore the advanced wave and follow the story of the retarded wave. This heads off into the future until it encounters an electron which can absorb the energy being carried by the field. The process of absorption involves making the electron that is doing the absorbing vibrate, and this vibration produces a new retarded field which exactly cancels out the first retarded field. So in the future of the absorber, the net effect is that there is no retarded field. But the absorber also produces a negative energy advanced wave travelling backwards in time to the emitter, down the track of the original retarded wave. At the emitter, this advanced wave is absorbed, making the original electron recoil in such a way that it radiates a second advanced wave back into the past. This “new” advanced wave exactly cancels out the “original” advanced wave, so that there is no effective radiation going back in the past before the moment when the original emission occurred. All that is left is a double wave linking the emitter and the absorber, made up half of a retarded wave carrying positive energy into the future and half of an advanced wave carrying negative energy into the past (in the direction of negative time). Because two negatives make a positive, this advanced wave adds to the original retarded wave as if it too were a retarded wave travelling from the emitter to the absorber. 
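The bookkeeping described in the last paragraph can be summarised schematically (this is the usual way the Wheeler-Feynman cancellation is written down; the notation is mine, not Cramer's): the emitter produces the time-symmetric field $\tfrac12(F_{\rm ret}+F_{\rm adv})$, the combined response of the absorbers in its neighbourhood amounts to $\tfrac12(F_{\rm ret}-F_{\rm adv})$, and the sum

$$ \tfrac12\left(F_{\rm ret}+F_{\rm adv}\right)+\tfrac12\left(F_{\rm ret}-F_{\rm adv}\right)=F_{\rm ret} $$

leaves only the familiar retarded field, exactly as in the verbal account above.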
The entire argument works just as well if you start with the “absorber” electron emitting radiation into the past; the transactional interpretation itself says nothing about which direction of time should be preferred, but suggests that this is linked to the boundary conditions of the Universe, which favour an arrow of time pointing away from the Big Bang. In Cramer’s words: “The emitter can be considered to produce an ‘offer’ wave which travels to the absorber. The absorber then returns a ‘confirmation’ wave to the emitter, and the transaction is completed with a ‘handshake’ across spacetime.” (1986 paper, page 661)

But this is only the sequence of events from the point of view of pseudotime. In reality, the process is atemporal; it happens all at once. This is because, thanks to time dilation, signals that travel at the speed of light take no time at all to complete any journey in their own frame of reference — in effect, for light signals every point in the Universe is next door to every other point in the Universe. Whether the signals are travelling backwards or forwards in time doesn’t matter, since they take zero time (in their own frame of reference), and +0 is the same as -0.

The situation is more complicated in three dimensions, but the conclusions are exactly the same. Taking the most extreme possible case, in a universe which contained just a single electron, the electron would not be able to radiate at all (nor, if Mach’s Principle is correct, would it have any mass). If there were just one other electron in the universe, the first electron would be able to radiate, but only in the direction of this second “absorber” electron. In the real Universe, if matter were not distributed uniformly on the largest scales, and there was less potential for absorption in some directions than in others, we would find that emitters (such as radio antennas) would “refuse” to radiate equally strongly in all directions. Attempts have actually been made to test this possibility by beaming microwaves out into the Universe in different directions, but they show no sign of any reluctance of the electrons to radiate in any particular direction.

Cramer is at pains to stress that his interpretation makes no predictions that are different from those of conventional quantum mechanics, and that it is offered as a conceptual model which may help people to think clearly about what is going on in the quantum world, a tool which is likely to be particularly useful in teaching, and which has considerable value in developing intuitions and insights into otherwise mysterious quantum phenomena. But there is no need to feel that the transactional interpretation suffers in comparison with other interpretations in this regard, because none of them is anything other than a conceptual model designed to help our understanding of quantum phenomena, and all of them make the same predictions. The only valid criterion for choosing one interpretation rather than another is how effective it is as an aid to our way of thinking about these mysteries — and on that score Cramer’s interpretation wins hands down as far as I am concerned. First, it not only offers something rather more than a hint of why there is an arrow of time, it also puts all physical processes on an equal footing. There is no need to assign a special status to the observer (intelligent or otherwise), or to the measuring apparatus.
At a stroke, this removes the basis for a large part of the philosophical debate about the meaning of quantum mechanics that has gone on for nearly a century. And, going beyond the debate about the role of the observer, the transactional interpretation really does resolve those classic quantum mysteries. I’ll give just a couple of examples — how Cramer deals with the experiment with two holes, and how his interpretation makes sense of the Aspect experiment. If we are going to explain the central mystery of the experiment with two holes, we might as well go the whole hog and explain the ultimate version of this mystery, John Wheeler’s variation on the theme, the so-called “delayed choice” experiment. In one version of this experiment, a source of light emits a series of single photons which travel through the experiment with two holes. On the other side is a detector screen which can record the positions the photons arrive at, but which can be flipped down, while the photons are on their way, to allow them to pass on to one or other of a pair of telescopes focussed on the two slits (one focussed on each slit). If the screen is down, the telescopes will observe single photons each passing through one or other of the slits, with no sign of interference; if the screen is up, the photons will seem to pass through both slits, creating an interference pattern on the screen. And the screen can be flipped down after the photons have passed the slits, so that their decision about which pattern of behaviour to adopt seems to be determined by an event which occurs after they have made that decision. In Cramer’s version of events, a retarded “offer wave” (monitored in “pseudotime” for the purpose of this discussion) sets off through both holes in the experiment. If the screen is up, the wave is absorbed in the detector, triggering an advanced “confirmation wave” which travels back through both slits of the apparatus to the source. The final transaction forms along both possible paths (actually, as Feynman would have stressed, along every possible path), and there is interference. If the screen is down, the offer wave passes on to the two telescopes trained on the slits. Because each telescope is trained on just one slit, it is only possible for any confirmation wave produced when the offer wave interacts with the telescope itself to go back to the source through the slit on which that telescope is trained. And, of course, the absorption event must involve a whole photon, not a part of a photon. Although each telescope may send back a confirmation wave through its respective slit, the source has to “choose” (at random) which one to accept, and the result is a final transaction which involves the passage of a single photon through a single slit. The evolving state vector of the photon “knows” whether the screen is going to be up or down because the confirmation wave really does travel back in time through the apparatus, but the whole transaction is, as before, atemporal. “The issue of when the observer decides which experiment to perform is no longer significant. The observer determined the experimental configuration and boundary conditions, and the transaction formed accordingly. Furthermore, the fact that the detection event involves a measurement (as opposed to any other interaction) is no longer significant, and so the observer has no special role in the process.” (Cramer, 1986, page 673). You can amuse yourself by working out a similar explanation of what happens to Schrödinger’s cat. 
Once again, what matters is that the completed transaction only allows one possibility (dead cat or live cat) to become real, and because the “collapse of the wave function” does not have to wait for the observer to look into the box, there is never a time when the cat is half dead and half alive. It’s a sign of how powerful and straightforward the transactional interpretation is that I am sure you can indeed work out the details for yourself, without me spelling them out.

But what about Bell’s inequality, the Einstein-Podolsky-Rosen Paradox, and the Aspect experiment? And those quantum kittens? This, after all, was what revived interest in the meaning of quantum mechanics in the 1980s. From the point of view of absorber theory, there is no difficulty in understanding what is going on. We imagine (still thinking in terms of pseudotime) that the excited atom which is about to emit two photons sends out offer waves in various directions, corresponding to various possible polarization states. The transaction is only completed, and the photons actually emitted, if confirmatory advanced waves are sent back in time from the appropriate pair of observers to the emitting atom. As soon as the transaction is complete, the photons are emitted and observed, producing a double detection event in which the polarizations of the photons are correlated, even though they are far apart in space. If the confirmatory waves do not match an allowed polarization correlation, then they cannot be “verifying” the same transaction, and they will not be able to establish the handshake.

From the perspective of pseudotime, the pair of photons cannot be emitted until an arrangement has been made to absorb them, and that absorption arrangement itself determines the polarizations of the emitted photons, even though they are emitted “before” the absorption takes place. It is literally impossible for the atom to emit photons in a state that does not match the kind of absorption allowed by the detectors. Indeed, in the absorber model the atom cannot emit photons at all unless an agreement has already been reached to absorb them.

It’s the same with those two kittens travelling in their separate spacecraft to the opposite ends of the Galaxy. The observation that determines which half-box the electron is in, and therefore which kitten lives and which kitten dies, echoes backwards in time to the start of the experiment, instantaneously (or rather, atemporally) determining the states of the kittens throughout the entire period when they were locked away, unobserved, in their respective spaceships. “If there is one particular link in the event chain that is special, it is not the one that ends the chain. It is the link at the beginning of the chain when the emitter, having received various confirmation waves from its offer wave, reinforces one of them in such a way that it brings that particular confirmation wave into reality as a completed transaction. The atemporal transaction does not have a ‘when’ at the end.” (Cramer, 1986, page 674.)

This dramatic success in resolving all of the puzzles of quantum physics has been achieved at the cost of accepting just one idea that seems to run counter to commonsense — the idea that part of the quantum wave really can travel backwards through time. I stress, again, that all such interpretations are myths, crutches to help us imagine what is going on at the quantum level and to make testable predictions.
They are not, any of them, uniquely "the truth"; rather, they are all "real", even where they disagree with one another. But Cramer's interpretation is very much a myth for our times; it is easy to work with and to use in constructing mental images of what is going on, and with any luck at all it will supersede the Copenhagen Interpretation as the standard way of thinking about quantum physics for the next generation of scientists.
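The role played throughout this passage by the presence or absence of the interference term can be made concrete numerically. The short script below is only an illustrative sketch of my own, not part of Cramer's or Wheeler's analysis: the slits are modelled as two point sources with parameters I have chosen arbitrarily, and the coherent sum |ψ₁+ψ₂|² (screen up, both paths contribute) is compared with the incoherent sum |ψ₁|²+|ψ₂|² (which-slit information available, so the cross term is absent).

```python
import numpy as np

# Illustrative two-slit model (hypothetical numbers, not taken from the text):
# two point sources separated by d, screen at distance L, wavelength lam.
lam, d, L = 500e-9, 20e-6, 1.0          # metres
k = 2 * np.pi / lam
x = np.linspace(-0.1, 0.1, 5)           # a few positions on the detector screen (metres)

r1 = np.sqrt(L**2 + (x - d / 2) ** 2)   # path length from slit 1
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)   # path length from slit 2
psi1 = np.exp(1j * k * r1) / r1         # spherical-wave amplitudes (unnormalised)
psi2 = np.exp(1j * k * r2) / r2

coherent = np.abs(psi1 + psi2) ** 2            # interference term present
incoherent = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # which-slit known: no cross term

print(np.round(coherent / incoherent, 3))      # varies between ~0 (dark fringe) and ~2 (bright fringe)
```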
Theory of Infinity and Time (Russian Philosophical Society)

The suggested philosophical interpretation of quantum mechanics is clear and heuristic, and it contradicts neither classical logic nor fundamental physical laws and principles.

1. The real physical space is by definition three-dimensional and certainly contains material objects.
2. A particle which is not under observation exists actually and continuously in physical space (reason: the energy conservation law).
3. It is not the material object that is in a state of superposition, but its wave function (an axiom).
4. There exist hidden parameters of a micro-object's coordinates (reason: the Heisenberg uncertainty relation).
5. An elementary, microscopic displacement of a micro-particle has no trajectory (reason: the inadequacy of the Newtonian equation for describing a quantum particle's displacement).
6. Microscopic dynamics is irreversible in time (reason: the causality principle).
7. There exist hidden parameters of micro-objects' interactions (reason: it is impossible to take into account the whole infinite diversity of micro-events).
8. Reduction of a wave function cannot be understood as its transformation into a real particle (an axiom).
9. Both the logical and the temporal sequence of a wave function's states are realized in the state of superposition. Even in an infinitely short interval of time t1 there exists a finite probability of the particle's presence in some volume dv, and in the next infinitely short interval of time t2 there exists a finite probability of the particle's absence from this volume (reason: the existence of the Schrödinger equation, both time-dependent and time-independent).

All the positions mentioned point, directly or indirectly, to the validity of the following proposition.

The Atemporality Principle: Some hidden parameters of a quantum micro-object (such as its coordinate in space or its direction of polarization) change in an atemporal manner.
Terms of Ontological Endearment
Mosaic of Reality
Material Witness
In chapter twelve of his On Physics and Philosophy Bernard d'Espagnat tackles three kinds of materialism: dialectical materialism (briefly), "scientific" materialism, and what he calls "neomaterialism." Ultimately… ultimate reality isn't the same as "empirical" or "epistemological" reality, something materialists just don't get. At least that's what he says, and I largely agree. Here's my summary of the chapter.
Dialectical Materialism vs Bohr
D'Espagnat says he's not going to do a detailed analysis of dialectical materialism. He says it's been sufficiently dismantled elsewhere. However, he warns against seeing too many parallels between Niels Bohr's approach and this form of materialism. Bohr's thought and dialectics may share some general features, but that's different from dialectical materialism. Bohr had a "human-centred" approach, which could be called materialism only if you radically changed the meaning of the word.
Scientific Materialism vs Atomism
D'Espagnat says "materialism" or "mechanism" doesn't automatically refer to atomism. Descartes didn't believe in atoms, and even in the 19th century ether and fields lay outside the realm of the atom.
Macroman on the Street vs the Microworld
The man on the street and even many scientists (particularly in the softer sciences such as biology) think of nature as composed of smaller and smaller grains or specks, eventually leading to atoms. This microworld has (roughly) the same nature as the macroscopic world we experience. The problem with that idea is that standard quantum theory and the experimental results used to test it show conclusively that atoms, particles, and the forces emanating from them just aren't like the world at large (as we experience it). This material reductionism doesn't work.
Standard vs Non-standard Interpretations
Penrose (calling himself a physicalist) adds gravitational effects to the Schrödinger equation. Sokal and Bricmont rely on de Broglie–Bohm. However, the first choice is more a research program than a fully fledged theory, and the second choice runs into some trouble with relativity. The Sokal and Bricmont approach combines corpuscles with nonlocal entities or forces that have the same strength whatever the distance. This isn't your grandmother's materialism.
Empirical Reality vs Materialist Reality
Standard quantum mechanics rejects both approaches. At best these materialist approaches describe some "empirical" or "epistemological" reality, a product of how our "mind structure" divides and categorizes reality.
Positivism vs Materialism
Some materialist apologists say quantum mechanics is a product of its times: the 1920s, when positivism (and its emphasis on observation rather than underlying reality) reigned. D'Espagnat rejects that objection. He says that whatever the origins of quantum theory, rival interpretations still need to be bolstered by evidence.
Research vs Traditions of Research
Michel Bitbol and Larry Laudan offer subtler challenges by examining the higher-level assumptions that scientists use. Laudan calls them "traditions of research," which Bitbol calls "values." They're what imparts meaning to a scientific quest.
Observations vs "Ampliative" Arguments
D'Espagnat acknowledges that when mainstream physicists reject de Broglie–Bohm because its concepts are unnecessarily complicated or because "action at a distance" messes with relativity they are using "ampliative" arguments.
These are arguments that go beyond what the observations are telling us. After all, physicists could reject the relativity principle as long as they come up with some theory that uses other principles, but acts as if the relativity principle still works.
Bohm vs Materialism
However, even David Bohm rejected materialism. He first spoke of a wave function, then later a quantum potential. Neither is localized, hardly what a conventional materialist would call real. Although Bohm found a way to explain physics without specifying consciousness, he also noted that quantum physics suggests a "mental pole" exists.
Sophistication vs Atomic Materialism
Adding sophistication to atomic materialism doesn't rescue it. Rather, its "atomism" disappears and its materialism looks increasingly doubtful.
Neomaterialism vs Matter
A third approach to materialism comes from André Comte-Sponville. He acknowledges nonseparability, a concept that other materialists ignore. D'Espagnat calls this approach "neomaterialism." Comte-Sponville gets himself into definitional circles trying to define "matter." It's supposed to be everything (but a vacuum), yet also produces the mind. However, if thoughts are real then they'd already be part of "matter."
Neutral vs Suggestive Terms
D'Espagnat also criticizes Comte-Sponville for using "image-carrying words" such as "matter." D'Espagnat notes that he himself doesn't use "matter," "God," or "spirit." Rather he tries to use neutral terms such as "mind-independent reality."
Nonseparability vs Neomaterialism
Comte-Sponville says the primary question is whether matter is idealist or spiritualist on the one side, or of a physical nature similar to what we experience on the macroscopic level. He's not an idealist or spiritualist, so he clearly believes in a physical reality. But as with scientific materialism the idea that reality bears any resemblance to our macroscopic experiences is blown out of the water by quantum physics. Nonseparability—which Comte-Sponville says is a "mystery"—is an issue whatever theory you choose. It ensures that "ultimate reality" is nothing like our everyday experiences.
Utility vs Evidence
Comte-Sponville eventually acknowledges that if matter includes thought then matter can't be defined as everything except thought. However, he says that ultimately what the "natural sciences" say is less important than neomaterialism's purpose: to explain mind from concepts other than mind, and to do all this to "defeat religion, superstition and illusion." D'Espagnat says this argument about the usefulness of neomaterialism just ends up being a circular argument. Deeply held convictions are not themselves an argument.
Empirical vs Ultimate Reality
Ontologically interpretable theories are not consistent with experiment. D'Espagnat says particles and their attributes have a well-defined existence only in relation to knowledge, hence the mind. Our knowledge of particles and other micro-objects is just that: a kind of knowledge, hence pointing to elements of an empirical, not ultimate, reality. D'Espagnat says that he and Comte-Sponville both agree that "existence" comes before "knowledge." But d'Espagnat says mind comes from an "independent reality" not "empirical reality." This a materialism does not make.
Convenient Ontologies vs Creeds
Back to materialism in general, d'Espagnat agrees it's a "tradition of research" as Laudan might put it. These traditions use values that neither explain nor predict. They are not testable.
These research traditions may include contradictory theories under their umbrella. But some scientists attach a lot of meaning to this identity, and aren't likely to give up on the term "materialism." On a day-to-day basis physicists are using and abusing terms from classical physics such as "particles." Since physicists would find it hard to move ahead just pondering observations and equations, these concepts are convenient components of a "fabricated ontology." D'Espagnat warns these scientists that relying on this ontology to support their rationality may be useful from a practical point of view. Just don't convert that choice into "an illegitimate doctrinal creed."
Take the 2-minute tour × Hi I'm currently learning Hamiltonian and Lagrangian Mechanics (which I think also encompasses the calculus of variations) and I've also grown interested in functional analysis. I'm wondering if there is any connection between functional analysis and Hamiltonian/ Lagrangian mechanics? Is there a connection between functional analysis and calculus of variations? What is the relationship between functional analysis and quantum mechanics; I hear that that functional analysis is developed in part by the need for better understanding of quantum mechanics? share|improve this question The answer for your questions is Yes. In particular, for Quantum Mechanics, see von Neumann, J.: Mathematical Foundations of Quantum Mechanics. Anyway you can find also more information and reference, about this relations in wikipedia. –  Leandro Jul 1 '10 at 0:07 Also see Reed and Simon, Methods of Modern Mathematical Physics, vols 1 - 4. One might argue that the entire tome (well, maybe less so the first half of volume 2 and parts of volume 3) is about application of functional analysis as inspired by the study of Schrodinger equation. –  Willie Wong Jul 1 '10 at 0:13 @Willie: I'm very much a non-applications kind of analyst, but doesn't very basic linear ODE theory have a tinge of functional analysis -- at least in early attempts to get somewhere? –  Yemon Choi Jul 1 '10 at 3:21 @Yemon: The proof of Picard-Lindeloef (and cousins) is a functional analysis proof, since it's a fixed point theorem in Banach spaces. It still doesn't give the theory a functional analytic flavour. The key problem is that the functions, one considers do not live in nice spaces. (Exceptions are known. e.g. Sturm--Liouville Theory, but that is more quantum mechanics). –  Helge Jul 1 '10 at 9:39 @Yemon: I am going to channel a physicist acquaintance of mine to illustrate why I don't really consider the sort of stuff in basic ODE theory functional analysis (though you are absolutely right that there is a an application of functional analysis). He said, during a (physics) seminar, to the nodding approval of the (physics) big wigs in the room: "... and as we all know, ODEs good; PDEs bad." –  Willie Wong Jul 1 '10 at 10:25 6 Answers 6 up vote 4 down vote accepted (1) Depends on what you mean by Hamiltonian and Lagrangian mechanics. If you mean the classical mechanics aspect as in, say, Vladimir Arnold's "Mathematical Methods in ..." book, then the answer is no. Hamiltonian and Lagrangian mechanics in that sense has a lot more to do with ordinary differential equations and symplectic geometry than with functional analysis. In fact, if you consider Lagrangian mechanics in that sense as an "example" of calculus of variations, I'd tell you that you are missing out on the full power of the variational principle. Now, if you consider instead classical field theory (as in physics, not as in algebraic number theory) derived from an action principle, otherwise known as Lagrangian field theory, then yes, calculus of variations is what it's all about, and functional analysis is King in the Hamiltonian formulation of Lagrangian field theory. Now, you may also consider quantum mechanics as "Hamiltonian mechanics", either through first quantization or through considering the evolution as an ordinary differential equation in a Hilbert space. 
Then through this (somewhat stretched) definition, you can argue that there is a connection between Hamiltonian mechanics and functional analysis, just because to understand ODEs on a Hilbert space it is necessary to understand operators on the space. (2) Mechanics aside, functional analysis is deeply connected to the calculus of variations. In the past forty years or so, most of the development in this direction (that I know of) are within the community of nonlinear elasticity, in which objects of study are regularity properties, and existence of solutions, to stationary points of certain "energy functionals". The methods involved found most applications in elliptic type operators. For evolutionary equations, functional analysis plays less well with the calculus of variations for two reasons: (i) the action is often not bounded from below and (ii) reasonable spaces of functions often have poor integrability, so it is rather difficult to define appropriate function spaces to study. (Which is not to say that they are not done, just less developed.) (3) See Eric's answer and my comment about Reed and Simon about connection of functional analysis and quantum mechanics. share|improve this answer Well,I'm not sure about classical mechanics,but functional analysis certainly has many applications in quantum mechanics via the modeling of wavefunctions by PDEs and operators defined on Hilbert and Banach spaces. A great book for beginning the study of these properties is the classic text by S.B.Sobolev,Some Applications of Functional Analysis in Mathematical Physics,now I believe in it's 4th edition and avaliable through the AMS. A more comprehensive text is the 4-volume work by Barry Simon and Louis Reed, which covers not only basic functional analysis,but all the basic applications to modern physics,such as spectral analysis and scattering theory. Lastly,some less well known applications can be found in Elliott Lieb and Micheal Loss' Analysis. share|improve this answer While Lou Reed has surely enriched the lives of many mathematicians, it is primarily through his musical work with the Velvet Underground rather than any collaboration with Barry Simon. You must be thinking of the mathematician Michael Reed. –  Tom LaGatta Jul 1 '10 at 21:22 One of the biggest problems in mathematical physics is actually to understand the link between Hamiltonian/Lagrangian mechanics and functional analysis. This is because classical mechanics is formulated in the former setting while quantum mechanics is formulated in the functional analysis setting. The act of going from classical mechanics to quantum mechanics is called quantization and basically consists of assigning functional analytic operators to classical observables, in a way that respects the Poisson and Lie brackets. For example in classical quantization we assign position to the operator of multiplication by x and we assign to momentum the operator $-i\frac{d}{dx}$. Both of these act on (a dense subset of) the space $L^2(\mathbb R)$, which is taken to be the space of wave functions in one dimension. You may want to take a look at the orbit method, which is the mathematics involved in a quantization scheme called geometric quantization. Some relevant MO discussion about this are: What is Quantization ? What does "quantization is not a functor" really mean? 
share|improve this answer Hamilton-Jacobi PDE is a formulation of classical mechanics (as far as I understand; I am no expert in physics) and the unique weak solution is found by a certain calculus of variations problem inspired by optimal control theory. Hamilton-Jacobi is also, I think, somewhat related to the Schrödinger equation. share|improve this answer Very good point. HJE skipped my mind (maybe because the OP mentioned explicitly Hamitonian and Lagrangian mechanics). So it does brings in a tie to calculus of variations. And as a PDE, the general existence of the solution does have a bit of a flavour of functional analysis. –  Willie Wong Jul 1 '10 at 10:11 One instance, where classical mechanics has to be treated with 'functional analysis' are infinite dimensional systems. The prototypical example is the Korteweg-de Vries equation $$ u_t + u_{xxx} + 6 u u_x = 0 $$ which a priori looks like a non-linear PDE. The key now is that it is completely integrable, which means that one can associate to an equivalent evolution for operators on Hilbert spaces. Define $$ L(t) = - \frac{d^2}{dx^2} + u(x,t) $$ as an operator on $L^2(\mathbb{R})$. Then this operator obeys $$ L_t = [P, L], $$ where $P$ is another operator, one can construct from $u$. (The specific form doesn't matter). The operators $P$ and $L$ are known as Lax Pair. (The $P$ stands for Peter not for Pair ☺ ). This is just the Heisenberg picture of quantum mechanics, so one can use the tools developed there, i.e. functional analysis, to investigate this equation. Of special importance is something known as scattering theory. Just on a final point: KdV is a limit of Navier--Stokes, which is a classical system. P.S.: In shameless self-promotion for some details on another system, the Toda Lattice, where it is easier to see that it is classical mechanics (one can write down the Hamiltonian easily), see here. I just made the post about KdV, since it is well-known. share|improve this answer I think you may have copied the KDV equation wrong. (Check the last term on the LHS.) And if you are going to mention scattering theory, you might as well spell out that $(L,P)$ are what is known as a Lax pair to aid people in literature searching. :) –  Willie Wong Jul 1 '10 at 10:18 Fixed these things. Unfortunately, this forum does not support smileys. There should be an ;-) somewhere instead of &#9786; –  Helge Jul 1 '10 at 11:33 i see the smiley just fine. –  Willie Wong Jul 1 '10 at 11:54 There is a very good discussion of this issue in L. Takhtajan's excellent text Quantum Mechanics for Mathematicians; see especially section 2.1. Chapter 1 also treats classical mechanics in a way that naturally extends to the quantum picture. The idea as I read it is this: both classical and quantum mechanics consider some underlying phase space, and a collection of observables, physical values you can measure. These naturally form an algebra. In classical mechanics you assume that you can measure different observables simultaneously without the measurements affecting one another; this turns out to correspond to the condition that the algebra of observables is commutative. A good example is thinking of observables as continuous functions on the phase space, and the Gelfand representation says that this is essentially the only example. So a functional analysis result says that you don't need to do too much functional analysis here (or rather, it's of a fairly trivial kind). In quantum mechanics, the algebra of observables might not be commutative. 
A good example of such a thing is operators on a Hilbert space (again, in some sense the only example). If you could use a finite-dimensional Hilbert space, you'd just be doing linear algebra. But it turns out the commutation relations that the physics requires can only be satisfied by unbounded operators. This forces you to use infinite-dimensional Hilbert spaces, and puts you into the realm of functional analysis. share|improve this answer Your Answer
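The last point, that the canonical commutation relation forces unbounded operators on an infinite-dimensional space, can be checked directly. The sketch below is mine, not from any of the answers: it verifies that the Schrödinger representation (position as multiplication by x, momentum as -iħ d/dx) satisfies [x, p] = iħ on an arbitrary smooth wavefunction, and then notes why no pair of finite matrices can do the same, since the trace of any commutator vanishes while the trace of iħ times the identity on an N-dimensional space is iħN, not zero.

```python
import sympy as sp
import numpy as np

# Canonical quantization in the Schroedinger representation
x, hbar = sp.symbols('x hbar', positive=True)
psi = sp.Function('psi')(x)

x_op = lambda f: x * f                          # position: multiplication by x
p_op = lambda f: -sp.I * hbar * sp.diff(f, x)   # momentum: -i*hbar d/dx

commutator = x_op(p_op(psi)) - p_op(x_op(psi))
print(sp.simplify(commutator))                  # -> I*hbar*psi(x), i.e. [x, p] = i*hbar

# Why finite matrices cannot reproduce this: the trace of any commutator is zero,
# but the trace of i*hbar*Identity on an N-dimensional space would be i*hbar*N != 0.
N = 4
X, P = np.random.rand(N, N), np.random.rand(N, N)
print(np.trace(X @ P - P @ X))                  # ~0 up to rounding error, for any X and P
```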
The Simple Harmonic Oscillator
Michael Fowler, University of Virginia
Einstein's Solution of the Specific Heat Puzzle
The simple harmonic oscillator, a nonrelativistic particle in a potential $\frac{1}{2}Cx^2$, is a system with wide application in both classical and quantum physics. The simplest model is a mass sliding backwards and forwards on a frictionless surface, attached to a fixed wall by a spring, the rest position defined by the natural length of the spring. Many of the mechanical properties of a crystalline solid can be understood by visualizing it as a regular array of atoms, a cubic array in the simplest instance, with nearest neighbors connected by springs (the valence bonds) so that an atom in a cubic crystal has six such springs attached, parallel to the x, y, and z axes. Provided the oscillations of the atoms are not too large, the springs behave well, and the atom sees itself in a potential $\frac{1}{2}C(x^2+y^2+z^2)$.
Now, as the solid is heated up, it should be a reasonable first approximation to take all the atoms to be jiggling about independently, and classical physics, the "Equipartition of Energy", would then assure us that at temperature T each atom would have on average energy 3kT, k being Boltzmann's constant. The specific heat per atom would then be just 3k. But this is not what is observed! The specific heats of all solids drop dramatically at low temperatures. What's going on here? It took Einstein to figure it out. Recall in the earlier lecture on Black Body Radiation that at low temperatures the blue modes were frozen out because energy could only be absorbed or emitted in quanta, photons, and the energy per quantum was directly proportional to the frequency, so only relatively low energy oscillators gained energy at low temperatures.
Einstein realized that exactly the same considerations must apply to mechanical oscillators, such as atoms in a solid. He assumed each atom to be an independent simple harmonic oscillator, and, just as in the case of black body radiation, the oscillators can only absorb energies in quanta. Consequently, at low enough temperatures there is rarely sufficient energy in the ambient thermal excitations to excite the oscillators, and they freeze out, just like blue oscillators in low temperature black body radiation. Einstein's picture was later somewhat refined—the basic set of oscillators was taken to be standing sound wave oscillations in the solid rather than individual atoms (even more like black body radiation in a cavity) but the main conclusion was not affected. In the more modern picture of sound waves in a solid, the "elementary" sound wave, analogous to the photon, is called the phonon, and has energy hf, where h is again Planck's constant, and f is the sound frequency.
Oscillations of molecules can usually be analyzed fairly accurately as simple harmonic oscillations, in particular the diatomic molecule. Of course, this picture breaks down for sufficiently large amplitude oscillations—eventually any molecule breaks up.
Wave Functions for Oscillators
What kind of wave function do we expect to see in a harmonic oscillator potential? Whatever kinetic energy we give the particle, if it gets far enough from the origin the potential energy will win out, and the wave will decay for the particle going further out. We know that when a particle penetrates a barrier of height $V_0$, say, greater than the particle's kinetic energy, the wave function decreases exponentially into the barrier, like $e^{-\alpha x}$, where $\alpha=\sqrt{2m(V_0-E)}/\hbar$.
But the simple harmonic oscillator potential is less penetrable than a flat barrier, because its height increases as $x^2$ as the particle penetrates, so we can see from the expression for $\alpha$ above that for large $x$, $\alpha$ itself increases linearly in $x$. Of course, this is something of a handwaving argument, the solution of a differential equation for a varying potential is not just a smooth sequence of solutions for constant potentials, but it does suggest that the right wavefunction for the oscillator potential might decay as $e^{-x^2/2a^2}$. We write it as $\psi(x)=Ae^{-x^2/2a^2}$, so that the probability distribution is proportional to $e^{-x^2/a^2}$, and $a$, which has the dimensions of length, is a natural measure of the spread of the wave function.
The Schrödinger equation for the simple harmonic oscillator is
$$-\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2}+\frac{1}{2}Cx^2\psi(x)=E\psi(x).$$
If $\psi(x)=Ae^{-x^2/2a^2}$, it is straightforward to verify that
$$\frac{d^2\psi(x)}{dx^2}=\left(\frac{x^2}{a^4}-\frac{1}{a^2}\right)\psi(x).$$
Substituting this value in Schrödinger's equation we find
$$\frac{\hbar^2}{2ma^2}\psi(x)-\frac{\hbar^2}{2m}\frac{x^2}{a^4}\psi(x)+\frac{1}{2}Cx^2\psi(x)=E\psi(x).$$
This equation can only be true for all $x$ if the $x^2$ terms are separately identically zero, that is,
$$\frac{\hbar^2}{2ma^4}=\frac{1}{2}C,\qquad\text{so}\qquad a^2=\frac{\hbar}{\sqrt{mC}}.$$
This fixes the wave function. Requiring the remaining terms to balance fixes the energy:
$$E=\frac{\hbar^2}{2ma^2}=\frac{\hbar}{2}\sqrt{\frac{C}{m}}=\frac{1}{2}\hbar\omega_0,$$
where $\omega_0$ is the classical oscillator frequency—given the particle mass m and the spring constant C, the classical equation of motion of the oscillator is
$$m\frac{d^2x}{dt^2}=-Cx.$$
Taking a solution of the form $x=A\sin\omega_0 t$ gives $\omega_0=\sqrt{C/m}$.
An important point here is that the energy is nonzero, just as it was for the square well. The central part of the wave function must have some curvature to join together the decreasing wave function on the left to that on the right. This "zero point energy" is sufficient in one case to melt the lattice—helium is liquid even down to absolute zero temperature (checked down to microkelvins!) because of this wave function spread.
Using the Spreadsheet
The spreadsheet can be used to find the energies of the eigenstates of the simple harmonic oscillator in a very similar way to those for the square well. One technical difference is that since the exponentially increasing function diverges more violently, it is almost impossible to avoid it becoming dominant at large x. However, provided the wave function is small over some range in x, in practice wave functions and energies are given quite accurately. One point worth noting is that just as for the square well, the quantum number for the states is just the number of nodes, or zeros. The argument we gave for the square well about how the extra nodes come into the wave function as the energy is increased also works here. For readers who have not at this point constructed the spreadsheet, which is a very educational exercise you should do at some point, you can download and play with one for the simple harmonic oscillator here: DOWNLOAD SPREADSHEET.
Time Dependent States of the Simple Harmonic Oscillator
Working with the time independent Schrödinger equation, as we have in the above, implies that we are restricting ourselves to solutions of the full Schrödinger equation which have a particularly simple time dependence, an overall phase factor $e^{-iEt/\hbar}$, and are states of definite energy E. However, the full time dependent Schrödinger equation is a linear equation, so if $\psi_1(x,t)$ and $\psi_2(x,t)$ are solutions, so is any linear combination $c_1\psi_1+c_2\psi_2$. Assuming $\psi_1$ and $\psi_2$ are definite energy solutions for different energies $E_1$ and $E_2$, the combination will not correspond to a definite energy—a measurement of the energy will give either $E_1$ or $E_2$, with appropriate probabilities. In the jargon, the combination is not an "eigenstate" of the energy—but it is still a perfectly good, physically realizable wave function.
It is instructive to examine a combination state of this form a little more closely. We know that for the ground state wave function
$$\psi_0(x,t)=Ae^{-x^2/2a^2}e^{-iE_0t/\hbar},$$
and for the first excited state,
$$\psi_1(x,t)=Bxe^{-x^2/2a^2}e^{-iE_1t/\hbar}.$$
Suppose we simply add terms of this type together (neglecting the overall normalization constant for now), for example
$$\psi(x,t)=\psi_0(x,t)+\psi_1(x,t).$$
Looking at this wave function for t = 0, we notice that the two terms have the same sign for x > 0, and opposite signs for x < 0. Therefore, sketching the probability distribution for the particle's position, it is heavily skewed to the right (positive x). However, the two terms have different time-dependent phases, differing by a factor $e^{-i(E_1-E_0)t/\hbar}=e^{-i\omega_0 t}$, so after time $\pi/\omega_0$ has elapsed, a factor of $-1$ has evolved between the terms. If we now look at the probability distribution $|\psi|^2$, it will be skewed to the left. In other words, if the state is not of definite energy, the probability distribution can vary in time. Of course, the total probability of finding the particle somewhere stays the same. Note that the probability distribution swings back and forth with the period of the oscillator. This discussion also implies that an ordinary pendulum, which clearly swings back and forth, cannot be in a state of definite energy!
The Three Dimensional Simple Harmonic Oscillator
It is very simple to go from the one dimensional to the three dimensional simple harmonic oscillator, because the potential $\frac{1}{2}C(x^2+y^2+z^2)$ is a sum of separate x, y, z potentials, and consequently any product of three solutions of the one-dimensional harmonic oscillator time independent Schrödinger equation will be a solution of the three-dimensional harmonic oscillator, with energy the sum of the three one-dimensional energies. So the states are labeled with three quantum numbers, one for each direction, each can be 0, 1, 2, … If we call these three quantum numbers $n_x$, $n_y$, $n_z$ then from what we already know about the one dimensional case, the energy of the three dimensional state must be $E=(n_x+n_y+n_z+\tfrac{3}{2})\hbar\omega_0$. For example, the lowest energy state of the three dimensional harmonic oscillator, the zero point energy, is $\tfrac{3}{2}\hbar\omega_0$. Obviously, the higher energy states are very degenerate—many sets of quantum numbers correspond to the same state—because the energy only depends on the sum of the three integer quantum numbers. Note that this degeneracy arises from the symmetry of the potential, the spring constant is the same in all three directions. If the potential were of the form $\frac{1}{2}(k_1x^2+k_2y^2+k_3z^2)$ for general $k$'s, there would be no degeneracy. (Such potentials approximately describe oscillations of an atom in an anisotropic crystal.)
Another approach to the three dimensional symmetric $\frac{1}{2}kr^2$ simple harmonic oscillator is to try a separable wave function in spherical polar coordinates, $\psi(r,\theta,\phi)=R(r)Y_{lm}(\theta,\phi)$. This approach is covered in detail in later courses in quantum mechanics, and is the standard method for treating the hydrogen atom (where the potential cannot be written as a sum of x, y, and z potentials). The angular functions describe the angular momentum of the particle. Some insight can be gained by considering the two dimensional case. Consider a pendulum swinging in the x direction (z is vertical). Now give it a kick so it also has swing in the y direction. In general, it will follow an elliptical path in the x, y plane. The right kick will make it a circle. For the circular orbit, the old fashioned Bohr quantization of angular momentum can be used to find the energy levels.
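The "count the nodes and tune the energy" procedure that the spreadsheet implements can also be done by brute-force matrix diagonalisation. The sketch below is my own, not Fowler's spreadsheet; the grid size, box length and the unit choice ħ = m = ω₀ = 1 are arbitrary. The lowest eigenvalues should come out close to the exact values (n + ½)ħω₀, with the zero-point energy ½ħω₀ at the bottom, and the nth eigenvector has n nodes, as described above.

```python
import numpy as np

# Finite-difference simple harmonic oscillator, with hbar = m = omega0 = 1 (a unit choice)
N, L = 1500, 20.0                    # grid points and box size (illustrative values)
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]

# H = -(1/2) d^2/dx^2 + (1/2) x^2 as a tridiagonal matrix
main = 1.0 / h**2 + 0.5 * x**2
off = -0.5 / h**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)[:5]
print(np.round(energies, 4))         # expect approximately 0.5, 1.5, 2.5, 3.5, 4.5
```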
Bra-ket notation
Bra-ket notation is a standard notation for describing quantum states in the theory of quantum mechanics composed of angle brackets (chevrons) and vertical bars. It can also be used to denote abstract vectors and linear functionals in pure mathematics. It is so called because the inner product (or dot product) of two states is denoted by a bracket, $\langle\phi|\psi\rangle$, consisting of a left part, $\langle\phi|$, called the bra, and a right part, $|\psi\rangle$, called the ket. The notation was invented by Paul Dirac, and is also known as Dirac notation.
Bras and kets
Most common use: Quantum mechanics
In quantum mechanics, the state of a physical system is identified with a ray in a complex separable Hilbert space, $\mathcal{H}$, or, equivalently, by a point in the projective Hilbert space of the system. Each vector in the ray is called a "ket" and written as $|\psi\rangle$, which would be read as "ket psi". (The $\psi$ can be replaced by any symbols, letters, numbers, or even words—whatever serves as a convenient label for the ket.) The ket can be viewed as a column vector and (given a basis for the Hilbert space) written out in components, $|\psi\rangle = (c_0, c_1, c_2, \ldots)^T$, when the considered Hilbert space is finite-dimensional. In infinite-dimensional spaces there are infinitely many components and the ket may be written in complex function notation, by prepending it with a bra (see below). For example, $\langle x|\psi\rangle = \psi(x) = c\,e^{-ikx}$.
Every ket $|\psi\rangle$ has a dual bra, written as $\langle\psi|$. For example, the bra corresponding to the ket $|\psi\rangle$ above would be the row vector $\langle\psi| = (c_0^*, c_1^*, c_2^*, \ldots)$. This is a continuous linear functional from $\mathcal{H}$ to the complex numbers $\mathbb{C}$, defined by $\langle\psi| : \mathcal{H} \to \mathbb{C}$, with
$$\langle\psi|\bigl(|\rho\rangle\bigr) = \operatorname{IP}\bigl(|\psi\rangle,|\rho\rangle\bigr) \quad\text{for all kets } |\rho\rangle,$$
where $\operatorname{IP}(\cdot,\cdot)$ denotes the inner product defined on the Hilbert space. Here an advantage of the bra-ket notation becomes clear: when we drop the parentheses (as is common with linear functionals) and meld the bars together we get $\langle\psi|\rho\rangle$, which is common notation for an inner product in a Hilbert space. This combination of a bra with a ket to form a complex number is called a bra-ket or bracket.
The bra is simply the conjugate transpose (also called the Hermitian conjugate) of the ket and vice versa. The notation is justified by the Riesz representation theorem, which states that a Hilbert space and its dual space are isometrically conjugate isomorphic. Thus, each bra corresponds to exactly one ket, and vice versa. More precisely, if $J: \mathcal{H} \to \mathcal{H}^*$ is the Riesz isomorphism between $\mathcal{H}$ and its dual space, then $\forall\,\phi \in \mathcal{H}:\ \langle\phi| = J(|\phi\rangle)$.
Note that this only applies to states that are actually vectors in the Hilbert space. Non-normalizable states, such as those whose wavefunctions are Dirac delta functions or infinite plane waves, do not technically belong to the Hilbert space. So if such a state is written as a ket, it will not have a corresponding bra according to the above definition. This problem can be dealt with in either of two ways. First, since all physical quantum states are normalizable, one can carefully avoid non-normalizable states. Alternatively, the underlying theory can be modified and generalized to accommodate such states, as in the Gelfand-Naimark-Segal construction or rigged Hilbert spaces.
In fact, physicists routinely use bra-ket notation for non-normalizable states, taking the second approach either implicitly or explicitly. In quantum mechanics the expression $\langle\phi|\psi\rangle$ (mathematically: the coefficient for the projection of $\psi$ onto $\phi$) is typically interpreted as the probability amplitude for the state $\psi$ to collapse into the state $\phi$.
More general uses
Bra-ket notation can be used even if the vector space is not a Hilbert space. In any Banach space B, the vectors may be notated by kets and the continuous linear functionals by bras. Over any vector space without topology, we may also notate the vectors by kets and the linear functionals by bras. In these more general contexts, the bracket does not have the meaning of an inner product, because the Riesz representation theorem does not apply.
Linear operators
If $A : H \to H$ is a linear operator, we can apply $A$ to the ket $|\psi\rangle$ to obtain the ket $(A|\psi\rangle)$. Linear operators are ubiquitous in the theory of quantum mechanics. For example, observable physical quantities are represented by self-adjoint operators, such as energy or momentum, whereas transformative processes are represented by unitary linear operators such as rotation or the progression of time.
Operators can also be viewed as acting on bras from the right hand side. Composing the bra $\langle\phi|$ with the operator $A$ results in the bra $(\langle\phi|A)$, defined as a linear functional on $H$ by the rule
$$\bigl(\langle\phi|A\bigr)\,|\psi\rangle = \langle\phi|\,\bigl(A|\psi\rangle\bigr).$$
This expression is commonly written as $\langle\phi|A|\psi\rangle$. If the same state vector appears on both bra and ket side, this expression gives the expectation value, or mean or average value, of the observable represented by operator $A$ for the physical system in the state $|\psi\rangle$, written as $\langle\psi|A|\psi\rangle$.
A convenient way to define linear operators on $H$ is given by the outer product: if $\langle\phi|$ is a bra and $|\psi\rangle$ is a ket, the outer product $|\phi\rangle\langle\psi|$ denotes the rank-one operator that maps the ket $|\rho\rangle$ to the ket $|\phi\rangle\langle\psi|\rho\rangle$ (where $\langle\psi|\rho\rangle$ is a scalar multiplying the vector $|\phi\rangle$). One of the uses of the outer product is to construct projection operators. Given a ket $|\psi\rangle$ of norm 1, the orthogonal projection onto the subspace spanned by $|\psi\rangle$ is $|\psi\rangle\langle\psi|$.
Just as kets and bras can be transformed into each other (making $|\psi\rangle$ into $\langle\psi|$) the element from the dual space corresponding with $A|\psi\rangle$ is $\langle\psi|A^\dagger$, where $A^\dagger$ denotes the Hermitian conjugate of the operator $A$. It is usually taken as a postulate or axiom of quantum mechanics that any operator corresponding to an observable quantity (shortly called observable) is self-adjoint, that is, it satisfies $A^\dagger = A$. Then the identity
$$\langle\psi|A|\psi\rangle^* = \langle\psi|A^\dagger|\psi\rangle = \langle\psi|A|\psi\rangle$$
holds (for the first equality, use the scalar product's conjugate symmetry and the conversion rule from the preceding paragraph). This implies that expectation values of observables are real.
Bra-ket notation was designed to facilitate the formal manipulation of linear-algebraic expressions. Some of the properties that allow this manipulation are listed herein. In what follows, $c_1$ and $c_2$ denote arbitrary complex numbers, $c^*$ denotes the complex conjugate of $c$, $A$ and $B$ denote arbitrary linear operators, and these properties are to hold for any choice of bras and kets.
• Since bras are linear functionals,
$$\langle\phi|\,\bigl(c_1|\psi_1\rangle + c_2|\psi_2\rangle\bigr) = c_1\langle\phi|\psi_1\rangle + c_2\langle\phi|\psi_2\rangle.$$
• By the definition of addition and scalar multiplication of linear functionals in the dual space,
$$\bigl(c_1\langle\phi_1| + c_2\langle\phi_2|\bigr)\,|\psi\rangle = c_1\langle\phi_1|\psi\rangle + c_2\langle\phi_2|\psi\rangle.$$
Given any expression involving complex numbers, bras, kets, inner products, outer products, and/or linear operators (but not addition), written in bra-ket notation, the parenthetical groupings do not matter (i.e., the associative property holds). For example:
$$\langle\psi|(A|\phi\rangle) = (\langle\psi|A)|\phi\rangle,$$
$$(A|\psi\rangle)\langle\phi| = A(|\psi\rangle\langle\phi|),$$
and so forth. The expressions can thus be written, unambiguously, with no parentheses whatsoever. Note that the associative property does not hold for expressions that include non-linear operators, such as the antilinear time reversal operator in physics.
Hermitian conjugation
Bra-ket notation makes it particularly easy to compute the Hermitian conjugate (also called dagger, and denoted †) of expressions. The formal rules are:
• The Hermitian conjugate of a bra is the corresponding ket, and vice-versa.
• The Hermitian conjugate of a complex number is its complex conjugate.
• The Hermitian conjugate of the Hermitian conjugate of anything (linear operators, bras, kets, numbers) is itself, i.e., $(x^\dagger)^\dagger = x$.
• Given any combination of complex numbers, bras, kets, inner products, outer products, and/or linear operators, written in bra-ket notation, its Hermitian conjugate can be computed by reversing the order of the components, and taking the Hermitian conjugate of each.
These rules are sufficient to formally write the Hermitian conjugate of any such expression; some examples are as follows:
• Kets: $\bigl(c_1|\psi_1\rangle + c_2|\psi_2\rangle\bigr)^\dagger = c_1^*\langle\psi_1| + c_2^*\langle\psi_2|$.
• Inner products: $\langle\phi|\psi\rangle^* = \langle\psi|\phi\rangle$.
• Matrix elements: $\langle\phi|A|\psi\rangle^* = \langle\psi|A^\dagger|\phi\rangle$ and $\langle\phi|A^\dagger B^\dagger|\psi\rangle^* = \langle\psi|BA|\phi\rangle$.
• Outer products: $\bigl(c_1|\phi_1\rangle\langle\psi_1| + c_2|\phi_2\rangle\langle\psi_2|\bigr)^\dagger = c_1^*|\psi_1\rangle\langle\phi_1| + c_2^*|\psi_2\rangle\langle\phi_2|$.
Composite bras and kets
Two Hilbert spaces $V$ and $W$ may form a third space $V \otimes W$ by a tensor product. In quantum mechanics, this is used for describing composite systems. If a system is composed of two subsystems described in $V$ and $W$ respectively, then the Hilbert space of the entire system is the tensor product of the two spaces. (The exception to this is if the subsystems are actually identical particles. In that case, the situation is a little more complicated.) If $|\psi\rangle$ is a ket in $V$ and $|\phi\rangle$ is a ket in $W$, the direct product of the two kets is a ket in $V \otimes W$. This is written variously as $|\psi\rangle|\phi\rangle$ or $|\psi\rangle \otimes |\phi\rangle$ or $|\psi\phi\rangle$ or $|\psi,\phi\rangle$.
Representations in terms of bras and kets
In quantum mechanics, it is often convenient to work with the projections of state vectors onto a particular basis, rather than the vectors themselves. The reason is that the former are simply complex numbers, and can be formulated in terms of partial differential equations (see, for example, the derivation of the position-basis Schrödinger equation). This process is very similar to the use of coordinate vectors in linear algebra.
For instance, the Hilbert space of a zero-spin point particle is spanned by a position basis $\{|\mathbf{x}\rangle\}$, where the label $\mathbf{x}$ extends over the set of position vectors. Starting from any ket $|\psi\rangle$ in this Hilbert space, we can define a complex scalar function of $\mathbf{x}$, known as a wavefunction:
$$\psi(\mathbf{x}) \;\stackrel{\text{def}}{=}\; \langle\mathbf{x}|\psi\rangle.$$
It is then customary to define linear operators acting on wavefunctions in terms of linear operators acting on kets, by
$$A\,\psi(\mathbf{x}) \;\stackrel{\text{def}}{=}\; \langle\mathbf{x}|A|\psi\rangle.$$
For instance, the momentum operator $\mathbf{p}$ has the following form:
$$\mathbf{p}\,\psi(\mathbf{x}) \;\stackrel{\text{def}}{=}\; \langle\mathbf{x}|\mathbf{p}|\psi\rangle = -i\hbar\nabla\psi(\mathbf{x}).$$
One occasionally encounters an expression like $-i\hbar\nabla|\psi\rangle$. This is something of an abuse of notation, though a fairly common one. The differential operator must be understood to be an abstract operator, acting on kets, that has the effect of differentiating wavefunctions once the expression is projected into the position basis: $-i\hbar\nabla\langle\mathbf{x}|\psi\rangle$. For further details, see rigged Hilbert space.
The unit operator
Consider a complete orthonormal system (basis), $\{\,e_i \mid i \in \mathbb{N}\,\}$, for a Hilbert space $H$, with respect to the norm from an inner product $\langle\cdot,\cdot\rangle$. From basic functional analysis we know that any ket $|\psi\rangle$ can be written as
$$|\psi\rangle = \sum_{i\in\mathbb{N}} \langle e_i|\psi\rangle\,|e_i\rangle,$$
with $\langle\cdot|\cdot\rangle$ the inner product on the Hilbert space. From the commutativity of kets with (complex) scalars it now follows that
$$\sum_{i\in\mathbb{N}} |e_i\rangle\langle e_i| = \hat{1}$$
must be the unit operator, which sends each vector to itself. This can be inserted in any expression without affecting its value, for example
$$\langle v|w\rangle = \langle v|\sum_{i\in\mathbb{N}}|e_i\rangle\langle e_i|w\rangle = \langle v|\sum_{i\in\mathbb{N}}|e_i\rangle\langle e_i|\sum_{j\in\mathbb{N}}|e_j\rangle\langle e_j|w\rangle = \langle v|e_i\rangle\langle e_i|e_j\rangle\langle e_j|w\rangle,$$
where in the last identity the Einstein summation convention has been used.
In quantum mechanics it often occurs that little or no information about the inner product $\langle\psi|\phi\rangle$ of two arbitrary (state) kets is present, while it is still possible to say something about the expansion coefficients $\langle\psi|e_i\rangle = \langle e_i|\psi\rangle^*$ and $\langle e_i|\phi\rangle$ of those vectors with respect to a chosen (orthonormalized) basis. In this case it is particularly useful to insert the unit operator into the bracket one time or more.
Notation used by mathematicians
The object physicists are considering when using the "bra-ket" notation is a Hilbert space (a complete inner product space). Let $\mathcal{H}$ be a Hilbert space and $h \in \mathcal{H}$. What physicists would denote as $|h\rangle$ is the vector itself. That is, $(|h\rangle) \in \mathcal{H}$. Let $\mathcal{H}^*$ be the dual space of $\mathcal{H}$. This is the space of linear functionals on $\mathcal{H}$. The isomorphism $\Phi : \mathcal{H} \to \mathcal{H}^*$ is defined by $\Phi(h) = \phi_h$, where for all $g \in \mathcal{H}$ we have
$$\phi_h(g) = \operatorname{IP}(h,g) = (h,g) = \langle h,g\rangle = \langle h|g\rangle,$$
where $\operatorname{IP}(\cdot,\cdot)$, $(\cdot,\cdot)$, $\langle\cdot,\cdot\rangle$ and $\langle\cdot|\cdot\rangle$ are just different notations for expressing an inner product between two elements in a Hilbert space (or for the first three, in any inner product space).
Notational confusion arises when identifying $\phi_h$ and $g$ with $\langle h|$ and $|g\rangle$ respectively. This is because of literal symbolic substitutions. Let $\phi_h = H = \langle h|$ and $g = G = |g\rangle$. This gives
$$\phi_h(g) = H(g) = H(G) = \langle h|(G) = \langle h|\bigl(|g\rangle\bigr).$$
One ignores the parentheses and removes the double bars. Some properties of this notation are convenient since we are dealing with linear operators and composition acts like a ring multiplication.
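As a purely illustrative footnote of my own (not part of the article above), the whole notational apparatus collapses to ordinary matrix algebra in a finite-dimensional space: kets are column vectors, bras are their conjugate transposes, outer products are rank-one matrices, and the completeness relation $\sum_i |e_i\rangle\langle e_i| = \hat{1}$ is a sum of projectors. The numbers below are arbitrary.

```python
import numpy as np

ket_psi = np.array([[1.0 + 1j], [2.0], [0.5j]])     # |psi> as a column vector (arbitrary numbers)
ket_phi = np.array([[0.0], [1.0], [1.0]])

bra_psi = ket_psi.conj().T                          # <psi| is the conjugate transpose
print(bra_psi @ ket_phi)                            # the bra-ket <psi|phi>, a complex number

# Projector |phi><phi| / <phi|phi> built from an outer product
P = ket_phi @ ket_phi.conj().T / (ket_phi.conj().T @ ket_phi)
print(np.allclose(P @ P, P))                        # projectors satisfy P^2 = P

# Completeness: the sum of |e_i><e_i| over an orthonormal basis is the identity
basis = np.eye(3)
resolution = sum(np.outer(basis[:, i], basis[:, i].conj()) for i in range(3))
print(np.allclose(resolution, np.eye(3)))

# Hermitian conjugation reverses order: (A|psi>)^dagger = <psi|A^dagger
A = np.array([[0, 1j, 0], [-1j, 0, 0], [0, 0, 2.0]])
print(np.allclose((A @ ket_psi).conj().T, bra_psi @ A.conj().T))
```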
Homework Help: Double Potential Well
1. Apr 24, 2010 #1
V(x) = [tex]\left\{\infty \textrm{ for } x<0[/tex] [tex]\left\{0 \textrm{ for } 0<x<a[/tex] [tex]\left\{V_0 \textrm{ for } a<x<a+2b[/tex] [tex]\left\{0 \textrm{ for } a+2b<x<2a+2b[/tex] [tex]\left\{\infty \textrm{ for } 2a+2b<x[/tex]
Set up the relevant equations in each region, write down the appropriate solution and then show that the wavenumber of the wave functions inside and outside the barrier satisfy a transcendental equation.
2. Relevant equations
3. The attempt at a solution
I have basically used Schrödinger equations for the energy for regions 1, 2 and 3, but this is where I get stuck. I have to show the energies that work (hope that makes sense). I'm not supposed to solve for [tex]E>V_0, E=V_0 or E<V_0[/tex] but find the energies that work. I am a bit confused about the boundary conditions also. I set them up the same as a finite potential barrier, but do I need boundary conditions for x<0 and x>2a+2b? As these sections are infinite, we should just get 100% reflection? I am also confused about the barrier in the middle. When solving for [tex]E>V_0, E=V_0 or E<V_0[/tex] we have different Schrödinger equations and different boundary conditions. How do I do it for any value of E? I guess you all know by now I am pretty stuck lol. Hope someone can help =) and hope my LaTeX and writing is easy to understand =P
2. jcsd
3. Apr 25, 2010 #2
User Avatar Staff Emeritus Science Advisor Homework Helper Education Advisor
I'm not exactly sure what that means, but I'd try considering each case separately even if you're not going to have them in your final solution, just so you can develop an intuition for the problem and its solutions. Yes, you'll get 100% reflection and no penetration into those regions. You know that the wavefunction vanishes when x<0 or x>2a+2b because the potential is infinite in those regions. You still need continuity of the wavefunction at the boundaries, but because you're dealing with an infinite potential, the derivative doesn't need to be continuous.
4. Apr 25, 2010 #3
V(x)= \left\{ \begin{array}{cc} \infty & \textrm{ for } x<0 \\ 0 & \textrm{ for } 0 < x < a \\ V_0 & \textrm{ for } a < x < a+2b \\ 0 & \textrm{ for } a+2b < x < 2a+2b \\ \infty & \textrm{ for } 2a+2b < x \end{array} \right.
Last edited: Apr 25, 2010
5. Apr 25, 2010 #4
You cannot have the wavefunction and the potential infinite for the same region! .. please try to solve the question step by step..
6. Apr 25, 2010 #5
Whoops, I just copied some LaTeX and forgot to replace Psi with V(x), my bad.
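Not part of the thread, but a useful sanity check once a transcendental equation has been derived: the same spectrum can be obtained by discretising the Hamiltonian on a grid and diagonalising it. The parameter values below are placeholders of my own (with units chosen so that ħ²/2m = 1, i.e. H = -d²/dx² + V), and the hard walls are imposed simply by restricting the grid to 0 < x < 2a+2b.

```python
import numpy as np

# Illustrative parameters only (units with hbar^2 / 2m = 1)
a, b, V0 = 1.0, 0.25, 50.0
L = 2 * a + 2 * b
N = 1500
x = np.linspace(0.0, L, N + 2)[1:-1]               # interior grid points; psi = 0 on the walls
h = x[1] - x[0]

V = np.where((x > a) & (x < a + 2 * b), V0, 0.0)   # barrier of height V0 between the two wells

H = (np.diag(2.0 / h**2 + V)
     + np.diag(-np.ones(N - 1) / h**2, 1)
     + np.diag(-np.ones(N - 1) / h**2, -1))

print(np.round(np.linalg.eigvalsh(H)[:4], 3))      # lowest levels come in nearly degenerate pairs,
                                                   # split by tunnelling through the central barrier
```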
Is science sometimes in danger of getting tunnel vision? Recently published ebook author, Ian Miller, looks at other possible theories arising from data that we think we understand. Can looking problems in a different light give scientists a different perspective? Do We Understand The Chemical Bond? Following the Alternative interpretations theme, I shall write a series of posts about the chemical bond. As to why, and I hope to suggest that there is somewhat more to the chemical bond than we now consider. I suspect the chemical bond is something almost all chemists "know" what it is, but most would have trouble articulating it. We can calculate its properties, or at least we believe we can, but do we understand what it is? I think part of the problem here is that not very many people actually think about what quantum mechanics implies. In the August Chemistry World it was stated that to understand molecules, all you have to do is to solve the Schrödinger equation for all the particles that are present. However, supposing this were possible, would you actually understand what is going on? How many chemists can claim to understand quantum mechanics, at least to some degree? We know there is something called "wave particle duality" but what does that mean? There are a number of interpretations of quantum mechanics, but to my mind the first question is, is there actually a wave? There are only two answers to such a discrete question: yes or no. De Broglie and Bohm said yes, and developed what they call the pilot wave theory. I agree with them, but I have made a couple of alterations, so I call my modification the guidance wave. The standard theory would answer no. There is no wave, and everything is calculated on the basis of a mathematical formalism. Each of these answers raises its own problems. The problem with there being a wave piloting or guiding the particle is that there is no physical evidence for the wave. There is absolutely no evidence so far that can be attributed solely to the wave because all we ever detect is the particle. The "empty wave" cannot be detected, and there have been efforts to find it. Of course just because you cannot find something does not mean it is not there; it merely means it is not detectable with whatever tool you are using, or it is not where you are looking. For my guidance wave, the problem is somewhat worse in some ways, although better in others. My guidance wave transmits energy, which is what waves do. This arises because the phase velocity of a wave equals E/p, where E is the energy and p the momentum. The problem is, while the momentum is unambiguous (the momentum of the particle) what is the energy? Bohm had a quantum potential, but the problem with this is it is not assignable because his relationship for it did not lead to a definable value. I have argued that to make the two slit experiment work, the phase velocity should equal the particle velocity, so that both arrive at the slits at the same time, and that is one of the two differences between my guidance wave and the pilot wave. The problem with that is, it puts the energy of the system at twice the particle kinetic energy. The question then is, why cannot we detect the energy in the wave? My answer probably requires another dimension. The wave function is known to be complex; if you try to make it real, e.g. represent it as a sine wave, quantum mechanics does not work. However, the "non-real" wave has its problems. 
If there is actually nothing there, how does the wave make the two-slit experiment work? The answer that the "particle" goes through both slits is demonstrably wrong, although there has been a lot of arm-waving to preserve this option. For example, if you shine light on electrons in the two slit experiment, it is clear the electron only goes through one slit. What we then see is claims that this procedure "collapsed the wave function", and herein lies a problem with such physics: if it is mysterious enough, there is always an escape clause. However, weak measurements have shown that photons go through only one slit, and the diffraction pattern still arises, exactly according to Bohm's calculations (Kocsis, S. and 6 others. 2011. Observing the Average Trajectories of Single Photons in a Two-Slit Interferometer. Science 332: 1170–1173.) There is another issue. If the wave has zero energy, the energy of the particle is known, and following Heisenberg, the phase velocity of the wave is half that of the particle. That implies everything happens, then the wave catches up and sorts things out. That seems to me to be bizarre in the extreme. So, you may ask, what has all this to do with the chemical bond? Well, my guidance wave approach actually leads to a dramatic simplification because if the waves transmit energy that equals the particle energy, then the stationary state can now be reduced to a wave problem. As an example of what I mean, think of the sound coming from a church organ pipe. In principle you could calculate it from the turbulent motion of all the air particles, and you could derive equations to statistically account for all the motion. Alternatively, you could argue that there will be sound, and it must form a standing wave in the pipe, so the sound frequency is defined by the dimensions of the pipe. That is somewhat easier, and also, in my opinion, it conveys more information. All of which is all very well, but where does it take us? I hope to offer some food for thought in the posts that will follow.
Posted by Ian Miller on Aug 28, 2017 12:19 AM Europe/London
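The arithmetic behind the phase-velocity remark is easy to check. The sketch below is my own, uses sympy, and assumes only the nonrelativistic kinetic energy E = p²/2m; it shows that the de Broglie phase velocity E/p is then half the particle velocity, and becomes equal to the particle velocity only if the wave is credited with twice the kinetic energy, which is the modification the post argues for.

```python
import sympy as sp

m, v = sp.symbols('m v', positive=True)
p = m * v                       # particle momentum
E_kin = p**2 / (2 * m)          # nonrelativistic kinetic energy

v_phase_standard = sp.simplify(E_kin / p)        # phase velocity E/p with E = kinetic energy
v_phase_guidance = sp.simplify(2 * E_kin / p)    # phase velocity if the wave carries twice that energy

print(v_phase_standard)   # v/2
print(v_phase_guidance)   # v
```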
High Energy Particle Physics
Source-Free Electromagnetism's Canonical Fields Reveal the Free-Photon Schrödinger Equation
Authors: Steven Kenneth Kauffmann
Classical equations of motion that are first-order in time and conserve energy can only be quantized after their variables have been transformed to canonical ones, i.e., variables in which the energy is the system's Hamiltonian. The source-free version of Maxwell's equations is purely dynamical, first-order in time and has a well-defined nonnegative conserved field energy, but is decidedly noncanonical. That should long ago have made source-free Maxwell equation canonical Hamiltonization a research priority, and afterward, standard textbook fare, but textbooks seem unaware of the issue. The opposite parities of the electric and magnetic fields and consequent curl operations that typify Maxwell's equations are especially at odds with their being canonical fields. Transformation of the magnetic field into the transverse part of the vector potential helps but is not sufficient; further simple nonnegative symmetric integral transforms, which commute with all differential operators, are needed for both fields; such transforms also supplant the curls in the equations of motion. The canonical replacements of the source-free electromagnetic fields remain transverse-vector fields, but are more diffuse than their predecessors, albeit less diffuse than the transverse vector potential. Combined as the real and imaginary parts of a complex field, the canonical fields prove to be the transverse-vector wave function of a time-dependent Schrödinger equation whose Hamiltonian operator is the quantization of the free photon's square-root relativistic energy. Thus proper quantization of the source-free Maxwell equations is identical to second quantization of free photons that have normal square-root energy. There is no physical reason why first and second quantization of any relativistic free particle ought not to proceed in precise parallel, utilizing the square-root Hamiltonian operator. This natural procedure leaves no role for the completely artificial Klein-Gordon and Dirac equations, as accords with their grossly unphysical properties.
Comments: 12 pages. Also archived as arXiv:1011.6578 [physics.gen-ph].
Submission history: [v1] 1 Dec 2010
The Role of Decoherence in Quantum Mechanics

Interference phenomena are a well-known and crucial aspect of quantum mechanics, famously exemplified by the two-slit experiment. There are situations, however, in which interference effects are artificially or spontaneously suppressed. The theory of decoherence is precisely the study of (spontaneous) interactions between a system and its environment that lead to such suppression of interference. We shall make more precise what we mean by this in Section 1, which discusses the concept of suppression of interference and gives a simplified survey of the theory, emphasising features that will be relevant to the following discussion.

In fact, the term decoherence refers to two largely overlapping areas of research. The characteristic feature of the first (often called ‘dynamical’ or ‘environmental’ decoherence) is the study of concrete models of (spontaneous) interactions between a system and its environment that lead to suppression of interference effects. That of the second (the theory of ‘decoherent histories’ or ‘consistent histories’) is an abstract (and in fact more general) formalism that captures the essential features of the phenomenon of decoherence. The two are obviously closely related, and will both be reviewed in turn in Section 1.

Decoherence is relevant (or is claimed to be relevant) to a variety of questions ranging from the measurement problem to the arrow of time, and in particular to the question of whether and how the ‘classical world’ may emerge from quantum mechanics. This entry mainly deals with the role of decoherence in relation to the main problems and approaches in the foundations of quantum mechanics. Specifically, Section 2 analyses the claim that decoherence solves the measurement problem. It also discusses the exacerbation of the problem through the inclusion of environmental interactions, the idea of emergence of classicality, and the motivation for discussing decoherence together with approaches to the foundations of quantum mechanics. Section 3 then reviews the relation of decoherence to some of the main foundational approaches. Finally, in Section 4 we mention suggested applications that would push the role of decoherence even further.

Suppression of interference has of course featured in many papers since the beginning of quantum mechanics, such as Mott's (1929) analysis of alpha-particle tracks. The modern foundation of decoherence as a subject in its own right was laid by H. D. Zeh in the early 1970s (Zeh 1970; 1973). Equally influential were the papers by W. Zurek from the early 1980s (Zurek 1981; 1982). Some of these earlier examples of decoherence (e.g., suppression of interference between left-handed and right-handed states of a molecule) are mathematically more accessible than more recent ones. A concise and readable introduction to the theory is provided by Zurek in Physics Today (1991). (This article was followed by publication of several letters with Zurek's replies (1993), which highlight controversial issues.) More recent surveys are the ones by Zeh (1995), which devotes much space to the interpretation of decoherence, Zurek (2003), and the books on decoherence by Giulini et al. (1996) and Schlosshauer (2007).[1]

1. Essentials of Decoherence

The two-slit experiment is a paradigm example of an interference experiment.
One repeatedly sends electrons or other particles through a screen with two narrow slits, the electrons impinge upon a second screen, and we ask for the probability distribution of detections over the surface of the screen. In order to calculate this, one cannot just take the probabilities of passage through the slits, multiply with the probabilities of detection at the screen conditional on passage through either slit, and sum over the contributions of the two slits.[2] There is an additional so-called interference term in the correct expression for the probability, and this term depends on both wave components that pass through one or the other slit.

There are, however, situations in which this interference term (for detections at the screen) is not observed, i.e. in which the classical probability formula applies. This happens for instance when we perform a detection at the slits, whether or not we believe that measurements are related to a ‘true’ collapse of the wave function (i.e. that only one of the components survives the measurement and proceeds to hit the screen). The disappearance of the interference term, however, can also happen spontaneously, when no collapse (true or otherwise) is presumed to happen, namely if some other systems (say, sufficiently many stray cosmic particles scattering off the electron) suitably interact with the wave between the slits and the screen. In this case, the reason why the interference term is not observed is that the electron has become entangled with the stray particles.[3] The phase relation between the two components of the wave function, which is responsible for interference, is well-defined only at the level of the larger system composed of electron and stray particles, and can produce interference only in a suitable experiment including the larger system. Probabilities for results of measurements performed only on the electron are calculated as if the wave function had collapsed to one or the other of its two components, but in fact the phase relations have merely been distributed over a larger system.[4] It is this phenomenon of suppression of interference through suitable interaction with the environment that we call ‘dynamical’ or ‘environmental’ decoherence.

1.1 Dynamical decoherence

The study of ‘dynamical’ decoherence consists to a large extent in the exploration of concrete spontaneous interactions that lead to suppression of interference. Several features of interest arise in models of such interactions (although by no means are all such features common to all models).

One feature of these environmental interactions is that they suppress interference between states from some preferred set, be it a discrete set of states (e.g. left- and right-handed states in models of chiral molecules, or the upper and lower component of the wave function in our simple example of the two-slit experiment), or some continuous set (e.g. the coherent states of a harmonic oscillator). The intuitive picture is one in which the environment monitors the system of interest by continuously ‘measuring’ some quantity characterised by the set of preferred states (‘eigenstates of the decohering variable’). Formally, this is reflected in the (at least approximate) diagonalisation of the reduced state of the system of interest in the basis of privileged states (whether discrete or continuous). These preferred states can be characterised in terms of their robustness or stability with respect to the interaction with the environment.
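To make the "diagonalisation of the reduced state" statement concrete, here is a minimal two-level sketch (the toy numbers are mine, not from the article): a which-path degree of freedom coupled to a single environment degree of freedom. The off-diagonal (interference) terms of the reduced density matrix are proportional to the overlap of the two environment states, so they vanish as those states become orthogonal.

```python
import numpy as np

def reduced_density_matrix(overlap):
    """Particle in (|upper> + |lower>)/sqrt(2); the environment ends up in |e0> or |e1>
    depending on the path, with <e0|e1> = overlap. Returns the particle's 2x2 reduced
    density matrix in the (upper, lower) basis."""
    e0 = np.array([1.0, 0.0])
    e1 = np.array([overlap, np.sqrt(1.0 - overlap**2)])   # arbitrary real environment states
    psi = (np.kron(np.array([1.0, 0.0]), e0) + np.kron(np.array([0.0, 1.0]), e1)) / np.sqrt(2.0)
    rho = np.outer(psi, psi).reshape(2, 2, 2, 2)           # pure state of particle + environment
    return np.trace(rho, axis1=1, axis2=3)                 # partial trace over the environment

for ov in (1.0, 0.5, 0.0):     # identical, partially distinguishable, fully distinguishable
    print(ov, np.round(reduced_density_matrix(ov), 3))
# The off-diagonal entries are overlap/2: 0.5, 0.25, 0.0 -- interference is progressively suppressed.
```

Nothing here picks out the preferred basis by itself; it only illustrates what "approximate diagonalisation of the reduced state" means once the interaction has correlated the paths with (nearly) orthogonal environment states.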
Roughly speaking, the system gets entangled with the environment, but the states between which interference is suppressed are the ones that would themselves get least entangled with the environment under further interaction. The robustness of the preferred states is related to the fact that information about them is stored in a redundant way in the environment (say, because a Schrödinger cat has interacted with so many stray particles: photons, air molecules, dust). This information can later be acquired by an observer without further disturbing the system (we observe—however that may be interpreted—whether the cat is alive or dead by intercepting on our retina a small fraction of the light that has interacted with the cat).

In this connection, one also says that decoherence induces ‘effective superselection rules’. The concept of a (strict) superselection rule means that there are some observables—called classical in technical terminology—that commute with all observables (for a review, see Wightman (1995)). Intuitively, these observables are infinitely robust, since no possible interaction can disturb them (at least as long as the interaction Hamiltonian is considered to be an observable). By an effective superselection rule one means, analogously, that certain observables (e.g. chirality) will not be disturbed by the interactions that actually take place.[5]

Interaction potentials are functions of position, so the preferred states will tend to be related to position. In the case of the chiral molecule, the left- and right-handed states are indeed characterised by different spatial configurations of the atoms in the molecule. In the case of the harmonic oscillator, one should think of the environment coupling to (‘measuring’) approximate eigenstates of position, or rather approximate joint eigenstates of position and momentum (since information about the time of flight is also recorded in the environment), thus leading to coherent states being preferred. (Rough intuitions should suffice here; see also the entries on quantum mechanics and measurement in quantum theory.)

The resulting localisation can be on a very short length scale, i.e. the characteristic length above which coherence is dispersed (‘coherence length’) can be very short. A speck of dust of radius $a = 10^{-5}$ cm floating in the air will have interference suppressed between (position) components with a width of $10^{-13}$ cm. Even more strikingly, the time scales for this process are minute. This coherence length is reached after a microsecond of exposure to air, and suppression of interference on a length scale of $10^{-12}$ cm is achieved already after a nanosecond.[6]

One can thus argue that generically the states privileged by decoherence at the level of components of the quantum state are localised in position or both position and momentum, and therefore kinematically classical. (One should be wary of overgeneralisations, as already pointed out, but this is certainly a feature of many concrete examples that have been investigated.)

What about classical dynamical behaviour? Interference is a dynamical process that is distinctively quantum, so, intuitively, lack of interference might be thought of as classical-like. To make the intuition more precise, think of the two components of the wave going through the slits. If there is an interference term in the probability for detection at the screen, it must be the case that both components are indeed contributing to the particle manifesting itself on the screen.
But if the interference term is suppressed, one can at least formally imagine that each detection at the screen is a manifestation of only one of the two components of the wave function, either the one that went through the upper slit, or the one that went through the lower slit. Thus, there is a sense in which one can recover at least one dynamical aspect of a classical description, a trajectory of sorts: from the source to either slit (with a certain probability), and from the slit to the screen (also with a certain probability). That is, one recovers a ‘classical trajectory’ at least in the sense used in classical stochastic processes. In the case of continuous models of decoherence based on the analogy of approximate joint measurements of position and momentum, one can do even better. In this case, the trajectories at the level of the components (the trajectories of the preferred states) will approximate surprisingly well the corresponding classical (Newtonian) trajectories. Intuitively, one can explain this by noting that if the preferred states (which are wave packets that are narrow in position and remain so because they are also narrow in momentum) are the states that tend to get least entangled with the environment, they will tend to follow the Schrödinger equation more or less undisturbed. But in fact, narrow wave packets follow approximately Newtonian trajectories, at least if the external potentials in which they move are uniform enough along the width of the packets (results of this kind are known as ‘Ehrenfest theorems’). Thus, the resulting ‘histories’ will be close to Newtonian ones (on the relevant scales).[7] The most intuitive physical example for this are the observed trajectories of alpha particles in a bubble chamber, which are indeed extremely close to Newtonian ones, except for additional tiny ‘kinks’. As a matter of fact, one should expect slight deviations from Newtonian behaviour. These are due both to the tendency of the individual components to spread and to the detection-like nature of the interaction with the environment, which further enhances the collective spreading of the components (a narrowing in position corresponds to a widening in momentum). These deviations appear as noise, i.e. particles being kicked slightly off course.[8] According to the type of system, and the details of the interaction, the noise component might actually dominate the motion, and one obtains (classical) Brownian-motion-type behaviour. Other examples include trajectories of a harmonic oscillator in equilibrium with a thermal bath, and trajectories of particles in a gas (without which the classical derivation of thermodynamics from statistical mechanics would make no sense; see below Section 4). None of these features are claimed to obtain in all cases of interaction with some environment. It is a matter of detailed physical investigation to assess which systems exhibit which features, and how general the lessons are that we might learn from studying specific models. In particular, one should beware of common overgeneralisations. For instance, decoherence does not affect only and all ‘macroscopic systems’. True, middle-sized objects, say, on the Earth's surface will be very effectively decohered by the air in the atmosphere, and this is an excellent example of decoherence at work. On the other hand, there are also very good examples of decoherence-like interactions affecting microscopic systems, such as in the interaction of alpha particles with the gas in a bubble chamber. 
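The Ehrenfest point can be made concrete with a short, self-contained simulation (a sketch with arbitrary parameters, not one of the decoherence models discussed here): a narrow wave packet in a harmonic potential is evolved with a split-step Schrödinger solver, and its mean position is compared with the Newtonian trajectory for the same initial conditions. For a harmonic potential the agreement is exact up to numerical error; for potentials that vary slowly over the width of the packet it holds approximately, which is the content of the Ehrenfest theorems.

```python
import numpy as np

hbar, m, omega = 1.0, 1.0, 1.0
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
V = 0.5 * m * omega**2 * x**2

x0, sigma = 3.0, 0.7                                  # narrow packet, initially at rest at x0
psi = np.exp(-(x - x0)**2 / (4 * sigma**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

dt, steps = 1e-3, 5000
expV = np.exp(-0.5j * V * dt / hbar)                  # half-step potential propagator
expT = np.exp(-1j * hbar * k**2 * dt / (2 * m))       # full-step kinetic propagator
diffs = []
for n in range(steps):
    psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))
    if n % 100 == 0:
        t = (n + 1) * dt
        mean_x = np.sum(x * np.abs(psi)**2) * dx      # quantum expectation value <x>(t)
        diffs.append(abs(mean_x - x0 * np.cos(omega * t)))   # Newtonian trajectory
print(max(diffs))                                     # small: <x>(t) tracks the classical motion
```

Leaving out the environment entirely, as this sketch does, is of course exactly what decoherence models do not do; the sketch only illustrates why narrow, non-spreading components follow approximately Newtonian paths once they are there.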
And further, there are arguably macroscopic systems for which interference effects are not suppressed. For instance, it has been shown to be possible to sufficiently shield SQUIDs (a type of superconducting devices) from decoherence for the purpose of observing superpositions of different macroscopic currents—contrary to what one had expected (see e.g. Leggett 1984, and esp. 2002, Section 5.4). Anglin, Paz and Zurek (1997) examine some less well-behaved models of decoherence and provide a useful corrective as to the limits of decoherence.

1.2 Decoherent histories

As we have just discussed, when interference is suppressed, e.g. in a two-slit experiment, we can also speak (at least formally) about the ‘trajectory’ followed by an individual electron. In particular, we can assign probabilities to the alternative trajectories, so that probabilities for detection at the screen can be calculated by summing over intermediate events. The decoherent histories formalism (originating with Griffiths 1984; Omnès 1988, 1989; and Gell-Mann and Hartle 1990) takes this as the defining feature of decoherence. In a nutshell, the formalism is as follows.[9]

Take orthogonal families of projections with

$\sum_{\alpha_1}P_{\alpha_1}=1,\quad\ldots,\quad\sum_{\alpha_n}P_{\alpha_n}=1.$   (1)

Given times $t_1,\ldots,t_n$ one defines histories as time-ordered sequences of projections at the given times, choosing one projection from each family, respectively. Such histories form a so-called alternative and exhaustive set of histories. Take a state $\rho(t_0)$. We wish to define probabilities for the set of histories. If one takes the usual probability formula based on repeated application of the Born rule, one obtains

$\mathrm{Tr}\big(P_{\alpha_n}U_{t_nt_{n-1}}\cdots P_{\alpha_1}U_{t_1t_0}\,\rho(t_0)\,U^*_{t_1t_0}P_{\alpha_1}\cdots U^*_{t_nt_{n-1}}P_{\alpha_n}\big)$   (2)

(where $U_{ts}$ represents the unitary evolution operator from time $s$ to time $t$, and its adjoint $U^*_{ts}$ the inverse evolution). We shall take (2) as defining ‘candidate probabilities’. In general these probabilities exhibit interference, in the sense that if one sums over intermediate events (if one ‘coarse-grains’ the histories), one does not obtain probabilities of the same form (2). But we can impose, as a consistency or (weak) decoherence condition, precisely that interference terms should vanish for any pair of distinct histories. It is easy to see that this condition takes the form

$\mathrm{Re\,Tr}\big(P_{\alpha'_n}U_{t_nt_{n-1}}\cdots P_{\alpha'_1}U_{t_1t_0}\,\rho(t_0)\,U^*_{t_1t_0}P_{\alpha_1}\cdots U^*_{t_nt_{n-1}}P_{\alpha_n}\big)=0$   (3)

for any pair of distinct histories. If this is satisfied, we can view (2) as defining the distribution functions for a stochastic process with the histories as trajectories. (There are some differences between the various authors, but we shall gloss them over.)

Decoherence in the sense of this abstract formalism is thus defined simply by the condition that (quantum) probabilities for wave components at a later time may be calculated from (quantum) probabilities for wave components at an earlier time and (quantum) conditional probabilities according to the standard classical formula, i.e. as if the wave had collapsed. Models of dynamical decoherence fall under the scope of decoherence thus defined, but the abstract definition is much more general. A stronger form of the decoherence condition, namely the vanishing of both the real and imaginary part of the trace expression in (3) (the ‘decoherence functional’), can be used to prove theorems on the existence of (later) ‘permanent records’ of (earlier) events in a history, which is a generalisation of the idea of ‘environmental monitoring’.
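A minimal numerical sketch of the candidate probabilities (2) and the weak decoherence condition (3), for a toy qubit with two times and projective families of my own choosing (this is not a model from the literature): class operators $C_h=P_{\alpha_2}UP_{\alpha_1}$ are formed, the decoherence functional $D[h,h']=\mathrm{Tr}(C_h\,\rho_0\,C_{h'}^*)$ is computed, and its off-diagonal real parts are inspected.

```python
import numpy as np
from itertools import product

# Toy qubit, two times. Projective families: sigma_z or sigma_x eigenprojectors.
P_z = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
P_x = [plus, np.eye(2, dtype=complex) - plus]

theta = 0.7                                              # arbitrary unitary between the two times
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
rho0 = plus.copy()                                       # arbitrary initial state |+x><+x|

def decoherence_functional(fam1, fam2):
    """D[h, h'] = Tr(C_h rho0 C_h'^dagger), with class operators C_h = P_{a2} U P_{a1}."""
    hists = list(product(range(2), repeat=2))
    D = np.zeros((4, 4), dtype=complex)
    for i, (a1, a2) in enumerate(hists):
        for j, (b1, b2) in enumerate(hists):
            Ca, Cb = fam2[a2] @ U @ fam1[a1], fam2[b2] @ U @ fam1[b1]
            D[i, j] = np.trace(Ca @ rho0 @ Cb.conj().T)
    return D

for fam1, label in ((P_x, "x at t1, z at t2"), (P_z, "z at t1, z at t2")):
    D = decoherence_functional(fam1, P_z)
    probs = np.real(np.diag(D))                              # candidate probabilities, eq. (2)
    off = np.max(np.abs(np.real(D - np.diag(np.diag(D)))))   # largest violation of eq. (3)
    print(label, "| probabilities:", np.round(probs, 3), "| max off-diagonal |Re D|:", round(off, 3))
# The first family of histories satisfies the consistency condition; the second does not,
# so its candidate probabilities are not additive under coarse-graining.
```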
For instance, if the state $\rho$ is a pure state $|\psi\rangle\langle\psi|$, this strong decoherence condition is equivalent, for all $n$, to the orthogonality of the vectors $P_{\alpha_n}U_{t_nt_{n-1}}\cdots P_{\alpha_1}U_{t_1t_0}|\psi\rangle$, and this in turn is equivalent to the existence of a set of orthogonal projections $R^{t_i}_{\alpha_1\ldots\alpha_i}$ (for any $t_i\le t_n$) that extend consistently the given set of histories and are perfectly correlated with the histories of the original set (Gell-Mann and Hartle 1990). Note, however, that these ‘generalised records’ need not be stored in separate degrees of freedom, such as an environment or measuring apparatus.[10]

Various authors have taken the theory of decoherent histories as providing an interpretation of quantum mechanics. For instance, Gell-Mann and Hartle sometimes talk of decoherent histories as a neo-Everettian approach, while Omnès appears to think of histories along neo-Copenhagen lines (perhaps as an experimental context creating a ‘quantum phenomenon’ that can stretch back into the past).[11] Griffiths (2002) has probably developed the most detailed of these interpretational approaches (trying to do justice to various earlier criticisms, e.g. by Dowker and Kent (1995, 1996)).[12] In itself, however, the formalism is interpretationally neutral and has the particular merit of bringing out two crucial conceptual points: that wave components can be reidentified over time, and that if we do so, we can formally identify ‘trajectories’ for the system. As such, it is particularly useful as a tool for describing decoherence in connection with attempts to solve the problem of the classical regime in the context of various different interpretational approaches to quantum mechanics. In particular, it has become a standard tool in discussions of Everett interpretations, where ‘worlds’ can be formally described as histories in a consistent family (see, e.g., Saunders 1993).

2. Conceptual Appraisal

2.1 Solving the measurement problem?

The fact that interference is typically very well suppressed between localised states of macroscopic objects suggests that it is relevant to why macroscopic objects in fact appear to us to be in localised states. A stronger claim is that decoherence is not only relevant to this question but by itself already provides the complete answer. In the special case of measuring apparatuses, it would explain why we never observe an apparatus pointing, say, to two different results, i.e. decoherence would provide a solution to the measurement problem of quantum mechanics. As pointed out by many authors, however (e.g. Adler 2003; Zeh 1995, pp. 14–15), this claim is not tenable.

The measurement problem, in a nutshell, runs as follows. Quantum mechanical systems are described by wave-like mathematical objects (vectors) of which sums (superpositions) can be formed (see the entry on quantum mechanics). Time evolution (the Schrödinger equation) preserves such sums. Thus, if a quantum mechanical system (say, an electron) is described by a superposition of two given states, say, spin in x-direction equal +1/2 and spin in x-direction equal -1/2, and we let it interact with a measuring apparatus that couples to these states, the final quantum state of the composite will be a sum of two components, one in which the apparatus has coupled to (has registered) x-spin = +1/2, and one in which the apparatus has coupled to (has registered) x-spin = -1/2.
The problem is that, while we may accept the idea of microscopic systems being described by such sums, the meaning of such a sum for the (composite of electron and) apparatus is not immediately obvious. Now, what happens if we include decoherence in the description? Decoherence tells us, among other things, that plenty of interactions are taking place all the time in which differently localised states of macroscopic systems couple to different states of their environment. In particular, the differently localised states of the macroscopic system could be the states of the pointer of the apparatus registering the different x-spin values of the electron. By the same argument as above, the composite of electron, apparatus and environment will be a sum of (i) a state corresponding to the environment coupling to the apparatus coupling in turn to the value +1/2 for the spin, and of (ii) a state corresponding to the environment coupling to the apparatus coupling in turn to the value -1/2 for the spin. Again, the meaning of such a sum for the composite system is not obvious. We are left with the following choice whether or not we include decoherence: either the composite system is not described by such a sum, because the Schrödinger equation actually breaks down and needs to be modified, or it is described by such a sum, but then we need to understand what that means, and this requires giving an appropriate interpretation of quantum mechanics. Thus, decoherence as such does not provide a solution to the measurement problem, at least not unless it is combined with an appropriate interpretation of the theory (whether this be one that attempts to solve the measurement problem, such as Bohm, Everett or GRW; or one that attempts to dissolve it, such as various versions of the Copenhagen interpretation). Some of the main workers in the field such as Zeh (2000) and (perhaps) Zurek (1998) suggest that decoherence is most naturally understood in terms of Everett-like interpretations (see below Section 3.3, and the entries on Everett's relative-state interpretation and on the many-worlds interpretation). Unfortunately, naive claims of the kind that decoherence gives a complete answer to the measurement problem are still somewhat part of the ‘folklore’ of decoherence, and deservedly attract the wrath of physicists (e.g. Pearle 1997) and philosophers (e.g. Bub 1997, Chap. 8) alike. (To be fair, this ‘folk’ position has at least the merit of attempting to subject measurement interactions to further physical analysis, without assuming that measurements are a fundamental building block of the theory.) 2.2 Exacerbating the measurement problem Decoherence is clearly neither a dynamical evolution contradicting the Schrödinger equation, nor a new interpretation of the theory. As we shall discuss, however, it both reveals important dynamical effects within the Schrödinger evolution, and may be suggestive of possible interpretations of the theory. As such it has much to offer to the philosophy of quantum mechanics. At first, however, it seems that discussion of environmental interactions should actually exacerbate the existing problems. Intuitively, if the environment is carrying out, without our intervention, lots of approximate position measurements, then the measurement problem ought to apply more widely, also to these spontaneously occurring measurements. 
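To see in miniature why including the environment does not by itself remove the superposition, here is a toy three-qubit sketch (one "electron" degree of freedom, a one-qubit "apparatus" and a one-qubit "environment", with idealised perfect correlations of my own choosing): the reduced state of electron plus apparatus has lost its interference terms, yet the global state is still one pure superposition.

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron(*vs):
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

# Electron prepared in an equal superposition of two spin states; apparatus and
# environment each end up perfectly correlated with which state it is.
psi = (kron(up, up, up) + kron(down, down, down)) / np.sqrt(2.0)

rho_total = np.outer(psi, psi)
purity_total = np.trace(rho_total @ rho_total)             # 1.0: still a single pure superposition

# reduced state of electron + apparatus: trace out the environment qubit
rho_sa = np.trace(rho_total.reshape(4, 2, 4, 2), axis1=1, axis2=3)
purity_sa = np.trace(rho_sa @ rho_sa)                       # 0.5: an improper mixture

print("global purity:", round(purity_total, 3), "| reduced purity:", round(purity_sa, 3))
print("largest interference term left in the reduced state:",
      np.max(np.abs(rho_sa - np.diag(np.diag(rho_sa)))))    # 0.0
```

The reduced description looks like a classical either/or, but the either/or reading is exactly what still needs an interpretation; that is the sense in which decoherence leaves the measurement problem where it was.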
Indeed, while it is well-known that localised states of macroscopic objects spread very slowly with time under the free Schrödinger evolution (i.e., if there are no interactions), the situation turns out to be different if they are in interaction with the environment. Although the different components that couple to the environment will be individually incredibly localised, collectively they can have a spread that is many orders of magnitude larger. That is, the state of the object and the environment could be a superposition of zillions of very well localised terms, each with slightly different positions, and that are collectively spread over a macroscopic distance, even in the case of everyday objects.[13] Given that everyday macroscopic objects are particularly subject to decoherence interactions, this raises the question of whether quantum mechanics can account for the appearance of the everyday world even apart from the measurement problem in the strict sense. To put it crudely: if everything is in interaction with everything else, everything is generically entangled with everything else, and that is a worse problem than measuring apparatuses being entangled with the measured systems. And indeed, discussing the measurement problem without taking decoherence (fully) into account may not be enough, as we shall illustrate by the case of some versions of the modal interpretation in Section 3.4. 2.3 Emergence of classicality What suggests that decoherence may be relevant to the issue of the classical appearance of the everyday world is that at the level of components of the wave function the quantum description of decoherence phenomena can display tantalisingly classical aspects. The question is then whether, if viewed in the context of any of the main foundational approaches to quantum mechanics, these classical aspects can be taken to explain corresponding classical aspects of the phenomena. The answer, perhaps unsurprisingly, turns out to depend on the chosen approach, and in the next section we shall discuss in turn the relation between decoherence and several of the main approaches to the foundations of quantum mechanics. Even more generally, one can ask whether the results of decoherence could thus be used to explain the emergence of the entire classicality of the everyday world, i.e. to explain both kinematical features such as macroscopic localisation and dynamical features such as approximately Newtonian or Brownian trajectories in all cases where such descriptions happen to be phenomenologically adequate. As we have mentioned already, there are cases in which a classical description is not a good description of a phenomenon, even if the phenomenon involves macroscopic systems. There are also cases, notably quantum measurements, in which the classical aspects of the everyday world are only kinematical (definiteness of pointer readings), while the dynamics is highly non-classical (indeterministic response of the apparatus). In a sense, if we follow Bohr in requiring the world of classical concepts in order to describe in the first place ‘quantum phenomena’ (see the entry on the Copenhagen interpretation), then, if decoherence gives us indeed the everyday classical world, the quantum phenomena themselves would become a consequence of decoherence (Zeh 1995, p. 33; see also Bacciagaluppi 2002, Section 6.2). 
The question of explaining the classicality of the everyday world becomes the question of whether one can derive from within quantum mechanics the conditions necessary to discover and practise quantum mechanics itself, and thus, in Shimony's (1989) words, close the epistemological circle. In this generality the question is clearly too hard to answer, depending as it does on how far the physical programme of decoherence (Zeh 1995, p. 9) can be successfully developed. We shall thus postpone the (partly speculative) discussion of how far this programme might go until Section 4. 3. Decoherence and Approaches to Quantum Mechanics There is a wide range of approaches to the foundations of quantum mechanics. The term ‘approach’ here is more appropriate than the term ‘interpretation’, because several of these approaches are in fact modifications of the theory, or at least introduce some prominent new theoretical aspects. A convenient way of classifying these approaches is in terms of their strategies for dealing with the measurement problem. Some approaches, so-called collapse approaches, seek to modify the Schrödinger equation, so that superpositions of different ‘everyday’ states do not arise or are very unstable. Such approaches may have intuitively little to do with decoherence since they seek to suppress precisely those superpositions that are created by decoherence. Nevertheless their relation to decoherence is interesting. Among collapse approaches (Section 3.1), we shall discuss von Neumann's collapse postulate and theories of spontaneous localisation (for which see also the entry on collapse theories). Other approaches, known as ‘hidden variables’ approaches, seek to explain quantum phenomena as equilibrium statistical effects arising from a deeper-level theory, rather strongly in analogy with attempts at understanding thermodynamics in terms of statistical mechanics (see the entry on philosophy of statistical mechanics). Of these, the most developed are the so-called pilot-wave theories (Section 3.2), in particular the theory by de Broglie and Bohm (see also the entry on Bohmian mechanics). Finally, there are approaches that seek to solve (or dissolve) the measurement problem strictly by providing an appropriate interpretation of the theory. Slightly tongue in cheek, one can group together under this heading approaches as diverse as Everett interpretations (see the entries on Everett's relative-state interpretation and on the many-worlds interpretation), modal interpretations and the Copenhagen interpretation. We shall be analysing these approaches specifically in their relation to decoherence (we discuss the Everett interpretation in Section 3.3, the modal interpretations in Section 3.4, and the Copenhagen interpretation in Section 3.5). 3.1 Collapse approaches 3.1.1 Von Neumann It is notorious that von Neumann (1932) proposed that the observer's consciousness is somehow related to what he called Process I, otherwise known as the collapse postulate or the projection postulate, which in his book is treated on a par with the Schrödinger equation (his Process II). There is some ambiguity in how to interpret von Neumann. He may have been advocating some sort of special access to our own consciousness that makes it appear to us that the wave function has collapsed; this would suggest a phenomenological reading of Process I. 
Alternatively, he may have proposed that consciousness plays some causal role in precipitating the collapse; this would suggest that Process I is a physical process taking place in the world on a par with Process II.[14] In either case, von Neumann's interpretation relies on the insensitivity of the final predictions (for what we consciously record) to exactly where and when Process I is used in modelling the evolution of the quantum system. This is often referred to as the movability of the von Neumann cut between the subject and the object, or some similar phrase. Collapse could occur anywhere along the so-called von Neumann chain: when a particle impinges on a screen, or when the screen blackens, or when an automatic printout of the result is made, or in our retina, or along the optic nerve, or when ultimately consciousness is involved. Von Neumann thus needs to show that all of these models are equivalent, as far as the final predictions are concerned, so that he can indeed maintain that collapse is related to consciousness, while in practice applying the projection postulate at a much earlier (and more practical) stage in the description.

Von Neumann poses this problem in Section VI.1 of his book. In Section VI.2, by way of preparation, he discusses the relation between states of systems and subsystems, in particular the partial trace, and the biorthogonal decomposition theorem, i.e. the theorem stating that an entangled quantum state can always be written in the special form

$\sum_k c_k\,\varphi_k\otimes\xi_k$   (5)

for two suitable bases (note the perfect correlations in (5)). Then in Section VI.3, after discussing his insolubility argument (see again footnote 14), von Neumann shows that there always is a Hamiltonian that will lead from a state of the form $\sum_k c_k\,\varphi_k\otimes\xi_0$ to a state of the form (5). This concludes von Neumann's argument.

What von Neumann has shown is that, under suitable modelling of the measurement interaction, applying the collapse postulate directly to the measured observable or applying it to the pointer observable of the apparatus (or by extension to the ‘optic nerve signal observable’, etc.) leads to the same statistics of results. What he has not shown is that the assumption that the collapse occurs at the level of consciousness is equivalent to the assumption that it happens at any other earlier stage if one considers also other possible measurements that could be carried out along the von Neumann chain. Indeed, if collapse occurs only at the level of consciousness, it is in principle possible, instead of looking at the pointer, to perform a different measurement on the composite of system and apparatus that would detect interference between the different components of (5).

This is now precisely where decoherence plays a role. Indeed, while such measurements are possible in principle, decoherence will make them impossible to perform in practice. Therefore, if we assume that Process I is a real physical process, decoherence makes it in practice impossible to detect where along the measurement chain this process takes place, thus allowing von Neumann to postulate that it happens when consciousness gets involved. This aspect will be relevant also in the next subsection.
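A minimal numerical sketch of the biorthogonal (Schmidt) decomposition (5): writing an arbitrary bipartite pure state as a coefficient matrix, a singular value decomposition delivers the two suitable bases and the perfectly correlated form $\sum_k c_k\,\varphi_k\otimes\xi_k$. The example state below is a random choice, not anything from von Neumann's book.

```python
import numpy as np

dA, dB = 3, 4
rng = np.random.default_rng(0)
C = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))   # psi = sum_ij C[i,j] |i>|j>
C /= np.linalg.norm(C)

# Biorthogonal / Schmidt decomposition via SVD: C = U diag(c) Vh
U, c, Vh = np.linalg.svd(C, full_matrices=False)
phi = [U[:, k] for k in range(len(c))]       # orthonormal vectors of the first system
xi = [Vh[k, :] for k in range(len(c))]       # orthonormal vectors of the second system

# reconstruct psi = sum_k c_k phi_k (x) xi_k and compare with the original state
psi = C.reshape(-1)
psi_rebuilt = sum(c[k] * np.kron(phi[k], xi[k]) for k in range(len(c)))
print("reconstruction error:", np.linalg.norm(psi - psi_rebuilt))   # ~1e-16
print("Schmidt coefficients c_k:", np.round(c, 3))                  # the perfect correlations of (5)
```

The same routine, applied to an ideal post-measurement state of system plus apparatus, pairs the measured eigenstates with the corresponding pointer states (barring degeneracies), which is the form von Neumann's argument trades on.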
3.1.2 Spontaneous collapse theories

The best known theory of spontaneous collapse is the so-called GRW theory (Ghirardi, Rimini & Weber 1986), in which a material particle spontaneously undergoes localisation in the sense that at random times it experiences a collapse of the form used to describe approximate position measurements.[15] In the original model, the collapse occurs independently for each particle (a large number of particles thus ‘triggering’ collapse much more frequently); in later models the frequency for each particle is weighted by its mass, and the overall frequency for collapse is thus tied to mass density.[16]

Thus, formally, the effect of spontaneous collapse is the same as in some of the models of decoherence, at least for one particle.[17] Two crucial differences, on the other hand, are that we have ‘true’ collapse instead of suppression of interference (cf. above Section 1), and that spontaneous collapse occurs without there being any interaction between the system and anything else, while in the case of decoherence suppression of interference generally arises through interaction with the environment.

Can decoherence be put to use in GRW? The situation may be rather complex when the decoherence interaction does not approximately privilege position (e.g. when it selects for currents in a SQUID instead), because collapse and decoherence might actually ‘pull’ in different directions.[18] But in those cases in which the decoherence interaction also takes the form of approximate position measurements, the answer presumably boils down to a quantitative comparison. If collapse happens faster than decoherence, then the superposition of components relevant to decoherence will not have time to arise, and insofar as the collapse theory is successful in recovering classical phenomena, decoherence plays no role in this recovery. Instead, if decoherence takes place faster than collapse, then (as in von Neumann's case) the collapse mechanism can find ‘ready-made’ structures onto which to truly collapse the wave function. Simple comparison of the relevant rates in models of decoherence and in spontaneous collapse theories (Tegmark 1993, esp. Table 2) suggests that this is generally the case. Thus, it seems that decoherence should play a role also in spontaneous collapse theories.

A further aspect of the relation between decoherence and spontaneous collapse theories relates to the experimental testability of spontaneous collapse theories. Exactly as we have just discussed in the previous subsection in the context of von Neumann's Process I, if we assume that collapse is a real physical process, decoherence will make it extremely difficult in practice to detect empirically when and where exactly spontaneous collapse takes place (see the nice discussion of this point in Chapter 5 of Albert (1992)). Even worse, at least with the proviso that decoherence may be put to use also in no-collapse approaches such as pilot-wave or Everett (possibilities that we discuss in the next subsections), in all cases in which decoherence is faster than collapse, what might be interpreted as evidence for collapse could be reinterpreted as ‘mere’ suppression of interference (for instance in the case of measurements), and only those cases in which the collapse theory predicts collapse but the system is shielded from decoherence (or perhaps in which the two pull in different directions) could be used to test collapse theories experimentally.
One particularly bad scenario for experimental testability is related to the speculation (in the context of the ‘mass density’ version) that the cause of spontaneous collapse may be connected with gravitation. Tegmark 1993 (Table 2) quotes some admittedly uncertain estimates for the suppression of interference due to a putative quantum gravity, but they are quantitatively very close to the rate of destruction of interference due to the GRW collapse (at least outside of the microscopic domain). Similar conclusions are arrived at by Kay (1998). If there is indeed such a quantitative similarity between these possible effects, then it would become extremely difficult to distinguish between the two. In the presence of gravitation, any positive effect could be interpreted as support for either collapse or decoherence (with the above proviso). And in those cases in which the system is effectively shielded from decoherence (say, if the experiment is performed in free fall), if the collapse mechanism is indeed triggered by gravitational effects, then no collapse should be expected either. The relation between decoherence and spontaneous collapse theories is thus indeed far from straightforward. 3.2 Pilot-wave theories 3.2.1 De Broglie-Bohm and related theories Pilot-wave theories are no-collapse formulations of quantum mechanics that assign to the wave function the role of determining the evolution of (‘piloting’, ‘guiding’) the variables characterising the system, say particle configurations, as in de Broglie's (1928) and Bohm's (1952) theory, or fermion number density, as in Bell's (1987, Chap. 19) ‘beable’ quantum field theory, or again field configurations, as in various proposals for pilot-wave quantum field theories (for a recent survey, see Struyve 2011). De Broglie's idea was to modify classical Hamiltonian mechanics in such a way as to make it analogous to classical wave optics, by substituting for Hamilton and Jacobi's action function the phase S of a physical wave. Such a ‘wave mechanics’ of course yields non-classical motions, but in order to understand how de Broglie's dynamics relates to typical quantum phenomena, we must include Bohm's (1952, Part II) analysis of the appearance of collapse. In the case of measurements, Bohm argued that the wave function evolves into a superposition of components that are and remain separated in the total configuration space of measured system and apparatus, so that the total configuration is ‘trapped’ inside a single component of the wave function, which will guide its further evolution, as if the wave had collapsed (‘effective’ wave function). This analysis allows one to recover qualitatively the measurement collapse and by extension such typical quantum features as the uncertainty principle and the perfect correlations in an Einstein-Podolsky-Rosen experiment. (The quantitative aspects of the theory are also very well developed, but we shall not describe them here.) It is natural to extend this analysis from the case of measurements induced by an apparatus to that of ‘spontaneous measurements’ as performed by the environment in the theory of decoherence, thus applying the same strategy to recover both quantum and classical phenomena. The resulting picture is one in which de Broglie-Bohm theory, in cases of decoherence, describes the motion of particles that are trapped inside one of the extremely well localised components selected by the decoherence interaction. 
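A minimal sketch of the guidance idea just described, for the simplest case of a free one-dimensional Gaussian packet whose wave function is known in closed form (all parameters are illustrative): the configuration is propagated by the first-order guidance equation $v=(\hbar/m)\,\mathrm{Im}(\partial_x\psi/\psi)$.

```python
import numpy as np

hbar, m, sigma = 1.0, 1.0, 1.0        # illustrative units

def velocity(x, t):
    """de Broglie-Bohm guidance velocity v = (hbar/m) Im(d_x psi / psi) for a free
    Gaussian packet centred at the origin with zero mean momentum."""
    st = sigma * (1.0 + 1j * hbar * t / (2.0 * m * sigma**2))
    dlogpsi_dx = -x / (2.0 * sigma * st)
    return (hbar / m) * np.imag(dlogpsi_dx)

x0 = np.array([0.5, 1.0, 2.0])        # arbitrary initial configurations
x = x0.copy()
dt, steps = 1e-3, 3000                # integrate up to T = 3 with a midpoint rule
for n in range(steps):
    t = n * dt
    k1 = velocity(x, t)
    x = x + dt * velocity(x + 0.5 * dt * k1, t + 0.5 * dt)

T = steps * dt
scale = np.sqrt(1.0 + (hbar * T / (2.0 * m * sigma**2))**2)
print(np.round(x, 3))                 # numerically integrated trajectories
print(np.round(x0 * scale, 3))        # known exact result: they scale with the packet width
```

The trajectories fan out with the spreading packet and never cross one another, in line with the first-order character of the guidance equation noted just below; in a decoherence situation the same equation would confine the configuration to whichever well-localised, non-interfering component it happens to sit in.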
Thus, de Broglie-Bohm trajectories will partake of the classical motions on the level defined by decoherence (the width of the components). This use of decoherence would arguably resolve the puzzles discussed, e.g., by Holland (1996) with regard to the possibility of a ‘classical limit’ of de Broglie's theory. One baffling problem, for instance, is that trajectories with different initial conditions cannot cross in de Broglie-Bohm theory, because the wave guides the particles by way of a first-order equation, while, as is well known, Newton's equations are second-order and possible trajectories in Newton's theory do cross. Now, however, the non-interfering components produced by decoherence can indeed cross, and so will the trajectories of particles trapped inside them. The above picture is natural, but it is not obvious. De Broglie-Bohm theory and decoherence contemplate two a priori distinct mechanisms connected to apparent collapse: respectively, separation of components in configuration space and suppression of interference. While the former obviously implies the latter, it is equally obvious that decoherence need not imply separation in configuration space. One can expect, however, that decoherence interactions of the form of approximate position measurements will. If the main instances of decoherence are indeed coextensive with instances of separation in configuration, de Broglie-Bohm theory can thus use the results of decoherence relating to the formation of classical structures, while providing an interpretation of quantum mechanics that explains why these structures are indeed observationally relevant. In that case, the question that arises for de Broglie-Bohm theory is not only the standard question of whether all apparent measurement collapses can be associated with separation in configuration (by arguing that at some stage all measurement results are recorded in macroscopically different configurations), but also whether all appearance of classicality can be associated with separation in configuration space.[19] A discussion of the role of decoherence in pilot-wave theory in the form suggested above is still largely outstanding. An informal discussion is given in Bohm and Hiley (1993, Chap. 8), partial results are given by Appleby (1999), some simulations have been realised by Sanz and co-workers (e.g. Sanz and Borondo 2009); and a different approach is suggested by Allori (2001; see also Allori & Zanghì 2009). Appleby discusses Bohmian trajectories in a model of decoherence and obtains approximately classical trajectories, but under a special assumption.[20] The simulations currently published by Sanz and co-workers are based on simplified models, but fuller results have been announced.[21] Allori investigates in the first place the ‘short wavelength’ limit of de Broglie-Bohm theory (suggested by the analogy to the geometric limit in wave optics). The role of decoherence in her analysis is crucial but limited to maintaining the classical behaviour obtained under the appropriate short wavelength conditions, because the behaviour would otherwise break down after a certain time. While, as argued above, it appears plausible that decoherence might be instrumental in recovering the classicality of pilot-wave trajectories in the case of the non-relativistic particle theory, it is less clear whether this strategy might work equally well in the case of field theory. Doubts to this effect have been raised, e.g., by Saunders (1999) and by Wallace (2008). 
Essentially, these authors doubt whether the configuration-space variables, or some coarse-grainings thereof, are, indeed, decohering variables.[22] At least in the opinion of the present author, further detailed investigation is needed. 3.2.2 Nelson's stochastic mechanics Nelson's (1966, 1985) stochastic mechanics is strictly speaking not a pilot-wave theory. It is a proposal to recover the wave function and the Schrödinger equation as effective elements in the description of a fundamental diffusion process in configuration space. Insofar as the proposal is successful, however, it then shares many features with de Broglie-Bohm theory. In particular, the current velocity for the particles in Nelson's theory turns out to be equal to the de Broglie-Bohm velocity, and the particle distribution in Nelson's theory is equal to that in de Broglie-Bohm theory (in equilibrium). It follows that many results from pilot-wave theories can be imported into Nelson's stochastic mechanics. However, decoherence has been very little discussed in the literature on stochastic mechanics, if at all, and the strategies used in pilot-wave theories to recover the appearance of collapse and the emergence of a classical regime still need to be applied specifically in the case of stochastic mechanics. This would presumably also resolve some conceptual puzzles specific to Nelson's theory, such as the problem of two-time correlations raised in Nelson (2006). 3.3 Everett interpretations Over the years, since the original paper by Everett (1957), some very diverse ‘Everett interpretations’ have been proposed, which possibly only share the core intuition that a single wave function of the universe should be interpreted in terms of a multiplicity of ‘realities’ at some level or other. This multiplicity, however understood, is formally associated with components of the wave function in some decomposition.[23] Various such Everett interpretations, roughly speaking, differ as to how to identify the relevant components of the universal wave function, and how to justify such an identification (the so-called problem of the ‘preferred basis’ — although this may be a misnomer), and differ as to how to interpret the resulting multiplicity (various ‘many-worlds’ or various ‘many-minds’ interpretations), in particular with regard to the interpretation of the (emerging?) probabilities at the level of the components (problem of the ‘meaning of probabilities’). The last problem is perhaps the most hotly debated aspect of Everett. Clearly, decoherence enables reidentification over time of both observers and of results of repeated measurement (and thus definition of empirical frequencies). In recent years progress has been made especially along the lines of interpreting the probabilities in decision-theoretic terms for a ‘splitting’ agent (see in particular Deutsch (1999) and Wallace (2003b, 2007)).[24] The most useful application of decoherence to Everett, however, seems to be in the context of the problem of the preferred basis. 
Decoherence yields a natural solution to the problem, in that it identifies a class of ‘preferred’ states (not necessarily an orthonormal basis!), and allows one to reidentify them over time, so that one can identify ‘worlds’ with the trajectories defined by decoherence (or more abstractly with decoherent histories).[25] If part of the aim of Everett is to interpret quantum mechanics without introducing extra structure, in particular without postulating the existence of some preferred basis, then one will try to look for potentially relevant structures that are already present in the wave function. In this sense, decoherence is the ideal candidate for identifying ‘worlds’ (see e.g. Wallace 2003a). A justification for this identification can be variously given by suggesting that a ‘world’ should be a temporally extended structure and thus reidentification over time will be a necessary condition for defining worlds; or similarly by suggesting that in order for observers to have evolved there must be stable records of past events (Saunders 1993, and the unpublished Gell-Mann & Hartle 1994) (see the Other Internet Resources section below); or that observers must be able to access robust states, preferably through the existence of redundant information in the environment (Zurek's ‘existential interpretation’, 1998). Alternatively to some global notion of ‘world’, one can look at the components of the (mixed) state of a (local) system, either from the point of view that the different components defined by decoherence will separately affect (different components of the state of) another system, or from the point of view that they will separately underlie the conscious experience (if any) of the system. The former sits well with Everett's (1957) original notion of relative state, and with the relational interpretation of Everett preferred by Saunders (e.g. 1993) and, it would seem, Zurek (1998) (see the entry on Everett's relative-state interpretation). The latter leads directly to the idea of many-minds interpretations.[26] The idea of many minds was suggested early on by Zeh (2000; also 1995, p. 24). As Zeh puts it, von Neumann's motivation for introducing collapse was to save what he called ‘psycho-physical parallelism’ (arguably to be understood as supervenience of the mental on the physical: only one mental state is experienced, so there should be only one corresponding component in the physical state). In a decohering no-collapse universe one can instead introduce a new psycho-physical parallelism, in which individual minds supervene on each non-interfering component in the physical state. Zeh indeed suggests that, given decoherence, this is the most natural interpretation of quantum mechanics.[27] 3.4 Modal interpretations Modal interpretations originated with Van Fraassen (1973, 1991) as pure reinterpretations of quantum mechanics (other later versions coming more to resemble pilot-wave theories). Van Fraassen's basic intuition was that the quantum state of a system should be understood as describing a collection of possibilities, represented by components in the (mixed) quantum state. His proposal considers only decompositions at single instants, and is agnostic about reidentification over time. Thus, it can directly exploit only the fact that decoherence produces descriptions in terms of classical-like states, which will count as possibilities in Van Fraassen's interpretation. 
This ensures ‘empirical adequacy’ of the quantum description (a crucial concept in Van Fraassen's philosophy of science). The dynamical aspects of decoherence can be exploited indirectly, in that single-time components will exhibit records of the past, which ensure adequacy with respect to observations, but about whose veridicity Van Fraassen remains agnostic.

A different strand of modal interpretations is loosely associated with the (distinct) views of Kochen (1985), Healey (1989) and Dieks and Vermaas (e.g. 1998). We focus on the last of these to fix ideas. Unlike in Van Fraassen's proposal, the possible decompositions are here restricted to one singled out by a mathematical criterion (related to the biorthogonal decomposition theorem mentioned above in Section 3.1), and a dynamical picture is explicitly sought (and was later developed). In the case of an ideal (non-approximate) quantum measurement, this special decomposition coincides with that defined by the eigenstates of the measured observable and the corresponding pointer states, and the interpretation thus appears to solve the measurement problem (for this case at least). At least in Dieks's original intentions, however, the approach was meant to provide an attractive interpretation of quantum mechanics also in the case of decoherence interactions, since at least in simple models of decoherence the same kind of decomposition singles out more or less also those states between which interference is suppressed (with a proviso about very degenerate states).

However, this approach fails badly when applied to other models of decoherence, e.g., that in Joos and Zeh (1985, Section III.2). Indeed, it appears that in more general models of decoherence the components singled out by this version of the modal interpretation are given by delocalised states, and are unrelated to the localised components naturally privileged by decoherence (Donald 1998; Bacciagaluppi 2000). Note that Van Fraassen's original interpretation is untouched by this problem, and so are possibly some more recent modal or modal-like interpretations by Spekkens and Sipe (2001), Bene and Dieks (2002) and Berkovitz and Hemmo (2006).

Finally, some of the views espoused in the decoherent histories literature could be considered as cognate to Van Fraassen's views, identifying possibilities, however, at the level of possible courses of world history. Such ‘possible worlds’ would be those temporal sequences of (quantum) propositions satisfying the decoherence condition and in this sense supporting a description in terms of a probabilistic evolution. This view would be using decoherence as an essential ingredient, and in fact may turn out to be the most fruitful way yet of implementing modal ideas; a discussion in these terms has been outlined by Hemmo (1996).

3.5 Bohr's Copenhagen interpretation

Bohr is often credited with more or less the following view. Everyday concepts, in fact the concepts of classical physics, are indispensable to the description of any physical phenomena (in a way and terminology somewhat reminiscent of Kant's transcendental arguments). However, experimental evidence from atomic phenomena shows that classical concepts have fundamental limitations in their applicability: they can only give partial (complementary) pictures of physical objects.
While these limitations are quantitatively negligible for most purposes in dealing with macroscopic objects, they apply also at that level (as shown by Bohr's willingness to apply the uncertainty relations to parts of the experimental apparatus in the Einstein-Bohr debates), and they are of paramount importance when dealing with microscopic objects. Indeed, they shape the characteristic features of quantum phenomena, e.g., indeterminism. The quantum state is not an ‘intuitive’ (anschaulich, also translated as ‘visualisable’) representation of a quantum object, but only a ‘symbolic’ representation, a shorthand for the quantum phenomena that are constituted by applying the various complementary classical pictures. While it is difficult to pinpoint exactly what Bohr's views were (the concept and even the term ‘Copenhagen interpretation’ have been argued to be a later construct; see Howard 2004), it is clear that according to Bohr, classical concepts are autonomous from, and indeed conceptually prior to, quantum theory. If we understand the theory of decoherence as pointing to how classical concepts might in fact emerge from quantum mechanics, this seems to undermine Bohr's basic position. Of course it would be a mistake to say that decoherence (a part of quantum theory) contradicts the Copenhagen approach (an interpretation of quantum theory). However, decoherence does suggest that one might want to adopt alternative interpretations, in which it is the quantum concepts that are prior to the classical ones, or, more precisely, the classical concepts at the everyday level emerge from quantum mechanics (irrespectively of whether there are even more fundamental concepts, as in pilot-wave theories). In this sense, if the programme of decoherence is successful in the sense sketched in Section 2.3, it will indeed be a blow to Bohr's interpretation coming from quantum physics itself. On the other hand, Bohr's intuition that quantum mechanics as practised requires a classical domain would in fact be confirmed by decoherence, if it turns out that decoherence is indeed the basis for the phenomenology of quantum mechanics, as the Everettian and possibly the Bohmian analysis suggest.[28] As a matter of fact, Zurek (2003) locates his existential interpretation half-way between Bohr and Everett. 4. Scope of Decoherence We have already mentioned in Section 1.1 that some care has to be taken lest one overgeneralise conclusions based on examining only well-behaved models of decoherence. On the other hand, in order to assess the programme of explaining the emergence of classicality using decoherence (together with appropriate foundational approaches), one has to probe how far the applications of decoherence can be pushed. In this final section, we survey some of the further applications that have been proposed for decoherence, beyond the easier examples we have seen such as chirality or alpha-particle tracks. Whether decoherence can indeed be successfully applied to all of these fields will be in part a matter for further assessment, as more detailed models are proposed and investigated. A straightforward application of the techniques allowing one to derive Newtonian trajectories at the level of components has been employed by Zurek and Paz (1994) to derive chaotic trajectories in quantum mechanics. The problem with the quantum description of chaotic behaviour is that prima facie there should be none. 
Chaos is characterised roughly as extreme sensitivity in the behaviour of a system on its initial conditions, in the sense that the distance between the trajectories arising from different initial conditions increases exponentially in time. Since the Schrödinger evolution is unitary, it preserves all scalar products and all distances between quantum state vectors. Thus, it would seem, close initial conditions lead to trajectories that are uniformly close throughout all of time, and no chaotic behaviour is possible (‘problem of quantum chaos’). The crucial point that enables Zurek and Paz's analysis is that the relevant trajectories defined by decoherence are at the level of components of the state of the system. Unitarity is preserved because the vectors in the environment, to which these different components are coupled, are and remain orthogonal: how the components themselves more specifically evolve is immaterial. Explicit modelling yields a picture of quantum chaos in which different trajectories branch (a feature absent from classical chaos, which is deterministic) and then indeed diverge exponentially. As with the crossing of trajectories in de Broglie-Bohm theory (Section 3.2), one has behaviour at the level of components that is qualitatively different from the behaviour derived for wave functions of an isolated system. The idea of effective superselection rules was mentioned in Section 1.1. As pointed out by Giulini, Kiefer and Zeh (1995, see also Giulini et al. 1996, Section 6.4), the justification for the (strict) superselection rule for charge in quantum field theory can also be phrased in terms of decoherence. The idea is simple: an electric charge is surrounded by a Coulomb field (which electrostatically is infinitely extended; the argument can also be carried through using the retarded field, though). States of different electric charge of a particle are thus coupled to different, presumably orthogonal, states of its electric field. One can consider the far-field as an effectively uncontrollable environment that decoheres the particle (and the near-field), so that superpositions of different charges are indeed never observed. Another claim about the significance of decoherence relates to time asymmetry (see e.g. the entries on time asymmetry in thermodynamics and philosophy of statistical mechanics), in particular to whether decoherence can explain the apparent time-directedness in our (classical) world. The issue is again one of time-directedness at the level of components emerging from a time-symmetric evolution at the level of the universal wave function (presumably with special initial conditions). Insofar as (apparent) collapse is indeed a time-directed process, decoherence will have direct relevance to the emergence of this ‘quantum mechanical arrow of time’ (for a spectrum of discussions, see Zeh 2001, Chap. 4; Hartle 1998, and references therein; Bacciagaluppi 2002, Section 6.1, and Bacciagaluppi 2007). Whether decoherence is connected to the other familiar arrows of time is a more specific question, various discussions of which are given, e.g., by Zurek and Paz (1994), Hemmo and Shenker (2001) and the unpublished Wallace (2001) (see the Other Internet Resources below). Zeh (2003) argues from the notion that decoherence can explain ‘quantum phenomena’ such as particle detections that the concept of a particle in quantum field theory is itself a consequence of decoherence. 
That is, only fields need to be included in the fundamental concepts, and ‘particles’ are a derived concept, unlike what might be suggested by the customary introduction of fields through a process of ‘second quantisation’. Thus decoherence seems to provide a further powerful argument for the conceptual primacy of fields over particles in the question of the interpretation of quantum field theory. Finally, it has been suggested that decoherence could be a useful ingredient in a theory of quantum gravity, for two reasons. First, because a suitable generalisation of decoherence theory to a full theory of quantum gravity should yield suppression of interference between different classical spacetimes (Giulini et al. 1996, Section 4.2). Second, it is speculated that decoherence might solve the so-called problem of time, which arises as a prominent puzzle in (the ‘canonical’ approach to) quantum gravity. This is the problem that the candidate fundamental equation (in this approach)—the Wheeler-DeWitt equation—is an analogue of a time-independent Schrödinger equation, and does not contain time at all. The problem is thus in a sense simply: where does time come from? In the context of decoherence theory, one can construct toy models in which the analogue of the Wheeler-DeWitt wave function decomposes into non-interfering components (for a suitable sub-system) each satisfying a time-dependent Schrödinger equation, so that decoherence appears in fact as the source of time.[29] An accessible introduction to and philosophical discussion of these models is given by Ridderbos (1999), with references to the original papers. • Adler, S. L., 2003, ‘Why Decoherence has not Solved the Measurement Problem: A Response to P. W. Anderson’, Studies in History and Philosophy of Modern Physics, 34B: 135–142. [Preprint available online] • Albert, D., 1992, Quantum Mechanics and Experience, Cambridge, Mass.: Harvard University Press. • Albert, D., and Loewer, B., 1988, ‘Interpreting the Many Worlds Interpretation’, Synthese, 77: 195–213. • Allori, V., 2001, Decoherence and the Classical Limit of Quantum Mechanics, Ph.D. Thesis, Università di Genova, Dipartimento di Fisica. • Allori, V., and Zanghì, N., 2009, ‘On the Classical Limit of Quantum Mechanics’, Foundations of Physics, 39(1): 20–32. • Anglin, J. R., Paz, J. P., and Zurek, W. H., 1997, ‘Deconstructing Decoherence’, Physical Review, A 55: 4041–4053. [Preprint available online] • Appleby, D. M., 1999, ‘Bohmian Trajectories Post-Decoherence’, Foundations of Physics, 29: 1885–1916. [Preprint available online] • Bacciagaluppi, G., 2000, ‘Delocalized Properties in the Modal Interpretation of a Continuous Model of Decoherence’, Foundations of Physics, 30: 1431–1444. • –––, 2002, ‘Remarks on Space-Time and Locality in Everett's Interpretation’, in T. Placek and J. Butterfield (eds), Non-Locality and Modality (NATO Science Series, II. Mathematics, Physics and Chemistry, Volume 64), Dordrecht: Kluwer, pp. 105–122. [Preprint available online] • –––, 2007, ‘Probability, Arrow of Time and Decoherence’, Studies in History and Philosophy of Modern Physics, 38: 439–456. [Preprint available online] • Barbour, J., 1999, The End of Time (London: Weidenfeld and Nicolson). • Barrett, J. A., 2000, ‘The Persistence of Memory: Surreal Trajectories in Bohm's Theory’, Philosophy of Science, 67(4): 680–703. • Bell, J. S., 1987, Speakable and Unspeakable in Quantum Mechanics, Cambridge: Cambridge University Press. 
• Bene, G., and Dieks, D., 2002, ‘A Perspectival Version of the Modal Interpretation of Quantum Mechanics and the Origin of Macroscopic Behavior’, Foundations of Physics, 32: 645–672. [Preprint available online] • Berkovitz, J., and Hemmo, M., 2006, ‘Modal Interpretations and Relativity: A Reconsideration’, in W. Demopoulos and I. Pitowsky (eds.), Physical Theory and its Interpretation: Essays in Honor of Jeffrey Bub (Western Ontario Series in Philosophy of Science, Vol. 72), New York: Springer, pp. 1–28. • Broglie, L. de, 1928, ‘La nouvelle dynamique des quanta’, in [H. Lorentz (ed.)], Électrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique […] Solvay, Paris: Gauthiers-Villars. Transl. as ‘The New Dynamics of Quanta’ in G. Bacciagaluppi and A. Valentini, Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference, Cambridge: Cambridge University Press, pp. 374–407. • Bohm, D., 1952, ‘A Suggested Interpretation of the Quantum Theory in Terms of “Hidden” Variables: I and II’, Physical Review, 85: 166–179 and 180–193. • Bohm, D., and Hiley, B., 1993, The Undivided Universe, London: Routledge. • Bub, J., 1997, Interpreting the Quantum World, Cambridge: Cambridge University Press; corrected edition, 1999. • Deutsch, D., 1999, ‘Quantum Theory of Probability and Decisions’, Proceedings of the Royal Society of London, A 455: 3129–3137. [Preprint available online] • Dieks, D., and Vermaas, P. E. (eds.), 1998, The Modal Interpretation of Quantum Mechanics, Dordrecht: Kluwer. • Donald, M., 1998, ‘Discontinuity and Continuity of Definite Properties in the Modal Interpretation’, in Dieks and Vermaas (1998), pp. 213–222. [Preprint available online] • Dowker, F., and Kent, A., 1995, ‘Properties of Consistent Histories’, Physical Review Letters, 75: 3038–3041. [Preprint available online] • Dowker, F., and Kent, A., 1996, ‘On the Consistent Histories Approach to Quantum Mechanics’, Journal of Statistical Physics, 82: 1575–1646. • Epstein, S. T., 1953, ‘The Causal Interpretation of Quantum Mechanics’, Physical Review, 89: 319. • Everett, H. III, 1957, ‘“Relative-State” Formulation of Quantum Mechanics’, Reviews of Modern Physics, 29: 454–462. Reprinted in Wheeler and Zurek (1983), pp. 315–323. • van Fraassen, B., 1973, ‘Semantic Analysis of Quantum Logic’, in C. A. Hooker (ed.), Contemporary Research in the Foundations and Philosophy of Quantum Theory, Dordrecht: Reidel, pp. 180–213. • –––, 1991, Quantum Mechanics: An Empiricist View, Oxford: Clarendon Press. • Gell-Mann, M., and Hartle, J. B., 1990, ‘Quantum Mechanics in the Light of Quantum Cosmology’, in W. H. Zurek (ed.), Complexity, Entropy, and the Physics of Information, Reading, Mass.: Addison-Wesley, pp. 425-458. • Ghirardi, G., Rimini, A., and Weber, T., 1986, ‘Unified Dynamics for Microscopic and Macroscopic Systems’, Physical Review, D 34: 470–479. • Giulini, D., Joos, E., Kiefer, C., Kupsch, J., Stamatescu, I.-O., and Zeh, H. D., 1996, Decoherence and the Appearance of a Classical World in Quantum Theory, Berlin: Springer; second revised edition, 2003. • Greaves, H., 2007, ‘On the Everettian Epistemic Problem’, Studies in History and Philosophy of Modern Physics, 38: 120–152. [Preprint available online] • Greaves, H., and Myrvold, W., 2010, ‘Everett and Evidence’, in Saunders et al. (2010), pp. 264–304. [Preprint available online] • Griffiths, R. B., 1984, ‘Consistent Histories and the Interpretation of Quantum Mechanics’, Journal of Statistical Physics, 36: 219–272. 
• –––, 2002, Consistent Quantum Theory, Cambridge: Cambridge University Press. • Halliwell, J. J., 1995, ‘A Review of the Decoherent Histories Approach to Quantum Mechanics’, Annals of the New York Academy of Sciences, 755: 726–740. [Preprint available online] • –––, 1999, ‘Somewhere in the Universe: Where is the Information Stored when Histories Decohere?’, Physical Review, D 60: 105031/1–17. • Halliwell, J. J., and Thorwart, J., 2002, ‘Life in an Energy Eigenstate: Decoherent Histories Analysis of a Model Timeless Universe’, Physical Review, D 65: 104009/1–19. [Preprint available online] • Hartle, J. B., 1998, ‘Quantum Pasts and the Utility of History’, Physica Scripta, T 76: 67–77. [Preprint available online] • Healey, R., 1989, The Philosophy of Quantum Mechanics: An Interactive Interpretation, Cambridge: Cambridge University Press. • Hemmo, M., 1996, Quantum Mechanics Without Collapse: Modal Interpretations, Histories and Many Worlds, Ph.D. Thesis, University of Cambridge, Department of History and Philosophy of Science. • Hemmo, M. and Shenker, O., 2001, ‘Can we Explain Thermodynamics by Quantum Decoherence?’, Studies in History and Philosophy of Modern Physics, 32 B: 555–568. • Hermann, G., 1935, ‘Die naturphilosophischen Grundlagen der Quantenmechanik’, Abhandlungen der Fries'schen Schule, 6: 75–152. • Holland, P. R., 1996, ‘Is Quantum Mechanics Universal?’, in J. T. Cushing, A. Fine and S. Goldstein (eds.), Bohmian Mechanics and Quantum Theory: An Appraisal, Dordrecht: Kluwer, pp. 99–110. • Howard, D., 2004, ‘Who Invented the “Copenhagen Interpretation”? A Study in Mythology’, Philosophy of Science, 71: 669–682. [Preprint available online] • Joos, E. and Zeh, H. D., 1985, ‘The Emergence of Classical Properties through Interaction with the Environment’, Zeitschrift für Physik, B 59: 223–243. • Kay, B. S., 1998, ‘Decoherence of Macroscopic Closed Systems within Newtonian Quantum Gravity’, Classical and Quantum Gravity, 15: L89-L98. [Preprint available online] • Kochen, S., 1985, ‘A new Interpretation of Quantum Mechanics’, in P. Mittelstaedt and P. Lahti (eds.), Symposium on the Foundations of Modern Physics 1985, Singapore: World Scientific, pp. 151–169. • Leggett, A. J., 1984, ‘Schrödinger's Cat and her Laboratory Cousins’, Contemporary Physics, 25: 583–594. • –––, 2002, ‘Testing the Limits of Quantum Mechanics: Motivation, State of Play, Prospects’, Journal of Physics, C 14: R415-R451. • Lewis, P. J., 2010, ‘Probability in Everettian Quantum Mechanics’, Manuscrito, 33(1): 285–306. • Mott, N. F., 1929, ‘The Wave Mechanics of α-ray Tracks’, Proceedings of the Royal Society of London, A 126 (1930, No. 800 of 2 December 1929): 79–84. • Nelson, E., 1966, ‘Derivation of the Schrödinger Equation from Newtonian Mechanics’, Physical Review, 150: 1079–1085. • Nelson, E., 1985, Quantum Fluctuations, Princeton: Princeton University Press. • –––, 2006, ‘Afterword’, in W. G. Faris (ed.), Diffusion, Quantum Theory, and Radically Elementary Mathematics (Mathematical Notes 47), Princeton: Princeton University Press, pp. 227–230. • von Neumann, J., 1932, Mathematische Grundlagen der Quantenmechanik, Berlin: Springer. Transl. by R. T. Beyer as Mathematical Foundations of Quantum Mechanics, Princeton: Princeton University Press, 1955. • Omnès, R., 1988, ‘Logical Reformulations of Quantum Mechanics: I. Foundations’, Reviews of Modern Physics, 53: 893–932; ‘II. Inferences and the Einstein-Podolsky-Rosen Experiment’, 933-955; ‘III. Classical Limit and Irreversibility’, 957–975. 
• –––, 1989, ‘Logical Reformulations of Quantum Mechanics: IV. Projectors in Semi-Classical Physics’, Reviews of Modern Physics, 57: 357–382. • Pearle, P., 1997, ‘True Collapse and False Collapse’, in Da Hsuan Feng and Bei Lok Hu (eds.), Quantum Classical Correspondence: Proceedings of the 4th Drexel Symposium on Quantum Nonintegrability, Philadelphia, PA, USA, September 8–11, 1994, Cambridge, Mass.: International Press, pp. 51–68. [Preprint available online] • –––, 1989, ‘Combining Stochastic Dynamical State-Vector Reduction with Spontaneous Localization’, Physical Review, A 39: 2277–2289. • Pearle, P., and Squires, E., 1994, ‘Bound-State Excitation, Nucleon Decay Experiments, and Models of Wave-Function Collapse’, Physical Review Letters, 73: 1–5. • Ridderbos, K., 1999, ‘The Loss of Coherence in Quantum Cosmology’, Studies in History and Philosophy of Modern Physics, 30 B: 41–60. • Sanz, A. S., and Borondo, F., 2009, ‘Contextuality, Decoherence and Quantum Trajectories’, 
Chemical Physics Letters, 478: 301–306. [Preprint available online] • Saunders, S., 1993, ‘Decoherence, Relative States, and Evolutionary Adaptation’, Foundations of Physics, 23: 1553–1585. • –––, 1999, ‘The “Beables” of Relativistic Pilot-Wave Theory’, in J. Butterfield and C. Pagonis (eds.), From Physics to Philosophy, Cambridge: Cambridge University Press, pp. 71–89. • Saunders, S., Barrett, J., Kent, A., and Wallace, D. (eds.), 2010, Many Worlds? Everett, Quantum Theory, & Reality, Oxford and New York: Oxford University Press. • Schlosshauer, M., 2007, Decoherence and the Quantum-to-Classical Transition, Heidelberg and Berlin: Springer. • Shimony, A., 1989, ‘Search for a Worldview which can Accommodate our Knowledge of Microphysics’, in J. T. Cushing and E. McMullin (eds.), Philosophical Consequences of Quantum Theory, Notre Dame, Indiana: University of Notre Dame Press. Reprinted in A. Shimony, Search for a Naturalistic Worldview, Vol. 1, Cambridge: Cambridge University Press, 1993, pp. 62–76. • Spekkens, R. W., and Sipe, J. E., 2001, ‘A Modal Interpretation of Quantum Mechanics based on a Principle of Entropy Minimization’, Foundations of Physics, 31: 1431–1464. • Struyve, W., 2011, ‘Pilot-wave Approaches to Quantum Field Theory’, Journal of Physics: Conference Series, 306: 012047/1–10. • Struyve, W., and Westman, H., 2007, ‘A Minimalist Pilot-wave Model for Quantum Electrodynamics’, Proceedings of the Royal Society of London, A 463: 3115–3129. • Tegmark, M., 1993, ‘Apparent Wave Function Collapse Caused by Scattering’, Foundations of Physics Letters, 6: 571–590. [Preprint available online] • Wallace, D., 2003a, ‘Everett and Structure’, Studies in History and Philosophy of Modern Physics, 34 B: 87–105. [Preprint available online] • –––, 2003b, ‘Everettian Rationality: Defending Deutsch's Approach to Probability in the Everett Interpretation’, Studies in History and Philosophy of Modern Physics, 34 B: 415–439. [Preprint available online] [See also the longer, unpublished version titled ‘Quantum Probability and Decision Theory, Revisited’ referenced in the Other Internet Resources.] • –––, 2007, ‘Quantum Probability from Subjective Likelihood: Improving on Deutsch's Proof of the Probability Rule’, Studies in History and Philosophy of Modern Physics, 38: 311–332. [Preprint available online] • –––, 2008, ‘Philosophy of Quantum Mechanics’, in D. Rickles (ed.), The Ashgate Companion to Contemporary Philosophy of Physics, Aldershot: Ashgate, pp. 16–98. [Preliminary version available online as ‘The Quantum Measurement Problem: State of Play’, December 2007.] • Wheeler, J. A., and Zurek, W. H. (eds.), 1983, Quantum Theory and Measurement, Princeton: Princeton University Press. • Wightman, A. S., 1995, ‘Superselection Rules: Old and New’, Il Nuovo Cimento, 110 B: 751–769. • Zeh, H. D., 1970, ‘On the Interpretation of Measurement in Quantum Theory’, Foundations of Physics, 1: 69–76. Reprinted in Wheeler and Zurek (1983), pp. 342–349. • –––, 1973, ‘Toward a Quantum Theory of Observation’, Foundations of Physics, 3: 109–116. • –––, 1995, ‘Basic Concepts and Their Interpretation’. Revised edition of Chapter 2 of Giulini et al. (1996). [Page numbers refer to the preprint available online, entitled ‘Decoherence: Basic Concepts and Their Interpretation’.] • –––, 2000, ‘The Problem of Conscious Observation in Quantum Mechanical Description’, Foundations of Physics Letters, 13: 221–233. [Preprint available online] • –––, 2001, The Physical Basis of the Direction of Time, Berlin: Springer, 4th edition. 
• –––, 2003, ‘There is no “First” Quantization’, Physics Letters, A 309: 329–334. [Preprint available online] • Zurek, W. H., 1981, ‘Pointer Basis of Quantum Apparatus: Into what Mixture does the Wave Packet Collapse?’, Physical Review, D 24: 1516–1525. • –––, 1982, ‘Environment-Induced Superselection Rules’, Physical Review, D 26: 1862–1880. • –––, 1991, ‘Decoherence and the Transition from Quantum to Classical’, Physics Today, 44 (October): 36–44. [Abstract and updated (2003) version available online, under the title ‘Decoherence and the Transition from Quantum to Classical—Revisited’.] • –––, 1993, ‘Negotiating the Tricky Border Between Quantum and Classical’, Physics Today, 46 (April): 84–90. • –––, 1998, ‘Decoherence, Einselection, and the Existential Interpretation (The Rough Guide)’, Philosophical Transactions of the Royal Society of London, A 356: 1793–1820. [Preprint available online] • –––, 2003, ‘Decoherence, Einselection, and the Quantum Origins of the Classical’, Reviews of Modern Physics, 75: 715–775. [Page numbers refer to the preprint available online.] • Zurek, W. H., and Paz, J.-P., 1994, ‘Decoherence, Chaos, and the Second Law’, Physical Review Letters, 72: 2508–2511. Other Internet Resources • Crull, E. (University of Aberdeen), and Bacciagaluppi, G. (University of Aberdeen), 2011, ‘Translation of W. Heisenberg: “Ist eine deterministische Ergänzung der Quantenmechanik möglich?”’, available online in the Pittsburgh Phil-Sci Archive. • Felline, L. (Universidad Autónoma de Barcelona), and Bacciagaluppi, G. (University of Aberdeen), 2011, ‘Locality and Mentality in Everett Interpretations: Albert and Loewer's Many Minds’, available online in the Pittsburgh Phil-Sci Archive. • Gell-Mann, M. (Santa Fe Institute), and Hartle, J. B. (UC/Santa Barbara), 1994, ‘Equivalent Sets of Histories and Multiple Quasiclassical Realms’, available online in the e-Print archive. • Wallace, D. (Oxford University), 2000, ‘Implications of Quantum Theory in the Foundations of Statistical Mechanics’, available online in the Pittsburgh Phil-Sci Archive. • Wallace, D. (Oxford University), 2002, ‘Quantum Probability and Decision Theory, Revisited’, available online in the e-Print archive. This is a longer version of Wallace (2003b). • The e-Print archive, formerly the Los Alamos archive. This is the main physics preprint archive. • The Pittsburgh Phil-Sci Archive. This is the main philosophy of science preprint archive. • A Many-Minds Interpretation Of Quantum Theory, maintained by Matthew Donald (Cavendish Lab, Physics, University of Cambridge). This page contains details of his many-minds interpretation, as well as discussions of some of the books and papers quoted above (and others of interest). Follow also the link to the ‘Frequently Asked Questions’, some of which (and the ensuing dialogue) contain useful discussion of decoherence. • Quantum Mechanics on the Large Scale, maintained by Philip Stamp (Physics, University of British Columbia). This page has links to the available talks from the Vancouver workshop mentioned in footnote 1; see especially the papers by Tony Leggett and by Philip Stamp. • Decoherence Website, maintained by Erich Joos. This is a site with information, references and further links to people and institutes working on decoherence, especially in Germany and the rest of Europe. Copyright © 2012 by Guido Bacciagaluppi
Friday, June 12, 2015 Where are we on the road to quantum gravity? Damned if I know! But I got to ask some questions to Lee Smolin which he kindly replied to, and you can read his answers over at Starts with a Bang. If you’re a string theorist you don’t have to read it of course because we already know you’ll hate it. But I would be acting out of character if not having an answer to the question posed in the title did prevent me from going on and distributing opinions, so here we go. On my postdoctoral path through institutions I’ve passed by string theory and loop quantum gravity, and after some closer inspection stayed at a distance from both because I wanted to do physics and not math. I wanted to describe something in the real world and not spend my days proving convergence theorems or doing stability analyses of imaginary things. I wanted to do something meaningful with my life, and I was – still am – deeply disturbed by how detached quantum gravity is from experiment. So detached in fact one has to wonder if it’s science at all. That’s why I’ve worked for years on quantum gravity phenomenology. The recent developments in string theory to apply the AdS/CFT duality to the description of strongly coupled systems are another way to make this contact to reality, but then we were talking about quantum gravity. For me the most interesting theoretical developments in quantum gravity are the ones Lee hasn’t mentioned. There are various emergent gravity scenarios and though I don’t find any of them too convincing, there might be something to the idea that gravity is a statistical effect. And then there is Achim Kempf’s spectral geometry that for all I can see would just fit together very nicely with causal sets. But yeah, there are like two people in the world working on this and they’re flying below the pop sci radar. So you’d probably never have heard of them if it wasn’t for my awesome blog, so listen: Have an eye on Achim Kempf and Raffael Sorkin, they’re both brilliant and their work is totally underappreciated. Personally, I am not so secretly convinced that the actual reason we haven’t yet figured out which theory of quantum gravity describes our universe is that we haven’t understood quantization. The so-called “problem of time”, the past hypothesis, the measurement problem, the cosmological constant – all this signals to me the problem isn’t gravity, the problem is the quantization prescription itself. And what a strange procedure this is, to take a classical theory and then quantize and second quantize it to obtain something more fundamental. How do we know this procedure isn’t scale dependent? How do we know it works the same at the Planck scale as in our labs? We don’t. Unfortunately, this topic rests at the intersection of quantum gravity and quantum foundations and is dismissed by both sides, unless you count my own small contribution. It’s a research area with only one paper! Having said that, I found Lee’s answers interesting because I understand better now the optimism behind the quote from his 2001 book, that predicted we’d know the theory of quantum gravity by 2015. I originally studied mathematics, and it just so happened that the first journal club I ever attended, in '97 or '98, was held by a professor for mathematical physics on the topic of Ashtekar’s variables. I knew some General Relativity and was just taking a class on quantum field theory, and this fit in nicely. It was somewhat over my head but basically the same math and not too difficult to follow. 
And it all seemed to make much sense! I switched from math to physics and in fact for several years to come I lived under the impression that gravity had been quantized and it wouldn't take long until somebody calculated exactly what is inside a black hole and how the big bang works. That, however, never happened. And here we are in 2015, still looking to answer the same questions. I'll refrain from making a prediction because predicting when we'll know the theory for quantum gravity is more difficult than finding it in the first place ;o) George Musser said... How could the problem of time be blamed on quantization? It seems to be rooted in classical GR. Sabine Hossenfelder said... It seems to be rooted in the Hamiltonian formalism. Uncle Al said... "how detached quantum gravity is from experiment." The only predictive gravitation is geometric, thus 90 days in a geometric Eötvös experiment. Everything exactly cancels except geometry, where the most extreme composition and field contrasts are inert. A non-zero signal is definitive. "Experimental Search for Quantum Gravity: The Hard Facts" Green's function imposes mirror-symmetry. There is only contrary evidence that the vacuum is exactly mirror-symmetric toward matter. Baryogenesis eludes theory, Sakharov conditions or otherwise. Ashtekar (plus Immirzi) is GR chiral decomposition. The Coupe du Roi also shows how a perfectly symmetric ball has hidden structure. Vincelovesfreefood said... "...discourage people who follow long standing established research programs..." So, people like Lee Smolin? He's been doing LQG for how long, and GR still hasn't been derived from it in any kind of limit? He says, "The emergence of general relativity from the semiclassical approximation of the path integral is understood," which is clearly a lie. What does he mean by "understood"? If it hasn't been explicitly shown, then it is not "understood". It doesn't surprise me that he says all these things. I remember a while back Lee Smolin hyping up a particular research program concerning braids in LQG and how it was going to show us that the Standard Model can be shown to emerge from the dynamics of LQG. Nothing has come from that. At this point, I take very little of what he says seriously. To be honest, I think all these research programs are more or less worthless. The program that comes out ahead is string theory since, at least, it allows for some kind of unification. Even quantum gravity phenomenology is semi-worthless since, by definition, quantum gravity cannot be observed with current methods. Quantum gravity can only be observed at extremely high energies, way beyond what we are capable of. Oh well.... andrew said... Call me an optimist. It has taken a few decades, but we are approaching an era in which voluminous and precise astronomical observations, coupled with computational power that would have been almost unimaginable when the Standard Model was formulated and String Theory started to take shape, can provide genuine empirical tests of various proposals for inflation; dark energy/cosmological constant; the behavior of particles in the very strong field regime of white dwarfs, neutron stars, and black hole fringes; and the possibility that the dark matter phenomena are caused in part or predominantly by modifications to gravity. The litany of null results out of the LHC for dark matter candidates or other new physics also isn't nothing. A huge swath of parameter space for new physics has been definitively ruled out. 
A space telescope program with an LHC scale budget could take that to a whole new level. Something as simple as sending space telescopes to opposite ends of the solar system to allow more observations to be calibrated against parallax measurements could greatly reduce systemic error in gobs of data that we already have in hand. Progress in particle physics and engineering has also pushed our instrumentation that allows us to test every detail of gravitational phenomena at the solar system level to almost maximal theoretically possible precision. We also have a very deep bench of investigators worldwide who through mechanisms like arVix are sharing information with each other with near theoretically minimal friction. There are more new publicly available papers on GR and quantum gravity written by well trained PhDs each week than there would be for whole years for the first half century of GR. The biggest threat we face, I think, is group-think. Because so many thousands of investigators are so intimately in touch with what each other are thinking, the risk that conventional wisdom will discourage out of the box thinking and destroy the benefits of having a legion of skilled people doing the work is a very real one. There might be something to be said for figuratively locking a few hundred of the most innovative and divergent physicists in a box at some institution in the middle of nowhere to at least have two independent communities of investigators to pursue their own sequence of insights for a few decades, imitating for the theoretical community the notion of having dual independent experiments at Tevatron and the LHC. Tom Andersen said... The class of solutions that Lee and Sabine (and 99.99% of the physics establishment) all seem to think have weight have one major problem - they all assume that QM is some sort of bedrock. The fact that 1000's of PhDs and postdocs have been spent on chasing the Quantization of Gravity should mean one thing. It can't be done. Lee does show some light when he says: "I believe that quantum theory requires a completion, in a deeper theory that allows a complete description of individual processes. I see no other way to resolve the measurement problem." There are not many non linear physical theories which we know work, other than General Relativity. Yet the world of physics is so stuck in the linear QM world that it is GR which is assumed to be some approximation, when it is more than likely that the exact opposite is the case. Recent experiments show that QM like behaviour can emerge from classical fields, so it follows that GR may have the strength and flexibility to build QM, rather than the other way around. Vincelovesfreefood said... I guess you're not posting my comment. It's not like I said anything inappropriate. I just offered a critical view. Last I checked, this wasn't The Reference Frame. Oh well... David Brown said... "... how detached quantum gravity is from experiment ..." If the space roar number is 6 ± .1 then MOND from string theory with the finite nature hypothesis yields 4.99 ± .03 for the number in the photon underproduction crisis. Is string theory with the finite nature hypothesis a revolution waiting to happen? "In the physics I have learned there were many examples of where the mathematics was giving infinite degenerate solutions to a certain problem (classical mechanical problems e.g.) There the problem was always a mistake in the physics assumption. Infinity is mathematical not physical, as far as I know." 
— Maria Spiropulu See Maria Spiropulu, THE LANDSCAPE, . Is MOND empirically valid because a complete infinity does not occur in nature? “The failures of the standard model of cosmology require a new paradigm”, Jan. 2013 Sabine Hossenfelder said... Sorry for the wait, but I can't sit at my computer 24/7 and approve comments. Please give me at least 24 hours, more when traveling. I know it's annoying, but I've gotten really tired of all the crackpottery in my comment sections. I generally don't check email between 7pm and 7am. Sabine Hossenfelder said... You didn't actually read what I wrote, did you? Sabine Hossenfelder said... Regarding your comment about qg pheno, you are talking about direct detection, and you're just demonstrating you don't know a lot about the research area if you think that's it. For starters, please read this. Hermannus Contractus said... I think you are too impatient. What are 15 years of the life of a person in comparison with the entire life of the cosmos? And I think, that a physicist must truly excel in mathematics (and must always be angry of his/her ignorance) and must have a strong mathematical curiosity and a mathematical mind. If one underestimates mathematics compared to physics as you do, one is automatically led astray. Because mathematics is the language of nature and there exists and there shall never exist any other language. Even good experiments, when they are carefully planned, have some kind of mathematical/logical design. If one neglects sophisticated mathematics, one neglects seeking for the appropriate means of expression in which a thought in physics can be properly uttered. If one needs to devote time to problems on convergence and stability it is because the problems in physics that one is addressing require the consideration of those problems. A blackhole is a singularity and this has both, a mathematical and a physical meaning. Mathematics is also wonderful for the sake of itself and that is the reason why most physicists are attracted by her. And never fatally attracted: Mathematics truly satisfies all human needs of some people (even people who live below a bridge cannot fail to do mathematics if they happen to be mathematicians: I have seen this) and one can speak of a happiness that is so full of joy that one happily renounces to the world of riches, pomps and vanities (the 'physical' world there where it is lacking in humility) for the Platonic world of joy and order with which God delights the mind of those who are strong enough and which appear as weak, poor and masochistic in the world of pomps and riches. All good physicists that I know want always to learn more mathematics and to express things clearly and rigorously. A theoretical physicist is a poet of the real things and his/her word is the equation. An equation, where all terms are properly defined, belongs to the realm of mathematics and opens the realm of the infinite. One has to study an equation in itself in order to know where the equation cannot fail to be valid. The unified theory of physics should be valid in every instance. The complement of the set of things explained by theory is God, whose beauty and magnificence cannot be grasped with our words, concepts and equations. Hermannus Contractus said... I find the interview with Smolin quite useful and informative, thanks for sharing. And I will also read your preprint. Hermannus Contractus said... I have read your article and I have found it very interesting. 
I think your point is reflected in the quantization condition, where you introduce the field alpha, so that you can tune the quantization condition and decouple gravity as hbar tends to zero (the gravitational coupling constant G being proportional to hbar). I find the idea nice and simple. In fact, I also had a very similar approach that I never tried to publish (I am not a specialist in quantum gravity and did not know what to do with that idea, even though I considered it interesting because it linked quantum mechanics with things in which I was then involved). What I considered, instead of your alpha, was the mean field r of a system of globally coupled nonlinear oscillators described by a Kuramoto model. The coupling constant of the model was, as in your case, such that when hbar -> 0 one has an incoherent population of oscillators (and hence the average order parameter r -> 0). This would represent your 'unquantized' state. When hbar is nonzero, however, one has a synchronized phase emerging out of incoherence, the order parameter r -> 1 as more and more oscillators are synchronized, and one approaches the traditional quantization condition. I did not know how to connect the Kuramoto model with gravity. Now, in reading your article, I have got another interesting idea, and probably it is time to rescue those old crazy exercises with nonlinear oscillators. I share these ideas here freely, in case you have any suggestions to make. I have to study your work in more detail. The monk Zacharias is also interested. Tom Andersen said... Thanks for your article, which I did read - I can see how I was perhaps too hard on you. You do seem to want a way out of the stagnant morass that theoretical physics has become. Emergent QM along the lines of Bush, Couder, Brady and others is something that mainstream physics wrongly ignores. In fact all emergent phenomena are often looked at as something 'below physics'. Perhaps we don't need new fundamental equations to advance physics at all. Sabine Hossenfelder said... I wrote about Couder's theory here. I haven't had time to read the more recent paper. Best, Sabine Hossenfelder said... How interesting that you had a similar idea! Unfortunately I don't know anything about the Kuramoto model. I'm not sure what you mean by 'globally coupled'; I'd hope they are locally coupled, otherwise you'll run into trouble combining your model with gravity. Best, Hermannus Contractus said... Ok, thank you very much. 'Globally coupled' does not mean globally coupled in space but in their phases. I considered, in a rather crazy way, that each point in spacetime contains (locally) an infinite collection of oscillators. When they are decoupled I reproduced the Poisson brackets of classical mechanics, and when they are all coupled, Heisenberg's commutation relationship. When there is a 'something in between' situation, I had something like your Eq. (3), with r, the order parameter of the collection of oscillators, replacing your alpha. A wonderful introduction to the Kuramoto model is provided by Strogatz (I have taught this article at the University). If you have problems in downloading the article let me know. And if you are interested I can send you a script with more details when I finish it. I cannot reveal my identity to you, however, because this would go against the Benedictine law to which I am subject and which I must carefully observe. Hermannus Contractus said... 
Usually, the lectures on the Kuramoto model I gave were accompanied by an experiment that I did on the synchronization of metronomes, which constitutes an example of an application of the Kuramoto model: What I considered is that we do not have strings but a local collection of 'metronomes' that do something like the above. One solves the equations of motion of a system of metronomes horizontally coupled by a common support. The equations of motion reduce to the Kuramoto model in the limit of weak coupling (introduced by the common support and by the conservation of momentum of the center of mass of the whole system). Best regards fiksacie said... Dear Sabine! I am interested in what you think about an approach like this one: J. Ambjorn (NBI Copenhagen and U. Utrecht), J. Jurkiewicz (U. Krakow), R. Loll (U. Utrecht) (Submitted on 17 May 2005 (v1), last revised 6 Jun 2005 (this version, v2)) We provide detailed evidence for the claim that nonperturbative quantum gravity, defined through state sums of causal triangulated geometries, possesses a large-scale limit in which the dimension of spacetime is four and the dynamics of the volume of the universe behaves semiclassically. This is a first step in reconstructing the universe from a dynamical principle at the Planck scale, and at the same time provides a nontrivial consistency check of the method of causal dynamical triangulations. A closer look at the quantum geometry reveals a number of highly nonclassical aspects, including a dynamical reduction of spacetime to two dimensions on short scales and a fractal structure of slices of constant time. Sabine Hossenfelder said... I wrote about CDT here and most recently here. Giotis said... Nobody is claiming this, so I think you are banging on open doors here. Such an approach should work in the perturbative regime, though, with weakly coupled Lagrangians. At strong coupling, if you have a continuous limit you are OK. But don't forget that there are theories with no classical limit and without Lagrangians. For QG it is quite obvious to me that you need new degrees of freedom; the new degrees of freedom are stringy ones. Eric said... I read the interview with Lee and saw his remark that asymptotically safe gravity has a problem with stability. I wasn't quite sure what he was referring to at first. I'm thinking now that he must have meant that at any structure size larger than a proton the weak force comes into play and particles and energy can escape. (Remember the fission nuclear bomb.) If that is what he is thinking then he is overlooking something big. ASG depends on a closed, finite universe. That isn't so strange. If you can't define the borders of what you are attempting to define then there is no hope for solving it. Just assume it and see how far you can get. You can get pretty darn far! If one assumes a closed universe then any coming apart of a bound structure, at any scale you can name, will release energy that will accelerate something else in the universe that will then become bound together with that same asymptotically safe force. The only difference between a proton, which seems to be unconditionally stable, and the universe as a whole with an asymptotically safe structure, is that the stability hops from one structure to the next. This is all dependent on a closed universe. Why not? Eric said... I guess I should add that the fission atomic bomb was used to deploy the fusion atomic bomb. That should lead to some basic intuition about the "global" stability of asymptotically safe gravity. 
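Since Sabine says above that she is not familiar with the Kuramoto model, here is a minimal sketch of it in Python. The Lorentzian spread of natural frequencies, the coupling K, and the Euler time step are illustrative assumptions, not anything taken from the comments; the point is only to show the order parameter r growing from near zero (incoherence) to a finite value (partial synchronization) once the coupling is strong enough, which is the behaviour Hermannus appeals to.

import numpy as np

# Kuramoto model: N phase oscillators coupled through their phases,
#   dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i).
# The order parameter r = |(1/N) sum_j exp(i*theta_j)| measures synchronisation
# (r ~ 0 incoherent, r ~ 1 phase-locked). Parameters below are illustrative.

rng = np.random.default_rng(0)
N = 1000
omega = 0.5 * rng.standard_cauchy(N)       # natural frequencies, Lorentzian with half-width 0.5
theta = rng.uniform(0.0, 2.0 * np.pi, N)   # random initial phases
K = 2.0                                    # coupling strength (above the critical value 2*0.5 = 1)
dt = 0.01

for step in range(5000):
    z = np.exp(1j * theta).mean()          # complex order parameter r * exp(i*psi)
    r, psi = np.abs(z), np.angle(z)
    # mean-field identity: (1/N) sum_j sin(theta_j - theta_i) = r * sin(psi - theta_i)
    theta += dt * (omega + K * r * np.sin(psi - theta))

print("order parameter r after transient:", np.abs(np.exp(1j * theta).mean()))

Running the same sketch with K well below 1 leaves r near zero (up to finite-size fluctuations of order 1/sqrt(N)), which corresponds to the incoherent, 'unquantized' phase in the analogy above.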
Vincelovesfreefood said... Thanks for the reference. I'll have a look. I do have an open mind. :-) kashyap vasavada said... Hi Bee: Does Ashtekar have his own theory of quantum gravity, different from Smolin's LQG? Can you summarize results in few lines?(!!) Sabine Hossenfelder said... kashyap, It's both the same theory but with different variables, that's the short story. Sabine Hossenfelder said... No, that's not what he meant. He probably meant that it's not known whether the fixed point has a Hamiltonian that is bounded from below, and last time I looked they still didn't know that. Arun said... Bravo, Bee, bravo! Now to read the rest of the article :) Arun said... When we do not know what the majority of the gravitating mass in the universe consists of; we know little about it except its gravitational effects - why do we think we are in a position to produce a unified theory? Is it because we seem to know that dark matter interactions are so weak that there are no additional forces, beyond the electroweak, strong and gravity? Of course, not understanding the matter content of the universe has nothing to do with quantizing gravity, except in unification scenarios like string theory? marten said... Do we know the way to quantum gravity? Uncle Al said... @Arun: Gravitation ignores composition and field - black holes, neutron stars, white dwarfs, hydrogen stars, Nordtvedt effect, lab stuff. Don't describe it or challenge it with such. Newton does not parameterize to GPS. Quantum gravitation is not predictive and the standard model has no SUSY. Perfect derivation creates non-empirical models. A founding postulate is geometrically anomalous at the starting gate where physics "knows" it need not look. Nothing else matters, by observation. Listen to the dog that does not bark. First heretical experiment, then applicable theory. Phillip Helbig said... I haven't (yet) read any of Smolin's books. One problem with popular-science books (I'm not sure if his fit into that category) is that someone with a background in the science in general, but not in the topic of the book (e.g. physics but not quantum gravity) learns little, if anything, new. (Such books might be OK for "interested laymen" who are interested in a very broad-brush overview.) On the other hand, one hasn't the time to read the technical literature outside of one's own field. There is a real need for something in between. For example, in 1991 Narlikar and Padmanabhan wrote a review called "Inflation for astronomers". Another good example: John Barrow's The Book of Universes (the level is not as high as that of the review, but higher than the typical popular-science book). So, could I learn anything from Smolin's books? Sabine Hossenfelder said... Yes, I know what you mean. I read popular science books in physics primarily because I have an interest in writing. Did I learn anything from that book? It's been a decade ago that I read it and honestly I can't recall very much about it. I think I didn't previously know anything about spin networks, and that was the first time I heard about it. I vaguely recall having to look up "node" in a dictionary :p I sometimes find lecture notes quite useful to get an introduction to a field I'm not so familiar with, but then you're not always lucky and find something suitable. Sabine Hossenfelder said... I think it's because most particle physicists expect that whatever the unified theory is it will contain a suitable dark matter candidate. 
To them it's kind of exactly the opposite: instead of dark matter standing in the way of unification, dark matter is a motivation for unification. nemo said... I have read a lot of popular-science books on the same subject (gravity) because I wanted, and still want, to understand it. I stopped buying popular-science books when I discovered two main authors: Einstein and J. A. Wheeler. I haven't stopped reading and re-reading them yet. Every time I understand a bit more. Some authors are really.... Fantastic! For sure I'll never miss one of Sabine's books! Plato Hagel said... Hi Bee, As with Three Roads to Quantum Gravity we see where Lee evolves his perspective over time. Subject to change, of course, his perception may evolve too. Anyway it has been sort of enlightening to see what we are doing in the context of Quantum Cognition, utilizing quantum theory as a foundational base when exploring our potentials. Why not, when it comes to Quantum Gravity? :) Kudos to MarkusM. Robert L. Oldershaw said... Perhaps we should try a road less traveled? Chris Mannering said... Something noticeable about 'stringy' culture is that almost exclusively people from that culture use extreme put-downs of basically anything that looks another way. Look above at the attack on Lee Smolin. It's not that he's wrong, or has been wrong in the past about this matter or that matter. Stringy people don't look at things that way, I think because, phrased like that, it's self-evident there is no case to answer: there is nothing wrong with trying things and being wrong. So what they do, the stringy people, is take things to a personal level and couch their 'criticism' in terms of dishonesty, lies, deliberate omission, theft, and so on. Now, sometimes in life it's true that there is gross dishonesty and deliberate omission and all the rest. At times like that, it's right to call a spade a spade. The problem is, dishonest, lying strategies are just as likely to take the offensive, if not more likely. Hence the old adage that when someone is being attacked by someone else on personal grounds, involving dishonesty, lying, etc., then someone is always guilty of exactly that. But which one? One way to resolve this in detective logic is to observe that it's not actually easy to fallaciously attack people the way Smolin is attacked. Severe moral and ethical compromises need to be made. And it's just one of those things that we can't do and then stay the same in the other areas of our lives. We can't switch it on and off. If we give up our standards, we go to the new standard that we effectively choose. The whole of us. So from that we can identify which side has sold themselves out for less along the way. They will be targeting indiscriminately. Their views will display cynicism across the board except for their 'own'. Can something more local like that be demonstrated here in the comments? Well yeah, that's doable, because the one thing that always HAS to go, when an intellectual makes that compromise so that he can project onto his victim, is the normal high-standards practice of seeking always to see past communication shortcomings, secondary items, and flawed instantiations of examples in what the other person is saying, so as to 'see' as far as possible what the other person is 'seeing', for the fundamentally critical purpose of putting their position in its strongest form, and answering it, only there. And for why? 
Because if you don't do that, you are answering a wholly different matter, one that not only is not what that person is saying, but is not what anyone is saying or has ever said. You don't even know what you're answering. And that's a corrosive, harmful infliction on yourself. That's the price. Of selling out for less. Look here at the answer to Sabine's very good, highly plausible, and original idea: "Nobody is claiming this, so I think you are banging on open doors here." That's coming from a stringy friend. Does he address a reasonable proxy for her position here? I am seeing the mirror opposite of that. Maybe I'm the dishonest one in putting that example down? Or is it representative of a lot of what's coming out of the string place these days? If it's me, then I'm sorry, because that isn't something I'd want to do. And that means I don't think it is me. But if it is me, it's settled immediately by the fact that no one else recognizes anything of the sort from their observations and experiences of the stringy friends. Sabine Hossenfelder said... If I'm banging on open doors, then all the rooms seem to be empty ;) If you apply a different quantization prescription to strings you get a different theory (I believe Thiemann wrote a paper about this 10 years ago or so), so it's an assumption that matters. I don't know if you can include the prescription in the effective action and if that makes hbar (and possibly other constants) run. Giotis said... This was shot down by Helling and Policastro. Sabine Hossenfelder said... Thanks for the reference. I'm not sure what you mean though. I'm not saying that this is a good quantization method or one that one should use; I was just using this as an example that the assumption of the quantization method makes a difference for the outcome. John Baez said... "Have an eye on Achim Kempf and Raffael Sorkin..." By the way, his name is Rafael. Great guy, too! Steve Agnew said... Lee Smolin was correct...there is a theory of quantum gravity in 2015. In fact, there are any number of theories of quantum gravity in 2015. He then further stipulates that experiment must validate that theory, but which experiment he does not stipulate. Any experiment? I doubt that just any old experiment would satisfy Smolin. My own sense is from the outside looking in, and qg seems to be caught in a recursion of space and motion and continuous time. Science builds its theories with space and motion and continuous time, but there are other conjugate axioms besides space and motion. The Schrödinger equation works well for other conjugates like discrete matter and time. Why Smolin and others do not build their theories on discrete instead of continuous time is a mystery to me. The answer seems so obvious... R said... I think the Freidel-Leigh-Minic preprint that Smolin mentions [] and also the same authors' previous paper [] are fascinating, and may be the most important pair of QG papers I've read in a long time. And as you suggest, they are explicitly doing something other than the standard form of quantization -- in fact they have a rather plausible-sounding argument that to quantize gravity, or as they put it equivalently to gravitize quantum mechanics, you have to do an extrapolation of Born's suggestion: both the space-time and the quantum momentum space need to have curvature metrics, and these both need to be dynamical. Which normally would cause horrible failures of locality and unitarity, but they show that for string theory, it doesn't. Seriously, go read these two papers.
Split-step method
From Wikipedia, the free encyclopedia
In numerical analysis, the split-step (Fourier) method is a pseudo-spectral numerical method used to solve nonlinear partial differential equations like the nonlinear Schrödinger equation. The name arises for two reasons. First, the method relies on computing the solution in small steps, and on treating the linear and the nonlinear steps separately (see below). Second, it is necessary to Fourier transform back and forth because the linear step is made in the frequency domain while the nonlinear step is made in the time domain. An example of the use of this method is in the field of light-pulse propagation in optical fibers, where the interaction of linear and nonlinear mechanisms makes it difficult to find general analytical solutions. The split-step method, however, provides a numerical solution to the problem.

Description of the method
Consider, for example, the nonlinear Schrödinger equation[1]
$\frac{\partial A}{\partial z} = -\frac{i\beta_2}{2}\frac{\partial^2 A}{\partial t^2} + i\gamma\vert A\vert^2 A,$
where $A(t,z)$ describes the pulse envelope in time $t$ at the spatial position $z$. The equation can be split into a linear (dispersive) part,
$\frac{\partial A_L}{\partial z} = -\frac{i\beta_2}{2}\frac{\partial^2 A}{\partial t^2},$
and a nonlinear part,
$\frac{\partial A_N}{\partial z} = i\gamma\vert A\vert^2 A.$
Both the linear and the nonlinear parts have analytical solutions, but the nonlinear Schrödinger equation containing both parts does not have a general analytical solution. However, if only a 'small' step $h$ is taken along $z$, then the two parts can be treated separately with only a 'small' numerical error. One can therefore first take a small nonlinear step,
$A_N(t,z+h) = e^{i\gamma\vert A(t,z)\vert^2 h}\,A(t,z),$
using the analytical solution. The dispersion step has an analytical solution in the frequency domain, so it is first necessary to Fourier transform $A_N$ using
$\tilde A(\omega, z) = \int_{-\infty}^{\infty} A(t,z)\,e^{i(\omega-\omega_0)t}\,dt,$
where $\omega_0$ is the center frequency of the pulse. It can be shown that, using the above definition of the Fourier transform, the analytical solution to the linear step, combined with the frequency-domain form of the nonlinear step, is
$\tilde A(\omega, z+h) = e^{\frac{i\beta_2\omega^2}{2}h}\,\tilde A_N(\omega, z+h).$
By taking the inverse Fourier transform of $\tilde A(\omega, z+h)$ one obtains $A(t,z+h)$; the pulse has thus been propagated a small step $h$. By repeating the above $N$ times, the pulse can be propagated over a length of $Nh$.

The above shows how to use the method to propagate a solution forward in space; however, many physics applications, such as studying the evolution of a wave packet describing a particle, require one to propagate the solution forward in time rather than in space. The non-linear Schrödinger equation, when used to govern the time evolution of a wave function, takes the form
$i\hbar\frac{\partial\psi}{\partial t} = \Big[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x) + g\vert\psi\vert^2\Big]\psi,$
where $\psi(x,t)$ describes the wave function at position $x$ and time $t$. Note that the kinetic operator is $\hat T = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}$ and the (generally nonlinear) potential operator is $\hat V = V(x) + g\vert\psi\vert^2$, and that $m$ is the mass of the particle and $\hbar$ is Planck's constant over $2\pi$. The formal solution to this equation is a complex exponential, so we have that
$\psi(x, t+\Delta t) = e^{-\frac{i}{\hbar}(\hat T + \hat V)\Delta t}\,\psi(x,t).$
Since $\hat T$ and $\hat V$ are operators, they do not in general commute. However, the Baker-Hausdorff formula can be applied to show that the error from treating them as if they do will be of order $\Delta t^2$ if we are taking a small but finite time step $\Delta t$. We therefore can write
$\psi(x, t+\Delta t) \approx e^{-\frac{i}{\hbar}\hat V\Delta t}\,e^{-\frac{i}{\hbar}\hat T\Delta t}\,\psi(x,t).$
The part of this equation involving $\hat V$ can be computed directly using the wave function at time $t$, but to compute the exponential involving $\hat T$ we use the fact that in frequency space the partial derivative operator can be converted into a number, by substituting $ik$ for $\frac{\partial}{\partial x}$, where $k$ is the frequency (or, more properly, wave number, as we are dealing with a spatial variable and thus transforming to a space of spatial frequencies, i.e. wave numbers) associated with the Fourier transform of whatever is being operated on. 
Thus, we take the Fourier transform of the wave function, recover the associated wave number, compute the corresponding exponential factor, and use it to form the product of the complex exponentials in frequency space. We then inverse Fourier transform this expression to find the final result in physical space.

A variation on this method is the symmetrized split-step Fourier method, which takes half a time step using one operator, then takes a full time step with only the other, and then takes a second half time step again with only the first. This method is an improvement upon the generic split-step Fourier method because its error is of order $\Delta t^3$ for a time step $\Delta t$. The Fourier transforms of this algorithm can be computed relatively fast using the fast Fourier transform (FFT). The split-step Fourier method can therefore be much faster than typical finite difference methods.[2]

References

1. ^ Agrawal, Govind P. (2001). Nonlinear Fiber Optics (3rd ed.). San Diego, CA, USA: Academic Press. ISBN 0-12-045143-3.
2. ^ T. R. Taha and M. J. Ablowitz (1984). "Analytical and numerical aspects of certain nonlinear evolution equations. II. Numerical, nonlinear Schrödinger equation". J. Comput. Phys. 55 (2): 203–230. Bibcode:1984JCoPh..55..203T. doi:10.1016/0021-9991(84)90003-2.
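To make the recipe above concrete, here is a minimal sketch (in Python with NumPy, which the article itself does not use) of the symmetrized scheme for the time-propagation form of the equation, in natural units with hbar = m = 1 and a cubic nonlinearity of strength g. The grid, the step size and the Gaussian initial state are arbitrary illustrative choices, not anything prescribed by the article.

import numpy as np

def split_step(psi, x, dt, nsteps, g=1.0):
    """Symmetrized split-step Fourier integrator for
    i dpsi/dt = -(1/2) d^2 psi/dx^2 + g |psi|^2 psi   (hbar = m = 1)."""
    dx = x[1] - x[0]
    k = 2.0 * np.pi * np.fft.fftfreq(len(x), d=dx)         # angular wave numbers
    lin_half = np.exp(-0.5j * k**2 * (dt / 2.0))           # half-step kinetic propagator in k-space
    for _ in range(nsteps):
        psi = np.fft.ifft(lin_half * np.fft.fft(psi))      # half linear step
        psi = psi * np.exp(-1j * g * np.abs(psi)**2 * dt)  # full nonlinear step (|psi|^2 is constant during it)
        psi = np.fft.ifft(lin_half * np.fft.fft(psi))      # second half linear step
    return psi

x = np.linspace(-20.0, 20.0, 1024, endpoint=False)
psi0 = np.exp(-x**2) * (1.0 + 0.0j)                        # arbitrary Gaussian initial pulse
psi = split_step(psi0, x, dt=0.001, nsteps=2000, g=1.0)
print(np.trapz(np.abs(psi)**2, x))                         # the norm should be (nearly) conserved

The half/full/half ordering is what raises the splitting error from second to third order per step, as described above.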
Tuesday, February 28, 2017 'I am an Amateur of Velocipedes' (1941) The first Surrealist manifestation in China Miéville's new book, "The Last Days of New Paris". "I am an Amateur of Velocipedes" by Leonora Carrington "Like the antiquated title with which it is inscribed, the fine hatching of I am an Amateur of Velocipedes evokes the atmosphere of nineteenth-century novels and children’s stories, with engraved illustrations, admired by both Carrington and Ernst. "The drawing shows two figures merged into a hybrid bicycle. The bare-breasted part-figure, at the front, is reminiscent of the Rolls-Royce car figurehead, The Spirit of Ecstasy, with the figure behind providing the wing-like robes. "The form of this cloak, together with the lines around the oddly formed front wheel (which suggest bone or turned wood rather than spokes), suggest speed. The arm of the front figure also doubles as the bicycle handle-bars, as it merges with a feathery structure grasped by her blind-eyed companion. " I think the quirky title is best understood in the old-fashioned sense of 'lover'. Here's a picture of Leonora Carrington. Leonora Carrington - Surrealist I leave it to you to judge whether she casts herself as the prow of that bike. Here's her self-portrait (1937-8) .. with a female hyena .. . And here's a short overview of her life and work: Britain's Lost Surrealist, 1917 - 2011. And a video: Monday, February 27, 2017 Vatican productivity: a modest proposal Press Release: PFN/FANUC 26 February 2017. "PFN/FANUC is pleased to announce today a strategic partnership with the Vatican. For many decades the Catholic priesthood has been contracting. Despite the consolidation of dioceses and enhanced roles for the laity, churches have been closing and some Masses have been cancelled. The problem is particularly acute during periods such as Lent when extra weekday Masses are normally scheduled for parishioners. The root cause is the flat productivity of the Catholic Church over 2,000 years. The ministry is extraordinarily labour-intensive, even more so than the health care sector. Finally, however, advances in Robotics and Deep Learning have allowed PFN/FANUC to introduce the Klerjibot™, based on the FANUC pedestal robot. Capable of being rolled out on a massive scale, the automated celebrant requires only a standard power supply and broadband connection. The system ships with three languages: the vernacular for celebrating Mass, Italian as the lingua franca of the priesthood, and Latin, for dealing with the Curia. The Klerjibot can perform Mass in fully automated mode, using its visual and auditory sensors to detect parishioners approaching for communion. More conservative jurisdictions may feel that an automaton cannot preside over such sacred mysteries as transubstantiation. In such cases the system can revert, during this phase of the Mass, to teleoperation. PFN/FANUC and the Vatican will jointly set up a system of national call-centres where banks of trained priests will be alerted as communion approaches, remotely logging-in to the Klerjibot to take over that part of the ceremony. It is anticipated that one priest could control as many as ten different machines in widely-spaced locations, leading to an incredible increase in productivity. First trials are expected soon in California." Saturday, February 25, 2017 "The Last Days of New Paris" - China Miéville Amazon link This is a novella which rewards your imagination, creativity, and openness to experience. 
There is a plot, after a fashion, though as usual with this author, it's the journey not the destination. Vichy Marseilles in 1941 sees a gathering of Surrealists fighting fascism through their works. The atmosphere positively fizzes with their dangerous, subversive 'art of the unconscious'. To hand is an American scientist (and occultist), Jack Parsons, who will bottle this energy in a 'battery' of his own devising to further his own plans to fight fascism. Unfortunately, the thoroughly-charged battery is stolen and ends up in New Paris where it detonates: the S-bomb. In New Paris 1950, the city is blockaded as Surrealist manifestations stalk the arrondissements aided by the Surrealist underground, La Main à plume. Trapped Nazis and Gestapo agents contract with the denizens of Hell to combat them. The Germans have a top-secret plan: Fall Rot. But what is it? Thibaut, a leader of La Main à plume, has to find out. The afterword purports to document a meeting between an aged Thibaut and Mr Miéville in a London hotel where the story is recounted in frenzied haste. At the end you will find detailed notes on most of the surrealist works of art which 'manifested themselves' in New Paris. Of particular note is the Surrealist publication/manifesto, "Le Surréalisme au service de la révolution", seeming to serve as a blueprint for the S-bomb. You may spend most of the novella wondering where all this is going. It does, however, resolve; there is an overarching theme. I might wish it had been a little more subtle than the banality of evil. However, Miéville can make even Manicheanism look chic. Friday, February 24, 2017 Something to conjure with This post serves as an ideal introduction for my review of "The Last Days of New Paris" by China Miéville. The Ritual's first outing is today, by the way. If you're keen you'd better rush to hit the shops for that orange candle stub. I somehow feel the baby carrot just won't cut it. From here via Charles Stross. Some lodges/covens are doing a variation of this as a group working, while a number of solitary practitioners are planning to connect and live-stream via Facebook, Twitter, and other social media. • Unflattering photo of Trump (small) • Tower tarot card (from any deck) • Tiny stub of an orange candle (cheap via Amazon) • Pin or small nail (to inscribe candle) • Small bowl of water, representing elemental Water • Small bowl of salt, representing elemental Earth • Feather (any), representing the element of Air • Matches or lighter • Ashtray or dish of sand • Piece of pyrite (fool’s gold) • Sulfur • Black thread (for traditional binding variant) • Baby carrot (as substitute for orange candle stub) • Arrange other items in a pleasing circle in front of you RITUAL (v. 2.1) (Light white candle) Hear me, oh spirits Of Water, Earth, Fire, and Air Heavenly hosts Demons of the infernal realms And spirits of the ancestors (Light inscribed orange candle stub) I call upon you To bind Donald J. 
Trump So that he may fail utterly That he may do no harm To any human soul Nor any tree or Sea Bind him so that he shall not break our polity Usurp our liberty And bind, too, All those who enable his wickedness And those whose mouths speak his poisonous lies I beseech thee, spirits, bind all of them As with chains of iron Bind their malicious tongues Strike down their towers of vanity (Invert Tower tarot card) I beseech thee in my name (Say your full name) In the name of all who walk Crawl, swim, or fly Of all the trees, the forests, Streams, deserts, Rivers and seas In the name of Justice And Liberty And Love And Equality And Peace Bind them in chains Bind their tongues Bind their works Bind their wickedness So mote it be! So mote it be! So mote it be! (Blow out orange candle, visualizing Trump blowing apart into dust or ash) In a spirit of ecumenicalism or something, I imagine that you could replace Trump in this ritual with any other celebrity you have a problem with. Bono, .. or Shami Chakrabarti, for example. Domestic servants: a modest proposal I have fond memories of our Roomba, spinning mindlessly around our previous house, unable to empty itself and needing help to escape from under the bed. Didn't work in our current home - carpets too thick. We're as far as ever from artificial general intelligence (AGI). But why try to solve a thousand problems of mental and physical agility when biology has already done the work for us over 400 million years? "Wait!", I hear you say. Domestic servitude died out with the Edwardians. Besides, the idea of all those servants observing and judging your every move. It's so creepy. No privacy. It's like your home isn't your own. But that was before CRISPR ... "She paused. "Just out of curiosity, what's planned fo' the serfs along these lines?" He relaxed. "Oh, much less. That was debated at the highest levels of authority, an' they decided to do very little beyond selectin' within the normal human range. "Same sort of cleanup on things like hereditary diseases. Average height about 50 millimeters lower than ours. No IQs below 90, which'll bring the average up to 110. No improvements or increase in lifespan so they'll be closer to the original norm than the Race. "Some selection within the personality spectrum: toward gentle, emotional, nonaggressive types. About what you'd expect." If they like serving you, can it really be slavery? More about the Draka. Thursday, February 23, 2017 Liberals meet the Moties: high-K vs. high-r My current after-dinner read to Clare is that wonderful old (1974) classic, "The Mote in God's Eye" by Larry Niven and Jerry Pournelle. Amazon Link Here's a brief plot summary (it's surely OK now for spoilers). "An alien spaceship using a light sail to fly at speeds below the speed of light appears in the human system of New Caledonia. The ship has been traveling for at least 135 years. The Viceroy ruling the New Caledonia system, the Emperor's appointed representative, determines that a mission of two ships will be sent to the alien system that sent out the pod. Both species are wary, protective, and secretive. Communication is established; understanding, however, remains more distant. One of the warships is invaded by a lower alien species and must be destroyed. Three Midshipmen end up alone on the planet, uncovering the aliens' desperate secrets of uncontrollable population explosion and giving their lives to protect Empire secrets. 
Three aliens are sent to accompany the remaining human ship and crew back to New Caledonia as ambassadors." The Moties are an example of an r-selected species which, via their technological civilization, solve the high death-rate problem - at least until their numbers hit ultimate carrying capacity. From Wikipedia: By contrast, K-selected species display traits associated with living at densities close to carrying capacity, and typically are strong competitors in such crowded niches, investing more heavily in fewer offspring, each of which has a relatively high probability of surviving to adulthood (i.e., low r, high K). ... Traits that are thought to be characteristic of K-selection include large body size, long life expectancy, and the production of fewer offspring, which often require extensive parental care until they mature. ... Organisms with K-selected traits include large organisms such as elephants, humans and whales, but also smaller, long-lived organisms such as Arctic terns. ... The novel revolves around the debate between liberals and conservatives as to how to deal with the Moties (who pretend to be high-K liberals). It's an eerie book to read, weirdly analogous to contemporary debates in the media. It's been suggested on a regular basis that it would make a stupendous film, but until Hollywood learns to love The Donald, I fear their immune system makes that prospect stillborn. Advanced capitalist countries have been carrying out an interesting experiment with effective birth control over the last fifty years or so. A possible result has been a decline in fertility leading to projected population collapses: Japan is often mentioned as a trend-setter. Birth control is a central theme in the novel too. The treatment is very game-theoretic: over their history, some Motie continents under strong political leadership did in fact practice population control. They were simply outbred and then conquered by their more numerous competitors. Wednesday, February 22, 2017 Perhaps no-one understands Maxwell's equations "Microsoft CEO Satya Nadella spoke at a public event in India on Monday ... "The first time I put on a HoloLens was to see something Cleveland Clinic [a non-profit academic medical center] had built for medical innovation... As an electrical engineer who never understood Maxwell's equations, I thought if I had a HoloLens, I would have been a better electrical engineer. Overall I feel that augmented reality is perhaps the ultimate computer," he said. From here. Actually, the article was entitled, "Microsoft CEO says artificial intelligence is the 'ultimate breakthrough'", but I was struck by his confession of ignorance about the foundational theory of classical electromagnetism. Maxwell's equations This looks bad, of course. But let's cut the CEO some slack: Maxwell's equations are notoriously unintuitive, as I observed in this post. You can use the equations, solving them for particular physical configurations. But what picture do they give of the nature of the field(s) themselves? What are we to make of the fact that a stationary observer of a motionless charged ball sees a static spherical electric field E, while an observer moving past that same ball sees electric and magnetic fields (E' and B')? The magnetic field is a relativistic effect. That's implicit in Maxwell's equations, but don't tell me it's obvious. For that, you need to write the equations in a manifestly covariant form, but they don't teach you that in undergraduate electrical engineering. 
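For what it's worth, the standard textbook transformation of the fields under a boost with velocity $\mathbf{v}$ (quoted here only to make the two-observer point concrete, with components taken parallel and perpendicular to $\mathbf{v}$) is

$\mathbf{E}'_{\parallel} = \mathbf{E}_{\parallel}, \qquad \mathbf{B}'_{\parallel} = \mathbf{B}_{\parallel},$

$\mathbf{E}'_{\perp} = \gamma\,(\mathbf{E} + \mathbf{v}\times\mathbf{B})_{\perp}, \qquad \mathbf{B}'_{\perp} = \gamma\left(\mathbf{B} - \frac{\mathbf{v}\times\mathbf{E}}{c^{2}}\right)_{\perp}.$

For the motionless charged ball, $\mathbf{B} = 0$ in its rest frame, yet the moving observer measures $\mathbf{B}' = -\gamma\,\mathbf{v}\times\mathbf{E}/c^{2} \neq 0$: the magnetic field shows up purely as a frame effect.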
Oh, and he's surely right about augmented reality being the main deliverable from the current state of the art in artificial neural net technologies. The Holy Grail of AGI will be a product of mastering situated social cognition, a post for another day (but see here).

Tuesday, February 21, 2017

"From Eternity to Here" - Sean Carroll

Amazon Link

Just finished Sean Carroll's 2011 book, which - after an exhaustive exploration of all other options - locates the origin of 'the arrow of time' in the quantum-fluctuation emergence of super-low-entropy 'baby universes' from a preceding high-entropy de Sitter universe. Yep, that would be the baby universe in which I'm sitting writing this post. This may seem extravagant, to explain why eggs produce omelettes but not the reverse, but he refutes all the simpler explanations. It presently seems unclear, however, whether a de Sitter universe could even make baby universes, absent a better theory of quantum gravity.

Carroll's latest thinking tends in a different direction, suggesting that framing the issue within the spacetime realm may itself be a mistake; the true nature of reality may be Hilbert space with Schrödinger equation dynamics. Spacetime, with its arrow of time, may be emergent. Strange that the weirdest ideas of modern physics - the MWI, emergent spacetime - seem to be the most plausible.

This is a fine book, and an excellent introduction for the smart non-physicist to general relativity, quantum theory (QM/QFT) and cosmology.

Monday, February 20, 2017

The deep state *is* the state

Much excitement in the States (but also in Brexit Britain) about elements of the state apparatus actively working to undermine the policies of an 'insurgent' government. The governments in question are of the right, not the extreme left, but the founders of Marxism were clear that a truly revolutionary government cannot just take over the 'controls' of a 'neutral' state apparatus. They will be thwarted at every turn.

This is, of course, true of all bureaucracies. Successful 'turnaround' CEOs bring their own posse of senior managers - and waste no time clearing out the old regime. If they don't, their mission is toast. The political right, in order to secure their victory, will have to purge the state of their enemies and repopulate it with supporters in their own ideological image.

Marxists, however, are seldom called out on what they think should replace the bourgeois state. Lenin and Trotsky, following the model of the Paris Commune, thought that the mobilised masses, organised in soviets (councils), would constitute a non-bureaucratic socialist state - a social formation not separate from broader society. Marx himself never wrote much about it.

Yet the state is neither a contingent formation nor just 'bodies of armed men' safeguarding existing property relations. The state has an enormous, and under-analysed, role in social coordination. Turn now to the economy. All Marxists agree that their post-capitalist economic model will replace private decision-making about the allocation of resources with 'consciously democratic', centrally-planned directives. Given the complex and highly-technical nature of policy-making and implementation, both as regards the state and the putative planned-economy, this is not going to be something managed by the masses through their councils. Nor is it going to be that old favourite, the mere 'administration of things, not of people'.
The PDF version is here It's hard to resist the conclusion that - until the AIs take over (and where does that leave us?) - the post-capitalist state will continue to be large, bureaucratic and unresponsive. Much like the bourgeois state. History tends to support that view. I would guess that Marxists would not agree, but for people proposing a radical makeover of society's fundamentals, they seem remarkably coy about their proposed replacement model. Such studied casualness wouldn't work in engineering. Why does the far left get a free pass on something of such fundamental importance? Sunday, February 19, 2017 People are boring - so what chance chatbots? The BBC's Dave Lee writes: "It's been nearly a year since Microsoft's Satya Nadella proclaimed "bots are the new apps". The CNN news chatbot, for example, is worse at giving you the news than any of CNN’s other products. ... "And that's because there's no compelling reason to bother with Allo. None of its features - like asking it for directions - provide enough of a benefit beyond what you'd get from just tapping in your request the "old fashioned" way. Users have an incredibly short fuse for chatbots not working exactly as we expect." We have a special name for those few people who we (mostly) don't find boring. We don't much enjoy extended interactions with random folk. People who, nevertheless: • have been completely socialised into our culture for decades,  • come with detailed background knowledge of the world, • are endowed with common sense and full conversational abilities. So why did the AI companies think we'd enjoy interacting with chatbots, which are cognitively impoverished in every conceivable way? It's a good question and I'm not sure of the answer. - Were they over-impressed by their mighty artificial neural nets? But they're only fantastic recognisers and classifiers, a far cry from artificial general intelligences (AGI). - Did they think that we're all keen to have conversational, hands-free interaction with our pocket devices? In fact that's socially way too intrusive most of the time, plus we're talking to conversational muppets. - Was there a belief that in some narrow, vertical and tightly-constrained domains there might be a niche for a conversational interface? There's almost certainly something to that - but we don't yet know what. My own feeling is that the successful mass consumer chatbot can be nothing less than a truly effective virtual friend. To that end it will have to posses AGI and be malleable to your own personality and 'friend-preferences'. We'll have starships before we have that; I haven't seen the first clue we're on that road. All of the above presupposes peer-relationships, typically with kids or adults. Those conversations - most of the time! - exhibit an irreducible core of rational and relevant 'aboutness'. So hard to replicate for an artefact. But there are natural agents around without much cognitive competence: babies, small children and pets. The bond here is emotional .. and so is the interaction. So if you're in the business of designing chatbots which could conceivably bond with your customers, you might want to take note .. .* * In the old days, we called them 'dolls', and they came without batteries. Saturday, February 18, 2017 Diary: my day with a Prolog interpreter in Lisp The subject: implementing a Prolog interpreter in Common Lisp, using an excellent - if rather old-fashioned - textbook as my guide. 
I had copied-and-pasted the code across (it never quite works first time) and was debating this morning: finally get Peter Norvig's code to run, or strike out in my own direction? There's a lot in Chapter 11 of "Paradigms of Artificial Intelligence programming" to be wary of: • For efficiency reasons he stores the clauses on clause-head-predicate property lists • He uses destructive operations such as nconc • The resolution inference step and depth-first control strategy are interwoven. I'd prefer to keep the clause database as an explicit object to be transformed and analysed throughout the proof. I'd also like to modularise the 'which clauses to try to resolve next' strategy, trying different things such as unit preference, set-of-support, hyperresolution. And it would be good to extract a proof tree, not just the final question-answering bindings. All these things demand a rewrite. But I was glad I persevered, because Peter Norvig's code finally came good this afternoon and I was able to successfully run it on a serious logic puzzle - the Zebra problem. Where it ran remarkably quickly - forty times faster than in Dr Norvig's textbook-writing days. I was reminded of AI programming back in the late 1980s, when everything was slow and hard. I imagined him developing his code on some ancient VAX 11/780 (as I used to do). A simple Prolog interpreter in Lisp I happen to think that the code for 'prove' et al below is extremely opaque. You will search in vain for any clear modularisation into: • Resolution operation • Depth-first control strategy • Bindings and solution management. For conceptual, pedagogical and tool-creation reasons, a rational reconstruction is necessary. When this is completed, I will post a link to the rewritten - and hopefully clearer code - here. ; ---- Lisp code starts here --- ;;;  Prolog in Lisp:  February 14th 2017 - February 18th 2017 ;;;  From Peter Norvig's book: Chapter 11 ;;; "Paradigms of Artificial Intelligence Programming" ;;; Unification, Prolog, Resolution, inference control strategies ;;; Posted: ; http://interweave-consulting.blogspot.co.uk/2017/02/a-simple-prolog-interpreter-in-lisp.html ;;;  Reminder: (how to load and access files) ; (load  "C:\\Users\\HP Owner\\Google Drive\\Lisp\\Prog-Prolog\\Prolog-in-Lisp.lisp") ;;; --- Unification --- ;;;  Bindings: a list of dotted pairs created by unify (defconstant +fail+ nil "Indicates pat-match failure.") (defconstant +no-bindings+ '((t . t)) "Indicates pat-match success but with no variables.") (defun variable-p (x)    ; Symbol -> Bool   "Is x a variable, a symbol beginning with '?'"   (and (symbolp x) (equal (char (symbol-name x) 0) #\? ))) (defun get-binding (var bindings)   "Find a (variable . value) pair in a binding list."   (assoc var bindings)) (defun binding-val (binding)   "Get the value part of a single binding."   (cdr binding)) (defun lookup (var bindings)   "Get the value part (for var) from a binding list."   (binding-val (get-binding var bindings) )) (defun extend-bindings (var val bindings)   "Add a (var . value) pair to a binding list, remove (t . t)"   (cons (cons var val) (if (equal bindings +no-bindings+) nil bindings))) (defparameter *occurs-check* t "Should we do the occurs check? 
If yes, t") (defun unify (x y &optional (bindings +no-bindings+))   ; -> Bindings   "See if x and y match with given bindings"         ((eql x y) bindings)         ((variable-p x) (unify-variable x y bindings))         ((variable-p y) (unify-variable y x bindings))         ((and (consp x) (consp y))          (unify (rest x) (rest y)                 (unify (first x) (first y) bindings) ) )         (t +fail+) ) ) (defun unify-variable (var x bindings)              ; -> Bindings   "Unify var with x, using (and maybe extending) bindings"   (cond ((get-binding var bindings)                   (unify (lookup var bindings) x bindings))             ((and (variable-p x) (get-binding x bindings))                   (unify var (lookup x bindings) bindings))               ((and *occurs-check* (occurs-check var x bindings))             (t (extend-bindings var x bindings)))) (defun occurs-check (var x bindings)             ; -> Bool   "Does var occur anywhere 'inside x'? Returns t if it does (so fail)"   (cond ((eq var x) t)          (occurs-check var (lookup x bindings) bindings))         ((consp x) (or (occurs-check var (first x) bindings)                        (occurs-check var (rest x) bindings)))         (t nil))) (defun unifier (x y )   "Return something that unifies with both x and y (or +fail+)"   (subst-bindings (unify x y) x ) ) (defun subst-bindings (bindings x)   ; Bindings x Term -> Term   "Substitute the value of variables in bindings into x, taking recursively bound variables into account"   (cond ((eql bindings +fail+) +fail+ )         ((eql bindings +no-bindings+) x)               (subst-bindings bindings (lookup x bindings) ))         ((atom x) x)         ( t (cons (subst-bindings bindings (car x))                         (subst-bindings bindings (cdr x )) ) ) ) ) ;;; -------------------------- Theorem Prover ----------------- ;;; Clauses are represented as (head . body) cons cells: example clauses ;;;  ( (member ?item (?item . rest))) )                         ; fact ;;;  ( (member ?item (?x . ?rest)) . ((member ?item ?rest)))    ; rule (defun clause-head (clause) (first clause))     ; Clause -> Literal (defun clause-body (clause) (rest clause))      ; Clause -> Literal-list ;; Clauses are stored on the predicate's plist (defun get-clauses (pred) (get pred 'clauses)) ; symbol -> Clause-list (defun predicate (literal) (first literal))    ; Literal -> Symbol (predicate) (defvar *db-predicates* nil   "A list of all predicates stored in the database") (defun replace-?-vars (exp)   "Replace any ? within exp with a var of the form ?123"   (cond ((eq exp '?) (gensym "?"))         ((atom exp) exp)         (t (cons (replace-?-vars (first exp))                  (replace-?-vars (rest exp))) ))  ) (defun add-clause (clause)   "Add a clause to the data base, indexed by head's predicate"   ;; The predicate must be a non-variable symbol.   (let* ((clause1 (replace-?-vars clause))          (pred (predicate (clause-head clause1))))     (assert (and (symbolp pred) (not (variable-p pred))))     (pushnew pred *db-predicates*)     (setf (get pred 'clauses)           (append (get-clauses pred) (list clause1))) pred) )   ; changed nconc to append ; (setf clause '((P ?x ?y) . 
((Q ?x ?y)))) ; (add-clause clause)                       ; => P ; *db-predicates*                           ; => (P) ; (get 'P 'clauses)                         ; (((P ?X ?Y) (Q ?X ?Y))) (defun show-all-clauses ()   ; *db-predicates*  -> Clause-list    (new function)     "Retrieve all the clauses in the knowledge-base"      (apply #'append (mapcar #'get-clauses *db-predicates*))) (defun clear-db ()   "Remove all clauses (for all predicates) from the database"   (mapc #'clear-predicate *db-predicates*)) (defun clear-predicate (predicate)   "Remove the clauses for a single predicate"   (setf (get predicate 'clauses) nil) ) (defun test ()   (setf *db-predicates* nil)   (add-clause '((likes Kim Robin)))   (add-clause '((likes Sandy Lee)))   (add-clause '((likes Sandy Kim)))   (add-clause '((likes Robin cats)))   (add-clause '((likes Sandy ?x) (likes ?x cats)))   (add-clause '((likes Kim ?x) (likes ?x Lee) (likes ?x Kim)))   (add-clause '((likes ?x ?x)))   (format t "~&Predicate database *db-predicates* = ~a" *db-predicates*)   (pprint (show-all-clauses))   (prove '(likes Sandy ?who) +no-bindings+) ) (defun prove (goal b)    ; Literal x Bindings -> Bindings   "Return a list of possible solutions to goal"   (let ((kb-clauses (get-clauses (predicate goal))))  ; kb clauses which match goal     (apply #'append (mapcar #'(lambda (kb-clause1)                                 (let* ((kb-clause       (rename-variables kb-clause1))                                        (kb-clause-head  (clause-head kb-clause))                                        (kb-clause-body  (clause-body kb-clause))                                        (goal-bindings   (unify goal kb-clause-head b)))                                   (prove-all kb-clause-body goal-bindings)))                             kb-clauses)) ) ) (defun prove-all (goals goal-bindings) ; Literal-list x Bindings -> Bindings   "Return a list of solutions to the conjunction of goals"   (cond ((eql goal-bindings +fail+) +fail+)         ((null goals) (list goal-bindings))         (t  (let* ((next-bindings (prove (car goals) goal-bindings)))               (apply #'append (mapcar #'(lambda (b) (prove-all (cdr goals) b))                                       next-bindings)) ) ) ) ) (defun rename-variables (x)       ; clause -> clause (with vars renamed)   "Replace all variables in x with new ones"   (sublis (mapcar #'(lambda (var) (cons var (gensym (string var))))                   (variables-in x)) ;;; --- Example --- (prove '(likes Sandy ?who) +no-bindings+)   ; => ; (((?WHO . LEE)) ;  ((?WHO . KIM)) ;  ((#:?X846 . ROBIN) ;  (?WHO . #:?X846)) ;  ((#:?X850 . CATS) (#:?X847 . CATS) (#:?X846 . SANDY) (?WHO . #:?X846)) ;  ((#:?X855 . CATS) (#:?X846 . #:?X855) (?WHO . #:?X846)) ;  ((?WHO . SANDY) (#:?X857 . SANDY))) ; - End Example ;;; --- Prolog-like macros - defined also in Zebra-Puzzle --- (defmacro <- (&rest clause)   "Add a clause to the database"   `(add-clause ',clause)) ; (macroexpand-1 '(<- (likes Kim Robin)))    ; => ; (ADD-CLAUSE (QUOTE ((LIKES KIM ROBIN))))   ; which is correct.  
(defmacro ?- (&rest goals) `(top-level-prove ',(replace-?-vars goals)))

; (macroexpand-1 '(?- goals))

;;; --- Prove removing spurious bindings ---

(defun variables-in (exp)
  "Return a list of all the variables in exp"
  (unique-find-anywhere-if #'variable-p exp))

(defun unique-find-anywhere-if (predicate tree &optional found-so-far)
  "Return a list of leaves of tree satisfying predicate, with duplicates removed"
  (if (atom tree)
      (if (funcall predicate tree) (adjoin tree found-so-far) found-so-far)
      (unique-find-anywhere-if predicate (first tree)
                               (unique-find-anywhere-if predicate (rest tree) found-so-far))))

(defun top-level-prove (goals)
  "Prove the goals, and print variables readably"
  (show-prolog-solutions (variables-in goals) (prove-all goals +no-bindings+)))

(defun show-prolog-solutions (vars solutions)
  "Print the variables in each of the solutions"
  (if (null solutions)
      (format t "~&No.")
      (mapc #'(lambda (solution) (show-prolog-vars vars solution)) solutions))
  (values))

(defun show-prolog-vars (vars bindings)
  "Print each variable with its binding"
  (if (null vars)
      (format t "~&Yes")
      (dolist (var vars)
        (format t "~&~a = ~a" var (subst-bindings bindings var))))
  (princ ";"))

;;; --- Zebra Puzzle  ---
; (load  "C:\\Users\\HP Owner\\Google Drive\\Lisp\\Prog-RTP\\Zebra-Puzzle.lisp")
;;; --- Zebra Puzzle  ---

#| Begin comments
Here is an example of something Prolog is very good at: a logic puzzle. There are fifteen facts, or constraints, in the puzzle:
1. There are five houses in a line, each with an owner, a pet, a cigarette, a drink, and a color.
2. The Englishman lives in the red house.
3. The Spaniard owns the dog.
4. Coffee is drunk in the green house.
5. The Ukrainian drinks tea.
6. The green house is immediately to the right of the ivory house.
7. The Winston smoker owns snails.
8. Kools are smoked in the yellow house.
9. Milk is drunk in the middle house.
10. The Norwegian lives in the first house on the left.
11. The man who smokes Chesterfields lives next to the man with the fox.
12. Kools are smoked in the house next to the house with the horse.
13. The Lucky Strike smoker drinks orange juice.
14. The Japanese smokes Parliaments.
15. The Norwegian lives next to the blue house.
The questions to be answered are: who drinks water and who owns the zebra?
End comments |#

(<- (member ?item (?item . ?rest)))
(<- (member ?item (?x . ?rest)) (member ?item ?rest))

(<- (iright ?left ?right (?left ?right . ?rest)))
(<- (iright ?left ?right (?x . ?rest)) (iright ?left ?right ?rest))

(<- (nextto ?x ?y ?list) (iright ?x ?y ?list))
(<- (nextto ?x ?y ?list) (iright ?y ?x ?list))

(<- (= ?x ?x))

;; Each house is of the form:
;; (house nationality pet cigarette drink house-color)
;; ?h is the variable representing the list of the five houses
(<- (zebra ?h ?w ?z)
  (= ?h ((house norwegian ? ? ? ?) ? (house ? ? ? milk ?) ? ?))          ; 1, 10, 9
  (iright (house ? ? ? ? ivory)                                          ; 6
          (house ? ? ? ? green) ?h)
  (nextto (house ? ? chesterfield ? ?)                                   ; 11
          (house ? fox ? ? ?) ?h)
  (nextto (house ? ? kools ? ?)                                          ; 12
          (house ? horse ? ? ?) ?h)
  (member (house ? ? luckystrike orange-juice ?) ?h)                     ; 13
  (nextto (house norwegian ? ? ? ?)                                      ; 15
          (house ? ? ? ? blue) ?h)
  ;; Now for the questions:
  (member (house ?z zebra ? ? ?) ?h))                                    ; Q2

; Here's the query
; (?- (zebra ?h ?w ?z))
; .. and here's the result
; No.
; The Zebra Puzzle is at:
;  http://interweave-consulting.blogspot.co.uk/2017/02/the-zebra-puzzle-prolog-in-lisp.html

The Zebra Puzzle (Prolog in Lisp)

I am delighted to have got Peter Norvig's Lisp interpreter for Prolog working (in its simplest form) from Chapter 11 of his book, "Paradigms of Artificial Intelligence programming". I tested it on his Zebra puzzle, where he states:

"This took 278 seconds, and profiling (see page 288) reveals that the function prove was called 12,825 times. A call to prove has been termed a logical inference, so our system is performing 12825/278 = 46 logical inferences per second, or LIPS. Good Prolog systems perform at 10,000 to 100,000 LIPS or more, so this is barely limping along."

On my Windows 10 HP laptop, the computation took 7 seconds with 29,272 calls to 'prove'. I guess that's 4,182 LIPS for compiled Lisp from LispWorks - program totally unoptimised. His book is rather old (1992).

The Zebra puzzle

"Here is an example of something Prolog is very good at: a logic puzzle. There are fifteen facts, or constraints, in the puzzle:
1. Five houses in a line, each with an owner, a pet, a cigarette, a drink, and a color.
2. The Englishman lives in the red house.
3. The Spaniard owns the dog.
4. Coffee is drunk in the green house.
5. The Ukrainian drinks tea.
6. The green house is immediately to the right of the ivory house.
7. The Winston smoker owns snails.
8. Kools are smoked in the yellow house.
9. Milk is drunk in the middle house.
10. The Norwegian lives in the first house on the left.
11. The man who smokes Chesterfields lives next to the man with the fox.
12. Kools are smoked in the house next to the house with the horse.
13. The Lucky Strike smoker drinks orange juice.
14. The Japanese smokes Parliaments.
15. The Norwegian lives next to the blue house.
The questions to be answered are: who drinks water and who owns the zebra?"

The translation into Prolog in Lisp uses a Lisp-like syntax for Prolog facts and rules (both are clauses). Here's the Prolog 'program' coding the above problem.

;;; --- Lisp code follows - comments from Norvig's book  ---

;;; To solve this puzzle, we first define the relations nextto (for "next to") and
;;; iright (for 'immediately to the right of'). They are closely related to member,
;;; which is repeated here.

(<- (member ?item (?item . ?rest)))
(<- (member ?item (?x . ?rest)) (member ?item ?rest))

(<- (iright ?left ?right (?left ?right . ?rest)))
(<- (iright ?left ?right (?x . ?rest)) (iright ?left ?right ?rest))

(<- (nextto ?x ?y ?list) (iright ?x ?y ?list))
(<- (nextto ?x ?y ?list) (iright ?y ?x ?list))

(<- (= ?x ?x))

;;; We also defined the identity relation, =. It has a single clause that says that any x is
;;; equal to itself. One might think that this implements eq or equal. Actually, since
;;; Prolog uses unification to see if the two arguments of a goal each unify with ?x, this
;;; means that = is unification.

;;; Each house is of the form:
;;; (house nationality pet cigarette drink house-color)
;;; ?h is the variable representing the list of the five houses
(<- (zebra ?h ?w ?z)
  (= ?h ((house norwegian ? ? ? ?) ? (house ? ? ? milk ?) ? ?))          ; 1, 10, 9
  (iright (house ? ? ? ? ivory)                                          ; 6
          (house ? ? ? ? green) ?h)
  (nextto (house ? ? chesterfield ? ?)                                   ; 11
          (house ? fox ? ? ?) ?h)
  (nextto (house ? ? kools ? ?)                                          ; 12
          (house ? horse ? ? ?) ?h)
  (member (house ? ? luckystrike orange-juice ?) ?h)                     ; 13
  (nextto (house norwegian ? ? ? ?)                                      ; 15
          (house ? ? ? ? blue) ?h)
  ;; Now for the questions:
  (member (house ?z zebra ? ? ?) ?h))                                    ; Q2

; Here's the query
; (?- (zebra ?h ?w ?z))
; .. and here's the result

I'll publish the Lisp code for the Prolog interpreter separately (here):

Friday, February 17, 2017

Le terrible dilemme de la glorieuse France

Prior to Germany's unification in the mid-nineteenth century, France had regularly been top dog in Europe for a thousand years. From Charlemagne through the Sun King to Bonaparte, the French do not forget.
After the second world war, with Germany both beaten and shamed, the French seized upon the idea of the EEC/EU. They would lead an exhausted continent, and the humiliated Germans would pay. This vision of the EU as a greater, more glorious France endured for fifty years. It took the post-Cold War reunification of Germany and a new generation of Germans with declining guilt, to parlay Germany's economic power into European political leadership. Who can forget those pictures of Angela Merkel, striding into power-summits with a diminutive Francois Hollande trotting beside her, as if a handbag dog? A Germany coming into its own will have its natural satellites: Austria, parts of Eastern Europe. But France? Here is the dilemma. The EU project has become a suffocating straitjacket for France, forcing it into subservience to Germany. German-interest policies inflicted upon the whole EU have created popular discontent within France as elsewhere. The result has been the collapse of the establishment parties in France, the Socialists and the Republicans, and the emergence of a Tony Blair-lite politician (Emmanuel Macron) along with the nationalist Front National as the main contenders for power. The French establishment is solidly pro-EU, and as defeatist as the British powers-that-be. Its 'realistic' view is that the economic realities cannot be denied and that the best France can hope for is German satellite status with influence. Marine Le Pen speaks for a France which, like Brexit UK, decouples from German domination. It's not clear whether the upcoming Presidential election will coincide with a decisive rupture in the dynamics of French politics, which could hand Le Pen the victory. But the tensions between la glorieuse France and its default deferential future will only grow. If not this time, then the next. Thursday, February 16, 2017 Stonehenge in the February rain Perverse to visit Stonehenge yesterday, with heavy rain forecast for most of the day. Perhaps it was only Bristol Water, turning off our supply for 8 hours for 'urgent repairs', who could have forced us out. Anyway, a chance to see the new Visitor Centre and admire the A303 before it vanishes into a tunnel forever. Clare approaches the new Visitor Centre - henge-themed An ancestor of Trevor Eve was found here 5,000 years ago (click on image to make larger and read the caption). Never have I seen such bedraggled sheep Traffic jams on the A303 .. oh, and some rocks Clare was sampling something from the local druids I guess We did have a conversation about it. Afterwards we took a late lunch in Warminster. Much as I would like the town to be associated with the heavy military presence on the adjacent Salisbury Plain, in fact: "The town's name has evolved over time, known as Worgemynstre in approximately 912 and it was referred to in the Domesday Book in 1086 as Guerminstre. The town name of Warminster is thought to derive from the River Were, a tributary of River Wylye which runs through the town, and from an Anglo-Saxon minster or monastery, which existed in the area of St Denys's Church. The river's name, "Were" may derive from the Old English "worian" to wander." Wednesday, February 15, 2017 The problem of dialogue as regards chatbots Me: "So here's my plan." Clare: "OK." Me: "I've copied Peter Norvig's chapter 11 code for a Prolog interpreter in Lisp into a Notepad file. Unfortunately copying from PDF introduces all kinds of formatting errors, so it takes a while to get it into a compilable state." Clare: "?" 
Me: "I need to work through his main functions - principally to exactly understand his unification algorithm to start with. I may modify the code a bit, make it more transparent." Clare: "??" Me (oblivious): "And then I'm not sure I like the way he's storing clauses as plists on the clause head predicate. I think an assoc list would be clearer, if less efficient. Also, I need to do a few paper and pen examples to understand exactly the rationale for variable-renaming between clauses during resolution." Clare: "???" Me: "I'll get his standard Prolog depth-first search to work first of course. But I think it would be a good idea to explicitly break out the resolution procedure as a standalone function, and then parameterise the proof procedure by a range of possible search methods including breadth-first and best-first." Clare: "????" Me: "Of course, all this is just to get a series of tools in place for my speech-act planner, the one that's going to make my chatbot able to steer a conversation." Clare: "?????" Me: "I really think this could be a stellar app. I can see it on Google Play. The trouble is, 'in the knowledge lies the power' is just as relevant here and I might need to spend way too much time just data-filling various knowledge-bases, ... hmm, unless I can get it to safely learn from users. That's where Google does have an advantage ..." Clare: "Is that Mr Bullfinch I see on the fat block?" Tuesday, February 14, 2017 The Outstanding Leader is unwell .. The BBC website reports: Kim Jong-nam, 45, is said to have been targeted at the airport in Kuala Lumpur, the capital. A source close to the Malaysian PM's office told the BBC that Mr Kim was killed in the city, saying his body was now undergoing an autopsy. Kim Jong-nam is the eldest son of former North Korean leader Kim Jong-il." The US President Mr Trump says 'something will be done' about North Korea's somewhat provocative ballistic missile tests. Perhaps one imagines a pre-emptive strike on military facilities? Easily done, but it's hard not to see the Chinese reacting badly. Something a little more subtle is needed. Some virus, engineered to attach to cells expressing a certain genotype. Thanks to CRISPR, that kind of virus-editing seems quite practicable. A weaponised flu variant should do it. How to get a copy of the Dear Leader's genome? You can go a long way with a half brother. The other half of the mission is transportation. The US authorities should not be relying on a pandemic. A long range stealth-drone delivering a miniature RPV close to the OL should do it: something disposable and wasp-like. Send a swarm: only way to be sure. What are the chances the CIA has this mission design in front of the President as we speak? Greg Cochran was writing about jihadis, but the principle seems to apply. Monday, February 13, 2017 "The Genome Factor" - Conley and Fletcher Amazon link For decades the Standard Social Science Model has dominated academic social science and mainstream elite thinking. Broadly speaking, the model states: (i) there are no innate precursors for cognitive traits such as personality, intelligence, character and interests - everything is environmental; and (ii) consequentially, there are no innate psychological differences between women and men - or between blacks, whites, east asians or the Ashkenazim. Since everything is environmental (the 'blank slate' hypothesis) then any observed differences must be due to selective discrimination which can therefore be addressed by public policy. 
The consequence is a litany of discriminations with which we are all familiar: sexism, racism and various phobias. Plainly there are differences in the physical realm. Some sports are gender-segregated, for example. But even acknowledging that makes people nervous. Physical differences are played down as inconsequential. Less well-educated folk know that the blank-slate hypothesis is rubbish. A little experience of families, a little observation both of everyday life and of the world at large will convince most people that it is more likely that the moon truly is made of green cheese. So what explains the astonishing durability of the SSSM? Plainly it speaks to that powerful liberal sense of compassion and fairness highlighted in Jonathan Haidt's Moral Foundations Theory.  Most western democracies are multiracial, patchwork 'internal-empires' - the legacy of centuries of immigration and in some cases slavery. Race- and gender-blind application of legislation and social norms is considerably enhanced by taking the view that formal social and legal equality is also biological equality. Once the argument for population-genetic differences is admissible, it seems to liberals that the floodgates of discrimination would be opened once again. The ideology of the SSSM also makes it easy to justify generic immigration: high, low and zero-skilled ('everyone is the same'). This can be very convenient for company executives who suffer few of the frequently-negative social consequences of the latter. Recent advances in genetic sequencing over very large populations pose a grave threat to the convenient untruths of the SSSM. It is already known that almost all psychological and behavioural traits of interest to social scientists are heritable at c. 50-60%. This means that about half the trait-variation in the population is attributable to genetic differences, the rest being due to differences in contemporary shared and personal environments. Apart from hands-on confirmation of these heritability results, genomics also adds personalisation. Once we understand how to map a person's genome to such phenotypic attributes as IQ, personality, character and a myriad of narrower traits (such as political orientation) with high precision - and correlational accuracies of 0.7-0.9 seem potentially in reach - then it seems that genome truly is life-destiny. And most likely this is the case. The life-history similarities of twins, even when raised apart, tends to show the way. Social scientists mostly ignore the incoming tsunami of new research. But the genomic telescope has been invented, it's not going to go away. A more sophisticated strategy is deployed in "The Genome Factor" by Conley and Fletcher. The authors are sociologists by profession but research the social science implications of genomic surveys. They had a choice - to go with the trend of such research to transcend the SSSM - or to find ever more intricate arguments to preserve it. In choosing the latter approach, their strategy is to freely accept the theoretical results of population genetics and the empirical data of GWAS (genome-wide association studies) where this does not threaten blank-slatism. They then labour to find fault in every study which might cast it into doubt while feeding plenty of slack to the many purported environment-only explanations of race and gender differences. You will see plenty of uncritical space given to: continuing discrimination and poor institutions (pp. 
107 ff.); subconscious bias, priming and stereotype threat (appendix 5). In chapter 4, the authors address the claims of Herrnstein and Murray's seminal 1994 book, The Bell Curve. The three theses they wish to 'take seriously' are (to summarise): (i) increasing genetic stratification due to cognitive meritocracy; (ii) increasing assortative mating for intelligence; (iii) cognitive dysgenics via reduced fertility in the cognitive elite. They announce, to their evident satisfaction, that none of these theses is born out by the evidence. But how convinced should we be by their arguments? The answer is, not very. There are many confounding variables - particular the massive changes in education and employment practices over the decades relevant to analysis - as Conley and Fletcher themselves spell out. In some cases the phenotypic attributes measured do, in fact, accord with Herrnstein and Murray's theses but the authors rapidly draw our attention to their underlying genetic correlates, as derived from GWAS. Here they find no such trends. But unfortunately, we do not yet know the genetic markers for the relevant cognitive traits. Instead, the genomic indicator the authors use is the incredibly noisy 'polygenic score' (PGS). All we can really conclude is that the effects are small, and that as far as Herrnstein and Murray's proposed theses are concerned, it's too early to be sure. Chapter 6, 'The Wealth of Nations', engages with Ashraf and Galor's 'Goldilocks' hypothesis of correlations between degrees of genetic diversity (too much in Africa?) and higher income and growth. Yet the correlations are poor (p. 124).  I wish they had engaged with work such as Garett Jones' 'Hive Mind', which focuses on ideas that country differences in IQ and size of the 'smart fraction' have something to do with it. Jones finds remarkably high correlations. But you can see the dangers. So this is a book with an agenda although I think it's subconscious bias. The authors take too much pleasure in 'refuting' challenges to the core doctrines of the SSSM to make me think they're just doing so to protect their positions. There are things to learn from this book. As critics they look for every conceivable flaw in twin and GWAS studies - this is socially useful. They also explain various techniques such as GWAS well, although the book is too technical and too dry for both the general public and mainstream social science academics. In all, I regard this book as a missed opportunity. Sunday, February 12, 2017 Chatbots: conversation as AI planning I just love those YouTube videos of Internet-connected dolls. They're Wifi-linked to IBM's Watson, or some similar AI-in-the-cloud. You'd think that the conversation would be scintillating - insofar as talking to a small child might have its moments. But no, in the clip the adorable child is always shown asking the doll, "How far away is the Moon?" and the doll shows its conversational prowess by reciting Wikipedia. Surely we can do better. It's good to recall that we converse in order to achieve objectives. Sometime those objectives are indeed objective - as when we wonder how far away is the Moon. Most of the time, though, the objectives are more about our more basic human desires. The need for sympathy, approbation, bonding against a third party, humiliation and revenge. Oh dear, I've been listening to Cersei Lannister again.. Cersei and Margaery Conversation is composed of speech acts directed towards goals, the goals often being the alteration of emotional states. 
Real-life chat is therefore very dependent on the accurate reading of emotional cues: facial expressions, tone of voice or other body language. No wonder email and WhatsApp chats go awry so often. This suggests that a chatbot (a better Eliza) should treat conversation as an AI planning task.* Naturally I'm not the first to realise that - this 1980 article - but it's the next place to go. This relatively recent Stanford review - Conversational Agents - is quite illuminating (PDF slides). Call me an old GOFAI-nostalgic but bottom-up number-crunching the stats is subject to diminishing returns as we move to higher cognitive tasks. Which is why I'm less than impressed by the tedious Google Assistant in Allo. * I mean like GPS. Most conversational chatbots - like Google Assistant - cope with the extreme difficulties of unrestricted natural language input by giving a series of response-text buttons for the user to select ("Tell me a joke"). In a formalised conversational model, the chatbot should treat the conversation as a two-person game-tree. The opponent (user) has a choice of possible conversational rejoinders ('moves') which the program has a chance of understanding, and which propel the conversation forward in some direction. Best to look at the planner conversation 'speech-act' rule-base and list the possible responses (need a way to flag input-variables) so as to guide the user. Chatbot>: "What's your brother's name? [I understand best:    1. <first-name>    2. <first-name> <second-name>    3. <name> but we call him <nickname>    4. I don't have a brother.    5. Could we move on to another topic? You can enter the response number if there's no information to give.] User>:  Jimmie. "When I am dead, my dearest" When I am dead, my dearest,     Sing no sad songs for me; Plant thou no roses at my head,     Nor shady cypress tree: Be the green grass above me     With showers and dewdrops wet; And if thou wilt, remember,     And if thou wilt, forget. I shall not see the shadows,    I shall not feel the rain; I shall not hear the nightingale    Sing on, as if in pain: And dreaming through the twilight     That doth not rise nor set, Haply I may remember,     And haply may forget. Saturday, February 11, 2017 Diary: snow + books + chatbot backstory We last had snow here back in December 2010 (when it was heavy). This morning only the lightest dusting .. and as I write, it's gone. A light dusting of snow in our garden this morning This new book by Dalton Conley and Jason Fletcher should be arriving today. Amazon link Expect a review in due course if it's any good. Update: it's now arrived and I've had a peek. The authors are both sociology professors, which rings alarm bells. And they have an agenda. They're here to defend the standard social science model against the unsettling results of recent DNA sequencing programs and GWAS (genome-wide association studies). Their idea is to make the most minimal concessions to evidence while still preserving the traditional social-sciences advocacy-based value system. So races don't exist, racial genetic differences in cognition don't exist, and most else which might disturb the liberal agenda of the irrelevance of genetics for outcomes is explained away. [Update (Sunday): my first impressions were wrong. The book is both more honest and more erudite than had appeared from a first skim. The authors may be billed as sociologists but they are both genetics research practitioners. 
Once you have absorbed the theory and methodology, it's harder to be agenda-driven - if you're honest. I think a review will be in order once I've finished.]

[Update (Monday): Finished. It was as per my first impressions. A sophisticated attempt to save the SSSM from those pesky geneticists. Sophistry rules: don't waste your money. Review here.]

Genetics, especially human genomics, is the one area of contemporary science most at odds with prevailing western ideology. Most of the time this didn't matter - you can believe whatever you like if it has no practical consequences: in fact most human beliefs are like that. But with emergent technologies for DNA sequencing, embryo selection and genome-editing, the science soon will matter. You will be able, in principle, to improve attributes (such as the health, emotional stability and intelligence) of your children. The issues will be regulation, cost and your desire to do so. I anticipate, with no joy, decades of screaming hysteria.

I'm also working my way through Sean Carroll's book on entropy and the 'Past Hypothesis', and Stephen Cohen's excellent biography of Bukharin, not to mention Scott Bakker's 'Prince of Nothing'. It's hard to timeshare: particularly when one is programming. Incidentally, just as it is said that no-one feels hunger pangs when stepping out of a plane to do a parachute jump, an afternoon programming tends to obscure the fact you've skipped lunch.

Just over 68kg this morning (10st 10.4lb) as the abdominal bulge recedes somewhat. I'm pleased.

Just a short note-to-self about the chatbot stuff. The next thing (after the recent Eliza-style vacuity) is to capture the 'aboutness' of conversation. In chatting, each conversational partner brings some backstory to a discussion. The essence of being an effective chatbot is:

1. Manage a dynamic back-story of your own,
2. Engage with the backstory of your human conversational partner.

If the chatbot is a cat, the backstory can be quite constrained - for example, nocturnal adventures in the garden with other cats, badgers, voles, birds, food and vomiting.

I was looking at Peter Norvig's GPS reconstruction (previously discussed here) as a possible route to writing a dynamic simulation model. The various entities I mentioned (cats, badgers, voles, ..) should have actions which require their preconditions in the garden knowledge-base (KB) to hold, and which then assert their postconditions in the next KB iteration. Make actions probabilistic and you've got an interesting simulation. GPS doesn't quite work in this forward-chaining mode so it's not a case of just adapting his existing code. But the Eliza pattern-matching engine can be the reusable heart of it.
What is time evolution?

#1 (Apr 27, 2007): What is time evolution? Is it a term only applicable to matter waves or does it apply to other waves as well?

#2 (Apr 27, 2007): In quantum mechanics, the state of any physical system is represented by a vector. Suppose that $|\alpha\rangle$ is such a vector. Time evolution is the process

$$|\alpha\rangle \rightarrow e^{-iHt}|\alpha\rangle$$

where H is the Hamiltonian operator. You can think of the state vector as a representation of all properties of the system, in the past, present, and future. The effect of the time evolution operator is then to transform our state vector to the state vector that another observer would use to describe the same system. This would be an observer whose clock shows zero t seconds after ours does. That point of view is called the Heisenberg picture. (If we're using the Heisenberg picture, I prefer to call it time translation rather than time evolution.)

Another point of view is the Schrödinger picture. Here we think of the state vector as a time-dependent quantity, $|\alpha;t\rangle = e^{-iHt}|\alpha\rangle$. We think of this as the state of the system at time t. It's easy to verify that this time dependent state vector satisfies the Schrödinger equation (because the time evolution operator does):

$$i\frac{\partial}{\partial t}|\alpha;t\rangle = H|\alpha;t\rangle$$
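As a concrete illustration of the answer above (not part of the original thread), here is a small numpy/scipy sketch that applies the time-evolution operator $e^{-iHt}$ to a two-level system; the Hamiltonian and initial state are arbitrary choices made up for the example.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
# A simple two-level Hamiltonian in arbitrary units (made up for illustration).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

psi0 = np.array([1.0, 0.0], dtype=complex)   # initial state |alpha>

def evolve(psi, t):
    """Schroedinger-picture state at time t: |alpha;t> = exp(-iHt/hbar)|alpha>."""
    U = expm(-1j * H * t / hbar)
    return U @ psi

for t in [0.0, 0.5, 1.0]:
    psi_t = evolve(psi0, t)
    print(f"t={t}: |psi> = {psi_t}, norm = {np.vdot(psi_t, psi_t).real:.6f}")
```

The norm stays equal to 1 at every time because the time-evolution operator is unitary.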
7.4. Generating a small cosmological constant from Inflationary particle production

A novel means of generating a small Lambda at the present epoch was suggested by Sahni & Habib (1998). Massive scalar fields in curved spacetime satisfy the wave equation

$$\Box\phi + (m^2 + \xi R)\phi = 0 \qquad (85)$$

where R is the Ricci scalar and $\xi$ parametrizes the coupling to gravity. In a spatially flat FRW universe the field variables separate, so that each wave mode $\phi_k$ evolves independently. The comoving wavenumber $k = 2\pi a/\lambda$, where $\lambda$ is the physical wavelength of scalar field quanta. Defining the conformal field $\chi_k = a\phi_k$ and substituting $R = 6a''/a^3$ into Eq. (85) leads to

$$\chi_k'' + \left[k^2 + m^2a^2 - (1 - 6\xi)\frac{a''}{a}\right]\chi_k = 0 \qquad (86)$$

where differentiation (denoted by primes) is carried out with respect to the conformal time $\eta = \int dt/a$. Equation (86) closely resembles the one dimensional Schrödinger equation in quantum mechanics

$$\frac{d^2\psi}{dx^2} + \left[k^2 - V(x)\right]\psi = 0. \qquad (87)$$

Comparing (87) and (86) we find that the role of the "potential barrier in space" V(x) is played by the time dependent term $V(\eta) = -m^2 a^2 + (1 - 6\xi)\,a''/a$, which may be thought of as a "potential barrier in time" [82, 178, 84]. (The form of the barrier is shown in Fig. 14 assuming that Inflation is succeeded by radiative and matter dominated eras.) In quantum mechanics the presence of a barrier leads to particles being reflected and transmitted so that $\Psi_{\rm in}(x) = e^{ikx} + R(k)e^{-ikx}$ in the incoming region, and $\Psi_{\rm out}(x) = T(k)e^{ikx}$ in the outgoing region. Similarly, the presence of the time-like barrier $V(\eta)$ will lead to particles moving forwards in time as well as backwards, after being reflected off the barrier. The scalar field at late times will therefore not be in its vacuum state $\phi_k^+$ but will be described by a linear superposition of positive and negative frequency states

$$\phi_k \rightarrow \alpha_k\,\phi_k^{+} + \beta_k\,\phi_k^{-}. \qquad (88)$$

The role of reflection and transmission coefficients R, T is now played by the Bogoliubov coefficients $\alpha, \beta$ which quantify particle production and vacuum polarization effects and are obtained by matching `in modes' during Inflation with `out modes' defined during the radiation or matter dominated eras. Due to the existence of space-time curvature, positive and negative frequencies can be defined only in the limiting case of small wavelengths, $\lim_{k\to\infty}\phi_k^{\pm} \simeq \frac{1}{\sqrt{2k}\,a}\,e^{\mp ik\eta}$, for which effects of curvature can be neglected. The value of $\alpha, \beta$ is obtained by matching modes corresponding to the `out' vacuum with those of the `in' vacuum just after Inflation. (The `in' and `out' vacua are defined during Inflation and radiation/matter domination respectively.)

Figure 14. The process of super-adiabatic amplification of zero-point fluctuations (particle production) is illustrated. The amplitude of modes having wavelengths smaller than the Hubble radius decreases conformally with the expansion of the universe, whereas that of larger-than Hubble radius modes freezes (if $\xi = 0$) or grows with time ($\xi < 0$). Consequently, modes with $\xi \leq 0$ have their amplitude super-adiabatically amplified on re-entering the Hubble radius after inflation (from Sahni & Habib 1998). (The case $\xi = 0$ also describes quantum mechanical production of gravity waves in a FRW model [82].)

The net effect of particle creation and vacuum polarization is quantified by the vacuum expectation value of the energy-momentum tensor $\langle T_{ik}\rangle$.
For $\xi < 0$, $|\xi| \ll 1$ and $m/H \lesssim 1$ the leading order contribution to $\langle T_{ik}\rangle$ is given by Eq. (89). We immediately see that the first term is simply proportional to the Einstein tensor and the second has the covariant form usually associated with a cosmological constant (i.e. $T_{ik} = g_{ik}\Lambda$). Substituting for $\langle T_{ik}\rangle$ in the semiclassical Einstein equations

$$G_{ik} = 8\pi G\,\langle T_{ik}\rangle \qquad (90)$$

we find Eqs. (91) and (92). The term proportional to $H^2\langle\Phi^2\rangle$ in (92) may be absorbed into the left hand side of (91), leading to Eq. (94), where $\bar G \simeq G/(1 + 8\pi G|\xi|\langle\Phi^2\rangle)$ is the new, time dependent gravitational constant. (Observational bounds on the rate of change of $\bar G$ set the constraint $|\xi| \ll 1$.) As shown in [171], for $\xi < 0$ the value of $\langle\Phi^2\rangle$ can be very large, so that $\bar G \simeq 1/(8\pi|\xi|\langle\Phi^2\rangle)$, and Eq. (95) follows. We therefore find that the energy density of created particles defines an effective cosmological constant which can contribute significantly to the total density of the universe at late times, leading to $\Omega_m + \Omega_\Lambda \simeq 1$ [171]. However, it should be noted that this result was obtained in the Hartree-Fock (or semiclassical gravity) approximation (90), which is not exact in considerations of a single quantum field, since metric and field fluctuations may significantly deviate from their rms values. So, further study of this problem using stochastic methods (similar to those used in stochastic inflation [182, 183, 196] and stochastic reheating after inflation [115]) is desirable.
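To make the scattering-in-time analogy of Eqs. (86)-(88) concrete, here is a small Python sketch (not from the review): it integrates the mode equation $\chi_k'' + [k^2 - V(\eta)]\chi_k = 0$ through a toy barrier and reads off Bogoliubov coefficients by matching to plane waves at late times. The Gaussian barrier and all parameter values are invented purely for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0
V0, sigma = 3.0, 1.0                       # toy "barrier in time" parameters (invented)
V = lambda eta: V0 * np.exp(-(eta / sigma) ** 2)

# chi'' + (k^2 - V(eta)) chi = 0, integrated as a real 4-vector
# y = [Re chi, Im chi, Re chi', Im chi'].
def rhs(eta, y):
    re, im, dre, dim = y
    w2 = k ** 2 - V(eta)
    return [dre, dim, -w2 * re, -w2 * im]

eta0, eta1 = -20.0, 20.0
# 'in' vacuum mode: chi = exp(-i k eta) / sqrt(2k) long before the barrier.
chi0 = np.exp(-1j * k * eta0) / np.sqrt(2 * k)
dchi0 = -1j * k * chi0
sol = solve_ivp(rhs, [eta0, eta1],
                [chi0.real, chi0.imag, dchi0.real, dchi0.imag],
                rtol=1e-10, atol=1e-12)

chi1 = sol.y[0, -1] + 1j * sol.y[1, -1]
dchi1 = sol.y[2, -1] + 1j * sol.y[3, -1]
# Match to alpha*exp(-ik eta)/sqrt(2k) + beta*exp(+ik eta)/sqrt(2k) at eta1.
alpha = np.sqrt(k / 2) * (chi1 + 1j * dchi1 / k) * np.exp(1j * k * eta1)
beta  = np.sqrt(k / 2) * (chi1 - 1j * dchi1 / k) * np.exp(-1j * k * eta1)
print("|alpha|^2 - |beta|^2 =", abs(alpha) ** 2 - abs(beta) ** 2)  # Wronskian check, ~1
print("particle number |beta|^2 =", abs(beta) ** 2)
```

A nonzero $|\beta|^2$ at late times is the toy analogue of the particle production discussed above; the combination $|\alpha|^2 - |\beta|^2 = 1$ serves as a numerical sanity check.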
Tuesday, April 29, 2014 FQXi essay contest 2014: How Should Humanity Steer the Future? This year’s essay contest of the Foundational Questions Institute “How Should Humanity Steer the Future?” broaches a question that is fundamental indeed, fundamental not for quantum gravity but for the future of mankind. I suspect the topic selection has been influenced by the contest being “presented in partnership with” (which I translate into “sponsored by”) not only the John Templeton foundation and Scientific American, but also a philanthropic organization called the “Gruber Foundation” (which I had never heard of before) and Jaan Tallinn. Tallinn is no unknown, he is one of the developers of Skype and when I type his name into Google the auto completion is “net worth”. I met him at the 2011 FQXi conference where he gave a little speech about his worries that artificial intelligence will turn into a threat to humans. I wrote back then a blogpost explaining that I don’t share this particular worry. However, I recall Tallinn’s speech vividly, not because it was so well delivered (in fact, he seemed to be reading off his phone), but because he was so very sincere about it. Most people’s standard reaction in the face of threats to the future of mankind is cynicism or sarcasm, essentially a vocal shoulder shrug, whereas Tallinn seems to have spent quite some time thinking about this. And well, somebody really should be thinking about this... And so I appreciate the topic of this year’s essay contest has a social dimension, not only because it gets tiresome to always circle the same question of where the next breakthrough in theoretical physics will be and the always same answers (let me guess, it’s what you work on), but also because it gives me an outlet for my interests besides quantum gravity. I have always been fascinated by the complex dynamics of systems that are driven by the individual actions of many humans because this reaches out to the larger question of where life on planet Earth is going and why and what all of this is good for. If somebody asks you how humanity should steer the future, a modest reply isn’t really an option, so I have submitted my five step plan to save the world. Well, at least you can’t blame me for not having a vision. The executive summary is that we will only be able to steer at all if we have a way to collectively react to large scale behavior and long-term trends of global systems, and this can only happen if we are able to make informed decisions intuitively, quickly and without much thinking. A steering wheel like this might not be sufficient to avoid running into obstacles, but it is definitely necessary, so that is what we have to start with. The trends that we need to react to are those of global and multi-leveled systems, including economic, social, ecological and politic systems, as well as various infrastructure networks. Presently, we basically fail to act when problems appear. While the problems arise from the interaction of many people and their environment, it is still the individual that has to make decisions. But the individual presently cannot tell how their own action works towards their goals on long distance or time scales. To enable them to make good decisions, the information about the whole system has to be routed back to the individual. But that feedback loop doesn’t presently exist. In principle it would be possible today, but the process is presently far too difficult. 
The vast majority of people do not have the time and energy to collect the necessary information and make decisions based on it. It doesn’t help to write essays about what we ‘should’ do. People will only act if it’s really simple to do and of immediate relevance for them. Thus my suggestion is to create individual ‘priority maps’ that chart personal values and provide people with intuitive feedback for how well a decision matches with their priorities. A simple example. Suppose you train some software to tell what kind of images you find aesthetically pleasing and what you dislike. You now have various parameters, say colors, shapes, symmetries, composition and so on. You then fill out a questionnaire about preferences for political values. Now rather than long explanations which candidate says what, you get an image that represents how good the match is by converting the match in political values to parameters in an image. You pick the image you like best and are done. The point is that you are being spared having to look into the information yourself, you only get to see the summary that encodes whether voting for that person would work towards what you regard important. Oh, I hear you say, but that vastly oversimplifies matters. Indeed, that is exactly the point. Oversimplification is the only way we’ll manage to overcome our present inability to act. If mankind is to be successful in the long run, we have to evolve to anticipate and react to interrelated global trends in systems of billions of people. Natural selection might do this, but it would take too much time. The priority maps are a technological shortcut to emulate an advanced species that is ‘fit’ in the Darwinian sense, fit to adapt to its changing environment. I envision this to become a brain extension one day. I had a runner up to this essay contribution, which was an argument that research in quantum gravity will be relevant for quantum computing, interstellar travel and technological progress in general. But it would have been a quite impractical speculation (not to mention a self-advertisement of my work on superdeterminism, superluminal information exchange and antigravity). In my mind of course it’s all related – the laws of physics are what eventually drive the evolution of consciousness and also of our species. But I decided to stick with a proposal that I think is indeed realizable today and that would go a long way to enable humanity to steer the future. I encourage you to check out the essays which cover a large variety of ideas. Some of the contributions seem to be very bent towards the aim of making a philosophical case for some understanding of natural law rather than the other, or to find parallels to unsolved problems in physics, but this seems quite a stretch to me. However, I am sure you will find something of interest there. At the very least it will give you some new things to worry about... Saturday, April 26, 2014 Academia isn’t what I expected The Ivory Tower from The Neverending Story. [Source] Talking to the students at the Sussex school let me realize how straight-forward it is today to get a realistic impression of what research in this field looks like. Blogs are a good source of information about scientist’s daily life and duties, and it has also become so much easier to find and make contact with people in the field, either using social networks or joining dedicated mentoring programs. Before I myself got an office at a physics institute I only had a vague idea of what people did there. 
Absent the lauded 'role models' my mental image of academic research was formed mostly by reading biographies of the heroes of General Relativity and Quantum Mechanics, plus a stack of popular science books. The latter didn't contain much about the average researcher's daily tasks, and to the extent that the former captured university life, it was life in the first half of the 20th century. I expected some things to have changed during 50 years, notably in technological advances and the ease of travel, publishing, and communication. I finished high school in '95, so the biggest changes were yet to come. I also knew that disciplines had drifted apart, that philosophy and physics were mostly going separate ways now, and that the days in which a physicist could also be a chemist could also be an artist were long gone. It was clear that academia had generally grown, become more organized and institutionalized, and more closely linked to industrial research and applications. I had heard that applying for money was a big part of the game. Those were the days. But my expectations were wrong in many other ways. 20 years, 9 moves and 6 jobs later, here's the contrast between what I believed theoretical physics would be like and the reality:

1. Specialization

While I knew that interdisciplinarity had given in to specialization I thought that theoretical physicists would be in close connection to the experimentalists, that they would frequently discuss experiments that might be interesting to develop, or data that required explanation. I also expected theoretical physicists to work closely together with mathematicians, because in the history of physics the mathematics has often been developed alongside the physics. In both cases the reality is an almost complete disconnect. The exchange takes place mostly through published literature or especially dedicated meetings or initiatives.

2. Disconnect

I expected a much larger general intellectual curiosity and social responsibility in academia. Instead I found that most researchers are very focused on their own work and nothing but their own work. Not only do institutes rarely if ever have organized public engagement or events that are not closely related to the local research, it's also that most individual researchers are not interested. In most cases, they plainly don't have the time to think about anything other than their next paper. That disconnect is the root of complaints like Nicholas Kristof's recent Op-Ed, where he calls upon academics: "[P]rofessors, don't cloister yourselves like medieval monks — we need you!"

3. The Machinery

My biggest reality shock was how much of research has turned into manufacturing, into the production of PhDs and papers, papers that are necessary for the next grant, which is necessary to pay the next students, who will write the next papers, iterate. This unromantic hamster wheel still shocks me. It has its good side too though: The standardization of research procedures limits the risks of the individual. If you know how to play along, and are willing to, you have good chances that you can stay. The disadvantage is though that this can force students and postdocs to work on topics they are not actually interested in, and that turns off many bright and creative people.

4. Nonlocality

I did not anticipate just how frequently travel and moves are necessary these days. If I had known about this in advance, I think I would have left academia after my diploma. But so I just slipped into it.
Luckily I had a very patient boyfriend who turned husband who turned father of my children. 5. The 2nd family The specialization, the single-mindedness, the pressure and, most of all, the loss of friends due to frequent moves create close ties among those who are together in the same boat. It’s a mutual understanding, the nod of been-there-done-that, the sympathy with your own problems that make your colleagues and officemates, driftwood as they often are, a second family. In all these years I have felt welcome at every single institute that I have visited. The books hadn’t told me about this. Experience, as they say, is what you get when you were expecting something else. By and large, I enjoy my job. Most of the time anyway. My lectures at the Sussex school went well, except that the combination of a recent cold and several hours of speaking stressed my voice box to the point of total failure. Yesterday I could only whisper. Today I get out some freak sounds below C2 but that’s pretty much it. It would be funny if it wasn’t so painful. You can find the slides of my lectures here and the guide to further reading here. I hope they live up to your expectations :) Monday, April 21, 2014 Away note I will be traveling the rest of the week to give a lecture at the Sussex graduate school "From Classical to Quantum GR", so not much will happen on this blog. For the school, we were asked for discussion topics related to our lectures, below are my suggestions. Leave your thoughts in the comments, additional suggestions for topics are also welcome. • Is it socially responsible to spend money on quantum gravity research? Don't we have better things to do? How could mankind possibly benefit from quantum gravity? • Can we make any progress on the theory of quantum gravity without connection to experiment? Should we think at all about theories of quantum gravity that do not produce testable predictions? How much time do we grant researchers to come up with predictions? • What is your favorite approach towards quantum gravity? Why? Should you have a favorite approach at all? • Is our problem maybe not with the quantization of gravity but with the foundations of quantum mechanics and the process of quantization? • How plausible is it that gravity remains classical while all the other forces are quantized? Could gravity be neither classical nor quantized? • How convinced are you that the Planck length is at 10-33cm? Do you think it is plausible that it is lower? Should we continue looking for it? • What do you think is the most promising area to look for quantum gravitational effects and why? • Do you think that gravity can be successfully quantized without paying attention to unification? Lara and Gloria say hello and wish you a happy Easter :o) Thursday, April 17, 2014 The Problem of Now [Image Source] Einstein’s greatest blunder wasn’t the cosmological constant, and neither was it his conviction that god doesn’t throw dice. No, his greatest blunder was to speak to a philosopher named Carnap about the Now, with a capital. “The problem of Now”, Carnap wrote in 1963, “worried Einstein seriously. He explained that the experience of the Now means something special for men, something different from the past and the future, but that this important difference does not and cannot occur within physics” I call it Einstein’s greatest blunder because, unlike the cosmological constant and indeterminism, philosophers, and some physicists too, are still confused about this alleged “Problem of Now”. 
The problem is often presented like this. Most of us experience a present moment, which is a special moment in time, unlike the past and unlike the future. If you write down the equations governing the motion of some particle through space, then this particle is described, mathematically, by a function. In the simplest case this is a curve in space-time, meaning the function is a map from the real numbers to a four-dimensional manifold. The particle changes its location with time. But regardless of whether you use an external definition of time (some coordinate system) or an internal definition (such as the length of the curve), every single instant on that curve is just some point in space-time. Which one, then, is “now”? You could argue rightfully that as long as there’s just one particle moving on a straight line, nothing is happening, and so it’s not very surprising that no notion of change appears in the mathematical description. If the particle would scatter on some other particle, or take a sudden turn, then these instances can be identified as events in space-time. Alas, that still doesn’t tell you whether they happen to the particle “now” or at some other time. Now what? The cause for this problem is often assigned to the timeless-ness of mathematics itself. Mathematics deals in its core with truth values and the very point of using math to describe nature is that these truths do not change. Lee Smolin has written a whole book about the problem with the timeless math, you can read my review here. It may or may not be that mathematics is able to describe all of our reality, but to solve the problem of now, excuse the heresy, you do not need to abandon a mathematical description of physical law. All you have to do is realize that the human experience of now is subjective. It can perfectly well be described by math, it’s just that humans are not elementary particles. The decisive ability that allows us to experience the present moment as being unlike other moments is that we have a memory. We have a memory of events in the past, an imperfect one, and we do not have memory of events in the future. Memory is not in and by itself tied to consciousness, it is tied to the increase of entropy, or the arrow of time if you wish. Many materials show memory; every system with a path dependence like eg hysteresis does. If you get a perm the molecule chains in your hair remember the bonds, not your brain. Memory has nothing to do with consciousness in particular which is good because it makes it much easier to find the flaw in the argument leading to the problem of now. If we want to describe systems with memory we need at the very least two time parameters: t to parameterize the location of the particle and τ to parameterize the strength of memory of other times depending on its present location. This means there is a function f(t,τ) that encodes how strong is the memory of time τ at moment t. You need, in other words, at the very least a two-point function, a plain particle trajectory will not do. That we experience a “now” means that the strength of memory peaks when both time parameters are identical, ie t-τ = 0. That we do not have any memory of the future means that the function vanishes when τ > t. For the past it must decay somehow, but the details don’t matter. This construction is already sufficient to explain why we have the subjective experience of the present moment being special. And it wasn’t that difficult, was it? 
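A minimal concrete choice satisfying these requirements (my own example, not from the post) is an exponentially fading kernel,

f(t,τ) = θ(t−τ) exp(−(t−τ)/T),

where θ is the Heaviside step function and T sets how quickly memories fade: f vanishes for τ > t (no memory of the future), is largest at τ = t (the present moment is perceived as special), and decays for τ < t (older memories are weaker).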
The origin of the problem is not in the mathematics, but in the failure to distinguish subjective experience of physical existence from objective truth. Einstein spoke about "the experience of the Now [that] means something special for men". Yes, it means something special for men. This does not mean however, and does not necessitate, that there is a present moment which is objectively special in the mathematical description. In the above construction all moments are special in the same way, but in every moment that very moment is perceived as special. This is perfectly compatible with both our experience and the block universe of general relativity. So Einstein should not have worried. I have a more detailed explanation of this argument – including a cartoon! – in a post from 2008.

I was reminded of this now because Mermin had a comment in the recent issue of Nature magazine about the problem of now. In his piece, Mermin elaborates on qbism, a subjective interpretation of quantum mechanics. I was destined to dislike this just because it's a waste of time and paper to write about non-existent problems. Amazingly however, Mermin uses the subjectiveness of qbism to arrive at the right conclusion, namely that the problem of the now does not exist because our experiences are by their very nature subjective. However, he fails to point out that you don't need to buy into fancy interpretations of quantum mechanics for this. All you have to do is watch your hair recall sulphur bonds.

The summary, please forgive me, is that Einstein was wrong and Mermin is right, but for the wrong reasons. It is possible to describe the human experience of the present moment with the "timeless" mathematics that we presently use for physical laws, it isn't even difficult and you don't have to give up the standard interpretation of quantum mechanics for this. There is no problem of Now and there is no problem with Tegmark's mathematical universe either. And Lee Smolin, well, he is neither wrong nor right, he just has a shaky motivation for his cosmological philosophy. It is correct, as he argues, that mathematics doesn't objectively describe a present moment. However, it's a non sequitur that the current approach to physics has reached its limits, because this timeless math doesn't constitute a conflict with our experience or observation.

Most people get a general feeling of uneasiness when they first realize that the block universe implies all the past and all the future are just as real as the present moment, that even though we experience the present moment as special, it is only subjectively so. But if you can combat your uneasiness for long enough, you might come to see the beauty in eternal mathematical truths that transcend the passage of time. We always have been, and always will be, children of the universe.

Saturday, April 12, 2014

Book review: "The Theoretical Minimum – Quantum Mechanics" By Susskind and Friedman

Quantum Mechanics: The Theoretical Minimum
What You Need to Know to Start Doing Physics
By Leonard Susskind, Art Friedman
Basic Books (February 25, 2014)

This book is the second volume in a series that we can expect to be continued. The first part covered Classical Mechanics. You can read my review here. The volume on quantum mechanics seems to have come into being much like the first: Leonard Susskind teamed up with Art Friedman, a data consultant whose role I envision being to say "Wait, wait, wait" whenever the professor's pace gets too fast.
The result is an introduction to quantum mechanics like I haven’t seen before. The ‘Theoretical Minimum’ focuses, as its name promises, on the absolute minimum and aims at being accessible with no previous knowledge other than the first volume. The necessary math is provided along the way in separate interludes that can be skipped. The book begins with explaining state vectors and operators, the bra-ket notation, then moves on to measurements, entanglement and time-evolution. It uses the concrete example of spin-states and works its way up to Bell’s theorem, which however isn’t explicitly derived, just captured verbally. However, everybody who has made it through Susskind’s book should be able to then understand Bell’s theorem. It is only in the last chapters that the general wave-function for particles and the Schrödinger equation make an appearance. The uncertainty principle is derived and path integrals are very briefly introduced. The book ends with a discussion of the harmonic oscillator, clearly building up towards quantum field theory there. I find the approach to quantum mechanics in this book valuable for several reasons. First, it gives a prominent role to entanglement and density matrices, pure and mixed states, Alice and Bob and traces over subspaces. The book thus provides you with the ‘minimal’ equipment you need to understand what all the fuzz with quantum optics, quantum computing, and black hole evaporation is about. Second, it doesn’t dismiss philosophical questions about the interpretation of quantum mechanics but also doesn’t give these very prominent space. They are acknowledged, but then it gets back to the physics. Third, the book is very careful in pointing out common misunderstandings or alternative notations, thus preventing much potential confusion. The decision to go from classical mechanics straight to quantum mechanics has its disadvantages though. Normally the student encounters Electrodynamics and Special Relativity in between, but if you want to read Susskind’s lectures as self-contained introductions, the author now doesn’t have much to work with. This time-ordering problem means that every once in a while a reference to Electrodynamics or Special Relativity is bound to confuse the reader who really doesn’t know anything besides this lecture series. It also must be said that the book, due to its emphasis on minimalism, will strike some readers as entirely disconnected from history and experiment. Not even the double-slit, the ultraviolet catastrophe, the hydrogen atom or the photoelectric effect made it into the book. This might not be for everybody. Again however, if you’ve made it through the book you are then in a good position to read up on these topics elsewhere. My only real complaint is that Ehrenfest’s name doesn’t appear together with his theorem. The book isn’t written like your typical textbook. It has fairly long passages that offer a lot of explanation around the equations, and the chapters are introduced with brief dialogues between fictitious characters. I don’t find these dialogues particularly witty, but at least the humor isn’t as nauseating as that in Goldberg’s book. All together, the “Theoretical Minimum” achieves what it promises. If you want to make the step from popular science literature to textbooks and the general scientific literature, then this book series is a must-read. 
If you can’t make your way through abstract mathematical discussions and prefer a close connection to example and history, you might however find it hard to get through this book. I am certainly looking forward to the next volume. (Disclaimer: Free review copy.) Monday, April 07, 2014 Will the social sciences ever become hard sciences? The term “hard science” as opposed to “soft science” has no clear definition. But roughly speaking, the less the predictive power and the smaller the statistical significance, the softer the science. Physics, without doubt, is the hard core of the sciences, followed by the other natural sciences and the life sciences. The higher the complexity of the systems a research area is dealing with, the softer it tends to be. The social sciences are at the soft end of the spectrum. To me the very purpose of research is making science increasingly harder. If you don’t want to improve on predictive power, what’s the point of science to begin with? The social sciences are soft mainly because data that quantifies the behavior of social, political, and economic systems is hard to come by: it’s huge amounts, difficult to obtain and even more difficult to handle. Historically, these research areas therefore worked with narratives relating plausible causal relations. Needless to say, as computing power skyrockets, increasingly larger data sets can be handled. So the social sciences are finally on the track to become useful. Or so you’d think if you’re a physicist. But interestingly, there is a large opposition to this trend of hardening the social sciences, and this opposition is particularly pronounced towards physicists who take their knowledge to work on data about social systems. You can see this opposition in the comment section to every popular science article on the topic. “Social engineering!” they will yell accusingly. It isn’t so surprising that social scientists themselves are unhappy because the boat of inadequate skills is sinking in the data sea and physics envy won’t keep it afloat. More interesting than the paddling social scientists is the public opposition to the idea that the behavior of social systems can be modeled, understood, and predicted. This opposition is an echo of the desperate belief in free will that ignores all evidence to the contrary. The desperation in both cases is based on unfounded fears, but unfortunately it results in a forward defense. And so the world is full with people who argue that they must have free will because they believe they have free will, the ultimate confirmation bias. And when it comes to social systems they’ll snort at the physicists “People are not elementary particles”. That worries me, worries me more than their clinging to the belief in free will, because the only way we can solve the problems that mankind faces today – the global problems in highly connected and multi-layered political, social, economic and ecological networks – is to better understand and learn how to improve the systems that govern our lives. That people are not elementary particles is not a particularly deep insight, but it collects several valid points of criticism: 1. People are too difficult. You can’t predict them. Humans are made of a many elementary particles and even though you don’t have to know the exact motion of every single one of these particles, a person still has an awful lot of degrees of freedom and needs to be described by a lot of parameters. 
That’s a complicated way of saying people can do more things than electrons, and it isn’t always clear exactly why they do what they do. That is correct of course, but this objection fails to take into account that not all possible courses of action are always relevant. If it was true that people have too many possible ways to act to gather any useful knowledge about their behavior our world would be entirely dysfunctional. Our societies work only because people are to a large degree predictable. If you go shopping you expect certain behaviors of other people. You expect them to be dressed, you expect them to walk forwards, you expect them to read labels and put things into a cart. There, I’ve made a prediction about human behavior! Yawn, you say, I could have told you that. Sure you could, because making predictions about other people’s behavior is pretty much what we do all day. Modeling social systems is just a scientific version of this. This objection that people are just too complicated is also weak because, as a matter of fact, humans can and have been modeled with quite simple systems. This is particularly effective in situations when intuitive reaction trumps conscious deliberation. Existing examples are traffic flows or the density of crowds when they have to pass through narrow passages. So, yes, people are difficult and they can do strange things, more things than any model can presently capture. But modeling a system is always an oversimplification. The only way to find out whether that simplification works is to actually test it with data. 2. People have free will. You cannot predict what they will do. To begin with it is highly questionable that people have free will. But leaving this aside for a moment, this objection confuses the predictability of individual behavior with the statistical trend of large numbers of people. Maybe you don’t feel like going to work tomorrow, but most people will go. Maybe you like to take walks in the pouring rain, but most people don’t. The existence of free will is in no conflict with discovering correlations between certain types of behavior or preferences in groups. It’s the same difference that doesn’t allow you to tell when your children will speak the first word or make the first step, but that almost certainly by the age of three they’ll have mastered it. 3. People can understand the models and this knowledge makes predictions useless. This objection always stuns me. If that was true, why then isn’t obesity cured by telling people it will remain a problem? Why are the highways still clogged at 5pm if I predict they will be clogged? Why will people drink more beer if it’s free even though they know it’s free to make them drink more? Because the fact that a prediction exists in most cases doesn’t constitute any good reason to change behavior. I can predict that you will almost certainly still be alive when you finish reading this blogpost because I know this prediction is exceedingly unlikely to make you want to prove it wrong. Yes, there are cases when people’s knowledge of a prediction changes their behavior – self-fulfilling prophecies are the best-known examples of this. But this is the exception rather than the rule. In an earlier blogpost, I referred to this as societal fixed points. These are configurations in which the backreaction of the model into the system does not change the prediction. The simplest example is a model whose predictions few people know or care about. 4. Effects don’t scale and don’t transfer. 
This objection is the most subtle one. It posits that the social sciences aren’t really sciences until you can do and reproduce the outcome of “experiments”, which may be designed or naturally occurring. The typical social experiment that lends itself to analysis will be in relatively small and well-controlled communities (say, testing the implementation of a new policy). But then you have to extrapolate from this how the results will be in larger and potentially very different communities. Increasing the size of the system might bring in entirely new effects that you didn’t even know of (doesn’t scale), and there are a lot of cultural variables that your experimental outcome might have depended on that you didn’t know of and thus cannot adjust for (doesn’t transfer). As a consequence, repeating the experiment elsewhere will not reproduce the outcome. Indeed, this is likely to happen and I think it is the major challenge in this type of research. For complex relations it will take a long time to identify the relevant environmental parameters and to learn how to account for their variation. The more parameters there are and the more relevant they are, the less the predictive value of a model will be. If there are too many parameters that have to be accounted for it basically means doing experiments is the only thing we can ever do. It seems plausible to me, even likely, that there are types of social behavior that fall into this category, and that will leave us with questions that we just cannot answer. However, whether or not a certain trend can or cannot be modeled we will only know by trying. We know that there are cases where it can be done. Geoffry West’s city theory I find a beautiful example where quite simple laws can be found in the midst of all these cultural and contextual differences. In summary. The social sciences will never be as “hard” as the natural sciences because there is much more variation among people than among particles and among cities than among molecules. But the social sciences have become harder already and there is no reason why this trend shouldn’t continue. I certainly hope it will continue because we need this knowledge to collectively solve the problems we have collectively created. Tuesday, April 01, 2014 Do we live in a hologram? Really?? Physicists fly high on the idea that our three-dimensional world is actually two-dimensional, that we live in a hologram, and that we’re all projections on the boundary of space. Or something like this you’ve probably read somewhere. It’s been all over the pop science news ever since string theorists sang the Maldacena. Two weeks ago Scientific American produced this “Instant Egghead” video which is a condensed mashup of all the articles I’ve endured on the topic: The second most confusing thing about this video is the hook “Many physicist now believe that reality is not, in fact, 3-dimensional.” To begin with, physicists haven’t believed this since Minkowski doomed space and time to “fade away into mere shadows”. Moyer in his video apparently refers only to space when he says “reality.” That’s forgiveable. I am more disturbed by the word “reality” that always creeps up in this context. Last year I was at a workshop that mixed physicists with philosophers. Inevitably, upon mentioning the gauge-gravity duality, some philosopher would ask, well, how many dimensions then do we really live in? Really? I have some explanations for you about what this really means. Q: Do we really live in a hologram? A: What is “real” anyway? 
Q: Having a bad day, yes?

Q: Well, do we?

Here's an example. Take a square made out of N² smaller squares and think of each of them as one bit. They're either black or white. There are 2^(N²) different patterns of black and white. In analogy, the square is a box full of matter in our universe and the colors are information about the particles in the inside. Now you want to encode the information about the pattern of that square on the boundary using pieces of the same length as the sidelength of the smaller squares. See image below for N=3. On the left is the division of the square and the boundary, on the right is one way these could encode information. There's 4N of these boundary pieces and 2^(4N) different patterns for them. If N is larger than 4, there are more ways the square can be colored than you have different patterns for the boundary. This means you cannot uniquely encode the information about the volume on the boundary.

Q: What then is the typical size of these pieces?

A: They're thought to be at the Planck scale, that's about 10^-33 cm. You should not however take the example with the box too seriously. That is just an illustration to explain the scaling of the number of different configurations with the system size. The theory on the surface looks entirely different than the theory in the volume.

A: The reflection on his glasses.

Q: Still having a bad day?

A: It's this time of the month.

A: You're so awesomely attentive.

Q: Any plans on getting a dog?

A: No, I have interesting conversations with my plants.
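Going back to the counting argument above, here is a quick numerical check (my own illustration, not from the post) comparing the number of volume patterns 2^(N²) with the number of boundary patterns 2^(4N).

```python
# Compare volume configurations (2**(N*N)) with boundary configurations (2**(4*N)).
for N in range(1, 8):
    volume, boundary = 2 ** (N * N), 2 ** (4 * N)
    print(f"N={N}: volume patterns = {volume}, boundary patterns = {boundary}, "
          f"encodable on boundary: {volume <= boundary}")
# The boundary runs out of patterns as soon as N*N > 4*N, i.e. for N > 4.
```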
Space and Time

Modern investigations into the fundamental nature of space and time have produced a number of paradoxes and puzzles that also might benefit from a careful examination of the information content in the problem. An information metaphysicist might throw new light on nonlocality, entanglement, spooky action-at-a-distance, the uncertainty principle, and even eliminate the conflict between special relativity and quantum mechanics!

Space and time form an immaterial coordinate system that allows us to keep track of material events, the positions and velocities of the fundamental particles that make up every body in the universe. As such, space and time are pure information, a set of numbers that we use to describe matter in motion. When Immanuel Kant described space and time as a priori forms of perception, he was right that scientists and philosophers impose the four-dimensional coordinate system on the material world. But he was wrong that the coordinate geometry must therefore be a flat Euclidean space. That is an empirical and contingent fact, to be discovered a posteriori.

Albert Einstein's theories of relativity have wrenched the metaphysics of space and time away from Kant's common-sense intuitive extrapolation from everyday experience. Einstein's special relativity has shown that coordinate values in space and time depend on (are relative to) the velocity of the reference frame being used. It raises doubts about whether there is any "preferred" or "absolute" frame of reference in the universe. And Einstein's theory of general relativity added new properties to space that depend on the overall distribution of matter. He showed that the motion of a material test particle follows a geodesic (the shortest distance between two points) through a curved space, where the curvature is produced by all the other matter in the universe. At a deep, metaphysical level the standard view of gravitational forces acting between all material particles has been replaced by geometry.
The abstract immaterial curvature of space-time has the power to influence the motion of a test particle. It is one thing to say that something as immaterial as space itself is just information about the world. It is another to give that immaterial information a kind of power over the material world, a power that depends entirely on the geometry of the environment. Space and Time in Quantum Physics For over thirty years, from his 1905 discovery of nonlocal phenomena in his light-quantum hypothesis as an explanation of the photoelectric effect, until 1935, when he showed that two particles could exhibit nonlocal effects between themselves that Erwin Schrödinger called entanglement, Einstein was concerned about abstract functions of spatial coordinates that seemed to have a strange power to control the motion of material particles, a power that seemed to him to travel faster than the speed of light, violating his principle of relativity that nothing travels faster than light. Einstein’s first insight into these abstract functions may have started in 1905, but he made it quite clear at the Salzburg Congress in 1909. How exactly does the classical intensity of a light wave control the number of light particles at each point, he wondered. The classical wave theory assumes that light from a point source travels off as a spherical wave in all directions. But in the photoelectric effect, Einstein showed that all of the energy in a light quantum is available at a single point to eject an electron. Does the energy spread out as a light wave in space, then somehow collect itself at one point, moving faster than light to do so? Einstein already in 1905 saw something nonlocal about the photon and saw that there is both a wave aspect and a particle aspect to electromagnetic radiation. In 1909 he emphasized the dualist aspect and described the wave-particle relationship more clearly than it is usually presented today, with all the current confusion about whether photons and electrons are waves or particles or both. Einstein greatly expanded the 1905 light-quantum hypothesis in his presentation at the Salzburg conference in September, 1909. He argued that the interaction of radiation and matter involves elementary processes that are not reversible, providing a deep insight into the irreversibility of natural processes. The irreversibility of matter-radiation interactions can put microscopic statistical mechanics on a firm quantum-mechanical basis. While incoming spherical waves of radiation are mathematically possible, they are not practically achievable and never seen in nature. If outgoing waves are the only ones possible, nature appears to be asymmetric in time. Einstein speculated that the continuous electromagnetic field might be made up of large numbers of discontinuous discrete light quanta - singular points in a field that superimpose collectively to create the wavelike behavior. The parts of a light wave with the greatest intensity would have the largest number of light particles. Einstein’s connection between the wave and the particle is that the wave indicates the probability of finding particles somewhere. The wave is not in any way a particle. It is an abstract field carrying information about the probability of photons in that part of space. Einstein called it a “ghost field” or “guiding field,” with a most amazing power over the particles. The probability amplitude of the wave function includes interference points where the probability of finding a particle is zero! 
Different null points appear when the second slit in a two-slit experiment is opened. With one slit open, particles are arriving at a given point. Opening a second slit should add more particles to that point in space. Instead it prevents any particles at all from arriving there. Light falling at a point from one slit plus more light from a second open slit results in no light! Such is the power of a “ghost field” wave function, carrying only information about probabilities. Abstract information can influence the motions of matter and energy! We can ask where this information comes from? Similar to the general relativity theory, we find that it is information determined by the distribution of matter nearby, namely the wall with the two slits in it and the location of the particle detection screen. These are the “boundary conditions” which, together with the known wavelength of the incoming monochromatic radiation, immediately tells us the probability of finding particles everywhere, including the null points. We can think of the waves above as standing waves. Einstein might have seen that like his general relativity, the possible paths of a quantum particle are also determined by the spatial geometry. The boundary conditions and the wavelength tell us everything about where particles will be found and not found. The locations of null points where particles are never found, are all static, given the geometry. They are not moving. The fact that water waves are moving, and his sense that the apparent waves might be matter or energy moving, led Einstein to suspect something is moving faster than light, violating his relativity principle. But if we see the waves as pure information, mere probabilities, we may resolve a problem that remains today as the greatest problem facing interpretations of quantum mechanics, the idea that special relativity and quantum mechanics cannot be reconciled. Let us see how an information metaphysics might resolve it. First we must understand why Einstein thought that something might be moving faster than the speed of light. Then we must show that values of the probability amplitude wave function are static in space. Nothing other than the particles is moving at any speed, let alone faster than light. Although he had been concerned about this for over two decades, it was at the fifth Solvay conference in 1927 that Einstein went to a blackboard and drew the essential problem shown in the above figure. He clearly says that the square of the wave function |ψ|2 gives us the probability of finding a particle somewhere on the screen. But Einstein oddly fears some kind of action-at-a-distance is preventing that probability from producing an action elsewhere. He says that “implies to my mind a contradiction with the postulate of relativity.” As Werner Heisenberg described Einstein’s 1927 concern, the experimental detection of the particle at one point exerts a kind of action (reduction of the wave packet) at a distant point. How does the tiny remnant of probability on the left side of the screen “collapse” to the position where the particle is found? The simple answer is that nothing really “collapses,” in the sense of an object like a balloon collapsing, because the probability waves and their null points do not move. There is just an instantaneous change in the probabilities, which happens whenever one possibility among many becomes actualized. That possibility becomes probability one. Other possibilities disappear instantly. 
Their probabilities become zero, but not because any probabilities move anywhere. So “collapse” of the wave function is that non-zero probabilities go to zero everywhere, except the point where the particle is found. Immaterial information has changed everywhere, but not “moved.” If nothing but information changes, if no matter or energy moves, then there is no violation of the principle of relativity, and no conflict between relativity and quantum mechanics! Nonlocality and Entanglement Since 1905 Einstein had puzzled over information at one place instantly providing information about a distant place. He dramatized this as “spooky action-at-a-distance” in the 1935 Einstein-Podolsky-Rosen thought experiment with two “entangled” particles. Einstein’s simplest such concern was the case of two electrons that are fired apart from a central point with equal velocities, starting at rest so the total momentum is zero. If we measure electron 1 at a certain point, then we immediately have the information that electron 2 is an equal distance away on the other side of the center. We have information or knowledge about the second electron’s position, not because we are measuring it directly. We are calculating its position using the principle of the conservation of momentum. This metaphysical information analysis will be our basis for explaining the EPR “paradox,” which is actually not a paradox, because there is really no action-at-a-distance in the sense of matter or energy or even information moving from one place to another! It might better be called “knowledge-at-a-distance.” Einstein and his colleagues hoped to show that quantum theory could not describe certain intuitive “elements of reality” and thus is incomplete. They said that, as far as it goes, quantum mechanics is correct, just not “complete.” Einstein was correct that quantum theory is “incomplete” relative to classical physics, which has twice as many dynamical variables that can be known with arbitrary precision. The “complete” information of classical physics gives us the instantaneous position and momentum of every particle in space and time, so we have complete path information. Quantum mechanics does not give us that path information. This does not mean the continuous path of the particle, as demanded by conservation laws, does not exist - only that quantum measurements to determine that path are not possible! For Niels Bohr and others to deny the incompleteness of quantum mechanics was to juggle words, which annoyed Einstein. Einstein was also correct that indeterminacy makes quantum theory an irreducibly discontinuous and statistical theory. Its predictions and highly accurate experimental results are statistical in that they depend on an ensemble of identical experiments, not on any individual experiment. Einstein wanted physics to be a continuous field theory like relativity, in which all physical variables are completely and locally determined by the four-dimensional field of space-time in his theories of relativity. In classical physics we can have and in principle know complete path information. In quantum physics we cannot. Visualizing Entanglement Erwin Schrödinger said that his “wave mechanics” provided more “visualizability” (Anschaulichkeit) than the “damned quantum jumps” of the Copenhagen school, as he called them. He was right. We can use his wave function to visualize EPR. But we must focus on the probability amplitude wave function of the "entangled" two-particle state. 
We must not attempt to describe the paths or locations of independent particles - at least until after some measurement has been made. We must also keep in mind the conservation laws that Einstein used to describe nonlocal behavior in the first place. Then we can see that the “mystery” of nonlocality for two particles is primarily the same mystery as the single-particle collapse of the wave function. But there is an extra mystery, one we might call an “enigma,” that results from the nonseparability of identical indistinguishable particles. Richard Feynman said there is only one mystery in quantum mechanics (the superposition of multiple states, the probabilities of collapse into one state, and the consequent statistical outcomes): “We choose to examine a phenomenon which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery. We cannot make the mystery go away by ‘explaining’ how it works. We will just tell you how it works. In telling you how it works we will have told you about the basic peculiarities of all quantum mechanics.” The additional enigma in two-particle nonlocality is that two indistinguishable and nonseparable particles appear simultaneously (in their original interaction frame) when their joint wave function “collapses.” There are two particles but only one wave function. In the time evolution of an entangled two-particle state according to the Schrödinger equation, we can visualize it - as we visualize the single-particle wave function - as collapsing when a measurement is made. Probabilities go to zero except at the particles’ two locations. Quantum theory describes the two electrons as in a superposition of spin-up states ( + ) and spin-down states ( − ), | ψ > = (1/√2) | + − > − (1/√2) | − + >. What this means is that when we square the probability amplitude there is a 1/2 chance electron 1 is spin up and electron 2 is spin down. It is equally probable that 1 is down and 2 is up. We simply cannot know. The discontinuous “quantum jump” is also described as the “reduction of the wave packet.” This is apt in the two-particle case, where the superposition of | + − > and | − + > states is “projected” or “reduced” by a measurement into one of these states, e.g., | + − >, and then further reduced - or “disentangled” - to the product of independent one-particle states | + > | − >. In the two-particle case (instead of just one particle making an appearance), when either particle is measured, we know instantly the now determinate properties of the other particle needed to satisfy the conservation laws, including its location equidistant from, but on the opposite side of, the source. But now we must also satisfy another conservation law, that of the total electron spin. It is another case of “knowledge-at-a-distance,” now about spin. If we measure electron 1 to have spin up, the conservation of electron spin requires that electron 2 have spin down, and instantly. Just as we do not know the electrons’ paths and positions before a measurement, we do not know their spins. But once we know one spin, we instantly know the other. And it is not that anything moved from one particle to “influence” the other. Can Metaphysics Disentangle the EPR Paradox? Yes, if the metaphysicist pays careful attention to the information available from moment to moment in space and time.
When the EPR experiment starts, the prepared state of the two particles includes the fact that the total linear momentum and the total angular momentum (including electron spin) are zero. This must remain true after the experiment to satisfy conservation laws. These laws are the consequence of extremely deep properties of nature that arise from simple considerations of symmetry. Physicists regard these laws as “cosmological principles.” For the metaphysicist, these laws are metaphysical truths that arise from considerations of symmetry alone. Physical laws do not depend on the absolute place and time of experiments, nor on their particular direction in space. Conservation of linear momentum depends on the translation invariance of physical systems, conservation of energy on the independence of physical laws from time, and conservation of angular momentum on the invariance of experiments under rotations. A metaphysicist can see that in his zeal to attack quantum mechanics, Einstein may have introduced an asymmetry into the EPR experiment that simply does not exist. Removing that asymmetry completely resolves any paradox and any conflict between quantum mechanics and special relativity. To see Einstein’s false asymmetry clearly, remember that a “collapse” of a wave function just changes probabilities everywhere into certainties. For a two-particle wave function, any measurement produces information about the particles’ two new locations instantaneously. The possibilities of being anywhere that would violate conservation principles vanish instantly. At the moment one electron is located, the other is also located. At that moment, one electron appears in a spacelike separation from the other electron and a causal relation is no longer possible between them. Before the measurement, we know nothing about their positions. Either might have been “here” and the other “there.” Immediately after the measurement, they are separated, we know where both are, and no communication between them is possible. Let’s focus on the asymmetry in Einstein’s narrative that isn’t there in the physics. It’s a great example of going beyond the logic and the language to the underlying information we need to solve both philosophical and physical problems. Just look at any introduction to the problem of entanglement and nonlocal behavior of two particles. It always starts with something like “We first measure particle 1 and then...” Here is Einstein in his 1949 autobiography... There is to be a system which at the time t of our observation consists of two partial systems S1 and S2, which at this time are spatially separated and (in the sense of the classical physics) are without significant reciprocity. [Such systems are not entangled!] All quantum theoreticians now agree upon the following: If I make a complete measurement of S1, I get from the results of the measurement and from ψ12 an entirely definite ψ-function ψ2 of the system S2... the real factual situation of the system S2 is independent of what is done with the system S1, which is spatially separated from the former. But two entangled particles are not separable before the measurement. No matter how far apart they may appear after the measurement, they are inseparable as long as they are described by a single two-particle wave function ψ12 that cannot be the product of two single-particle wave functions. As Erwin Schrödinger made clear to Einstein in late 1935, they are only separable after they have become disentangled, by some interaction with the environment, for example.
If ψ12 has decohered, it can then be represented by the product of independent ψ-functions ψ1 * ψ2, and then what Einstein says about independent systems S1 and S2 would be entirely correct. Schrödinger more than once told Einstein these facts about entanglement, but Einstein appears never to have absorbed them. That neither measurement can meaningfully be called the first one is seen by noting that a spaceship moving at high speed from the left sees particle 1 measured before particle 2, while a spaceship moving in the opposite direction reverses the time order of the measurements. These two views expose the false asymmetry of assuming that one measurement can be made prior to the other. In the special frame that is at rest with respect to the center of mass of the particles, the “two” measurements are simultaneous, because there is actually only one measurement “collapsing” the two-particle wave function. Any measurement collapsing the entangled two-particle wave function affects the two particles instantly and symmetrically. We hope that philosophers and metaphysicians who pride themselves as critical thinkers will be able to explain these information and symmetry implications to physicists who have been tied in knots by Einstein-Podolsky-Rosen and entanglement for so many decades.
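The no-signaling character of this single, symmetric collapse can be checked with elementary two-spin arithmetic. The following short Python sketch (an illustration added here, with an arbitrarily chosen basis ordering, not a calculation from the original discussion) uses the singlet state written earlier: finding spin 1 "up" makes spin 2 "down" certain, yet the statistics of spin 2 considered by itself are identical before and after, so no influence is transmitted.

import numpy as np

# Singlet |psi> = (|+ -> - |- +>)/sqrt(2) in the basis (++, +-, -+, --).
# Shows perfect anticorrelation together with unchanged one-particle statistics.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
probs = psi**2                                   # joint outcome probabilities

print("P(opposite spins) =", probs[1] + probs[2])           # -> 1.0

# Marginal statistics of spin 2 alone, before any measurement on spin 1:
print("spin 2 alone: P(up) =", probs[0] + probs[2],
      " P(down) =", probs[1] + probs[3])                     # -> 0.5 0.5

# If spin 1 is found "up", only the |+ -> component survives,
# so spin 2 is "down" with certainty -- knowledge at a distance, not a signal.
print("P(spin 2 down | spin 1 up) =", probs[1] / (probs[0] + probs[1]))  # -> 1.0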
Bose–Einstein condensate
From Wikipedia, the free encyclopedia
[Figure: schematic Bose–Einstein condensation versus temperature, with the corresponding energy diagram.]
A Bose–Einstein condensate (BEC) is a state of matter (also called the fifth state of matter) which is typically formed when a gas of bosons at low densities is cooled to temperatures very close to absolute zero (−273.15 °C). Under such conditions, a large fraction of bosons occupy the lowest quantum state, at which point microscopic quantum phenomena, particularly wavefunction interference, become apparent macroscopically. A BEC is formed by cooling a gas of extremely low density, about one-hundred-thousandth (1/100,000) the density of normal air, to ultra-low temperatures. This state was first predicted, generally, in 1924–1925 by Albert Einstein[1] following a paper written by Satyendra Nath Bose, although Bose came up with the pioneering paper on the new statistics.[2] Satyendra Nath Bose first sent a paper to Einstein on the quantum statistics of light quanta (now called photons), in which he derived Planck's quantum radiation law without any reference to classical physics. Einstein was impressed, translated the paper himself from English to German and submitted it for Bose to the Zeitschrift für Physik, which published it in 1924.[3] (The Einstein manuscript, once believed to be lost, was found in a library at Leiden University in 2005.[4]) Einstein then extended Bose's ideas to matter in two other papers.[5][6] The result of their efforts is the concept of a Bose gas, governed by Bose–Einstein statistics, which describes the statistical distribution of identical particles with integer spin, now called bosons. Bosons, which include the photon as well as atoms such as helium-4 (4He), are allowed to share a quantum state. In 1938, Fritz London proposed the BEC as a mechanism for superfluidity in 4He and superconductivity.[7][8]
On June 5, 1995, the first gaseous condensate was produced by Eric Cornell and Carl Wieman at the University of Colorado at Boulder NIST–JILA lab, in a gas of rubidium atoms cooled to 170 nanokelvins (nK).[9] Shortly thereafter, Wolfgang Ketterle at MIT realized a BEC in a gas of sodium atoms. For their achievements Cornell, Wieman, and Ketterle received the 2001 Nobel Prize in Physics.[10] These early studies founded the field of ultracold atoms, and hundreds of research groups around the world now routinely produce BECs of dilute atomic vapors in their labs. Since 1995, many other atomic species have been condensed, and BECs have also been realized using molecules, quasi-particles, and photons.[11]
Critical temperature
The transition to BEC occurs below a critical temperature, which for a uniform three-dimensional gas of non-interacting particles with no internal degrees of freedom is given by
T_c = \frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3},
where T_c is the critical temperature, n is the particle density, m is the mass per boson, \hbar is the reduced Planck constant, k_B is the Boltzmann constant, and \zeta is the Riemann zeta function (\zeta(3/2) \approx 2.6124).[12] Interactions shift the value, and the corrections can be calculated by mean-field theory. This formula is derived from finding the gas degeneracy in the Bose gas using Bose–Einstein statistics.
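As a quick numerical illustration of the critical-temperature formula above, the following Python sketch evaluates T_c for an assumed, typical trapped-gas density; the density value and the choice of rubidium-87 are illustrative assumptions, not numbers taken from this article.

import numpy as np
from scipy.constants import hbar, k as k_B, atomic_mass
from scipy.special import zeta

# T_c for an assumed uniform density of rubidium-87 atoms (illustrative values).
m = 87 * atomic_mass                 # boson mass (kg), assumed 87Rb
n = 1.0e20                           # assumed particle density (m^-3)
T_c = (2 * np.pi * hbar**2 / (m * k_B)) * (n / zeta(1.5)) ** (2.0 / 3.0)
print(f"T_c ~ {T_c * 1e9:.0f} nK")   # a few hundred nanokelvin for these numbers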
Ideal Bose gas
For an ideal Bose gas we have the equation of state
\frac{1}{v}=\frac{1}{\lambda^3}g_{3/2}(f)+\frac{1}{V}\frac{f}{1-f},
where v = V/N is the per-particle volume, \lambda the thermal de Broglie wavelength, f the fugacity and g_{3/2} the polylogarithm g_\alpha(f)=\sum_{n\geq 1}f^n/n^\alpha. The function g_{3/2}(f) grows monotonically in f on f \in [0, 1], the only values for which the series converges. Recognizing that the second term on the right-hand side contains the expression for the average occupation number of the fundamental state \langle n_0\rangle, the equation of state can be rewritten as
\frac{1}{v}=\frac{1}{\lambda^3}g_{3/2}(f)+\frac{\langle n_0\rangle}{V}.
Because the ground-state term must always be positive, and because g_{3/2}(f)\leq\zeta(3/2), a stronger condition is
\frac{1}{v}>\frac{\zeta(3/2)}{\lambda^3},
which defines the transition between a gas phase and a condensed phase. In the critical region it is therefore possible to define a critical temperature and a critical thermal wavelength,
\lambda_c^3 = v\,\zeta(3/2),\qquad T_c=\frac{2\pi\hbar^2}{m k_B\lambda_c^2},
recovering the value indicated in the previous section. The critical values are such that if T < T_c, or equivalently \lambda > \lambda_c, we are in the presence of a Bose–Einstein condensate. Understanding what happens to the fraction of particles in the fundamental level is crucial. Writing the equation of state for T < T_c, one obtains
\frac{N_0}{N}=1-\left(\frac{\lambda_c}{\lambda}\right)^3 = 1-\left(\frac{T}{T_c}\right)^{3/2}.
So as T approaches T_c the condensed fraction goes to zero, and as T approaches zero it goes to one: at temperatures near absolute zero, particles tend to condense into the fundamental state, the state with momentum p = 0.
Bose–Einstein non-interacting gas
Consider a collection of N non-interacting particles, which can each be in one of two quantum states, |0> and |1>. If the two states are equal in energy, each different configuration is equally likely. If we can tell which particle is which, there are 2^N different configurations, since each particle can be in |0> or |1> independently. In almost all of the configurations, about half the particles are in |0> and the other half in |1>. The balance is a statistical effect: the number of configurations is largest when the particles are divided equally. If the particles are indistinguishable, however, there are only N+1 different configurations. If there are K particles in state |1>, there are N − K particles in state |0>. Whether any particular particle is in state |0> or in state |1> cannot be determined, so each value of K determines a unique quantum state for the whole system. Suppose now that the energy of state |1> is slightly greater than the energy of state |0> by an amount E. At temperature T, a particle will have a lesser probability to be in state |1>, smaller by the factor exp(−E/T). In the distinguishable case, the particle distribution will be biased slightly towards state |0>. But in the indistinguishable case, since there is no statistical pressure toward equal numbers, the most likely outcome is that most of the particles will collapse into state |0>. In the distinguishable case, for large N, the fraction in state |0> can be computed: it is the same as flipping a coin with probability proportional to p = exp(−E/T) to land tails. When the corresponding Bose–Einstein integral is evaluated with factors of k_B and \hbar restored by dimensional analysis, it gives the critical temperature formula of the preceding section. Therefore, this integral defines the critical temperature and particle number corresponding to the conditions of negligible chemical potential, \mu \approx 0. In the Bose–Einstein distribution, \mu is actually still nonzero for BECs; however, it is less than the ground-state energy. Except when specifically talking about the ground state, \mu can be approximated for most energy or momentum states as \mu \approx 0.
Bogoliubov theory for weakly interacting gas
Nikolay Bogoliubov considered perturbations on the limit of dilute gas,[13] finding a finite pressure at zero temperature and positive chemical potential. This leads to corrections for the ground state. The Bogoliubov state has pressure (T = 0): P = g n^2/2, with g the contact interaction strength.
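The ideal-gas condensate fraction derived above, N_0/N = 1 − (T/T_c)^{3/2}, is easy to tabulate. The small Python sketch below does only that; the 400 nK critical temperature is an assumed example value, and no interaction corrections are included.

# Ground-state fraction of an ideal Bose gas below T_c (no interactions).
def condensate_fraction(T, T_c):
    """N0/N = 1 - (T/T_c)**1.5 below T_c, zero at or above T_c."""
    return 1.0 - (T / T_c) ** 1.5 if T < T_c else 0.0

T_c = 400e-9                         # assumed critical temperature: 400 nK
for T in (0.0, 100e-9, 200e-9, 300e-9, 400e-9):
    print(f"T = {T*1e9:5.0f} nK   N0/N = {condensate_fraction(T, T_c):.2f}")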
Gross–Pitaevskii equation
In some of the simplest cases, the state of condensed particles can be described with a nonlinear Schrödinger equation, also known as the Gross–Pitaevskii or Ginzburg–Landau equation. The validity of this approach is actually limited to the case of ultracold temperatures, which fits most alkali-atom experiments well. This approach originates from the assumption that the state of the BEC can be described by the unique wavefunction of the condensate \psi(\vec{r}). For a system of this nature, |\psi(\vec{r})|^2 is interpreted as the particle density, so the total number of atoms is N=\int d\vec{r}\,|\psi(\vec{r})|^2. Provided essentially all atoms are in the condensate and the bosons are treated in mean-field theory, the energy associated with the state \psi is
E=\int d\vec{r}\left[\frac{\hbar^2}{2m}|\nabla\psi|^2+V(\vec{r})|\psi|^2+\frac{1}{2}U_0|\psi|^4\right].
Minimizing this energy with respect to infinitesimal variations in \psi, and holding the number of atoms constant, yields the Gross–Pitaevskii equation (GPE), also a non-linear Schrödinger equation:
\mu\,\psi(\vec{r})=\left(-\frac{\hbar^2}{2m}\nabla^2+V(\vec{r})+U_0|\psi(\vec{r})|^2\right)\psi(\vec{r}),
where \mu is the chemical potential, m is the mass of the bosons, V(\vec{r}) is the external potential, and U_0 is representative of the inter-particle interactions. In the case of zero external potential, the dispersion law of interacting Bose–Einstein-condensed particles is given by the so-called Bogoliubov spectrum (for T = 0):
\varepsilon(p)=\sqrt{\frac{p^2}{2m}\left(\frac{p^2}{2m}+2U_0 n_0\right)},
with n_0 the condensate density. The Gross–Pitaevskii equation (GPE) provides a relatively good description of the behavior of atomic BECs. However, the GPE does not take into account the temperature dependence of dynamical variables, and is therefore valid only for temperatures near zero. It is not applicable, for example, to the condensates of excitons, magnons and photons, where the critical temperature is comparable to room temperature.
Numerical Solution
The Gross–Pitaevskii equation is a partial differential equation in space and time variables. Usually it does not have an analytic solution, and different numerical methods, such as split-step Crank–Nicolson[14] and Fourier spectral[15] methods, are used for its solution. There are different Fortran and C programs for its solution for contact interaction[16][17] and long-range dipolar interaction[18] which can be freely used.
Weaknesses of the Gross–Pitaevskii model
The Gross–Pitaevskii model of BEC is a physical approximation valid for certain classes of BECs. By construction, the GPE uses the following simplifications: it assumes that interactions between condensate particles are of the contact two-body type and also neglects anomalous contributions to self-energy.[19] These assumptions are suitable mostly for dilute three-dimensional condensates. If one relaxes any of these assumptions, the equation for the condensate wavefunction acquires terms containing higher-order powers of the wavefunction. Moreover, for some physical systems the number of such terms turns out to be infinite, so that the equation becomes essentially non-polynomial. Examples where this can happen are the Bose–Fermi composite condensates,[20][21][22][23] effectively lower-dimensional condensates,[24] and dense condensates and superfluid clusters and droplets.[25]
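As a concrete illustration of the split-step (Fourier spectral) approach mentioned in the Numerical Solution section, the following minimal Python sketch integrates the time-dependent GPE on a 1D grid in dimensionless units (hbar = m = 1). It is an illustrative toy, not one of the published Fortran/C programs cited above, and the harmonic trap and interaction strength are assumed example parameters.

import numpy as np

# 1D time-dependent GPE, i d(psi)/dt = [-(1/2) d2/dx2 + V(x) + g|psi|^2] psi,
# integrated with the Strang split-step Fourier method in units hbar = m = 1.
N, L = 512, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)        # angular wavenumbers
V = 0.5 * x**2                                    # assumed harmonic trap
g = 1.0                                           # assumed interaction strength
dt, steps = 1e-3, 5000

psi = np.exp(-x**2 / 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / N))  # normalize to unit norm

for _ in range(steps):
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi)**2))              # half potential step
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))    # full kinetic step
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi)**2))              # half potential step

print("norm after evolution:", np.sum(np.abs(psi)**2) * (L / N))      # stays ~1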
Experimental observation
Dilute atomic gases
A group led by Randall Hulet at Rice University announced a condensate of lithium atoms only one month following the JILA work.[27] Lithium has attractive interactions, causing the condensate to be unstable and collapse for all but a few atoms. Hulet's team subsequently showed that the condensate could be stabilized by confinement quantum pressure for up to about 1000 atoms. Various isotopes have since been condensed.
Bose–Einstein condensation also applies to quasiparticles in solids. Magnons, excitons, and polaritons have integer spin, which means they are bosons that can form condensates.[29] Condensation of magnons was demonstrated in the antiferromagnet TlCuCl3,[30] at temperatures as great as 14 K. The high transition temperature (relative to atomic gases) is due to the magnons' small mass (near that of an electron) and the greater achievable density. In 2006, condensation in a ferromagnetic yttrium-iron-garnet thin film was seen even at room temperature,[31][32] with optical pumping. Excitons, electron-hole pairs, were predicted to condense at low temperature and high density by Böer et al. in 1961. Bilayer system experiments first demonstrated condensation in 2003, by Hall voltage disappearance. Fast optical exciton creation was used to form condensates in sub-kelvin Cu2O from 2005 on. Polariton condensation was first detected for exciton-polaritons in a quantum well microcavity kept at 5 K.[33]
Peculiar properties
As in many other systems, vortices can exist in BECs. These can be created, for example, by 'stirring' the condensate with lasers, or by rotating the confining trap. The vortex created will be a quantum vortex; such structures are allowed for by the non-linear term in the GPE. As the vortices must have quantized angular momentum, the wavefunction may have the form
\psi(\rho,z,\theta)=\phi(\rho,z)\,e^{i\ell\theta},
where \rho, z and \theta are the cylindrical coordinates and \ell is the angular number. This form is particularly likely for an axially symmetric (for instance, harmonic) confining potential, which is commonly used; the notion is easily generalized. To determine \phi(\rho,z), the energy of the state must be minimized, subject to the constraint that the wavefunction has the form above. This is usually done computationally; in a uniform medium, however, there is an analytic approximation, expressed in terms of n, the density far from the vortex, and \xi, the healing length of the condensate, that demonstrates the correct behavior and is a good approximation. A singly charged vortex (\ell = 1) is in the ground state; to obtain an energy which is well defined it is necessary to include a boundary at some farthest distance b from the vortex. For multiply charged vortices (\ell > 1) the energy is approximated by an expression growing as \ell^2, which is greater than that of singly charged vortices, indicating that these multiply charged vortices are unstable to decay. Research has, however, indicated that they are metastable states, so they may have relatively long lifetimes.
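For the vortex profile discussed above, a common variational ansatz can be used to sketch how the density heals from zero on the vortex axis to its bulk value over a few healing lengths. The closed form f(r) = r/sqrt(r^2 + 2 xi^2) used below is a standard textbook approximation assumed here for illustration, and the density and healing length are example values, not numbers quoted in the article.

import numpy as np

n = 1.0e20           # assumed bulk condensate density far from the vortex (m^-3)
xi = 0.3e-6          # assumed healing length (m)

def vortex_density(r):
    """Density profile n*f(r)^2 with the ansatz f(r) = r/sqrt(r^2 + 2 xi^2)."""
    f = r / np.sqrt(r**2 + 2.0 * xi**2)
    return n * f**2

for r in (0.0, xi, 2 * xi, 5 * xi, 20 * xi):
    print(f"r = {r / xi:4.1f} xi   density/n = {vortex_density(r) / n:.3f}")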
Attractive interactions
When the JILA team raised the magnetic field strength further, the condensate suddenly reverted to attraction, imploded and shrank beyond detection, then exploded, expelling about two-thirds of its 10,000 atoms. About half of the atoms in the condensate seemed to have disappeared from the experiment altogether, not seen in the cold remnant or expanding gas cloud.[26] Carl Wieman explained that under current atomic theory this characteristic of the Bose–Einstein condensate could not be explained, because the energy state of an atom near absolute zero should not be enough to cause an implosion; however, subsequent mean-field theories have been proposed to explain it. Most likely the atoms formed molecules of two rubidium atoms;[35] the energy gained by this bond imparts velocity sufficient to leave the trap without being detected. The process of creation of a molecular Bose condensate during the sweep of the magnetic field through the Feshbach resonance, as well as the reverse process, are described by an exactly solvable model that can explain many experimental observations.[36]
Current research
Unsolved problem in physics: How do we rigorously prove the existence of Bose–Einstein condensates for general interacting systems?
Compared to more commonly encountered states of matter, Bose–Einstein condensates are extremely fragile.[37] The slightest interaction with the external environment can be enough to warm them past the condensation threshold, eliminating their interesting properties and forming a normal gas. Nevertheless, they have proven useful in exploring a wide range of questions in fundamental physics, and the years since the initial discoveries by the JILA and MIT groups have seen an increase in experimental and theoretical activity. Examples include experiments that have demonstrated interference between condensates due to wave–particle duality,[38] the study of superfluidity and quantized vortices, the creation of bright matter-wave solitons from Bose condensates confined to one dimension, and the slowing of light pulses to very low speeds using electromagnetically induced transparency.[39] Vortices in Bose–Einstein condensates are also currently the subject of analogue-gravity research, studying the possibility of modeling black holes and their related phenomena in such environments in the laboratory. Experimenters have also realized "optical lattices", where the interference pattern from overlapping lasers provides a periodic potential. These have been used to explore the transition between a superfluid and a Mott insulator,[40] and may be useful in studying Bose–Einstein condensation in fewer than three dimensions, for example the Tonks–Girardeau gas. Further, the sensitivity of the pinning transition of strongly interacting bosons confined in a shallow one-dimensional optical lattice, originally observed by Haller,[41] has been explored by tweaking the primary optical lattice with a secondary, weaker one.[42] For the resulting weak bichromatic optical lattice, it has been found that the pinning transition is robust against the introduction of the weaker secondary lattice. Studies of vortices in nonuniform Bose–Einstein condensates,[43] as well as excitations of these systems by the application of moving repulsive or attractive obstacles, have also been undertaken.[44][45] Within this context, the conditions for order and chaos in the dynamics of a trapped Bose–Einstein condensate have been explored by the application of moving blue- and red-detuned laser beams via the time-dependent Gross–Pitaevskii equation.[46] Bose–Einstein condensates composed of a wide range of isotopes have been produced.[47]
In 1999, Danish physicist Lene Hau led a team from Harvard University which slowed a beam of light to about 17 meters per second using a superfluid.[49] Hau and her associates have since made a group of condensate atoms recoil from a light pulse such that they recorded the light's phase and amplitude, recovered by a second nearby condensate, in what they term "slow-light-mediated atomic matter-wave amplification" using Bose–Einstein condensates: details are discussed in Nature.[50] Another current research interest is the creation of Bose–Einstein condensates in microgravity in order to use their properties for high-precision atom interferometry. The first demonstration of a BEC in weightlessness was achieved in 2008 at a drop tower in Bremen, Germany by a consortium of researchers led by Ernst M.
Rasel from Leibniz University Hannover.[51] The same team demonstrated in 2017 the first creation of a Bose–Einstein condensate in space,[52] and it is also the subject of two upcoming experiments on the International Space Station.[53][54] In 1970, BECs were proposed by Emmanuel David Tannenbaum for anti-stealth technology.[56]
Dark matter
P. Sikivie and Q. Yang showed that cold dark matter axions form a Bose–Einstein condensate by thermalisation because of gravitational self-interactions.[57] Axions have not yet been confirmed to exist. However, the important search for them has been greatly enhanced with the completion of upgrades to the Axion Dark Matter Experiment (ADMX) at the University of Washington in early 2018. In 2014 a potential dibaryon was detected at the Jülich Research Center at about 2380 MeV. The center claimed that the measurements confirm results from 2011, via a more replicable method.[58][59] The particle existed for 10^−23 seconds and was named d*(2380).[60] This particle is hypothesized to consist of three up and three down quarks.[61] It is theorized that groups of d-stars could form Bose–Einstein condensates due to prevailing low temperatures in the early universe, and that BECs made of such hexaquarks with trapped electrons could behave like dark matter.[62][63][64]
Isotopes
The effect has mainly been observed on alkali atoms, which have nuclear properties particularly suitable for working with traps. As of 2012, using ultra-low temperatures of 10^−7 K or below, Bose–Einstein condensates had been obtained for a multitude of isotopes, mainly of alkali metal, alkaline earth metal, and lanthanide atoms (7Li, 23Na, 39K, 41K, 85Rb, 87Rb, 133Cs, 52Cr, 40Ca, 84Sr, 86Sr, 88Sr, 174Yb, 164Dy, and 168Er). Research was finally successful in hydrogen with the aid of the newly developed method of 'evaporative cooling'.[65] In contrast, the superfluid state of 4He is not a good example of a dilute-gas condensate, because the interactions between the helium atoms are strong and only a small fraction of the atoms occupy the lowest state even near absolute zero.
References
1. ^ Einstein, A (10 July 1924). "Quantentheorie des einatomigen idealen Gases" (PDF). Königliche Preußische Akademie der Wissenschaften. Sitzungsberichte: 261–267. 3. ^ S. N. Bose (1924). "Plancks Gesetz und Lichtquantenhypothese". Zeitschrift für Physik. 26 (1): 178–181. Bibcode:1924ZPhy...26..178B. doi:10.1007/BF01327326. 5. ^ A. Einstein (1925). "Quantentheorie des einatomigen idealen Gases". Sitzungsberichte der Preussischen Akademie der Wissenschaften. 1: 3. 6. ^ Clark, Ronald W. (1971). Einstein: The Life and Times. Avon Books. pp. 408–409. ISBN 978-0-380-01159-9. 7. ^ F. London (1938). "The λ-Phenomenon of liquid Helium and the Bose–Einstein degeneracy". Nature. 141 (3571): 643–644. Bibcode:1938Natur.141..643L. doi:10.1038/141643a0. 9. ^ Bose–Einstein Condensate: A New Form of Matter, NIST, 9 October 2001. 11. ^ J. Klaers; J. Schmitt; F. Vewinger & M. Weitz (2010). "Bose–Einstein condensation of photons in an optical microcavity". Nature. 468 (7323): 545–548. arXiv:1007.4088. Bibcode:2010Natur.468..545K. doi:10.1038/nature09567. PMID 21107426. 12. ^ (sequence A078434 in the OEIS) 13. ^ N. N. Bogoliubov (1947). "On the theory of superfluidity". J. Phys. (USSR). 11: 23. 14. ^ P. Muruganandam and S. K. Adhikari (2009). "Fortran Programs for the time-dependent Gross-Pitaevskii equation in a fully anisotropic trap". Comput. Phys. Commun. 180 (3): 1888–1912. arXiv:0904.3131. Bibcode:2009CoPhC.180.1888M. doi:10.1016/j.cpc.2009.04.015. 15. ^ P. Muruganandam and S. K. Adhikari (2003). "Bose-Einstein condensation dynamics in three dimensions by the pseudospectral and finite-difference methods". J. Phys. B. 36 (12): 2501–2514.
arXiv:cond-mat/0210177. Bibcode:2003JPhB...36.2501M. doi:10.1088/0953-4075/36/12/310. 16. ^ D. Vudragovic; et al. (2012). "C Programs for the time-dependent Gross-Pitaevskii equation in a fully anisotropic trap". Comput. Phys. Commun. 183 (9): 2021–2025. arXiv:1206.1361. Bibcode:2012CoPhC.183.2021V. doi:10.1016/j.cpc.2012.03.022. 17. ^ L. E. Young-S.; et al. (2016). "OpenMP Fortran and C Programs for the time-dependent Gross-Pitaevskii equation in a fully anisotropic trap". Comput. Phys. Commun. 204 (9): 209–213. arXiv:1605.03958. Bibcode:2016CoPhC.204..209Y. doi:10.1016/j.cpc.2016.03.015. 18. ^ K. Kishor Kumar; et al. (2015). "Fortran and C Programs for the time-dependent dipolar Gross-Pitaevskii equation in a fully anisotropic trap". Comput. Phys. Commun. 195: 117–128. arXiv:1506.03283. Bibcode:2015CoPhC.195..117K. doi:10.1016/j.cpc.2015.03.024. 19. ^ Beliaev, S. T. Zh. Eksp. Teor. Fiz. 34, 417–432 (1958) [Soviet Phys. JETP 7, 289 (1958)]; ibid. 34, 433–446 [Soviet Phys. JETP 7, 299 (1958)]. 20. ^ M. Schick (1971). "Two-dimensional system of hard-core bosons". Phys. Rev. A. 3 (3): 1067–1073. Bibcode:1971PhRvA...3.1067S. doi:10.1103/PhysRevA.3.1067. 21. ^ E. Kolomeisky; J. Straley (1992). "Renormalization-group analysis of the ground-state properties of dilute Bose systems in d spatial dimensions". Phys. Rev. B. 46 (18): 11749–11756. Bibcode:1992PhRvB..4611749K. doi:10.1103/PhysRevB.46.11749. PMID 10003067. 22. ^ E. B. Kolomeisky; T. J. Newman; J. P. Straley & X. Qi (2000). "Low-dimensional Bose liquids: Beyond the Gross-Pitaevskii approximation". Phys. Rev. Lett. 85 (6): 1146–1149. arXiv:cond-mat/0002282. Bibcode:2000PhRvL..85.1146K. doi:10.1103/PhysRevLett.85.1146. PMID 10991498. 23. ^ S. Chui; V. Ryzhov (2004). "Collapse transition in mixtures of bosons and fermions". Phys. Rev. A. 69 (4): 043607. arXiv:cond-mat/0211411. Bibcode:2004PhRvA..69d3607C. doi:10.1103/PhysRevA.69.043607. 24. ^ L. Salasnich; A. Parola & L. Reatto (2002). "Effective wave equations for the dynamics of cigar-shaped and disk-shaped Bose condensates". Phys. Rev. A. 65 (4): 043614. arXiv:cond-mat/0201395. Bibcode:2002PhRvA..65d3614S. doi:10.1103/PhysRevA.65.043614. 27. ^ C. C. Bradley; C. A. Sackett; J. J. Tollett & R. G. Hulet (1995). "Evidence of Bose–Einstein condensation in an atomic gas with attractive interactions" (PDF). Phys. Rev. Lett. 75 (9): 1687–1690. Bibcode:1995PhRvL..75.1687B. doi:10.1103/PhysRevLett.75.1687. PMID 10060366. 28. ^ Baierlein, Ralph (1999). Thermal Physics. Cambridge University Press. ISBN 978-0-521-65838-6. 29. ^ Monique Combescot and Shiue-Yuan Shiau, "Excitons and Cooper Pairs: Two Composite Bosons in Many-Body Physics", Oxford University Press (ISBN 9780198753735) 30. ^ T. Nikuni; M. Oshikawa; A. Oosawa & H. Tanaka (1999). "Bose–Einstein condensation of dilute magnons in TlCuCl3". Phys. Rev. Lett. 84 (25): 5868–71. arXiv:cond-mat/9908118. Bibcode:2000PhRvL..84.5868N. doi:10.1103/PhysRevLett.84.5868. PMID 10991075. 31. ^ S. O. Demokritov; V. E. Demidov; O. Dzyapko; G. A. Melkov; A. A. Serga; B. Hillebrands & A. N. Slavin (2006). "Bose–Einstein condensation of quasi-equilibrium magnons at room temperature under pumping". Nature. 443 (7110): 430–433. Bibcode:2006Natur.443..430D. doi:10.1038/nature05117. PMID 17006509. 33. ^ Kasprzak J, Richard M, Kundermann S, Baas A, Jeambrun P, Keeling JM, Marchetti FM, Szymańska MH, André R, Staehli JL, Savona V, Littlewood PB, Deveaud B, Dang (28 September 2006). "Bose–Einstein condensation of exciton polaritons". Nature. 443 (7110): 409–414. 
Bibcode:2006Natur.443..409K. doi:10.1038/nature05131. PMID 17006506.CS1 maint: multiple names: authors list (link) 34. ^ C. Becker; S. Stellmer; P. Soltan-Panahi; S. Dörscher; M. Baumert; E.-M. Richter; J. Kronjäger; K. Bongs & K. Sengstock (2008). "Oscillations and interactions of dark and dark–bright solitons in Bose–Einstein condensates". Nature Physics. 4 (6): 496–501. arXiv:0804.0544. Bibcode:2008NatPh...4..496B. doi:10.1038/nphys962. 35. ^ M. H. P. M. van Putten (2010). "Pair condensates produced in bosenovae". Phys. Lett. A. 374 (33): 3346–3347. Bibcode:2010PhLA..374.3346V. doi:10.1016/j.physleta.2010.06.020. 36. ^ C. Sun; N. A. Sinitsyn (2016). "Landau-Zener extension of the Tavis-Cummings model: Structure of the solution". Phys. Rev. A. 94 (3): 033808. arXiv:1606.08430. Bibcode:2016PhRvA..94c3808S. doi:10.1103/PhysRevA.94.033808. 37. ^ "How to watch a Bose–Einstein condensate for a very long time -". Retrieved 22 January 2018. 38. ^ Gorlitz, Axel. "Interference of Condensates (BEC@MIT)". Archived from the original on 4 March 2016. Retrieved 13 October 2009. 39. ^ Z. Dutton; N. S. Ginsberg; C. Slowe & L. Vestergaard Hau (2004). "The art of taming light: ultra-slow and stopped light". Europhysics News. 35 (2): 33–39. Bibcode:2004ENews..35...33D. doi:10.1051/epn:2004201. 41. ^ Elmar Haller; Russell Hart; Manfred J. Mark; Johann G. Danzl; Lukas Reichsoellner; Mattias Gustavsson; Marcello Dalmonte; Guido Pupillo; Hanns-Christoph Naegerl (2010). "Pinning quantum phase transition for a Luttinger liquid of strongly interacting bosons". Nature Letters. 466 (7306): 597–600. arXiv:1004.3168. doi:10.1038/nature09259. PMID 20671704. 42. ^ Asaad R. Sakhel (2016). "Properties of bosons in a one-dimensional bichromatic optical lattice in the regime of the pinning transition: A worm- algorithm Monte Carlo study". Physical Review A. 94 (3): 033622. arXiv:1511.00745. doi:10.1103/PhysRevA.94.033622. 43. ^ Roger R. Sakhel; Asaad R. Sakhel (2016). "Elements of Vortex-Dipole Dynamics in a Nonuniform Bose–Einstein Condensate". Journal of Low Temperature Physics. 184 (5–6): 1092–1113. doi:10.1007/s10909-016-1636-3. 44. ^ Roger R. Sakhel; Asaad R. Sakhel; Humam B. Ghassib (2011). "Self-interfering matter-wave patterns generated by a moving laser obstacle in a two-dimensional Bose-Einstein condensate inside a power trap cut off by box potential boundaries". Physical Review A. 84 (3): 033634. arXiv:1107.0369. doi:10.1103/PhysRevA.84.033634. 45. ^ Roger R. Sakhel; Asaad R. Sakhel; Humam B. Ghassib (2013). "Nonequilibrium Dynamics of a Bose-Einstein Condensate Excited by a Red Laser Inside a Power-Law Trap with Hard Walls". Journal of Low Temperature Physics. 173 (3–4): 177–206. doi:10.1007/s10909-013-0894-6. 46. ^ Roger R. Sakhel; Asaad R. Sakhel; Humam B. Ghassib; Antun Balaz (2016). "Conditions for order and chaos in the dynamics of a trapped Bose-Einstein condensate in coordinate and energy space". European Physical Journal D. 70 (3): 66. arXiv:1604.01349. doi:10.1140/epjd/e2016-60085-2. 47. ^ "Ten of the best for BEC". 1 June 2005. 48. ^ "Fermionic condensate makes its debut". 28 January 2004. 50. ^ N. S. Ginsberg; S. R. Garner & L. V. Hau (2007). "Coherent control of optical information with matter wave dynamics". Nature. 445 (7128): 623–626. doi:10.1038/nature05493. PMID 17287804. 51. ^ Zoest, T. van; Gaaloul, N.; Singh, Y.; Ahlers, H.; Herr, W.; Seidel, S. T.; Ertmer, W.; Rasel, E.; Eckart, M. (18 June 2010). "Bose-Einstein Condensation in Microgravity". Science. 328 (5985): 1540–1543. 
Bibcode:2010Sci...328.1540V. doi:10.1126/science.1189164. ISSN 0036-8075. PMID 20558713. 52. ^ DLR. "MAIUS 1 – First Bose-Einstein condensate generated in space". DLR Portal. Retrieved 23 May 2017. 53. ^ Laboratory, Jet Propulsion. "Cold Atom Laboratory". Retrieved 23 May 2017. 54. ^ "2017 NASA Fundamental Physics Workshop | Planetary News". Retrieved 23 May 2017. 55. ^ P. Weiss (12 February 2000). "Atomtronics may be the new electronics". Science News Online. 157 (7): 104. doi:10.2307/4012185. JSTOR 4012185. Retrieved 12 February 2011. 57. ^ P. Sikivie and Q. Yang (2009). Phys. Rev. Lett. 103: 111103. 58. ^ "Forschungszentrum Jülich press release". 59. ^ "Massive news in the micro-world: a hexaquark particle". 60. ^ P. Adlarson; et al. (2014). "Evidence for a New Resonance from Polarized Neutron-Proton Scattering". Physical Review Letters. 112 (2): 202301. arXiv:1402.6844. Bibcode:2014PhRvL.112t2301A. doi:10.1103/PhysRevLett.112.202301. 61. ^ M. Bashkanov (2020). "A new possibility for light-quark dark matter". Journal of Physics G. 47 (3). 62. ^ "Did German physicists accidentally discover dark matter in 2014?". 63. ^ "Physicists Think We Might Have a New, Exciting Dark Matter Candidate". 64. ^ "Did this newfound particle form the universe's dark matter?". 65. ^ Dale G. Fried; Thomas C. Killian; Lorenz Willmann; David Landhuis; Stephen C. Moss; Daniel Kleppner & Thomas J. Greytak (1998). "Bose–Einstein Condensation of Atomic Hydrogen". Phys. Rev. Lett. 81 (18): 3811. arXiv:physics/9809017. Bibcode:1998PhRvL..81.3811F. doi:10.1103/PhysRevLett.81.3811. 66. ^ "Bose–Einstein Condensation in Alkali Gases" (PDF). The Royal Swedish Academy of Sciences. 2001. Retrieved 17 April 2017.
Spatial and internal control of atomic ensembles with radiofrequency and microwave driving
This week's AMOPP seminar was given by Dr. German Sinuco-Leon from the University of Sussex on the topic of “Spatial and internal control of atomic ensembles with radiofrequency and microwave driving”. The abstract for this talk can be found below.
Spatial and internal control of atomic ensembles with radiofrequency and microwave driving
The ability to apply well-controlled perturbations to quantum systems is essential to modern methodologies for studying their properties (e.g. in high-precision spectroscopy) and to developing quantum technologies (e.g. atomic clocks and quantum processors). In most of the experimental platforms available today, such perturbations arise from the interaction of a quantum system with electromagnetic radiation, which creates harmonically oscillating couplings between the states of the system. Within this context, in this talk I will describe our recent studies of the use of low-frequency electromagnetic radiation to control the external and internal degrees of freedom of ultracold atomic ensembles [1,2]. I will outline the relation of this problem to Floquet engineering and the more general issue of describing the dynamics of driven quantum systems. Finally, I will explain the challenges of describing the quantum dynamics of driven systems and highlight the need to develop new conceptual and mathematical tools to identify universal characteristics and limitations of their dynamics.
[1] G. A. Sinuco-Leon, B. M. Garraway, H. Mas, S. Pandey, G. Vasilakis, V. Bolpasi, W. von Klitzing, B. Foxon, S. Jammi, K. Poulios, T. Fernholz, Microwave spectroscopy of radio-frequency dressed alkali atoms, Physical Review A, accepted (2019). [ArXiv:1904.12073].
[2] G. Sinuco-León and B.M. Garraway, Addressed qubit manipulation in radio-frequency dressed lattices, New Journal of Physics 18, 035009 (2016).
Special Report, ICPEAC 2015
One of the conferences that we attended during the summer (ICPEAC 2015) had the necessary set-up to film one of our talks about our recent Rydberg paper; this was summarised in a published IOP abstract. You can watch our talk along with the rest of the lectures on ICPEAC's YouTube channel:
ANTIMATTER: who ordered that?
The existence of antimatter became known following Dirac's formulation of relativistic quantum mechanics, but this incredible development was not anticipated. These days conjuring up a new particle or field (or perhaps even new dimensions) to explain unknown observations is pretty much standard operating procedure, but it was not always so. The famous “who ordered that” statement of I. I. Rabi was made in reference to the discovery of the muon, a heavy electron whose existence seemed a bit unnecessary at the time; in fact it was the harbinger of a subatomic zoo.
The story of Dirac’s relativistic reformulation of the Schrödinger wave equation, and the subsequent prediction of antiparticles, is particularly appealing; the story is nicely explained in a recent biography of Dirac (Farmelo 2009). As with Einstein’s theory of relativity, Dirac’s relativistic quantum mechanics seemed to spring into existence without any experimental imperative. That is to say, nobody ordered it! The reality, of course, is a good deal more complicated and nuanced, but it would not be inaccurate to suggest that Dirac was driven more by mathematical aesthetics than experimental anomalies when he developed his theory. The motivation for any modification of the Schrödinger equation is that it does not describe the energy of a free particle in a way that is consistent with the special theory of relativity. At first sight it might seem like a trivial matter to simply re-write the equation to include the energy in the necessary form, but things are not so simple. In order to illustrate why this is so it is instructive to briefly consider the Dirac equation, and how it was developed. For explicit mathematical details of the formulation and solution of the Dirac equation see, for example, Griffiths 2008. The basic form of the Schrödinger wave equation (SWE) is (-\frac{\hbar^2}{2m}\nabla^2+V)\psi = i\hbar \frac{\partial}{\partial t}\psi.                                                    (1) The fundamental departure from classical physics embodied in eq (1) is the quantity \psi , which represents not a particle but a wavefunction. That is, the SWE describes how this wavefunction (whatever it may be) will behave. This is not the same thing at all as describing, for example, the trajectory of a particle. Exactly what a wavefunction is remains to this day rather mysterious. For many years it was thought that the wavefunction was simply a handy mathematical tool that could be used to describe atoms and molecules even in the absence of a fully complete theory (e.g., Bohm 1952). This idea, originally suggested by de Broglie in his “pilot wave” description, has been disproved by numerous ingenious experiments (e.g., Aspect et al., 1982). It now seems unavoidable to conclude that wavefunctions represent actual descriptions of reality, and that the “weirdness” of the quantum world is in fact an intrinsic part of that reality, with the concept of “particle” being only an approximation to that reality, only appropriate to a coarse-grained view of the world. Nevertheless, by following the rules that have been developed regarding the application of the SWE, and quantum physics in general, it is possible to describe experimental observations with great accuracy. This is the primary reason why many physicists have, for over 80 years, eschewed the philosophical difficulties associated with wavefunctions and the like, and embraced the sheer predictive power of the theory. We will not discuss quantum mechanics in any detail here; there are many excellent books on the subject at all levels (e.g., Dirac 1934, Shankar 1994, Schiff 1968). In classical terms the total energy of a particle E can be described simply as the sum of the kinetic energy (KE) and the potential energy (PE) as KE+PE=\frac{p^2}{2m}+V=E                                                 (2) where p = mv represents the momentum of a particle of mass m and velocity v. In quantum theory such quantities are described not by simple formulae, but rather by operators that act on the wavefunction. 
We describe momentum via the operator -i \hbar\nabla and energy by i\hbar \partial / \partial t and so on. The first term of eq (1) represents the total energy of the system, and is also known as the Hamiltonian, H. Thus, the SWE may be written as H\psi=i\hbar\frac{\partial\psi}{\partial t}=E\psi                                                              (3) The reason why eq (3) is non-relativistic is that the energy-momentum relation in the Hamiltonian is described in the well-known non-relativistic form. As we know from Einstein, however, the total energy of a free particle does not reside only in its kinetic energy; there is also the rest mass energy, embodied in what may be the most famous equation in all of physics: E=mc^2.                                                                    (4) This equation tells us that a particle of mass m has an equivalent energy E, with c^2 being a rather large number, illustrating that even a small amount of mass (m) can, in principle, be converted into a very large amount of energy (E). Despite being so famous as to qualify as a cultural icon, the equation E = mc^2 is, at best, incomplete. In fact the total energy of a free particle (i.e., V = 0) as prescribed by the theory of relativity is given by E^2=m^2c^4 +p^2c^2.                                                        (5) Clearly this will reduce to E = mc^2 for a particle at rest (i.e., p = 0): or will it? Actually, we shall have E = ± mc^2, and in some sense one might say that the negative solutions to this energy equation represent antimatter, although, as we shall see, the situation is not so clear cut. In order to make the SWE relativistic then, one need only replace the classical kinetic energy E = p^2/2m with the relativistic energy E = [m^2c^4+p^2c^2]^{1/2}. This sounds simple enough, but the square root sign leads to quite a lot of trouble! This is largely because when we make the “quantum substitution” p \rightarrow -i\hbar\nabla  we find we have to deal with the square root of an operator, which, as it turns out, requires some mathematical sophistication. Moreover, in quantum physics we must deal with operators that act upon complex wavefunctions, so that negative square roots may in fact correspond to a physically meaningful aspect of the system, and cannot simply be discarded as might be the case in a classical system. To avoid these problems we can instead start with eq (5) interpreted via the operators for momentum and energy so that eq (3) becomes (- \frac{1}{c^2}\frac{\partial^2}{\partial t^2} + \nabla^2)\psi=\frac{m^2 c^2}{\hbar^2}\psi.                                                (6) This equation is known as the Klein–Gordon equation (KGE), although it was first obtained by Schrödinger in his original development of the SWE. He abandoned it, however, when he found that it did not properly describe the energy levels of the hydrogen atom. It subsequently became clear that when applied to electrons this equation also implied two things that were considered to be unacceptable; negative energy solutions, and, even worse, negative probabilities. We now know that the KGE is not appropriate for electrons, but does describe some massive particles with spin zero when interpreted in the framework of quantum field theory (QFT); neither mesons nor QFT were known when the KGE was formulated.
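A quick symbolic check (a sketch using Python's sympy, not part of the original text) shows why both signs of the energy appear: a plane wave exp(i(px − Et)/ħ) satisfies the Klein–Gordon equation (6) only when E^2 = m^2c^4 + p^2c^2, and solving that condition for E returns both roots.

import sympy as sp

x, t, p, E = sp.symbols('x t p E', real=True)
m, c, hbar = sp.symbols('m c hbar', positive=True)

# Plane wave and the Klein-Gordon operator of eq (6), written as an expression
# that must vanish: (1/c^2) d2/dt2 psi - d2/dx2 psi + (m^2 c^2/hbar^2) psi = 0.
psi = sp.exp(sp.I * (p * x - E * t) / hbar)
kge = sp.diff(psi, t, 2) / c**2 - sp.diff(psi, x, 2) + (m**2 * c**2 / hbar**2) * psi

condition = sp.simplify(kge / psi)       # prefactor that must be zero
print(sp.solve(sp.Eq(condition, 0), E))  # -> two roots, E = ±sqrt(m**2*c**4 + p**2*c**2)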
Some of the problems with the KGE arise from the second order time derivative, which is itself a direct result of squaring everything to avoid the intractable mathematical form of the square root of an operator. The fundamental connection between time and space at the heart of relativity leads to a similar connection between energy and momentum, a connection that is overlooked in the KGE. Dirac was thus motivated by the principles of relativity to keep a first order time derivative, which meant that he had to confront the difficulties associated with using the relativistic energy head on. We will not discuss the details of its derivation but will simply consider the form of the resulting Dirac equation: (c \alpha \cdot \mathrm{P}+\beta mc^2)\psi=i\hbar \frac{\partial\psi}{\partial t}.                                                     (7) This equation has the general form of the SWE, but with some significant differences. Perhaps the most important of these is that the Hamiltonian now includes both the kinetic energy and the electron rest mass, but the coefficients \alpha_i and \beta have to be 4 × 4 matrices to satisfy the equation. That is, the Dirac equation is really a matrix equation, and the wavefunction it describes must be a four-component wavefunction. Although there are no problems with negative probabilities, the negative energy solutions seen in the KGE remain. These initially seemed to be a fatal flaw in Dirac’s work, but were overlooked because in every other aspect the equation was spectacularly successful. It reproduced the hydrogen atomic spectrum perfectly (at least, as perfectly as it was known at the time) and even included small relativistic effects, as a proper relativistic wave equation should. For example, when the electromagnetic interaction is included the Dirac equation predicts an electron magnetic moment: \mu_e = \frac{\hbar e}{2m} = \mu_B                                                                   (8) where \mu_B is known as the Bohr magneton. This expression is also in agreement with experiment, almost: it was later discovered that the magnetic moment of the electron differs from the value predicted by eq (8) by about 0.1% (Kusch and Foley 1948). The fact that Dirac’s theory was able to predict these quantities was considered to be a triumph, despite the troublesome negative energy solutions. Another intriguing aspect of the Dirac equation was noticed by Schrödinger in 1930. He realised that interference between positive and negative energy terms would lead to oscillations of the wavepacket of an electron (or positron) about some central point at the speed of light. This fast motion was given the name zitterbewegung (which is German for “trembling motion”). The underlying physical mechanism that gives rise to the zitterbewegung effect may be interpreted in several different ways, but one way to look at it is as an interaction of the electron with the zero-point energy of the (quantised) electromagnetic field. Such electronic oscillations have not been directly observed as they occur at a very high frequency (~10^{21} Hz), but since zitterbewegung also applies to electrons bound to atoms, this motion can affect atomic energy levels in an observable way. In a hydrogen atom the zitterbewegung acts to “smear out” the electron charge over a larger area, lowering the strength of its interaction with the proton charge. Since S states have a non-zero probability density at the origin, the effect is larger for these than it is for P states.
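The two scales quoted above are easy to check from fundamental constants; the short Python sketch below (an added illustration using CODATA values from scipy) evaluates the zitterbewegung frequency scale 2mc^2/ħ, of order 10^{21} per second, and the Bohr magneton of eq (8).

from scipy.constants import hbar, m_e, c, e

omega_zb = 2 * m_e * c**2 / hbar        # zitterbewegung angular frequency (rad/s)
mu_B = hbar * e / (2 * m_e)             # Bohr magneton of eq (8), in J/T
print(f"zitterbewegung frequency ~ {omega_zb:.2e} rad/s")   # ~1.6e21
print(f"Bohr magneton mu_B = {mu_B:.3e} J/T")               # ~9.27e-24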
The splitting between the hydrogen 2S_{1/2} and 2P_{1/2} states, which are degenerate in the Dirac theory, is known as the Lamb shift (Lamb, 1947). This shift, which amounts to ~1 GHz, was observed in an experiment by Willis Lamb and his student Robert Retherford (not to be confused with Ernest Rutherford!). The need to explain this shift, which requires a proper explanation of the electron interacting with the electromagnetic field, gave birth to the theory of quantum electrodynamics, pioneered by Bethe, Tomonaga, Schwinger and Feynman. The solutions to the SWE for free particles (i.e., neglecting the potential V) are of the form \psi = A \mathrm{exp}(-iEt / \hbar).                                                       (9) Here A is some function that depends only on the spatial properties of the wavefunction (i.e., not on t). Note that this wavefunction represents two electron states, corresponding to the two separate spin states. The corresponding solutions to the Dirac equation may be represented as                                                             \psi_1 = A_1 \mathrm{exp}(-iEt / \hbar), \psi_2 = A_2 \mathrm{exp}(+iEt / \hbar).                                                   (10) Here \psi_2 represents the negative energy solutions that have caused so much trouble. The existence of these states is central to the theory; they cannot simply be labelled as “unphysical” and discarded. The complete set of solutions is required in quantum mechanics, in which everything is somewhat “unphysical”. More properly, since the wavefunction is essentially a complex probability amplitude that yields a real result when its absolute value is squared, the negative energy solutions are no less physical than the positive energy solutions; it is in fact simply a matter of convention as to which states are positive and which are negative. However you set things up, you will always have some “wrong” energy states that you can’t get rid of. Thus, Dirac was able to eliminate the negative probabilities and produce a wave equation that was consistent with special relativity, but the negative energy states turned out to be a fundamental part of the theory and could not be eliminated, despite many attempts to get rid of them. After his first paper in 1928 (The quantum theory of the electron) Dirac had established that his equation was a viable relativistic wave equation, but the negative energy aspects remained controversial. He worried about this for some time, and tried to develop a “hole” theory to explain their seemingly undeniable existence. A serious problem with negative energy solutions is that one would expect all electrons to decay into the lowest energy state available, which would be the negative energy states. Since this would not be consistent with observations there must, so Dirac reasoned, be some mechanism to prevent it. He suggested that the states were already filled with an infinite “sea” of electrons, and therefore the Pauli Exclusion Principle would prevent such decay, just as it prevents more than two electrons from occupying the lowest energy level in an atom. (Note that this scheme does not work for bosons, which do not obey the exclusion principle). Such an infinite electron sea would have no observable properties, as long as the underlying vacuum has a positive “bare” charge to cancel out the negative electron charge. Since only changes in the energy density of this sea would be apparent, we would not normally notice its presence.
Moreover, Dirac suggested that if a particle were missing from the sea the resulting hole would be indistinguishable from a positively charged particle, which he speculated was a proton, protons being the only positively charged subatomic particles known at the time. This idea was presented in a paper in 1930 (A Theory of Electrons and Protons, Dirac 1930). The theory was less than successful, however, and the deficiencies served only to undermine confidence in the entire Dirac theory. Attempts to identify holes as protons only made matters worse; it was shown independently by Heisenberg, Oppenheimer and Pauli that the holes must have the electron mass, but of course protons are almost 2000 times heavier. Moreover, the instability between electrons and holes completely ruled out stable atomic states made from these entities (bad news for hydrogen, and all other atoms). Eventually Dirac was forced to conclude that the negative energy solutions must correspond to real particles with the same mass as the electron and a positive charge. He called these anti-electrons (Quantised Singularities in the Electromagnetic Field, Dirac 1931). This almost reluctant conclusion was not based on a full understanding of what the negative energy states were, but rather the fact that the entire theory, which was so beautiful in other ways that it was hard to resist, depended on them. It turns out that to properly understand the negative energy solutions requires the formalism of quantum field theory (QFT). In this description particles (and antiparticles) can be created or destroyed, so it is no longer necessarily appropriate to consider these particles to be the fundamental elements of the theory. If the total number of particles in a system is not conserved then one might prefer to describe that system in terms of the entities that give rise to the particles rather than the particles themselves. These are the quantum fields, and the standard model of particle physics is at its heart a QFT. By describing particles as oscillations in a quantum field not only do we have an immediate mechanism by which they may be created or destroyed, but the problem of negative energies is also removed, as this simply becomes a different kind of variation in the underlying quantum field. Dirac didn’t explicitly know this at the time, although it would be fair to say that he essentially invented QFT, when he produced a quantum theory that included quantized electromagnetic fields (Dirac, 1927, The Quantum Theory of the Emission and Absorption of Radiation). This led, eventually, to what would be known as quantum electrodynamics. Dirac would undoubtedly have been able to make much more use of his creation if he had not been so appalled by the notion of renormalization. Unfortunately this procedure, which in some ways can be thought of as subtracting infinite quantities from each other to leave a finite quantity, was incompatible with his sense of mathematical aesthetics. So, despite initially struggling with the interpretation of his theory, there can be no question that Dirac did indeed explicitly predict the existence of the positron before it was experimentally observed. This observation came almost immediately in cloud chamber experiments conducted by Carl Anderson in California (C. D. Anderson: The apparent existence of easily deflectable positives, Science 76 238, 1932).  Curiously, however, Anderson was not aware of the prediction, and the proximity of the observation was apparently coincidental. 
We will discuss this remarkable observation in a later post.

*This post is adapted from an as-yet unpublished book chapter by D. B. Cassidy and A. P. Mills, Jr.

Griffiths, D. (2008). Introduction to Elementary Particles. Wiley-VCH, 2nd edition.
Farmelo, G. The Strangest Man: The Hidden Life of Paul Dirac, Mystic of the Atom. Basic Books, New York (2011).
Dirac, P. A. M. (1927). The Quantum Theory of the Emission and Absorption of Radiation, Proceedings of the Royal Society of London, Series A, Vol. 114, p. 243.
Dirac, P. A. M. (1928). The Quantum Theory of the Electron, Proceedings of the Royal Society of London, Series A, Vol. 117, p. 610.
Dirac, P. A. M. (1930). A Theory of Electrons and Protons, Proceedings of the Royal Society of London, Series A, Vol. 126, p. 360.
Dirac, P. A. M. (1931). Quantised Singularities in the Electromagnetic Field, Proceedings of the Royal Society of London, Series A, Vol. 133, p. 60.
Anderson, C. D. (1932). The apparent existence of easily deflectable positives, Science 76, 238.
Aspect, A., Dalibard, J. and Roger, G. (1982). Experimental Test of Bell's Inequalities Using Time-Varying Analyzers, Phys. Rev. Lett. 49, 1804.
Kusch, P. and Foley, H. M. (1948). The Magnetic Moment of the Electron, Phys. Rev. 74, 250.

System modification for Rydberg Ps imaging

A key milestone along the road to Ps gravity measurements is control of the motion of long-lived states of positronium. Using methods previously developed for atoms and molecules we aim to manipulate low-field-seeking Stark states within the Rydberg-Stark manifold (see below) using inhomogeneous electric fields [1, 2]. The force exerted on Rydberg atoms due to their electric dipole moment depends on n, k and the gradient of the electric field (see the sketch after the reference list below), where n is the principal quantum number, k is the parabolic quantum number (ranging from -(n-1-|m|) to n-1-|m| in steps of 2), and F is the electric field strength [3, 4]. The figure above shows an example of a Rydberg-Stark manifold for n = 11. We have recently modified our experimental system to accommodate an MCP for imaging Ps atoms. This involved the extension of our beamline with another multi-port vacuum chamber, within which we should be able to reproduce laser excitation of Ps to Rydberg states. These will be formed at the centre of the chamber and directed along a 45 degree path towards the MCP. If imaging Ps* proves successful we will then use electrodes to create the inhomogeneous electric fields needed to manipulate their flight path. The addition of the new vacuum chamber to our beamline is shown below.

[1] S. D. Hogan and F. Merkt (2008). Demonstration of Three-Dimensional Electrostatic Trapping of State-Selected Rydberg Atoms. Physical Review Letters, 100:043001.
[2] E. Vliegen, P. A. Limacher and F. Merkt (2006). Measurement of the three-dimensional velocity distribution of Stark-decelerated Rydberg atoms. European Physical Journal D, 40:73-80.
[3] E. Vliegen and F. Merkt (2006). Normal-Incidence Electrostatic Rydberg Atom Mirror. Physical Review Letters, 97:033002.
[4] S. D. Hogan (2012). Cold atoms and molecules by Zeeman deceleration and Rydberg-Stark deceleration, Habilitation Thesis. Laboratory of Physical Chemistry, ETH Zurich.
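A hedged sketch of the standard first-order Rydberg-Stark expressions referred to above: the Stark shift of a state |n, k, m> is approximately (3/2) n k e a0 F, so each Stark state carries an electric dipole moment of about (3/2) n k e a0 and feels a force proportional to the field gradient. The values below (n = 11, m = 0, and the field gradient) are assumptions chosen for illustration only.

import numpy as np

e = 1.602176634e-19           # elementary charge, C
a0 = 5.29177210903e-11        # Bohr radius, m
m_Ps = 2 * 9.1093837015e-31   # positronium mass, kg

n, m_q = 11, 0
k = np.arange(-(n - 1 - abs(m_q)), (n - 1 - abs(m_q)) + 1, 2)   # parabolic quantum numbers

grad_F = 1.0e6                # assumed field gradient, (V/m) per metre; illustrative only

dipole = 1.5 * n * k * e * a0   # first-order Stark dipole moment of each |n,k,m> state
force = -dipole * grad_F        # force along the gradient; k > 0 states are low-field seeking

print(f"largest dipole moment: {np.max(np.abs(dipole)):.2e} C m")
print(f"corresponding acceleration of Ps: {np.max(np.abs(force)) / m_Ps:.2e} m/s^2")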
The supersymmetric quantum mechanical model based on higher-derivative supercharge operators possessing unbroken supersymmetry and discrete energies below the vacuum state energy is described. As an example the harmonic oscillator potential is considered.

Modern Physics Letters A, Vol. 11, No. 19 (1996) 1563-1567
Boris F. Samsonov, Tomsk State University, 36 Lenin Ave., 634050 Tomsk, Russia

1. Ideas of supersymmetry appeared in physics for the first time in quantum field theory, for unifying interactions of a different nature [14]. In a supersymmetric theory the supersymmetry can be either exact or spontaneously broken. Supersymmetric quantum mechanics was introduced [2] to illustrate the problems of supersymmetry breakdown in supersymmetric quantum field theories. For this purpose the Witten criterion, based on the Witten index [2], has been elaborated. In the case of broken supersymmetry the entire spectrum of the super-Hamiltonian is twofold degenerate, and in the case of exact supersymmetry its vacuum state is nondegenerate. In the first case the supercharge operators map the two states corresponding to the vacuum energy (zero energy) into one another, and in the second the vacuum state is annihilated by both supercharges. (See for example a recent survey [3].) Recently a higher-derivative extension of supersymmetric quantum mechanics has been elaborated [4]. In this approach supercharges are constructed in terms of higher-derivative differential operators and the corresponding superalgebra is polynomial in the Hamiltonian. This model exhibits a number of unusual properties [5]. In particular, the Witten criterion of spontaneous supersymmetry breaking is no longer applicable [4]. We now want to describe an unusual property of such models in terms of supersymmetry breakdown which has not been described earlier. In our case the state which is nondegenerate and annihilated by both mutually conjugated supercharges is situated in the middle of the discrete spectrum of the super-Hamiltonian. It follows that if one associates the zero energy value with this state, the levels lying below it must take negative values. This situation cannot occur in conventional supersymmetric quantum mechanics [3], and we can claim that our higher-derivative model exhibits at once the properties of models with exact and with spontaneously broken supersymmetry.

2. The higher-derivative supersymmetry in quantum mechanics [4] is closely related to the higher-derivative Darboux transformation [5], [6], [7]. This transformation, denoted here as , is introduced in accordance with the general concept of transformation operators [8] as an -order differential operator intertwining two Hamiltonians and . The proper functions of one of them (for example ) are assumed to be known: . One then obtains the proper function , corresponding to the same eigenvalue , of the other (i.e. ) with the help of the operator : , except for the functions which form the kernel of the operator Laplace adjoint to , denoted by . We assume that the operators and are self-adjoint. (More precisely, we suppose that their potentials are real-valued functions and the Hamiltonians are essentially self-adjoint in the sense of some scalar product.) In this case the operator assures the transformation in the inverse direction: from the eigenfunctions to the eigenfunctions . When we have the well-known Darboux transformation [9], called the first-order Darboux transformation.
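As a hedged reconstruction of the standard intertwining relations described above (the notation is assumed here, since the paper's inline symbols were lost): let $L$ denote the $N$-th order Darboux operator and $h_0$, $h_1$ the two Hamiltonians. Then

$L\,h_0 = h_1\,L, \qquad h_{0,1} = -\frac{d^2}{dx^2} + V_{0,1}(x),$

$h_0\,\psi_E = E\,\psi_E \;\Rightarrow\; h_1\,(L\psi_E) = E\,(L\psi_E),$

and the (Laplace) adjoint operator $L^+$ intertwines in the opposite direction, $h_0\,L^+ = L^+\,h_1$, so it maps eigenfunctions of $h_1$ back to eigenfunctions of $h_0$ at the same eigenvalue, except on the kernel of $L^+$.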
It can be shown [6] that the operator can always be presented as a product of first-order Darboux transformation operators between every two juxtaposed Hamiltonians , , …, : , , . Some of the intermediate Hamiltonians can have complex-valued potentials, but the final potential of always remains a real-valued function (the so-called irreducible case [4]). In this letter we want to point out that an irreducible case other than the one described in Ref. [4] exists. It is connected with the choice of discrete spectrum functions of the Hamiltonian as the transformation functions. In this case the intermediate potentials are real-valued functions having additional singularities with respect to the initial potential. It follows from a theorem proved in Ref. [6] that the operator can always be presented in the form known as the Crum-Krein formula [10], [11]: where stands for the usual symbol for the Wronskian of the functions , called transformation functions, which satisfy the initial Schrödinger equation (), the prime denotes the derivative with respect to the real coordinate , and the determinant is a differential operator obtained by expanding the determinant along the last column, with the functional coefficients placed before the derivative operators. The difference between the final Schrödinger equation potential and the initial one reads as follows: . The function is well defined if the Wronskian conserves its sign in the interval for the variable in the initial Schrödinger equation. If the discrete spectrum eigenfunctions of the Hamiltonian are enumerated by the number of their zeros, the condition for the Wronskian to conserve its sign was formulated by Krein [11]: the Wronskian conserves its sign, the integers being equal to the numbers of zeros of the functions , if for all the following inequality holds: . In particular, the functions may be two-by-two juxtaposed discrete spectrum eigenfunctions. The levels with , will then be absent in the discrete spectrum of the new Hamiltonian . It follows from formula (1) that span . For we have: span , where [6] is the -order Wronskian constructed from the functions except for , . The product , being a symmetry operator for the initial Schrödinger equation, is a polynomial function of the initial Hamiltonian. Taking into account the condition , we obtain more precisely [4], [6]: The same is true for the product :

3. Let be two-by-two juxtaposed discrete spectrum eigenfunctions of and let be a basis set of the Hilbert space ( is a discrete subsystem and is a continuous one). Introduce the notation . Then the system of functions is complete in [11], [12]. With the help of and we build up the supercharges , which together with the super-Hamiltonian diag form an -order superalgebra [4], [6]: Every energy of the super-Hamiltonian is twofold degenerate if and nondegenerate if . The energy can be associated with the ground state of the super-Hamiltonian, and the wave function , being annihilated by both supercharges, can be considered as the vacuum state. All the other nondegenerate states , , are also annihilated by both supercharges. We can choose the set in such a way that the ground state of the Hamiltonian does not belong to this set. In this case proper functions of the super-Hamiltonian with energies below its vacuum state exist and are twofold degenerate.

4. We now give an example of the situation described above. Consider the harmonic potential with the discrete spectrum and the well-known discrete spectrum eigenfunctions , where is the Hermite polynomial [13].
The double Darboux transformation with the juxtaposed functions and produces a new potential of the form [6]: In its discrete spectrum the levels and are absent. The wave functions, normalized to unity, have the form The state , having the minimal energy value among the two states annihilated by both supercharges, can be associated with the vacuum state. All the states , , have energies below the energy of the vacuum state .

5. Supersymmetric quantum mechanics is now widely used in different branches of physics such as statistical physics, condensed matter, and atomic physics [3], [14]. An essential ingredient of this theory is the Darboux transformation, which permits us to construct for every exactly solvable potential a family of its exactly solvable partners. If we start (as in our example) from the harmonic potential we can construct exactly solvable potentials with equidistant or quasi-equidistant spectra [6]. The coherent states of these potentials [15], known in quantum optics as wavelets, represent nondispersive wave packets. In a particular case the potential considered above has been obtained earlier by other means [16]. This potential corresponds to one of the rational solutions of the Painlevé IV differential equation [17]. The connection of the Painlevé IV and V transcendents with the Schrödinger equation was studied in Ref. [18]. In our opinion, with the help of the Darboux transformation it is possible to establish a correspondence between the known rational solutions of these equations [19] and exactly solvable potentials with quasi-equidistant spectra. An application of the double Darboux transformation to the Coulomb potential, which gives a new exactly solvable potential in whose discrete spectrum the levels with and are absent, was made in Ref. [20]. Using these results and the above-described approach we can construct for this system a second-order supersymmetric model analogous to the one discussed here.
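To make the worked example of Sec. 4 concrete, here is a small symbolic sketch of the double Darboux (Crum) transformation of the harmonic oscillator, using the standard Crum formula V_new = V - 2 (ln W)'' with W the Wronskian of two juxtaposed eigenfunctions. The convention H = -d^2/dx^2 + x^2 (so E_n = 2n + 1) and the pair psi_2, psi_3 are choices made for illustration; they are not taken from the paper, whose own formulas and normalizations were lost.

import sympy as sp

x = sp.symbols('x', real=True)

def psi(n):
    # Eigenfunctions of H = -d^2/dx^2 + x^2 (unnormalized), eigenvalues E_n = 2n + 1
    return sp.hermite(n, x) * sp.exp(-x**2 / 2)

# Juxtaposed transformation functions psi_2, psi_3 (Krein's condition => nodeless Wronskian)
W = sp.simplify(sp.wronskian([psi(2), psi(3)], x))

# Crum formula: partner potential with the levels E_2 = 5 and E_3 = 7 removed
V_new = sp.simplify(x**2 - 2 * sp.diff(sp.log(W), x, 2))

print("Wronskian    :", W)       # proportional to (4*x**4 + 3)*exp(-x**2), nodeless on the real line
print("new potential:", V_new)

Running this produces the kind of rational, anharmonic deformation of the oscillator described in the text, with two adjacent levels missing from its discrete spectrum.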
Wannier-Stark resonances in optical and semiconductor superlattices Markus Glück\address[kl]FB Physik, Universität Kaiserslautern, D-67653 Kaiserslautern, Germany, Andrey R. Kolovsky\addressmark[kl]\addressL. V. Kirensky Institute of Physics, 660036 Krasnoyarsk, Russia, and Hans Jürgen Korsch\addressmark[kl] In this work, we discuss the resonance states of a quantum particle in a periodic potential plus a static force. Originally this problem was formulated for a crystal electron subject to a static electric field and it is nowadays known as the Wannier-Stark problem. We describe a novel approach to the Wannier-Stark problem developed in recent years. This approach allows to compute the complex energy spectrum of a Wannier-Stark system as the poles of a rigorously constructed scattering matrix and solves the Wannier-Stark problem without any approximation. The suggested method is very efficient from the numerical point of view and has proven to be a powerful analytic tool for Wannier-Stark resonances appearing in different physical systems such as optical lattices or semiconductor superlattices. PACS: 03.65.-w; 05.45.+b; 32.80.Pj; -73.20.Dx Chapter 1 Introduction The problem of a Bloch particle in the presence of additional external fields is as old as the quantum theory of solids. Nevertheless, the topics introduced in the early studies of the system, Bloch oscillations [1], Zener tunneling [2] and the Wannier-Stark ladder [3], are still the subject of current research. The literature on the field is vast and manifold, with different, sometimes unconnected lines of evolution. In this introduction we try to give a survey of the field, summarize the different theoretical approaches and discuss the experimental realizations of the system. It should be noted from the very beginning that most of the literature deals with one-dimensional single-particle descriptions of the system, which, however, capture the essential physics of real systems. Indeed, we will also work in this context. 1.1 Wannier-Stark problem In the one-dimensional case the Hamiltonian of a Bloch particle in an additional external field, in the following referred to as the Wannier-Stark Hamiltonian, has the form where stands for the static force induced by the external field. Clearly, the external field destroys the translational symmetry of the field-free Hamiltonian . Instead, from an arbitrary eigenstate with , one can by a translation over periods construct a whole ladder of eigenstates with energies , the so-called Wannier-Stark ladder. Any superposition of these states has an oscillatory evolution with the time period known as the Bloch period. There has been a long-standing controversy about the existence of the Wannier-Stark ladder and Bloch oscillations [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19], and only recently agreement about the nature of the Wannier-Stark ladder was reached. The history of this discussion is carefully summarized in [12, 20, 21, 22]. Figure 1.1: Schematic illustration of the Wannier-Stark ladder of resonances. The width of the levels is symbolized by the different strength of the lines. From today’s point of view the discussion mainly dealt with the effect of the single band approximation (effectively a projection on a subspace of the Hilbert space) on the spectral properties of the Wannier-Stark Hamiltonian. 
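For reference, the standard forms behind the Hamiltonian (1.1), the Wannier-Stark ladder and the Bloch period quoted above are (a hedged reconstruction with the usual notation, since the displayed equations are not reproduced here):

$H_W = \frac{p^2}{2m} + V(x) + Fx, \qquad V(x+d) = V(x)$   [Wannier-Stark Hamiltonian],

$E_{\alpha,l} = E_\alpha + dFl, \quad l = 0, \pm 1, \pm 2, \dots$   [Wannier-Stark ladder],

$T_B = \frac{2\pi\hbar}{dF}$   [Bloch period].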
Within the single band approximation, the ’th band of the field-free Hamiltonian forms, if the field is applied, the Wannier-Stark ladder with the quantized energies where is the mean energy of the -th band (see Sec. 1.2). This Wannier-Stark quantization was the main point to be disputed in the discussions mentioned above. The process, which is neglected in the single band approximation and which couples the bands, is Zener tunneling [2]. For smooth potentials , the band gap decreases with increasing band index. Hence, as the tunneling rate increases with decreasing band gap, the Bloch particles asymmetrically tend to tunnel to higher bands and the band population depletes with time (see Sec. 1.3). This already gives a hint that Eq. (1.3) can be only an approximation to the actual spectrum of the sytem. Indeed, it has been proven that the spectrum of the Hamiltonian (1.1) is continuous [23, 24]. Thus the discrete spectrum (1.3) can refer only to resonances [25, 26, 27, 28, 29], and Eq. (1.3) should be corrected as (see Fig. 1.1). The eigenstates of the Hamiltonian (1.1) corresponding to these complex energies, referred in what follows as the Wannier-Stark states , are metastable states with the lifetime given by . To find the complex spectrum (1.4) (and corresponding eigenstates) is an ultimate aim of the Wannier-Stark problem. Several attempts have been made to calculate the Wannier-Stark ladder of resonances. Some analytical results have been obtained for nonlocal potentials [30, 31] and for potentials with a finite number of gaps [32, 33, 34, 35, 36, 37, 38]. (We note, however, that almost all periodic potentials have an infinite number of gaps.) A common numerical approach is the formalism of a transfer matrix to potentials which consist of piecewise constant or linear parts, eventually separated by delta function barriers [39, 40, 41, 42, 43]. Other methods approximate the periodic system by a finite one [44, 45, 46, 47]. Most of the results concerning Wannier-Stark systems, however, have been deduced from single- or finite-band approximations and strongly related tight-binding models. The main advantage of these models is that they, as well in the case of static (dc) field [48] as in the cases of oscillatory (ac) and dc-ac fields [49, 50, 51, 52, 53, 54, 55, 56], allow analytical solutions. Tight-binding models have been additionally used to investigate the effect of disorder [57, 58, 59, 60, 61, 62], noise [63] or alternating site energies [64, 65, 66, 67, 68] on the dynamics of Bloch particles in external fields. In two-band descriptions Zener tunneling has been studied [69, 70, 71, 72, 73], which leads to Rabi oscillations between Bloch bands [74]. Because of the importance of tight-binding and single-band models for understanding the properties of Wannier-Stark resonances we shall discuss them in some more detail. 1.2 Tight-binding model In a simple way, the tight-binding model can be introduced by using the so-called Wannier states (not to be confused with Wannier-Stark states), which are defined as follows. In the absence of a static field, the eigenstates of the field-free Hamiltonian, are known to be the Bloch waves with the quasimomentum defined in the first Brillouin zone . The functions (1.6) solve the eigenvalue equation where are the Bloch bands. Without affecting the energy spectrum, the free phase of the Bloch function can be chosen such that it is an analytic and periodic function of the quasimomentum [75]. 
Then we can expand it in a Fourier series in , where the expansion coefficients are the Wannier functions. Let us briefly recall the main properties of the Wannier and Bloch states. Both form orthogonal sets with respect to both indices. The Bloch functions are, in general, complex while the Wannier functions can be chosen to be real. While the Bloch states are extended over the whole coordinate space, the Wannier states are exponentially localized [76, 77], essentially within the -th cell of the potential. Furthermore, the Bloch functions are the eigenstates of the translation (over a lattice period) operator while the Wannier states satisfy the relation which directly follows from Eq. (1.8). Finally, the Bloch states are eigenstates of but the Wannier states are not. As an example, Fig. 1.2 shows the Bloch band spectrum and two Wannier functions of the system (1.5) with , and . The exponential decrease of the ground state is very fast, i.e. the relative occupancy of the adjacent wells is less than . For the second excited Wannier state it is a few percent. Figure 1.2: Left panel – lowest energy bands for the potential with parameters and . Right panel – associated Wannier states (solid line) and (dotted line). The localization property of the Wannier states suggests to use them as a basis for calculating the matrix elements of the Wannier-Stark Hamiltonian (1.1). (Note that the field-free Hamiltonian (1.5) is diagonal in the band index .) The tight-binding Hamiltonian is deduced in the following way. Considering a particular band , one takes into account only the main and the first diagonals of the Hamiltonian . From the field term only the diagonal part is taken into account. Then, denoting the Wannier states resulting from the -th band by , the tight-binding Hamiltonian reads The Hamiltonian (1.10) can be easily diagonalized which yields the spectrum with the eigenstates Thus, all states are localized and the spectrum is the discrete Wannier-Stark ladder (1.3). The obtained result has a transparent physical meaning. When the energy levels of Wannier states coincide and the tunneling couples them into Bloch waves . Correspondingly, the infinite degeneracy of the level is removed, producing the Bloch band  111Because only the nearest off-diagonal elements are taken into account in Eq. (1.10), the Bloch bands are always approximated by a cosine dispersion relation. When the Wannier levels are misaligned and the tunneling is suppressed. As a consequence, the Wannier-Stark state involves (effectively) a finite number of Wannier states, as indicated by Eq. (1.11). It will be demonstrated later on that for the low-lying bands Eq. (1.3) and Eq. (1.11) approximate quite well the real part of the complex Wannier-Stark spectrum and the resonance Wannier-Stark functions , respectively. The main drawback of the model, however, is its inability to predict the imaginary part of the spectrum (i.e. the lifetime of the Wannier-Stark states), which one has to estimate from an independent calculation. Usually this is done with the help of Landau-Zener theory. 1.3 Landau-Zener tunneling Let us address the following question: if we take an initial state in the form of a Bloch wave with quasimomentum , what will be the time evolution of this state when the external static field is switched on? 
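Before turning to this question, the tight-binding ladder (1.10)-(1.11) just described is easy to check numerically. The sketch below builds a tight-binding Wannier-Stark matrix with on-site energies $dFl$ and nearest-neighbour hopping $-\Delta/4$ (so the field-free band is $-\frac{\Delta}{2}\cos\kappa d$, of width $\Delta$), and verifies that the interior eigenvalues form the equidistant ladder $E_l = dFl$ while the eigenvector centred at $l = 0$ follows the standard Bessel-function profile $J_m(\Delta/2dF)$. All parameter values are illustrative assumptions.

import numpy as np
from scipy.special import jv

d, F, Delta = 1.0, 0.1, 0.4        # lattice period, static force, band width (illustrative)
L = 60
sites = np.arange(-L, L + 1)

# Tight-binding Wannier-Stark Hamiltonian: on-site energies d*F*l, hopping -Delta/4
H = np.diag(d * F * sites)
H = H + np.diag(-Delta / 4 * np.ones(2 * L), 1) + np.diag(-Delta / 4 * np.ones(2 * L), -1)

E, C = np.linalg.eigh(H)

# Interior eigenvalues reproduce the ladder E_l = d*F*l ...
print("central eigenvalues:", np.round(E[L - 2:L + 3], 6))

# ... and the state centred at l = 0 has Wannier amplitudes J_m(Delta / (2*d*F))
idx = int(np.argmin(np.abs(E)))
bessel = jv(sites, Delta / (2 * d * F))
bessel = bessel / np.linalg.norm(bessel)
print("deviation from Bessel profile:", np.max(np.abs(np.abs(C[:, idx]) - np.abs(bessel))))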
The common approach to this problem is to look for the solution as the superposition of Houston functions [78] where is the Bloch function with the quasimomentum evolving according to the classical equation of motion , i.e . Substituting Eq. (1.12) into the time-dependent Schrödinger equation with the Hamiltonian (1.1), we obtain where . Neglecting the interband coupling, i.e.  for , we have This solution is the essence of the so-called single-band approximation. We note that within this approximation one can use the Houston functions (1.13) to construct the localized Wannier-Stark states similar to those obtained with the help of the tight-binding model. The correction to the solution (1.15) is obtained by using the formalism of Landau-Zener tunneling. In fact, when the quasimomentum explores the Brillouin zone, the adiabatic transition occurs at the points of “avoided” crossings between the adjacent Bloch bands [see, for example, the avoided crossing between the 4-th and 5-th bands in Fig. 1.2(a) at ]. Semiclassically, the probability of this transition is given by where is the energy gap between the bands and , stand for the slope of the bands at the point of avoided crossing in the limit [79]. In a first approximation, one can assume that the adiabatic transition occurs once for each Bloch cycle . Then the population of the -th band decreases exponentially with the decay time where and are band-dependent constants. In conclusion, within the approach described above one obtains from each Bloch band a set of localized states with energies given by Eq. (1.3). However, these states have a finite lifetime given by Eq. (1.17). It will be shown in Sec. 3.1 that the estimate (1.17) is, in fact, a good “first order” approximation for the lifetime of the metastable Wannier-Stark states. 1.4 Experimental realizations We proceed with experimental realizations of the Wannier-Stark Hamiltonian (1.1). Originally, the problem was formulated for a solid state electron system with an applied external electric field, and in fact, the first measurements concerning the existence of the Wannier-Stark ladder dealt with photo-absorption in crystals [80]. Although this system seems convenient at first glance, it meets several difficulties because of the intrinsic multi-particle character of the system. Namely, the dynamics of an electron in a solid is additionally influenced by electron-phonon and electron-electron interactions. In addition, scattering by impurities has to be taken into account. In fact, for all reasonable values of the field, the Bloch time (1.2) is longer than the relaxation time, and therefore neither Bloch oscillations nor Wannier-Stark ladders have been observed in solids yet. One possibility to overcome these problems is provided by semiconductor superlattices [81], which consists of alternating layers of different semiconductors, as for example, and . In the most simple approach, the wave function of a carrier (electron or hole) in the transverse direction of the semiconductor superlattice is approximated by a plane wave for a particle of mass (the effective mass of the electron in the conductance or valence bands, respectively). In the direction perpendicular to the semiconductor layers (let it be -axis) the carrier “sees” a periodic sequence of potential barriers where the height of the barrier is of the order of 100 meV and the period Å. 
Because the period of this potential is two orders of magnitude larger than the lattice period in bulk semiconductor, the Bloch time is reduced by this factor and may be smaller than the relaxation time. Indeed, semiconductor superlattices were the first systems where Wannier-Stark ladders were observed [82, 83, 84] and Bloch oscillations have been measured in four-wave-mixing experiments [85, 86] as proposed in [87]. In the following years, many facets of the topics have been investigated. Different methods for the observation of Bloch oscillation have been applied [88, 89, 90, 91], and nowadays it is possible to detect Bloch oscillations at room temperature [92], to directly measure [93] or even control [94] their amplitude. Wannier-Stark ladders have been found in a variety of superlattice structures [95, 96, 97, 98, 99], with different methods [100, 101]. The coupling between different Wannier-Stark ladders [102, 103, 104, 105, 106], the influence of scattering [107, 108, 109], the relation to the Franz-Keldysh effect [110, 111, 112], the influence of excitonic interactions [113, 114, 115, 116, 117] and the role of Zener tunneling [118, 119, 120, 121] have been investigated. Altogether, there is a large variety of interactions which affect the dynamics of the electrons in semiconductor superlattices, and it is still quite complicated to assign which effect is due to which origin. A second experimental realization of the Wannier-Stark Hamiltonian is provided by cold atoms in optical lattices. The majority of experiments with optical lattices deals with neutral alkali atoms such as lithium [122], sodium [123, 124, 125], rubidium [126, 127, 128] or cesium [129, 130, 131], but also optical lattices for argon have been realized [132]. The description of the atoms in an optical lattice is rather simple. One approximately treats the atom as a two-state system which is exposed to a strongly detuned standing laser wave. Then the light-induced force on the atom is described by the potential [133, 134] where is the Rabi frequency (which is proportional to the product of the dipole matrix elements of the optical transition and the amplitude of the electric component of the laser field), is the wave number of the laser, and is the detuning of the laser frequency from the frequency of the atomic transition.222 The atoms are additionally exposed to dissipative forces, which may have substantial effects on the dynamics [135]. However, since these forces are proportional to while the dipole force (1.19) is proportional to , for sufficiently large detuning one can reach the limit of non-dissipative optical lattices. In addition to the optical forces, the gravitational force acts on the atoms. Therefore, a laser aligned in vertical direction yields the Wannier-Stark Hamiltonian where is the mass of the atom and the gravitational constant. An approach where one can additionally vary the strength of the constant force is realized by introducing a tunable frequency difference between the two counter-propagating waves which form the standing laser wave. If this difference increases linearly in time, , the two laser waves gain a phase difference which increases quadratically in time according to . The superposition of both waves then yields an effective potential , which in the rest frame of the potential also yields the Hamiltonian (1.20) with the gravitational force substituted by . 
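To attach some numbers to the two realizations discussed here, the following sketch evaluates the Bloch period $T_B = 2\pi\hbar/(dF)$ for a caesium atom held against gravity in a vertical optical lattice and for an electron in a biased semiconductor superlattice. The specific values (852 nm lattice light, a 100 Å superlattice period, a 10 kV/cm bias) are illustrative assumptions, not parameters taken from the text.

import numpy as np

hbar = 1.054571817e-34        # J s
e = 1.602176634e-19           # C
g = 9.81                      # m/s^2

# Optical lattice: Cs atom, lattice period d = lambda/2, force F = m*g
m_Cs = 132.905 * 1.66054e-27  # kg
d_opt = 852e-9 / 2            # m
T_B_opt = 2 * np.pi * hbar / (m_Cs * g * d_opt)

# Semiconductor superlattice: electron, period d = 100 Angstrom, field E = 10 kV/cm
d_sl = 100e-10                # m
E_field = 1.0e6               # V/m
T_B_sl = 2 * np.pi * hbar / (e * E_field * d_sl)

print(f"Bloch period, Cs in a vertical optical lattice: {T_B_opt * 1e3:.2f} ms")
print(f"Bloch period, superlattice at 10 kV/cm:         {T_B_sl * 1e12:.2f} ps")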
The atom-optical system provides a much cleaner realization of the single particle Wannier-Stark Hamiltonian (1.1) than the solid state systems. No scattering by phonons or lattice impurities occurs. The atoms are neutral and therefore no excitonic effects have to be taken into account. Finally, the interaction between the atoms can be neglected in most cases which justifies a single particle description of the system. Indeed, Wannier-Stark ladders, Bloch oscillations and Zener tunneling have been measured in several experiments in optical lattices [123, 124, 129, 136, 137, 138]. Besides the semiconductor and optical lattices, different attempts have been made to find the Wannier-Stark ladder and Bloch oscillations in other systems like natural superlattices, optical and acoustical waveguides, etc. [139, 140, 141, 142, 143, 144, 145, 146, 147, 148]. However, here we denote them mainly for completeness. In the applications of the theory to real systems we confine ourselves to optical lattices and semiconductor superlattices. A final remark of this section concerns the choice of the independent parameters of the systems. In fact, by using an appropriate scaling, four main parameters of the physical systems – the particle mass , the period of the lattice , the amplitude of the periodic potential and the amplitude of the static force – can be reduced to two independent parameters. In what follows we use the scaling which sets , and . Then the independent parameters of the system are the scaled Planck constant (entering the momentum operator) and the scaled static force . In particular, for the system (1.20) the scaling , () gives i.e. the scaled Planck constant is inversely proportional to the intensity of the laser field. For the semiconductor superlattice, the scaled Planck constant is . 1.5 This work In this work we describe a novel approach to the Wannier-Stark problem which has been developed by the authors during the last few years [149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164]. By using this approach, one finds the complex spectrum (1.3) as the poles of a rigorously constructed scattering matrix. The suggested method is very efficient from the numerical points of view and has proven to be a powerful tool for an analysis of the Wannier-Stark states in different physical systems. The review consists of two parts. The first part, which includes chapters 2-3, deals with the case of a dc field. After introducing a scattering matrix for the Wannier-Stark system we describe the basic properties of the Wannier-Stark states, such as lifetime, localization of the wave function, etc., and analyze their dependence on the magnitude of the static field. A comparison of the theoretical predictions with some recent experimental results is also given. In the second part (chapters 4-7) we study the case of combined ac-dc fields: We show that the scattering matrix introduced for the case of dc field can be extended to the latter case, provided that the period of the driving field and the Bloch period (1.1) are commensurate, i.e. with being integers. Moreover, the integer in the last equation appears as the number of scattering channels. The concept of the metastable quasienergy Wannier-Bloch states is introduced and used to analyze the dynamical and spectral properties of the system (1.22). 
Although the method of the quasienergy Wannier-Bloch states is formally applicable only to the case of "rational" values of the driving frequency (in the sense of equation ), the obtained results can be well interpolated for arbitrary values of . The last chapter of the second part of the work deals with the same Hamiltonian (1.22) but considers a very different topic. In chapters 2-6 the system parameters are assumed to be in the deep quantum region (which is actually the case realized in most experiments with semiconductors and optical lattices). In chapter 7, we turn to the semiclassical region of the parameters, where the system (1.22) exhibits chaotic scattering. We perform a statistical analysis of the complex (quasienergy) spectrum of the system and compare the results obtained with the prediction of random matrix theory for chaotic scattering. To conclude, it is worth adding a few words about notation. Throughout the paper we use lower case to denote the Bloch states, which are eigenstates of the field-free Hamiltonian (1.5). The Wannier-Stark states, which solve the eigenvalue problem with Hamiltonian (1.1) and which are our main object of interest, are denoted by capital . These states should not be confused with the Wannier states (1.8), denoted by lower case . Besides the Bloch, Wannier, and Wannier-Stark states we shall introduce later on the Wannier-Bloch states. These states generalize the notion of Bloch states to the case of a nonzero static field and are denoted by capital . Thus we always use capital letters ( or ) to refer to the eigenfunctions for and lower case letters ( or ) in the case of zero static field, as summarized in the table below.

name | description | dc field
Bloch | delocalized eigenfunctions of the Hamiltonian | zero field
Wannier | dual localized basis functions | zero field
Wannier-Stark | resonance eigenfunctions of the Hamiltonian | nonzero field
Wannier-Bloch | resonance eigenfunctions of the evolution operator | nonzero field

Chapter 2 Scattering theory for Wannier-Stark systems

In this work we reverse the traditional view in treating the two contributions of the potential to the Wannier-Stark Hamiltonian. Namely, we will now consider the external field as part of the unperturbed Hamiltonian and the periodic potential as a perturbation, i.e. , where . The combined potential cannot support bound states, because any state can tunnel through a finite number of barriers and finally decay in the negative -direction (). Therefore we treat this system using scattering theory. We then have two sets of eigenstates, namely the continuous set of scattering states, whose asymptotics define the S-matrix , and the discrete set of metastable resonance states, whose complex energies are given by the poles of the S-matrix. Due to the periodicity of the potential , the resonances are arranged in Wannier-Stark ladders of resonances. The existence of the Wannier-Stark ladders of resonances in different parameter regimes has been proven, e.g., in [25, 26, 27, 28].

2.1 S-matrix and Floquet-Bloch operator

The scattering matrix is calculated by comparing the asymptotes of the scattering states with the asymptotes of the "unscattered" states , which are the eigenstates of the "free" Hamiltonian . In configuration space, the are Airy functions, where , , , and [165]. Asymptotically the scattering states behave in the same way, however, they have an additional phase shift , i.e. for we have . Actually, in the Stark case it is more convenient to compare the momentum space instead of the configuration space asymptotes.
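As a hedged reminder of the standard result being compared here (notation and sign conventions assumed, not the review's own equations), the "free" Stark Hamiltonian and its energy-normalized eigenstates in the momentum representation read

$H_0 = \frac{p^2}{2} + Fx, \qquad \Psi_E^0(k) = (2\pi\hbar F)^{-1/2}\exp\!\left[\frac{i}{\hbar F}\left(\frac{k^3}{6} - Ek\right)\right],$

i.e. the Fourier transform of the configuration-space Airy function. The scattering states differ from $\Psi_E^0$ only by an additional energy-dependent phase in one of the $k \to \pm\infty$ asymptotes, and it is this phase that defines the S-matrix.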
(Indeed, it can be shown that both approaches are equivalent [160, 164].) In momentum space the eigenstates (2.3) are given by For the direction of decay is the negative -axis, so the limit of is the outgoing part and the limit the incoming part of the free solution. The scattering states solve the Schrödinger equation with . (By omitting the second argument of the wave function, we stress that the equation holds both in the momentum and coordinate representations.) Asymptotically the potential can be neglected and the scattering states are eigenstates of the free Hamiltonian (2.2). In other words, we have With the help of Eqs. (2.5) and (2.7) we get which is the definition we use in the following. In terms of the phase shifts the S-matrix obviously reads and, thus, it is unitary. To proceed further, we use a trick inspired by the existence of the space-time translational symmetry of the system, the so-called electric translation [166]. Namely, instead of analyzing the spectral problem (2.6) for the Hamiltonian, we shall analyze the spectral properties of the evolution operator over a Bloch period Using the gauge transformation, which moves the static field into the kinetic energy, the operator (2.9) can be presented in the form where the hat over the exponential function denotes time ordering 111Indeed, substituting into the Schrödinger equation, , the wave function in the form , we obtain where . Thus or .. The advantage of the operator over the Hamiltonian is that it commutes with the translational operator and, thus, the formalism of the quasimomentum can be used.222The tight-binding version of the evolution operator (2.10) was studied in Ref. [167]. Besides this, the evolution operator also allows us to treat the combined case of an ac-dc field, which will be the topic of the second part of this work. There is a one to one correspondence between the eigenfunctions of the Hamiltonian and the eigenfunctions of the evolution operator. Indeed, let be an eigenfunction of corresponding to the energy . Then the function is a Bloch-like eigenfunction of corresponding to the eigenvalue , i.e. Equation (2.13) simply follows from the continuous time evolution of the function (2.12), which is , or Let us also note that the quasimomentum does not enter into the eigenvalue . Thus the spectrum of the evolution operator is degenerate along the Brillouin zone. Besides this, the relation between energy and is unique only if we restrict the energy interval considered to the first “energy Brillouin zone”, i.e. . When the energy is restricted by this first Brillouin zone, the transformation inverse to (2.12) reads This relation allows us to use the asymptotes of the Floquet-Bloch solution instead of the asymptotes of the in the S-matrix definition (2.8). In fact, since the functions are Bloch-like solution, they can be expanded in the basis of plane waves: From the integral (2.15) the relation follows directly, i.e. in the momentum representation the functions and coincide at the points . Thus we can substitute the asymptotes of in Eq. (2.8). This gives where the energy on the right-hand side of the equation enters implicitly through the eigenvalue . Let us also note that by construction in Eq. (2.17) does not depend on the particular choice of the quasimomentum . In numerical calculations this provides a test for controlling the accuracy. 2.2 S-matrix: basic equations Using the expansion (2.16), the eigenvalue equation (2.13) can be presented in matrix form and the unitary operator is given in Eq. 
(2.11). [Deriving Eq. (2.18) from Eq. (2.13), we took into account that in the plane wave basis the momentum shift operator has the matrix elements .] Because does not depend on the quasimomentum ,333This means that the operators are unitary equivalent – a fact, which can be directly concluded from the explicit form of this operator. we can set and shall drop this upper matrix index in what follows. Figure 2.1: Matrix of the Floquet-Bloch operator for with system parameters and : The absolute values of the elements are shown in a grey scale plot. With increasing indices the matrix tends to a diagonal one. For , the kinetic term of the Hamiltonian dominates the potential and the matrix tends to a diagonal one. This property is exemplified in Fig. 2.1, where we depict the Floquet-Bloch matrix for the potential . Suppose the effect of the off-diagonals elements can be neglected for . Then we have For the unscattered states the formulas (2.20) hold exactly for any and, given a energy or , the eigenvalue equation can be solved to yield the discrete version of the Airy function in the momentum representation: . With the help of the last equation we have which can be now substituted into the S-matrix definition (2.17). We proceed with the scattering states . Suppose we order the with indices increasing from bottom to top. Then we can decompose the vector into three parts, where contains the coefficients for , contains the coefficients for and contains all other coefficients for . The coefficients of recursively depend on the coefficient , via Analogously, the coefficients of recursively depend on , via Let us define the matrix as the matrix , truncated to the size . Furthermore, let be the matrix accomplished by zero column and row vectors: Then the resulting equation for can be written as where is a vector of the same length as , with the first element equal to one and all others equal to zero. For a given , Eq. (2.27) matches the asymptotes and by linking , via and Eq. (2.24), to and, via and Eq. (2.25), to . Let us now introduce the row vector with all elements equal to zero except the last one, which equals one. Multiplying with yields the last element of the latter one, i.e. . Assuming that is not an eigenvalue of the matrix (this case is treated in the next section) we can multiply Eq. (2.27) with the inverse of , which yields Finally, substituting Eq. (2.22) and Eq. (2.28) into Eq. (2.17), we obtain with a phase factor , which ensures the convergence of the limit . The derived Eq. (2.29) defines the scattering matrix of the Wannier-Stark system and is one of our basic equations. To conclude this section, we note that Eq. (2.29) also provides a direct method to calculate the so-called Wigner delay time As shown in Ref. [153], Thus, one can calculate the delay time from the norm of the , which is preferable to (2.30) from the numerical point of view, because it eliminates an estimation of the derivative. In the subsequent sections, we shall use the Wigner delay time to analyze the complex spectrum of the Wannier-Stark system. 2.3 Calculating the poles of the S-matrix Let us recall the S-matrix definitions for the Stark system, The S-Matrix is an analytic function of the (complex) energy, and we call its isolated poles located in the lower half of the complex plane, i.e. those which have an imaginary part less than zero, resonances. In terms of the asymptotes of the scattering states, resonances correspond to scattering states with purely outgoing asymptotes, i.e. with no incoming wave. 
(These are the so-called Siegert boundary conditions [168].) As one can see directly from (2.22), poles cannot arise from the contributions of the free solutions. In fact, decreases exponentially as a function of for complex energies . Therefore, poles can arise only from the scattering states . Actually, we already noted the condition for poles in the previous section. In the step from equation (2.27) to the S-matrix formula (2.29) we needed to invert the matrix . We therefore excluded the case when is an eigenvalue of . Let us treat it now. If is an eigenvalue of , the equation defining then reads The scattering state we get contains no incoming wave, i.e. it fulfills the Siegert boundary condition. In fact, the first element is equal to zero, which follows directly from the structure of , and consequently . In addition, the eigenvalues fulfill ,444This property follows directly from non-unitarity of : . which in terms of the energy means . Let us also note that, according to Eq. (2.25), the outgoing wave diverges exponentially as . Figure 2.2: The eigenvalues of the matrix calculated for system (2.1) with , and . The numerical parameters are , and . The eigenvalues corresponding to the first three Wannier-Stark ladders are marked by circles. On the right to the figure is the matlab source code which generates the depicted data. Equation (2.33) provides the basis for a numerical calculation of the Wannier-Stark resonances. A few words should be said about the numerical algorithm. The time evolution matrix (2.11) can be calculated by using plane wave basis states via where , and is the truncated matrix of the operator . Then, by adding zero elements, we obtain the matrix and calculate its eigenvalues . The resonance energies are given by . As an example, Fig 2.2 shows the eigenvalues in the polar representation for the system (2.1) with . Because of the numerical error (introduced by truncation procedure and round error) not all eigenvalues correspond to the S-matrix poles. The “true” can be distinguished from the “false” by varying the numerical parameters , and the quasimomentum (we recall that in the case of dc field is independent of ). The true are stable against variation of the parameters, but the false are not. In Fig 2.2, the stable are marked by circles and can be shown (see next section) to correspond to Wannier-Stark ladders originating from the first three Bloch bands. By increasing the accuracy, more true (corresponding to higher bands) can be detected. 2.4 Resonance eigenfunctions According to the results of preceding section, the resonance Bloch-like functions , referred to in what follows as the Wannier-Bloch functions, are given (in the momentum representation) by where are the elements of the eigenvector of Eq. (2.33) in the limit . The change of the notation indicates that from now on we deal with the resonance eigenfunctions corresponding to the discrete (complex) spectrum . The Wannier-Stark states , which are the resonance eigenfunction of the Wannier-Stark Hamiltonian , are calculated by using Eq. (2.14) and Eq. (2.15). In fact, according to Eq. (2.14), the quasimomentum of the Wannier-Bloch function changes linearly with time and explores the whole Brillouin zone during one Bloch period. Thus, one can obtain the Wannier-Stark states by calculating the eigenfunction of the evolution operator for, say, and propagating it over the Bloch period. (Additionally, the factor should be compensated.) 
We used the discrete version of the continuous evolution operator, given by (2.34) with the upper limit substituted by the actual number of timesteps. Resonance Wannier-Stark functions corresponding to two most stable resonances are shown in Fig. 2.3. Figure 2.3: Resonance wave functions of the two most stable resonances of system (2.1) with parameters and in momentum and in configuration space. The ground state is plotted as a dashed, the first excited state as a solid line. In the second figure the first excited state is shifted by one space period to enhance the visibility. Figure 2.4: Comparison of the wave functions calculated within the different approaches for and , shown on a linear (top) and on a logarithmic scale (bottom). The dotted line is the tight-binding, the dashed line the single-band and the solid line is the scattering result. The left panel in Fig. 2.3 shows the wave functions in the momentum representation, where the considered interval of is defined by the dimension of the matrix , i.e. . The (faster than exponential) decrease in the positive direction is clearly visible. The tail in the negative direction reflects the decay of resonances. Although it looks to be constant in the figure, its magnitude actually increases exponentially (linearly in the logarithmic scale of the figure) as . The wave functions in the coordinate representation (right panel) are obtained by a Fourier transform. Similar to the momentum space the resonance wave functions decrease in positive -direction and have a tail in the negative one. Obviously, a finite momentum basis implies a restriction to a domain in space, who’s size can be estimated from energy conservation as . Additionally the Fourier transformation introduces numerical errors due to which the wave functions decay only to some finite value in positive direction. We note, however, that for most practical purposes it is enough to know the Wannier-Stark states in the momentum representation. Now we discuss the normalization of the Wannier-Stark states. Indeed, because of the presence of the exponentially diverging tail, the wave functions or can not be normalized in the usual sense. This problem is easily resolved by noting that for the non-hermitian eigenfunctions (i.e. in the case considered here) the notion of scalar product is modified as where and are the left and right eigenfunctions, respectively. In Fig. 2.3 the right eigenfunctions are depicted. The left eigenfunctions can be calculated in the way described above, with the exception that one begins with the left eigenvalue equation for the row vector . In the momentum representation, the left function coincides with the right one, mirrored relative to . (Note that in coordinate space, the absolute values of both states are identical.) In other words, it corresponds to a scattering state with zero amplitude of the outgoing wave. Since for the right wave function a decay in the positive -direction is faster than the increase of the left eigenfunction (being inverted, the same is valid in the negative -direction), the scalar product of the left and right eigenfunctions is finite. In our numerical calculation we typically calculate both functions in the momentum representation and then normalize them according to (Here and below we use the Dirac notation for the left and right wave functions.) Let us also recall the relations for the wave functions in the coordinate representation and in the momentum space. Thus it is enough to normalize the function for . 
Then the normalization of the other functions for will hold automatically. For the purpose of future reference we also display a general (not restricted to the first energy Brillouin zone) relation between the Wannier-Bloch and Wannier-Stark states (compare with Eq. (1.8). It is interesting to compare the resonance Wannier-Stark states with those predicted by the tight-binding and single-band models. Such a comparison is given in Fig. 2.4, where the ground Wannier-Stark state for the potential is depicted for three different values of the static force . As expected, for small , where the resonance is long-lived, both approximations yield a good correspondence with the exact calculation. (In the limit of very small the single-band model typically gives a better approximation than the tight-binding model.) In the unstable case, where the resonance state has a visible tail due to the decay, the results differ in the negative direction. On logarithmic scale one can see that the order of magnitude up to which the results coincide is given by the decay tail of the resonances. In the positive -direction the resonance wave functions tend to be stronger localized. It should be noted that in Fig. 2.4 we considered the ground Wannier-Stark states only for moderate values of the static force . For larger , because of the exponential divergence, the comparison of the resonance Wannier-Stark states with the localized states of the single-band model loses its sense. The same is also true for higher () states. Moreover, the value of , below which the comparison is possible, rapidly decreases with increase of band index . Chapter 3 Interaction of Wannier-Stark ladders In this chapter we give a complete description of the dependence of the width of the Wannier-Stark resonances on the parameters of the Wannier-Stark Hamiltonian. In scaled units, the Hamiltonian has two independent parameters, the scaled Planck constant and the field strength . In our analysis we fix the value of and investigate the width as a function of the field strength. The calculated lifetimes are compared with the experimentally measured lifetimes of the Wannier-Stark states. 3.1 Resonant tunneling To get a first glimpse on the subject, we calculate the resonances for the Hamiltonian (2.1) with for . For the chosen periodic potential the field-free Hamiltonian has two bands with energies well below the potential barrier. For the third band, the energy can be larger than the potential height. Therefore, with the field switched on, one expects two long-lived resonance states in each potential well, which are related to the first two bands. Figure 3.1: a) Resonance width of the most stable resonances as a function of the inverse field strength . b) Energies of the most stable resonances as a function of (solid line: most stable resonance, dashed line: first excited resonance, dashed dotted line: second excited resonance). Parameters are and . Figure 3.1(a) shows the calculated widths of the six most stable resonances as a function of the inverse field strength . The two most stable resonances are clearly separated from the other ones. The second excited resonance can still be distinguished from the others, the lifetime of which is similar. Looking at the lifetime of the most stable state, the most striking phenomenon is the existence of very sharp resonance-like structures, where within a small range of the lifetime can decrease up to six orders of magnitude. In Fig. 
3.1(b), we additionally depict the energies of the three most stable resonances as a function of the inverse field strength. As the Wannier-Stark resonances are arranged in a ladder with spacing , we show only the first energy Brillouin zone . Let us note that the mean slope of the lines in Fig. 3.1(b) defines the absolute position of the Wannier-Stark resonances in the limit . As follows from the single-band model, these absolute positions can be approximated by the mean energies of the Bloch bands. Depending on the value of , we can identify a particular Wannier-Stark resonance either as an under- or an above-barrier resonance. This classification holds only in the limit ; in the opposite limit all resonances are obviously above-barrier resonances. Comparing Fig. 3.1(b) with Fig. 3.1(a), we observe that the decrease in lifetime coincides with crossings of the energies of the Wannier-Stark resonances. All three possible crossings manifest themselves in the lifetime: crossings of the two most stable resonances coincide with the sharpest peaks in the ground state width; the smaller peaks can be found at crossings of the ground state and the second excited state; finally, crossings of the first and the second excited state match the peaks in the width of the first excited state. Figure 3.2: Wannier-Stark resonances in different minima of the potential : The most stable resonance and some members of the first excited Wannier-Stark ladder are shown. The parameters are and . The explanation of this effect is the following: Suppose we have a set of resonances which localize in one of the -periodic minima of the potential . Let be the energy difference between two of these states. Now, due to the periodicity of the cosine, each resonance is a member of a Wannier-Stark ladder of resonances, i.e. of a set of resonances with the same width, but with energies separated by . Figure 3.2 shows an example: the two most stable resonances for one potential minimum are depicted, as well as two other members of the Wannier-Stark ladder of the first excited resonance. To decay, the ground state has to tunnel through three barriers. Clearly, if there is a resonance with nearly the same energy in one of the adjacent minima, this will enhance the decay due to the phenomenon of resonant tunneling. The strongest effect is obtained for degenerate energies, i.e. for , which can be achieved by properly adjusting , because the splitting is nearly independent of the field strength. For the case shown in Fig. 3.2, such a degeneracy will occur, e.g., for a slightly smaller value (see Fig. 3.1). Then we have two resonances with the same energies, which are separated by two potential barriers. In the next section we formalize this intuitive picture by introducing a simple two-ladder model. 3.2 Two interacting Wannier-Stark ladders It is well known that the interaction between two resonances can be well modeled by a two-state system [34, 169, 170, 171]. In this approach the problem reduces to the diagonalization of a matrix, where the diagonal matrix elements correspond to the non-interacting resonances. In our case, however, we have ladders of resonances. This fact can be properly taken into account by introducing the diagonal matrix in the form [155, 160] It is easy to see that the eigenvalues of correspond to the relative energies of the Wannier-Stark levels and, thus, the matrix models two crossing ladders of resonances. The resonance energies in Eq.
(3.1) actually depend on but, considering a narrow interval of , this dependence can be neglected. Multiplying the matrix by the matrix we introduce an interaction between the ladders. The matrix can be diagonalized analytically, which yields Based on Eq. (3.3) we distinguish the cases of weak, moderate or strong ladder interaction. Figure 3.3: Illustration to the two-ladder model. Parameters are , , and (left column), (center), and (right column). Upper panels show the energies , lower panels the widths . The value obviously corresponds to non-interacting ladders. By choosing but we model the case of weakly interacting ladders. In this case the ladders show true crossing of the real parts and “anticrossing” of the imaginary parts. Thus the interaction affects only the stability of the ladders. Indeed, for Eq. (3.3) takes the form It follows from the last equation that at the points of crossing (where the phases of and coincide) the more stable ladder (let it be the ladder with index 0, i.e. or ) is destabilized () and, vice versa, the less stable ladder becomes more stable (). The case of weakly interacting ladders is illustrated by the left column in Fig. 3.3. By increasing above , the case of moderate interaction, where the true crossing of the real parts is substituted by an anticrossing, is met. As a consequence, the interacting Wannier-Stark ladders exchange their stability index at the point of the avoided crossing (see center column in Fig. 3.3). The maximally possible interaction is achieved by choosing . Then the eigenvalues of the matrix are
Monday, March 8, 2010 Rogue Waves Today I ran across an interesting essay on our changing understanding of scurvy. As often happens when you learn history better, the simple narratives turn out to be wrong. And you get strange things where as science progressed it discovered a good cure for scurvy, they lost the cure, they proved that their understanding was wrong, then wound up unable to provide any protection from the disease, and only accidentally eventually learned the real cause. The question was asked about how much else science has wrong. This will be a shorter version of a cautionary tale about science getting things wrong. I thought of it because of a a hilarious comedy routine I saw today. (If you should stop reading here, do yourself a favor and watch that for 2 minutes. I guarantee laughter.) That is based on a major 1991 oil spill. There is no proof, but one possibility for the cause of that accident was a rogue wave. (Rogue waves are also called freak waves.) If so then, comedy notwithstanding, the ship owners could in no way be blamed for the ship falling apart. Because the best science of the day said that such waves were impossible. Here is some background on that. The details of ocean waves are very complex. However if you look at the ratio between the height of waves and the average height of waves around it you get something very close to a Rayleigh distribution, which is what would be predicted based on a Gaussian random model. And indeed if you were patient enough to sit somewhere in the ocean and record waves for a month, the odds are good that you'd find a nice fit with theory. There was a lot of evidence in support of this theory. It was accepted science. There were stories of bigger waves. Much bigger waves. There were strange disasters. But science discounted them all until New Years Day, 1995. That is when the Draupner platform recorded a wave that should only happen once in 10,000 years. Then in case there was any doubt that something odd was going on, later that year the RMS Queen Elizabeth II encountered another "impossible" wave. Remember what I said about a month of data providing a good fit to theory? Well Julian Wolfram carried out the same experiment for 4 years. He found that the model fit observations for all but 24 waves. About once every other month there was a wave that was bigger than theory predicted. A lot bigger. If you got one that was 3x the sea height in a 5 foot sea, that was weird but not a problem. If it happened in a 30 foot sea, you had a monster previously thought to be impossible. One that would hit with many times the force that any ship was built to withstand. A wall of water that could easily sink ships. Once the possibility was discovered, it was not hard to look through records of shipwrecks and damage to see that it had happened. When this was done it was quickly discovered that huge waves appeared to be much more common in areas where wind and wave travel opposite to an ocean current. This data had been littering insurance records and ship yards for decades. But until scientists saw direct proof that such large waves existed, it was discounted. Unfortunately there were soon reports such as The Bremen and the Caledonian Star of rogue waves that didn't fit this simple theory. Then satellite observations of the open ocean over 3 weeks found about a dozen deadly giants in the open ocean. There was proof that rogue waves could happen anywhere. Now the question of how rogue waves can form is an active research topic. 
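Before moving on to how such waves form, it is worth making the "Gaussian random model" statistics concrete. Here is a small illustrative calculation in Python (the 10-second average wave period and the height ratios are my own choices, not data from Wolfram or the Draupner record): under the Rayleigh law the chance of a wave exceeding a given multiple of the significant wave height falls off so fast that a 3x wave is expected at a fixed point only about once in a couple of decades, which is why such waves were written off as practically impossible.

import numpy as np

# Rayleigh exceedance law for individual wave heights H in a linear Gaussian sea,
# with Hs the significant wave height: P(H > r * Hs) = exp(-2 * r**2).
def p_exceed(r):
    return np.exp(-2.0 * r * r)

# Illustrative numbers only: one measurement point, ~10 s average wave period.
waves_per_month = 30 * 24 * 3600 / 10.0

for r in (1.0, 2.0, 2.2, 3.0):
    p = p_exceed(r)
    print(f"H > {r:.1f} Hs:  P = {p:.2e},  expected per month = {p * waves_per_month:.4f}")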
Multiple possibilities are known, ranging from reflections and wave focusing to the Nonlinear Schrödinger equation. While we know a lot more about them, we know we don't know the whole story. But now we know that we must design ships to handle this. This leads to the question of how bad a 90 foot rogue wave is. Well, it turns out that typical storm waves exert about 6 tonnes of pressure per square meter. Ships were designed to handle 15 tonnes of pressure per square meter without damage, and perhaps twice that with denting, etc. But due to their size and shape, rogue waves can hit with about 100 tonnes of pressure per square meter. Are you surprised that a major oil tanker could see its front fall off? If you want to see what one looks like, see this video.
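As a quick sanity check on the pressure figures above (a back-of-the-envelope sketch: the only physics is 1 tonne-force per square meter ≈ 9.81 kPa, and "twice that with denting" is read as 30 tonnes per square meter):

# Back-of-the-envelope conversion of the pressures quoted above
# (1 tonne-force per square metre = 1000 kg * 9.81 m/s^2 over 1 m^2 = 9.81 kPa).
TONNE_PER_M2_IN_KPA = 9.81

loads_tonnes_per_m2 = {
    "typical storm wave":          6,
    "design limit, no damage":     15,
    "design limit, denting (2x)":  30,
    "rogue wave":                  100,
}
for name, t in loads_tonnes_per_m2.items():
    print(f"{name:28s} {t:4d} t/m^2  =  {t * TONNE_PER_M2_IN_KPA:7.1f} kPa")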
Monday, January 14, 2013

Anomalies, anomalies,...

Many interesting signals about new physics have emerged during the last week, and all of them could be signatures of the new physics predicted by TGD.

Cosmological principle questioned

One of the many hypes of the last year was that the cosmological principle has been validated above some length scale. In other words, beyond a certain length scale the universe would appear homogeneous and isotropic, as the cosmological principle assumes. From Wikipedia one learns that the scale is about 4 billion light years. At that time I commented on the announcement only in the comment section of some posting. Unfortunately, I do not remember the posting and could not find the appropriate link on the web. In the era of hype, however, situations change very rapidly. Now we learn that the cosmological principle is under severe threat: see this. A structure consisting of quasars, with a gigantic size of 4 billion light years, has been discovered. What says TGD? The notion of many-sheeted space-time means a revolution in cosmology based on TGD. In TGD the cosmological principle is replaced by its fractal variant, meaning a Russian doll cosmology. At large enough scales space-time sheets are approximately Lorentz invariant (cosmological principle) and can be modelled by Robertson-Walker cosmologies. This is of course an approximation using some length scale resolution. Furthermore, R-W cosmologies are vacuum extremals of Kähler action and as such non-physical except as models giving the average energy momentum tensor via Einstein's equations. Einstein-Maxwell equations hold true for preferred extremals in all length scales - albeit with G and Λ coming as predictions rather than inputs.

Astrophysics and magnetic ropes

Magnetic flux ropes have been discovered in the atmospheres of various planets, including Earth. Now they have been discovered also around Venus. They carry superheated plasma gas from one side of the rope to the other. Earlier I wrote about magnetic ropes on much longer scales: see Giant dark matter bridge between galaxy clusters discovered. Magnetic flux tubes carrying dark matter would be in question. Magnetic flux tubes in various scales define a basic prediction of TGD, and they would have resulted from a gradual thickening of "cosmic" strings predicted to dominate the primordial TGD inspired cosmology. These primordial cosmic strings have a strictly 2-D Minkowski space projection. They would be what string model builders should be more than happy about, but unfortunately they have nothing to do with superstrings.

Looming dark matter announcements

Lubos Motl has a posting summarizing several anomalous findings. A few years ago the so-called musket-ball galaxy cluster was discovered, and the newest analysis of the data has yielded a surprise. Two colliding galaxy clusters are in question. Scientists believe that the visible stars in these galaxies make up only about 2 percent of the total mass in the cluster. About 12 percent of the mass is found in hot gas, which shines in X-ray wavelengths, while the remaining roughly 86 percent is made of invisible dark matter. Because the galaxies make up so little of the mass of the system and the spaces between them are so large, they don't really do much of the crashing. Odds are that they will simply sail by one another as the clusters merge. It's mostly the gas that collides, causing it to slow down and fall behind the galaxies as trails. The same is expected in the case of dark matter, which should have only gravitational interactions.
Astronomers were also able to make a map of dark matter in the musket-ball galaxy cluster, using the bending of light in the field of the galaxy cluster as a diagnostic tool. The surprise, however, was that more precise measurements suggest that the dark matter does not behave as it should! The behaviour now seems also to involve aspects similar to that of the gas phase, which is due to the short range forces - basically electromagnetic. Needless to say, this is in direct conflict with the dominating dark matter paradigm. Does this mean that the dark matter has also other than gravitational interactions with itself?! The TGD based view of dark matter differs from the standard one. There is an entire hierarchy of dark matter phases corresponding to a hierarchy of effective values of Planck constant. Different levels of the hierarchy correspond to different space-time sheets, so that a Feynman diagram at a given space-time sheet can contain only particles with the same value of effective Planck constant. Therefore dark matter particles in the TGD sense can have the same mutual interactions as ordinary matter, and the particle quantum number spectrum can be the same. A long cosmic string containing galaxies along it like pearls in a necklace is the TGD based explanation for galactic dark matter manifesting itself as the constant velocity spectrum of distant stars. This spectrum follows automatically from the 1-D character of the distribution of the magnetic energy of flux tubes, identified as dark energy and serving as a source of the gravitational field. Also dark matter in the above sense is expected to be present. There are certainly also non-gravitational interactions between the long magnetic strings associated with colliding galactic clusters, occurring via Kähler magnetic fields.

Dark energy alternatives to Einstein are running out of room

It is known that the expansion of the universe is accelerating. The cosmological constant appearing in Einstein's equations as a fundamental constant is a straightforward formal explanation for the accelerated expansion. One can also explain the cosmological constant in terms of dark energy - or more precisely, in terms of the energy momentum tensor assignable to dark vacuum energy proportional to the metric. This dark energy is often called quintessence. The vacuum expectation values of scalar fields determining the cosmological constant gradually change, and so does the cosmological constant. Remarkably, also the proton/electron mass ratio depends on the vacuum expectation values. It has now become possible to perform accurate enough measurements of the proton/electron mass ratio, and the recent analysis of the data shows that the ratio has not changed at all, to one part in ten million, since the time when the universe was about half its current age, around 7 billion years ago. A huge variety of models for dark energy are excluded, and the situation for the inflationary scenario is becoming rather gloomy. What says TGD? In the TGD framework p-adic mass calculations in principle predict the proton/electron mass ratio, and there are no rolling scalar fields responsible for inflation: there is simply no need for inflation in the TGD Universe, since quantum criticality explains at a general level the flatness of 3-space. Dark energy corresponds to the magnetic energy of magnetic flux tube structures which originate from the primordial cosmology, and magnetic tension gives rise to the "negative pressure" responsible for accelerated expansion. As a matter of fact, TGD provides several descriptions of the accelerated expansion assignable to different scales of description.
Einstein's equations with a cosmological term are satisfied by all preferred extremals, but G and Λ are now predictions rather than input parameters and depend in principle on the space-time sheet. These equations could be called microscopic (the original interpretation was the diametrical opposite of this!). Critical cosmology has an imbedding as a Robertson-Walker cosmology which is unique apart from its duration, which is finite and corresponds to accelerated expansion with negative "pressure". Also overcritical cosmologies have finite duration.

D0 of the Tevatron reports a potential particle physics anomaly: new evidence for M89 hadron physics?

D0 of the Tevatron reports a potential new physics anomaly. Below is the abstract of their preprint titled Measurement of the ratio of differential cross sections σ(ppbar → Z + b jet)/σ(ppbar → Z + jet) in ppbar collisions at sqrt(s) = 1.96 TeV. We measure the ratio of cross sections, σ(ppbar → Z + b jet)/σ(ppbar → Z + jet), for associated production of a Z boson with at least one jet. The ratio is also measured as a function of the jet transverse momentum, jet pseudorapidity, Z boson transverse momentum, and the azimuthal angle between the Z boson and the closest jet for events with at least one b jet. These measurements use data collected by the D0 experiment in Run II of Fermilab's Tevatron ppbar Collider at a center-of-mass energy of 1.96 TeV, and correspond to an integrated luminosity of 9.7 fb^-1. The results are compared to predictions from next-to-leading order calculations and various Monte Carlo event generators. The group reports that they have not been able to build an overall fit for the ratio of differential cross sections with respect to all variables in the entire region studied by using the Monte Carlo programs available. Also the Higgs contributes to the ratio studied, via the decays H → bbar following associated production of Z and H. If the Higgs behaves as the standard model Higgs, the experiment can be seen as a test of perturbative QCD, since apart from Z emission the Feynman graphs involve only strong interaction vertices. Therefore the claimed anomaly could be seen as a further indication for M89 hadron physics in the TGD framework. Lubos in turn hopes that the anomaly could be seen as evidence for a new Higgs-like state predicted by N = 1 SUSY in some form. The Feynman graphs at the second page of W/Z + b jets: discussion of possible improvements and planned/ongoing activities represent the leading QCD contributions for the process ppbar → W + b jet. By replacing W with Z one has the recent situation. In the first graph quark q and antiquark qbar annihilate to Z and a gluon g, which decays to bbar. In the second graph the incoming quark q emits a Z and the incoming gluon g decays to bbar. After that b and q exchange a gluon. Suppose that the decay of M89 color magnetic flux tubes representing low energy M89 mesons explains the production of correlated charged particle pairs moving in the same or opposite directions. The same model predicts that M89 gluons and quarks move along the flux tube: effectively one has QCD in 2-D Minkowski space if one considers only gluon exchanges parallel to the flux tube. The exchanged gluons could however also be transversal to the flux tube if they have large enough transversal momentum. 1.
For instance, the 2-D QCD variant of the first diagram could correspond to q and qbar moving in opposite directions along the flux tube, with qbar emitting a parallel Z and recoiling in the opposite direction, and then annihilating with q to a gluon decaying to bbar. 2. The 2-D QCD variant of the second diagram would correspond to g and q moving in opposite directions. The decays g → bbar and q → q + Z take place, and a gluon moving parallel to the flux tube is exchanged between b and q. These diagrams would represent M89 contributions to the studied process and might explain the claimed discrepancy. At 11:27 PM, Anonymous ◘Fractality◘ said... Salvinorin A is a diterpene compound that yields many "anomalies" in consciousness study. Well regards, At 7:37 AM, Blogger 11 said... About the huge quasar group, I think you are right about the cosmological principle problem. It looks like the start of two or three Lyman alpha systems. At 12:17 PM, Blogger Ulla said... At 1:24 PM, Blogger Ulla said... At 2:46 AM, Anonymous Matti Pitkanen said... Comment to Ulla: Verlinde's idea has already been tested (neutron diffraction in Earth's gravitational field) and failed in the test. Besides this the idea is extremely primitive and confused. I am really astonished that a person working with refined string mathematics can suffer such a regression. Probably Verlinde is a victim of his own fame: if he had had the patience to wait for a few months before publishing the first preprint, he could have decided to abstain from publishing it at all. That this kind of idea can receive financial support at all - not to say anything about millions of euros - demonstrates the fatal consequences of name worship in theoretical physics. We are living in very depressing times in theoretical physics. The field is badly in need of young brave intellectuals but continues to be dominated by old farts and their courts. The posting of Lubos is a good example of the ultraconservative attitude that quantum theory is the final theory and every problem worth solving has been solved by the superstring approach. In particular, consciousness is a pseudo problem which does not deserve scientific study because existing quantum theory says nothing about it. We must be happy with superstrings unless we want to be labelled as idiots. At 4:54 AM, Blogger Ulla said... At 8:59 AM, Blogger Ulla said... Within galaxies, there is a competition of sorts for the available gas; for either the formation of new stars or feeding the central black hole. For more than a decade the leading models and theories have assigned a fixed fraction of the gas to each process, effectively preserving the ratio of black hole mass to galaxy mass. New research to be published in The Astrophysical Journal reveals that this approach needs to be changed. "We now know that each ten-fold increase of a galaxy's stellar mass is associated with a much larger 100-fold increase in its black hole mass," Professor Graham said. At 2:38 PM, Anonymous Santeri Satama said... Matti, hopefully this cheers you up and gives some faith in open minded scientific study of consciousness, in cooperation with other traditions: (Seminar still going on and lot to see...) At 9:57 PM, Anonymous Matti Pitkanen said... Thank you. Cheering is indeed needed. I again have problems with my homepage. The worrying signals began already a couple of months ago. I repeatedly got a message that I should pay my yearly net ID and webhotel rent. Strangely, no bank account number was given.
Eventually I got a message containing this data bit and I payed the bills: of course so: homepage contains my life work and I do not want to take any risks of losing it. I wrote to the email address and told payment would be there in due date 12.1. 2013. I also checked that this was the case. I however got no response to any of my messages nor had got any response to the earlier messages. It began to become clear that the young fellow has automatized everything. He has bought few computers to his webhotel and just collects the money without providing absolutely any services: the sales department and technical help in web are just scenery. No responses to emails and no phone number so that the customer is totally helpless and completely at the mercy of this fellow. Understandably my fears began to grow. Yesterday my worst worries turned out to be true. It became impossible to send anything to my homepage and the homepage itself was replaced with idiotic advertisements. You can see yourself: . I really do not know what to do: the board for consumer rights answers to enquiries with a lapse of one year as it did last time when similar company told suddenly that it does not continue its services. This organization is totally toothless against these young business psychopaths because it does not get the needed resources: this elimination of consumer rights is is of course a fully conscious political choice by the right wing. They want unrestricted market economy and have got it. At 6:03 AM, Anonymous Santeri Satama said... So sorry for your trouble. To give you something else to think, His Holiness asked during the day two morning session, could dark matter have existed before or independent of Big Bang. How would you answer that question? At 6:31 AM, Anonymous Matti Pitkanen said... This is the standard new year's carastrophe. Either massive virus attack or necessity to transfer the material to a new page followed by disappearance from web and one month of hard work. This world clearly does not want TGD. Also social office gave its own new year gift: no support for this month. Even worse: they snipped more than one half 8325 euros) from my unemployment money without giving any reason for this. As results I have roughly minus 100 euros to buy food this month. To answer the question posed by his Holiness, one must first answer what Big Bang means. In TGD one express the situation by saying that Big Bang is replaced with a silent whisper amplified to a rather big bang. Means that mass per comoving volume vanishes near the boundary of light-cone defining what we call big bang usually. Also hierarchy of cosmologies within cosmologies is predicted this means Russian doll structure. Before Big Bang means bigger Big Bang containing the smaller one as topologically condesed space-time sheet (or actually pair of them glued together along boundaries: flattened ball instead of disk). Dark matter is present everywhere in 4-D sense. One must of course be careful with what one means with dark matter. Most of so called galactic dark matter could be Kahler magnetic energy associated with magnetic flux tubes originating from primordial cosmic strings - TGD counterpart for dark energy. This magnetic energy associated with flux tubes would create gravitational fields giving rise to constant velocity spectrum of stars. What is beautiful that the velocity spectrum comes automatically correctly and their is no need for elaborate fits. 
Basic prediction is free motion of galaxies along long string along which they are organized. Dark matter in TGD sense would be these large Planck constant phases and something very different from standard dark matter candidates: one big difference is that dark matter has standard electroweak and strong interactions with itself. Ordinary and large hbar particles do not however appear in same vertex and my belief is that this is enough for experimental purposes. Recently it has been indeed observed that contrary to the standard beliefs dark matter has self interactions as I told in the posting. Evidence for TGD view is slowly but steadily accumulating. Therefore it would be wonderful if communication with academic colleagues were possible. The problem of recent day physics is that only names and power matter instead of content. At 6:34 AM, Anonymous Matti Pitkanen said... Sorry, I exaggarated a little bit my monthly unemployment money as I mentioned the action of social office on my unemployment money: 325 euros or more than one half. Not 8325 euros!! By the way, I just heard from radio that with the income of 100 richest people of the world poverty could be removed from world. At 8:17 AM, Anonymous Santeri Satama said... To clarify, are the Russian Doll Dosmologies characterized by different values of hbar, larger inclusive cosmologies with larger values than the smaller smaller topologically condensed space-time sheet pairs? Or is each level of cosmology characterized by some spectrum of hbar? Or is this question total misunderstanding of your hypothesis? At 10:54 AM, Blogger Ulla said... But you have the backup copies in order? This actually happened to my friend too, suddenly the homepage was sold and blanked. At 3:35 AM, Anonymous Matti Pitkanen said... To Santeri: There are three hierarchies involved. Hierarchy of space-time sheets labelled by selected p-adic primes charactering their size scales. At imbedding space level the hierarchy of CD with size scale defined by what I call secondary p-adic length scale. And dark matter hierarchy with levels labelled by positive integers defining the multiple of effective Planck constant. All are present and cosmology in given scale would involve all these three hierarchy levels. Those associated with CD and space-time sheets with ordinary hbar being our cosmology (space-time sheets and its field body). Non-standard values of effective Planck constant would correspond to dark worlds to us. Do not ask about further details;-). To Ulla: The situation was resolved. I threated with court-room and the homepage returned back within few hours. I do not know whether this was the real reason. It is quite possible that this fellow has adopted the principle avoiding all contacts with clients and has computerized everything. He must feel himself lonely in the world in which the purpose of the rest of humankind is to feed money to his bank account. I have done a lot of automatization in the handling of large numbers of files. Saves work enormously but the unavoidable outcome is a big blunder now and then. Human factor. At 8:33 AM, Anonymous Santeri Satama said... Matti: "Do not ask about further details;-)." Why, isn't that where the devil lurks? ;) The presentations with Dalai Lama have been fun and interesting. Someone said that QM is very simple, just Schrödinger equation and that's it, no need to presuppose or demand any world view behind it (as any such attempt leads to paradoxes). 
And it also seems that the Galilean view of time as an isochronic pendulum is also already contained in that wave function. Wiki gave a nice Feynman quote: "Where did we get that (equation) from? Nowhere. It is not possible to derive it from anything you know. It came out of the mind of Schrödinger." Speaking of Wiki, if you ever consider rewriting or reformatting your texts, a single hypertext file (e.g. Wiki format) would be in many ways preferable to several PDFs. At 9:34 PM, Anonymous Matti Pitkanen said... You are asking too much! I simply cannot give details! This is like asking Columbus to give a detailed satellite map of the new continent or whatever he thought it to be! Filling them would be a collective effort of theorists and experimentalists. Consider only the understanding of a single scalar particle: the Higgs. Still the situation is experimentally and theoretically half open! Despite enormous theoretical and experimental work done for decades. At 9:43 PM, Anonymous Matti Pitkanen said... To Santeri: Thank you for nice questions. Wave mechanics is very simple if you forget the problems of quantum measurement theory, as a good Copenhagenist is taught to do. Lubos has been an excellent pupil: maybe his childhood background in an authoritative political system has made him so good a student;-) who has now become a Prussian Schulmeister for new generations;-). This belief has led to a stagnation of theoretical physics that has lasted almost a century. I do not agree with Feynman about the Schrödinger equation. Times have changed since Feynman mystified the Schrödinger equation. *Already Feynman knew that the Schrödinger equation can be derived from the Dirac equation in the non-relativistic limit, and this in turn follows naturally from QFT, where the spinor structure has a geometric interpretation: maybe the geometric aspect was not too familiar to Feynman, who wanted also to see gravity in terms of interactions of matter and spin two particles in Minkowski space rather than as curvature of space-time. *When one replaces point-like particles with 3-surfaces - about this Feynman knew nothing. With all respect, the same is regrettably true about most of my colleagues in their ivory towers;-) - one is forced to construct the Kähler geometry and spinor structure of the "world of classical worlds" (WCW). *The spinor structure for WCW in turn requires second quantized spinor fields - a purely geometric structure - at the space-time level. This means a geometrization of Fermi statistics and fermions: not possible in standard QFT. Suddenly the mysteries disappear: what resolves them is the realization that particles are not mathematical points. *By the infinite-dimensionality of WCW the basic rules of quantum theory are unique (probabilities as squares of inner products bilinear in quantum states). They cannot have any other than the standard form, because more complex expressions are mathematically ill-defined. This is the magic of infinite-D geometry, where a wrong mathematical form automatically leads to the appearance of infinities: the functional integral over WCW simply ceases to exist. Infinite-D calculus is an enormously restrictive discipline as compared to its finite-D counterpart. This is the reason why it took so long to find the first candidates for QFTs free of infinities. At 9:50 PM, Anonymous Matti Pitkanen said... To Santeri: The core of the above arguments was that the basic formalism of QFT is forced by the infinite-dimensionality of WCW. This is however only one half of the story. Quantum measurement theory and its mysteries remain.
The Copenhagen interpretation would mean giving up the idea that WCW spinor fields represent something real. Taking into account the beauty and elegance of this notion, this would be simply idiotic. In zero energy ontology WCW spinor fields correspond to zero energy states and are analogs of physical events in the usual positive energy ontology - something very real and geometric. No mysticism involved. The identification of the quantum jump as a moment of consciousness is what removes the paradoxes revolving around the notion of state function reduction. The new ontological element is that a single objective reality, identified as the quantum state of the universe, is replaced with an infinite number of possible and in principle reachable realities - zero energy states. Evolution as quantum jumps - moments of recreation - replacing a zero energy state with a new one, emerges automatically, and connections to biology and consciousness are obvious. One can also resolve the paradoxes produced by the wrong identification of subjective time with geometric time, and a radically new view about time ("times" to be precise) emerges. To me this option is much more attractive than sticking to a century-old dogma, giving up completely the idea of physical reality (realities in the TGD framework) and taking physical theory as a mere collection of rules. At 1:01 AM, Blogger Ulla said... Look here, maybe the reason why TGD is an interesting domain?
sabato 9 novembre 2013 Quantum Mechanics: Collapse Theories. Quantum mechanics, with its revolutionary implications, has posed innumerable problems to philosophers of science. In particular, it has suggested reconsidering basic concepts such as the existence of a world that is, at least to some extent, independent of the observer, the possibility of getting reliable and objective knowledge about it, and the possibility of taking (under appropriate circumstances) certain properties to be objectively possessed by physical systems. It has also raised many others questions which are well known to those involved in the debate on the interpretation of this pillar of modern science. One can argue that most of the problems are not only due to the intrinsic revolutionary nature of the phenomena which have led to the development of the theory. They are also related to the fact that, in its standard formulation and interpretation, quantum mechanics is a theory which is excellent (in fact it has met with a success unprecedented in the history of science) in telling us everything about what we observe, but it meets with serious difficulties in telling us what is. We are making here specific reference to the central problem of the theory, usually referred to as the measurement problem, or, with a more appropriate term, as the macro-objectification problem. It is just one of the many attempts to overcome the difficulties posed by this problem that has led to the development of Collapse Theories, i.e., to the Dynamical Reduction Program (DRP). As we shall see, this approach consists in accepting that the dynamical equation of the standard theory should be modified by the addition of stochastic and nonlinear terms. The nice fact is that the resulting theory is capable, on the basis of a single dynamics which is assumed to govern all natural processes, to account at the same time for all well-established facts about microscopic systems as described by the standard theory as well as for the so-called postulate of wave packet reduction (WPR). As is well known, such a postulate is assumed in the standard scheme just in order to guarantee that measurements have outcomes but, as we shall discuss below, it meets with insurmountable difficulties if one takes the measurement itself to be a process governed by the linear laws of the theory. Finally, the collapse theories account in a completely satisfactory way for the classical behavior of macroscopic systems. Two specifications are necessary in order to make clear from the beginning what are the limitations and the merits of the program. The only satisfactory explicit models of this type (which are essentially variations and refinements of the one proposed in the references Ghirardi, Rimini, and Weber (1985, 1986), and usually referred to as the GRW theory) are phenomenological attempts to solve a foundational problem. At present, they involve phenomenological parameters which, if the theory is taken seriously, acquire the status of new constants of nature. Moreover, the problem of building satisfactory relativistic generalizations of these models has encountered serious mathematical difficulties due to the appearance of intractable divergences. Only very recently, some important steps we will discuss in what follows have led to the first satisfactory formulations of genuinely relativistically invariant theories inducing reductions. 
More important, the debate raised by these attempts and by claims that the desired generalization is impossible to achieve have elucidated some crucial points and have made clear that there is no reason of principle preventing to reach this goal. In spite of their phenomenological character, we think that Collapse Theories have a remarkable relevance, since they have made clear that there are new ways to overcome the difficulties of the formalism, to close the circle in the precise sense defined by Abner Shimony (1989), ways which until a few years ago were considered impracticable, and which, on the contrary, have been shown to be perfectly viable. Moreover, they have allowed a clear identification of the formal features which should characterize any unified theory of micro and macro processes. Last but not least, Collapse theories qualify themselves as rival theories of quantum mechanics and one can easily identify some of their physical implications which, in principle, would allow crucial tests discriminating between the two. This possibility, for the moment, seems to require experiments which go beyond the present technological possibilities. However two aspects of the problem have to be taken into account: due to the remarkable improvements in dealing with mesoscopic systems a crucial test of GRW might become feasible, and the model suggests the kind of physical processes in which a violation of the linear nature of the formalism might occur. Accordingly, even though the experimental investigations might very well turn out not to confirm the proposed new dynamical features of natural processes, they might lead, in the end, to extremely relevant discoveries. 1. General Considerations As stated already, a very natural question which all scientists who are concerned about the meaning and the value of science have to face, is whether one can develop a coherent worldview that can accommodate our knowledge concerning natural phenomena as it is embodied in our best theories. Such a program meets serious difficulties with quantum mechanics, essentially because of two formal aspects of the theory which are common to all of its versions, from the original nonrelativistic formulations of the 1920s, to the quantum field theories of recent years: the linear nature of the state space and of the evolution equation, i.e., the validity of the superposition principle and the related phenomenon of entanglement, which, in Schrödinger's words: is not one but the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought (Schrödinger, 1935, p. 807). These two formal features have embarrassing consequences, since they imply • objective chance in natural processes, i.e., the nonepistemic nature of quantum probabilities; • objective indefiniteness of physical properties both at the micro and macro level; • objective entanglement between spatially separated and non-interacting constituents of a composite system, entailing a sort of holism and a precise kind of nonlocality. For the sake of generality, we shall first of all present a very concise sketch of ‘the rules of the game’. 2. The Formalism: A Concise Sketch Let us recall the axiomatic structure of quantum theory: 1. States of physical systems are associated with normalized vectors in a Hilbert space, a complex, infinite-dimensional, complete and separable linear vector space equipped with a scalar product. 
Linearity implies that the superposition principle holds: if |f> is a state and |g> is a state, then (for a and b arbitrary complex numbers) also |K> = a|f> + b|g> is a state. Moreover, the state evolution is linear, i.e., it preserves superpositions: if |f,t> and |g,t> are the states obtained by evolving the states |f,0> and |g,0>, respectively, from the initial time t=0 to the time t, then a|f,t> + b|g,t> is the state obtained by the evolution of a|f,0> + b|g,0>. Finally, the completeness assumption is made, i.e., that the knowledge of its statevector represents, in principle, the most accurate information one can have about the state of an individual physical system. 2. The observable quantities are represented by self-adjoint operators B on the Hilbert space. The associated eigenvalue equations B|bk> = bk|bk> and the corresponding eigenmanifolds (the linear manifolds spanned by the eigenvectors associated to a given eigenvalue, also called eigenspaces) play a basic role for the predictive content of the theory. In fact: 1. The eigenvalues bk of an operator B represent the only possible outcomes in a measurement of the corresponding observable. 2. The square of the norm (i.e., the length) of the projection of the normalized vector (i.e., of length 1) describing the state of the system onto the eigenmanifold associated to a given eigenvalue gives the probability of obtaining the corresponding eigenvalue as the outcome of the measurement. In particular, it is useful to recall that when one is interested in the probability of finding a particle at a given place, one has to resort to the so-called configuration space representation of the statevector. In such a case the statevector becomes a square-integrable function of the position variables of the particles of the system, whose modulus squared yields the probability density for the outcomes of position measurements. We stress that, according to the above scheme, quantum mechanics makes only conditional probabilistic predictions (conditional on the measurement being actually performed) for the outcomes of prospective (and in general incompatible) measurement processes. Only if a state belongs already before the act of measurement to an eigenmanifold of the observable which is going to be measured, can one predict the outcome with certainty. In all other cases—if the completeness assumption is made—one has objective nonepistemic probabilities for different outcomes. The orthodox position gives a very simple answer to the question: what determines the outcome when different outcomes are possible? Nothing—the theory is complete and, as a consequence, it is illegitimate to raise any question about possessed properties referring to observables for which different outcomes have non-vanishing probabilities of being obtained. Correspondingly, the referent of the theory are the results of measurement procedures. These are to be described in classical terms and involve in general mutually exclusive physical conditions. As regards the legitimacy of attributing properties to physical systems, one could say that quantum mechanics warns us against requiring too many properties to be actually possessed by physical systems. However—with Einstein—one can adopt as a sufficient condition for the existence of an objective individual property that one be able (without in any way disturbing the system) to predict with certainty the outcome of a measurement. 
This implies that, whenever the overall statevector factorizes into the product of a state of the Hilbert space of the physical system S and of the rest of the world, S does possess some properties (actually a complete set of properties, i.e., those associated to a maximal set of commuting observables). Before concluding this section we must add some comments about the measurement process. Quantum theory was created to deal with microscopic phenomena. In order to obtain information about them one must be able to establish strict correlations between the states of the microscopic systems and the states of objects we can perceive. Within the formalism, this is described by considering appropriate micro-macro interactions. The fact that when the measurement is completed one can make statements about the outcome is accounted for by the already mentioned WPR postulate (Dirac 1948): a measurement always causes a system to jump into an eigenstate of the observed quantity. Correspondingly, also the statevector of the apparatus 'jumps' into the manifold associated to the recorded outcome. 3. The Macro-Objectification Problem In this section we shall clarify why the formalism we have just presented gives rise to the measurement or macro-objectification problem. To this purpose we shall, first of all, discuss the standard oversimplified argument based on the so-called von Neumann ideal measurement scheme. Then we shall discuss more recent results (Bassi and Ghirardi 2000), which relax von Neumann's assumptions. Let us begin by recalling the basic points of the standard argument: Suppose that a microsystem S, just before the measurement of an observable B, is in the eigenstate |bj> of the corresponding operator. The apparatus (a macrosystem) used to gain information about B is initially assumed to be in a precise macroscopic state, its ready state, corresponding to a definite macro property—e.g., its pointer points at 0 on a scale. Since the apparatus A is made of elementary particles, atoms and so on, it must be described by quantum mechanics, which will associate to it the state vector |A0>. One then assumes that there is an appropriate system-apparatus interaction lasting for a finite time, such that when the initial apparatus state is triggered by the state |bj> it ends up in a final configuration |Aj>, which is macroscopically distinguishable from the initial one and from the other configurations |Ak> in which it would end up if triggered by a different eigenstate |bk>. Moreover, one assumes that the system is left in its initial state. In brief, one assumes that one can dispose things in such a way that the system-apparatus interaction can be described as:

(1)   (initial state): |bk>|A0>   (final state): |bk>|Ak>

Equation (1) and the hypothesis that the superposition principle governs all natural processes tell us that, if the initial state of the microsystem is a linear superposition of different eigenstates (for simplicity we will consider only two of them), one has:

(2)   (initial state): (a|bk> + b|bj>)|A0>   (final state): (a|bk>|Ak> + b|bj>|Aj>).

Some remarks about this are in order: • The scheme is highly idealized, both because it takes for granted that one can prepare the apparatus in a precise state, which is impossible since we cannot have control over all its degrees of freedom, and because it assumes that the apparatus registers the outcome without altering the state of the measured system.
However, as we shall discuss below, these assumptions are by no means essential to derive the embarrassing conclusion we have to face, i.e., that the final state is a linear superposition of two states corresponding to two macroscopically different states of the apparatus. Since we know that the + representing linear superpositions cannot be replaced by the logical alternative either … or, the measurement problem arises: what meaning can one attach to a state of affairs in which two macroscopically and perceptively different states occur simultaneously? • As already mentioned, the standard solution to this problem is given by the WPR postulate: in a measurement process reduction occurs: the final state is not the one appearing at the right hand side of equation (2) but, since macro-objectification takes place, it is

(3)   either |bk>|Ak> or |bj>|Aj>

with probabilities |a|^2 and |b|^2, respectively. Nowadays, there is a general consensus that this solution is absolutely unacceptable for two basic reasons: 1. It corresponds to assuming that the linear nature of the theory is broken at a certain level. Thus, quantum theory is unable to explain how it can happen that the apparata behave as required by the WPR postulate (which is one of the axioms of the theory). 2. Even if one were to accept that quantum mechanics has a limited field of applicability, so that it does not account for all natural processes and, in particular, it breaks down at the macrolevel, it is clear that the theory does not contain any precise criterion for identifying the borderline between micro and macro, linear and nonlinear, deterministic and stochastic, reversible and irreversible. To use J.S. Bell's words, there is nothing in the theory fixing such a borderline and the split between the two above types of processes is fundamentally shifty. As a matter of fact, if one looks at the historical debate on this problem, one can easily see that it is precisely by continuously resorting to this ambiguity about the split that adherents of the Copenhagen orthodoxy or easy solvers (Bell 1990) of the measurement problem have rejected the criticism of the heretics (Gottfried 2000). For instance, Bohr succeeded in rejecting Einstein's criticisms at the Solvay Conferences by stressing that some macroscopic parts of the apparatus had to be treated fully quantum mechanically; von Neumann and Wigner displaced the split by locating it between the physical and the conscious (but what is a conscious being?), and so on. Also other proposed solutions to the problem, notably certain versions of many-worlds interpretations, suffer from analogous ambiguities. It is not our task to review here the various attempts to solve the above difficulties. One can find many exhaustive treatments of this problem in the literature. On the contrary, we would like to discuss how the macro-objectification problem is indeed a consequence of very general, in fact unavoidable, assumptions on the nature of measurements, and not specifically of the assumptions of von Neumann's model. This was established in a series of theorems of increasing generality, notably the ones by Fine (1970), d'Espagnat (1971), Shimony (1974), Brown (1986) and Busch and Shimony (1996). Possibly the most general and direct proof is given by Bassi and Ghirardi (2000), whose results we briefly summarize. The assumptions of the theorem are: 1.
that a microsystem can be prepared in two different eigenstates of an observable (such as, e.g., the spin component along the z-axis) and in a superposition of two such states; 2. that one has a sufficiently reliable way of ‘measuring’ such an observable, meaning that when the measurement is triggered by each of the two above eigenstates, the process leads in the vast majority of cases to macroscopically and perceptually different situations of the universe. This requirement allows for cases in which the experimenter does not have perfect control of the apparatus, the apparatus is entangled with the rest of the universe, the apparatus makes mistakes, or the measured system is altered or even destroyed in the measurement process; 3. that all natural processes obey the linear laws of the theory. From these very general assumptions one can show that, repeating the measurement on systems prepared in the superposition of the two given eigenstates, in the great majority of cases one ends up in a superposition of macroscopically and perceptually different situations of the whole universe. If one wishes to have an acceptable final situation, one mirroring the fact that we have definite perceptions, one is arguably compelled to break the linearity of the theory at an appropriate stage. 4. The Birth of Collapse Theories The debate on the macro-objectification problem continued for many years after the early days of quantum mechanics. In the early 1950s an important step was taken by D. Bohm who presented (Bohm 1952) a mathematically precise deterministic completion of quantum mechanics (see the entry on Bohmian Mechanics). In the area of Collapse Theories, one should mention the contribution by Bohm and Bub (1966), which was based on the interaction of the statevector with Wiener-Siegel hidden variables. But let us come to Collapse Theories in the sense currently attached to this expression. Various investigations during the 1970s can be considered as preliminary steps for the subsequent developments. In the years 1970-1973 L. Fonda, A. Rimini, T. Weber and G.C. Ghirardi were seriously concerned with quantum decay processes and in particular with the possibility of deriving, within a quantum context, the exponential decay law (Fonda, Ghirardi, Rimini, and Weber 1973; Fonda, Ghirardi, and Rimini et al. 1978). Some features of this approach are extremely relevant for the DRP. Let us list them: • One deals with individual physical systems; • The statevector is supposed to undergo random processes at random times, inducing sudden changes driving it either within the linear manifold of the unstable state or within the one of the decay products; • To make the treatment quite general (the apparatus does not know which kind of unstable system it is testing) one is led to identify the random processes with localization processes of the relative coordinates of the decay fragments. Such an assumption, combined with the peculiar resonant dynamics characterizing an unstable system, yields, completely in general, the desired result. The ‘relative position basis’ is the preferred basis of this theory; • Analogous ideas have been applied to measurement processes (Fonda, Ghirardi, and Rimini 1973); • The final equation for the evolution at the ensemble level is of the quantum dynamical semigroup type and has a structure extremely similar to the final one of the GRW theory. 
Obviously, in these papers the reduction processes which are involved were not assumed to be ‘spontaneous and fundamental’ natural processes, but due to system-environment interactions. Accordingly, these attempts did not represent original proposals for solving the macro-objectification problem but they have paved the way for the elaboration of the GRW theory. Almost in the same years, P. Pearle (1976, 1979), and subsequently N. Gisin (1984) and others, had entertained the idea of accounting for the reduction process in terms of a stochastic differential equation. These authors were really looking for a new dynamical equation and for a solution to the macro-objectification problem. Unfortunately, they were unable to give any precise suggestion about how to identify the states to which the dynamical equation should lead. Indeed, these states were assumed to depend on the particular measurement process one was considering. Without a clear indication on this point there was no way to identify a mechanism whose effect could be negligible for microsystems but extremely relevant for the macroscopic ones. N. Gisin gave subsequently an interesting (though not uncontroversial) argument (Gisin 1989) that nonlinear modifications of the standard equation without stochasticity are unacceptable since they imply the possibility of sending superluminal signals. Soon afterwards, G. C. Ghirardi and R. Grassi (1991) showed that stochastic modifications without nonlinearity can at most induce ensemble and not individual reductions, i.e., they do not guarantee that the state vector of each individual physical system is driven in a manifold corresponding to definite properties. 5. The Original Collapse Model As already mentioned, the Collapse Theory (Ghirardi, Rimini, and Weber 1986) we are going to describe amounts to accepting a modification of the standard evolution law of the theory such that microprocesses and macroprocesses are governed by a single dynamics. Such a dynamics must imply that the micro-macro interaction in a measurement process leads to WPR. Bearing this in mind, recall that the characteristic feature distinguishing quantum evolution from WPR is that, while Schrödinger's equation is linear and deterministic (at the wave function level), WPR is nonlinear and stochastic. It is then natural to consider, as was suggested for the first time in the above quoted papers by P. Pearle, the possibility of nonlinear and stochastic modifications of the standard Schrödinger dynamics. However, the initial attempts to implement this idea were unsatisfactory for various reasons. The first, which we have already discussed, concerns the choice of the preferred basis: if one wants to have a universal mechanism leading to reductions, to which linear manifolds should the reduction mechanism drive the statevector? Or, equivalently, which of the (generally) incompatible ‘potentialities’ of the standard theory should we choose to make actual? The second, referred to as the trigger problem by Pearle (1989), is the problem of how the reduction mechanism can become more and more effective in going from the micro to the macro domain. The solution to this problem constitutes the central feature of the Collapse Theories of the GRW type. To discuss these points, let us briefly review the first consistent Collapse model (Ghirardi, Rimini, and Weber 1985) to appear in the literature. 
Within such a model, originally referred to as QMSL (Quantum Mechanics with Spontaneous Localizations), the problem of the choice of the preferred basis is solved by noting that the most embarrassing superpositions, at the macroscopic level, are those involving different spatial locations of macroscopic objects. Actually, as Einstein stressed, this is a crucial point which has to be faced by anybody aiming to take a macro-objective position about natural phenomena: ‘A macro-body must always have a quasi-sharply defined position in the objective description of reality’ (Born 1971, p. 223). Accordingly, QMSL considers the possibility of spontaneous processes, assumed to occur instantaneously and at the microscopic level, which tend to suppress linear superpositions of differently localized states. The required trigger mechanism must then follow consistently.

The key assumption of QMSL is the following: each elementary constituent of any physical system is subjected, at random times, to random and spontaneous localization processes (which we will call hittings) around appropriate positions. To have a precise mathematical model one has to be very specific about this assumption; in particular one has to make explicit HOW the process works, i.e., which modifications of the wave function are induced by the localizations, WHERE it occurs, i.e., what determines the occurrence of a localization at a certain position rather than at another one, and finally WHEN, i.e., at what times, it occurs. The answers to these questions are as follows. Let us consider a system of N distinguishable particles and let us denote by $F(q_1, q_2, \dots, q_N)$ the coordinate representation (wave function) of the state vector (we disregard spin variables since hittings are assumed not to act on them).

1. The answer to the question HOW is: if a hitting occurs for the i-th particle at point x, the wave function is instantaneously multiplied by an appropriately normalized Gaussian function $G(q_i, x) = K\exp\left[-\frac{1}{2d^2}(q_i - x)^2\right]$, where d represents the localization accuracy. Let us denote by $L_i(q_1, q_2, \dots, q_N; x) = F(q_1, q_2, \dots, q_N)\, G(q_i, x)$ the wave function immediately after the localization, as yet unnormalized.

2. As concerns the specification of WHERE the localization occurs, it is assumed that the probability density P(x) of its taking place at the point x is given by the square of the norm of the state $L_i$ (i.e., the integral of the modulus squared of the function $L_i$ over the 3N-dimensional configuration space). This implies that hittings occur with higher probability at those places where, in the standard quantum description, there is a higher probability of finding the particle. Note that the above prescription introduces nonlinear and stochastic elements into the dynamics. The constant K appearing in the expression of $G(q_i, x)$ is chosen in such a way that the integral of P(x) over the whole space equals 1.

3. Finally, the question WHEN is answered by assuming that the hittings occur at randomly distributed times, according to a Poisson process with mean frequency f.

It is straightforward to convince oneself that the hitting process leads, when it occurs, to the suppression of linear superpositions of states in which the same particle is well localized at different positions separated by a distance greater than d. As a simple example we can consider a single particle whose wavefunction is different from zero only in two small and far apart regions h and t.
Suppose that a localization occurs around h; the state after the hitting is then appreciably different from zero only in a region around h itself. A completely analogous argument holds for the case in which the hitting takes place around t. As concerns points which are far from both h and t, one easily sees that the probability density for such hittings, according to the multiplication rule determining $L_i$, turns out to be practically zero, and moreover that, if such a hitting were to occur, the wave function of the system would remain almost unchanged after normalization.

We can now discuss the most important feature of the theory, i.e., the trigger mechanism. To understand the way in which the spontaneous localization mechanism is enhanced by increasing the number of particles which are in far apart spatial regions (as compared to d), one can consider, for simplicity, the equal-weight superposition |S> of two macroscopic pointer states |H> and |T>, corresponding to two different pointer positions H and T, respectively. Taking into account that the pointer is ‘almost rigid’ and contains a macroscopic number N of microscopic constituents, the state can be written, in obvious notation, as

(4) $|S\rangle = \frac{1}{\sqrt{2}}\,\big[\,|1\ \text{near}\ h_1\rangle \cdots |N\ \text{near}\ h_N\rangle \;+\; |1\ \text{near}\ t_1\rangle \cdots |N\ \text{near}\ t_N\rangle\,\big]$,

where $h_i$ is near H and $t_i$ is near T. The states appearing in the first term on the right-hand side of equation (4) have coordinate representations which are different from zero only when their arguments (the coordinates of particles 1, …, N) are all near H, while those of the second term are different from zero only when they are all near T. It is now evident that if any of the particles (say, the i-th particle) undergoes a hitting process, e.g., near the point $h_i$, the multiplication prescription leads practically to the suppression of the second term in (4). Thus any spontaneous localization of any of the constituents amounts to a localization of the pointer. The hitting frequency is therefore effectively amplified in proportion to the number of constituents. Notice that, for simplicity, the argument makes reference to an almost rigid body, i.e., to one for which all particles are around H in one of the states of the superposition and around T in the other. It should however be obvious that what really matters in amplifying the reductions is the number of particles which are in different positions in the two states appearing in the superposition.

Under these premises we can now proceed to choose the parameters d and f of the theory, i.e., the localization accuracy and the mean localization frequency. The argument just given allows one to understand how the parameters can be chosen in such a way that the quantum predictions for microscopic systems remain fully valid, while the embarrassing macroscopic superpositions in measurement-like situations are suppressed in very short times. Accordingly, as a consequence of the unified dynamics governing all physical processes, individual macroscopic objects acquire definite macroscopic properties. The choice suggested in the GRW model is

(5) $f = 10^{-16}\ \mathrm{s}^{-1}$, $d = 10^{-5}\ \mathrm{cm}$.

It follows that a microscopic system undergoes a localization, on average, every $10^{16}$ seconds, i.e., of the order of a hundred million years, while a macroscopic object containing of the order of Avogadro's number of constituents undergoes a localization every $10^{-7}$ seconds. With reference to the challenging version of the macro-objectification problem presented by Schrödinger with the famous example of his cat, J.S. Bell comments (1987, p. 44): [within QMSL] the cat is not both dead and alive for more than a split second.
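To make the HOW/WHERE/WHEN prescriptions and the trigger mechanism concrete, here is a minimal numerical sketch. It is not part of the original GRW presentation: the grid, the separation of the two lumps and the particle numbers in the final loop are assumptions chosen purely for illustration, and the constant K of the text is handled by normalizing P(x) numerically.

import numpy as np

# Minimal 1D sketch of a single QMSL 'hitting' (illustrative parameters only).
d = 1.0                                   # localization accuracy (in units of d)
x = np.linspace(-50.0, 50.0, 4001)
dx = x[1] - x[0]

# Wave function of one particle appreciably different from zero only in two
# far-apart regions, 'near h' (x = -20) and 'near t' (x = +20).
psi = np.exp(-(x + 20.0)**2 / 2.0) + np.exp(-(x - 20.0)**2 / 2.0)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

def norm2_after_hit(center):
    """Squared norm of psi multiplied by a Gaussian of width d centred at 'center'."""
    G = np.exp(-(x - center)**2 / (2.0 * d**2))
    return np.sum(np.abs(psi * G)**2) * dx

# WHERE: the probability density of a hitting centred at a point is proportional
# to the squared norm of the multiplied (still unnormalized) wave function.
P = np.array([norm2_after_hit(c) for c in x])
P /= np.sum(P) * dx

# Sample one hitting centre and apply the HOW prescription.
center = np.random.choice(x, p=P * dx / np.sum(P * dx))
psi_after = psi * np.exp(-(x - center)**2 / (2.0 * d**2))
psi_after /= np.sqrt(np.sum(np.abs(psi_after)**2) * dx)

# Trigger mechanism: the effective suppression rate for a superposition that
# displaces N constituents scales as N*f, with f = 1e-16 per second (GRW choice).
f = 1e-16
for N in (1, 1e18, 1e23):
    print(f"N = {N:g}:  suppression time ~ {1.0/(N*f):.1e} s")

Run as written, the hitting centre falls, with overwhelming probability, near one of the two lumps; the weight left in the other lump is suppressed by a factor of order exp[−(separation/d)²]; and the printed times reproduce the orders of magnitude quoted above: about $10^{16}$ s for a single particle, about $10^{-2}$ s for $10^{18}$ displaced nucleons, and about $10^{-7}$ s for $10^{23}$.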
Besides the extremely low frequency of the hittings for microscopic systems, also the fact that the localization width is large compared to the dimensions of atoms (so that even when a localization occurs it does very little violence to the internal economy of an atom) plays an important role in guaranteeing that no violation of well-tested quantum mechanical predictions is implied by the modified dynamics. Some remarks are appropriate. First of all, QMSL, being precisely formulated, allows to locate precisely the ‘split’ between micro and macro, reversible and irreversible, quantum and classical. The transition between the two types of ‘regimes’ is governed by the number of particles which are well localized at positions further apart than 10−5 cm in the two states whose coherence is going to be dynamically suppressed. Second, the model is, in principle, testable against quantum mechanics. As a matter of fact, an essential part of the program consists in proving that its predictions do not contradict any already established fact about microsystems and macrosystems. 6. The Continuous Spontaneous Localization Model (CSL) The model just presented (QMSL) has a serious drawback: it does not allow to deal with systems containing identical constituents because it does not respect the symmetry or antisymmetry requirements for such particles. A quite natural idea to overcome this difficulty would be that of relating the hitting process not to the individual particles but to the particle number density averaged over an appropriate volume. This can be done by introducing a new phenomenological parameter in the theory which however can be eliminated by an appropriate limiting procedure (see below). Another way to overcome this problem derives from injecting the physically appropriate principles of the GRW model within the original approach of P. Pearle. This line of thought has led to a quite elegant formulation of a dynamical reduction model, usually referred to as CSL (Pearle 1989; Ghirardi, Pearle, and Rimini 1990) in which the discontinuous jumps which characterize QMSL are replaced by a continuous stochastic evolution in the Hilbert space (a sort of Brownian motion of the statevector). We will not enter into the rather technical details of this interesting development of the original GRW proposal, since the basic ideas and physical implications are precisely the same as those of the original formulation. Actually, one could argue that the above idea of tackling the problem of identical particles by considering the average particle number within an appropriate volume is correct. In fact it has been proved (Ghirardi, Pearle, and Rimini 1990) that for any CSL dynamics there is a hitting dynamics which, from a physical point of view, is ‘as close to it as one wants’. Instead of entering into the details of the CSL formalism, it is useful, for the discussion below, to analyze a simplified version of it. 7. A Simplified Version of CSL With the aim of understanding the physical implications of the CSL model, such as the rate of suppression of coherence, we make now some simplifying assumptions. First, we assume that we are dealing with only one kind of particles (e.g., the nucleons), secondly, we disregard the standard Schrödinger term in the evolution and, finally, we divide the whole space in cells of volume d3. 
We denote by $|n_1, n_2, \dots\rangle$ a Fock state in which there are $n_i$ particles in cell i, and we consider a superposition of two states $|n_1, n_2, \dots\rangle$ and $|m_1, m_2, \dots\rangle$ which differ in the occupation numbers of the various cells of the universe. With these assumptions it is quite easy to prove that the rate of suppression of the coherence between the two states (so that the final state is one of the two and not their superposition) is governed by the quantity

(6) $\exp\{-f\,[(n_1 - m_1)^2 + (n_2 - m_2)^2 + \cdots]\,t\}$,

the sum being extended to all cells in the universe. Apart from differences relating to the identity of the constituents, the overall physics is quite similar to that implied by QMSL.

Equation (6) offers the opportunity of discussing the possibility of relating the suppression of coherence to gravitational effects. In fact, with reference to this equation we notice that the worst case (from the point of view of the time necessary to suppress coherence) is the one corresponding to the superposition of two states for which the occupation numbers of the individual cells differ only by one unit: in this case the amplifying effect of taking the square of the differences disappears. Let us then raise the question: how many nucleons (at worst) should occupy different cells, in order for the given superposition to be dynamically suppressed within the time which characterizes human perceptual processes? Since such a time is of the order of $10^{-2}$ s and $f = 10^{-16}\ \mathrm{s}^{-1}$, the number of displaced nucleons must be of the order of $10^{18}$, which corresponds, in order of magnitude, to the Planck mass. This figure seems to point in the same direction as Penrose's attempts to relate reduction mechanisms to quantum gravitational effects (Penrose 1989).

Obviously, the model theory we are discussing implies various further physical effects which deserve to be discussed since they might allow a test of the theory against standard quantum mechanics. For a review, see (Bassi and Ghirardi 2001; Adler 2007). We briefly list the most promising types of experiments which in the future might allow such a crucial test.

1. Effects in superconducting devices. A detailed analysis has been presented in (Ghirardi and Rimini 1990). As shown there, and as follows from estimates about possible effects for superconducting devices (Rae 1990; Gallis and Fleming 1990; Rimini 1995) and for the excitation of atoms (Squires 1991), it turns out not to be possible, with present technology, to perform clear-cut experiments allowing one to discriminate the model from standard quantum mechanics (Benatti et al. 1995).

2. Loss of coherence in diffraction experiments with macromolecules. The group of Arndt and Zeilinger in Vienna has performed several diffraction experiments involving macromolecules. The best known involve C60 (720 nucleons) (Arndt et al. 1999), C70 (840 nucleons) (Hackermueller et al. 2004) and C30H12F30N2O4 (1030 nucleons) (Gerlich et al. 2007). These experiments aim at testing the validity of the superposition principle towards the macroscopic scale. The challenge is very exciting, and near-future technology will probably allow interference experiments with molecules much bigger than those already employed. So far, the experimental results are compatible both with standard quantum predictions and with those of collapse models, so they do not represent decisive tests of these models.

3. Loss of coherence in opto-mechanical interferometers.
Very recently, an interesting proposal of testing the superposition principle by resorting to an experimental set-up involving a (mesoscopic) mirror has been advanced (Marshall et al. 2003). This stimulating proposal has led a group of scientists directly interested in Collapse Theories (Bassi et al. 2005) to check whether the proposed experiment might be a crucial one for testing dynamical reduction models versus quantum mechanics. The rigorous conclusion has been that this is not the case: in the devised situation the GRW and CSL theories have implications which agree with those of the standard theory, the main reason being that the (average) positions of the superposed states are much smaller than the localization accuracy of GRW, so that the localizations processes become ineffective. 4. Spontaneous X-ray emission from Germanium. Collapse models not only forbid macroscopic superpositions to be stable, they share several other features which are forbidden by the standard theory. One of these is the spontaneous emission of radiation from otherwise stable systems, like atoms. While the standard theory predicts that such systems—if not excited—do not emit radiation, collapse models allow for radiation to be produced. The emission rate has been computed both for free charged particles (Fu 1997) and for hydrogenic atoms (Adler et al. 2007). The theoretical predictions are compatible with current experimental data (Fu 1997), so that even this type of experiments do not represent decisive tests of collapse models. However, their importance lies in the fact that—so far—they provide the strongest upper bounds on the collapse parameters (Adler et al. 2007). 8. Some remarks about Collapse Theories A. Pais famously recalls in his biography of Einstein: We often discussed his notions on objective reality. I recall that during one walk Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it (Pais 1982, p. 5). In the context of Einstein's remarks in Albert Einstein, Philosopher-Scientist (Schilpp 1949), we can regard this reference to the moon as an extreme example of ‘a fact that belongs entirely within the sphere of macroscopic concepts’, as is also a mark on a strip of paper that is used to register the outcome of a decay experiment, so that as a consequence, there is hardly likely to be anyone who would be inclined to consider seriously […] that the existence of the location is essentially dependent upon the carrying out of an observation made on the registration strip. For, in the macroscopic sphere it simply is considered certain that one must adhere to the program of a realistic description in space and time; whereas in the sphere of microscopic situations one is more readily inclined to give up, or at least to modify, this program (p. 671). the ‘macroscopic’ and the ‘microscopic’ are so inter-related that it appears impracticable to give up this program in the ‘microscopic’ alone (p. 674). One might speculate that Einstein would not have taken the DRP seriously, given that it is a fundamentally indeterministic program. On the other hand, the DRP allows precisely for this middle ground, between giving up a ‘classical description in space and time’ altogether (the moon is not there when nobody looks), and requiring that it be applicable also at the microscopic level (as within some kind of ‘hidden variables’ theory). 
It would seem that the pursuit of ‘realism’ for Einstein was more a program that had been very successful rather than an a priori commitment, and that in principle he would have accepted attempts requiring a radical change in our classical conceptions concerning microsystems, provided they would nevertheless allow to take a macrorealist position matching our definite perceptions at this scale. In the DRP, we can say of an electron in an EPR-Bohm situation that ‘when nobody looks’, it has no definite spin in any direction , and in particular that when it is in a superposition of two states localised far away from each other, it cannot be thought to be at a definite place (see, however, the remarks in Section 11). In the macrorealm, however, objects do have definite positions and are generally describable in classical terms. That is, in spite of the fact that the DRP program is not adding ‘hidden variables’ to the theory, it implies that the moon is definitely there even if no sentient being has ever looked at it. In the words of J. S. Bell, the DRP allows electrons (in general microsystems) to enjoy the cloudiness of waves, while allowing tables and chairs, and ourselves, and black marks on photographs, to be rather definitely in one place rather than another, and to be described in classical terms (Bell 1986, p. 364). Such a program, as we have seen, is implemented by assuming only the existence of wave functions, and by proposing a unified dynamics that governs both microscopic processes and ‘measurements’. As regards the latter, no vague definitions are needed. The new dynamical equations govern the unfolding of any physical process, and the macroscopic ambiguities that would arise from the linear evolution are theoretically possible, but only of momentary duration, of no practical importance and no source of embarrassment. We have not yet analyzed the implications about locality, but since in the DRP program no hidden variables are introduced, the situation can be no worse than in ordinary quantum mechanics: ‘by adding mathematical precision to the jumps in the wave function’, the GRW theory ‘simply makes precise the action at a distance of ordinary quantum mechanics’ (Bell 1987, p. 46). Indeed, a detailed investigation of the locality properties of the theory becomes possible as shown by Bell himself (Bell 1987, p. 47). Moreover, as it will become clear when we will discuss the interpretation of the theory in terms of mass density, the QMSL and CSL theories lead in a natural way to account for a behaviour of macroscopic objects corresponding to our definite perceptions about them, the main objective of Einstein's requirements. The achievements of the DRP which are relevant for the debate about the foundations of quantum mechanics can also be concisely summarized in the words of H.P. Stapp: The collapse mechanisms so far proposed could, on the one hand, be viewed as ad hoc mutilations designed to force ontology to kneel to prejudice. On the other hand, these proposals show that one can certainly erect a coherent quantum ontology that generally conforms to ordinary ideas at the macroscopic level (Stapp 1989, p. 157). 9. Relativistic Dynamical Reduction Models As soon as the GRW proposal appeared and attracted the attention of J.S. Bell it also stimulated him to look at it from the point of view of relativity theory. 
As he stated subsequently (Bell 1989a): When I saw this theory first, I thought that I could blow it out of the water, by showing that it was grossly in violation of Lorentz invariance. That's connected with the problem of ‘quantum entanglement’, the EPR paradox. Actually, he had already investigated this point by studying the effect on the theory of a transformation mimicking a nonrelativistic approximation of a Lorentz transformation and he arrived (Bell 1987) at a surprising conclusion: … the model is as Lorentz invariant as it could be in its nonrelativistic version. It takes away the ground of my fear that any exact formulation of quantum mechanics must conflict with fundamental Lorentz invariance. What Bell had actually proved in a rather complicated way by resorting to a two-times formulation of the Schrödinger equation is that the model violates locality by violating outcome independence and not, as deterministic hidden variable theories do, parameter independence. Indeed, with reference to this point we recall that, as is well known, (Suppes and Zanotti 1976; van Fraassen 1982; Jarrett 1984; Shimony 1983; see also the entry on Bell's Theorem), Bell's locality assumption is equivalent to the conjunction of two other assumptions, viz., in Shimony's terminology, parameter independence and outcome independence. In view of the experimental violation of Bell's inequality, one has to give up either or both of these assumptions. The above splitting of the locality requirement into two logically independent conditions is particularly useful in discussing the different status of CSL and deterministic hidden variable theories with respect to relativistic requirements. Actually, as proved by Jarrett himself, when parameter independence is violated, if one had access to the variables which specify completely the state of individual physical systems, one could send faster-than-light signals from one wing of the apparatus to the other. Moreover, in Ghirardi and Grassi (1994, 1996) it has been proved that it is impossible to build a genuinely relativistically invariant theory which, in its nonrelativistic limit, exhibits parameter dependence. Here we use the term genuinely invariant to denote a theory for which there is no (hidden) preferred reference frame. On the other hand, if locality is violated only by the occurrence of outcome dependence then faster-than-light signaling cannot be achieved (Eberhard 1978; Ghirardi, Rimini, and Weber 1980; Ghirardi, Grassi, Rimini, and Weber 1988). Few years after the just mentioned proof by Bell, it has been shown in complete generality (Ghirardi, Grassi, Butterfield, and Fleming 1993; Butterfield et al. 1993) that the GRW and CSL theories, just as standard quantum mechanics, exhibit only outcome dependence. This is to some extent encouraging and shows that there are no reasons of principle making unviable the project of building a relativistically invariant DRM. Let us be more specific about this crucial problem. P. Pearle was the first to propose (Pearle 1990) a relativistic generalization of CSL to a quantum field theory describing a fermion field coupled to a meson scalar field enriched with the introduction of stochastic and nonlinear terms. A quite detailed discussion of this proposal was presented in (Ghirardi et al. 1990a) where it was shown that the theory enjoys of all properties which are necessary in order to meet the relativistic constraints. Pearle's approach requires the precise formulation of the idea of stochastic Lorentz invariance. 
The proposal can be summarized in the following terms: one considers a fermion field coupled to a meson field and puts forward the idea of inducing localizations for the fermions through their coupling to the mesons and a stochastic dynamical reduction mechanism acting on the meson variables. In practice, one considers Heisenberg evolution equations for the coupled fields and a Tomonaga-Schwinger CSL-type evolution equation for the state vector, with a skew-Hermitian coupling to a c-number stochastic potential. This approach has been systematically investigated in Ghirardi, Grassi, and Pearle (1990a, 1990b), to which we refer the reader for a detailed discussion. Here we limit ourselves to stressing that, under certain approximations, one obtains in the non-relativistic limit a CSL-type equation inducing spatial localization. However, due to the white-noise nature of the stochastic potential, novel renormalization problems arise: the increase per unit time and per unit volume of the energy of the meson field is infinite, because infinitely many mesons are created. This point has also been lucidly discussed by Bell (1989b) in the talk he delivered at Trieste on the occasion of the 25th anniversary of the International Centre for Theoretical Physics; this talk appeared under the title The Trieste Lecture of John Stewart Bell, edited by A. Bassi and G.C. Ghirardi. For these reasons one cannot consider this a satisfactory example of a relativistic reduction model.

In the years following these attempts there has been a flourishing of research aimed at obtaining the desired result. Let us briefly comment on it. As already mentioned, the source of the divergences is the assumption of point interactions between the quantum field operators in the dynamical equation for the statevector or, equivalently, the white character of the stochastic noise. With this aspect in mind, P. Pearle (1999), L. Diosi (1990) and A. Bassi and G.C. Ghirardi (2002) reconsidered the problem from the beginning by investigating nonrelativistic theories with nonwhite Gaussian noises. The problem turns out to be very difficult from the mathematical point of view, but steps forward have been made. In recent years, a precise formulation of the nonwhite generalization (Bassi and Ferialdi 2009) of the so-called QMUPL model, which represents a simplified version of GRW and CSL, has been proposed. Moreover, a perturbative approach for the CSL model has been worked out (Adler and Bassi 2007, 2008). Further work is necessary. The program is very interesting at the nonrelativistic level; however, it is not yet clear whether it will lead to a real step forward in the development of relativistic theories of spontaneous collapse. In the same spirit, Nicrosini and Rimini (Nicrosini 2003) tried to smear out the point interactions, without success, because in their approach a preferred reference frame had to be chosen in order to circumvent the nonintegrability of the Tomonaga-Schwinger equation. Other interesting and different approaches have also been suggested. Among them we mention the one by Dove and Squires (Dove 1996), based on discrete rather than continuous stochastic processes, and those by Dowker and Herbauts (Dowker 2004a) and Dowker and Henson (Dowker 2004b), formulated on a discrete space-time.
Before going on we consider it important to call attention to the fact that precisely in the same years similar attempts to get a relativistic generalization of the other existing ‘exact’ theory, i.e., Bohmian Mechanics, were going on and that they too have encountered some difficulties. Relevant steps are represented by a paper (Dürr 1999) resorting to a preferred spacetime slicing, by the investigations of Goldstein and Tumulka (Goldstein 2003) and by other scientists (Berndl 1996). However, we must recognize that no one of these attempts has led to a fully satisfactory solution of the problem of having a theory without observers, like Bohmian mechanics, which is perfectly satisfactory from the relativistic point of view, precisely due to the fact that they are not genuinely Lorentz invariant in the sense we have made precise before. Mention should be made also of the attempt by Dewdney and Horton (Dewdney 2001) to build a relativistically invariant model based on particle trajectories. Let us come back to the relativistic DRP. Some important changes have occurred quite recently. Tumulka (2006a) succeeded in proposing a relativistic version of the GRW theory for N non-interacting distinguishable particles, based on the consideration of a multi-time wavefunction whose evolution is governed by Dirac like equations and adopts as its Primitive Ontology (see the next section) the one which attaches a primary role to the space and time points at which spontaneous localizations occur, as originally suggested by Bell (1987). To my knowledge this represents the first proposal of a relativistic dynamical reduction mechanism which satisfies all relativistic requirements. In particular it is divergence free and foliation independent. However it can deal only with systems containing a fixed number of noninteracting fermions. At this point explicit mention should be made of the most recent steps which concern our problem. D. Bedingham (2011) following strictly the original proposal by Pearle (1990) of a quantum field theory inducing reductions based on a Tomonaga-Schwinger equation, has worked out an analogous model which, however, overcomes the difficulties of the original model. In fact, Bedingham has circumvented the crucial problems deriving from point interactions by (paying the price of) introducing, besides the fields characterizing the Quantum Field Theories he is interested in, an auxiliary relativistic field that amounts to a smearing of the interactions whilst preserving Lorentz invariance and frame independence. Adopting this point of view and taking advantage also of the proposal by Ghirardi (2000) concerning the appropriate way to define objective properties at any space-time point x, he has been able to work out a fully satisfactory and consistent relativistic scheme for almost all quantum field theories in which reduction processes may occur. 
In view of the last results by Tumulka and Bedingham and taking into account the interesting investigations concerning relativistic Bohmia-like theories,the conclusions that Tumulka has drawn concerning the status of attempts to account for the macro-objectification process from a relativistic perspective are well-founded: A somewhat surprising feature of the present situation is that we seem to arrive at the following alternative: Bohmian mechanics shows that one can explain quantum mechanics, exactly and completely, if one is willing to pay with using a preferred slicing of spacetime; our model suggests that one should be able to avoid a preferred slicing of spacetime if one is willing to pay with a certain deviation from quantum mechanics, a conclusion that he has rephrased and reinforced in (Tumulka 2006c): Thus, with the presently available models we have the alternative: either the conventional understanding of relativity is not right, or quantum mechanics is not exact. Very recently, a thorough and illuminating discussion of the important approach by Tumulka has been presented by Tim Maudlin (2011) in the third revised edition of his book Quantum Non-Locality and Relativity. Tumulka's position is perfectly consistent with the present ideas concerning the attempts to transform relativistic standard quantum mechanics into an ‘exact’ theory in the sense which has been made precise by J. Bell. Since the only unified, mathematically precise and formally consistent formulations of the quantum description of natural processes are Bohmian mechanics and GRW-like theories, if one chooses the first alternative one has to accept the existence of a preferred reference frame, while in the second case one is not led to such a drastic change of position with respect to relativistic concepts but must accept that the ensuing theory—even though only in a presently non-testable manner—disagrees with the predictions of quantum mechanics and acquires the status of a rival theory with respect to it. In spite of the fact that the situation is, to some extent, still open and requires further investigations, it has to be recognized that the efforts which have been spent on such a program have made possible a better understanding of some crucial points and have thrown light on some important conceptual issues. First, they have led to a completely general and rigorous formulation of the concept of stochastic invariance (Ghirardi, Grassi, and Pearle 1990a). Second, they have prompted a critical reconsideration, based on the discussion of smeared observables with compact support, of the problem of locality at the individual level. This analysis has brought out the necessity of reconsidering the criteria for the attribution of objective local properties to physical systems. In specific situations, one cannot attribute any local property to a microsystem: any attempt to do so gives rise to ambiguities. However, in the case of macroscopic systems, the impossibility of attributing to them local properties (or, equivalently, the ambiguity associated to such properties) lasts only for time intervals of the order of those necessary for the dynamical reduction to take place. Moreover, no objective property corresponding to a local observable, even for microsystems, can emerge as a consequence of a measurement-like event occurring in a space-like separated region: such properties emerge only in the future light cone of the considered macroscopic event. 
Finally, recent investigations (Ghirardi and Grassi 1994, 1996; Ghirardi 1996, 2000) have shown that the very formal structure of the theory is such that it does not allow, even conceptually, to establish cause-effect relations between space-like events. Accordingly, in concluding this section, we stress that the question of whether a relativistic dynamical reduction program can find a satisfactory formulation seems to admit a positive answer. A last comment. Recently, a paper by Conway and Kochen (Conway 2006), which has raised a lot of interest, has been published. A few words about it are in order, to clarify possible misunderstandings. The first and most important aim of the paper is the derivation of what the authors have called The Free Will Theorem , putting forward the provocative idea that if human beings are free to make their choices about the measurements they will perform on one of a pair of far-away entangled particles, then one must admit that also the elementary particles involved in the experiment have free will. One might make several comments on this statement. For what concerns us here the relevant fact is that the authors claim that their theorem implies, as a byproduct, the impossibility of elaborating a relativistically invariant dynamical reduction model. A lively debate has arisen; we refer the reader to the papers by Adler (2006), Bassi and Ghirardi (Bassi 2007), Tumulka (2007) in which it is proved that the conclusion drawn by Conway and Kochen is not pertinent to the problem. Recently the above authors have replied (Conway et al. 2007) to all criticisms raised in the just mentioned papers. However, (Goldstein et al. 2010) have made clear why the argument of Conway and Kochen is not pertinent. We may conclude that nothing in principle forbids a relativistic generalization of the GRW theory, and, actually, as repeatedly stressed previously, there are many elements which indicate that this is actually feasible. 10. Collapse Theories and Definite Perceptions Some authors (Albert and Vaidman 1989; Albert 1990, 1992) have raised an interesting objection concerning the emergence of definite perceptions within Collapse Theories. The objection is based on the fact that one can easily imagine situations leading to definite perceptions, that nevertheless do not involve the displacement of a large number of particles up to the stage of the perception itself. These cases would then constitute actual measurement situations which cannot be described by the GRW theory, contrary to what happens for the idealized (according to the authors) situations considered in many presentations of it, i.e., those involving the displacement of some sort of pointer. To be more specific, the above papers consider a ‘measurement-like’ process whose output is the emission of a burst of few photons triggered by the position in which a particle hits a screen. This can easily be devised by considering, e.g., a Stern-Gerlach set-up in which the path followed by the microsystem according to the value of its spin component hit a fluorescent screen and excite a small number of atoms which subsequently decay, emitting a small number of photons. 
The argument goes as follows: if one triggers the apparatus with a superposition of two spin states, since only a few atoms are excited, since the excitations involve displacements which are smaller than the characteristic localization distance of GRW, since GRW does not induce reductions on photon states and, finally, since the photon states immediately overlap, there is no way for the spontaneous localization mechanism to become effective in suppressing the ensuing superposition of the states ‘photons emerging from point A of the screen’ and ‘photons emerging from point B of the screen’. On the other hand, since the visual perception threshold is quite low (about 6-7 photons), there is no doubt that the naked eye of a human observer is sufficient to detect whether the luminous spot on the screen is at A or at B. The conclusion follows: in the case under consideration no dynamical reduction can take place and as a consequence no measurement is over, no outcome is definite, up to the moment in which a conscious observer perceives the spot. Aicardi et al. (1991) have presented a detailed answer to this criticism. The crucial points of the argument are the following: it is agreed that in the case considered the superposition persists for long times (actually the superposition must persist, since, the system under consideration being microscopic, one could perform interference experiments which everybody would expect to confirm quantum mechanics). However, to deal in the appropriate and correct way with such a criticism, one has to consider all the systems which enter into play (electron, screen, photons and brain) and the universal dynamics governing all relevant physical processes. A simple estimate of the number of ions which are involved in the visual perception mechanism makes perfectly plausible that, in the process, a sufficient number of particles are displaced by a sufficient spatial amount to satisfy the conditions under which, according to the GRW theory, the suppression of the superposition of the two nervous signals will take place within the time scale of perception. To avoid misunderstandings, this analysis by no means amounts to attributing a special role to the conscious observer or to perception. The observer's brain is the only system present in the set-up in which a superposition of two states involving different locations of a large number of particles occurs. As such it is the only place where the reduction can and actually must take place according to the theory. It is extremely important to stress that if in place of the eye of a human being one puts in front of the photon beams a spark chamber or a device leading to the displacement of a macroscopic pointer, or producing ink spots on a computer output, reduction will equally take place. In the given example, the human nervous system is simply a physical system, a specific assembly of particles, which performs the same function as one of these devices, if no other such device interacts with the photons before the human observer does. It follows that it is incorrect and seriously misleading to claim that the GRW theory requires a conscious observer in order that measurements have a definite outcome. A further remark may be appropriate. The above analysis could be taken by the reader as indicating a very naive and oversimplified attitude towards the deep problem of the mind-brain correspondence. There is no claim and no presumption that GRW allows a physicalist explanation of conscious perception. 
It is only pointed out that, for what we know about the purely physical aspects of the process, one can state that before the nervous pulses reach the higher visual cortex, the conditions guaranteeing the suppression of one of the two signals are verified. In brief, a consistent use of the dynamical reduction mechanism in the above situation accounts for the definiteness of the conscious perception, even in the extremely peculiar situation devised by Albert and Vaidman. 11. The Interpretation of the Theory and its Primitive Ontologies As stressed in the opening sentences of this contribution, the most serious problem of standard quantum mechanics lies in its being extremely successful in telling us about what we observe, but being basically silent on what is. This specific feature is closely related to the probabilistic interpretation of the statevector, combined with the completeness assumption of the theory. Notice that what is under discussion is the probabilistic interpretation, not the probabilistic character, of the theory. Also collapse theories have a fundamentally stochastic character, but, due to their most specific feature, i.e., that of driving the statevector of any individual physical system into appropriate and physically meaningful manifolds, they allow for a different interpretation. One could even say (if one wants to avoid that they too, as the standard theory, speak only of what we find) that they require a different interpretation, one that accounts for our perceptions at the appropriate, i.e., macroscopic, level. We must admit that this opinion is not universally shared. According to various authors, the ‘rules of the game’ embodied in the precise formulation of the GRW and CSL theories represent all there is to say about them. However, this cannot be the whole story: stricter and more precise requirements than the purely formal ones must be imposed for a theory to be taken seriously as a fundamental description of natural processes (an opinion shared by J. Bell). This request of going beyond the purely formal aspects of a theoretical scheme has been denoted as (the necessity of specifying) the Primitive Ontology (PO) of the theory in an extremely interesting recent paper (Allori et al. 2007, Other Internet Resources). The fundamental requisite of the PO is that it should make absolutely precise what the theory is fundamentally about. This is not a new problem; as already mentioned it has been raised by J. Bell since his first presentation of the GRW theory. Let me summarize the terms of the debate. Given that the wavefunction of a many-particle system lives in a (high-dimensional) configuration space, which is not endowed with a direct physical meaning connected to our experience of the world around us, Bell wanted to identify the ‘local beables’ of the theory, the quantities on which one could base a description of the perceived reality in ordinary three-dimensional space. In the specific context of QMSL, he (Bell 1987 p. 45) suggested that the ‘GRW jumps’, which we called ‘hittings’, could play this role. In fact they occur at precise times in precise positions of the three-dimensional space. As suggested in (Allori et al. 
2007, Other Internet Resources) we will denote this position concerning the PO of the GRW theory as the ‘flash ontology’. However, later, Bell himself suggested that the most natural interpretation of the wavefunction in the context of a collapse theory would be that it describes the ‘density […] of stuff’ in the 3N-dimensional configuration space (Bell 1990, p. 30), the natural mathematical framework for describing a system of N particles. Allori et al. (2007, Other Internet Resources) have appropriately pointed out that this position amounts to avoiding any commitment about the PO of the theory and, consequently, to leaving vague the precise and meaningful connections that can be established between the mathematical description of the unfolding of physical processes and our perception of them.

The interpretation which, in the opinion of the present writer, is most appropriate for collapse theories has been proposed in a series of papers (Ghirardi, Grassi and Benatti 1995; Ghirardi 1997a, 1997b) and has been referred to in Allori et al. 2007 (Other Internet Resources) as ‘the mass density ontology’. Let us briefly describe it. First of all, various investigations (Pearle and Squires 1994) had made clear that QMSL and CSL needed a modification: the characteristic localization frequency of the elementary constituents of matter had to be made proportional to the mass of the particle under consideration. In particular, the original hitting frequency $f = 10^{-16}\ \mathrm{s}^{-1}$ is the one characterizing the nucleons, while, e.g., electrons would suffer hittings with a frequency reduced by a factor of about 2000. Unfortunately we have no space to discuss here the physical reasons which make this choice appropriate; we refer the reader to the above paper, as well as to the detailed analysis by Peruzzi and Rimini (2000). With this modification, what the nonlinear dynamics strives to make ‘objectively definite’ is the mass distribution in the whole universe.

Second, a deep critical reconsideration (Ghirardi, Grassi, and Benatti 1995) has made evident that the concept of ‘distance’ characterizing the Hilbert space is inappropriate for accounting for the similarity or difference between macroscopic situations. Just to give a convincing example, consider three states |h>, |h*> and |t> of a macrosystem (say a massive macroscopic bulk of matter), the first corresponding to its being located here, the second to its having the same location but with one of its atoms (or molecules) in a state orthogonal to the corresponding state in |h>, and the third having exactly the same internal state as the first but being differently located (there). Then, despite the fact that the first two states are indistinguishable from each other at the macrolevel, while the first and the third correspond to completely different and directly perceivable situations, the Hilbert space distance between |h> and |h*> is equal to that between |h> and |t>. When the localization frequency is related to the mass of the constituents, then, in complete generality (i.e., even when one is dealing with a body which is not almost rigid, such as a gas or a cloud), the mechanism leading to the suppression of the superpositions of macroscopically different states is fundamentally governed by the integral of the squared differences of the mass densities associated with the two superposed states.
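The two ingredients just described can be written schematically. The following lines are only a transcription of the prose above, and the proportionality constant in the second relation is deliberately left unspecified, since its precise form depends on the details of the CSL dynamics:

$f_k = f\,\dfrac{m_k}{m_{\mathrm{nucleon}}}$, with $f = 10^{-16}\ \mathrm{s}^{-1}$,

$\Gamma_{\mathrm{suppression}} \;\propto\; \displaystyle\int d^3x\,\big[m'(\mathbf{x}) - m''(\mathbf{x})\big]^2$,

so that an electron, with $m_e/m_{\mathrm{nucleon}} \approx 1/1836$, is hit roughly 2000 times less often than a nucleon, while the suppression of a superposition of two macroscopic configurations with mass densities $m'(\mathbf{x})$ and $m''(\mathbf{x})$ is governed by the integral of the squared difference of those densities.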
Actually, in the original paper (Ghirardi, Grassi and Benatti 1995) the mass density at a point was identified with its average over the characteristic volume of the theory, i.e., $10^{-15}\ \mathrm{cm}^3$ around that point. It is however easy to convince oneself that there is no need to do so (Ghirardi 2007) and that the mass density at any point, directly identified by the statevector (see below), is the appropriate quantity on which to base the ontology. Accordingly, we take the following attitude: what the theory is about, what is real ‘out there’ at a given space point x, is just a field, i.e., a variable m(x,t) given by the expectation value of the mass density operator M(x) at x, obtained by multiplying the mass of each kind of particle by the number density operator for that type of particle and summing over all types of particles which can be present:

(7) $m(\mathbf{x},t) = \langle F,t\,|\,M(\mathbf{x})\,|\,F,t\rangle$, with $M(\mathbf{x}) = \sum_{k} m_k\, a^{*}_{(k)}(\mathbf{x})\, a_{(k)}(\mathbf{x})$.

Here |F,t> is the statevector characterizing the system at the given time, $m_k$ is the mass of a particle of type k, and $a^{*}_{(k)}(\mathbf{x})$ and $a_{(k)}(\mathbf{x})$ are the creation and annihilation operators for a particle of type k at point x. It is obvious that within standard quantum mechanics such a function cannot be endowed with any objective physical meaning, due to the occurrence of linear superpositions which give rise to values that do not correspond to what we find in a measurement process or to what we perceive. In the case of the GRW or CSL theories, if one considers only the states allowed by the dynamics, one can give a description of the world in terms of m(x,t), i.e., one recovers a physically meaningful account of physical reality in the usual 3-dimensional space and time.

To illustrate this crucial point, consider first of all the embarrassing situation of a macroscopic object in a superposition of two differently located position states. We then simply have to recall that in a collapse model relating reductions to mass density differences, the dynamics suppresses such superpositions in extremely short times, recovering the mass distribution corresponding to our perceptions. Let us now come to a microsystem and consider the equal-weight superposition of two states |h> and |t> describing a microscopic particle in two different locations. Such a state gives rise to a mass distribution corresponding to 1/2 of the mass of the particle in each of the two considered space regions. This seems, at first sight, to contradict what is revealed by any measurement process. But in such a case the universal dynamics of GRW ensures that whenever one tries to locate the particle one will always find it in a definite position: one and only one of the Geiger counters which might be triggered by its passage will fire, just because a superposition of ‘a counter which has fired’ and ‘one which has not fired’ is dynamically forbidden. This analysis shows that one can consider at all levels (the microscopic and the macroscopic ones) the field m(x,t) as accounting for ‘what is out there’, as originally suggested by Schrödinger with his realistic interpretation of the square of the wave function of a particle as representing the ‘fuzzy’ character of the mass (or charge) of the particle.
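As a small illustration of equation (7) in the single-particle case just discussed, the following sketch (again purely illustrative; the grid and the choice of the two regions are assumptions) evaluates m(x) = m|psi(x)|^2, which is what (7) reduces to for a one-particle state, and integrates it over the two regions:

import numpy as np

m = 1.0                                   # particle mass (arbitrary units)
x = np.linspace(-50.0, 50.0, 4001)
dx = x[1] - x[0]

# Equal-weight superposition |h> + |t>: two narrow, far-apart packets.
psi = np.exp(-(x + 20.0)**2 / 2.0) + np.exp(-(x - 20.0)**2 / 2.0)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# For a single particle the expectation value of the mass density operator
# is m * |psi(x)|^2.
mass_density = m * np.abs(psi)**2
mass_near_h = np.sum(mass_density[x < 0]) * dx
mass_near_t = np.sum(mass_density[x > 0]) * dx
print(mass_near_h, mass_near_t)           # each is ~ m/2

Each region carries half the mass, which is exactly the ‘fuzzy’ Schrödinger-like picture described above; for a macroscopic body the same quantity, evaluated on the states allowed by the collapse dynamics, is instead concentrated in one region only.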
Obviously, within standard quantum mechanics such a position cannot be maintained because ‘wavepackets diffuse, and with the passage of time become infinitely extended … but however far the wavefunction has extended, the reaction of a detector … remains spotty’, as appropriately remarked in (Bell 1990). As we hope to have made clear, the picture is radically different when one takes into account the new dynamics, which reconciles the spread-out character of the wavefunction with the sharp character of the detection process. It is also extremely important to stress that, by resorting to the quantity (7), one can define an appropriate ‘distance’ between two states as the integral over the whole 3-dimensional space of the square of the difference of m(x,t) for the two given states, a quantity which turns out to be perfectly appropriate to ground the concept of macroscopically similar or distinguishable Hilbert space states. In turn, this distance can be used as a basis to define a sensible psychophysical correspondence within the theory.

12. The Problem of the Tails of the Wave Function

In recent years there has been a lively debate around a problem which, according to some of the authors who have raised it, has its origin in the following fact: even though the localization processes correspond to multiplying the wave function by a Gaussian, and thus lead to wave functions strongly peaked around the position of the hitting, they nevertheless allow the final wave function to be different from zero over the whole of space. The first criticism of this kind was raised by A. Shimony (1990) and can be summarized by his sentence: one should not tolerate tails in wave functions which are so broad that their different parts can be discriminated by the senses, even if very low probability amplitude is assigned to them. After a localization of a macroscopic system, typically the pointer of the apparatus, its centre of mass will be associated with a wave function which is different from zero over the whole of space. If one adopts the probabilistic interpretation of the standard theory, this means that even when the measurement process is over, there is a nonzero (even though extremely small) probability of finding the pointer in an arbitrary position, instead of the one corresponding to the registered outcome. This is taken as unacceptable, as indicating that the DRP does not actually overcome the macro-objectification problem.

Let us state immediately that the (alleged) problem arises entirely from keeping the standard interpretation of the wave function unchanged, in particular from assuming that its modulus squared gives the probability density of the position variable. However, as we have discussed in the previous section, there are much more serious reasons of principle which require one to abandon the probabilistic interpretation and to replace it either with the ‘flash ontology’ or with the ‘mass density ontology’ discussed above. Before entering into a detailed discussion of this subtle point we need to focus the problem better, and we cannot avoid making two remarks. Suppose one adopts, for the moment, the conventional quantum position. We agree that, within such a framework, the fact that wave functions never have strictly compact spatial support can be considered puzzling.
However this is an unavoidable problem arising directly from the mathematical features (spreading of wave functions) and from the probabilistic interpretation of the theory, and not at all a problem peculiar to the dynamical reduction models. Indeed, the fact that, e.g., the wave function of the centre of mass of a pointer or of a table has not a compact support has never been taken to be a problem for standard quantum mechanics. When, e.g., the wave function of a table is extremely well peaked around a given point in space, it has always been accepted that it describes a table located at a certain position, and that this corresponds in some way to our perception of it. It is obviously true that, for the given wave function, the quantum rules entail that if a measurement were performed the table could be found (with an extremely small probability) to be kilometers far away, but this is not the measurement or the macro-objectification problem of the standard theory. The latter concerns a completely different situation, i.e., that in which one is confronted with a superposition with comparable weights of two macroscopically separated wave functions, both of which possess tails (i.e., have non-compact support) but are appreciably different from zero only in far-away narrow intervals. This is the really embarrassing situation which conventional quantum mechanics is unable to make understandable. To which perception of the position of the pointer (of the table) does this wave function correspond? The implications for this problem of the adoption of the QMSL theory should be obvious. Within GRW, the superposition of two states which, when considered individually, are assumed to lead to different and definite perceptions of macroscopic locations, are dynamically forbidden. If some process tends to produce such superpositions, then the reducing dynamics induces the localization of the centre of mass (the associated wave function being appreciably different from zero only in a narrow and precise interval). Correspondingly, the possibility arises of attributing to the system the property of being in a definite place and thus of accounting for our definite perception of it. Summarizing, we stress once more that the criticism about the tails as well as the requirement that the appearance of macroscopically extended (even though extremely small) tails be strictly forbidden is exclusively motivated by uncritically committing oneself to the probabilistic interpretation of the theory, even for what concerns the psycho-physical correspondence: when this position is taken, states assigning non-exactly vanishing probabilities to different outcomes of position measurements should correspond to ambiguous perceptions about these positions. Since neither within the standard formalism nor within the framework of dynamical reduction models a wave function can have compact support, taking such a position leads to conclude that it is just the Hilbert space description of physical systems which has to be given up. It ought to be stressed that there is nothing in the GRW theory which would make the choice of functions with compact support problematic for the purpose of the localizations, but it also has to be noted that following this line would be totally useless: since the evolution equation contains the kinetic energy term, any function, even if it has compact support at a given time, will instantaneously spread acquiring a tail extending over the whole of space. 
If one sticks to the probabilistic interpretation and one accepts the completeness of the description of the states of physical systems in terms of the wave function, the tail problem cannot be avoided. The solution to the tails problem can only derive from abandoning completely the probabilistic interpretation and from adopting a more physical and realistic interpretation relating ‘what is out there’ to, e.g., the mass density distribution over the whole universe. In this connection, the following example will be instructive (Ghirardi, Grassi and Benatti 1995). Take a massive sphere of normal density and mass of about 1 kg. Classically, the mass of this body would be totally concentrated within the radius of the sphere, call it r. In QMSL, after the extremely short time interval in which the collapse dynamics leads to a ‘regime’ situation, and if one considers a sphere with radius r + $10^{-5}$ cm, the integral of the mass density over the rest of space turns out to be an incredibly small fraction (of the order of 1 over 10 to the power $10^{15}$) of the mass of a single proton. In such conditions, it seems quite legitimate to claim that the macroscopic body is localised within the sphere. However, even this quite reasonable position has been questioned: it has been claimed (Lewis 1997) that the very existence of the tails implies that the enumeration principle (i.e., the fact that the claim ‘particle 1 is within this box & particle 2 is within this box & … & particle n is within this box & no other particle is within this box’ implies the claim ‘there are n particles within this box’) does not hold if one takes seriously the mass density interpretation of collapse theories. This paper has given rise to a long debate which it would be inappropriate to reproduce here. We refer the reader to the following papers: Ghirardi and Bassi (1999), Clifton and Monton (1999a, 1999b), Bassi and Ghirardi (1999, 2001). Various arguments have been presented in favour of and against the criticism by Lewis. We conclude this brief analysis by stressing once more that, in the opinion of the present writer, all the disagreements and misunderstandings concerning this problem have their origin in the fact that the idea that the probabilistic interpretation of the wave function must be abandoned has not been fully accepted by the authors who find difficulties in the proposed mass density interpretation of the Collapse Theories. For a recent reconsideration of the problem we refer the reader to the paper by Lewis (2003).

13. The Status of Collapse Models and Recent Positions about them

We recall that, as stated in Section 3, the macro-objectification problem has been at the centre of the most lively and most challenging debate originated by the quantum view of natural processes. According to the majority of those who adhere to the orthodox position, such a problem does not deserve particular attention: classical concepts are a logical prerequisite for the very formulation of quantum mechanics and, consequently, the measurement process itself, the dividing line between the quantum and the classical world, cannot and must not be investigated, but simply accepted. This position has been lucidly summarized by J.
Bell himself (1981): ‘Making a virtue of necessity and influenced by positivistic and instrumentalist philosophies, many came to hold not only that it is difficult to find a coherent picture but that it is wrong to look for one—if not actually immoral then certainly unprofessional.’ The situation has seen many changes in the course of time, and the necessity of making a clear distinction between what is quantum and what is classical has given rise to many proposals for ‘easy solutions’ to the problem, based on the possibility, for all practical purposes (FAPP), of locating the splitting between these two faces of reality at different levels. Then came Bohmian mechanics, a theory which has made clear, in a lucid and perfectly consistent way, that there is no reason of principle requiring a dichotomic description of the world. A universal dynamical principle governs all physical processes and, even though ‘it completely agrees with standard quantum predictions’, it implies wave-packet reduction in micro-macro interactions and the classical behaviour of classical objects. As we have mentioned, the other consistent proposal, at the nonrelativistic level, of a conceptually satisfactory solution of the macro-objectification problem is represented by the Collapse Theories which are the subject of these pages. Contrary to Bohmian mechanics, they are rival theories of quantum mechanics, since they make different predictions (even though ones quite difficult to put into evidence) concerning various physical processes. Let us now analyze some of the recent critical positions concerning the two just mentioned approaches (in what follows I will take advantage of the nice analysis of a paper which I have been asked to referee and of which I do not know the author). Various physicists have criticized Bohm's approach on the basis that, being empirically indistinguishable from quantum mechanics, such an approach is an example of ‘bad science’ or of ‘a degenerate research program’. Needless to say, I do not consider such criticisms appropriate; the conceptual advantages and the internal consistency of the approach render it an extremely appealing theoretical scheme (incidentally, one should not forget that it was precisely the critical investigation of this theory which led Bell to derive his famous and conceptually extremely relevant inequality). This being the situation, one would think that theories like the GRW model would be exempt from an analogous charge, since they actually are (in principle) empirically different from the standard theory. For instance, they disagree with such a theory in that they forbid the occurrence of macroscopic massive entangled states. In spite of this, they have been the object of an analogous attack by the adherents to the ‘new orthodoxy’ (Bub 1997; Joos et al. 1996; Zurek 1993), who point out that environmentally induced decoherence shows that, FAPP, collapse theories are simply phenomenological accounts of the reduced state to which one has to resort since one has no control over the degrees of freedom of the environment. When one takes such a position, one is claiming that, essentially, GRW cannot be taken as a fundamental description of nature, mainly because it suffers from the limitation of being empirically indistinguishable from the standard theory, provided such a theory is correctly applied taking into account the actual physical situation.
Also in this case, and even at the level at which such an analysis is performed, the practical indistinguishability from the standard approach should not be regarded as a sufficient reason not to take collapse models seriously. In fact, there are many well known and compelling reasons (see, e.g., Bassi 2000; Adler 2003) to prefer a logically consistent unified theory to one which makes sense only due to the alleged practical impossibility of detecting the superpositions of macroscopically distinguishable states. At any rate, in principle, such theories can be tested against the standard one. But this is not the whole story. Another criticism, aimed at ‘denying’ the potential interest of collapse theories, makes reference to the fact that within any such theory the ensuing dynamics for the statistical operator can be considered as the reduced dynamics deriving from a unitary (and, consequently, essentially standard quantum) dynamics for the states of an enlarged Hilbert space of a composite quantum system S+E involving, besides the physical system S of interest, an ancilla E whose degrees of freedom are completely inaccessible: due to the quantum dynamical semigroup nature of the evolution equation for the statistical operator, any GRW-like model can always be seen as a phenomenological model deriving from a standard quantum evolution on a larger Hilbert space. In this way, the unitary deterministic evolution characterizing quantum mechanics would be fully restored. Such a critical attitude completely fails to grasp (and indeed purposefully ignores) the most important feature of collapse theories, namely that they deal with individual quantum systems and not with statistical ensembles, and that they yield a perfectly satisfactory description matching our perceptions concerning individual macroscopic systems. Invoking an inaccessible ancilla to account for the nonlinear and stochastic character of GRW-type theories is once more a purely verbal way of avoiding facing the really puzzling aspects of the quantum description of macroscopic systems. Nor is this the only negative aspect of such a position; any attempt which considers it legitimate to introduce inaccessible entities into the theory, when one takes into account that there are infinitely many possible and inequivalent ways of doing so, really amounts to embarking on a ‘degenerate research program’. Other reasons for ignoring the dynamical reduction program have been put forward recently by the community of scientists involved in the interesting and exciting field of quantum information. We will not spend too much time in analyzing and discussing this new position about the foundational issues which have motivated the elaboration of collapse theories. The crucial fact is that, from this perspective, one takes the theory not to be about something real ‘occurring out there’ in a real world, but simply about information. This point is made extremely explicit in a recent paper (Zeilinger 2006): information is the most basic notion of quantum mechanics, and it is information about possible measurement results that is represented in the quantum state. Measurement results are nothing more than states of the classical apparatus used by the experimentalist. The quantum system then is nothing other than the consistently constructed referent of the information represented in the quantum state.
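The ‘quantum dynamical semigroup’ point on which the ancilla criticism rests can be made explicit. What follows is a sketch, in notation of our own choosing, of the statistical-operator equation of a GRW-type model for a single particle, in the standard form found in the collapse literature:

$$\frac{d\rho(t)}{dt}=-\frac{i}{\hbar}\,[H,\rho(t)]-\lambda\Bigl(\rho(t)-\int d^3x\;L_{x}\,\rho(t)\,L_{x}\Bigr),\qquad L_{x}=\Bigl(\frac{\alpha}{\pi}\Bigr)^{3/4}e^{-\frac{\alpha}{2}(\hat{q}-x)^{2}}.$$

This equation is of Lindblad form, and any Lindblad evolution can indeed be obtained by tracing an appropriate unitary evolution on a larger Hilbert space over an ancilla. The reply given above turns precisely on the observation that such a dilation concerns only ensemble averages, whereas the collapse dynamics is formulated for the individual stochastic wave function.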
It is clear that if one takes such an information-theoretic position, almost all motivation to worry about the measurement problem disappears, and with it the reasons to work out what Bell has denoted as ‘an exact version of quantum mechanics’. The most appropriate reply to this type of criticism is to recall that J. Bell (1990) has included ‘information’ among the words which must have no place in a formulation with any pretension to physical precision. In particular he has stressed that one cannot even mention information unless one has given a precise answer to the following two questions: Whose information? and Information about what? A much more serious attitude is to call attention, as many serious authors do, to the fact that since collapse theories represent rival theories with respect to standard quantum mechanics, they lead to the identification of experimental situations which would allow, in principle, crucial tests to discriminate between the two. As we have discussed above, at present such tests do not seem readily feasible, but the analysis we have performed shows that they are not completely out of reach, and they will become feasible as soon as certain technological improvements in dealing with mesoscopic systems become available. We hope to have succeeded in giving a clear picture of the ideas, the implications, the achievements and the problems of the DRP. We conclude by stressing once more our position with respect to the Collapse Theories. Their interest derives entirely from the fact that they have given some hints about a possible way out from the difficulties characterizing standard quantum mechanics, by proving that explicit and precise models can be worked out which agree with all known predictions of the theory and nevertheless allow one, on the basis of a universal dynamics governing all natural processes, to overcome in a mathematically clean and precise way the basic problems of the standard theory. In particular, the Collapse Models show how one can work out a theory that makes it perfectly legitimate to take a macrorealistic position about natural processes, without contradicting any of the experimentally tested predictions of standard quantum mechanics. Finally, they might give precise hints about where to look in order to put into evidence, experimentally, possible violations of the superposition principle. • Adler, S., 2003, “Why Decoherence has not Solved the Measurement Problem: A Response to P. W. Anderson”, Studies in History and Philosophy of Modern Physics, 34: 135. • Adler, S., 2007, “Lower and Upper Bounds on CSL Parameters from Latent Image Formation and IGM Heating”, Journal of Physics, A40: 2935. • Adler, S. and Bassi, A., 2007, “Collapse models with non-white noises”, Journal of Physics, A40: 15083. • –––, 2008, “Collapse models with non-white noises II”, Journal of Physics, A41: 395308. • Adler, S. and Ramazanoglu, F.M., 2007, “Photon emission rate from atomic systems in the CSL model”, Journal of Physics, A40: 13395. • Aicardi, F., Borsellino, A., Ghirardi, G.C., and Grassi, R., 1991, “Dynamic models for state-vector reduction—Do they ensure that measurements have outcomes?”, Foundations of Physics Letters, 4: 109. • Albert, D.Z., 1990, “On the Collapse of the Wave Function”, in Sixty-Two Years of Uncertainty, A. Miller (ed.), Plenum, New York. • –––, 1992, Quantum Mechanics and Experience, Harvard University Press, Cambridge, Mass. • Albert, D.Z. and Vaidman, L., 1989, “On a proposed postulate of state reduction”, Physics Letters, A139: 1.
• Arndt, M, Nairz, O., Vos-Adreae, J., van der Zouw, G. and Zeilinger, A., 1999, “Wave-particle duality of C60 molecules”, Nature, 401: 680. • Bassi, A. and Ferialdi, L., 2009, “Non-Markovian quantum trajectories: An exact result”, Physical Review Letters, 103: 050403. • –––, 2009, “Non-Markovian dynamics for a free quantum particle subject to spontaneous collapse in space: general solution and main properties”, Physical Review, A 80: 012116. • Bassi, A. and Ghirardi, G.C., 1999, “More about dynamical reduction and the enumeration principle”, British Journal for the Philosophy of Science, 50: 719. • –––, 2000, “A general argument against the universal validity of the superposition principle”, Physics Letters, A 275: 373. • –––, 2001, “Counting marbles: Reply to Clifton and Monton”, British Journal for the Philosophy of Science, 52: 125. • –––, 2002, “Dynamical reduction models with general Gaussian noises”, Physical Review A, 65: 042114. • –––, 2003, “Dynamical Reduction Models”, Physics Reports, 379: 257. • –––, 2007, “The Conway-Kochen argument and relativistic GRW models”, to appear in Foundations of Physics . Also quant-phys 0610209 . • Bassi, A., Ippoliti, E. and Adler, S., 2005, “Relativistic Reduction Dynamics”, Foundations of Physics, 41: 686. • Bedingham, D., 2011, “Towards Quantum Superpositions of a Mirror: an Exact Open Systems Analysis”, Journal of Physics, A38: 2715. • Bell, J.S., 1981, “Bertlmann's socks and the nature of reality”, Journal de Physique, Colloque C2, suppl. au numero 3, Tome 42: 41. • –––, 1986, “Six possible worlds of quantum mechanics”, in Proceedings of the Nobel Symposium 65: Possible Worlds in Arts and Sciences, de Gruyter, New York. • –––, 1987, “Are there quantum jumps?”, in Schrödinger—Centenary Celebration of a Polymath, C.W. Kilmister (ed.), Cambridge University Press, Cambridge. • –––, 1989a, “Towards an Exact Quantum mechanics”, in Themes in Contemporary Physics II, S. Deser, R.J. Finkelstein (eds.), World Scientific, Singapore. • –––, 1989b, “The Trieste Lecture of John Stuart Bell”, Journal of Physics, A40: 2919. • –––, 1990, “Against ‘measurement’”, in Sixty-Two Years of Uncertainty, A. Miller (ed.), Plenum, New York. • Benatti, F., Ghirardi, G.C., and Grassi, R., 1995, “Quantum Mechanics with Spontaneous Localization and Experiments”, in Advances in quantum Phenomena, E. Beltrametti et al. (eds), Plenum, New York. • Berndl, K., Duerr, D., Goldstein, S., Zanghi, N., 1996 , “Nonlocality, Lorentz Invariance, and Bohmian Quantum Theory”, Physical Review , A53: 2062. • Bohm, D., 1952, “A suggested interpretation of the quantum theory in terms of hidden variables. I & II.” Physical Review, 85: 166, ibid., 85: 180. • Bohm, D. and Bub, J., 1966, “A proposed solution of the measurement problem in quantum mechanics by a hidden variable theory”, Reviews of Modern Physics, 38: 453. • Born, M., 1971, The Born-Einstein Letters, Walter and Co., New York. • Brown, H.R., 1986, “The insolubility proof of the quantum measurement problem”, Foundations of Physics, 16: 857. • Bub, J., 1997, “Interpreting the Quantum World”, Cambridge University Press, Cambridge. • Busch, P. and Shimony, A., 1996, “Insolubility of the quantum measurement problem for unsharp observables”, Studies in History and Philosophy of Modern Physics, 27B: 397. • Butterfield, J., Fleming, G.N., Ghirardi, G.C., and Grassi, R., 1993, “Parameter dependence in dynamical models for state-vector reduction”, International Journal of Theoretical Physics, 32: 2287. • Clifton, R. 
and Monton, B., 1999a, “Losing your marbles in wavefunction collapse theories”, British Journal for the Philosophy of Science, 50: 697. • –––, 1999b, “Counting marbles with ‘accessible’ mass density: A reply to Bassi and Ghirardi”, British Journal for the Philosophy of Science, 51: 155. • Conway, J. and Kochen, S., 2006, “The Free Will Theorem”, to appear in Foundations of Physics . Also quant-phys 0604079 . • –––, 2006b, “On Adler's Conway Kochen Twin Argument”, quant-phys 0610147 to appear on Foundations of Physics . • –––, 2007, “Reply to Comments of Bassi, Ghirardi and Tumulka on the Free Will Theorem”, quant-phys 0701016 to appear on Foundations of Physics. • Dawker, F. and Herbauts, I., 2004a, “Simulating Causal Collapse Models”, Classical and Quantum Gravity, 21: 2936. • –––, 2004b, “A Spontaneous Collapse Model on a Lattice”, Journal of Statistical Physics, 115: 1394. • d'Espagnat, B., 1971, “Conceptual Foundations of Quantum Mechanics”, W.A. Benjamin, Reading Mass. • Dirac, P.A.M., 1948, Quantum Mechanics, Clarendon Press, Oxford. • Dewdney, C. and Horton, G., 2001, “A non-local, Lorentz-invariant, hidden-variable interpretation of relativistic quantum mechanics based on particle trajectories”, Journal of Physics A, 34: 9871. • Diosi, L., 1990, “Relativistic theory for continuous measurement of quantum fields”, Physical Review A, 42: 5086. • Dürr, D., Goldstein, S., Münch-Berndl, K., Zanghi, N., 1999, “Hypersurface Bohm—Dirac models”, Physical Review, A60: 2729. • Eberhard, P., 1978, “Bell's theorem and different concepts of locality”, Nuovo Cimento, 46B: 392. • Fine, A., 1970, “Insolubility of the quantum measurement problem”, Physical Review, D2: 2783. • Fonda, L., Ghirardi, G.C., and Rimini A., 1973, “Evolution of quantum systems subject to random measurements”, Nuovo Cimento, 18B: 1. • –––, 1978, “Decay theory of unstable quantum systems”, Reports on Progress in Physics, 41: 587. • Fonda, L., Ghirardi, G.C., Rimini, A., and Weber, T., 1973, “Quantum foundations of exponential decay law”, Nuovo Cimento, 15A: 689. • Fu, Q., 1997, “Spontaneous radiation of free electrons in a nonrelativistic collapse model”, Physical Review, A56: 1806. • Gallis, M.R. and Fleming, G.N., 1990, “Environmental and spontaneous localization”, Physical Review, A42: 38. • Gerlich, S., Hackermüller, L., Hornberger, K., Stibor, A., Ulbricht, H., Gring, M., Goldfarb, F., Savas, T., Müri, M., Mayor, M and Arndt, M., 2007, “A Kapitza-Dirac-Talbot-Lau interferometer for highly polarizable molecules”, Nature Physics, 3: 711. • Ghirardi, G.C., 1996, “Properties and events in a relativistic context: Revisiting the dynamical reduction program”, Foundations of Physics Letters, 9: 313. • –––, 1997a, “Quantum Dynamical Reduction and Reality: Replacing Probability Densities with Densities in Real Space”, Erkenntnis, 45: 349. • –––, 1997b, “Macroscopic Reality and the Dynamical Reduction Program”, in Structures and Norms in Science, M.L. Dalla Chiara (ed.), Kluwer, Dordrecht. • –––, 2000, “Local measurements of nonlocal observables and the relativistic reduction process”, Foundations of Physics, 30: 1337. • –––, 2007, “Some reflections inspired by my research activity in quantum mechanics”, Journal of Physics A, 40: 2891. • Ghirardi, G.C. and Bassi, A., 1999, “Do dynamical reduction models imply that arithmetic does not apply to ordinary macroscopic objects”, British Journal for the Philosophy of Science, 50: 49. • Ghirardi, G.C. 
and Grassi, R., 1991, “Dynamical Reduction Models: some General Remarks”, in Nuovi Problemi della Logica e della Filosofia della Scienza, D. Costantini et al. (eds), Editrice Clueb, Bologna. • –––, 1994, “Outcome predictions and property attribution—The EPR argument reconsidered”, Studies in History and Philosophy of Science, 25: 397. • –––, 1996, “Bohm's Theory versus Dynamical Reduction”, in Bohmian Mechanics and Quantum Theory: an Appraisal, J. Cushing et al. (eds), Kluwer, Dordrecht. • Ghirardi, G.C., Grassi, R., and Benatti, F., 1995, “Describing the macroscopic world—Closing the circle within the dynamical reduction program”, Foundations of Physics, 25: 5. • Ghirardi, G.C., Grassi, R., Butterfield, J., and Fleming, G.N., 1993, “Parameter dependence and outcome dependence in dynamic models for state-vector reduction”, Foundations of Physics, 23: 341. • Ghirardi, G.C., Grassi, R., and Pearle, P., 1990a, “Relativistic dynamic reduction models—General framework and examples”, Foundations of Physics, 20: 1271. • –––, 1990b, “Relativistic Dynamical Reduction Models and Nonlocality”, in Symposium on the Foundations of Modern Physics 1990, P. Lahti and P. Mittelstaedt (eds), World Scientific, Singapore. • Ghirardi, G.C., Grassi, R., Rimini, A., and Weber, T., 1988, “Experiments of the Einstein-Podolsky-Rosen type involving CP-violation do not allow faster-than-light communication between distant observers”, Europhysics Letters, 6: 95. • Ghirardi, G.C., Pearle, P., and Rimini, A., 1990, “Markov-processes in Hilbert-space and continuous spontaneous localization of systems of identical particles”, Physical Review, A42: 78. • Ghirardi, G.C. and Rimini, A., 1990, “Old and New Ideas in the Theory of Quantum Measurement”, in Sixty-Two Years of Uncertainty, A. Miller (ed.), Plenum, New York . • Ghirardi, G.C., Rimini, A., and Weber, T., 1980, “A general argument against superluminal transmission through the quantum-mechanical measurement process”, Lettere al Nuovo Cimento, 27: 293. • –––, 1985, “A Model for a Unified Quantum Description of Macroscopic and Microscopic Systems”, in Quantum Probability and Applications, L. Accardi et al. (eds), Springer, Berlin. • –––, 1986, “Unified dynamics for microscopic and macroscopic systems”, Physical Review, D34: 470. • Gisin, N., 1984, “Quantum measurements and stochastic processes”, Physical Review Letters, 52: 1657, and “Reply”, ibid., 53: 1776. • –––, 1989, “Stochastic quantum dynamics and relativity”, Helvetica Physica Acta, 62: 363. • Goldstein, S. and Tumulka, R., 2003, “Opposite arrows of time can reconcile relativity and nonlocality”, Classical and Quantum Gravity, 20: 557. • Goldstein, S., Tausk, D.V., Tumulka, R., and Zanghi, N., 2010, “What does the Free Will Theorem Actually Prove?”, Notice of the American Mathematical Society, 57: 1451. • Gottfried, K., 2000, “Does Quantum Mechanics Carry the Seeds of its own Destruction?”, in Quantum Reflections, D. Amati et al. (eds), Cambridge University Press, Cambridge. • Hackermüller, L., Hornberger, K., Brexger, B., Zeilinger, A. and Arndt, M., 2004, “Decoherence of matter waves by thermal emission of radiation”, Nature, 427: 711. • Jarrett, J.P., 1984, “On the physical significance of the locality conditions in the Bell arguments”, Nous, 18: 569. • Joos, E., Zeh, H.D., Kiefer, C., Giulini, D., Kupsch, J., and Stamatescu, I.-O., 1996, “Decoherence and the Appearance of a Classical World”, Springer, Berlin. 
• Lewis, P., 1997, “Quantum mechanics, orthogonality and counting”, British Journal for the Philosophy of Science, 48: 313. • –––, 2003, “Four strategies for dealing with the counting anomaly in spontaneous collapse theories of quantum mechanics”, International Studies in the Philosophy of Science, 17: 137. • Marshall, W., Simon, C., Penrose, G. and Bouwmeester, D., 2003, “Towards quantum superpositions of a mirror”, Physical Review Letters, 91: 130401. • Maudlin, T., 2011, Quantum Non-Locality and Relativity Wiley-Blackwell. • Nicrosini, O. and Rimini, A., 2003, “Relativistic spontaneous localization: a proposal”, Foundations of Physics, 33: 1061. • Pais, A., 1982, Subtle is the Lord, Oxford University Press, Oxford. • Pearle, P., 1976, “Reduction of statevector by a nonlinear Schrödinger equation”, Physical Review, D13: 857. • –––, 1979, “Toward explaining why events occur”, International Journal of Theoretical Physics, 18: 489 . • –––, 1989, “Combining stochastic dynamical state-vector reduction with spontaneous localization”, Physical Review, A39: 2277. • –––, 1990, “Toward a Relativistic Theory of Statevector Reduction”, in Sixty-Two Years of Uncertainty, A. Miller (ed.), Plenum, New York. • –––, 1999, “Collapse Models”, in Open Systems and measurement in Relativistic Quantum Theory, H.P. Breuer and F. Petruccione (eds.), Springer, Berlin. • –––, 1999b, “Relativistic Collapse Model With Tachyonic Features”, Physical Review, A59: 80. • Pearle, P. and Squires, E., 1994, “Bound-state excitation, nucleon decay experiments, and models of wave-function collapse”, Physical Review Letters, 73: 1. • Penrose, R., 1989, The Emperor's New Mind, Oxford University Press, Oxford. • Peruzzi, G. and Rimini, A., 2000, “Compoundation invariance and Bohmian mechanics”, Foundations of Physics, 30: 1445. • Rae, A.I.M., 1990, “Can GRW theory be tested by experiments on SQUIDs?”, Journal of Physics, A23: 57. • Rimini, A., 1995, “Spontaneous Localization and Superconductivity”, in Advances in Quantum Phenomena, E. Beltrametti et al. (eds.), Plenum, New York. • Schrödinger, E., 1935, “Die gegenwärtige Situation in der Quantenmechanik”, Naturwissenschaften, 23: 807. • Schilpp, P.A. (ed.), 1949, Albert Einstein: Philosopher-Scientist, Tudor, New York. • Shimony, A., 1974, “Approximate measurement in quantum-mechanics. 2”, Physical Review, D9: 2321. • –––, 1983, “Controllable and uncontrollable non-locality”, in Proceedings of the International Symposium on the Foundations of Quantum Mechanics, S. Kamefuchi et al. (eds), Physical Society of Japan, Tokyo. • –––, 1989, “Search for a worldview which can accommodate our knowledge of microphysics”, in Philosophical Consequences of Quantum Theory, J.T. Cushing and E. McMullin (eds), University of Notre Dame Press, Notre Dame, Indiana. • –––, 1990, “Desiderata for modified quantum dynamics”, in PSA 1990, Volume 2, A. Fine, M. Forbes and L. Wessels (eds), Philosophy of Science Association, East Lansing, Michigan. • Squires, E., 1991, “Wave-function collapse and ultraviolet photons”, Physics Letters, A 158: 431. • Stapp, H.P., 1989, “Quantum nonlocality and the description of nature”, in Philosophical Consequences of Quantum Theory, J.T. Cushing and E. McMullin (eds), University of Notre Dame Press, Notre Dame, Indiana. • Suppes, P. and Zanotti, M., 1976, “On the determinism of hidden variables theories with strict correlation and conditional statistical independence of observables”, in Logic and Probability in Quantum Mechanics, P. Suppes (ed.), Reidel, Dordrecht. 
• Tumulka, R., 2006a, “A Relativistic Version of the Ghirardi-Rimini-Weber Model”, Journal of Statistical Physics, 125: 821. • –––, 2006b, “On Spontaneous Wave Function Collapse and Quantum Field Theory”, Proceedings of the Royal Society, London, A462: 1897. • –––, 2006c, “Collapse and Relativity”, in Quantum Mechanics: Are there Quantum Jumps? and On the Present Status of Quantum Mechanics, A. Bassi, D. Dürr, T. Weber and N. Zanghi (eds), AIP Conference Proceedings 844, American Institute of Physics. • –––, 2007, “Comment on The Free Will Theorem”, to appear in Foundations of Physics. Also quant-ph/0611283. • van Fraassen, B., 1982, “The Charybdis of Realism: Epistemological Implications of Bell's Inequality”, Synthese, 52: 25. • Zeilinger, A., 2005, “The message of the quantum”, Nature, 438: 743. • Zurek, W.H., 1993, “Decoherence—A reply to comments”, Physics Today, 46: ???.
Showing posts from November, 2013

Team, Partner and Subject Teaching
In a previous post, "Science and Mathematics Education: What Is the Current Situation?" I mentioned the following: "I have a friend who grew up in Singapore and one major complaint I heard from this person regarding education in the United States is the general lack of subject teachers. Teachers in US schools are assigned to teach an assortment of subjects while in Singapore, apparently, there is a math teacher, a science teacher, a reading teacher even in primary grades." It is assumed that subject teachers are experts on the subject they are assigned to teach. Subject matter experts, of course, are not necessarily more effective teachers, especially in an elementary school. One cannot pluck a chemistry professor from a PhD-granting institution and expect that person to be a stellar teacher of science in a primary school. A practicing scientist, in fact, often has difficulty relating their work to non-scientists. There is subject expertise, but for basic educ…

Equity in Education
The top performing nations in the world in education pride themselves on providing quality education to all. The Organisation for Economic Co-operation and Development (OECD) reports, "The highest performing education systems across OECD countries combine quality with equity". Ironically, most countries look at education as a way to become better than the rest. Education is seen as a tool to get ahead in society. This objective falls so far away from society's main goal of preparing its youngest members. Inequity leads to a school's failure, and society pays heavily for this grave mistake in the future. Common sense dictates that schools with greater needs require more support and attention. Instead, the most effective teachers are attracted to schools with better resources and well prepared students. Facilities are usually updated in elite schools attended by children of the privileged class. Special programs are even provided for children who demonstrate high academic a…

Rebuilding Schools After Yolanda
After rescue and relief, rebuilding comes next. Rebuilding must attempt to mitigate the effects of a typhoon. Otherwise, communities will face the same tragedy when the next typhoon hits. It is also important that the extra measures take into account what these communities really need. The Philippines, with all of its islands, has a significant fraction of its people living in coastal communities. Fishing is a major part of livelihood as well as a source of food. It is foolish, for example, to impose "no-build" zones on coastlines. We need to listen to one of the leaders of fishermen in the Philippines, Salvador France: France said about 10 million Filipinos or roughly 10 percent of the country's population live in coastal areas, the Philippines being an archipelago of 7,101 islands and islets. Declaring coastlines as no-build zones is "stupid," France said. (Business Mirror, 27 November 2013) One may then suggest building homes that can resist both strong winds and storm surg…

Gaming Special Education
Seeing schools accommodate children with special needs or learning disabilities is indeed comforting. The point is to ensure that these children likewise receive the support they need in order to become positive contributors to society. The same standards of career- or college-readiness are therefore applied to special education.
In the US, states provide additional support in terms of staff and resources to schools based on the number of special education students enrolled. These include students with learning disabilities as well as English language learners. Having schools actively identify students with needs is not a bad thing. In fact, it is a good sign that schools are taking disability seriously. Unfortunately, there is a flip side. There are standardized exams which gauge learning outcomes. One of these is the National Assessment of Educational Progress (NAEP). States can exclude special education students from this exam. The current policy of the NAEP (issued in 2010) st…

From Zero to Eight
First came "Early Warning! Why Reading by the End of Third Grade Matters". This report of the Annie E. Casey Foundation made the case for how important reading is for learning. Children learn to read before the age of eight (preschool through third grade), while children read to learn after that. In a new report, "The First Eight Years", the Annie E. Casey Foundation highlights the status of third grade students in the United States. It is not pretty. The results cover cognitive knowledge and skills, and physical well-being. With only 36% scoring respectably in science, math and reading, and only 56% in "excellent" or "very good" health, it is highly likely that there is significant overlap between these groups. Children in poor health are likely to be among those not having the cognitive skills and knowledge required at age 8. Looking at income, it is apparent that children from poor families are much more likely to be behind in all areas. It should…

Diversity in Preschool and Elementary Years

Sometimes, Things Are Really Simple But We Insist on Making Them Complicated
Teaching quantum mechanics to students who have not seen the subject before can be extremely challenging. Take, for example, one of its postulates (a postulate is a statement that is assumed true without proof): $i\hbar\,\frac{\partial\Psi}{\partial t}=\hat{H}\Psi$. This is the time-dependent Schrödinger equation, which describes how a system evolves in time when acted upon by a force or energy. For a situation like this, I try to remind my students of the time they were in kindergarten and the teacher taught them that 1 + 1 = 2. This is no different. Being able to accept nature the way it is can be very difficult, especially when our mind has been conditioned to rationalize all the time. There are building blocks which we must assume as starting material. How we connect or assemble these blocks to create something is indeed a skill, but we must not confuse skills with fundamentals. In General Chemistry, there are likewise fundamental concepts. An example is the Law of Definite Proportions: "A chemical compound always contains exactl…

When and Where Students Acquire Skills
Nowadays, there is an obvious increased emphasis among education reformers on students acquiring skills. There is that favorite phrase "21st Century Skills". A committee from the United States National Academies concluded that these skills can be divided into three categories: cognitive, interpersonal and intrapersonal. Interpersonal skills include teamwork and communication while intrapersonal skills are exemplified by resilience and conscientiousness. Among these skills, conscientiousness has been shown to be most strongly correlated with positive life outcomes: fruitful employment, educational attainment, good health, longer life expectancy, and low criminal behavior.
The following figure compiled by Heckman and Kautz in their recently released working paper, "Fostering and Measuring Skills: Interventions that Improve Character and Cognition", shows that in fact only conscientiousness appears to correlate with job performance in a statistically significant manne…

Not How, But Why?
It was in my senior year that I got introduced to the Greek word telos, which means purpose or goal. It sounded Greek and perhaps complicated, especially in a philosophy class, but I think it is really no different from how a child thinks. Asking why is really common among children. Focusing on the goal often hinders an appreciation and understanding of what is in fact occurring. It is in a way related to an intrinsic desire to reach the finish line without actually going through the race. It is the inherent distaste for delayed gratification. As a result, a procedure involving numerous steps, or progress that occurs in very small increments, becomes very difficult to accept and learn. The obsession with why and not how prompts people to cling to finding a reason before knowing and understanding what just occurred. People are, for instance, quite quick to blame. Here are examples: The desire to arrive at a purpose-based explanation of why something happens is extremely strong. This desire…

Where Have All the Good Teachers Gone?
This is not a bashing of those who are currently in the teaching profession. It is simply a rehash of what I heard from some people during the past week regarding teachers in the United States. It was "Gender Summit" after all in Washington, DC. The gender summit is a conference that discusses how both research and innovation are improved through the inclusion of gender. It is both a celebration of, and a discussion of the remaining challenges regarding, the role of women in science, technology and policy. Throughout my basic education years, clearly more than ninety percent of the teachers I had were female. The situation in the United States is similar. The National Center for Education Statistics in the US reports the following in 2008:

National Assessment of Educational Progress 2013
The results are out. This is the report card for basic education in the United States. The National Assessment of Educational Progress (NAEP), administered every two years, provides a glimpse of how students in America perform in math and reading. This year, 2013, shows incremental improvement in both areas. The scores are a bit better than those in 2011. Still, less than half are deemed proficient. More importantly, the gaps have not been reduced. Highlighting this is the following figure, which shows that only one state (Maine) has reduced the gap between the scores of white and black Americans: One has to go back 10 years to see yellow/orange in the map above. Gaps narrowed in five states during the period 2003-2005. The report includes scores as far back as 1992. Using the gaps then, the following states have shown improvement: There are 16 states here that have narrowed the gap. This shows that states have done a better job during the decade 1992-2002 than the most recent one in red…

Parallels between Disasters and Basic Education
For the Philippines, there are indeed similarities between how the country is affected by typhoons and the current predicament of its basic education system. Both have been perennially plaguing the country and both seem to be insurmountable challenges. The parallels go even further than this.
Take the following as an example: Here is an article from the Philippine Center for Investigative Journalism in 2006: And here is a recent column from the Philippine Star: At least, the one from basic education is only about fudging pupil-to-classroom ratios. With the disaster, the number of human lives lost is being manipulated. There are other features disasters and schools share in common. One is trying to get credit for building schools and providing relief aid. In front of a school building we may find, for example, the name of a Philippine politician (the specific name of the congressman is removed here for the sake of fairness since this practice is really widespread in the Philippines) tak…

A Lesson We All Need to Read and Learn
Almost a year ago, a category 5 typhoon packing sustained winds of 175 mph hit the southern island of the Philippines. The typhoon, internationally known as Bopha, was locally called "Pablo". While the devastation from the recent super typhoon Yolanda was attributed to storm surge, Pablo destroyed homes in Mindanao with rainfall that triggered landslides. Unlike Yolanda, Pablo did not receive ample media coverage. Patrick Fuller of CNN, in "Two months on, Typhoon Bopha's victims still homeless", wrote: ...Bopha didn't get much traction in the international media. Competing against Syria for the headlines, the story appeared to drop off TV screens within days. With scant media coverage, the job of NGO fundraisers was made even more difficult. Barely any British NGOs launched public appeals in the full knowledge that levels of public sympathy just weren't high enough. But if a category 5 super typhoon -- the largest on the scale -- does not warrant donor atten…
EPSRC
Details of Grant
EPSRC Reference: EP/J01690X/1
Title: Beyond Luttinger Liquids - spin-charge separation at high excitation energies
Principal Investigator: Ford, Professor CJB
Other Investigators: Ritchie, Professor D
Researcher Co-Investigators:
Project Partners:
Department: Physics
Organisation: University of Cambridge
Scheme: Standard Research
Starts: 15 October 2012 Ends: 14 April 2015
Value (£): 356,987
EPSRC Research Topic Classifications: Condensed Matter Physics
EPSRC Industrial Sector Classifications: No relevance to Underpinning Sectors
Related Grants:
Panel History: Panel Date: 09 Feb 2012 | Panel Name: EPSRC Physical Sciences Physics - February | Outcome: Announced

Summary on Grant Application Form
It is an astonishing fact that although an isolated electron is, as far as we can tell, indivisible, a collection of electrons constrained to move only in a narrow wire appears to dissociate into two new types of particle. These two particles carry separately the magnetism (or spin) of the electron and its electric charge, and are called spinons and holons. They form the building blocks of a new state of matter known as a Tomonaga-Luttinger liquid. For decades our understanding of this Luttinger liquid has been entirely theoretical, resting on simplified models of how electrons behave, since even with the world's most powerful computers we are unable to solve exactly the behaviour of more than a handful of electrons - such is the complexity of the many-electron Schrödinger equation. Advances in semiconductor physics have made it possible in recent years to set up the necessary conditions to create a Luttinger liquid and observe the phenomenon of spin-charge separation directly. This we achieved in 2009 in a collaboration that brought together the experimentalist and theorist who are the principal investigators on this proposal. The experiment worked by injecting electrons into an array of wires (via quantum mechanical tunnelling) and mapping out where they subsequently go by varying the magnetic field and voltage. Though the experiment was a success, it raised a number of intriguing questions - only with the experimental results in front of us could we see the shortcomings of current theory. It is those questions that underpin this proposal. The most surprising observation is that, while the approximate theories that predict spin-charge separation are only valid for the lowest-energy excitations, we saw hints in the experiment that spin-charge separation extends to higher energies. The key question is: how high in energy can we track the spinon and holon? If they are unusually stable, then what causes this stability and can we understand it mathematically? Also, the theories all assume the wires are infinitely long. Our proposal involves studying a range of lengths to address how the excitations are influenced by the ends of the wire when it is short. That may be the vital step necessary to explain the 15-year-old mystery of the "0.7" step-like feature in the conductance of quantum wires. At the heart of this proposal are an improved device for measuring spin-charge separation, and recent theoretical ideas that develop mathematical machinery to allow us to calculate properties away from the low-energy limit of narrow wires. This theory needs to be related to the new tunnelling experiment of the proposal. Our new devices will also allow two new types of experiment to be undertaken.
We will measure the tunnelling both into and out of a one-dimensional wire, from which it is possible to understand how the novel excitations relax back to equilibrium. We will also measure the drag forces between two 1D wires, which again will help characterise the distinct spinon and holon properties. There are preliminary theoretical predictions for both experiments, which we will test. The implications of the proposal extend beyond the boundaries of the Luttinger-liquid state. Other types of metal (so-called "bad metals") also show, at high temperatures, properties that naively belong only at low energies and temperatures. If we can understand how this works in the one-dimensional Luttinger liquid (where typically we have more mathematical techniques to deploy) it could point to a solution of that much harder problem. Similarly, the techniques of manipulating very narrow wires and stabilising their unusual quantum properties are also what would be required to make a proposed type of quantum computer. Like the Luttinger liquid, the wires in question also have very unusual excitations, but these have been constructed to be robust at high temperatures through a type of topological protection reminiscent of that which prevents a Möbius strip from unwinding.
Organisation Website: http://www.cam.ac.uk
Causal Interpretation of the Quantum Harmonic Oscillator

The harmonic oscillator is an important model in quantum theory that can be described by the Schrödinger equation $i\hbar\,\partial_t\psi(x,t)=-\frac{\hbar^2}{2m}\,\partial_x^2\psi(x,t)+\frac{1}{2}m\omega^2x^2\,\psi(x,t)$, with $\omega$ the oscillator frequency. In this Demonstration a causal interpretation of this model is applied. A stable (nondispersive) wave packet can be constructed by a superposition of stationary eigenfunctions of the harmonic oscillator. The solution is a wave packet in $(x,t)$ space whose center oscillates harmonically between $\pm a$ with frequency $\omega$. From the wavefunction in the eikonal representation $\psi=R\,e^{iS/\hbar}$, the gradient of the phase function $S$, and therefore the equation of motion $\dot{x}=\partial_xS/m$, can be calculated analytically. The motion is given by $x(t)=x_0+a(\cos\omega t-1)$ for a packet centered at $a$ at $t=0$, where the $x_0$ are the initial starting points. The trajectories of the particles oscillate with amplitude $a$ and frequency $\omega$ and they never cross. In practice, it is impossible to predict or control the quantum trajectories with complete precision. The effective potential is the sum of the quantum potential (QP) $Q=-\frac{\hbar^2}{2m}\,\frac{\partial_x^2R}{R}$ and the potential $V=\frac{1}{2}m\omega^2x^2$, which leads to the time-dependent quantum force. On the right side, the graphic shows the squared wavefunction and the trajectories. The left side shows the particles' positions, the squared wavefunction (blue), the quantum potential (red), the potential (black), and the velocity (green). The quantum potential and the velocity are scaled down.

Contributed by: Klaus von Bloh (March 2011). Open content licensed under CC BY-NC-SA.

P. Holland, The Quantum Theory of Motion, Cambridge, England: Cambridge University Press, 1993.
D. Bohm, Quantum Theory, New York: Prentice–Hall, 1951.
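The trajectory calculation just described can also be reproduced numerically. The following Python sketch is not the Demonstration's Mathematica source; it assumes units ħ = m = ω = 1, a coherent (displaced-Gaussian) packet with an assumed initial displacement A, and integrates the guidance equation dx/dt = Im(∂xψ/ψ) for several starting points. Under these assumptions the computed trajectories should match the analytic form x(t) = x0 + A(cos t − 1).

```python
import numpy as np

# Units hbar = m = omega = 1 (assumption for this sketch).
A = 2.0  # initial displacement of the packet center (assumed value)

def psi(x, t):
    """Coherent-state wavefunction of the harmonic oscillator.

    Standard closed form for a displaced ground-state Gaussian whose
    center follows x_c(t) = A*cos(t); only the x-dependent phase matters
    for the guidance equation.
    """
    xc = A * np.cos(t)
    phase = -(0.5 * t + A * x * np.sin(t) - 0.25 * A**2 * np.sin(2 * t))
    return np.pi**-0.25 * np.exp(-0.5 * (x - xc)**2 + 1j * phase)

def bohm_velocity(x, t, dx=1e-5):
    """Guidance equation v = Im( (d psi/dx) / psi ), derivative by central difference."""
    dpsi = (psi(x + dx, t) - psi(x - dx, t)) / (2 * dx)
    return np.imag(dpsi / psi(x, t))

def trajectories(x0_list, t_max=4 * np.pi, dt=1e-3):
    """Integrate the guidance equation with a simple midpoint (RK2) scheme."""
    ts = np.arange(0.0, t_max, dt)
    xs = np.empty((len(ts), len(x0_list)))
    xs[0] = x0_list
    for i, t in enumerate(ts[:-1]):
        x = xs[i]
        k1 = bohm_velocity(x, t)
        k2 = bohm_velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
        xs[i + 1] = x + dt * k2
    return ts, xs

if __name__ == "__main__":
    starts = np.linspace(A - 1.5, A + 1.5, 7)   # initial positions around the packet center
    ts, xs = trajectories(starts)
    analytic = starts[None, :] + A * (np.cos(ts)[:, None] - 1.0)
    print("max deviation from x0 + A(cos t - 1):", np.max(np.abs(xs - analytic)))
```

Because the Bohmian velocity field of a coherent state is spatially uniform, the trajectories keep their initial offsets and never cross, which is exactly the behaviour the Demonstration displays.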
Science vs. Religion

For much of today's world, science and religion are seen as opposing forces, mutually exclusive and entirely incompatible. Some of today's scientists identify themselves as atheists or agnostics and appear to think that a belief in God signifies ignorance in a person. Likewise, some religious groups decry science as the unholy work of the devil and view scientists as agents of Satan. I propose that both of these views are equally incorrect. How has such a strange and destructive opinion become so widely accepted? Saying these two concepts are incompatible would be like saying that 19 and purple are incompatible. They are very different concepts and either can exist with or without the other. They are neither compatible nor incompatible. However, they coexist and they do intersect, just as purple and 19 do. It is quite reasonable for a person to look at a basket of eggplants and say, "Look, there are 19 purple eggplants in the basket." This statement does not affect the concepts of 19 or purple in any way, yet they can work together to better define another concept, that lovely basket of eggplants. Purple and 19 can be used to define a basket of eggplants, just as religion and science can define our life and universe. A Catholic priest, Monsignor Georges Lemaître, proposed the Big Bang theory and described it as "the Cosmic Egg exploding at the moment of the creation". The vast number of new ideas that were discovered in the last century, and the different types of discoveries, have made it virtually impossible for the everyday person to understand them or even to be aware of them. In the 17th century Robert Hooke discovered the cellular structure of living things. An average person with a little education could understand a discovery like this. In the 18th century Benjamin Franklin discovered that lightning is electrical and Edward Jenner developed the smallpox vaccine; again, discoveries that most people could grasp. These were visible things that could be seen and applied to life. By the 19th century the discoveries were beginning to become more technical and difficult for the average person. Alessandro Volta discovered the electrochemical series and invented the battery at a time when very few people had any idea that electricity was anything but a lightning bolt. Marie Curie discovered polonium and radium, and coined the term "radioactivity", which was also a completely foreign concept to the average person of her day. In the 20th century new ideas and concepts were becoming so esoteric that very few people understood even the words used to describe them, much less the concepts. Albert Einstein published his theory of special relativity, Heike Kamerlingh Onnes discovered superconductivity, Niels Bohr came up with his model of the atom, and Erwin Schrödinger devised the Schrödinger equation, the start of quantum mechanics. And these are just a few of the thousands of discoveries that we saw in those one hundred years. Now, in the 21st century, we hear things like: the Higgs boson has finally been found at CERN (confirmed to 99.999% certainty). Who is this "Higgs" and how did he lose his boson? A substantial number of people would think that was a serious question.
The discoveries made by theoreticians and researchers now are completely meaningless to the average person until engineers turn the new idea into a physical product that can be purchased and used, and many of them will never be seen even if they are in everyday consumer products. It would be very interesting to see how marketers could spin the Higgs boson to pitch it as the key to a new or better product. Buy our new product with ten percent more bosons! At the same time that the discoveries were becoming too complex for most people to understand, the media was becoming more interested in reporting about them. Unfortunately, the media had no more understanding of the new concepts than the average person, and they took no time to try to learn about them. They typically take excerpts from a technical paper published for other scientists and spin their own story in a manner designed to attract more readers. This has resulted in some very strange public impressions of scientific discoveries, ranging from the widespread news stories a hundred years ago that our ancestors were apes to the recent news stories about the discovery of a "god" particle. Science focuses on what things are made of and how they work, while religion focuses on why people and things exist and why they have meaning. The media mixed those two separate purposes into a confusing blend that angers both groups, religious and scientific. Darwin – "Generally the term (species) includes the unknown element of a distinct act of creation"

About justjoe: Reader, writer and retired entrepreneur. Enjoying life! This entry was posted in my stories.
Evolution Equations & Control Theory
September 2013, Volume 2, Issue 3

Carleman Estimates and null controllability of coupled degenerate systems
El Mustapha Ait Ben Hassi, Farid Ammar khodja, Abdelkarim Hajjaj and Lahcen Maniar
2013, 2(3): 441-459, doi: 10.3934/eect.2013.2.441
In this paper, we study the null controllability of weakly degenerate parabolic systems with two different diffusion coefficients and one control force. To achieve this aim, we had to develop new global Carleman estimates for a degenerate parabolic equation, with weight functions different from the ones of [2], [10] and [31].

Asymptotics for a second order differential equation with a linear, slowly time-decaying damping term
Alain Haraux and Mohamed Ali Jendoubi
2013, 2(3): 461-470, doi: 10.3934/eect.2013.2.461
A gradient-like property is established for second order semilinear conservative systems in presence of a linear damping term which is asymptotically weak for large times. The result is obtained under the condition that the only critical points of the potential are absolute minima. The damping term may vanish on large intervals for arbitrarily large times and tends to $0$ at infinity, but not too fast (in a non-integrable way). When the potential satisfies an adapted, uniform, Łojasiewicz gradient inequality, convergence to equilibrium of all bounded solutions is shown, with examples in both analytic and non-analytic cases.

Traction, deformation and velocity of deformation in a viscoelastic string
Luciano Pandolfi
2013, 2(3): 471-493, doi: 10.3934/eect.2013.2.471
In this paper we consider a viscoelastic string whose deformation is controlled at one end. We study the relations and the controllability of the couples traction/velocity and traction/deformation, and we show that the first couple behaves very much as in the purely elastic case, while new phenomena appear when studying the couple of the traction and the deformation. Namely, while traction and velocity are independent (for large time), traction and deformation are related at each time, but the relation is not so strict. In fact we prove that an arbitrary number of "Fourier" components of the traction and, independently, of the deformation can be assigned at any time.

Optimal shape control of airfoil in compressible gas flow governed by Navier-Stokes equations
Pavel I. Plotnikov and Jan Sokolowski
2013, 2(3): 495-516, doi: 10.3934/eect.2013.2.495
The flow around a rigid obstacle is governed by the compressible Navier-Stokes equations. The nonhomogeneous Dirichlet problem is considered in a bounded domain in two spatial dimensions with a compact obstacle in its interior. The flight of the airfoil is characterized by the work shape functional, to be minimized over a family of admissible obstacles. The lift of the airfoil is a given function of the temporal variable and should be maintained close to the flight scenario. The continuity of the work functional with respect to the shape of the obstacle in two spatial dimensions is shown for a wide class of admissible obstacles compact with respect to the Kuratowski-Mosco convergence.
The dependence of small perturbations of approximate solutions to the governing equations with respect to the boundary variations of obstacles is analyzed for the nonstationary state equation.

On singular limit of a nonlinear $p$-order equation related to Cahn-Hilliard and Allen-Cahn evolutions
Cristina Pocci
2013, 2(3): 517-530, doi: 10.3934/eect.2013.2.517
In this paper we consider a geometric motion associated with the minimization of a functional which is the sum of a kinetic part of $p$-Laplacian type, a double well potential $\psi$ and a curvature term. In the case $p=2$, such a functional arises in connection with the image segmentation problem in computer vision theory. By means of matched asymptotic expansions, we show that the geometric motion can be approximated by the evolution of the zero level set of the solution of a nonlinear $p$-order equation. The singular limit depends in a complex way on the mean and Gaussian curvatures and the surface Laplacian of the mean curvature of the evolving front.

Energy methods for Hartree type equations with inverse-square potentials
Toshiyuki Suzuki
2013, 2(3): 531-542, doi: 10.3934/eect.2013.2.531
Nonlinear Schrödinger equations with nonlocal nonlinearities described by integral operators are considered. This generalizes usual Hartree type equations (HE)$_{0}$. We construct weak solutions to (HE)$_{a}$, $a\neq 0$, even if the kernel is of non-convolution type. The advantage of our methods is the applicability to the problem with strongly singular potential $a|x|^{-2}$ as a term in the linear part and with critical nonlinearity.

On the structural properties of an efficient feedback law
Ambroise Vest
2013, 2(3): 543-556, doi: 10.3934/eect.2013.2.543
We investigate some structural properties of an efficient feedback law that stabilizes linear time-reversible systems with an arbitrarily large decay rate. After giving a short proof of the generation of a group by the closed-loop operator, we focus on the domain of the infinitesimal generator in order to illustrate the difference between a distributed control and a boundary control, the latter being technically more complex. We also give a new proof of the exponential decay of the solutions and we provide an explanation of the higher decay rate observed in some experiments.
Richard P. Feynman

Richard Phillips Feynman (May 11, 1918 – February 15, 1988) was an American theoretical physicist.

Born: May 11, 1918, Manhattan, New York. Died: February 15, 1988 (aged 69), Los Angeles, California. Residence: United States. Nationality: American. Fields: Theoretical physics. Institutions: Manhattan Project, Cornell University, California Institute of Technology. Alma mater: Massachusetts Institute of Technology (B.S.), Princeton University (Ph.D.). Doctoral advisor: John Archibald Wheeler. Other academic advisors: Manuel Sandoval Vallarta. Doctoral students: F. L. Vernon, Jr.,[1] Willard H. Wells,[1] Al Hibbs,[1] George Zweig,[1] Giovanni Rossi Lomanitz,[1] Thomas Curtright.[1] Other notable students: Douglas D. Osheroff, Robert Barro, W. Daniel Hillis. Influences: Paul Dirac. Influenced: Freeman Dyson. Notable awards: Albert Einstein Award (1954), E. O. Lawrence Award (1962), Nobel Prize in Physics (1965), Oersted Medal (1972), National Medal of Science (1979). Spouses: Arline Greenbaum (m. 1941–45; her death), Mary Louise Bell (m. 1952–54), Gweneth Howarth (m. 1960–88; his death).

He was the father of Carl Feynman and adoptive father of Michelle Feynman. He was the brother of Joan Feynman. He assisted in the development of the atomic bomb during World War II and became known to a wide public in the 1980s as a member of the Rogers Commission, the panel that investigated the Space Shuttle Challenger disaster. In addition to his work in theoretical physics, Feynman has been credited with pioneering the field of quantum computing[4][5] and introducing the concept of nanotechnology.[6] He held the Richard Chace Tolman professorship in theoretical physics at the California Institute of Technology. Feynman was a keen popularizer of physics through both books and lectures, notably a 1959 talk on top-down nanotechnology called There's Plenty of Room at the Bottom, and the three-volume publication of his undergraduate lectures, The Feynman Lectures on Physics. Feynman also became known through his semi-autobiographical books Surely You're Joking, Mr. Feynman! and What Do You Care What Other People Think? and books written about him, such as Tuva or Bust!.

Early life

Richard Phillips Feynman was born on May 11, 1918, in New York City,[7][8] the son of Lucille (née Phillips), a homemaker, and Melville Arthur Feynman, a sales manager.[9] His family originated from Russia and Poland; both of his parents were Ashkenazi Jews.[10] They were not religious, and by his youth Feynman described himself as an "avowed atheist".[11] Feynman was a late talker, and by his third birthday had yet to utter a single word. The young Feynman was heavily influenced by his father, who encouraged him to ask questions to challenge orthodox thinking, and who was always ready to teach Feynman something new. From his mother he gained the sense of humor that he had throughout his life. As a child, he had a talent for engineering, maintained an experimental laboratory in his home, and delighted in repairing radios.
When he was in grade school, he was able to build a home burglar alarm while his parents were out for the day running errands.[12] When Richard was five years old, his mother gave birth to a younger brother, but this brother died at four weeks of age.[9] Four years later, Richard gained a sister, Joan, and the family moved to Far Rockaway, Queens.[9] Though separated by nine years, Joan and Richard were close, as they both shared a natural curiosity about the world. Their mother thought that women did not have the cranial capacity to comprehend such things. Despite their mother's disapproval of Joan's desire to study astronomy, Richard encouraged his sister to explore the universe. She eventually became an astrophysicist specializing in interactions between the Earth and the solar wind.[13] In high school, his IQ was determined to be 125—high, but "merely respectable" according to biographer James Gleick.[14] In 1933, when he turned 15, he taught himself trigonometry, advanced algebra, infinite series, analytic geometry, and both differential and integral calculus.[15] Before entering college, he was experimenting with and re-creating mathematical topics such as the half-derivative using his own notation. In high school he was developing the mathematical intuition behind his Taylor series of mathematical operators.[16] His habit of direct characterization sometimes rattled more conventional thinkers; for example, one of his questions, when learning feline anatomy, was "Do you have a map of the cat?" (referring to an anatomical chart). Feynman attended Far Rockaway High School, a school also attended by fellow laureates Burton Richter and Baruch Samuel Blumberg.[17] A member of the Arista Honor Society, in his last year in high school Feynman won the New York University Math Championship; the large difference between his score and those of his closest competitors shocked the judges.[18] He applied to Columbia University but was not accepted.[9] Instead, he attended the Massachusetts Institute of Technology, where he received a bachelor's degree in 1939 and in the same year was named a Putnam Fellow. He attained a perfect score on the graduate school entrance exams to Princeton University in mathematics and physics—an unprecedented feat—but did rather poorly on the history and English portions.[18] Attendees at Feynman's first seminar included Albert Einstein, Wolfgang Pauli, and John von Neumann. He received a Ph.D. from Princeton in 1942; his thesis advisor was John Archibald Wheeler. Feynman's thesis applied the principle of stationary action to problems of quantum mechanics, inspired by a desire to quantize the Wheeler–Feynman absorber theory of electrodynamics, laying the groundwork for the "path integral" approach and Feynman diagrams, and was titled "The Principle of Least Action in Quantum Mechanics".

The Manhattan Project

At Princeton, the physicist Robert R. Wilson encouraged Feynman to participate in the Manhattan Project—the wartime U.S. Army project at Los Alamos developing the atomic bomb. Feynman said he was persuaded to join this effort to build the bomb before Nazi Germany developed their own. He was assigned to Hans Bethe's theoretical division and impressed Bethe enough to be made a group leader. He and Bethe developed the Bethe–Feynman formula for calculating the yield of a fission bomb, which built upon previous work by Robert Serber.
He immersed himself in work on the project, and was present at the Trinity bomb test. Feynman claimed to be the only person to see the explosion without the very dark glasses or welder's lenses provided, reasoning that it was safe to look through a truck windshield, as it would screen out the harmful ultraviolet radiation. As a junior physicist, he was not central to the project. The greater part of his work was administering the computation group of human computers in the theoretical division (one of his students there, John G. Kemeny, later went on to co-design and co-specify the programming language BASIC). Later, with Nicholas Metropolis, he assisted in establishing the system for using IBM punched cards for computation. Feynman was sought out by physicist Niels Bohr for one-on-one discussions. He later discovered the reason: most of the other physicists were too much in awe of Bohr to argue with him. Feynman had no such inhibitions, vigorously pointing out anything he considered to be flawed in Bohr's thinking. Feynman said he felt as much respect for Bohr as anyone else, but once anyone got him talking about physics, he would become so focused he forgot about social niceties. Due to the top secret nature of the work, Los Alamos was isolated. In Feynman's own words, "There wasn't anything to do there". Bored, he indulged his curiosity by learning to pick the combination locks on cabinets and desks used to secure papers. Feynman played many jokes on colleagues. In one case he found the combination to a locked filing cabinet by trying the numbers he thought a physicist would use (it proved to be 27–18–28 after the base of natural logarithms, e = 2.71828…), and found that the three filing cabinets where a colleague kept a set of atomic bomb research notes all had the same combination.[16] He left a series of notes in the cabinets as a prank, which initially spooked his colleague, Frederic de Hoffmann, into thinking a spy or saboteur had gained access to atomic bomb secrets. On several occasions, Feynman drove to Albuquerque to see his ailing wife in a car borrowed from Klaus Fuchs, who was later discovered to be a real spy for the Soviets, transporting nuclear secrets in his car to Santa Fe. On occasion, Feynman would find an isolated section of the mesa where he could drum in the style of American natives; "and maybe I would dance and chant, a little". These antics did not go unnoticed, and rumors spread about a mysterious Indian drummer called "Injun Joe". He also became a friend of the laboratory head, J. Robert Oppenheimer, who unsuccessfully tried to court him away from his other commitments after the war to work at the University of California, Berkeley. Feynman alludes to his thoughts on the justification for getting involved in the Manhattan project in The Pleasure of Finding Things Out. He felt the possibility of Nazi Germany developing the bomb before the Allies was a compelling reason to help with its development for the U.S. He goes on to say, however, that it was an error on his part not to reconsider the situation once Germany was defeated. In the same publication, Feynman also talks about his worries in the atomic bomb age, feeling for some considerable time that there was a high risk that the bomb would be used again soon, so that it was pointless to build for the future. Later he describes this period as a "depression." 
Early academic career After the war, Feynman declined an offer from the Institute for Advanced Study in Princeton, New Jersey, despite the presence there of such distinguished faculty members as Albert Einstein, Kurt Gödel and John von Neumann. Feynman followed Hans Bethe, instead, to Cornell University, where Feynman taught theoretical physics from 1945 to 1950.[16] During a temporary depression following the destruction of Hiroshima by the bomb produced by the Manhattan Project, he focused on complex physics problems, not for utility, but for self-satisfaction. One of these was analyzing the physics of a twirling, nutating dish as it is moving through the air. His work during this period, which used equations of rotation to express various spinning speeds, proved important to his Nobel Prize-winning work, yet because he felt burned out and had turned his attention to less immediately practical problems, he was surprised by the offers of professorships from other renowned universities.[16] Despite yet another offer from the Institute for Advanced Study, Feynman rejected the Institute on the grounds that there were no teaching duties: Feynman felt that students were a source of inspiration and teaching was a diversion during uncreative spells. Because of this, the Institute for Advanced Study and Princeton University jointly offered him a package whereby he could teach at the university and also be at the institute. Feynman instead accepted an offer from the California Institute of Technology (Caltech)—and as he says in his book Surely You're Joking Mr. Feynman!—because a desire to live in a mild climate had firmly fixed itself in his mind while he was installing tire chains on his car in the middle of a snowstorm in Ithaca. Feynman has been called the "Great Explainer".[20] He gained a reputation for taking great care when giving explanations to his students and for making it a moral duty to make the topic accessible. His guiding principle was that, if a topic could not be explained in a freshman lecture, it was not yet fully understood. Feynman gained great pleasure [21] from coming up with such a "freshman-level" explanation, for example, of the connection between spin and statistics. What he said was that groups of particles with spin ½ "repel", whereas groups with integer spin "clump." This was a brilliantly simplified way of demonstrating how Fermi–Dirac statistics and Bose–Einstein statistics evolved as a consequence of studying how fermions and bosons behave under a rotation of 360°. This was also a question he pondered in his more advanced lectures, and to which he demonstrated the solution in the 1986 Dirac memorial lecture.[22] In the same lecture, he further explained that antiparticles must exist, for if particles had only positive energies, they would not be restricted to a so-called "light cone." Caltech years Feynman did significant work while at Caltech, including research in: • Quantum electrodynamics. The theory for which Feynman won his Nobel Prize is known for its accurate predictions.[24] This theory was begun in the earlier years during Feynman's work at Princeton as a graduate student and continued while he was at Cornell. This work consisted of two distinct formulations, and it is a common error to confuse them or to merge them into one. The first is his path integral formulation, and the second is the formulation of his Feynman diagrams. 
Both formulations contained his sum over histories method in which every possible path from one state to the next is considered, the final path being a sum over the possibilities (also referred to as sum-over-paths).[25] For a number of years he lectured to students at Caltech on his path integral formulation of quantum theory. The second formulation of quantum electrodynamics (using Feynman diagrams) was specifically mentioned by the Nobel committee. The logical connection with the path integral formulation is interesting. Feynman did not prove that the rules for his diagrams followed mathematically from the path integral formulation. Some special cases were later proved by other people, but only in the real case, so the proofs don't work when spin is involved. The second formulation should be thought of as starting anew, but guided by the intuitive insight provided by the first formulation. Freeman Dyson published a paper in 1949 which, among many other things, added new rules to Feynman's which told how to actually implement renormalization. Students everywhere learned and used the powerful new tool that Feynman had created. Eventually computer programs were written to compute Feynman diagrams, providing a tool of unprecedented power. It is possible to write such programs because the Feynman diagrams constitute a formal language with a grammar. • Physics of the superfluidity of supercooled liquid helium, where helium seems to display a complete lack of viscosity when flowing. Feynman provided a quantum-mechanical explanation for the Soviet physicist Lev D. Landau’s theory of superfluidity.[18] Applying the Schrödinger equation to the question showed that the superfluid was displaying quantum mechanical behavior observable on a macroscopic scale. This helped with the problem of superconductivity; however, the solution eluded Feynman.[26] It was solved with the BCS theory of superconductivity, proposed by John Bardeen, Leon Neil Cooper, and John Robert Schrieffer. • A model of weak decay, which showed that the current coupling in the process is a combination of vector and axial currents (an example of weak decay is the decay of a neutron into an electron, a proton, and an anti-neutrino). Although E. C. George Sudarshan and Robert Marshak developed the theory nearly simultaneously, Feynman's collaboration with Murray Gell-Mann was seen as seminal because the weak interaction was neatly described by the vector and axial currents. It thus combined the 1933 beta decay theory of Enrico Fermi with an explanation of parity violation. From his diagrams of a small number of particles interacting in spacetime, Feynman could then model all of physics in terms of the spins of those particles and the range of coupling of the fundamental forces.[29] Feynman attempted an explanation of the strong interactions governing nucleons scattering called the parton model. The parton model emerged as a complement to the quark model developed by his Caltech colleague Murray Gell-Mann. The relationship between the two models was murky; Gell-Mann referred to Feynman's partons derisively as "put-ons". In the mid-1960s, physicists believed that quarks were just a bookkeeping device for symmetry numbers, not real particles, as the statistics of the Omega-minus particle, if it were interpreted as three identical strange quarks bound together, seemed impossible if quarks were real. 
The Stanford linear accelerator deep inelastic scattering experiments of the late 1960s showed, analogously to Ernest Rutherford's experiment of scattering alpha particles on gold nuclei in 1911, that nucleons (protons and neutrons) contained point-like particles which scattered electrons. It was natural to identify these with quarks, but Feynman's parton model attempted to interpret the experimental data in a way which did not introduce additional hypotheses. For example, the data showed that some 45% of the energy momentum was carried by electrically-neutral particles in the nucleon. These electrically-neutral particles are now seen to be the gluons which carry the forces between the quarks and carry also the three-valued color quantum number which solves the Omega-minus problem. Feynman did not dispute the quark model; for example, when the fifth quark was discovered in 1977, Feynman immediately pointed out to his students that the discovery implied the existence of a sixth quark, which was duly discovered in the decade after his death. After the success of quantum electrodynamics, Feynman turned to quantum gravity. By analogy with the photon, which has spin 1, he investigated the consequences of a free massless spin 2 field, and was able to derive the Einstein field equation of general relativity, but little more.[30] However, the computational device that Feynman discovered then for gravity, "ghosts", which are "particles" in the interior of his diagrams which have the "wrong" connection between spin and statistics, have proved invaluable in explaining the quantum particle behavior of the Yang–Mills theories, for example, QCD and the electro-weak theory. In 1965, Feynman was appointed a foreign member of the Royal Society.[7] At this time in the early 1960s, Feynman exhausted himself by working on multiple major projects at the same time, including a request, while at Caltech, to "spruce up" the teaching of undergraduates. After three years devoted to the task, he produced a series of lectures that eventually became The Feynman Lectures on Physics. He wanted a picture of a drumhead sprinkled with powder to show the modes of vibration at the beginning of the book. Concerned over the connections to drugs and rock and roll that could be made from the image, the publishers changed the cover to plain red, though they included a picture of him playing drums in the foreword. The Feynman Lectures on Physics [31] occupied two physicists, Robert B. Leighton and Matthew Sands, as part-time co-authors for several years. Even though the books were not adopted by most universities as textbooks, they continue to sell well because they provide a deep understanding of physics. As of 2005, The Feynman Lectures on Physics has sold over 1.5 million copies in English, an estimated 1 million copies in Russian, and an estimated half million copies in other languages. Many of his lectures and miscellaneous talks were turned into other books, including The Character of Physical Law, QED: The Strange Theory of Light and Matter, Statistical Mechanics, Lectures on Gravitation, and the Feynman Lectures on Computation. Partly as a way to bring publicity to progress in physics, Feynman offered $1,000 prizes for two of his challenges in nanotechnology; one was claimed by William McLellan and the other by Tom Newman.[32] He was also one of the first scientists to conceive the possibility of quantum computers. 
In 1984–86, he developed a variational method for the approximate calculation of path integrals which has led to a powerful method of converting divergent perturbation expansions into convergent strong-coupling expansions (variational perturbation theory) and, as a consequence, to the most accurate determination[34] of critical exponents measured in satellite experiments.[35] Feynman diagrams are now fundamental for string theory and M-theory, and have even been extended topologically.[37] The world-lines of the diagrams have developed to become tubes to allow better modeling of more complicated objects such as strings and membranes. Shortly before his death, Feynman criticized string theory in an interview: "I don't like that they're not calculating anything," he said. "I don't like that they don't check their ideas. I don't like that for anything that disagrees with an experiment, they cook up an explanation—a fix-up to say, ‘Well, it still might be true.'" These words have since been much-quoted by opponents of the string-theoretic direction for particle physics.[18] Challenger disaster Feynman played an important role on the Presidential Rogers Commission, which investigated the Challenger disaster. During a televised hearing, Feynman demonstrated that the material used in the shuttle's O-rings became less resilient in cold weather by compressing a sample of the material in a clamp and immersing it in ice-cold water.[38] The commission ultimately determined that the disaster was caused by the primary O-ring not properly sealing in unusually cold weather at Cape Canaveral.[39] Feynman devoted the latter half of his book What Do You Care What Other People Think? to his experience on the Rogers Commission, straying from his usual convention of brief, light-hearted anecdotes to deliver an extended and sober narrative. Feynman's account reveals a disconnect between NASA's engineers and executives that was far more striking than he expected. His interviews of NASA's high-ranking managers revealed startling misunderstandings of elementary concepts. For instance, NASA managers claimed that there was a 1 in 100,000 chance of a catastrophic failure aboard the shuttle, but Feynman discovered that NASA's own engineers estimated the chance of a catastrophe at closer to 1 in 200. He concluded that the space shuttle reliability estimate by NASA management was fantastically unrealistic, and he was particularly angered that NASA used these figures to recruit Christa McAuliffe into the Teacher-in-Space program. He warned in his appendix to the commission's report (which was included only after he threatened not to sign the report), "For a successful technology, reality must take precedence over public relations, for nature cannot be fooled."[40] A television documentary drama named The Challenger, detailing Feynman's part in the investigation, was aired in 2013.[41] Cultural identification Although born to and raised by parents who were Ashkenazi, Feynman was not only an atheist[42] but declined to be labelled Jewish on supposedly "ethnic" grounds. He routinely refused to be included in lists or books that classified people by race. 
He asked to not be included in Tina Levitan's The Laureates: Jewish Winners of the Nobel Prize, writing, "To select, for approbation the peculiar elements that come from some supposedly Jewish heredity is to open the door to all kinds of nonsense on racial theory," and adding "…at thirteen I was not only converted to other religious views, but I also stopped believing that the Jewish people are in any way 'the chosen people'".[43][44]

Personal life

While researching for his Ph.D., Feynman married his first wife, Arline Greenbaum (often misspelled Arlene). She was diagnosed with tuberculosis, but she and Feynman were careful, and he never contracted it. She died of the disease in 1945. In 1946, Feynman wrote a letter to her, but kept it sealed for the rest of his life.[45] This portion of Feynman's life was portrayed in the 1996 film Infinity, which featured Feynman's daughter, Michelle, in a cameo role. His second marriage, to Mary Louise Bell in 1952, ended in divorce.[46] He later married Gweneth Howarth (1934–1989) from Ripponden, Yorkshire, who shared his enthusiasm for life and spirited adventure.[27] Besides their home in Altadena, California, they had a beach house in Baja California, purchased with the prize money from Feynman's Nobel Prize, his one third share of $55,000. They remained married until Feynman's death. They had a son, Carl, in 1962, and adopted a daughter, Michelle, in 1968.[27] Feynman had a great deal of success teaching Carl, using, for example, discussions about ants and Martians as a device for gaining perspective on problems and issues. He was surprised to learn that the same teaching devices were not useful with Michelle.[28] Mathematics was a common interest for father and son; they both entered the computer field as consultants and were involved in advancing a new method of using multiple computers to solve complex problems—later known as parallel computing. The Jet Propulsion Laboratory retained Feynman as a computational consultant during critical missions. One co-worker characterized Feynman as akin to Don Quixote at his desk, rather than at a computer workstation, ready to do battle with the windmills. Feynman traveled widely, notably to Brazil, where he gave courses at the CBPF (Brazilian Center for Physics Research) and near the end of his life schemed to visit the Russian land of Tuva, a dream that, because of Cold War bureaucratic problems, never became reality.[47] The day after he died, a letter arrived for him from the Soviet government, giving him authorization to travel to Tuva. Out of his enthusiastic interest in reaching Tuva came the phrase "Tuva or Bust" (also the title of a book about his efforts to get there), which was tossed about frequently amongst his circle of friends in hope that they, one day, could see it firsthand. The documentary movie, Genghis Blues, mentions some of his attempts to communicate with Tuva and chronicles the successful journey there by his friends. Responding to Hubert Humphrey's congratulation for his Nobel Prize, Feynman admitted to a long admiration for the then vice president.[48] In a letter to an MIT professor dated December 6, 1966, Feynman expressed interest in running for governor of California.[49] Feynman took up drawing at one time and enjoyed some success under the pseudonym, "Ofey", culminating in an exhibition of his work. He learned to play a metal percussion instrument (frigideira) in a samba style in Brazil, and participated in a samba school.
According to Genius, the James Gleick-authored biography, Feynman tried LSD during his professorship at Caltech.[18] Somewhat embarrassed by his actions, he largely sidestepped the issue when dictating his anecdotes; he mentions it in passing in the "O Americano, Outra Vez" section, while the "Altered States" chapter in Surely You're Joking, Mr. Feynman! describes only marijuana and ketamine experiences at John Lilly's famed sensory deprivation tanks, as a way of studying consciousness.[16] Feynman gave up alcohol when he began to show vague, early signs of alcoholism, as he did not want to do anything that could damage his brain—the same reason given in "O Americano, Outra Vez" for his reluctance to experiment with LSD.[16] Feynman has a minor acting role in the film Anti-Clock credited as "The Professor".[51] Feynman had two rare forms of cancer, liposarcoma and Waldenström's macroglobulinemia, dying shortly after a final attempt at surgery for the former on February 15, 1988, aged 69.[18] His last recorded words are noted as, "I'd hate to die twice. It's so boring."[18][52] Popular legacy Alan Alda, the stage, screen and television actor, studied writings about Richard Feynman's life during the 1990s in preparation for playing the role of Feynman on stage. Based upon Alda's research, playwright Peter Parnell was commissioned by Alda to write a two-character play about a fictional day in the life of Feynman set two years prior to Feynman's death. The play, entitled QED, premiered at the Mark Taper Forum in Los Angeles, California in 2001. The play was then presented at the Vivian Beaumont Theater on Broadway, with both presentations starring Alan Alda as Richard Feynman. On May 4, 2005, the United States Postal Service issued the American Scientists commemorative set of four 37-cent self-adhesive stamps in several configurations. The scientists depicted were Richard Feynman, John von Neumann, Barbara McClintock, and Josiah Willard Gibbs. Feynman's stamp, sepia-toned, features a photograph of a 30-something Feynman and eight small Feynman diagrams.[53] The stamps were designed by Victor Stabin under the artistic direction of Carl T. Herrman.[54] The main building for the Computing Division at Fermilab is named the "Feynman Computing Center" in his honor.[55] The principal character in Thomas A. McMahon's 1970 novel, Principles of American Nuclear Chemistry: A Novel, is modeled on Feynman. Real Time Opera premiered its opera Feynman at the Norfolk (CT) Chamber Music Festival in June 2005.[56] In February 2008 LA Theatre Works released a recording of 'Moving Bodies' with Alfred Molina in the role of Richard Feynman. This radio play written by playwright Arthur Giron is an interpretation on how Feynman became one of the iconic American scientists and is loosely based on material found in Feynman's two transcribed oral memoirs Surely You're Joking, Mr. Feynman! and What Do You Care What Other People Think?. On the twentieth anniversary of Feynman's death, composer Edward Manukyan dedicated a piece for solo clarinet to his memory.[57] It was premiered by Doug Storey, the principal clarinetist of the Amarillo Symphony. 
Between 2009 and 2011, clips of an interview with Feynman were used by composer John Boswell as part of the Symphony of Science project in the second, fifth, seventh, and eleventh installments of his videos, "We Are All Connected", "The Poetry of Reality", "A Wave of Reason", and "The Quantum World".[58] In a 1992 New York Times article on Feynman and his legacy, James Gleick recounts the story of how Murray Gell-Mann described what has become known as "The Feynman Algorithm" or "The Feynman Problem-Solving Algorithm" to a student: "The student asks Gell-Mann about Feynman's notes. Gell-Mann says no, Dick's methods are not the same as the methods used here. The student asks, well, what are Feynman's methods? Gell-Mann leans coyly against the blackboard and says: Dick's method is this. You write down the problem. You think very hard. (He shuts his eyes and presses his knuckles parodically to his forehead.) Then you write down the answer."[59] In 1998, a photograph of Richard Feynman giving a lecture was part of the poster series commissioned by Apple Inc. for their "Think Different" advertising campaign.[60] In 2011, Feynman was the subject of a biographical graphic novel entitled simply, Feynman, written by Jim Ottaviani and illustrated by Leland Myrick.[61] In 2013, the BBC drama The Challenger depicted Feynman's role on the Rogers Commission in exposing the O-ring flaw in NASA's solid-rocket boosters (SRBs), itself based in part on Feynman's book What Do You Care What Other People Think?[62][63]

Selected scientific works

Textbooks and lecture notes

The Feynman Lectures on Physics is perhaps his most accessible work for anyone with an interest in physics, compiled from lectures to Caltech undergraduates in 1961–64. As news of the lectures' lucidity grew, a number of professional physicists and graduate students began to drop in to listen. Co-authors Robert B. Leighton and Matthew Sands, colleagues of Feynman, edited and illustrated them into book form. The work has endured and is useful to this day. They were edited and supplemented in 2005 with "Feynman's Tips on Physics: A Problem-Solving Supplement to the Feynman Lectures on Physics" by Michael Gottlieb and Ralph Leighton (Robert Leighton's son), with support from Kip Thorne and other physicists.
• Includes Feynman's Tips on Physics (with Michael Gottlieb and Ralph Leighton), which includes four previously unreleased lectures on problem solving, exercises by Robert Leighton and Rochus Vogt, and a historical essay by Matthew Sands.

Popular works

Audio and video recordings
• Los Alamos From Below (audio, talk given by Feynman at Santa Barbara on February 6, 1975)
• The Feynman Lectures on Physics: The Complete Audio Collection
• The Character of Physical Law (recorded lectures; the published book is a transcript)
• QED: The Strange Theory of Light and Matter (recorded lectures, 1979; the published book is a transcript)
• Richard Feynman: Fun to Imagine Collection, BBC Archive of 6 short films of Feynman talking, in a style that is accessible to all, about the physics behind experiences common to us all (1983)
• Elementary Particles and the Laws of Physics (1986)
• Tiny Machines: The Feynman Talk on Nanotechnology (video, 1984)
• Computers From the Inside Out (video)
• Quantum Mechanical View of Reality: Workshop at Esalen (video, 1983)
• Idiosyncratic Thinking Workshop (video, 1985)
• Bits and Pieces — From Richard's Life and Times (video, 1988)
• Strangeness Minus Three (video, BBC Horizon 1964)
• No Ordinary Genius (video, Christopher Sykes documentary)
• Richard Feynman — The Best Mind Since Einstein (video, documentary)
• The Motion of Planets Around the Sun (audio, sometimes titled "Feynman's Lost Lecture")
• Nature of Matter (audio)

Further reading
• Brown, Laurie M. and Rigden, John S. (editors) (1993) Most of the Good Stuff: Memories of Richard Feynman. Simon and Schuster, New York, ISBN 0-88318-870-8. Commentary by Joan Feynman, John Wheeler, Hans Bethe, Julian Schwinger, Murray Gell-Mann, Daniel Hillis, David Goodstein, Freeman Dyson, and Laurie Brown.
• Dyson, Freeman (1979) Disturbing the Universe. Harper and Row. ISBN 0-06-011108-9. Dyson's autobiography. The chapters "A Scientific Apprenticeship" and "A Ride to Albuquerque" describe his impressions of Feynman in the period 1947–48 when Dyson was a graduate student at Cornell.
• Gleick, James (1992) Genius: The Life and Science of Richard Feynman. Pantheon. ISBN 0-679-74704-4.
• Krauss, Lawrence M. (2011) Quantum Man: Richard Feynman's Life in Science. W.W. Norton & Company. 350 pages, biography. ISBN 0-393-06471-9, OCLC 601108916.
• LeVine, Harry, III (2009) The Great Explainer: The Story of Richard Feynman (Profiles in Science series). Morgan Reynolds, Greensboro, North Carolina, ISBN 978-1-59935-113-1; for high school readers.
• Mehra, Jagdish (1994) The Beat of a Different Drum: The Life and Science of Richard Feynman. Oxford University Press. ISBN 0-19-853948-7.
• Gribbin, John and Gribbin, Mary (1997) Richard Feynman: A Life in Science. Dutton, New York, ISBN 0-525-94124-X.
• Milburn, Gerard J. (1998) The Feynman Processor: Quantum Entanglement and the Computing Revolution. Perseus Books, ISBN 0-7382-0173-1.
• Mlodinow, Leonard (2003) Feynman's Rainbow: A Search For Beauty In Physics And In Life. Warner Books. ISBN 0-446-69251-4. Published in the United Kingdom as Some Time With Feynman.
• Ottaviani, Jim and Myrick, Leland (2011) Feynman. First Second. ISBN 978-1-59643-259-8, OCLC 664838951.
• Sykes, Christopher, ed. (1994) No Ordinary Genius: The Illustrated Richard Feynman. W W Norton & Co. Inc. ISBN 0-393-03621-9.

Films and plays
• Parnell, Peter (2002) "QED". Applause Books, ISBN 978-1-55783-592-5 (play).
• Whittell, Crispin (2006) "Clever Dick". Oberon Books (play).
• "The Quest for Tannu Tuva", with Richard Feynman and Ralph Leighton. 1987, BBC TV 'Horizon' and PBS 'Nova' (entitled "Last Journey of a Genius") (50 minute film).
• "No Ordinary Genius". A two-part documentary about Feynman's life and work, with contributions from colleagues, friends and family. 1993, BBC TV 'Horizon' and PBS 'Nova' (a one-hour version, under the title "The Best Mind Since Einstein") (2 × 50 minute films).
Monday, June 24, 2019

30 years from now, what will a next larger particle collider have taught us?

The year is 2049. CERN's mega-project, the Future Circular Collider (FCC), has been in operation for 6 years. The following is the transcript of an interview with CERN's director, Johanna Michilini (JM), conducted by David Grump (DG).

DG: "Prof Michilini, you have guided CERN through the first years of the FCC. How has your experience been?"

JM: "It has been most exciting. Getting to know a new machine always takes time, but after the first two years we have had stable performance and collected data according to schedule. The experiments have since seen various upgrades, such as replacing the thin gap chambers and micromegas with quantum fiber arrays that have better counting rates and have also installed… Are you feeling okay?"

DG: "Sorry, I may have briefly fallen asleep. What did you find?"

JM: "We have measured the self-coupling of a particle called the Higgs-boson and it came out to be 1.2 plus minus 0.3 times the expected value which is the most amazing confirmation that the universe works as we thought in the 1960s and you better be in awe of our big brains."

DG: "I am flat on the floor. One of the major motivations to invest into your institution was to learn how the universe was created. So what can you tell us about this today?"

JM: "The Higgs gives mass to all fundamental particles that have mass and so it plays a role in the process of creation of the universe."

DG: "Yes, and how was the universe created?"

JM: "The Higgs is a tiny thing but it's the greatest particle of all. We have built a big thing to study the tiny thing. We have checked that the tiny thing does what we thought it does and found that's what it does. You always have to check things in science."

DG: "Yes, and how was the universe created?"

JM: "You already said that."

DG: "Well isn't it correct that you wanted to learn how the universe was created?"

JM: "That may have been what we said, but what we actually meant is that we will learn something about how nuclear matter was created in the early universe. And the Higgs plays a role in that, so we have learned something about that."

DG: "I see. Well, that is somewhat disappointing."

JM: "If you need $20 billion, you sometimes forget to mention a few details."

DG: "Happens to the best of us. All right, then. What else did you measure?"

JM: "Ooh, we measured many many things. For example we improved the precision by which we know how quarks and gluons are distributed inside protons."

DG: "What can we do with that knowledge?"

JM: "We can use that knowledge to calculate more precisely what happens in particle colliders."

DG: "Oh-kay. And what have you learned about dark matter?"

JM: "We have ruled out 22 of infinitely many hypothetical particles that could make up dark matter."

DG: "And what's with the remaining infinitely many hypothetical particles?"

JM: "We are currently working on plans for the next larger collider that would allow us to rule out some more of them because you just have to look, you know."

DG: "Prof Michilini, we thank you for this conversation."

Thursday, June 20, 2019

Away Note

I'll be in the Netherlands for a few days to attend a workshop on "Probabilities in Cosmology". Back next week. Wish you a good Summer Solstice!

Wednesday, June 19, 2019

No, a next larger particle collider will not tell us anything about the creation of the universe

LHC magnets. Image: CERN.
A few days ago, Scientific American ran a piece by a CERN physicist and a philosopher about particle physicists' plans to spend $20 billion on a next larger particle collider, the Future Circular Collider (FCC). To make their case, the authors have dug up a quote from 1977 and ignored the 40 years after this, which is a truly excellent illustration of all that's wrong with particle physics at the moment.

I currently don't have time to go through this in detail, but let me pick the most egregious mistake. It's right in the opening paragraph where the authors claim that a next larger collider would tell us something about the creation of the universe:

"[P]article physics strives to push a diverse range of experimental approaches from which we may glean new answers to fundamental questions regarding the creation of the universe and the nature of the mysterious and elusive dark matter. Such an endeavor requires a post-LHC particle collider with an energy capability significantly greater than that of previous colliders."

We previously encountered this sales-pitch in CERN's marketing video for the FCC, which claimed that the collider would probe the beginning of the universe. But neither the LHC nor the FCC will tell us anything about the "beginning" or "creation" of the universe.

What these colliders can do is create nuclear matter at high density by slamming heavy atomic nuclei into each other. Such matter probably also existed in the early universe. However, even collisions of large nuclei create merely tiny blobs of such nuclear matter, and these blobs fall apart almost immediately. In case you prefer numbers over words, they last about 10^-23 seconds. This situation is nothing like the soup of plasma in the expanding space of the early universe. It is therefore highly questionable already that these experiments can tell us much about what happened back then.

Even optimistically, the nuclear matter that the FCC can produce has a density about 70 orders of magnitude below the density at the beginning of the universe. And even if you are willing to ignore the tiny blobs and their immediate decay and the 70 orders of magnitude, then the experiments still tell us nothing about the creation of this matter, and certainly not about the creation of the universe.

The argument that large colliders can teach us anything about the beginning, origin, or creation of the universe is manifestly false. The authors of this article either knew this and decided to lie to their readers, or they didn't know it, in which case they have begun to believe their own institution's marketing. I'm not sure which is worse.

And as I have said many times before, there is no reason to think a next larger collider would find evidence of dark matter particles. Somewhat ironically, the authors spend the rest of their article arguing against theoretical arguments, but of course the appeal to dark matter is a bona-fide theoretical argument.

In any case, it pains me to see not only that particle physicists are still engaging in false marketing, but that Scientific American plays along with it. How about sticking with the truth? The truth is that a next larger collider costs a shitload of money and will most likely not teach us much. If progress in the foundations of physics is what you want, this is not the way forward.

Tuesday, June 18, 2019

Brace for the oncoming deluge of dark matter detectors that won't detect anything

Imagine an unknown disease spreads, causing temporary blindness.
Most patients recover after a few weeks, but some never regain eyesight. Scientists rush to identify the cause. They guess the pathogen's shape and, based on this, develop test strips and antigens. If one guess doesn't work, they'll move on to the next.

Doesn't quite sound right? Of course it does not. Trying to identify pathogens by guesswork is sheer insanity. The number of possible shapes is infinite. The guesses will almost certainly be wrong. No funding agency would pour money into this.

Except they do. Not for pathogen identification, but for dark matter searches.

In the past decades, the searches for the most popular dark matter particles have failed. Neither WIMPs nor axions have shown up in any detector, of which there have been dozens. Physicists have finally understood this is not a promising method. Unfortunately, they have not come up with anything better. Instead, their strategy is now to fund any proposed experiment that could plausibly be said to maybe detect something that could potentially be a hypothetical dark matter particle. And since there are infinitely many such hypothetical particles, we are now well on the way to building infinitely many detectors. DNA, carbon nanotubes, diamonds, old rocks, atomic clocks, superfluid helium, qubits, Aharonov-Bohm, cold atom gases, you name it.

Let us call it the equal opportunity approach to dark matter search.

As it should be, everyone benefits from the equal opportunity approach. Theorists invent new particles (papers will be written). Experimentalists use those invented particles as motivation to propose experiments (more papers will be written). With a little luck they get funding and do the experiment (even more papers). Eventually, experiments conclude they didn't find anything (papers, papers, papers!). In the end we will have a lot of papers and still won't know what dark matter is. And this, we will be told, is how science is supposed to work.

Let me be clear that I am not strongly opposed to such medium scale experiments, because they typically cost "merely" a few million dollars. A few millions here and there don't put overall progress at risk. Not like, say, building a next larger collider would.

So why not live and let live, you may say. Let these physicists have some fun with their invented particles and their experiments that don't find them. What's wrong with that?

What's wrong with that (besides the fact that a million dollars is still a million dollars) is that it will almost certainly lead nowhere. I don't want to wait another 40 years for physicists to realize that falsifiability alone is not sufficient to make a hypothesis promising.

My disease analogy, like any analogy, has its shortcomings of course. You cannot draw blood from a galaxy and put it under a microscope. But metaphorically speaking, that's what physicists should do. We have patients out there: All those galaxies and clusters which are behaving in funny ways. Study those until you have good reason to think you know what the pathogen is. Then, build your detector.

Not all types of dark matter particles do an equally good job to explain structure formation and the behavior of galaxies and all the other data we have. And particle dark matter is not the only explanation for the observations. Right now, the community makes no systematic effort to identify the best model to fit the existing data. And, needless to say, that data could be better, both in terms of sky coverage and resolution.
The equal opportunity approach relies on guessing a highly specific explanation and then setting out to test it. This way, null-results are a near certainty. A more promising method is to start with highly non-specific explanations and zero in on the details.

The failures of the past decades demonstrate that physicists must think more carefully before commissioning experiments to search for hypothetical particles. They still haven't learned the lesson.

Sunday, June 16, 2019

Book review: "Einstein's Unfinished Revolution" by Lee Smolin

Einstein's Unfinished Revolution: The Search for What Lies Beyond the Quantum
By Lee Smolin
Penguin Press (April 9, 2019)

Popular science books cover a spectrum from exposition to speculation. Some writers, like Chad Orzel or Anil Ananthaswamy, stay safely on the side of established science. Others, like Philip Ball in his recent book, keep their opinions to the closing chapter. I would place Max Tegmark's "Mathematical Universe" and Lee Smolin's "Trouble With Physics" somewhere in the middle. Then, on the extreme end of speculation, we have authors like Roger Penrose and David Deutsch who use books to put forward ideas in the first place.

"Einstein's Unfinished Revolution" lies on the speculative end of this spectrum. Lee is very upfront about the purpose of his writing. He is dissatisfied with the current formulation of quantum mechanics. It sacrifices realism, and he thinks this is too much to give up. In the past decades, he has therefore developed his own approach to quantum mechanics, the "ensemble interpretation". His new book lays out how this ensemble interpretation works and what its benefits are.

Before getting to this, Lee introduces the features of quantum theories (superpositions, entanglement, uncertainty, measurement postulate, etc) and discusses the advantages and disadvantages of the major interpretations of quantum mechanics (Copenhagen, many worlds, pilot wave, collapse models). He deserves applause for also mentioning the Montevideo interpretation and superdeterminism, though clearly he doesn't like either. I have found his evaluation of these approaches overall balanced and fair.

In the later chapters, Lee comes to his own ideas about quantum mechanics and how these tie together with his other work on quantum gravity. I have not been able to follow all his arguments here, especially not on the matter of non-locality.

Unfortunately, Lee doesn't discuss his ensemble interpretation half as critically as the other approaches. From reading his book you may come away with the impression he has solved all problems. Let me therefore briefly mention the most obvious shortcomings of his approach. (a) To quantify the similarity of two systems you need to define a resolution. (b) This will violate Lorentz-invariance which means it's hard to make compatible with standard model physics. (c) You better not ask about virtual particles. (d) If a system gets its laws from precedents, where do the first laws come from? Lee tells me that these issues have been discussed in the papers he lists on his website.

Like all of Lee's previous books, this one is well-written and engaging, and if you liked Lee's earlier books you will probably like this one too. The book has the occasional paragraph that I think will be over many readers' heads, but most of it should be understandable with little or no prior knowledge. I have found this book particularly valuable for spelling out the author's philosophical stance.
You may not agree with Lee, but at least you know where he is coming from. This book is recommendable for anyone who is dissatisfied with the current formulation of quantum mechanics, or who wants to understand why others are dissatisfied with it. It also serves well as a quick introduction to current research in the foundations of quantum mechanics.

[Disclaimer: free review copy.]

Thursday, June 13, 2019

Physicists are out to unlock the muon's secret

Fermilab g-2 experiment. [Image Glukicov/Wikipedia]

Physicists count 25 elementary particles that, for all we presently know, cannot be divided any further. They collect these particles and their interactions in what is called the Standard Model of particle physics.

But the matter around us is made of merely three particles: up and down quarks (which combine to protons and neutrons, which combine to atomic nuclei) and electrons (which surround atomic nuclei). These three particles are held together by a number of exchange particles, notably the photon and gluons.

What's with the other particles? They are unstable and decay quickly. We only know of them because they are produced when other particles bang into each other at high energies, something that happens in particle colliders and when cosmic rays hit Earth's atmosphere. By studying these collisions, physicists have found out that the electron has two bigger brothers: The muon (μ) and the tau (τ).

The muon and the tau are pretty much the same as the electron, except that they are heavier. Of these two, the muon has been studied closer because it lives longer – about 2 x 10^-6 seconds.

The muon turns out to be... a little odd.

Physicists have known for a while, for example, that cosmic rays produce more muons than expected. This deviation from the predictions of the standard model is not hugely significant, but it has stubbornly persisted. It has remained unclear, though, whether the blame is on the muons, or the blame is on the way the calculations treat atomic nuclei.

Next, the muon (like the electron and tau) has a partner neutrino, called the muon-neutrino. The muon neutrino also has some anomalies associated with it. No one currently knows whether those are real or measurement errors.

The Large Hadron Collider has seen a number of slight deviations from the predictions of the standard model which go under the name lepton anomaly. They basically tell you that the muon isn't behaving like the electron, which (all other things equal) really it should. These deviations may just be random noise and vanish with better data. Or maybe they are the real thing.

And then there is the gyromagnetic moment of the muon, usually denoted just g. This quantity measures how muons spin if you put them into a magnetic field. This value should be 2 plus quantum corrections, and the quantum corrections (the g-2) you can calculate very precisely with the standard model. Well, you can if you have spent some years learning how to do that because these are hard calculations indeed. Thing is though, the result of the calculation doesn't agree with the measurement.

This is the so-called muon g-2 anomaly, which we have known about since the 1960s when the first experiments ran into tension with the theoretical prediction. Since then, both the experimental precision as well as the calculations have improved, but the disagreement has not vanished. The most recent experimental data comes from a 2006 experiment at Brookhaven National Lab, and it placed the disagreement at 3.7σ.
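To make the size of this disagreement concrete, here is a minimal back-of-the-envelope check in Python. The input numbers are assumptions used only for illustration: an approximate value of the Brookhaven measurement of the anomalous magnetic moment a_mu = (g-2)/2 with its uncertainty, and one representative standard-model prediction. The point is simply that the significance is the difference divided by the combined uncertainty; with these inputs it comes out near the 3.7σ quoted above.

from math import sqrt

# Illustrative inputs (assumptions, not authoritative values), in units of 1e-11
a_exp, err_exp = 116_592_089, 63   # approximate Brookhaven measurement of a_mu = (g-2)/2
a_sm,  err_sm  = 116_591_810, 43   # one representative standard-model prediction

diff  = a_exp - a_sm                  # size of the anomaly
sigma = sqrt(err_exp**2 + err_sm**2)  # combined uncertainty, errors added in quadrature
print(f"discrepancy = {diff}e-11, significance = {diff/sigma:.1f} sigma")
# prints roughly 3.7 sigma with these inputs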
That's interesting for sure, but nothing that particle physicists get overly excited about. A new experiment is now following up on the 2006 result: The muon g-2 experiment at Fermilab. The collaboration projects that (assuming the mean value remains the same) their better data could increase the significance to 7σ, hence surpassing the discovery standard in particle physics (which is somewhat arbitrarily set to 5σ).

For this experiment, physicists first produce muons by firing protons at a target (some kind of solid). This produces a lot of pions (composites of two quarks) which decay by emitting muons. The muons are then collected in a ring equipped with magnets in which they circle until they decay. When the muons decay, they produce two neutrinos (which escape) and a positron that is caught in a detector. From the direction and energy of the positron, one can then infer the magnetic moment of the muon.

The Fermilab g-2 experiment, which reuses parts of the hardware from the earlier Brookhaven experiment, is already running and collecting data. In a recent paper, Alexander Keshavarzi, on behalf of the collaboration, reports that they successfully completed the first physics run last year. He writes we can expect a publication of the results from the first run in late 2019. After some troubleshooting (something about an underperforming kicker system), the collaboration is now in the second run.

Another experiment to measure more precisely the muon g-2 is underway in Japan, at the J-PARC muon facility. This collaboration too is well on the way.

While we don't know exactly when the first data from these experiments will become available, it is clear already that the muon g-2 will be much talked about in the coming years. At present, it is our best clue for physics beyond the standard model. So, stay tuned.

Wednesday, June 12, 2019

Guest Post: A conversation with Lee Smolin about his new book "Einstein's Unfinished Revolution"

[Tam Hunt sent me another lengthy interview, this time with Lee Smolin. Smolin is a faculty member at the Perimeter Institute for Theoretical Physics in Canada and adjunct professor at the University of Waterloo. He is one of the founders of loop quantum gravity. In the past decades, Smolin's interests have drifted to the role of time in the laws of nature and the foundations of quantum mechanics.]

TH: You make some engaging and bold claims in your new book, Einstein's Unfinished Revolution, continuing a line of argument that you've been making over the course of the last couple of decades and a number of books. In your latest book, you argue essentially that we need to start from scratch in the foundations of physics, and this means coming up with new first principles as our starting point for re-building. Why do you think we need to start from first principles and then build a new system? What has brought us to this crisis point?

LS: The claim that there is a crisis, which I first made in my book, Life of the Cosmos (1997), comes from the fact that it has been decades since a new theoretical hypothesis was put forward that was later confirmed by experiment. In particle physics, the last such advance was the standard model in the early 1970s; in cosmology, inflation in the early 1980s. Nor has there been a completely successful approach to quantum gravity or the problem of completing quantum mechanics. I propose finding new fundamental principles that go deeper than the principles of general relativity and quantum mechanics.
In some recent papers and the book, I make specific proposals for new principles. TH: You have done substantial work yourself in quantum gravity (loop quantum gravity, in particular) and quantum theory (suggesting your own interpretation called the “real ensemble interpretation”), and yet in this new book you seem to be suggesting that you and everyone else in foundations of physics need to return to the starting point and rebuild. Are you in a way repudiating your own work or simply acknowledging that no one, including you, has been able to come up with a compelling approach to quantum gravity or other outstanding foundations of physics problems? LS: There are a handful of approaches to quantum gravity that I would call partly successful. These each achieve a number of successes, which suggest that they could plausibly be at least part of the story of how nature reconciles quantum physics with space, time and gravity. It is possible, for example, that these partly successful approaches model different regimes or phases of quantum gravity phenomena. These partly successful approaches include loop quantum gravity, string theory, causal dynamical triangulations, causal sets, asymptotic safety. But I do not believe that any approach to date, including these, is fully successful. Each has stumbling blocks that after many years remain unsolved. TH: You part ways with a number of other physicists in recent years who have railed against philosophy and philosophers of physics as being largely unhelpful for actual physics. You argue instead that philosophers have a lot to contribute to the foundations of physics problems that are your focus. Have you found philosophy helpful in pursuing your physics for most of your career or is this a more recent finding in your own work? Which philosophers, in particular, do you think can be helpful in this area of physics? LS: I would first of all suggest we revive the old idea of a natural philosopher, which is a working scientist who is inspired and guided by the tradition of philosophy. An education and immersion in the philosophical tradition gives them access to the storehouse of ideas, positions and arguments that have been developed over the centuries to address the deepest questions, such as the nature of space and time. Physicists who are natural philosophers have the advantage of being able to situate their work, and its successes and failures, within the long tradition of thought about the basic questions. Most of the key figures who transformed physics through its history have been natural philosophers: Galileo, Newton, Leibniz, Descartes, Maxwell, Mach, Einstein, Bohr, Heisenberg, etc. In more recent years, David Finkelstein is an excellent example of a theoretical physicist who made important advances, such as being the first to untangle the geometry of a black hole, and recognize the concept of an event horizon, who was strongly influenced by the philosophical tradition. Like a number of us, he identified as a follower of Leibniz, who introduced the concepts of relational space and time. The abstract of Finkelstein’s key 1958 paper on what were soon to be called black holes explicitly mentions the principle of sufficient reason, which is the central principle of Leibniz’s philosophy. None of the important developments of general relativity in the 1960s and 1970s, such as those by Penrose, Hawking, Newman, Bondi, etc., would have been possible without that groundbreaking paper by Finkelstein.
I asked Finkelstein once why it was important to know philosophy to do physics, and he replied, “If you want to win the long jump, it helps to back up and get a running start.” In other fields, we can recognize people like Richard Dawkins, Daniel Dennett, Lynn Margulis, Steve Gould, Carl Sagan, etc. as natural philosophers. They write books that argue the central issues in evolutionary theory, with the hope of changing each other’s minds. But we the lay public are able to read over their shoulders, and so have front row seats to the debates. There are also a number of excellent philosophers of physics working now, who contribute in important ways to the progress of physics. One example of these is a group, centred originally at Oxford, of philosophers who have been doing the leading work on attempting to make sense of the Many Worlds formulation of quantum mechanics. This work involves extremely subtle issues such as the meaning of probability. These thinkers include Simon Saunders, David Wallace, Wayne Myrvold; and there are equally good philosophers who are skeptical of this work, such as David Albert and Tim Maudlin. It used to be the case, half a century ago, that philosophers, such as Hilary Putnam, who opined about physics, felt qualified to do so with a bare knowledge of the principles of special relativity and single particle quantum mechanics. In that atmosphere my teacher Abner Shimony, who had two Ph.D’s – one in physics and one in philosophy – stood out, as did a few others who could talk in detail about quantum field theory and renormalization, such as Paul Feyerabend. Now the professional standard among philosophers of physics requires a mastery of Ph.D level physics, as well as the ability to write and argue with the rigour that philosophy demands. Indeed, a number of the people I just mentioned have Ph.D’s in physics. TH: One of your suggested hypotheses, the next step you take after stating your first principles, is an acknowledgment that time is fundamental, real and irreversible, effectively goring one of the sacred cows of modern physics. You made your case for this approach in your book Time Reborn and I'm curious if you've seen a softening over the last few years in terms of physicists and philosophers beginning to be more open to the idea that the passage of time is truly fundamental? Also, why wouldn't this hypothesis be instead a first principle, if time is indeed fundamental? LS: In my experience, there have always been physicists and philosophers open to these ideas, even if there is no consensus among those who have carefully thought the issues through. When I thought carefully about how to state a candidate set of basic principles, it became clear that it was useful to separate principles from hypotheses about nature. Principles such as sufficient reason and the identity of the indiscernible can be realized in formulations of physics in which time is either fundamental or secondary and emergent. Hence those principles are prior to the choice of a fundamental or emergent time. So I think it clarifies the logic of the situation to call the latter choice a hypothesis rather than a principle. TH: How does viewing time as irreversible and fundamental mesh with your principle of background independence? Doesn’t a preferred spacetime foliation, which would provide an irreversible and fundamental time, provide a background?
LS: Background independence is an aspect of the two principles of Leibniz I just referred to: 1) sufficient reason (PSR) and 2) the identity of the indiscernible (PII). Hence it is deeper than the choice of whether time is fundamental or emergent. Indeed, there are theories which rest on either hypothesis about time (fundamental or emergent). Julian Barbour, for example, is a relationalist who develops background-independent theories in which time is emergent. I am also a relationalist, but I make background-independent models of physics in which time and its passage are fundamental. Viewing time as fundamental and irreversible doesn’t necessarily imply a preferred foliation; by the latter you mean a foliation of a pre-existing spacetime, specified kinematically in advance of the dynamical evolution. In our energetic causal set models there does arise a notion of the present, but this is determined dynamically by the evolution of the model and so is consistent with what we mean by background independence. The point is that the solutions to background-independent theories can have preferred frames, so long as they are generated by solving the dynamics. This is, for example, the case with cosmological solutions to general relativity. TH: You and many other physicists have focused for many years on finding a theory of quantum gravity, effectively unifying quantum mechanics and general relativity. In describing your preferred approach to achieving a theory of quantum gravity worthy of the name, you describe why you think quantum mechanics is incomplete and why general relativity is in some key ways likely wrong. Let’s look first at quantum mechanics, which you describe as “wrong” and “incomplete.” Why is the Copenhagen school of quantum mechanics (still perhaps the most popular version of quantum theory) wrong and incomplete? LS: Copenhagen is incomplete because it is based on an arbitrarily chosen division of the world into a classical realm and a quantum realm. This reflects our practice as experimenters, and corresponds to nothing in nature. This means it is an operational approach which conflicts with the expectation that physics should offer a complete description of individual phenomena, with no reference to our existence, knowledge or measurements. TH: Your objections just stated (what’s known generally as the “measurement problem”) seem to me, even as an obvious non-expert in this area, to be fairly apparent and accurate objections to Copenhagen. If that’s the case, why is Copenhagen still with us today? Why was it ever considered a serious theory? LS: I don’t think there are many proponents of the Copenhagen view among people working in quantum foundations, or who have otherwise thought about the issues carefully. I don’t think there are many enthusiastic followers of Bohr left alive. Meanwhile, what most physicists who are not specialists in quantum foundations practice and teach is a very pragmatic, operational set of rules, which suffices because it closely parallels the practice of actual experimenters. They can get on with the physics without having to take a stand on realism. What Bohr had in mind was a much more radical rejection of realism and its replacement by a view of the world in which nature and us co-create phenomena. My sense is that most living physicists haven’t read Bohr’s actual writings. There are of course some exceptions, like Chris Fuchs’s QBism, which is, to the extent that I understand it, an even more radical view.
Even if I disagree, I very much admire Chris for the clarity of his thinking and his insistence on taking his view to its logical conclusions. But, in the end, as a realist who sees the necessity of completing quantum mechanics by the discovery of new physics, the intellectual contortions of anti-realists are, however elegant, no help for my projects. TH: Could this be a good example of why philosophical training could actually be helpful for physicists? LS: I would agree, in some cases it could be helpful for some physicists to study philosophy, especially if they are interested in discovering deeper foundational laws. But I would never say anyone should study philosophy, because it can be very challenging reading, and if someone is not inclined to think “philosophically” they are unlikely to get much from the effort. But I would say that if someone is receptive to the care and depth of the writing, it can open doors to new ideas and to a highly critical style of thinking, which could greatly aid someone’s research. The point I would like to make here is rather different. As I discussed in my earlier books, there are different periods in the development of science during which different kinds of problems present themselves. These require different strategies, different educations and perhaps even different styles of research to move forward. There are pragmatic periods where the laws needed to understand a wide range of phenomena are in place and the opportunities of greatly advancing our understanding of diverse physical phenomena dominate. These kinds of periods require a more pragmatic approach, which ignores whatever foundational issues may be present (and indeed, there are always foundational issues lurking in the background), and focuses on developing better tools to work out the implications of the laws as they stand. Then there are (to follow Kuhn) revolutionary periods in science, when the foundations are in question and the priority is to discover and express new laws. The kinds of people and the kinds of education needed to succeed are different in these two kinds of periods. Pragmatic times require pragmatic scientists, and philosophy is unlikely to be important. But foundational periods require foundational people, many of whom will, as in past foundational periods, find inspiration from philosophy. Of course, what I just said is an oversimplification. At all times, science needs a diverse mix of research styles. We always need pragmatic people who are very good at the technical side of science. And we always need at least a few foundational thinkers. But the optimal balance is different in different periods. The early part of the 20th Century, through around 1930, was a foundational period. That was followed by a pragmatic period during which the foundational issues were ignored and many applications of the quantum mechanics were developed. Since the late 1970s, physics has been again in a foundational period, facing deep questions in elementary particle physics, cosmology, quantum foundations and quantum gravity. The pragmatic methods which got us to that point no longer suffice; during such a period we need more foundational thinkers and we need to pay more attention to them. TH: Turning to general relativity, you also don’t mince your words and you describe the notion of reversible time, thought to be at the core of general relativity, as “wrong.” What does general relativity look like with irreversible and fundamental time? 
LS: We posed exactly this question: can we invent an extension of general relativity in which time evolution is asymmetric under a transformation that reverses a measure of time. We found two ways to do this. TH: You touched on consciousness as a physical phenomenon and a necessary ingredient in our physics in your book, Time Reborn (as have many other physicists over the last century, of course). You spend less time on consciousness in your new book — stating “Let us tiptoe past the hard question of consciousness to simpler questions” — but I’m curious if you’ve considered including as a first principle the notion that consciousness is a fundamental aspect of nature (or not) in your ruminations on these deep topics? LS: I am thinking slowly about the problems of qualia and consciousness, in the rough direction set out in the epilogue of Time Reborn. But I haven’t yet come to conclusions worth publishing. An early draft of Einstein’s Unfinished Revolution had an epilogue entirely devoted to these questions, but I decided it was premature to publish; it also would have distracted attention from the central themes of that book. TH: David Bohm, one of the physicists you discuss with respect to alternative versions of quantum theory, delved deeply into philosophy and spirituality in relation to his work in physics, as you discuss briefly in your new book. Do you find Bohm’s more philosophical notions such as the Implicate Order (the metaphysical ground of being in which the “explicate” manifest world that we know in our normal every day life is enfolded, and thus “implicate”) helpful for physics? LS: I am afraid I’ve not understood what Bohm was aiming for in his book on the implicate order, or his dialogues with Krishnamurti, but it is also true that I haven’t tried very hard. I think one can admire greatly the practical and psychological knowledge of Buddhism and related traditions, while remaining skeptical of their more metaphysical teachings. TH: Bohm’s Implicate Order has much in common with physical notions such as the (nonluminiferous) ether, which has been revived in today’s physics by some heavyweights such as Nobel Prize winner Frank Wilczek (The Lightness of Being: Mass, Ether, and the Unification of Forces) as another term for the set of space-filling fields that underlie our reality. Do you take the idea of reviving some notion of the ether as a physical/metaphysical background at all seriously in your work? LS: The important part of the idea of the ether was that it is a smooth, fundamental, physical substance, which had the property that vibrations and stresses within it reproduced the phenomena described by Maxwell’s field theory of electromagnetism. It was also important that there was a preferred frame of reference associated with being at rest with respect to this substance. We no longer believe any part of this. The picture we now have is that any such substance is made of a large collection of atoms. Therefore the properties of any substance are emergent and derivative. I don’t think Frank Wilczek disagrees with this, I suspect he is just being metaphorical. TH: He doesn’t seem to be metaphorical, writing in a 1999 article:“Quite undeservedly, the ether has acquired a bad name. There is a myth, repeated in many popular presentations and textbooks, that Albert Einstein swept it into the dustbin of history. The real story is more complicated and interesting. 
I argue here that the truth is more nearly the opposite: Einstein first purified, and then enthroned, the ether concept. As the 20th century has progressed, its role in fundamental physics has only expanded. At present, renamed and thinly disguised, it dominates the accepted laws of physics. And yet, there is serious reason to suspect it may not be the last word.” In his 2008 book mentioned above, he reframes the set of accepted physical fields as “the Grid” (which is “the primary world-stuff”) or ether. Sounds like you don’t find this re-framing very compelling? LS: What is true is that quantum field theory (QFT) treats all propagating particles and fields as excitations of a (usually unique) vacuum state. This is analogized to the ether, but in my opinion it’s a bad analogy. One big difference is that the vacuum of a QFT is invariant under all the symmetries of nature, whereas the ether breaks many of them by defining a preferred state of rest. TH: You consider Bohm’s alternative quantum theory in some depth, and say that “it makes complete sense,” but after further discussion you consider it inadequate because it is generally considered to be incompatible with special relativity, among other problems. LS: This is not the main reason I don’t think pilot wave theory describes nature. Pilot wave theory is based on two equations. One, which is the same as in ordinary QM (the Schrödinger equation), propagates the wave-function, while the second (the guidance equation) guides the “particles.” The first can be made compatible with special relativity, while the second cannot. But when one adds an assumption about probabilities, the averages of the guided particles follow the waves and so agree with both ordinary QM and special relativity. In this way you can say that pilot wave theory is “weakly compatible” with special relativity, in the sense that, while there is a preferred sense of rest, it can’t be measured. TH: If one considers time to be fundamental and irreversible, isn’t there a relativistic version of Bohmian mechanics readily available by adopting some version of Lorentzian or neo-Lorentzian relativity (which are background-dependent)? LS: Maybe — you are describing research to be done. TH: Last, how optimistic are you that your view, that today’s physics needs some really fundamental re-thinking, will catch on with the majority of today’s physicists in the next decade or so? LS: I’m not, but I wouldn’t expect any such call for a reconsideration of the basic principles to be popular until it has results which make it hard to avoid thinking about. Monday, June 10, 2019 Sometimes giving up is the smart thing to do. [likely image source] A few years ago I signed up for a 10k race. It had an entry fee, it was a scenic route, and I had qualified for the first group. I was in my best shape. The weather forecast was brilliant. Two days before the race I got a bad cold. But that wouldn’t deter me. Oh, no, not me. I’m not a quitter. I downed a handful of pills and went nevertheless. I started with a fever, a bad cough, and a banging head. It didn’t go well. After half a kilometer I developed a chest pain. After one kilometer it really hurt. After two kilometers I was sure I’d die. Next thing I recall is someone handing me a bottle of water after the finish line. Needless to say, my time wasn’t the best. But the real problem began afterward. My cold refused to clear out properly. Instead I developed a series of respiratory infections. That chest pain stayed with me for several months.
When the winter came, each little virus the kids brought home knocked me down. I eventually went to see a doctor. She sent me to have a chest X-ray taken on the suspicion of tuberculosis. When the X-ray didn’t reveal anything, she put me on a two-week course of antibiotics. The antibiotics indeed finally cleared out whatever lingering infection I had been carrying around. It took another month until I felt like myself again. But this isn’t a story about the misery of aging runners. It’s a story about endurance sport of a different type: academia. In academia we write Perseverance with a capital P. From day one, we are taught that pain is normal, that everyone hurts, and that self-motivation is the highest of virtues. In academia, we are all over-achievers. This summer, as every summer for the past two decades, I receive notes about who is leaving. Leaving because they didn’t get funding, because they didn’t get another position, or because they’re just no longer willing to sacrifice their life for so little in return. And this summer, as every summer for the past two decades, I find myself among the ones who made it into the next round, find myself sitting here, wondering if I’m worthy and if I’m in the right place doing the right thing at the right time. Because, let us be honest. We all know that success in academia has one or two elements of luck. Or maybe three. We all know it’s not always fair. I’m writing this for the ones who have left and the ones who are about to leave. Because I have come within an inch of leaving half a dozen times and I have heard the nasty, nagging voice in the back of my head. “Quitter,” it says and laughs, “Quitter.” Don’t listen. Of the people I know who left academia, few have regrets. And the few with regrets found ways to continue some research along with their new profession. The loss isn’t yours. The loss is one for academia. I understand your decision and I think you chose wisely. Just because everyone you know is on a race to nowhere doesn’t mean going with them makes sense. Sometimes, giving up is the smart thing to do. A year after my miserable 10k experience, I signed up for a half-marathon. A few kilometers into the race, I tore a muscle. I don’t get a runner’s high, but running increases my pain tolerance to unhealthy levels. After a few kilometers, you could probably stab me in the back and I wouldn’t notice. I could well have finished that race. But I quit. Saturday, June 08, 2019 Book Review: “Beyond Weird” by Philip Ball By Philip Ball University of Chicago Press (October 18, 2018) I avoid popular science articles about quantum mechanics. It’s not that I am not interested, it’s that I don’t understand them. Give me a Hamiltonian, a tensor-product expansion, and some unitary operators, and I can deal with that. But give me stories about separating a cat from its grin, the many worlds of Wigner’s friend, or suicides in which you both die and not die, and I admit defeat on paragraph two. Ball is guilty of some of that. I got lost halfway through his explanation of how a machine outputs plush cats and dogs when Alice and Bob put in quantum coins, and still haven’t figured out why the seer’s daughter wanted to be wed to a man evidently more stupid than she. But then, clearly, I am not the book’s intended audience, so let me instead tell you something more helpful. Ball knows what he writes about, that’s obvious from page one. For all I can tell the science in his book is flawless.
It is also engagingly told, with some history but not too much, with some reference to current research, but not too much, with some philosophical discourse but not too much. Altogether, it is a well-balanced mix that should be understandable for everyone, even those without prior knowledge of the topic. And I entirely agree with Ball that calling quantum mechanics “weird” or “strange” isn’t helpful. In “Beyond Weird,” Ball does a great job sorting out the most common confusions about quantum mechanics, such as that it is about discretization (it is not), that it defies the speed of light limit (it does not), or that it tells you something about consciousness (huh?). Ball even cleans up with the myth that Einstein hated quantum mechanics (he did not), Feynman dubbed the Copenhagen interpretation “Shut up and calculate” (he did not, also, there isn’t really such a thing as the Copenhagen interpretation), and, best of all, clears out the idea that many worlds solves the measurement problem (it does not). In Ball’s book, you will learn just what quantum mechanics is (uncertainty, entanglement, superpositions, (de)coherence, measurement, non-locality, contextuality, etc), what the major interpretations of quantum mechanics are (Copenhagen, QBism, Many Worlds, Collapse models, Pilot Waves), and what the currently discussed issues are (epistemic vs ontic, quantum computing, the role of information). As someone who still likes to read printed books, let me also mention that Ball’s is just a pretty book. It’s a high quality print in a generously spaced and well-readable font, the chapters are short, and the figures are lovely, hand-drawn illustrations. I much enjoyed reading it. It is also remarkable that “Beyond Weird” has little overlap with two other recent books on quantum mechanics which I reviewed: Chad Orzel’s “Breakfast With Einstein” and Anil Ananthaswamy’s “Through Two Doors At Once.” While Ball focuses on the theory and its interpretation, Orzel’s book is about applications of quantum mechanics, and Ananthaswamy’s is about experimental milestones in the development and understanding of the theory. The three books together make an awesome combination. And luckily the subtitle of Philip Ball’s book turned out to be wrong. I would have been disturbed indeed had everything I thought I knew about quantum physics been different. [Disclaimer: Free review copy.] Related: Check out my list of 10 Essentials of Quantum Mechanics. Wednesday, June 05, 2019 If we spend money on a larger particle collider, we risk that progress in physics stalls. [Image: CERN] Particle physicists have a problem. For 40 years they have been talking about new particles that never appeared. The Large Hadron Collider was supposed to finally reveal them. It didn’t. This $10 billion machine has found the Higgs-boson, thereby completing the standard model of particle physics, but no other fundamentally new particles. With this, the Large Hadron Collider (LHC) has demonstrated that arguments used by particle physicists for the existence of new particles beyond those in the standard model were wrong. With these arguments now falsified, there is no reason to think that a next larger particle collider will do anything besides measuring the parameters of the standard model to higher precision. And with the cost of a next larger collider estimated at $20 billion or so, that’s a tough sell. 
Particle physicists have meanwhile largely given up spinning stories about discovering dark matter or recreating the origin of the universe, because it is clear to everyone now that this is marketing one cannot trust. Instead, they have a new tactic which works like this. First, they will refuse to admit anything went wrong in the past. They predicted all these particles, none of which was seen, but now they won’t mention it. They hyped the LHC for two decades, but now they act like it didn’t happen. The people who previously made wrong predictions cannot be bothered to comment. Except for those like Gordon Kane and Howard Baer, who simply make new predictions and hope you have forgotten they ever said anything else. Second, in case they cannot get away with outright denial, they will try to convince you it is somehow interesting they were wrong. Indeed, it is interesting – if you are a sociologist. A sociologist would be thrilled to see such an amazing example of groupthink, leading a community of thousands of intelligent people to believe that relying on beauty is a good method to make predictions. But as far as physics is concerned, there’s nothing to learn here, except that beauty isn’t a scientific criterion, which is hardly a groundbreaking insight. Third, they will sure as hell not touch the question whether there might be better ways to invest the money, because that can only work to their disadvantage. So they will tell you vague tales about the need to explore nature, but not ever discuss whether other methods to explore nature would advance science more. But the fact is, building a large particle collider presently has a high cost for little expected benefit. This money would be better invested into less costly experiments with higher discovery potential, such as astrophysical searches for dark matter (I am not talking about direct detection experiments), table-top searches for quantum gravity, 21cm astronomy, gravitational wave interferometers, high-precision but low-energy measurements, just to mention a few. And that is only considering the foundations of physics, leaving aside the overarching question of societal benefit. $20 billion that go into a particle collider are $20 billion that do not go into nuclear fusion, drug development, climate science, or data infrastructure, all of which can be reasonably expected to have a larger return on investment. At the very least it is a question one should discuss. Add to this that the cost for a larger particle collider could dramatically go down in the next 20-30 years with future technological advances, such as wake-field acceleration or high-temperature superconductors. In the current situation, with colliders so extremely costly, it makes more economic sense to wait and see whether one of these technologies reaches maturity. Who wants to spend some billions digging a 100km tunnel when that tunnel may no longer be necessary by the time the collider could be in operation? Anyone who talks about building a larger particle collider, but who does not mention the above-named issues, demonstrates that they neither care about progress in physics nor about social responsibility. They do not want to have a sincere discussion. Instead, they are presenting a one-sided view. They are merely lobbying. If you encounter any such person, I recommend you ask them the following: Why were all these predictions wrong and what have particle physicists learned from it?
Why is a larger particle collider a good way to invest such large amounts of money in the foundations of physics now? What is the benefit of such an investment for society? And do not take as response arguments about benefiting collaborations, scientific infrastructure, or education, because such arguments can be made in favor of any large investment into science. Such generic arguments do not explain why a particle collider in particular is the thing to do. I have a handy list with responses to further nonsense arguments here. A prediction. If you give particle physicists money for a next larger collider this is what will happen: This money will be used to hire more people who will tell you that particle physics is great. They will continue to invent new particles according to some new fad, and then claim they learned something when their expensive machine falsifies these inventions. In 40 years, we will still not know what dark matter is made of or how to quantize gravity. We will still not have a working fusion reactor, will still not have quantum computers, and will still have group-think in science. Particle physicists will then begin to argue they need a larger collider. Rinse and repeat. Of course it is possible that a larger collider will find something new. The only way to find out with certainty is to build it and look. But the same “Just Look” argument can be made about any experiment that explores new frontiers. Point is: Particle physicists have so far failed to come up with any reason why going to higher energies is currently a promising route forward. The conservative expectation therefore is that the next larger collider would be much like the LHC, but for twice the price and without the Higgs. Particle physics is a large and very influential community. Do not fall for their advertisements. Ask the hard questions. Monday, June 03, 2019 The multiverse hypothesis: Are there other universes besides our own? You are one of some billion people on this planet. This planet is one of some hundred billion planets in this galaxy. This galaxy is one of some hundred billion galaxies in the universe. Is our universe the only one? Or are there other universes? In the past decades, the idea that our universe is only one of many, has become popular among physicists. If there are several universes, their collection is called the “multiverse”, and physicists have a few theories for this that I want to briefly tell you about. 1. Eternal Inflation. We do not know how our universe was created and maybe we will never know. But according to a presently popular theory, called “inflation”, our universe was created from a quantum fluctuation of a field called the “inflaton”. In this case, there would be infinitely many such fluctuations giving rise to infinitely many universes. This process of universe-creation never stops, which is why it is called eternal inflation.  These other universes may contain the same matter as ours, but in different arrangements, or they may contain different types of matter. They may have the same laws of nature, or entirely different laws. Really, pretty much anything goes, as long as you have space, time, and matter. 2. The String Theory Landscape The string theory landscape came out of the realization that string theory does not, as originally hoped, uniquely predict the laws of nature we observe. Instead, the theory allows for many different laws of nature, that would give rise to universes different from our own.  
The idea that all of them exist goes together well with eternal inflation, and so, the two theories are often lumped together. 3. Many Worlds Many Worlds is an interpretation of quantum mechanics. In quantum mechanics, we can make predictions only for probabilities. We can say, for example, that a particle goes left or right, each with 50% probability. But then, when we measure it, we find it either left or right. And then we know where it is with 100% probability. So what happened with the other option? The most common attitude you find among physicists is: who cares? We are here and that’s what we have measured, now let’s move on. The many worlds interpretation, however, postulates that all possible outcomes of an experiment exist, each in a separate universe. It’s just that we happen to live in only one of those universes, and never see the other ones. 4. The Simulation Hypothesis Video games are getting better by the day, and it’s easy to imagine that maybe one day they will be so good we can no longer tell apart the virtual world and the real world. This brings up the question whether maybe we already live in a virtual world, one that is programmed by some being more intelligent than us and technologically ahead of us. If that is so, there is no reason to think that our universe is the only simulation that is going on. There may be many other universe simulations, programmed by superintelligent beings. This, too, is a variant of the multiverse. 5. The Mathematical Universe Finally, let me briefly mention the idea, popularized by Max Tegmark, that all of mathematics exists, and that we merely observe a very small part of it. It is this small part of mathematics that we call our universe. Are these theories science? Or are they fiction? Let me know what you think. Does God exist? Science does not have an answer. Thursday, May 30, 2019 Quantum mechanics: Still mysterious after all these years Last week I was in Barcelona, Spain, where I visited the Center for Contemporary Culture (CCCB). I didn’t know until I arrived that they currently have an exhibition on quantum mechanics. I found the exhibition very interesting, and – thinking you would find it interesting too! – compiled the video below. Since I didn't have my video camera with me, it is made of footage provided by the CCCB and some recordings that a friendly lady from Spanish TV made on her phone. Enjoy! Update May 31st: Now with Italian and German subtitles. Click on CC in the YouTube toolbar. Choose a language via the settings/gear icon. Tuesday, May 28, 2019 Capitalism is good for you Most economists I know started out as physicists. Being a physicist myself of course means that the sample is biased, but still it serves to demonstrate the closeness of the two subjects. The emergence of market economies in human society is almost a universal. Because markets are non-centralized, they can, and will, spontaneously arise. As of today, capitalism is the best mechanism we know to optimize the distribution of resources. We use it for one simple reason: It works. A physicist cannot fail to see how similar the problem of distributing resources is to optimization problems in many-body systems, to equilibrium processes, to self-organized criticality. I know a lot of people loathe the idea that humans are just nodes in a network, tasked to exchange bits of information. But to first approximation that’s what we are. I am not a free market enthusiast.
Free markets work properly only if both consumers and producers rationally evaluate all available information, for example about the societal and environmental impacts of purchasing a product. This is a cognitive task we simply cannot, in practice, perform. Therefore, while the theoretically optimal solution would be that we act perfectly rationally and exclusively rely on markets, in reality we use political systems as shortcuts. Laws and regulations result in market inefficiencies, but they approximately take into account values that our sloppy purchase decisions neglect. So far, so clear, or at least that’s what I thought. In the past months, however, I have repeatedly come across videos and opinion pieces that claim we must overthrow capitalism to save the world. Some examples: These articles spread some misinformation that I want to briefly sort out. Even if my little blog touches the lives of only a few people each day, a drop in the ocean is still a drop. If you don’t understand how black holes emit radiation, that’s unfortunate, but honestly it doesn’t really matter. If you don’t understand how capitalism works, that matters a great deal more. First, the major reason we have problems with capitalism is that it does not work properly. We know that markets fail under certain circumstances. Monopolies are one of them, and this is certainly a problem we see with social media. That markets do not automatically account for externalities is another reason, and this is the problem we see with environmental protection. The biggest problem, however, is what I already mentioned above, that markets only work if consumers know what they are buying. Which brings me to another misunderstanding. Second, capitalism isn’t about money. No, really, it is not. Money is just a medium we exchange to reach an optimal configuration. It does not itself define what is optimal. Markets optimize a quantity called “utility”. What is “utility”, you ask? It is whatever is relevant to you. You may, for example, be willing to pay a somewhat higher price for a social media platform that does not spur the demise of democracy. This should, theoretically, give companies feedback about what customers want, leading the company to improve its products. Why isn’t this working? It’s not working because we currently pay for most online services – think Facebook – by advertisements. The cost of producing the advertisements increases the price of the advertised product. With this arrangement there is no feedback from the consumer of the online service to the service provider itself. Indeed, I suspect that Facebook prefers financing by ads exactly because this way they do not have to care about what users want. Now add on top that users do not actually know what they are getting into, and it isn’t hard to see why capitalism fails here: The self-correction of the market cannot work if consumers do not know what they get, and producers don’t get financial feedback about how well they meet consumer demand. And that’s leaving aside the monopoly problem. In a functioning capitalist system, nothing prevents you from preferentially buying products of companies that support your non-financial values, thereby letting producers know that that’s what you want. Or, if that’s too much thinking, vote for a party that passes laws enforcing these values. Third, capitalism just shows us who we are. Have you given all your savings to charity today? You probably haven’t.
Children are starving in Africa and birds are choking from plastic, but you are sitting on your savings. A bad, bad, person you are. Guess what, you’re not alone. Of course you save money not for the sake of having money, but because it offers safety, freedom, health, entertainment and, yes, also luxury to some extent. You do not donate all your money to charity because, face it, you value your future well-being higher than the lives of children you don’t know. Economists call this “revealed preferences”. What we spend and do not spend our money on reveals what matters to us. Part of the backlash against capitalism that we now see is people who are inconsistent about their preferences. Or maybe they are just virtue-signalling because it’s fashionable. Yes, limiting climate change is important, nod-nod. But if that means rising gas prices, then maybe it’s not all that important. Complaining about capitalism will not resolve this tension. As I said in a recent blogpost, when it comes to climate change there are no simple solutions. It will hurt either way. And maybe the truth is that many of us just do not care all that much about future generations. Capitalism, of course, cannot fix all our problems, even if it was working perfectly from tomorrow on. That’s because change takes time, and if we don’t have time, the only thing that will get human society to move quickly is a centralization of power. Monday, May 27, 2019 Do I exist? Last week, we discussed what scientists mean when they say that something exists. To recap briefly: Something exists if it is useful to explain observations. This makes, most importantly, no statement about what is real or true which is a question for philosophers, not scientists. I then asked you to tell me whether you think that I exist. Many of you submitted great answers to my existential question. I want to pick two examples to illustrate some key points. Fernando wrote in comment on my YouTube Channel: “When I say “that chair exists”, I am also fitting the data (collected with my senses) with my internal conceptions of a chair. I think that the same is true when I see a two dimensional representation of Sabine Hossenfelder on my computer screen and say that Sabine exists.” As he says, correctly, he is really just trying to create a model to explain his sensory input. I am part of a model that works well, therefore he says I exist. And Dr Castaldo wrote in a comment on this blog: “I believe a single human exists that appears in videos and photographs and authors these blog posts, tweets, and answers to commentary. I am aware of no other plausible (to me) explanation for those artifacts and their consistency.” The important part of this comment is that he emphasizes the explanation that I exist is plausible, not certain, and it is plausible to him, personally. In the comments on my earlier blogpost you then find some exchange about whether it is possible that my videos are generated by an artificial intelligence and I do not exist. For all you know, that is possible. But even with the most advanced software presently available, it would be a challenge to fake me. At the very least, making me up would be a lot of effort for no good reason. Possible, yes, but not plausible. The simplest explanation is what most of you probably believe, that I am a human being not unlike yourself, in an apartment not unlike your own, with a camera and laptop, not unlike your own, and so on. 
And simple explanations are the most useful ones in a computational sense, so these are the ones scientists go with. The important points here are the following. First, explaining sensory input is all you ever do. You collect data with your senses and try to create a consistent model of the world that explains this data. Even scientists and their papers are just sensory input that you use to create a model of the world. Second, confidence in existence is gradual. The only reliable statements about what exists are based on models that you use to explain observations. But the confidence you have in these models depends on what data you have, and therefore it can gradually increase or decrease. The more videos you watch of me, the more confident you will be that I exist. It’s not either-or. It’s maybe or probably. The thing you can be most confident exists is yourself, because you cannot explain anything unless there is a you to do the explaining. Data about yourself are the most immediate. That’s a complicated way of rephrasing what Descartes said: I think, therefore I am. Third, how confident you are that something exists depends on your personal history. It depends on your experience and your knowledge. If we have met and I shook your hand, you will be much more confident that I exist. Why? Because you only know of one way to create this sensory input. If you merely see me on a laptop screen, you also have to use knowledge about how your screen works, how the internet works, how human society works, and what the current status of artificial intelligence is, and so on. It’s a more difficult analysis of the data, and you will end up with a lower confidence. And this is why science communication is so, so relevant. Because someone who does not understand how scientists infer the existence of the Higgs-boson from data, and also does not understand how science itself works, will end up with a low confidence that the Higgs-boson exists and they will begin to question the use of science in general. Having settled this, here is the next homework assignment: Does God exist? Let me know what you think. Update May 29: The video now has German and Italian subtitles. To see those, click on CC in the YouTube toolbar. Choose a language via the settings/gear icon. Wednesday, May 22, 2019 Does the Higgs-boson exist? What do scientists mean when they say that something exists? Every time I give a public lecture, someone will come and inform me that black holes don’t exist, or quarks don’t exist, or time doesn’t exist. Last time someone asked me “Do you really believe that gravitational waves exist?” So, do I believe that gravitational waves exist? Let me ask you in return: Why do you care what I believe? What does it matter for anything? Look, I am a scientist. Scientists don’t deal with beliefs. They deal with data and hypotheses. Science is about knowledge and facts, not about beliefs. And what I know is that Einstein’s theory of general relativity is a mathematical framework from which we can derive predictions that are in excellent agreement with observation. We have given names to the mathematical structures in this theory. One of them is called gravitational waves, another one is called black holes. These are the mathematical structures from which we can calculate the observational consequences that have now been measured by the LIGO and VIRGO gravitational wave interferometers.
When we say that these experiments measured “gravitational waves emitted in a black hole merger”, we really mean that specific equations led to correct predictions. It is a similar story for the Higgs-boson and for quarks. The Higgs-boson and quarks are names that we have given to mathematical structures. In this case the structures are part of what is called the standard model of particle physics. We use this mathematics to make predictions. The predictions agree with measurements. That is what we mean when we say “quarks exist”: We mean that the predictions obtained with the hypothesis agree with observations. Same story for time. In General Relativity, time is a coordinate, much like space. It is part of the mathematical framework. We use it to make predictions. The predictions agree with observations. And that’s that. Now, you may complain that this is not what you mean by “existence”. You may insist that you want to know whether it is “real” or “true”. I do not know what it means for something to be “real” or “true.” You will have to consult a philosopher on that. They will offer you a variety of options that you may or may not find plausible. A lot of scientists, for example, subscribe knowingly or unknowingly to a philosophy called “realism”, which means that they believe a successful theory is not merely a tool to obtain predictions, but that its elements have an additional property that you can call “true” or “real”. I am loosely speaking here, because there are several variants of realism. But they have in common that the elements of the theory are more than just tools. And this is all well and fine, but realism is a philosophy. It’s a belief system, and science does not tell you whether it is correct. So here is the thing. If you want to claim that the Higgs-boson does not exist, you have to demonstrate that the theory which contains the mathematical structure called “Higgs-boson” does not fit the data. Whether or not Higgs-bosons ever arrive in a detector is totally irrelevant. Here is a homework assignment: Do you think that I exist? And what do you even mean by that? Update: Now with subtitles in German and Italian. To see them, click on CC in the YouTube toolbar, then choose a language in the settings/gear icon. Tuesday, May 21, 2019 Book and travel update The French translation of my book “Lost in Math” has now appeared under the title “Lost in Maths: Comment la beauté égare la physique”. The Spanish translation has now also appeared under the title “Perdidos en las matemáticas: Cómo la belleza confunde a los físicos.” I don’t speak Spanish, but for all I can tell, the title is a literal translation. On Thursday (May 23rd) I am giving a public lecture in Barcelona. The lecture, it turns out, will be simultaneously translated into Spanish. This, I think, will be an interesting experience. The next talks I have scheduled are a colloquium in Mainz, Germany, on June 11, and a public lecture in Groningen, Netherlands, on June 21st. The public lecture is associated with the workshop “Probabilities in Cosmology” at the University of Groningen. I declined the invitation to the Nobel Laureate meeting in Lindau because I was informed they would only cover my travel expenses if I agreed in advance to write about their meeting for a 3rd party. (If you get pitches about the meeting, please ask the author for a COI.) After some back and forth, I accepted the invitation to SciFoo 2019, mostly because I couldn’t think of a way to justify declining it even to myself.
The fall is filling up too. The current plan looks roughly like this: On September 21st, I am giving a public lecture in Nürnberg. In early October I am in Brussels for a workshop. In mid-October I am giving a public lecture at the University of Minnesota. (I have not yet booked the flight for this trip. So if you want me to stop by your institution for a lecture on the way, please get in touch asap.) At the end of October I am giving a lecture in Göttingen, and the first week of November I am in Potsdam and, again, in Berlin. From November on, I will be unemployed, at least that is what it presently looks like. Or maybe I should say I will be fully self-employed. Either way, I will have to think of some other way to earn money than doing calculations in anti-de Sitter space. Finally, here is the usual warning that I am traveling for the rest of the week and comments on this blog will be stuck in the moderation queue longer than usual. Sunday, May 19, 2019 10 things you should know about black holes When I first learned about black holes, I was scared that one would fly through our solar system and eat us up. That was 30 years ago. I'm not afraid of black holes anymore but I am afraid that they have been misunderstood. So here are 10 things that you should know about black holes. 1. What is a black hole? A black hole contains a region from which nothing ever can escape, because, to escape, you would have to move faster than the speed of light, which you can’t. The boundary of the region from which you cannot escape is called the “horizon.” In the simplest case, the horizon has the form of a sphere. Its radius is known as the Schwarzschild radius, named after Karl Schwarzschild, who first derived black holes as a solution to Einstein’s General Relativity. 2. How large are black holes? The diameter of a black hole is directly proportional to the mass of the black hole. So the more mass falls into the black hole, the larger the black hole becomes. Compared to other stellar objects though, black holes are tiny because enormous gravitational pressure has compressed their mass into a very small volume. For example, the radius of a black hole with the approximate mass of planet Earth is only a few millimeters (the numerical sketch at the end of this post puts a number to this). 3. What happens at the horizon? A black hole horizon does not have substance. Therefore, someone crossing the black hole horizon does not notice anything weird going on in their immediate surroundings. This follows from Einstein’s equivalence principle, which implies that in your immediate surroundings you cannot tell the difference between acceleration in flat space and the curved space that gives rise to gravity. However, an observer far away from a black hole who watches somebody fall in would notice that the infalling person seems to move slower and slower the closer they get to the horizon. It appears this way because time close to the black hole horizon runs much slower than far away from the horizon. That’s one of these odd consequences of the relativity of time that Einstein discovered. So, if you fall into a black hole, it only takes a finite amount of time to cross the horizon, but from the outside it looks like it takes forever. What you would experience at the horizon depends on the tidal force of the gravitational field. The tidal force is, loosely speaking, the change of the gravitational force. It’s not the gravitational force itself, it’s the difference between the gravitational forces at two nearby places, say at your head and at your feet.
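For readers who like a formula: a rough Newtonian estimate of this difference (a sketch, not the full general-relativistic expression) is that two points separated by a small distance L, at distance r from a mass M, feel gravitational accelerations that differ by about

$$\Delta a \;\approx\; \frac{2\,G\,M}{r^{3}}\,L .$$

Evaluated at the horizon radius r ≈ 2GM/c², this difference falls off with the square of the mass, which is the point made next.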
The tidal force at the horizon is inversely proportional to the square of the mass of the black hole. This means the larger and more massive the black hole, the smaller the tidal force at the horizon. Yes, you heard that right. The larger the black hole, the smaller the tidal force at the horizon. Therefore, if the black hole is massive enough, you can cross the horizon without noticing what just happened. And once you have crossed the horizon, there is no turning back. The stretching from the tidal force will become increasingly unpleasant as you approach the center of the black hole, and eventually rip everything apart. In the early days of General Relativity many physicists believed that there was a singularity at the horizon, but this turned out to be a mathematical mistake. 4. What is inside a black hole? Nobody really knows. General relativity predicts that inside the black hole is a singularity, that is, a place where the tidal forces become infinitely large. But we know that General Relativity does not work near the singularity because there, the quantum fluctuations of space and time become large. To be able to tell what is inside a black hole we would need a theory of quantum gravity – and we don’t have one. Most physicists believe that such a theory, if we had it, would replace the singularity with something else. 5. How do black holes form? We presently know of four different ways that black holes may form. The best understood one is stellar collapse. A sufficiently large star will form a black hole after its nuclear fusion runs dry, which happens when the star has fused everything that could be fused. Now, when the pressure generated by the fusion stops, the matter starts falling towards its own gravitational center, and thereby it becomes increasingly dense. Eventually the matter is so dense that nothing can overcome the gravitational pull on the star’s surface: That’s when a black hole has been created. These black holes are called ‘solar mass black holes’ and they are the most common ones. The next most common type of black holes are ‘supermassive black holes’ that can be found in the centers of many galaxies. Supermassive black holes have masses from about a million to several billion times that of solar mass black holes. Exactly how they form is still not entirely clear. Many astrophysicists think that supermassive black holes start out as solar mass black holes, and, because they sit in a densely populated galactic center, they swallow a lot of other stars and grow. However, it seems that the black holes grow faster than this simple idea suggests, and exactly how they manage this is not well understood. A more controversial idea is primordial black holes. These are black holes that might have formed in the early universe by large density fluctuations in the plasma. So, they would have been there all along. Primordial black holes can in principle have any mass. While this is possible, it is difficult to find a model that produces primordial black holes without producing so many of them that it conflicts with observation. Finally, there is the very speculative idea that tiny black holes could form in particle colliders. This can only happen if our universe has additional dimensions of space. And so far, there has not been any observational evidence that this might be the case. 6. How do we know black holes exist? We have a lot of observational evidence that speaks for very compact objects with large masses that do not emit light.
These objects reveal themselves by their gravitational pull. They do this for example by influencing the motion of other stars or gas clouds around them, which we have observed. We furthermore know that these objects do not have a surface. We know this because matter falling onto an object with a surface would cause more emission of particles than matter falling through a horizon and then just vanishing. And since most recently, we have the observation from the “Event Horizon Telescope” which is an image of the black hole shadow. This is basically an extreme gravitational lensing event. All these observations are compatible with the explanation that they are caused by black holes, and no similarly good alternative explanation exists. 7. Why did Hawking once say that black holes don’t exist? Hawking was using a very strict mathematical definition of black holes, and one that is rather uncommon among physicists.  If the inside of the black hole horizon remains disconnected forever, we speak of an “event horizon”. If the inside is only disconnected temporarily, we speak of an “apparent horizon”. But since an apparent horizon could be present for a very long time, like, billions of billions of years, the two types of horizons cannot be told apart by observation. Therefore, physicists normally refer to both cases as “black holes.” The more mathematically-minded people, however, count only the first case, with an eternal event horizon, as black hole. What Hawking meant is that black holes may not have an eternal event horizon but only a temporary apparent horizon. This is not a controversial position to hold, and one that is shared by many people in the field, including me. For all practical purposes though, the distinction Hawking drew is irrelevant. 8. How can black holes emit radiation? Black hole can emit radiation because the dynamical space-time of the collapsing black hole changes the notion of what a particle is. This is another example of the “relativity” in Einstein’s theory. Just like time passes differently for different observers, depending on where they are and how they move, the notion of particles too depends on the observer, on where they are and how they move.  Because of this, an observer who falls into a black hole thinks he is falling in vacuum, but an observer far away from the black hole thinks that it’s not vacuum but full of particles. And where do the particles come from? They come from the black hole. This radiation that black holes emit is called “Hawking radiation” because Hawking was the first to derived that this should happen. This radiation has a temperature which is inversely proportional to the black hole’s mass: So, the smaller the black hole the hotter. For the stellar and supermassive black holes that we know of, the temperature is well below that of the Cosmic microwave background and cannot be observed. 9. What is the information loss paradox? The information loss paradox is caused by the emission of Hawking radiation. This happens because the Hawking radiation is purely thermal which means it is random except for having a specific temperature. In particular, the radiation does not contain any information about what formed the black hole.  But while the black hole emits radiation, it loses mass and shrinks. So, eventually, the black hole will be entirely converted into random radiation and the remaining radiation depends only on the mass of the black hole. It does not at all depend on the details of the matter that formed it, or whatever fell in later. 
Therefore, if one only knows the final state of the evaporation, one cannot tell what formed the black hole.  Such a process is called “irreversible” — and the trouble is that there are no such processes in quantum mechanics. Black hole evaporation is therefore inconsistent with quantum theory as we know it and something has to give. Somehow this inconsistency has to be removed. Most physicists believe that the solution is that the Hawking radiation somehow must contain information after all. 10. So, will a black hole come and eat us up? It’s not impossible, but very unlikely.  Most stellar objects in galaxies orbit around the galactic center because of the way that galaxies form. It happens on occasion that two solar systems collide and a star or planet or black hole, is kicked onto a strange orbit, leaves one solar system and travels around until it gets caught up in the gravitational field of some other system.  But the stellar objects in galaxies are generally far apart from each other, and we sit in an outer arm of a spiral galaxy where there isn’t all that much going on. So, it’s exceedingly improbable that a black hole would come by on just exactly the right curve to cause us trouble. We would also know of this long in advance because we would see the gravitational pull of the black hole acting on the outer planets. If you enjoy my blog, please consider donating by using the button in the top right corner. Thanks!
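As a numerical footnote to points 2 and 8 above: the Hawking temperature is inversely proportional to the mass, and for astrophysical black holes it sits far below the roughly 2.7 K cosmic microwave background. The sketch below assumes the standard formula $T = \hbar c^3/(8\pi G M k_B)$; the chosen masses are illustrative:

```python
import math

# Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B); smaller black holes are hotter.
hbar = 1.055e-34   # J s
c    = 2.998e8     # m/s
G    = 6.674e-11   # m^3 kg^-1 s^-2
k_B  = 1.381e-23   # J/K
M_sun = 1.99e30    # kg
T_cmb = 2.725      # K

def hawking_temperature(M):
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

for name, M in [("1 solar mass", M_sun), ("10 solar masses", 10 * M_sun),
                ("1e9 solar masses", 1e9 * M_sun)]:
    T = hawking_temperature(M)
    print(f"{name}: T = {T:.3e} K  ({'below' if T < T_cmb else 'above'} the CMB)")
```

A solar-mass black hole comes out at about 6e-8 K, which is why this radiation has never been observed for stellar or supermassive black holes.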
Tag Archives: hyperbolic sine Imaginary Angles You would have heard about imaginary numbers and most famous of them is i=\sqrt{-1}. I personally don’t like this name because all of mathematics is man/woman made, hence all mathematical objects are imaginary (there is no perfect circle in nature…) and lack physical meaning. Moreover, these numbers are very useful in physics (a.k.a. the study of nature using mathematics). For example, “time-dependent Schrödinger equation But, as described here: Complex numbers are a tool for describing a theory, not a property of the theory itself. Which is to say that they can not be the fundamental difference between classical and quantum mechanics (QM). The real origin of the difference is the non-commutative nature of measurement in QM. Now this is a property that can be captured by all kinds of beasts — even real-valued matrices. [Physics.SE] For more of such interpretation see: Volume 1, Chapter 22 of “The Feynman Lectures in Physics”. And also this discussion about Hawking’s wave function. All these facts may not have fascinated you, but the following fact from Einstein’s Special Relativity should fascinate you: In 1908 Hermann Minkowski explained how the Lorentz transformation could be seen as simply a hyperbolic rotation of the spacetime coordinates, i.e., a rotation through an imaginary angle. [Wiki: Rapidity] Irrespective of the fact that you do/don’t understand Einstein’s relativity, the concept of imaginary angle appears bizarre. But, mathematically its just another consequence of non-euclidean geometry which can be interpreted as Hyperbolic law of cosines etc. For example: \displaystyle{\cos (\alpha+i\beta) = \cos (\alpha) \cosh (\beta) - i \sin (\alpha) \sinh (\beta)} \displaystyle{\sin (\alpha+i\beta) = \sin (\alpha) \cosh (\beta) + i \cos (\alpha) \sinh (\beta)} Let’s try to understand what is meant by “imaginary angle” by following the article “A geometric view of complex trigonometric functions” by Richard Hammack. Consider the complex unit circle  U=\{z,w\in \mathbb{C} \ :  \  z^2+w^2=1\} of \mathbb{C}^2, in a manner exactly analogous to the definition of the standard unit circle in \mathbb{R}^2. Apparently U is some sort of surface in \mathbb{C}^2, but it can’t be drawn as simply as the usual unit circle, owing to the four-dimensional character of \mathbb{C}^2. But we can examine its lower dimensional cross sections. For example, if  z=x+iy and w=u+iv then by setting y = 0 we get the circle x^2+u^2=1 in x-u plane for v=0 and the hyperbola x^2-v^2 = 1 in x-vi plane for u=0. The cross-section of complex unit circle (defined by z^2+w^2=1 for complex numbers z and w) with the x-u-vi coordinate space (where z=x+iy and w=u+iv) © 2007 Mathematical Association of America These two curves (circle and hyperbola) touch at the points ±o, where o=(1,0) in \mathbb{C}^2, as illustrated above. The symbol o is used by Richard Hammack because this point will turn out to be the origin of complex radian measure. Let’s define complex distance between points \mathbf{a} =(z_1,w_1) and \mathbf{b}=(z_2,w_2) in \mathbb{C}^2 as where square root is the half-plane H of \mathbb{C} consisting of the non-negative imaginary axis and the numbers with a positive real part. Therefore, the complex distance between two points in \mathbb{C}^2 is a complex number (with non-negative real part). Starting at the point o in the figure above, one can move either along the circle or along the right-hand branch of the hyperbola. 
On investigating these two choices, we conclude that they involve traversing either a real or an imaginary distance. Generalizing the idea of real radian measure, we define imaginary radian measure to be the oriented arclength from o to a point p on the hyperbola, as (a) Real radian measure (b) Imaginary radian measure. © 2007 Mathematical Association of America If p is above the x axis, its radian measure is \beta i with \beta >0, while if it is below the x axis, its radian measure is \beta i with \beta <0. As in the real case, we define \cos (\beta i) and \sin (\beta i) to be the z and w coordinates of p. According to above figure (b), this gives \displaystyle{\cos (\beta i) = \cosh (\beta); \qquad \sin (\beta i) = i \sinh (\beta)} \displaystyle{\cos (\pi + \beta i) = -\cosh (\beta); \qquad \sin (\pi + \beta i) = -i \sinh (\beta)} Notice that both these relations hold for both positive and negative values of \beta, and are in agreement with the expansions of  \cos (\alpha+i\beta)  and \sin (\alpha+i\beta)  stated earlier. But, to “see” what a complex angle looks like we will have to examine the complex versions of lines and rays. Despite the four dimensional flavour, \mathbb{C}^2 is a two-dimensional vector space over the field \mathbb{C}, just like \mathbb{R}^2 over \mathbb{R}. Since a line (through the origin) in \mathbb{R}^2 is the span of a nonzero vector, we define a complex line in \mathbb{C}^2 analogously. For a nonzero vector u in \mathbb{C}^2, the complex line \Lambda through u is span(u), which is isomorphic to the complex plane. In \mathbb{R}^2, the ray \overline{\mathbf{u}} passing through a nonzero vector u can be defined as the set of all nonnegative real multiples of u. Extending this to \mathbb{C}^2 seems problematic, for the word “nonnegative” has no meaning in \mathbb{C}. Using the half-plane H (where complex square root is defined) seems a reasonable alternative. If u is a nonzero vector in \mathbb{C}, then the complex ray through u is the set \overline{\mathbf{u}} = \{\lambda u \ : \  \lambda\in H\}. Finally, we define a complex angle is the union of two complex rays \overline{\mathbf{u}_1} and \overline{\mathbf{u}_2} . I will end my post by quoting an application of imaginary angles in optics from here: … in optics, when a light ray hits a surface such as glass, Snell’s law tells you the angle of the refracted beam, Fresnel’s equations tell you the amplitudes of reflected and transmitted waves at an interface in terms of that angle. If the incidence angle is very oblique when travelling from glass into air, there will be no refracted beam: the phenomenon is called total internal reflection. However, if you try to solve for the angle using Snell’s law, you will get an imaginary angle. Plugging this into the Fresnel equations gives you the 100% reflectance observed in practice, along with an exponentially decaying “beam” that travels a slight distance into the air. This is called the evanescent wave and is important for various applications in optics. [Mathematics.SE]
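Since the identities above are easy to get wrong by a sign, here is a quick numerical sanity check using Python's built-in cmath module; the particular values of α and β are arbitrary test inputs:

```python
import cmath, math

alpha, beta = 0.7, 1.3   # arbitrary test angles (radians)

# cos(a+ib) = cos(a)cosh(b) - i sin(a)sinh(b)
lhs = cmath.cos(alpha + 1j * beta)
rhs = math.cos(alpha) * math.cosh(beta) - 1j * math.sin(alpha) * math.sinh(beta)
print(abs(lhs - rhs))   # ~1e-16

# sin(a+ib) = sin(a)cosh(b) + i cos(a)sinh(b)
lhs = cmath.sin(alpha + 1j * beta)
rhs = math.sin(alpha) * math.cosh(beta) + 1j * math.cos(alpha) * math.sinh(beta)
print(abs(lhs - rhs))   # ~1e-16

# purely imaginary angle: cos(ib) = cosh(b), sin(ib) = i sinh(b)
print(cmath.cos(1j * beta), math.cosh(beta))
print(cmath.sin(1j * beta), 1j * math.sinh(beta))
```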
WKB Computations on Morse Potential

The semiclassical Wentzel–Kramers–Brillouin (WKB) method applied to one-dimensional problems with bound states often reduces to the Sommerfeld–Wilson quantization conditions, the cyclic phase-space integrals $\oint p\,dx=(n+\tfrac{1}{2})h$, $n=0,1,2,\dots$. It turns out that this formula gives the exact bound-state energies for the Morse oscillator with $V(x)=D\,(1-e^{-\alpha x})^2$. The requisite integral can be reduced to $\int_{x_1}^{x_2}\sqrt{2m(E-V(x))}\,dx$, in which $x_1$ and $x_2$ are the classical turning points, where $V(x)=E$. The integral can be done "by hand", using a change of variable followed by a contour integration in the complex plane, but Mathematica can evaluate the integral explicitly (with suitable assumptions on the parameters). The result reads $\int_{x_1}^{x_2}\sqrt{2m(E-V(x))}\,dx=\frac{\pi}{\alpha}\left(\sqrt{2mD}-\sqrt{2m(D-E)}\right)$, which, set equal to $(n+\tfrac{1}{2})\pi\hbar$, can be solved for the energy to give $E_n=\hbar\omega_0(n+\tfrac{1}{2})-\frac{[\hbar\omega_0(n+\tfrac{1}{2})]^2}{4D}$, with $\omega_0=\alpha\sqrt{2D/m}$ (the Demonstration works in units with $\hbar=1$). The highest bound state is given by $n_{\max}=\lfloor\sqrt{2mD}/(\alpha\hbar)-\tfrac{1}{2}\rfloor$, where $\lfloor\cdot\rfloor$ represents the floor, which for positive numbers is simply the integer part. The values of $D$, $\alpha$, and $m$ (expressed in atomic units) used in this Demonstration are for illustrative purposes only and are not necessarily representative of any actual diatomic molecule.

Contributed by: S. M. Blinder (January 2011). Open content licensed under CC BY-NC-SA.

A particle in a one-dimensional potential $V(x)$ can be described by the Schrödinger equation $-\frac{\hbar^2}{2m}\psi''(x)+V(x)\psi(x)=E\psi(x)$. The semiclassical or WKB method is based on the ansatz $\psi(x)=e^{iS(x)/\hbar}$. In the limit as $\hbar\to 0$, $S(x)$ in the exponential satisfies the Hamilton–Jacobi equation for the action function, with one solution $S(x)=\int^x\sqrt{2m(E-V(x'))}\,dx'$. It is then shown in most graduate-level texts on quantum mechanics (e.g. Schiff, Merzbacher, etc.) that this usually leads to the Sommerfeld–Wilson quantum conditions on periodic orbits, $\oint p\,dq=(n+\tfrac{1}{2})h$, $n=0,1,2,\dots$. For one-dimensional problems, the cyclic integral can be replaced by $2\int_{x_1}^{x_2}p\,dx$, where $x_1$, $x_2$ are the classical turning points of the motion and $p=\sqrt{2m(E-V(x))}$. As an étude consider the linear harmonic oscillator, with $V(x)=\tfrac{1}{2}m\omega^2x^2$ and $p=\sqrt{2m(E-\tfrac{1}{2}m\omega^2x^2)}$. The quantum condition reads $\oint p\,dx=2\pi E/\omega=(n+\tfrac{1}{2})h$, which can be solved to give $E_n=(n+\tfrac{1}{2})\hbar\omega$. This is one of a small number of cases in which the WKB method gives the exact quantum-mechanical energies.
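The same bookkeeping can be checked numerically without any contour integration. The sketch below solves the WKB quantization condition for the Morse oscillator by root-finding and compares it with the closed-form energies quoted above; the parameter values (ħ = m = 1, D = 10, α = 1) are illustrative only, as in the Demonstration:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

hbar, m, D, alpha = 1.0, 1.0, 10.0, 1.0   # illustrative units and parameters

def V(x):
    return D * (1.0 - np.exp(-alpha * x))**2

def action(E):
    """(1/pi) times the integral of p dx between the classical turning points."""
    s = np.sqrt(E / D)
    x1 = -np.log(1.0 + s) / alpha          # inner turning point
    x2 = -np.log(1.0 - s) / alpha          # outer turning point
    integrand = lambda x: np.sqrt(max(2.0 * m * (E - V(x)), 0.0))
    val, _ = quad(integrand, x1, x2, limit=200)
    return val / np.pi

omega0 = alpha * np.sqrt(2.0 * D / m)
n_max = int(np.floor(np.sqrt(2.0 * m * D) / (alpha * hbar) - 0.5))

for n in range(n_max + 1):
    # solve  (1/pi) * integral p dx = hbar * (n + 1/2)  for the energy E
    E_wkb = brentq(lambda E: action(E) - hbar * (n + 0.5), 1e-9, D - 1e-9)
    E_exact = hbar * omega0 * (n + 0.5) - (hbar * omega0 * (n + 0.5))**2 / (4.0 * D)
    print(n, E_wkb, E_exact)
```

With these parameters there are four bound states (n = 0 to 3), and the numerically solved WKB energies agree with the closed-form expression to the accuracy of the quadrature.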
I'm trying to write some software that I can use to determine, roughly, what the physical properties of a pure substance are. I know I could just use a database of the known properties of each element, but that doesn't do much for me when I go to create molecules and combine substances. A database that lists how each element will react to another is likely impossible, so I got to thinking... there has to be a mathematical way of determining these properties!

For example, I understand that there are trends on the periodic table that show that the stronger the force between the molecules is, the higher the melting point will be. I also understand that we can calculate how strong that force is based on what we know about the atom. So why can't I find any information relating the two in some mathematical way? The same goes for color, hardness, malleability, etc. Do we really only know these values through testing? We know the values but we don't know why they are what they are?

I know this is a deeply complex topic and may not even be computationally possible. In those cases, I'm trying to do as much as I can to approximate.

Comments:
• Would you agree that the behavior of electrons and nuclei are atomic properties? If so, with an appropriate operator, we can extract the properties you are talking about from the wavefunction. – LordStryker Sep 5 '14 at 14:35
• While theoretically speaking we should expect 'yes', no general practical method is known. – permeakra Sep 5 '14 at 14:40
• @permeakra, hmmm. The way I interpret this question, we should expect 'no' as the answer. The right way of determining properties of a molecule is to solve the Schrödinger equation for all the nuclei and the electrons. Strictly speaking, there are not even atoms in molecules in the sense given to them in this question. – Wildcat Sep 5 '14 at 14:48
• @LordStryker, can you expand upon what the appropriate operator(s) may be? Also, can you link me to information regarding the wave function? I'm still pretty new to chem. – Stradigos Sep 5 '14 at 15:08
• @Stradigos, sure, Google "quantum chemistry" and follow the white rabbit. :-D – Wildcat Sep 5 '14 at 15:17

Answer:

For atomic and molecular-level properties (bond, angle, torsion energies, etc.), quantum mechanics provides the equations for solving for these properties. However, there are a number of approximations that are employed, especially for larger, more complex molecules.

For bulk properties of a substance, thermodynamics and statistical mechanics provide equations for solving for a variety of material properties (critical properties, phase equilibria, various types of enthalpy, and so on). Read up on equations of state.

There is also the option of molecular simulation. In this case you are solving relatively simple, well-established equations (e.g., Newton's equations of motion in molecular dynamics simulations) repeatedly in order to compute the (usually) equilibrium properties of a substance of interest.

In all cases, the accuracy varies, and validating results by comparing various methods is considered best practice. While there are "purists" out there who believe that all parameters in a model should be derived from first principles alone, in practice models are often fit to experimental data (or even data from quantum mechanical calculations) to improve accuracy.
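To make the "equations of state" suggestion concrete: even something as simple as the van der Waals equation already links molecular-level interaction parameters (a, b) to bulk critical properties. A minimal sketch, using textbook van der Waals constants for CO2 as an illustrative input (not a recommendation of this model for serious work):

```python
# Van der Waals equation of state: (P + a/Vm^2) * (Vm - b) = R*T
# Its critical point follows directly from the constants a and b:
#   Tc = 8a / (27*R*b),  Pc = a / (27*b^2),  Vc = 3b
R = 8.314          # J mol^-1 K^-1
a = 0.3640         # Pa m^6 mol^-2, CO2 (textbook value)
b = 4.267e-5       # m^3 mol^-1,   CO2 (textbook value)

Tc = 8 * a / (27 * R * b)
Pc = a / (27 * b**2)
Vc = 3 * b

print(f"Tc ~ {Tc:.1f} K, Pc ~ {Pc/1e5:.1f} bar, Vc ~ {Vc*1e6:.1f} cm^3/mol")
# Experimental CO2: Tc ~ 304 K, Pc ~ 74 bar -- this crude two-parameter model lands close.
```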
A hydrogen atom is an atom of the chemical element hydrogen. The electrically neutral atom contains a single positively charged proton and a single negatively charged electron bound to the nucleus by the Coulomb force. Atomic hydrogen constitutes about 75% of the baryonic mass of the universe.[1] Hydrogen atom,  1H Hydrogen 1.svg Name, symbolprotium,1H Nuclide data Natural abundance99.985% Isotope mass1.007825 u Excess energy7288.969± 0.001 keV Binding energy0.000± 0.0000 keV Isotopes of hydrogen Complete table of nuclides Depiction of a hydrogen atom showing the diameter as about twice the Bohr model radius. (Image not to scale) In everyday life on Earth, isolated hydrogen atoms (called "atomic hydrogen") are extremely rare. Instead, a hydrogen atom tends to combine with other atoms in compounds, or with another hydrogen atom to form ordinary (diatomic) hydrogen gas, H2. "Atomic hydrogen" and "hydrogen atom" in ordinary English use have overlapping, yet distinct, meanings. For example, a water molecule contains two hydrogen atoms, but does not contain atomic hydrogen (which would refer to isolated hydrogen atoms). Atomic spectroscopy shows that there is a discrete infinite set of states in which a hydrogen (or any) atom can exist, contrary to the predictions of classical physics. Attempts to develop a theoretical understanding of the states of the hydrogen atom have been important to the history of quantum mechanics, since all other atoms can be roughly understood by knowing in detail about this simplest atomic structure. The most abundant isotope, hydrogen-1, protium, or light hydrogen, contains no neutrons and is simply a proton and an electron. Protium is stable and makes up 99.985% of naturally occurring hydrogen atoms.[2] Deuterium contains one neutron and one proton. Deuterium is stable and makes up 0.0156% of naturally occurring hydrogen[2] and is used in industrial processes like nuclear reactors and Nuclear Magnetic Resonance. Tritium contains two neutrons and one proton and is not stable, decaying with a half-life of 12.32 years. Because of the short half life, tritium does not exist in nature except in trace amounts. Higher isotopes of hydrogen are only created in artificial accelerators and reactors and have half lives around the order of 10−22 (0.0000000000000000000001) second. The formulas below are valid for all three isotopes of hydrogen, but slightly different values of the Rydberg constant (correction formula given below) must be used for each hydrogen isotope. Hydrogen ionEdit Hydrogen is not found without its electron in ordinary chemistry (room temperatures and pressures), as ionized hydrogen is highly chemically reactive. When ionized hydrogen is written as "H+" as in the solvation of classical acids such as hydrochloric acid, the hydronium ion, H3O+, is meant, not a literal ionized single hydrogen atom. In that case, the acid transfers the proton to H2O to form H3O+. Ionized hydrogen without its electron, or free protons, are common in the interstellar medium, and solar wind. Theoretical analysisEdit Failed classical descriptionEdit Experiments by Ernest Rutherford in 1909 showed the structure of the atom to be a dense, positive nucleus with a tenuous negative charge cloud around it. This immediately raised questions about how such a system could be stable. Classical electromagnetism had shown that any accelerating charge radiates energy, as shown by the Larmor formula. 
If the electron is assumed to orbit in a perfect circle and radiates energy continuously, the electron would rapidly spiral into the nucleus with a fall time of:[3] Where   is the Bohr radius and   is the classical electron radius. If this were true, all atoms would instantly collapse, however atoms seem to be stable. Furthermore, the spiral inward would release a smear of electromagnetic frequencies as the orbit got smaller. Instead, atoms were observed to only emit discrete frequencies of radiation. The resolution would lie in the development of quantum mechanics. Bohr–Sommerfeld ModelEdit In 1913, Niels Bohr obtained the energy levels and spectral frequencies of the hydrogen atom after making a number of simple assumptions in order to correct the failed classical model. The assumptions included: 1. Electrons can only be in certain, discrete circular orbits or stationary states, thereby having a discrete set of possible radii and energies. 2. Electrons do not emit radiation while in one of these stationary states. 3. An electron can gain or lose energy by jumping from one discrete orbital to another. Bohr supposed that the electron's angular momentum is quantized with possible values: and   is Planck constant over  . He also supposed that the centripetal force which keeps the electron in its orbit is provided by the Coulomb force, and that energy is conserved. Bohr derived the energy of each orbit of the hydrogen atom to be:[4] where   is the electron mass,   is the electron charge,   is the vacuum permittivity, and   is the quantum number (now known as the principal quantum number). Bohr's predictions matched experiments measuring the hydrogen spectral series to the first order, giving more confidence to a theory that used quantized values. For  , the value is called the Rydberg unit of energy. It is related to the Rydberg constant   of atomic physics by   The exact value of the Rydberg constant assumes that the nucleus is infinitely massive with respect to the electron. For hydrogen-1, hydrogen-2 (deuterium), and hydrogen-3 (tritium) which have finite mass, the constant must be slightly modified to use the reduced mass of the system, rather than simply the mass of the electron. This includes the kinetic energy of the nucleus in the problem, because the total (electron plus nuclear) kinetic energy is equivalent to the kinetic energy of the reduced mass moving with a velocity equal to the electron velocity relative to the nucleus. However, since the nucleus is much heavier than the electron, the electron mass and reduced mass are nearly the same. The Rydberg constant RM for a hydrogen atom (one electron), R is given by where   is the mass of the atomic nucleus. For hydrogen-1, the quantity   is about 1/1836 (i.e. the electron-to-proton mass ratio). For deuterium and tritium, the ratios are about 1/3670 and 1/5497 respectively. These figures, when added to 1 in the denominator, represent very small corrections in the value of R, and thus only small corrections to all energy levels in corresponding hydrogen isotopes. There were still problems with Bohr's model: 1. it failed to predict other spectral details such as fine structure and hyperfine structure 2. it could only predict energy levels with any accuracy for single–electron atoms (hydrogen–like atoms) 3. the predicted values were only correct to  , where   is the fine-structure constant. Most of these shortcomings were resolved by Arnold Sommerfeld's modification of the Bohr model. 
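A quick numerical illustration of the Bohr energies and of the reduced-mass correction described above. The nucleus-to-electron mass ratios are the approximate values quoted in the text (about 1836, 3670, and 5497 for protium, deuterium, and tritium):

```python
# Bohr energy levels E_n = -Ry_M / n^2, with the Rydberg energy corrected for
# the finite nuclear mass:  Ry_M = Ry_inf / (1 + m_e/M).
Ry_inf_eV = 13.6057          # Rydberg energy for an infinitely heavy nucleus, in eV

# nucleus-to-electron mass ratios (approximate)
nuclei = {"H-1 (protium)": 1836.15, "H-2 (deuterium)": 3670.48, "H-3 (tritium)": 5496.92}

for name, M_over_me in nuclei.items():
    Ry_M = Ry_inf_eV / (1.0 + 1.0 / M_over_me)
    print(f"{name}: Ry = {Ry_M:.4f} eV, "
          f"E_1 = {-Ry_M:.4f} eV, E_2 = {-Ry_M/4:.4f} eV, E_3 = {-Ry_M/9:.4f} eV")
```

The isotope shifts are small (of order one part in a few thousand), but they are exactly the corrections to all energy levels that the text refers to.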
Sommerfeld introduced two additional degrees of freedom, allowing an electron to move on an elliptical orbit characterized by its eccentricity and declination with respect to a chosen axis. This introduced two additional quantum numbers, which correspond to the orbital angular momentum and its projection on the chosen axis. Thus the correct multiplicity of states (except for the factor 2 accounting for the yet unknown electron spin) was found. Further, by applying special relativity to the elliptic orbits, Sommerfeld succeeded in deriving the correct expression for the fine structure of hydrogen spectra (which happens to be exactly the same as in the most elaborate Dirac theory). However, some observed phenomena, such as the anomalous Zeeman effect, remained unexplained. These issues were resolved with the full development of quantum mechanics and the Dirac equation. It is often alleged that the Schrödinger equation is superior to the Bohr–Sommerfeld theory in describing hydrogen atom. This is not the case, as most of the results of both approaches coincide or are very close (a remarkable exception is the problem of hydrogen atom in crossed electric and magnetic fields, which cannot be self-consistently solved in the framework of the Bohr–Sommerfeld theory), and in both theories the main shortcomings result from the absence of the electron spin. It was the complete failure of the Bohr–Sommerfeld theory to explain many-electron systems (such as helium atom or hydrogen molecule) which demonstrated its inadequacy in describing quantum phenomena. Schrödinger equationEdit The Schrödinger equation allows one to calculate the stationary states and also the time evolution of quantum systems. Exact analytical answers are available for the nonrelativistic hydrogen atom. Before we go to present a formal account, here we give an elementary overview. Given that the hydrogen atom contains a nucleus and an electron, quantum mechanics allows one to predict the probability of finding the electron at any given radial distance  . It is given by the square of a mathematical function known as the "wavefunction," which is a solution of the Schrödinger equation. The lowest energy equilibrium state of the hydrogen atom is known as the ground state. The ground state wave function is known as the   wavefunction. It is written as: Here,   is the numerical value of the Bohr radius. The probability of finding the electron at a distance   in any radial direction is the squared value of the wavefunction: The   wavefunction is spherically symmetric, and the surface area of a shell at distance   is  , so the total probability   of the electron being in a shell at a distance   and thickness   is It turns out that this is a maximum at  . That is, the Bohr picture of an electron orbiting the nucleus at radius   is recovered as a statistically valid result. However, although the electron is most likely to be on a Bohr orbit, there is a finite probability that the electron may be at any other place  , with the probability indicated by the square of the wavefunction. Since the probability of finding the electron somewhere in the whole volume is unity, the integral of   is unity. Then we say that the wavefunction is properly normalized. As discussed below, the ground state   is also indicated by the quantum numbers  . The second lowest energy states, just above the ground state, are given by the quantum numbers  ;  ; and  . These   states all have the same energy and are known as the   and   states. 
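The claim above, that the radial probability density of the ground state peaks exactly at the Bohr radius, can be verified in a couple of lines of SymPy. A minimal sketch, where a stands for the Bohr radius:

```python
import sympy as sp

r, a = sp.symbols("r a", positive=True)

# ground-state wavefunction psi_100 = exp(-r/a) / sqrt(pi*a^3)
psi = sp.exp(-r / a) / sp.sqrt(sp.pi * a**3)

# radial probability density P(r) = 4*pi*r^2*|psi|^2
P = 4 * sp.pi * r**2 * psi**2

print(sp.integrate(P, (r, 0, sp.oo)))        # 1: the wavefunction is properly normalized
print(sp.solve(sp.Eq(sp.diff(P, r), 0), r))  # stationary point at r = a, the Bohr radius
```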
There is one   state: and there are three   states: An electron in the   or   state is most likely to be found in the second Bohr orbit with energy given by the Bohr formula. The Hamiltonian of the hydrogen atom is the radial kinetic energy operator and coulomb attraction force between the positive proton and negative electron. Using the time-independent Schrödinger equation, ignoring all spin-coupling interactions and using the reduced mass  , the equation is written as: Expanding the Laplacian in spherical coordinates: This is a separable, partial differential equation which can be solved in terms of special functions. The normalized position wavefunctions, given in spherical coordinates are: 3D illustration of the eigenstate  . Electrons in this state are 45% likely to be found within the solid body shown.   is the reduced Bohr radius,  ,   is a generalized Laguerre polynomial of degree n − 1, and   is a spherical harmonic function of degree and order m. Note that the generalized Laguerre polynomials are defined differently by different authors. The usage here is consistent with the definitions used by Messiah,[6] and Mathematica.[7] In other places, the Laguerre polynomial includes a factor of  ,[8] or the generalized Laguerre polynomial appearing in the hydrogen wave function is   instead.[9] The quantum numbers can take the following values: Additionally, these wavefunctions are normalized (i.e., the integral of their modulus square equals 1) and orthogonal: where   is the state represented by the wavefunction   in Dirac notation, and   is the Kronecker delta function.[10] The wavefunctions in momentum space are related to the wavefunctions in position space through a Fourier transform which, for the bound states, results in [11] where   denotes a Gegenbauer polynomial and   is in units of  . The solutions to the Schrödinger equation for hydrogen are analytical, giving a simple expression for the hydrogen energy levels and thus the frequencies of the hydrogen spectral lines and fully reproduced the Bohr model and went beyond it. It also yields two other quantum numbers and the shape of the electron's wave function ("orbital") for the various possible quantum-mechanical states, thus explaining the anisotropic character of atomic bonds. The Schrödinger equation also applies to more complicated atoms and molecules. When there is more than one electron or nucleus the solution is not analytical and either computer calculations are necessary or simplifying assumptions must be made. Since the Schrödinger equation is only valid for non-relativistic quantum mechanics, the solutions it yields for the hydrogen atom are not entirely correct. The Dirac equation of relativistic quantum theory improves these solutions (see below). Results of Schrödinger equationEdit The solution of the Schrödinger equation (wave equation) for the hydrogen atom uses the fact that the Coulomb potential produced by the nucleus is isotropic (it is radially symmetric in space and only depends on the distance to the nucleus). Although the resulting energy eigenfunctions (the orbitals) are not necessarily isotropic themselves, their dependence on the angular coordinates follows completely generally from this isotropy of the underlying potential: the eigenstates of the Hamiltonian (that is, the energy eigenstates) can be chosen as simultaneous eigenstates of the angular momentum operator. This corresponds to the fact that angular momentum is conserved in the orbital motion of the electron around the nucleus. 
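SymPy ships these hydrogen radial wavefunctions, so the normalization and orthogonality relations quoted above can be checked directly. A small sketch in atomic-style units with Z = 1:

```python
import sympy as sp
from sympy.physics.hydrogen import R_nl, E_nl

r = sp.symbols("r", positive=True)

# normalization of the radial parts: integral of R_nl^2 r^2 dr = 1
for n, l in [(1, 0), (2, 0), (2, 1), (3, 2)]:
    norm = sp.integrate(R_nl(n, l, r, 1)**2 * r**2, (r, 0, sp.oo))
    print(f"n={n}, l={l}: norm = {sp.simplify(norm)}")

# orthogonality of states with the same l but different n
print(sp.simplify(sp.integrate(R_nl(1, 0, r, 1) * R_nl(2, 0, r, 1) * r**2, (r, 0, sp.oo))))

# energies E_n = -Z^2/(2 n^2) in Hartree atomic units
print([E_nl(n, 1) for n in (1, 2, 3)])   # [-1/2, -1/8, -1/18]
```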
Therefore, the energy eigenstates may be classified by two angular momentum quantum numbers, and m (both are integers). The angular momentum quantum number = 0, 1, 2, ... determines the magnitude of the angular momentum. The magnetic quantum number m = −, ..., + determines the projection of the angular momentum on the (arbitrarily chosen) z-axis. Note that the maximum value of the angular momentum quantum number is limited by the principal quantum number: it can run only up to n − 1, i.e. = 0, 1, ..., n − 1. Due to angular momentum conservation, states of the same but different m have the same energy (this holds for all problems with rotational symmetry). In addition, for the hydrogen atom, states of the same n but different are also degenerate (i.e. they have the same energy). However, this is a specific property of hydrogen and is no longer true for more complicated atoms which have an (effective) potential differing from the form 1/r (due to the presence of the inner electrons shielding the nucleus potential). Taking into account the spin of the electron adds a last quantum number, the projection of the electron's spin angular momentum along the z-axis, which can take on two values. Therefore, any eigenstate of the electron in the hydrogen atom is described fully by four quantum numbers. According to the usual rules of quantum mechanics, the actual state of the electron may be any superposition of these states. This explains also why the choice of z-axis for the directional quantization of the angular momentum vector is immaterial: an orbital of given and m′ obtained for another preferred axis z′ can always be represented as a suitable superposition of the various states of different m (but same l) that have been obtained for z. Mathematical summary of eigenstates of hydrogen atomEdit In 1928, Paul Dirac found an equation that was fully compatible with Special Relativity, and (as a consequence) made the wave function a 4-component "Dirac spinor" including "up" and "down" spin components, with both positive and "negative" energy (or matter and antimatter). The solution to this equation gave the following results, more accurate than the Schrödinger solution. Energy levelsEdit The energy levels of hydrogen, including fine structure (excluding Lamb shift and hyperfine structure), are given by the Sommerfeld fine structure expression:[12] where α is the fine-structure constant and j is the "total angular momentum" quantum number, which is equal to | ± 1/2| depending on the direction of the electron spin. This formula represents a small correction to the energy obtained by Bohr and Schrödinger as given above. The factor in square brackets in the last expression is nearly one; the extra term arises from relativistic effects (for details, see #Features going beyond the Schrödinger solution). It is worth noting that this expression was first obtained by A. Sommerfeld in 1916 based on the relativistic version of the old Bohr theory. Sommerfeld has however used different notation for the quantum numbers. Coherent statesEdit The coherent states have been proposed as[13] which satisfies   and takes the form Visualizing the hydrogen electron orbitalsEdit Probability densities through the xz-plane for the electron at different quantum numbers (, across top; n, down side; m = 0) The image to the right shows the first few hydrogen atom orbitals (energy eigenfunctions). 
These are cross-sections of the probability density that are color-coded (black represents zero density and white represents the highest density). The angular momentum (orbital) quantum number is denoted in each column, using the usual spectroscopic letter code (s means  = 0, p means  = 1, d means  = 2). The main (principal) quantum number n (= 1, 2, 3, ...) is marked to the right of each row. For all pictures the magnetic quantum number m has been set to 0, and the cross-sectional plane is the xz-plane (z is the vertical axis). The probability density in three-dimensional space is obtained by rotating the one shown here around the z-axis. The "ground state", i.e. the state of lowest energy, in which the electron is usually found, is the first one, the 1s state (principal quantum level n = 1, = 0). Black lines occur in each but the first orbital: these are the nodes of the wavefunction, i.e. where the probability density is zero. (More precisely, the nodes are spherical harmonics that appear as a result of solving Schrödinger equation in polar coordinates.) The quantum numbers determine the layout of these nodes.[14] There are: •   total nodes, •   of which are angular nodes: •   angular nodes go around the   axis (in the xy plane). (The figure above does not show these nodes since it plots cross-sections through the xz-plane.) •   (the remaining angular nodes) occur on the   (vertical) axis. •   (the remaining non-angular nodes) are radial nodes. Features going beyond the Schrödinger solutionEdit There are several important effects that are neglected by the Schrödinger equation and which are responsible for certain small but measurable deviations of the real spectral lines from the predicted ones: • Although the mean speed of the electron in hydrogen is only 1/137th of the speed of light, many modern experiments are sufficiently precise that a complete theoretical explanation requires a fully relativistic treatment of the problem. A relativistic treatment results in a momentum increase of about 1 part in 37,000 for the electron. Since the electron's wavelength is determined by its momentum, orbitals containing higher speed electrons show contraction due to smaller wavelengths. • Even when there is no external magnetic field, in the inertial frame of the moving electron, the electromagnetic field of the nucleus has a magnetic component. The spin of the electron has an associated magnetic moment which interacts with this magnetic field. This effect is also explained by special relativity, and it leads to the so-called spin-orbit coupling, i.e., an interaction between the electron's orbital motion around the nucleus, and its spin. Both of these features (and more) are incorporated in the relativistic Dirac equation, with predictions that come still closer to experiment. Again the Dirac equation may be solved analytically in the special case of a two-body system, such as the hydrogen atom. The resulting solution quantum states now must be classified by the total angular momentum number j (arising through the coupling between electron spin and orbital angular momentum). States of the same j and the same n are still degenerate. Thus, direct analytical solution of Dirac equation predicts 2S(1/2) and 2P(1/2) levels of Hydrogen to have exactly the same energy, which is in a contradiction with observations (Lamb-Retherford experiment). 
For these developments, it was essential that the solution of the Dirac equation for the hydrogen atom could be worked out exactly, such that any experimentally observed deviation had to be taken seriously as a signal of failure of the theory. Alternatives to the Schrödinger theoryEdit In the language of Heisenberg's matrix mechanics, the hydrogen atom was first solved by Wolfgang Pauli[15] using a rotational symmetry in four dimensions [O(4)-symmetry] generated by the angular momentum and the Laplace–Runge–Lenz vector. By extending the symmetry group O(4) to the dynamical group O(4,2), the entire spectrum and all transitions were embedded in a single irreducible group representation.[16] In 1979 the (non relativistic) hydrogen atom was solved for the first time within Feynman's path integral formulation of quantum mechanics.[17][18] This work greatly extended the range of applicability of Feynman's method. See alsoEdit 1. ^ Palmer, D. (13 September 1997). "Hydrogen in the Universe". NASA. Archived from the original on 29 October 2014. Retrieved 23 February 2017. 2. ^ a b Housecroft, Catherine E.; Sharpe, Alan G. (2005). Inorganic Chemistry (2nd ed.). Pearson Prentice-Hall. p. 237. ISBN 0130-39913-2. 3. ^ Olsen, James; McDonald, Kirk (7 March 2005). "Classical Lifetime of a Bohr Atom" (PDF). Joseph Henry Laboratories, Princeton University. 4. ^ "Derivation of Bohr's Equations for the One-electron Atom" (PDF). University of Massachusetts Boston. 6. ^ Messiah, Albert (1999). Quantum Mechanics. New York: Dover. p. 1136. ISBN 0-486-40924-4. 7. ^ LaguerreL. Wolfram Mathematica page 8. ^ Griffiths, p. 152 9. ^ Condon and Shortley (1963). The Theory of Atomic Spectra. London: Cambridge. p. 441. 10. ^ Griffiths, Ch. 4 p. 89 11. ^ Bransden, B. H.; Joachain, C. J. (1983). Physics of Atoms and Molecules. Longman. p. Appendix 5. ISBN 0-582-44401-2. 12. ^ Sommerfeld, Arnold (1919). Atombau und Spektrallinien [Atomic Structure and Spectral Lines]. Braunschweig: Friedrich Vieweg und Sohn. ISBN 3-87144-484-7. German English 13. ^ Klauder, John R (21 June 1996). "Coherent states for the hydrogen atom". Journal of Physics A: Mathematical and General. 29 (12): L293–L298. doi:10.1088/0305-4470/29/12/002. Retrieved 18 June 2019. 14. ^ Summary of atomic quantum numbers. Lecture notes. 28 July 2006 15. ^ Pauli, W (1926). "Über das Wasserstoffspektrum vom Standpunkt der neuen Quantenmechanik". Zeitschrift für Physik. 36 (5): 336–363. Bibcode:1926ZPhy...36..336P. doi:10.1007/BF01450175. 16. ^ Kleinert H. (1968). "Group Dynamics of the Hydrogen Atom" (PDF). Lectures in Theoretical Physics, edited by W.E. Brittin and A.O. Barut, Gordon and Breach, N.Y. 1968: 427–482. 17. ^ Duru I.H., Kleinert H. (1979). "Solution of the path integral for the H-atom" (PDF). Physics Letters B. 84 (2): 185–188. Bibcode:1979PhLB...84..185D. doi:10.1016/0370-2693(79)90280-6. 18. ^ Duru I.H., Kleinert H. (1982). "Quantum Mechanics of H-Atom from Path Integrals" (PDF). Fortschr. Phys. 30 (2): 401–435. Bibcode:1982ForPh..30..401D. doi:10.1002/prop.19820300802. External linksEdit (none, lightest possible) Hydrogen atom is an isotope of hydrogen Decay product of: free neutron Decay chain of hydrogen atom Decays to:
Polarization states as hidden variables? This post explores the limits of the physical interpretation of the wavefunction we have been building up in previous posts. It does so by examining if it can be used to provide a hidden-variable theory for explaining quantum-mechanical interference. The hidden variable is the polarization state of the photon. The outcome is as expected: the theory does not work. Hence, this paper clearly shows the limits of any physical or geometric interpretation of the wavefunction. This post sounds somewhat academic because it is, in fact, a draft of a paper I might try to turn into an article for a journal. There is a useful addendum to the post below: it offers a more sophisticated analysis of linear and circular polarization states (see: Linear and Circular Polarization States in the Mach-Zehnder Experiment). Have fun with it ! A physical interpretation of the wavefunction Duns Scotus wrote: pluralitas non est ponenda sine necessitate. Plurality is not to be posited without necessity.[1] And William of Ockham gave us the intuitive lex parsimoniae: the simplest solution tends to be the correct one.[2] But redundancy in the description does not seem to bother physicists. When explaining the basic axioms of quantum physics in his famous Lectures on quantum mechanics, Richard Feynman writes: “We are not particularly interested in the mathematical problem of finding the minimum set of independent axioms that will give all the laws as consequences. Redundant truth does not bother us. We are satisfied if we have a set that is complete and not apparently inconsistent.”[3] Also, most introductory courses on quantum mechanics will show that both ψ = exp(iθ) = exp[i(kx-ωt)] and ψ* = exp(-iθ) = exp[-i(kx-ωt)] = exp[i(ωt-kx)] = -ψ are acceptable waveforms for a particle that is propagating in the x-direction. Both have the required mathematical properties (as opposed to, say, some real-valued sinusoid). We would then think some proof should follow of why one would be better than the other or, preferably, one would expect as a discussion on what these two mathematical possibilities might represent¾but, no. That does not happen. The physicists conclude that “the choice is a matter of convention and, happily, most physicists use the same convention.”[4] Instead of making a choice here, we could, perhaps, use the various mathematical possibilities to incorporate spin in the description, as real-life particles – think of electrons and photons here – have two spin states[5] (up or down), as shown below. Table 1: Matching mathematical possibilities with physical realities?[6] Spin and direction Spin up Spin down Positive x-direction ψ = exp[i(kx-ωt)] ψ* = exp[i(ωt-kx)] Negative x-direction χ = exp[i(ωt-kx)] χ* = exp[i(kx+ωt)] That would make sense – for several reasons. First, theoretical spin-zero particles do not exist and we should therefore, perhaps, not use the wavefunction to describe them. More importantly, it is relatively easy to show that the weird 720-degree symmetry of spin-1/2 particles collapses into an ordinary 360-degree symmetry and that we, therefore, would have no need to describe them using spinors and other complicated mathematical objects.[7] Indeed, the 720-degree symmetry of the wavefunction for spin-1/2 particles is based on an assumption that the amplitudes C’up = -Cup and C’down = -Cdown represent the same state—the same physical reality. As Feynman puts it: “Both amplitudes are just multiplied by −1 which gives back the original physical system. 
It is a case of a common phase change.”[8] In the physical interpretation given in Table 1, these amplitudes do not represent the same state: the minus sign effectively reverses the spin direction. Putting a minus sign in front of the wavefunction amounts to taking its complex conjugate: -ψ = ψ*. But what about the common phase change? There is no common phase change here: Feynman’s argument derives the C’up = -Cup and C’down = -Cdown identities from the following equations: C’up = eCup and C’down = eCdown. The two phase factors  are not the same: +π and -π are not the same. In a geometric interpretation of the wavefunction, +π is a counterclockwise rotation over 180 degrees, while -π is a clockwise rotation. We end up at the same point (-1), but it matters how we get there: -1 is a complex number with two different meanings. We have written about this at length and, hence, we will not repeat ourselves here.[9] However, this realization – that one of the key propositions in quantum mechanics is basically flawed – led us to try to question an axiom in quantum math that is much more fundamental: the loss of determinism in the description of interference. The reader should feel reassured: the attempt is, ultimately, not successful—but it is an interesting exercise. The loss of determinism in quantum mechanics The standard MIT course on quantum physics vaguely introduces Bell’s Theorem – labeled as a proof of what is referred to as the inevitable loss of determinism in quantum mechanics – early on. The argument is as follows. If we have a polarizer whose optical axis is aligned with, say, the x-direction, and we have light coming in that is polarized along some other direction, forming an angle α with the x-direction, then we know – from experiment – that the intensity of the light (or the fraction of the beam’s energy, to be precise) that goes through the polarizer will be equal to cos2α. But, in quantum mechanics, we need to analyze this in terms of photons: a fraction cos2α of the photons must go through (because photons carry energy and that’s the fraction of the energy that is transmitted) and a fraction 1-cos2α must be absorbed. The mentioned MIT course then writes the following: “If all the photons are identical, why is it that what happens to one photon does not happen to all of them? The answer in quantum mechanics is that there is indeed a loss of determinism. No one can predict if a photon will go through or will get absorbed. The best anyone can do is to predict probabilities. Two escape routes suggest themselves. Perhaps the polarizer is not really a homogeneous object and depending exactly on where the photon is it either gets absorbed or goes through. Experiments show this is not the case. A more intriguing possibility was suggested by Einstein and others. A possible way out, they claimed, was the existence of hidden variables. The photons, while apparently identical, would have other hidden properties, not currently understood, that would determine with certainty which photon goes through and which photon gets absorbed. Hidden variable theories would seem to be untestable, but surprisingly they can be tested. Through the work of John Bell and others, physicists have devised clever experiments that rule out most versions of hidden variable theories. No one has figured out how to restore determinism to quantum mechanics. It seems to be an impossible task.”[10] The student is left bewildered here. Are there only two escape routes? 
And is this the way how polarization works, really? Are all photons identical? The Uncertainty Principle tells us that their momentum, position, or energy will be somewhat random. Hence, we do not need to assume that the polarizer is nonhomogeneous, but we need to think of what might distinguish the individual photons. Considering the nature of the problem – a photon goes through or it doesn’t – it would be nice if we could find a binary identifier. The most obvious candidate for a hidden variable would be the polarization direction. If we say that light is polarized along the x-direction, we should, perhaps, distinguish between a plus and a minus direction? Let us explore this idea. Linear polarization states The simple experiment above – linearly polarized light going through a polaroid – involves linearly polarized light. We can easily distinguish between left- and right-hand circular polarization, but if we have linearly polarized light, can we distinguish between a plus and a minus direction? Maybe. Maybe not. We can surely think about different relative phases and how that could potentially have an impact on the interaction with the molecules in the polarizer. Suppose the light is polarized along the x-direction. We know the component of the electric field vector along the y-axis[11] will then be equal to Ey = 0, and the magnitude of the x-component of E will be given by a sinusoid. However, here we have two distinct possibilities: Ex = cos(ω·t) or, alternatively, Ex = sin(ω·t). These are the same functions but – crucially important – with a phase difference of 90°: sin(ω·t) = cos(ω·t + π/2).   Figure 1: Two varieties of linearly polarized light?[12] Would this matter? Sure. We can easily come up with some classical explanations of why this would matter. Think, for example, of birefringent material being defined in terms of quarter-wave plates. In fact, the more obvious question is: why would this not make a difference? Of course, this triggers another question: why would we have two possibilities only? What if we add an additional 90° shift to the phase? We know that cos(ω·t + π) = –cos(ω·t). We cannot reduce this to cos(ω·t) or sin(ω·t). Hence, if we think in terms of 90° phase differences, then –cos(ω·t) = cos(ω·t + π)  and –sin(ω·t) = sin(ω·t + π) are different waveforms too. In fact, why should we think in terms of 90° phase shifts only? Why shouldn’t we think of a continuum of linear polarization states? We have no sensible answer to that question. We can only say: this is quantum mechanics. We think of a photon as a spin-one particle and, for that matter, as a rather particular one, because it misses the zero state: it is either up, or down. We may now also assume two (linear) polarization states for the molecules in our polarizer and suggest a basic theory of interaction that may or may not explain this very basic fact: a photon gets absorbed, or it gets transmitted. The theory is that if the photon and the molecule are in the same (linear) polarization state, then we will have constructive interference and, somehow, a photon gets through.[13] If the linear polarization states are opposite, then we will have destructive interference and, somehow, the photon is absorbed. 
Hence, our hidden variables theory for the simple situation that we discussed above (a photon does or does not go through a polarizer) can be summarized as follows: Linear polarization state Incoming photon up (+) Incoming photon down (-) Polarizer molecule up (+) Constructive interference: photon goes through Destructive interference: photon is absorbed Polarizer molecule down (-) Destructive interference: photon is absorbed Constructive interference: photon goes through Nice. No loss of determinism here. But does it work? The quantum-mechanical mathematical framework is not there to explain how a polarizer could possibly work. It is there to explain the interference of a particle with itself. In Feynman’s words, this is the phenomenon “which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics.”[14] So, let us try our new theory of polarization states as a hidden variable on one of those interference experiments. Let us choose the standard one: the Mach-Zehnder interferometer experiment. Polarization states as hidden variables in the Mach-Zehnder experiment The setup of the Mach-Zehnder interferometer is well known and should, therefore, probably not require any explanation. We have two beam splitters (BS1 and BS2) and two perfect mirrors (M1 and M2). An incident beam coming from the left is split at BS1 and recombines at BS2, which sends two outgoing beams to the photon detectors D0 and D1. More importantly, the interferometer can be set up to produce a precise interference effect which ensures all the light goes into D0, as shown below. Alternatively, the setup may be altered to ensure all the light goes into D1. Figure 2: The Mach-Zehnder interferometer[15] Mach Zehnder The classical explanation is easy enough. It is only when we think of the beam as consisting of individual photons that we get in trouble. Each photon must then, somehow, interfere with itself which, in turn, requires the photon to, somehow, go through both branches of the interferometer at the same time. This is solved by the magical concept of the probability amplitude: we think of two contributions a and b (see the illustration above) which, just like a wave, interfere to produce the desired result¾except that we are told that we should not try to think of these contributions as actual waves. So that is the quantum-mechanical explanation and it sounds crazy and so we do not want to believe it. Our hidden variable theory should now show the photon does travel along one path only. If the apparatus is set up to get all photons in the D0 detector, then we might, perhaps, have a sequence of events like this: Photon polarization At BS1 At BS2 Final result Up (+) Photon is reflected Photon is reflected Photon goes to D0 Down () Photon is transmitted Photon is transmitted Photon goes to D0 Of course, we may also set up the apparatus to get all photons in the D1 detector, in which case the sequence of events might be this: Photon polarization At BS1 At BS2 Final result Up (+) Photon is reflected Photon is transmitted Photon goes to D1 Down () Photon is transmitted Photon is reflected Photon goes to D1 This is a nice symmetrical explanation that does not involve any quantum-mechanical weirdness. The problem is: it cannot work. Why not? What happens if we block one of the two paths? For example, let us block the lower path in the setup where all photons went to D0. 
We know – from experiment – that the outcome will be the following: Final result Probability Photon is absorbed at the block 0.50 Photon goes to D0 0.25 Photon goes to D1 0.25 How is this possible? Before blocking the lower path, no photon went to D1. They all went to D0. If our hidden variable theory was correct, the photons that do not get absorbed should also go to D0, as shown below. Photon polarization At BS1 At BS2 Final result Down () Photon is absorbed Photon was absorbed Photon was absorbed Our hidden variable theory does not work. Physical or geometric interpretations of the wavefunction are nice, but they do not explain quantum-mechanical interference. Their value is, therefore, didactic only. Jean Louis Van Belle, 2 November 2018 This paper discusses general principles in physics only. Hence, references were limited to references to general textbooks and courses and physics textbooks only. The two key references here are the MIT introductory course on quantum physics and Feynman’s Lectures – both of which can be consulted online. Additional references to other material are given in the text itself (see footnotes). [1] Duns Scotus, Commentaria. [2] See: https://en.wikipedia.org/wiki/Occam%27s_razor. [3] Feynman’s Lectures on Quantum Mechanics, Vol. III, Chapter 5, Section 5. [4] See, for example, the MIT’s edX Course 8.04.1x (Quantum Physics), Lecture Notes, Chapter 4, Section 3. [5] Photons are spin-one particles but they do not have a spin-zero state. [6] Of course, the formulas only give the elementary wavefunction. The wave packet will be a Fourier sum of such functions. [7] See, for example, https://warwick.ac.uk/fac/sci/physics/staff/academic/mhadley/explanation/spin/, accessed on 30 October 2018 [8] Feynman’s Lectures on Quantum Mechanics, Vol. III, Chapter 6, Section 3. [9] Jean Louis Van Belle, Euler’s wavefunction (http://vixra.org/abs/1810.0339, accessed on 30 October 2018) [10] See: MIT edX Course 8.04.1x (Quantum Physics), Lecture Notes, Chapter 1, Section 3 (Loss of determinism). [11] The z-direction is the direction of wave propagation in this example. In quantum mechanics, we often define the direction of wave propagation as the x-direction. This will, hopefully, not confuse the reader. The choice of axes is usually clear from the context. [12] Source of the illustration: https://upload.wikimedia.org/wikipedia/commons/7/71/Sine_cosine_one_period.svg.. [13] Classical theory assumes an atomic or molecular system will absorb a photon and, therefore, be in an excited state (with higher energy). The atomic or molecular system then goes back into its ground state by emitting another photon with the same energy. Hence, we should probably not think in terms of a specific photon getting through. [14] Feynman’s Lectures on Quantum Mechanics, Vol. III, Chapter 1, Section 1. [15] Source of the illustration: MIT edX Course 8.04.1x (Quantum Physics), Lecture Notes, Chapter 1, Section 4 (Quantum Superpositions). Feynman’s Seminar on Superconductivity (1) The ultimate challenge for students of Feynman’s iconic Lectures series is, of course, to understand his final one: A Seminar on Superconductivity. As he notes in his introduction to this formidably dense piece, the text does not present the detail of each and every step in the development and, therefore, we’re not supposed to immediately understand everything. As Feynman puts it: we should just believe (more or less) that things would come out if we would be able to go through each and every step. Well… Let’s see. 
Feynman throws a lot of stuff in here—including, I suspect, some stuff that may not be directly relevant, but that he sort of couldn’t insert into all of his other Lectures. So where do we start? It took me one long maddening day to figure out the first formula:

⟨b|a⟩ in A = ⟨b|a⟩ for A = 0 · e^[(iq/ħ)·∫ A·ds]   (the integral being taken along the path from a to b)

It says that the amplitude for a particle to go from a to b in a vector potential A (think of a classical magnetic field) is the amplitude for the same particle to go from a to b when there is no field (A = 0) multiplied by the exponential of the line integral of the vector potential times the electric charge divided by Planck’s constant. I stared at this for quite a while, but then I recognized the formula for the magnetic effect on an amplitude, which I described in my previous post, which tells us that a magnetic field will shift the phase of the amplitude of a particle with an amount equal to:

φ = (q/ħ)·∫ A·ds

Hence, if we write ⟨b|a⟩ for A = 0 as ⟨b|a⟩ for A = 0 = C·e^(iθ), then ⟨b|a⟩ in A will, naturally, be equal to ⟨b|a⟩ in A = C·e^(i(θ+φ)) = C·e^(iθ)·e^(iφ) = ⟨b|a⟩ for A = 0 · e^(iφ), and so that explains it. 🙂 Alright… Next.

Or… Well… Let us briefly re-examine the concept of the vector potential, because we’ll need it a lot. We introduced it in our post on magnetostatics. Let’s briefly re-cap the development there. In Maxwell’s set of equations, two out of the four equations give us the magnetic field: ∇·B = 0 and c²∇×B = j/ε0. We noted the following in this regard:

1. The ∇·B = 0 equation is true, always, unlike the ∇×E = 0 expression, which is true for electrostatics only (no moving charges). So the ∇·B = 0 equation says the divergence of B is zero, always.

2. The divergence of the curl of a vector field is always zero. Hence, if A is some vector field, then div(curl A) = ∇·(∇×A) = 0, always.

3. We can now apply another theorem: if the divergence of a vector field, say D, is zero—so if ∇·D = 0—then D will be the curl of some other vector field C, so we can write: D = ∇×C. Applying this to ∇·B = 0, we can write:

If ∇·B = 0, then there is an A such that B = ∇×A

So, in essence, we’re just re-defining the magnetic field (B) in terms of some other vector field. To be precise, we write it as the curl of some other vector field, which we refer to as the (magnetic) vector potential. The components of the magnetic field vector can then be re-written as:

Bx = ∂Az/∂y − ∂Ay/∂z, By = ∂Ax/∂z − ∂Az/∂x, Bz = ∂Ay/∂x − ∂Ax/∂y

We need to note an important point here: the equations above suggest that the components of B depend on position only. In other words, we assume static magnetic fields, so they do not change with time. That, in turn, assumes steady currents. We will want to extend the analysis to also include magnetodynamics. It complicates the analysis but… Well… Quantum mechanics is complicated. Let us remind ourselves here of Feynman’s re-formulation of Maxwell’s equations as a set of two equations (expressed in terms of the magnetic (vector) and the electric potential) only:

∇²φ − (1/c²)·∂²φ/∂t² = −ρ/ε0
∇²A − (1/c²)·∂²A/∂t² = −j/(ε0c²)

These hold in the Lorenz gauge and they are wave equations, as you can see by writing out the second equation for one component:

∂²Ax/∂x² + ∂²Ax/∂y² + ∂²Ax/∂z² − (1/c²)·∂²Ax/∂t² = −jx/(ε0c²)

It is a wave equation in three dimensions. Note that, even in regions where we do not have any charges or currents, we have non-zero solutions for φ and A. These non-zero solutions are, effectively, representing the electric and magnetic fields as they travel through free space. As Feynman notes, the advantage of re-writing Maxwell’s equations as we do above, is that the two new equations make it immediately apparent that we’re talking electromagnetic waves, really.
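As a quick sanity check on point 2 above, the identity behind the introduction of the vector potential, here is a small symbolic computation. The component functions A_x, A_y, A_z are arbitrary placeholders of my own, not anything from Feynman’s text.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
# an arbitrary (smooth) vector potential A(x, y, z)
Ax, Ay, Az = [sp.Function(name)(x, y, z) for name in ('A_x', 'A_y', 'A_z')]

def curl(F):
    Fx, Fy, Fz = F
    return (sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y))

def div(F):
    Fx, Fy, Fz = F
    return sp.diff(Fx, x) + sp.diff(Fy, y) + sp.diff(Fz, z)

B = curl((Ax, Ay, Az))
print(sp.simplify(div(B)))   # 0: the divergence of a curl vanishes identically
```

The divergence of the curl comes out identically zero for any smooth A, which is exactly the property the argument relies on when it writes B = ∇×A whenever ∇·B = 0.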
As he notes, for many practical purposes, it will still be convenient to use the original equations in terms of E and B, but… Well… Not in quantum mechanics, it turns out. As Feynman puts it: “E and B are on the other side of the mountain we have climbed. Now we are ready to cross over to the other side of the peak. Things will look different—we are ready for some new and beautiful views.” Well… Maybe. Appreciating those views, as part of our study of quantum mechanics, does take time and effort, unfortunately. 😦

The Schrödinger equation in an electromagnetic field

Feynman then jots down Schrödinger’s equation for the same particle (with charge q) moving in an electromagnetic field that is characterized not only by the (scalar) potential Φ but also by a vector potential A:

i·ħ·∂ψ/∂t = (1/2m)·[(ħ/i)∇ − q·A]²ψ + q·Φ·ψ

Now where does that come from? We know the standard formula in an electric field, right? It’s the formula we used to find the energy states of electrons in a hydrogen atom: i·ħ·∂ψ/∂t = −(1/2)·(ħ²/m)·∇²ψ + V·ψ. Of course, it is easy to see that we replaced V by q·Φ, which makes sense: the potential of a charge in an electric field is the product of the charge (q) and the (electric) potential (Φ), because Φ is, obviously, the potential energy of the unit charge. It’s also easy to see we can re-write −ħ²·∇²ψ as [(ħ/i)·∇]·[(ħ/i)·∇]ψ because (1/i)·(1/i) = 1/i² = 1/(−1) = −1. 🙂 Alright. So it’s just that −q·A term in the (ħ/i)∇ − q·A expression that we need to explain now.

Unfortunately, that explanation is not so easy. Feynman basically re-derives Schrödinger’s equation using his trade-mark historical argument – which did not include any magnetic field – with a vector potential. The re-derivation is rather annoying, and I didn’t have the courage to go through it myself, so you should – just like me – just believe Feynman when he says that, when there’s a vector potential – i.e. when there’s a magnetic field – then that (ħ/i)·∇ operator – which is the momentum operator – ought to be replaced by a new momentum operator:

(ħ/i)∇ − q·A

So… Well… There we are… 🙂 So far, so good? Well… Maybe. While, as mentioned, you won’t be interested in the mathematical argument, it is probably worthwhile to reproduce Feynman’s more intuitive explanation of why the operator above is what it is. In other words, let us try to understand that −qA term. Look at the following situation: we’ve got a solenoid here, and some current I is going through it so there’s a magnetic field B. Think of the dynamics while we turn on this flux. Maxwell’s second equation (∇×E = −∂B/∂t) tells us the line integral of E around a loop will be equal to the time rate of change of the magnetic flux through that loop. The ∇×E = −∂B/∂t equation is a differential equation, of course, so it doesn’t have the integral, but you get the idea—I hope.

Now, using the B = ∇×A equation we can re-write ∇×E = −∂B/∂t as ∇×E = −∂(∇×A)/∂t. This allows us to write the following:

∇×E = −∂(∇×A)/∂t = −∇×(∂A/∂t) ⇔ E = −∂A/∂t

This is a remarkable expression. Note its derivation is based on the commutativity of the curl and time derivative operators, which is a property that can easily be explained: if we have a function in two variables—say x and t—then the order of the derivation doesn’t matter: we can first take the derivative with respect to x and then to t or, alternatively, we can first take the time derivative and then do the ∂/∂x operation. So… Well… The curl is, effectively, a derivative with regard to the spatial variables. OK. So what? What’s the point?
Well… If we’d have some charge q, as shown in the illustration above, that would happen to be there as the flux is being switched on, it will experience a force which is equal to F = qE. We can now integrate this over the time interval (t) during which the flux is being built up to get the following:

∫₀ᵗ F dt = ∫₀ᵗ m·a dt = ∫₀ᵗ m·(dv/dt) dt = m·v(t) = ∫₀ᵗ q·E dt = −∫₀ᵗ q·(∂A/∂t) dt = −q·A(t)

Assuming v(0) and A(0) are zero, we may drop the time argument and simply write: m·v = −q·A. The point is: during the build-up of the magnetic flux, our charge will pick up some (classical) momentum that is equal to p = m·v = −q·A. So… Well… That sort of explains the additional term in our new momentum operator.

Note: For some reason I don’t quite understand, Feynman introduces the weird concept of ‘dynamical momentum’, which he defines as the quantity m·v + q·A, so that quantity must be zero in the analysis above. I quickly googled to see why but didn’t invest too much time in the research here. It’s just… Well… A bit puzzling. I don’t really see the relevance of his point here: I am quite happy to go along with the new operator, as it’s rather obvious that introducing changing magnetic fields must, obviously, also have some impact on our wave equations—in classical as well as in quantum mechanics.

Local conservation of probability

The title of this section in Feynman’s Lecture (yes, still the same Lecture – we’re not switching topics here) is the equation of continuity for probabilities. I find it brilliant, because it confirms my interpretation of the wave function as describing some kind of energy flow. Let me quote Feynman on his endeavor here: This is it, really ! The wave function does represent some kind of energy flow – between a so-called ‘real’ and a so-called ‘imaginary’ space, which are to be defined in terms of directional versus rotational energy, as I try to point out – admittedly: more by appealing to intuition than to mathematical rigor – in that post of mine on the meaning of the wavefunction. So what is the flow – or probability current as Feynman refers to it? Well… Here’s the formula (in the presence of the vector potential):

J = (1/m)·Re[ψ*·((ħ/i)∇ − q·A)·ψ] = (ħ/2mi)·(ψ*∇ψ − ψ∇ψ*) − (q/m)·A·|ψ|²

Huh? Yes. Don’t worry too much about it right now. The essential point is to understand what this current – denoted by J – actually stands for. So what’s next? Well… Nothing. I’ll actually refer you to Feynman now, because I can’t improve on how he explains how pairs of electrons start behaving when temperatures are low enough to render Boltzmann’s Law irrelevant: the kinetic energy that’s associated with temperature can no longer break up electron pairs if temperature comes close to the zero point.

Huh? What? Electron pairs? Electrons are not supposed to form pairs, are they? They carry the same charge and are, therefore, supposed to repel each other. Well… Yes and no. In my post on the electron orbitals in a hydrogen atom – which just presented Feynman’s presentation on the subject-matter in a, hopefully, somewhat more readable format – we calculated electron orbitals neglecting spin. In Feynman’s words: “We make another approximation by forgetting that the electron has spin. […] The non-relativistic Schrödinger equation disregards magnetic effects. [However] Small magnetic effects [do] occur because, from the electron’s point-of-view, the proton is a circulating charge which produces a magnetic field. In this field the electron will have a different energy with its spin up than with it down. [Hence] The energy of the atom will be shifted a little bit from what we will calculate. We will ignore this small energy shift.
Also we will imagine that the electron is just like a gyroscope moving around in space always keeping the same direction of spin. Since we will be considering a free atom in space the total angular momentum will be conserved. In our approximation we will assume that the angular momentum of the electron spin stays constant, so all the rest of the angular momentum of the atom—what is usually called “orbital” angular momentum—will also be conserved. To an excellent approximation the electron moves in the hydrogen atom like a particle without spin—the angular momentum of the motion is a constant.” To an excellent approximation… But… Well… Electrons in a metal do form pairs, because they can give up energy in that way and, hence, they are more stable that way. Feynman does not go into the details here – I guess because that’s way beyond the undergrad level – but refers to the Bardeen-Cooper-Schrieffer (BCS) theory instead – the authors of which got a Nobel Prize in Physics in 1972 (that’s a decade or so after Feynman wrote this particular Lecture), so I must assume the theory is well accepted now. 🙂 Of course, you’ll shout now: Hey! Hydrogen is not a metal! Well… Think again: the latest breakthrough in physics is making hydrogen behave like a metal. 🙂 And I really am talking about the latest breakthrough: Science just published the findings of this experiment last month! 🙂 🙂 In any case, we’re not talking hydrogen here but superconducting materials, to which – as far as we know – the BCS theory does apply. So… Well… I am done. I just wanted to show you why it’s important to work your way through Feynman’s last Lecture because… Well… Quantum mechanics does explain everything – although the nitty-gritty of it (the Meissner effect, the London equation, flux quantization, etc.) are rather hard bullets to bite. 😦 Don’t give up ! I am struggling with the nitty-gritty too ! 🙂
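To close this part with something concrete, here is a toy numerical illustration, entirely my own and not from Feynman’s seminar, of the phase factor we started from: two otherwise identical paths enclosing a magnetic flux pick up a relative phase qΦ/ħ, and the interference pattern follows. The units (ħ = q = 1) and the 1/2 amplitudes are arbitrary choices for the sketch.

```python
import numpy as np

hbar = q = 1.0                                   # toy units
for flux in np.linspace(0.0, 2.0 * np.pi, 5):
    phi = q * flux / hbar                        # relative phase between the two paths
    amplitude = 0.5 + 0.5 * np.exp(1j * phi)     # <b|a>_path1 + <b|a>_path2
    print(f"enclosed flux = {flux:5.2f}  ->  detection probability = {abs(amplitude)**2:.3f}")
```

The detection probability swings between 1 and 0 as the enclosed flux varies, which is the sort of flux dependence that the rest of the superconductivity story builds on.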
Itinerant Ferromagnetism in ultracold Fermi gases

Itinerant ferromagnetism in cold Fermi gases with repulsive interactions is studied applying the Jastrow-Slater approximation generalized to finite polarization and temperature. For two components at zero temperature a second order transition is found at a critical interaction strength compatible with QMC. Thermodynamic functions and observables such as the compressibility and spin susceptibility and the resulting fluctuations in number and spin are calculated. For trapped gases the resulting cloud radii and kinetic energies are calculated and compared to recent experiments. Spin polarized systems are recommended for effective separation of large ferromagnetic domains. Collective modes are predicted and tri-critical points are calculated for multi-component systems.

71.10.Ca, 03.75.Ss, 32.80.Pj

I Introduction

Ultracold Fermi systems with strong attraction between atoms have led to important discoveries such as universal physics and the BCS-BEC crossover. Recently strongly repulsive interactions have been studied and a transition to a ferromagnetic phase was observed in the experiments of Jo et al. (1). Earlier Bourdel et al. (2) and Gupta et al. (3) also observed a transition when the interactions became strongly repulsive near Feshbach resonances. A phase transition from a paramagnetic (PM) to ferromagnetic (FM) phase was predicted long ago by Stoner (4) based on the Hartree-Fock mean field energy and has recently been confirmed by more elaborate calculations including fluctuations (5); (6) and by QMC (7); (8). The calculated transition points and order of the transition also differ from experiment (1). The FM transition is disputed by Zhai (9) who claims that the experimental data is compatible with strongly correlated repulsive Fermi systems which would explain the inability to observe FM domains in Ref. (1). It is the purpose of this work to clarify the phase diagram of strongly repulsive Fermi atomic systems as well as to calculate thermodynamic functions and measurable observables in atomic traps that can clearly distinguish the FM and PM phases and determine the order of the transition and the universal functions. By extending the Jastrow-Slater model (10); (11); (12) to finite polarization and temperature, we calculate the free energy and find a second order FM transition in a repulsive Fermi gas. A number of thermodynamic functions, such as the spin susceptibility and compressibility, and observables, such as radii and kinetic energies, can be compared to experiments, and others, such as fluctuations, collective oscillations and phase separation, can be predicted. As a start the dilute limit model of Stoner is extended to finite temperature and the polarization and order of the transition is determined and compared to second order calculations. Subsequently, the Jastrow-Slater approximation is described for the correlated many-body wave function in the strongly interacting limit and extended to finite polarization and temperature. Detailed calculations of the free energy and a number of thermodynamic functions are given. In particular the spin-susceptibility and compressibility are used for calculating fluctuations in spin and total particle number in section III. In section IV finite traps are considered and the cloud radii and kinetic energies are calculated and compared to recent experiments (1). Collective modes are discussed in section V.
Multi-components systems are discussed in section VI and a new string of critical points is found and plotted in a multi-component phase diagram. Finally, a summary and outlook is given. Ii Ferromagnetic transition The models for repulsive ultracold Fermi gases in Refs. (4); (5); (6); (7); (8) all predict a phase transition somewhere near the unitarity limit but the phase diagrams disagree quantitatively as well as qualitatively concerning the order and critical points. For a reference model we start with a simple finite temperature extension of the Hartree-Fock approximation originally studied by Stoner (4), which is a dilute limit expansion to first order in the scattering length. Subsequently, we calculate the phase diagram in the JS approximation and compare to those in the dilute limit to first (4) and second (5); (6) order as well as QMC (7); (8). ii.1 Dilute approximations Figure 1: Phase diagram for a two-component Fermi gas with repulsive interactions. Full (dashed) curves indicate first (second) order PM to FM transitions within JS and dilute approximations with (6) and without (4) fluctuations. The circle indicates a tri-critical point. Triangles show the QMC transition points at zero temperature of Refs. (7); (8). A dilute () degenerate Fermi gas with atoms in spin states with densities and Fermi energy has the free energy It consists of the kinetic energy, the interaction energy to lowest order in the scattering length as in the Stoner model (4), and the thermal energy at low temperatures . In the dilute limit the effective mass in two-component symmetric systems only deviates from the bare mass to second order in the interaction parameter . The density of the components are equal only in the PM phase and when the components are balanced initially. In the following we define an average Fermi wave number from the total density . We postpone multicomponent systems to sec. VI and concentrate first on two spin states, e.g. with total density . The population of spin states are allowed to change (polarize) in order to observe phase transitions to itinerant ferromagnetism. The polarization (or magnetization) of the ground state phase is found by minimizing the free energy at zero magnetic field. The free energy of a low temperature ideal gas is . Expanding Eq. (1) for small polarization leads to a Ginzburg-Landau type equation for the free energy to leading orders in interaction, polarization and temperature. Here is the isothermal spin-susceptibility given by where is the spin-susceptibility for an ideal gas at zero temperature. becomes singular when where the free energy of Eq. (2) predicts a second order phase transition from a PM to a FM (see Fig. 1) in accordance with the zero temperature result of Stoner (4). The polarization is at zero temperature but quickly leads to a locally fully polarized system due to the small fourth order coefficient in Eq. (2). However, the predicted transition occurs close to the unitarity limit where the dilute equation of state Eq. (1) is not valid. Higher orders are important as exemplified by including fluctuations, i.e. the next order correction. As found in Refs. (5); (6) fluctuations change the transition from second to first order at low temperatures up to a tri-critical point at temperature , where the transition becomes second order again (see Fig. 1). However, the 2nd order expansion is not valid either in the unitarity limit. 
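Since the explicit dilute-limit expressions are not reproduced above, a small numerical sketch may help. It uses the textbook Stoner form of the zero-temperature energy per particle (in units of the Fermi energy) for a two-component gas, which is my own assumption about what the dilute energy functional reduces to rather than a formula copied from this paper, and minimizes it over the polarization P.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def energy(P, kFa):
    """Dilute (Stoner) energy per particle in units of E_F at T = 0."""
    kinetic = 0.3 * ((1 + P) ** (5 / 3) + (1 - P) ** (5 / 3))
    interaction = (2 * kFa / (3 * np.pi)) * (1 - P ** 2)
    return kinetic + interaction

for kFa in (1.4, 1.5, 1.6, 1.7, 1.8):
    res = minimize_scalar(energy, bounds=(0, 1), method='bounded', args=(kFa,))
    print(f"kF*a = {kFa:.2f}  ->  equilibrium polarization P = {res.x:.3f}")
```

Below kF·a ≈ π/2 the minimum stays at P = 0; above it the equilibrium polarization grows continuously from zero, which is the second-order Stoner transition referred to in the text.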
ii.2 Jastrow-Slater approximation The JS approximation applies to both strongly attractive and repulsive crossovers where it already has proven to be quite accurate for predicting universal functions and parameters. The JS approximation is the lowest order in a constrained variational (LOCV) approach to calculate the ground state energies of strongly correlated systems. It was developed for strongly interacting and correlated Bose and Fermi fluids respectively such as He, He and nuclear matter (10). JS was among the earliest models applied to the unitarity limit and crossover of ultracold Fermi (11) and Bose (12) atomic gases. As explained in (10); (11); (12) the JS wave function incorporates essential two-body correlations in the Jastrow function . The antisymmetric Slater wave function for free fermions insures that same spins are spatially anti-symmetric. The Jastrow wave function only applies to particles with different spins (indicated by the primes). The pair correlation function can be determined variationally by minimizing the expectation value of the energy, , which may be calculated by Monte Carlo methods (13); (8). At distances shorter than the interparticle spacing two-body clusters dominate and the Jastrow wave function obeys the Schrödinger equation for a pair of particles of different spins interacting through a potential where the eigenvalue is the interaction energy of one atom . Most importantly, the boundary condition at short distances () is given by the scattering length Many-body effects become important when is comparable to the interparticle distance , but are found to be small (10); (11); (12). Here the boundary conditions that is constant and are imposed at the healing distance , which is determined self consistently from number conservation The prefactor takes into account that a given spin only interacts and correlates with unlike spins . In the dilute limit and so . In the unitary limit the healing length approaches in stead. Generally the healing length is of order the Fermi wavelength of the other component, . For a positive scattering length the interaction energy is positive and the solution to Eq. (6) is with . Defining the boundary conditions and number conservation requires (12) The resulting interaction energy reproduces the correct dilute limit result of Eq. (1). In the unitarity limit , the positive energy solution reduces to with multiple solutions , , etc., and asymptotically for integer . In addition there is one negative energy solution for with which corresponds to the BCS-BEC crossover when . Generally, is the number of nodes in the Jastrow wave function and each determines a new universal limit with universal parameters depending on the number of nodes. The phase in the wave function is whenever the unitarity limit of nodes is encountered. Figure 2: Universal functions calculated within JS at zero temperature vs. repulsive interaction: the ratio of interaction and kinetic energy , the pressure and chemical potential and the spin susceptibility , all with respect to their non-interactive values. Full curves include the FM transition whereas the dashed have FM suppressed, i.e. remain in the PM phase. It should be emphasised that for positive scattering lengths the wave function and thus the correlations function between fermions of unlike spin and bosons has a node at which is somewhere within the interparticle distance. It does not vanish as as does the wave function for a short range repulsive potential as in hard sphere scattering where . 
Therefore the Gutzwiller approximation may well apply for hard sphere gases, strongly correlated nuclear fluids and liquid helium as discussed in (9) but it does not apply to the repulsive unitarity limit of ultracold gases when the wave function has to obey the short range boundary condition of Eq. (7). It is customary to define the universal function as the ratio of the interaction and kinetic energy . In the JS model the interaction energy per particle is and thus in the PM phase. In the FM phase the spin densities differ and the ratio of the average interaction to kinetic energy can be considerably lower than as shown in Fig. (2). Because Eq. (9) has a string of solutions for a given scattering length or , and are multivalued functions which we distinguish by the index referring to the number of nodes in the many-body wave function between any two atoms (11). has been studied extensively in the BCS-BEC crossover and in the repulsive crossover (2); (3); (1). In the repulsive unitarity limit the universal parameter is . It has recently been measured for a Li gas in two spin states (1). The chemical potential in the optical trap almost doubles going from the non-interacting to the unitarity limit. Since it scales as we obtain compatible with JS. In the following we concentrate on repulsive interactions and use . Figure 3: Polarization vs. at from left to right. The second order transition yields a steep but continuous transitions . The diamond indicates the transition point to a pure one-component () FM at zero temperature. The interaction energy for an atom with spin depends on the density of unlike spins and is given by , where is the universal function for repulsive interactions. We obtain the total energy density at zero temperature by adding the Fermi kinetic energy and the interaction energy , and sum over particle densities (10); (11) including a thermal free energy as given above. This expression generalizes the standard expression for the energy density to finite polarization and temperature. The result can be understood from dimensional arguments as is dimensionless and gives the repulsive energy of particles of spin due interactions with particles of opposite spin. Note that the interaction energy and its dependence on polarization is given in terms of one universal function of one variable only. As shown in Fig. 2 the ratio of the interaction to kinetic energy is reduced by the FM transition w.r.t . Expanding Eq. (10) for small polarization gives where the isothermal spin susceptibility is with and . In the dilute limit and Eqs. (11) and (12) reduce to Eqs. (2) and (3) respectively. The spin-susceptibility calculated within JS is shown in Fig. 2 at zero temperature. It diverges at where the universal function is . By equating the energy of the unpolarized gas, with that of a fully polarized (one-component) gas, , we find that a first order transition requires , and therefore JS predicts a second order FM transition as shown in Fig. 1. This transition point is in remarkable agreement with two recent QMC calculations which find (7) and (8). The QMC calculations could not determine the order of the transition within numerical accuracy. In the BCS-BEC crossover a minor discrepancy was found between JS (11) and QMC (14); (13) which partly could be attributed to pairing which is excluded in the JS wave function. Since pairing is absent for repulsive interactions the JS model is expected to match the QMC calculations better near the FM transition. 
Note that the JS wave function is also used as a starting point in the QMC calculations of Refs. (8); (14); (13). Minimizing the free energy of Eq. (11) we obtain the polarization at the onset of FM as shown in Fig. 3 at low temperatures. Full polarization is reached at at zero temperature only. The spin-susceptibility is related to the spin-antisymmetric Landau parameter as . The effective mass is implicitly assumed in the JS energy of Eq. (10). It has recently been measured in the BCS-BEC unitarity limit (15) but not for the repulsive crossover yet. Since at the FM transition point is comparable to , these two effective masses may be expected to be similar. The small deviation from only changes the universal functions and the phase diagram slightly at higher temperature. The order and the position of the transition is unchanged at zero temperature. Figure 4: Compressibility and polytropic index vs. repulsive scattering length () at zero temperature. Both are second derivates of the free energy and are therefore discontinuous at the FM transition. Iii Number fluctuations The density or local number fluctuations have recently been measured in shot noise experiments for an ideal ultracold Fermi gas and by speckle noise in the BCS-BEC crossover (16); (17). The number fluctuations are measured in a small subvolume of the atomic cloud with almost uniform density. The fluctuations in spin and total number of atoms are directly related to the spin susceptibility and compressibility respectively. The local fluctuations in total number can for a large number of atoms be related to the isothermal compressibility , by the fluctuation-dissipation theorem Here, is the compressibility for an ideal Fermi gas at zero temperatures. An ideal classical gas has such that the number fluctuations are Poisson: . The compressibility is related to the symmetric Landau parameter as . The compressibility can generally be expressed in terms of the universal function (18) at zero temperature in the PM phase. Once the FM transition sets in, the ground state energy of Eq. (10) is lowered due to finite polarization and the inverse compressibility drops as shown in Fig. 4 for JS. It is discontinuous when the second order FM transition sets in because it is a second derivative of the free energy which is softened by the spin-susceptibility term in Eq. (11). In the pure one-component FM phase the compressibility is that of an ideal one-component gas . The peculiar and discontinuous behaviour of the compressibility at the FM transition is directly reflected in the fluctuations in total number according to Eq. (13). If the FM transition was first order the compressibility diverges at the phase transition, i.e., vanishes in part of the density region where (see Figs. 3+4). Consequently, the number fluctuation also diverges according to Eq. (13) reflecting the density discontinuity at a first order transition. The fluctuation-dissipation theorem also relates the thermal spin fluctuations to the spin susceptibility At the FM instability the spin-susceptibility and therefore also the spin fluctuations diverge reflecting that phase separation occurs between domains of polarization . Such domains were, however, not observed in the experiments of (1) within the spatial resolution of the experiment. Iv Trap radii and kinetic energies In experiments the atoms are confined in harmonic traps. 
For a sufficiently large number of particles confined in a (shallow) trap the system size is so long that density variations and the extent of possible phase transition interfaces can be ignored and one can apply the local density approximation. The total chemical potential is given by the sum of the harmonic trap potential and the local chemical potential which must be constant over the lattice for all components . It can therefore be set to its value at its edge , which gives the r.h.s. in Eq. (16). The equation of state determines the chemical potentials in terms of the universal function of Eq. (10). In a two-component spin-balanced system the chemical potential and radii of the two components are equal (denoted and in the following). In the FM phase their densities differ but these FM spin domains coexist. Using the JS EoS of Eq. (10) to calculate the chemical potential we can find the density distribution from chemical equilibrium Eq. (16) including phase transitions and calculate cloud radii , the root mean square and kinetic energy averaged over all particles in the trap. These are shown in Fig. 5 normalized to their values trapped non-interacting ultracold atoms, , and respectively. Here, and are the Fermi energy and wave number in the centre of the trap for non-interacting atoms and is the oscillator length. Repulsive interactions reduce the central density and Fermi energy as can be seen from shown in Fig. 5. As a consequence the radii increase except for the RMS radius above the FM transition (see Fig. 5). It decreases because atoms are redistributed from the PM phase near the surface to the FM phase in the centre. The kinetic energy of the atoms has the opposite behaviour because repulsion increases the interaction energy in the PM phase at the cost of the kinetic energy. In the recent experiment of Ref. (1) a transition is observed around at temperatures (and at ). This transition point is a factor of larger than the FM transition point calculated in all models (4); (5); (6); (7); (8) as well as JS. Rescaling by a factor  2 we find very good quantitative and qualitative agreement with the data of (1) as was found in the second order calculation of Ref. (6)). The distinct transitions in the radii, kinetic energies and atomic losses are well reproduced. Figure 5: Radius of the trapped cloud, RMS radius, kinetic energy and central Fermi wavenumber all at zero temperature and relative to their non-interaction values vs. repulsive scattering length (). Dashed curved shows the RMS radius for a PM phase where the FM transition is inhibited. Due to repulsion the central density and thus is lower. However, the FM domains were not observed in Ref. (1) within the spatial resolution of the experiment. Direct observation should be possible in unbalanced spin systems where macroscopic FM domain sizes can be realized. As the repulsion is increased the RMS radii of the minority spins increases faster than that of the majority spins. When the FM transition occurs the system favours a core with predominantly majority spins surrounded by a mantle of both spins in a PM phase. With increasing spin imbalance the majority spin purity of the FM core increases, i.e. the domains are effectively separated on a large scale. The amount of separation and change in radii will depend on the overall spin imbalance. Three component systems with more than one Feshbach resonance as in Li are also more complicated. 
For example, when the Feshbach magnetic field is such that two resonances and are large but small, the atoms will separate between a FM phase of 1 and a mixed FM phase of 2+3 with different densities. V Collective modes Collective modes have been studied intensively in the BCS-BEC crossover where they reveal important information of the equation of state (EoS) and determine . When the EoS can be approximated by a simple polytrope the collective eigen-frequencies can be calculated analytically (21); (18) in terms of the polytropic index . Even when the EoS is not a perfect polytrope the collective modes in the BCS-BEC crossover could be described well using the effective polytropic index at densities near the centre of the trap given by the logarithmic derivative (18) We therefore calculate within JS for repulsive interactions as shown in Fig. 4. Like the compressibility it has a discontinuity at the FM phase transition because it is a second derivative of the free energy with a second order transition. In both the dilute limit and pure FM phase the gas is ideal with polytropic index . For a very elongated or cigar-shaped trap (prolate in nuclear terminology), , used in most experiments (19); (20), the collective breathing modes separate into a low frequency axial mode with oscillation frequency and a radial mode with (21). The spin dipole mode is more complicated because it is sensitive to the spin susceptibility which diverges at the FM transition point. The EoS is far from polytropic and the delicate calculation of spin dipole modes with diverging spin susceptibility is beyond the scope of this work. The spin dipole mode is estimated within a sum rule approach in Ref. (22). Vi Multicomponent systems Interesting information on the order of the FM transition can be obtained by generalizing the above results to Fermi gases with more that two spin states such as Li with hyperfine states (23), Yb with six nuclear spin states (24), and heteronuclear mixtures of K and Li (25). The interactions and phases can be very complicated when the Feshbach resonances between various components differ as for Li. In the following we restrict ourselves to multi-components with the same relative scattering length . In the dilute case the condition for a first order phase transition in a component system can be found from the energy density of Eq. (1). The preferred transition is directly from to a domains of one-components system which occurs when At zero temperature this condition is , etc. for respectively. The condition for a second order transition is found by expanding the dilute multi-component free energy for small polarization. One finds the same spin susceptibility as in the two-component case, Eq. (4), and therefore the putative the second order transition remains at . Comparing numbers we conclude that at zero temperature the second order transition occurs for only in the dilute case whereas for the FM transition is first order and given by Eq. (18). At finite temperatures the second order transition of Eq. (3) match the first order of Eq. (18) at a temperature which determines the tri-critical point in the phase diagram for as shown in Fig. 6. The free energy of the JS model, Eq. (10), also applies to multi-component systems. The condition for a first order FM transition to coexisting fully polarized (one-component) FM domains is At zero temperature the FM transition occurs for at for respectively. As in the dilute case the spin-susceptibility is unchanged, Eq. 
(12), in the JS model and the putative second order transition remains when at . Thus the FM transition at is at zero temperature marginally second order for but first order for . Again the tri-critical points are determined by the matching condition for the first Eq. (19) and second Eq. (12) order transitions and are shown in Fig. 6. Generally the difference between first and second order FM transition is small which may explain why QMC could not determine the order within numerical accuracy (7); (8). First order transitions to partially polarized FM does not occur for two-component systems but may be possible in multi-component systems. The marginal first vs. second order FM transition for is analogous to the marginal stability in the unitary limit of the BCS-BEC crossover (11). Here it is known that two-component systems are stable but four-component systems are unstable as in nuclear matter. Figure 6: Phase diagrams for multi-component ( from left to right) Fermi gases with repulsive interactions. Full (dashed) curves indicate first (second) order PM to FM transitions within JS and dilute approximations. Circles indicate the tri-critical points where the transition changes from first to second order at higher temperatures. Vii Summary and outlook By extending the Jastrow-Slater approximation to finite polarization and temperature we have calculated a number of thermodynamic functions and observables for cold Fermi atoms with repulsive interactions. In particular we found a second order FM phase transition at at zero temperature in close agreement with QMC. The compressibility and spin susceptibility were calculated and the resulting observables like the fluctuations in total number and spin as well as collective modes are discontinuous at the transition point. These can be distinguished from a first order transition where, e.g., the compressibility diverges. For trapped gases the radii and kinetic energies also have characteristic behaviour as function of repulsive interaction strength when the FM transition occurs in the centre. If the interaction strength is reduced by a factor the radii and kinetic energies of JS and Ref. (6) agree qualitatively and quantitatively with experiments (1). In order to observe the FM domains we suggest to start out with a spin-imbalanced system of two-component Fermi atoms and tune the magnetic field towards the Feshbach resonance from the repulse side where the FM transition sets in. As result the core will be a large domain of the majority spin only which exceeds the experimental domain size resolution. It would be interesting to study multi-component systems such as the three component Li system near Feshbach resonances where bulk separation between the spin component domains is predicted to take place. Multi-component systems with the same interactions (and scattering lengths) between states display interesting phase diagrams with first to second order tri-critical points when the number of components exceeds two in the dilute case and three in the JS model. 1. G-B. Jo et al., Science 325, 1521-1524 (2009); see also comment ArXiv:0910.3419. 2. T. Bourdel et al., Phys. Rev. Lett. 91, 020402 (2003); cond-mat/0403091; J. Cubizolles et al., cond-mat/0308018 3. S. Gupta et al., Science 300, 47 (2003). 4. E. Stoner, Phil. Mag. 15, 1018 (1933). 5. R. A. Duine & A. H. McDonald, Phys. Rev. Lett. 95, 230403 (2005). 6. G. J. Conduit, B. D. Simons, Phys. Rev. Lett. 103, 200403 (2009). 7. S. Pilati, G. Bertaina, S. Giorgini, M. Troyer, arXiv/1004.1169 8. S.-Y. 
Chang, M. Randeria, N. Trivedi,
9. H. Zhai, Phys. Rev. A 80, 051605(R) (2009).
10. V. R. Pandharipande, Nucl. Phys. A 174, 641 (1971) and 178, 123 (1971); V. R. Pandharipande and H. A. Bethe, Phys. Rev. C 7, 1312 (1973).
11. H. Heiselberg, Phys. Rev. A 63, 043606 (2001); J. Phys. B: At. Mol. Opt. Phys. 37, 1 (2004).
12. S. Cowell, H. Heiselberg, I. E. Mazets, J. Morales, V. R. Pandharipande, and C. J. Pethick, Phys. Rev. Lett. 88, 210403 (2002); J. Carlson, H. Heiselberg, V. R. Pandharipande, Phys. Rev. C 63, 017603 (2001).
13. G. E. Astrakharchik, J. Boronat, J. Casulleras, S. Giorgini, Phys. Rev. Lett. 93, 200 (2004).
14. J. Carlson, S-Y. Chang, V. R. Pandharipande, K. E. Schmidt, Phys. Rev. Lett. 91, 50401 (2003); S-Y. Chang et al., Phys. Rev. A 70, 043602 (2004); Nucl. Phys. A 746, 215 (2004).
15. S. Nascimbene, N. Navon, K. J. Jiang, F. Cevy, C. Salomon, Nature 463, 1057 (2010).
16. C. Sanner, E. J. Su, A. Keshet, R. Gommers, Y. Shin, W. Huang, W. Ketterle, Phys. Rev. Lett. 105, 040402 (2010); arXiv:1010.1874.
17. T. Müller, B. Zimmermann, J. Meineke, J. Brantut, T. Esslinger, H. Moritz, arXiv:1005.0302.
18. H. Heiselberg, Phys. Rev. Lett. 93, 040402 (2004).
19. K. M. O’Hara, S. L. Hemmer, M. E. Gehm, S. R. Granade, J. E. Thomas, Science 298, 2179 (2002); L. Luo, B. Clancy, J. Joseph, J. Kinast, J. E. Thomas, Phys. Rev. Lett. 98, 080402 (2007).
20. M. Bartenstein, A. Altmeyer, S. Riedl, S. Jochim, C. Chin, J. Hecker Denschlag, R. Grimm, Phys. Rev. Lett. 92, 203201 (2004).
21. M. Cozzini, S. Stringari, Phys. Rev. A 67, 041602(R) (2003).
22. A. Recati and S. Stringari, arXiv:1007.4504.
23. S. Jochim et al., Phys. Rev. Lett. 91, 240402 (2003); M. Bartenstein et al., ibid. 92, 120401 (2004); T. B. Ottenstein, T. Lompe, M. Kohnen, A. N. Wenz, S. Jochim, Phys. Rev. Lett. 101, 203202 (2008).
24. M. Kitagawa et al., Phys. Rev. A 77, 012719 (2008).
25. F. M. Spiegelhalder et al., arXiv:0908.1101; E. Wille et al., Phys. Rev. Lett. 100, 053201 (2008).
Working with Three-Dimensional Harmonic Oscillators

By Steven Holzner

In quantum physics, when you are working in one dimension, the general particle harmonic oscillator looks like the figure shown here, where the particle is under the influence of a restoring force — in this example, illustrated as a spring.

A harmonic oscillator.

The restoring force has the form Fx = –kx·x in one dimension, where kx is the constant of proportionality between the force on the particle and the location of the particle. The potential energy of the particle as a function of location x is

V(x) = (1/2)·kx·x²

This is also sometimes written as

V(x) = (1/2)·m·ωx²·x²   (where ωx² = kx/m)

Now take a look at the harmonic oscillator in three dimensions. In three dimensions, the potential looks like this:

V(x, y, z) = (1/2)·m·(ωx²·x² + ωy²·y² + ωz²·z²)

Now that you have a form for the potential, you can start talking in terms of Schrödinger’s equation:

[−(ħ²/2m)·(∂²/∂x² + ∂²/∂y² + ∂²/∂z²) + V(x, y, z)]·ψ(x, y, z) = E·ψ(x, y, z)

Substituting in for the three-dimensional potential, V(x, y, z), gives you this equation:

[−(ħ²/2m)·(∂²/∂x² + ∂²/∂y² + ∂²/∂z²) + (1/2)·m·(ωx²·x² + ωy²·y² + ωz²·z²)]·ψ(x, y, z) = E·ψ(x, y, z)

Take this dimension by dimension. Because you can separate the potential into three dimensions, you can write

ψ(x, y, z) = X(x)·Y(y)·Z(z)

Therefore, the Schrödinger equation looks like this for x:

−(ħ²/2m)·(d²X/dx²) + (1/2)·m·ωx²·x²·X(x) = Ex·X(x)

Solving that equation, you get this next solution:

Xnx(x) = (m·ωx/πħ)^(1/4) · (2^nx·nx!)^(−1/2) · Hnx(√(m·ωx/ħ)·x) · e^(−m·ωx·x²/2ħ)

and nx = 0, 1, 2, and so on. The Hnx term indicates a Hermite polynomial, which looks like this:
• H0(x) = 1
• H1(x) = 2x
• H2(x) = 4x² – 2
• H3(x) = 8x³ – 12x
• H4(x) = 16x⁴ – 48x² + 12
• H5(x) = 32x⁵ – 160x³ + 120x

Therefore, you can write the wave function like this:

ψ(x, y, z) = Xnx(x)·Yny(y)·Znz(z)

That’s a relatively easy form for a wave function, and it’s all made possible by the fact that you can separate the potential into three dimensions.

What about the energy of the harmonic oscillator? The energy of a one-dimensional harmonic oscillator is

E = (nx + 1/2)·ħωx

And by analogy, the energy of a three-dimensional harmonic oscillator is given by

E = (nx + 1/2)·ħωx + (ny + 1/2)·ħωy + (nz + 1/2)·ħωz

Note that if you have an isotropic harmonic oscillator, where ωx = ωy = ωz = ω, the energy looks like this:

E = (nx + ny + nz + 3/2)·ħω

As for the cubic potential, the energy of a 3D isotropic harmonic oscillator is degenerate. For example, E112 = E121 = E211. In fact, it’s possible to have more than threefold degeneracy for a 3D isotropic harmonic oscillator — for example, E200 = E020 = E002 = E110 = E101 = E011. In general, the degeneracy of a 3D isotropic harmonic oscillator is

(n + 1)(n + 2)/2

where n = nx + ny + nz.
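As a quick check of the counting behind that degeneracy formula, here is a short brute-force enumeration of the quantum-number triples (the cutoff n ≤ 5 is an arbitrary choice for the sketch):

```python
from itertools import product
from collections import Counter

# count how many (nx, ny, nz) triples share the same n = nx + ny + nz (isotropic case)
levels = Counter(sum(triple) for triple in product(range(9), repeat=3) if sum(triple) <= 8)
for n in range(6):
    print(n, levels[n], (n + 1) * (n + 2) // 2)   # brute-force count vs (n+1)(n+2)/2
```

Both columns agree: the number of (nx, ny, nz) triples with a given sum n is exactly (n + 1)(n + 2)/2.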
Inspired by this question: Are these two quantum systems distinguishable? and discussion therein.

Given an ensemble of states, the randomness of a measurement outcome can be due to classical reasons (classical probability distribution of states in ensemble) and quantum reasons (an individual state can have a superposition of states). Because a classical system cannot be in a superposition of states, and in principle the state can be directly measured, the probability distribution is directly measurable. So any differing probability distributions are distinguishable. However in quantum mechanics, an infinite number of different ensembles can have the same density matrix. What assumptions are necessary to show that if two ensembles initially have the same density matrix, there is no way to apply the same procedure to both ensembles and achieve different density matrices? (i.e. that the 'redundant' information regarding what part of Hilbert space is represented in the ensemble is never retrievable even in principle)

To relate to the referenced question, for example if we could generate an interaction that evolved:

1) an ensemble of states $|0\rangle + e^{i\theta}|1\rangle$ with a uniform distribution in $\theta$

into

2) an ensemble of states $|0\rangle + e^{i\phi}|1\rangle$ with a non-uniform distribution in $\phi$

such a mapping of vectors in Hilbert space can be 1-to-1. But it doesn't appear it can be done with a linear operator. So it hints that we can probably prove an answer to the question using only the assumption that states are vectors in a Hilbert space, and the evolution is a linear operator.

Can someone list a simple proof showing that two ensembles with initially the same density matrix can never evolve to two different density matrices? Please be explicit with what assumptions you make.

Update: I guess to prove they are indistinguishable, we'd also need to show that non-unitary evolution like the projection from a measurement can't eventually allow one to distinguish the underlying ensemble either. Such as perhaps using correlation between multiple measurements, or possibly instead of asking something with only two answers, asking something with more than two so that finally the distribution of answers needs more than just the expectation value to characterize the results.

Hah! I addressed your update in my answer before I even saw it. – Keenan Pepper Apr 6 '11 at 1:05

You only need to assume

1. the Schrödinger equation (yes, the same old linear Schrödinger equation, so the proof doesn't work for weird nonlinear quantum-mechanics-like theories)

2. the standard assumptions about projective measurements (i.e. the Born rule and the assumption that after you measure a system it gets projected into the eigenspace corresponding to the eigenvalue you measured)

Then it's easy to show that the evolution of a quantum system depends only on its density matrix, so "different" ensembles with the same density matrix are not actually distinguishable. First, you can derive from the Schrödinger equation a time evolution equation for the density matrix. This shows that if two ensembles have the same density matrix and they're just evolving unitarily, not being measured, then they will continue to have the same density matrix at all future times.
The equation is $$\frac{d\rho}{dt} = \frac{1}{i\hbar} \left[ H, \rho \right]$$ Second, when you perform a measurement on an ensemble, the probability distribution of the measurement results depends only on the density matrix, and the density matrix after the measurement (of the whole ensemble, or of any sub-ensemble for which the measurement result was some specific value) only depends on the density matrix before the measurement. Specifically, consider a general observable (assumed to have discrete spectrum for simplicity) represented by a hermitian operator $A$. Let the diagonalization of $A$ be $$A = \sum_i a_i P_i$$ where $P_i$ is the projection operator into the eigenspace corresponding to eigenvalue (measurement outcome) $a_i$. Then the probability that the measurement outcome is $a_i$ is $$p(a_i) = \operatorname{Tr}(\rho P_i)$$ This gives the complete probability distribution of $A$. The density matrix of the full ensemble after the measurement is $$\rho' = \sum_i P_i \rho P_i$$ and the density matrix of the sub-ensemble for which the measurement value turned out to be $a_i$ is $$\rho'_i = \frac{P_i \rho P_i}{\operatorname{Tr}(\rho P_i)}$$ Since none of these equations depend on any property of the ensemble other than its density matrix (e.g. the pure states and probabilities of which the mixed state is "composed"), the density matrix is a full and complete description of the quantum state of the ensemble.

Oh, and for the case of an observable $A$ with a continuous spectrum, it works basically the same way. For mathematicians it might get more hairy, but as a physicist I have no problem just saying "replace all the summation signs with integrals". – Keenan Pepper Apr 6 '11 at 0:59

You don't even need to assume the Schrödinger equation, but only the fact that the evolution of a quantum state is unitary. – Frédéric Grosshans Apr 10 '11 at 19:14

Density matrices are an alternative description of quantum mechanics. Consequently, if two ensembles have the same density matrix, they are not distinguishable. For example, consider the unpolarized spin-1/2 density matrix which can be modeled as a system that is half pure states in the +x direction and half in the -x direction, or alternatively, as half pure states in the +z direction (i.e. spin up) and half in the -z direction (i.e. spin down): $$\begin{pmatrix}0.5&0\\0&0.5\end{pmatrix} = 0.5\rho_{+x}+0.5\rho_{-x} = 0.5\rho_{+z}+0.5\rho_{-z}$$ Now compute the average value of an operator $H$ with respect to these ensembles. Let $$H = \begin{pmatrix}h_{11}&h_{12}\\h_{21}&h_{22}\end{pmatrix}$$ then the averages for the four states involved are: $$\begin{array}{rcl} \langle H\rangle_{+x} &=& 0.5(h_{11}+h_{12}+h_{21}+h_{22})\\ \langle H\rangle_{-x} &=& 0.5(h_{11}-h_{12}-h_{21}+h_{22})\\ \langle H\rangle_{+z} &=& h_{11}\\ \langle H\rangle_{-z} &=& h_{22} \end{array}$$ From the above, it's clear that taking the average over $\pm x$ will give the same result as taking the average over $\pm z$, that is, in both cases the ensemble will give an average of $$\langle H\rangle = 0.5(h_{11}+h_{22})$$ Any preparation of the system amounts to an operator acting on the states and so $H$ can stand for a general operation. Therefore there is no way of distinguishing an unpolarized mixture of $\pm x$ from an unpolarized mixture of $\pm z$. The argument for general density matrices is similar, but I think this gets the point across.
Are you saying instead of representing a state as a vector in Hilbert space, it is sufficient to represent a state as a density matrix? It seems like this view would change the counting of physical states and would have an effect in statistical mechanics or thermodynamics of a system. It almost seems like you would be reducing the entropy by mixing two ensembles. – Ginsberg Apr 6 '11 at 0:32

Either way, the whole point of the question was to see a concrete mathematical proof. Instead of just saying it is so, can you please show how it is so, such that I can learn more? – Ginsberg Apr 6 '11 at 0:34

@Ginsberg: Yes, a density matrix is equivalent to a collection of pure states (presumably represented by state vectors) along with a probability density for the pure states. I've not found the reference I was looking for so I'll type up an outline of a proof and edit it in. – Carl Brannen Apr 6 '11 at 0:45
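To make the two answers above concrete, here is a small numerical check of my own: the 50/50 mixture of ±x spin states and the 50/50 mixture of ±z spin states give the same density matrix, identical statistics for an arbitrary observable, and remain identical under a common unitary evolution.

```python
import numpy as np

up_z, dn_z = np.array([1, 0], complex), np.array([0, 1], complex)
up_x, dn_x = (up_z + dn_z) / np.sqrt(2), (up_z - dn_z) / np.sqrt(2)

def rho(states, probs):
    """Density matrix of a classical mixture of pure states."""
    return sum(p * np.outer(s, s.conj()) for s, p in zip(states, probs))

rho_x = rho([up_x, dn_x], [0.5, 0.5])
rho_z = rho([up_z, dn_z], [0.5, 0.5])
print(np.allclose(rho_x, rho_z))                       # True: same density matrix

# an arbitrary hermitian observable gives identical statistics for both ensembles
H = np.array([[0.3, 0.1 - 0.2j], [0.1 + 0.2j, -0.4]])
print(np.trace(rho_x @ H).real, np.trace(rho_z @ H).real)

# a common unitary evolution keeps the two (identical) density matrices identical
U = np.array([[np.cos(0.7), -np.sin(0.7)], [np.sin(0.7), np.cos(0.7)]], complex)
print(np.allclose(U @ rho_x @ U.conj().T, U @ rho_z @ U.conj().T))   # True
```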
— 1. Dan Spielman — Dan Spielman works in numerical analysis (and in particular, numerical linear algebra) and theoretical computer science. Here I want to talk about one of his key contributions, namely his pioneering work with Teng on smoothed analysis. This is about an idea as much as it is about a collection of rigorous results, though Spielman and Teng certainly did buttress their ideas with serious new theorems. Prior to this work, there were two basic ways that one analysed the performance (which could mean run-time, accuracy, or some other desirable quality) of a given algorithm. Firstly, one could perform a worst-case analysis, in which one assumed that the input was chosen in such an “adversarial” fashion that the performance was as poor as possible. Such an analysis would be suitable for applications such as certain aspects of cryptography, in which the input really was chosen by an adversary, or in high-stakes situations in which there was zero tolerance for any error whatsoever; it is also useful as a “default” analysis for when no realistic input model is available. At the other extreme, one could perform an average-case analysis, in which the input was chosen in a completely random fashion (e.g. a random string of zeroes and ones, or a random vector whose entries were all distributed according to a Gaussian distribution). While such input models were usually not too realistic (except in situations where the signal-to-noise ratio was very low), they were usually fairly simple to analyse (using tools such as concentration of measure). In many situations, the worst-case analysis is too conservative, and the average-case analysis is too optimistic or unrealistic. For instance, when using the popular simplex method to solve linear programming problems, the worst-case run-time can be exponentially large in the size of the problem, whereas the average-case run-time (in which one is fed a randomly chosen linear program as input) is polynomial. However, the typical linear program that one encounters in practice has enough structure to it that it does not resemble a randomly chosen program at all, and so it is not clear that the average-case bound is appropriate for the type of inputs one has in practice. At the other extreme, the exponentially bad worst-case inputs were so rare that they never seemed to come up in practice either. To obtain a better input model, Spielman and Teng considered a smoothed-case model, in which the input was the sum of a deterministic (and possibly worst-case) input, and a small noise perturbation, which they took to be Gaussian to simplify their analysis. This reflected the presence of measurement error, roundoff error, and similar sources of noise in real-life applications of numerical algorithms. Remarkably, they were able to analyse the run-time of the simplex method for this model, concluding (after a lengthy technical argument) that under reasonable choices of parameters, the run time was polynomial time in the length, thus explaining the empirically observed phenomenon that the simplex method tended to run a lot better in practice than its worst-case analysis would predict, even if one started with extremely ill-conditioned inputs, provided that there was a bit of noise in the system. One of the ingredients in their analysis was a quantitative bound on the condition number of an arbitrary matrix when it is perturbed by a random gaussian perturbation; the point being that random perturbation can often make an ill-conditioned matrix better behaved. 
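As a toy illustration of that last point (my own numerical experiment, not the actual Spielman-Teng bound), one can watch the condition number of a nearly singular matrix improve as the size of a Gaussian perturbation grows:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.outer(np.arange(1, n + 1), np.ones(n))     # a rank-one, hence singular, matrix
print("unperturbed condition number:", np.linalg.cond(A))

for sigma in (1e-6, 1e-3, 1e-1):
    kappas = [np.linalg.cond(A + sigma * rng.standard_normal((n, n))) for _ in range(20)]
    print(f"noise sigma = {sigma:g}:  median condition number ~ {np.median(kappas):.3e}")
```

The larger the noise level, the smaller the typical condition number, which is the qualitative content of the smoothed-analysis estimates.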
(This is perhaps analogous in some ways to the empirical experience that some pieces of machinery work better after being kicked.) Recently, new tools from additive combinatorics (in particular, inverse Littlewood-Offord theory) have enabled Rudelson-Vershynin, Vu and myself, and others to generalise this bound to other noise models, such as random Bernoulli perturbations, which are a simple model for modeling digital roundoff error. — 2. Yves Meyer — Yves Meyer has worked in many fields over the years, from number theory to harmonic analysis to PDE to signal processing. As the Gauss prize is concerned with impact on fields outside of mathematics, Meyer’s major contributions to the theoretical foundations of wavelets, which are now a basic tool in signal processing, were undoubtedly a major consideration in awarding this prize. But I would like to focus here on another of Yves’ contributions, namely the Coifman-Meyer theory of paraproducts developed with Raphy Coifman, which is a cornerstone of the para-differential calculus that has turned out to be an indispensable tool in the modern theory of nonlinear PDE. Nonlinear differential equations, by definition, tend to involve a combination of differential operators and nonlinear operators. The simplest example of the latter is a pointwise product {uv} of two fields {u} and {v}. One is thus often led to study expressions such as {D(uv)} or {D^{-1}(uv)} for various differential operators {D}. For first order operators {D}, we can handle derivatives using the product rule (or Leibniz rule) from freshman calculus: \displaystyle D(uv) = (Du)v + u(Dv). We can then iterate this to handle higher order derivatives. For instance, we have \displaystyle D^2(uv) = (D^2 u) v + 2 (Du) (Dv) + u (D^2 v), \displaystyle D^3(uv) = (D^3 u) v + 3 (D^2u) (Dv) + 3 (Du) (D^2 v) + u (D^3 v), and so forth, assuming of course that all functions involved are sufficiently regular so that all expressions make sense. For inverse derivative expressions such as {D^{-1}(uv)}, no such simple formula exists, although one does have the very important integration by parts formula as a substitute. And for fractional derivatives such as {|D|^\alpha(uv)} with {\alpha > 0} a non-integer, there is also no closed formula of the above form. Note how the derivatives on the product {uv} get distributed to the individual factors {u,v}, with {u} absorbing all the derivatives in one term, {v} absorbing all the derivatives in another, and the derivatives being shared between {u} and {v} in other terms. Within the usual confines of differential calculus; we cannot pick and choose which of the terms we like to keep, and which ones to discard; we must treat every single one of the terms that arise from the Leibniz expansion. This can cause difficulty when trying to control the product of two functions of unequal regularity – a situation that occurs very frequently in nonlinear PDE. For instance, if {u} is {C^1} (once continuously differentiable), and {v} is {C^3} (three times continuously differentiable), then the product {uv} is merely {C^1} rather than {C^3}; intuitively, we cannot “prevent” the three derivatives in the expression {D^3(uv)} from making their way to the {u} factor, which is not prepared to absorb all of them. However, it turns out that if we split the product {uv} into paraproducts such as the high-low paraproduct {\pi_{hl}(u,v)} and the low-high paraproduct, we can effectively separate these terms from each other, allowing for a much more flexible analysis of the situation. 
The concept of a paraproduct can be motivated by using the Fourier transform. For simplicity let us work in one dimension, with {D} being the usual differential operator {D = \frac{d}{dx}}. Using a Fourier expansion (and assuming as much regularity and integrability as is needed to justify the formal manipulations) of {u} into components of different frequencies, we have \displaystyle u(x) = \int_{\bf R} \hat u(\xi) e^{i x \xi}\ d\xi (here we use the usual PDE normalisation in which we try to hide the {2\pi} factor) and similarly \displaystyle v(x) = \int_{\bf R} \hat v(\eta) e^{i x \eta}\ d\eta and thus \displaystyle uv(x) = \int_{\bf R} \int_{\bf R} \hat u(\xi) \hat v(\eta) e^{ix(\xi+\eta)}\ d\xi d\eta \ \ \ \ \ (1) and thus, by differentiating under the integral sign \displaystyle D^k(uv)(x) = i^k \int_{\bf R} \int_{\bf R} (\xi+\eta)^k \hat u(\xi) \hat v(\eta) e^{ix(\xi+\eta)}\ d\xi d\eta. In contrast, we have \displaystyle (D^j u) (D^{k-j} v)(x) = i^k \int_{\bf R} \int_{\bf R} \xi^j \eta^{k-j} \hat u(\xi) \hat v(\eta) e^{ix(\xi+\eta)}\ d\xi d\eta; thus, the iterated Leibniz rule just becomes the binomial formula \displaystyle (\xi+\eta)^2 = \xi^2 + 2\xi\eta + \eta^2, \quad (\xi+\eta)^3 = \xi^3 + 3 \xi^2 \eta + 3 \xi \eta^2 + \eta^3, \ldots after taking Fourier transforms. Now, it is certainly true that when dealing with an expression such as {(\xi+\eta)^2}, all three terms {\xi^2, 2\xi\eta, \eta^2} need to be present. But observe that when dealing with a "high-low" frequency interaction, in which {\xi} is much larger in magnitude than {\eta}, the first term dominates: {(\xi+\eta)^2 \sim \xi^2}. Conversely, with a "low-high" frequency interaction, in which {\eta} is much larger in magnitude than {\xi}, we have {(\xi+\eta)^2 \sim \eta^2}. (There are also "high-high" interactions, in which {\xi} and {\eta} are comparable in magnitude, and {(\xi+\eta)^2} can be significantly smaller than either {\xi^2} or {\eta^2}, but for simplicity of discussion let us ignore this case.) It then becomes natural to try to decompose the product {uv} into "high-low" and "low-high" pieces (plus a "high-high" error), for instance by inserting into (1) suitable cutoff functions {m_{hl}(\xi,\eta)} or {m_{lh}(\xi,\eta)} adapted to the regions {|\xi| \gg |\eta|} or {|\xi| \ll |\eta|}, to create the paraproducts \displaystyle \pi_{hl}(u,v)(x) = \int_{\bf R} \int_{\bf R} m_{hl}(\xi,\eta) \hat u(\xi) \hat v(\eta) e^{ix(\xi+\eta)}\ d\xi d\eta \displaystyle \pi_{lh}(u,v)(x) = \int_{\bf R} \int_{\bf R} m_{lh}(\xi,\eta) \hat u(\xi) \hat v(\eta) e^{ix(\xi+\eta)}\ d\xi d\eta Such paraproducts were first introduced by Calderón, and more explicitly by Bony. Heuristically, {\pi_{hl}(u,v)} is the "high-low" portion of the product {uv}, in which the high frequency components of {u} are "allowed" to interact with the low frequency components of {v}, but no other frequency interactions are permitted, and similarly for {\pi_{lh}(u,v)}. The para-differential calculus of Bony, Coifman, and Meyer then allows one to manipulate these paraproducts in ways that are very similar to ordinary pointwise products, except that they behave better with respect to the Leibniz rule or with more exotic differential or integral operators.
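For readers who want to experiment, here is a rough numerical sketch of this splitting (entirely my own construction, with arbitrary simplifications: periodic functions sampled on {[0, 2\pi)}, sharp rather than smooth frequency cutoffs, and a factor of 2 standing in for the thresholds {|\xi| \gg |\eta|} and {|\xi| \ll |\eta|}):

import numpy as np

N = 256
x = 2 * np.pi * np.arange(N) / N
u = np.cos(40 * x)   # a "high frequency" factor
v = np.cos(3 * x)    # a "low frequency" factor

def paraproduct(u, v, keep):
    """Form the part of u*v coming from the frequency pairs (xi, eta) allowed by `keep`."""
    uh, vh = np.fft.fft(u) / N, np.fft.fft(v) / N   # Fourier coefficients of u and v
    freqs = np.fft.fftfreq(N, d=1.0 / N)            # integer frequencies
    xi, eta = np.meshgrid(freqs, freqs, indexing="ij")
    coeffs = np.where(keep(xi, eta), np.outer(uh, vh), 0.0)
    phases = np.exp(1j * np.outer(freqs, x))        # e^{i xi x}, one row per frequency
    # sum over (xi, eta) of coeffs[xi, eta] * e^{i (xi + eta) x}
    return np.real(np.einsum("ab,ax,bx->x", coeffs, phases, phases))

pi_hl = paraproduct(u, v, lambda xi, eta: np.abs(xi) > 2 * np.abs(eta))  # high-low piece
pi_lh = paraproduct(u, v, lambda xi, eta: np.abs(eta) > 2 * np.abs(xi))  # low-high piece

# For these particular u, v every frequency interaction is high-low,
# so pi_hl alone already recovers the full product u*v.
print(np.max(np.abs(u * v - pi_hl)), np.max(np.abs(pi_lh)))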
For instance, one has \displaystyle D^k \pi_{hl}(u,v) \approx \pi_{hl}(D^k u, v) \displaystyle D^k \pi_{lh}(u,v) \approx \pi_{lh}(u, D^k v) for differential operators {D^k} (and more generally for pseudodifferential operators such as {|D|^\alpha}, or integral operators such as {D^{-1}}), where we use the {\approx} symbol loosely to denote “up to lower order terms”. Furthermore, many of the basic estimates of the pointwise product, in particular Hölder’s inequality, have analogues for paraproducts; this is a special case of what is now known as the Coifman-Meyer theorem, which is fundamental in this subject, and is proven using Littlewood-Paley theory. The same theory in fact gives some estimates for paraproducts beyond what are available for products. For instance, if {u} is in {C^1} and {v} is in {C^3}, then the paraproduct {\pi_{lh}(u,v)} is “almost” in {C^3} (modulo some technical logarithmic divergences which I will not elaborate on here), in contrast to the full product {uv} which is merely in {C^1}. Paraproducts also allow one to extend the classical product and chain rules to fractional derivative operators, leading to the fractional Leibniz rule \displaystyle |D|^\alpha(uv) \approx (|D|^\alpha u) v + u (|D|^\alpha v) and fractional chain rule \displaystyle |D|^\alpha(F(u)) \approx (|D|^\alpha u) F'(u) which are both very useful in nonlinear PDE (see e.g. this book of Taylor for a thorough treatment). See also this brief Notices article on paraproducts by Benyi, Maldonado, and Naibo. — 3. Louis Nirenberg — Louis Nirenberg has made an amazing number of contributions to analysis, PDE, and geometry (e.g. John-Nirenberg inequality, Nirenberg-Treves conjecture (recently solved by Dencker), Newlander-Nirenberg theorem, Gagliardo-Nirenberg inequality, Caffarelli-Kohn-Nirenberg theorem, etc.), while also being one of the nicest people I know. I will mention only two results of his here, one of them very briefly. Among other things, Nirenberg and Kohn introduced the pseudo-differential calculus which, like the para-differential calculus mentioned in the previous section, is an extension of differential calculus, but this time focused more on generalisation to variable coefficient or fractional operators, rather than in generalising the Leibniz or chain rules. This calculus sits at the intersection of harmonic analysis, PDE, von Neumann algebras, microlocal analysis, and semiclassical physics, and also happens to be closely related to Meyer’s work on wavelets; it quantifies the positive aspects of the Heisenberg uncertainty principle, in that one can observe position and momentum simultaneously so long as the uncertainty relation is respected. But I will not discuss this topic further today. Instead, I would like to focus here instead on a gem of an argument of Gidas, Ni, and Nirenberg, which is a brilliant application of Alexandrov’s method of moving planes, combined with the ubiquitous maximum principle. This concerns solutions to the ground state equation \displaystyle \Delta Q + Q^p = Q \ \ \ \ \ (2) where {Q: {\bf R}^n \rightarrow {\bf R}^+} is a smooth positive function that decays exponentially at infinity, {p > 1} is an exponent, and {\Delta := \sum_{j=1}^n \frac{\partial^2}{\partial x_j^2}} is the Laplacian. This equation shows up in a number of contexts, including the nonlinear Schrödinger equation and also, by coincidence, in connection with the best constants in the Gagliardo-Nirenberg inequality. The existence of ground states {Q} can be proven by the variational principle. 
But one can say much more: Lemma 1 (Gidas-Ni-Nirenberg) All ground states {Q} are radially symmetric with respect to some origin. To show this radial symmetry, a small amount of Euclidean geometry shows that it is enough to establish a lot of reflection symmetry: Lemma 2 (Gidas-Ni-Nirenberg, again) If {Q} is a ground state and {\omega \in S^{n-1}} is a unit vector, then there exists a hyperplane orthogonal to {\omega} with respect to which {Q} is symmetric. To prove this lemma, we use the moving planes method, sliding in a plane orthogonal to {\omega} from infinity. More precisely, for each {t \in {\bf R}}, let {\Pi_t} be the hyperplane {\{ x: x \cdot \omega = t \}}, let {H_t} be the associated half-space {\{ x: x \cdot \omega \leq t\}}, and let {Q_t: H_t \rightarrow {\bf R}} be the function {Q_t(x) := Q(x) - Q(r_t(x))}, where \displaystyle r_t(x) := x + 2 (t - x \cdot \omega) \omega is the reflection through {\Pi_t}; thus {Q_t} is the difference between {Q} and its reflection in {\Pi_t}. In particular, {Q_t} vanishes on the boundary {\Pi_t} of the half-space {H_t}. Intuitively, the argument proceeds as follows. It is plausible that {Q_t} is going to be positive in the interior of {H_t} for large positive {t}, but negative in the interior of {H_t} for large negative {t}. Now imagine sliding {t} down from {+\infty} to {-\infty} until one reaches the first point {t = t_0} at which {Q_{t_0}} ceases to be positive in the interior of {H_{t_0}}; at that point {Q_{t_0}} attains its minimum value of zero somewhere in the interior of {H_{t_0}}. But by playing around with (2) (using the Lipschitz nature of the map {Q \mapsto Q^p} when {Q} is bounded) we know that {Q_{t_0}} obeys an elliptic constraint of the form {\Delta Q_{t_0} = O( |Q_{t_0}| )}. Applying the maximum principle, we can then conclude that {Q_{t_0}} vanishes identically in {H_{t_0}}, which gives the desired reflection symmetry. (Now it turns out that there are some technical issues in making the above sketch precise, mainly because of the non-compact nature of the half-space {H_t}, but these can be fixed with a little bit of fiddling; see for instance Appendix B of my PDE textbook.)
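As a small illustration of equation (2) and of the radial (in one dimension: even) symmetry asserted by Lemma 1, one can check numerically that the classical one-dimensional soliton solves the {n=1}, {p=3} case. This is a sketch of my own; the explicit sech formula is standard and is not taken from the Gidas-Ni-Nirenberg paper:

import numpy as np

x = np.linspace(-10, 10, 20001)
h = x[1] - x[0]
Q = np.sqrt(2) / np.cosh(x)                  # candidate ground state for n = 1, p = 3

Qxx = (Q[2:] - 2 * Q[1:-1] + Q[:-2]) / h**2  # centered second difference
residual = Qxx + Q[1:-1] ** 3 - Q[1:-1]      # should vanish if Q'' + Q^3 = Q

print("max PDE residual:", np.max(np.abs(residual)))                # O(h^2), essentially zero
print("max asymmetry |Q(x) - Q(-x)|:", np.max(np.abs(Q - Q[::-1])))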
The Secular Man's Suicide Pact

I recently visited a place with a much larger Moslem population than my home town.  That set me to thinking about things. Leftist progress on diversity looks to be moving right along.  I'm guessing they have a schedule for undoing the dispersion of Babel.  We'll see how that works out. Trembling as I am to utter the following blasphemy against one of the Secular Man's highest articles of faith, it appears to me that segregation is pretty much a natural thing for most people.  If we don't segregate by race, we do it by sex, religion, income, the type of work you do, your IQ, or even your status as a manager or worker bee.  I've noted that the Ivy League humanities majors don't pal around with the NASCAR set a whole lot, not that either side of this travesty of segregation is complaining about it, because each side is equally convinced that the other side is peopled with bigots and idiots.  In fact, NASCAR people won't even hang around NHRA.  Oh, well. So people prefer to be around folks who are like themselves.  But somehow, acknowledging this obvious fact makes me a blasphemer in the eyes of the Secular Man.  Like Winston said in 1984, "theyll shoot me i don't care theyll shoot me in the back of the neck i dont care". There are exceptions to the general trend toward segregation, sometimes benign, sometimes not.  There are cases where we let students come over here and send ours over there.  No harm there.  If you want a kid from France in your home for the school year, more power to you, sez I. And there are Christian societies who send doctors, farmers, engineers and whatnot.  They have an ulterior motive, of course, but it's an open secret.  They'll treat lepers and show folks how to have safe drinking water in exchange for a chance to explain about Jesus and the cross.  There is definitely no harm in that.  And — speaking only of my own premillennial views — Christianity decidedly does not teach us to take over the world by force.  If there is to be any force, Jesus will impose it in person when He arrives.  Post-mill folks, I'll let you speak for yourselves on this. Other cases are not quite so benign.  Caesar desegregated the Italians and the Gauls, but not to help the latter. Likewise the Assyrians in Israel, the Babylonians in Judah, and so on. In general, I'd say anybody who thinks it's the destiny of his group to take over the world by force is a threat.  In modern times, the Communists and Fascists fit this category and were proud of it.  There was a time when Americans understood this and reacted against it.  The old adage about being better dead than red was a way of acknowledging the open threat posed by Communism while saying that we intended to push back with whatever force it took. And to get back where I started from, Moslems fit this category.  Islam intends to take over the world, by persuasion where it can, by force where it must. In the vast majority of cases, your Moslem co-worker is no threat to you or anybody else.  He's just a guy trying to get by, raise his kids to be good Moslems, and keep his wife from feeling like a conspicuous fool wearing her burqa. The problem arises when there are enough Moslems to form a society that runs along Islamic lines.  Because Islam expects to own Earth and everyone on it.  The Secular Man, wearing his feelings on his "coexist" bumper sticker, is just not prepared to deal with the reality of an Islam that will not rest until everyone bows to Mecca.
The Secular Man's refusal to see a threat where there clearly is one looks a lot like a suicide pact.

Will the real oligarchy please stand up

The Washington Times published an article quoting an Ivy League study saying we live in an oligarchy rather than a republic.  Elites and special interests buy influence and get their way, the study says, describing a government of plutocrats more than oligarchs.  The Catholic News Service quotes the same study to the same effect, complaining against big corporations doing a lot of evil influence buying.  The gist of it is that the rich get their way, harming the rest of us. But the biggest source of influence over the way the government governs is the government itself. And here's how it works.  Something like 148 million Americans are receiving some form of stipend from the government.  Government is essentially buying their votes with money confiscated from the 90 million private sector workers who pay for it all.  So maybe it's true that we have an oligarchy or plutocracy settling in upon us, but the "evil" corporations are by no means the dominant players in this field.  The money laundering from the government dwarfs all other forms of paid influence, whether it's from the Koch brothers on the right or Warren Buffett and George Soros on the left. It's a victory for Big Irony that many of the people complaining the loudest about the influence of Big Corporations are actually part of Big Government and seem completely blind to their role in the most pervasive and ruinous corruption scheme of all.

So why are you still a Christian?

Given the general drift of western civilization, there are some on our side who are feeling like Christianity is, well, in retreat.  If you could wind your time turner back 40 years, nobody living at that time would have said that America would be in the process of honoring sodomy with its own special rite of marriage.  Nobody would have predicted rates of divorce and illegitimacy where they are today.  Jokes about mass confusion on Fathers' Day used to be directed at other people, not us.  Now we are the joke. Struggling families can't find a reason to stick it out.  People give up and give over to their pet sins.  People born, raised, and married in the church suddenly go secular, not out of any sense of offense or hostility, but because they just don't care any more. So why are you still plodding along with a crowd that seems to be losing so badly?  Maybe it's because you're part of the gray-haired set that still does all that Churchianity stuff (including Wednesday night).  Maybe it's because America isn't the only place in the world, and in some other places like China, Christianity is growing like mad.  Maybe you stick around because you're one of those fortunate folks still plugged into a dynamite church, and you really enjoy it. Here's a reason for you to ponder: You should keep the faith because Jesus is alive.  The only real reason to get into Christianity in the first place, if I could say it like that, is because it is true.  And the central truth of Christianity is that He rose from the dead.  If He rose from the dead, there is every reason in the world to continue faithful regardless of what the rest of society does. If Jesus didn't rise from the dead, there never was a reason to be a Christian.  In the early days of Christianity, Paul said that if Christ has not risen, then "we of all men are most miserable."  Why suffer for a dead god?  What sense does that make?
We don't suffer any serious persecution in America, so let's apply the thought more accurately to us: why deny yourself the pleasures of a hedonistic life to honor the memory of a dead guy? But if Jesus is alive, that changes everything.  That would mean that He has power over death.  It would mean He really is the Son of God.  It would mean His church is destined to become the central focus of history.  It would mean that following Jesus matters, not just to your kids or your personal sense of stick-to-itiveness, or the moral tidiness of your little corner of the world, but it matters on the biggest and most cosmic scope imaginable. And if Jesus is alive, it would mean that death is not the end of life that we all thought it was, but just a pause before we transition into something far greater He has prepared for us.  It would mean that our ultimate conclusions about life stand upon a living hope.  And hope is the thing that makes us keep on keeping on. So I'm still a Christian because the tomb was emptied when Christ came out of it alive.

Ukraine and Obama's complicated failure

Of course you've heard by now that Mr. Romney and Mrs. Palin both warned that Mr. Obama's policy toward Russia would lead to the ongoing crisis in Ukraine. America should have been doing everything possible to help Ukraine establish a sturdy, free economy, a justice system free of graft and corruption, and a credible military. If we were attempting any of these things, it never made any news I could see. But now Obama's failures are starting to earn compound interest. Obama, who years ago helped bring about the decline of America's manned space program — not that he did this by himself; he had plenty of help — now has another problem on his hands in the matter of Ukraine. He can't afford to anger the Russians too much because we depend on them to get our astronauts and supplies to/from the space station. We're in one of those moments when you realize that great nations have to remain strong in every area. America's space program has been the envy of the world for 60 years, that is, until the last shuttle flight. Now we have no way to get people and cargo to the space station. So Mr. Obama will be low key in his reactions to the Russian dealings in Ukraine. It would be too embarrassing to have the Russians tell America to kiss off next time we need one of our astronauts to bum a ride in a Soyuz spacecraft. And I feel I need to add something here.  I am not ashamed of America, but I am ashamed of our self-imposed weakness and immorality after years of secular, Socialist-leaning misrule. The looming problems confronting our astronauts are the kinds of weird, unique gotchas that crop up when foreign policy is dominated by the wishful, utopian thinking of liberal academics instead of a hard-headed determination to deal with facts as they are.  Russia is a powerful nation that sees itself as our rival.  Mr. Putin is a smart, tough ex-KGB agent and a fierce nationalist.  He plays to win and won't hesitate to spill blood to achieve his goals.  Only blind folly could have failed to see that and act accordingly.  Putting our space program into a state of dependency on the Russian program is beyond naïve. So remember that next time a liberal politician tells you America is disliked around the world, embarks on a worldwide apologize-for-America tour, and offers a former KGB agent one of those ludicrous red reset buttons.
Why I think there's a God

Louise Antony gave her reasons for thinking there's no God, and I dealt with those here. But what are the reasons for thinking there is one? In most Christian literature, these boil down to five.

Why There's Something Instead of Nothing

The physical universe tells us it had a beginning. The sun isn't merely shining; it is burning up. The world isn't merely turning; it is spinning down to a stop. Natural processes everywhere are in a state of decay and decline. The available energy in the universe is a consumable resource. There's an end point when the mainspring of the whole cosmos will stop ticking. Therefore, it had to have been wound up at some point in the past. So the physical universe isn't eternal. So something else must have been here before there was a universe. Whatever that was, it must have been eternal and must have had the capacity to bring the universe into being.

Why There's Order Instead of Chaos

When you drive through the South and see a 1000-acre tract of pine trees planted in rows, equally spaced along the row and all the same age, you don't have to ask if somebody did that. When I look at the far more complex arrangements of DNA, it's obvious that a mighty intelligence made this. DNA contains the coding needed to duplicate itself. But a process capable of creating a DNA molecule from scratch simply does not exist in nature. Nothing even remotely approaching this degree of sophistication has ever been observed, not in nature, nor even in man's most advanced laboratories. So something eternal and powerful was there before the universe existed. And it had the capacity to bring the universe into being, wind up the spring, and then release the energy through myriads of the most intricately designed mechanisms. Such a being is intelligent beyond all the reckoning of man.

Why Things are Right and Wrong

People have a moral component to their nature. Ms. Antony shows this when she asks that we all work for peace. Nice thought, though I wish she'd explain why, on atheistic principles, peace is better than war. After all, isn't evolution driven by conflict and winnowing away the unfit so that only the strongest and smartest survive to breed again? Here's a case where evolutionists are better than their principles. They generally wish the world were better — and "better" is defined in moral terms. Furthermore, there is, for lack of a better term, a genuine reality underlying morals. We aren't merely displeased when brutes kidnap little girls and sell them into sexual slavery. No, this is really and truly evil, and wrong. And it's not just that we feel happy about a man who would redeem little slaves out of their bondage. No, such a deed is really and truly good and right. The fact that morality cannot be derived from nature is not an argument from gaps in our knowledge. Rather, it's plain to see that there is no arrangement of particles and forces that can ever account for a moral right and wrong because morality involves not just an assessment of facts, but an assertion of authority. Morality is the claim, coming from outside your own head, that you ought or ought not do something. And "ought" inherently arrives in the form of a command. Morality sees what is wrong and authoritatively forbids it. Morality sees what is right and authoritatively commands it. The origin of morality, then, is very much like the origin of the physical universe. It's here; it's real, and it defies natural, material explanation.
It demands a source that is outside of this world, transcendent, and that was capable of implanting it in the human heart when man was first formed. So — just building the argument — something eternal brought the universe into being, something that was powerful enough to do it, intelligent enough to design it, and this Being possessed a moral code which it then hard-wired into the hearts of men.

Why We Sense the Transcendent

It's an interesting question as to why, on naturalist/materialist principles, people should have ever evolved to be capable of wondering about what could be outside this physical dimension. Where's the survival value in such a massive and stressful distraction?  Or to take the question a level deeper, how do matter and energy interact in such a way as to produce conscious beings who ponder things higher than matter and energy? Ms. Antony herself experiences the draw of the transcendent but drops it too soon. The real question is what a sense of transcendence is leading you to.  Being a Christian, it's obviously my opinion that God created this in us to lead us to Him.  Paul told the Athenians that we "feel after Him," (Acts 17:27) clearly expecting that even pagan men would have been open minded enough to investigate an intuition shared by virtually all people. We Christians find our sense of transcendence filled, satisfied, yet heightened and completed by knowing our God through His Son, Jesus.  People from other religions testify of their version of the same sense of transcendence.  It's not my purpose to address those experiences, only to say that whether we're making out shapes in a fog or seeing in the full light of day, something is there, and we all sense it to some degree.  And although the argument is not dispositive, I can't frame a better explanation for a sense of transcendence than to propose that God has indeed set "eternity in our hearts" (Eccl 3:11) as a way to both prompt us to seek Him and as a way to experience Him once He is found.

The life of Jesus Christ

The chief way God chose to reveal Himself to man was through Jesus.  The officers sent to arrest Him said, "Nobody ever spoke like this man."  We exhaust all the superlatives when we consider Him.  His teachings set the standard for goodness even among those who reject Him.  He led such a life that those who sought His ruin could accuse Him only by lying.  Without money, without armies, without political connections, without allies, without any access to the levers of power, having died young, Jesus did more to change the world for good than all who ever came before or after. And He rose from the dead.  Yes, His followers reported many other miracles He did, turning water to wine, walking on water, feeding multitudes out of a sack lunch. But the miracle of His resurrection was the story they were all, to a man, willing to be tortured and die for the privilege of telling, not because they had anything to gain by it, but because they undeniably believed it to be true. If there is a God such as I have described, and if God became a man, I would expect Him to be a man like Jesus. So that's it.  It's why I think God exists and has revealed Himself to us through His Son, Jesus.

Answers for an atheist

The New York Times published an interview with atheist Louise Antony who confidently affirms that there is no God. Read the linked article if you like, but her arguments against God boil down to just a handful of things.
First, Antony says, "I deny that there are beings or phenomena outside the scope of natural law." This, of course, is no argument at all. It's just assuming the conclusion. Presupposing materialism merely evades the debate about whether God exists. The Christian idea of God is that He is transcendent, meaning that He is "above" or "beyond" or "outside" the universe. Looking for God by material methods is like prospecting for diamonds with a metal detector. Wrong tool. In her second argument, Antony says religious people can't all agree on what God is, what He is like, or whether there are more gods than one. This is all true, and all irrelevant. For the sake of argument, let's assume that all religious people are hopelessly muddled on the nature of God. Does this mean they're all equally deceived on the existence of God? Not at all. Even in a total fog, people can know something is out there without knowing any details about it. Antony then says she cannot reconcile the existence of evil with the existence of God. Beg pardon, but what is this "evil" she speaks of? The existence of categories like "good" and "evil" assumes a Supreme Authority who establishes what's good and what's not. And consider again Antony's statement, "I deny that there are beings or phenomena outside the scope of natural law." Yet the very categories of good and evil are outside of natural law. You cannot derive morality from Newton's laws or the Schrödinger equation. That requires a transcendent source. On the other hand, if good and evil are not real categories, if they're just cultural norms or her own private intuitions, then her objection vanishes. Her argument amounts to, "I'm displeased (or we are); therefore, there is no god," which is absurd. But Ms. Antony is left to ponder the motions of her own heart. Why is she outraged by rape or brutality? Who cares, and why should anyone care, if orphans starve, tyrants strut, armed gangs pillage and plunder, girls are bought and sold, and all the rest of human misery is played out before our eyes? If Ms. Antony knows anything at all, she knows there's Something Big moving out there in the fog. And following that, Ms. Antony should be the first to accept religious experiences. After all, she's had a big one. She's felt the wrong of this fallen, sinful world and felt the need to put it all back right. That didn't evolve from a big cloud of hydrogen gas. God has set eternity in our hearts, and that's what it sounds like when people pay attention to it, even a little bit.

Kim Jung O

So now the FCC wants to install government minders in newsrooms across the country to make sure "underserved minorities" get the news they need. I guess we'll show Kim Jung Un how it's done. Even Mr. Obama's lickspittle media has an eyebrow aloft. But don't worry, lefties — if you like your freedom of the press, you can keep it!

Another foretaste of things to come

It's no secret that Christian values are being slowly but inexorably dispossessed in America. Wedding cake bakers who refuse service to homosexual couples get sued over it, and lose. They're told that once you open your business up to serve the public, then you have to serve whatever comes through the door. But now a bar owner in California says he'll refuse service to state legislators who vote for anti-gay legislation. Actually, he went a bit farther and said he'd deny them entry to his bar. I'm thinking his valiant pro-gay stand isn't likely to cost him a lot of money.
How many Christians are clamoring to enter a gay bar in California? Still, the principle being established here should tell every Christian that it's past time to gird up the old loins. Christian bakers are fair game for discrimination suits if they transgress against the Secular Man's homodoxy on the grounds that public businesses have to accept whatever the public accepts. To borrow from Spurgeon, I'll adventure to prophesy that anti-Christian bar owners will be immune from suits on the same grounds. Yet — lest we all forget — Californians voted against homosexual marriage, even going so far as to forbid it in their state constitution. So it's clear that the actual public in California accepts anti-gay legislators just fine. But you can be certain that the bar owner, should he get sued for discrimination, will get a pass. Christians should be waking up to the fact that we're in a fight. And to paraphrase Mordecai to Esther, don't think this won't ever touch you.

When politics go bad

King Baasha of Israel was a drunkard. His servant Zimri murdered him while he was drunk. Short moral of story: A drunken king can't be trusted to know who the enemy is. Zimri took over and reigned for about a week. Another servant named Omri found out Baasha was dead and came after Zimri. Zimri neither fought nor fled, but went into his own house and burned it down upon himself. Moral: It's easier to take over than it is to actually keep order, and once order is lost, you don't have a lot of options. Omri was a wicked king and plunged Israel deeper into ruinous idolatry. Moral: A guy who just wants to be in charge is about the last man you want in power. Omri's son Ahab eventually became king. The Bible describes Ahab as worse than all who came before him. He married Jezebel who was even worse than he was. Moral: Getting rid of drunks, killers, and tyrants doesn't mean things are about to get better. The son might make you wish for his daddy back. And beware the tyrant's wife. During Ahab's reign the prophet Elijah called for a drought that lasted for years. Moral: When the right leadership arrives, the fight isn't over; it's just starting, and you may dislike his methods. At Mount Carmel, God spoke by fire from heaven. Israel, convinced, repented. They acknowledged that the Lord is God, not Baal, and executed the idolatrous priests. Then the rain came. Moral: Fixing a country starts with fixing hearts.

Baghdad Bob and ObamaCare

Is this a great country or what?

Secular Man's smoking habits

Has anyone else noticed how smoking tobacco has been getting less legal while smoking marijuana has been getting more legal? And isn't it just the funniest thing that so many plain old potheads are claiming it's for medicinal purposes?

Connecticut — nekkid and hoping you won't notice

One of the great insights of the American Revolution is that a government's authority derives from the consent of the governed. The State of Connecticut passed a law saying everyone in the state must register so-called "assault" weapons and high capacity ammo magazines. Comes now the report that tens of thousands of citizens in Connecticut — perhaps millions — have declined to obey. Registration schemes are plainly the first move in a game of confiscation. Many who intend not to surrender their arms are declining to register them. This may turn out to be a very, very big deal. Failure to register a weapon in Connecticut is a class D felony. A class D felony is punishable by up to five years in prison.
Despite that, gun owners in Connecticut collectively jutted their jaws and said, "Hell, no." How big is the problem? Connecticut estimates there are about 370,000 so-called "assault" weapons in Connecticut. Less than 50,000 have been registered. They estimate there are 2.4 million high capacity ammo magazines in the state. About 38,000 have been registered. Theoretically, Connecticut now has well over two million new felons. You can be sure Connecticut pols see the problem just like I do. If a huge swath of the population responds with sullen defiance, the government no longer has the consent of the governed. How is it a legitimate government any more? And how do you recover that once it's lost? I see three options.

1) Connecticut can openly and humbly restore its legitimacy by repealing the law.
2) Officials can reduce enforcement to some low level that ruins a few people's lives while leaving most violators untouched yet still under state threat.
3) The state can hire more SWAT teams, build way more prisons and start the crackdown.

Option 2 is most likely because the gun law was designed not to solve a problem but to make liberals feel good about themselves. Neither practicing humility nor engorging the prisons would serve that purpose, although criminalizing a bunch of rightwingers would. And if a few of them get busted, well, that's the price one pays. One problem: Reducing enforcement to a level that prevents serious conflict is claiming victory while hoisting a white flag. It's like one of those dreams where you show up at work buck nekkid and nobody notices. As Drudge says, "Developing…"

From creation clearly seen

The recent creation/evolution debate between Ken Ham and Bill Nye was pretty good.  It was not excellent. The rules of the debate didn't require the contestants to engage one another to any great extent, so the back-and-forth that challenges reasoning didn't happen. One of the things Mr. Ham said that begged for discussion was his remark that just doing science presupposes God and creation.  Christians schooled in apologetics promptly said rah-rah, but the argument was left as a mere assertion.  Nye declined to ask for an explanation, and Ham obliged by not offering one. Why should there be any such thing as natural law? Why should nature be orderly and predictable? Why should gravitation behave according to a rule so precise that you can measure its effects and write a mathematical equation that tells you exactly what's going to happen? A Christian would argue from the creation account that God intended His universe to function in an orderly way. Creatures bring forth "after their kind," it says ten times. The motions of the earth, sun, moon, and stars provide day, night, signs, and seasons. There is order in this, and Paul tells us that the invisible things of God are clearly seen, being understood by what God made. (Rom 1:20) But the deeper question for Mr. Nye would have been this: What is it about your thought process that leads you to look for orderliness in the first place, and why does your mind naturally recognize it and latch onto it? Based on Nye's frequent and brave admissions about what he doesn't know, I can only surmise that he'd admit again that he has no idea why the Big Bang resulted in law and order rather than sheer chaos, and he'd likely admit that he has no idea why his mind should be structured to look for order. Or he might just say it evolved this way, which is the same thing.
But the Christian can say that if we take the Word of God as our starting point, the first thing we learn is that God is, that He made the universe, and that He did it in an orderly manner. Further, God immediately set about revealing Himself to man with that revelation being set in a framework of reason and logic. The imago dei means our heads are hard-wired to look for order, to recognize it at once, and to latch onto it when it's found. For science to exist at all, all these Christian teachings about creation and human nature have to be assumed as prerequisites. They must be presupposed. The questions for Mr. Nye and everyone who investigates science from a naturalistic viewpoint are these: How does the Big Bang account for the fact that the resulting cosmos functions according to fixed laws? And second, how did the mind of man come to look for such things? Christianity has an answer for these questions. Naturalism can't do any better than offer a shrug and say that's just the way things are — which is the opposite of true science.

Conservatives who can't connect the dots

A few months ago while the electioneering was in full-throated roar, a "conservative" writer lamented that liberal voters seem unable to connect the dots.  He quoted a low-info voter who expressed unconcern about a property tax hike because, said the voter, "I rent an apartment, so property taxes don't affect me."  How do you connect intelligently with people this thick? And then today, I was listening to talk radio "conservative" Mark Larsen explaining to a caller that he'd have no problem with the Boy Scouts changing their stance on homosexuality to go with the PeeCee flow and start accepting it.  The caller wondered why the institution must change to accommodate the individual rather than the other way around, noting that the Boy Scouts have always required young men to be morally straight. "What is morality?" wondered the blind Mr. Larsen aloud.  After all, Christian denominations have differed over this or that detail.  And whatever would we say to the Metropolitan churches who are openly homosexual?  (Tacit premise in the question: Until you get everything perfect, you're not allowed to say they're wrong.) This is a conservative, low-info talk show host who cannot connect the dots.  Well, actually, Larsen says he's libertarian, but he's still dense on this topic and unable to connect dots, and here's why. Morality of any and every sort is an assertion of authority.  The moment you say "ought" or "ought not," somebody else demands, "Says who?"  Morality requires an anchor.  The Author, the Anchor, is God.  And even though the church admittedly has quibbles a-plenty, we're all together in relaying to you His judgment that sodomy isn't okay. Mr. Larsen, apparently unwilling to consider a reliable message from a capable though fallible messenger, has no anchor.  How else can you even ask such a question as, "What is morality?" And once you pull up the anchor, everything tied to it will drift away.  The current debate over homosexuality didn't spring upon America like a bolt from the sky.  It started way back when Americans grew discontented with the God who insists we should keep our word.  Not long thence, easy-breezy divorce became socially acceptable.  A few years later, pornography began to proliferate.  And then came the sexual revolution with its promiscuity, the shack-ups, the meteoric rise in illegitimacy, the loss of shame as the entertainer class breeds without commitment.
First thing you know, many major cities had whole sections of their towns devoted to sodomy, and before you can adjust to that, they've got us voting on whether homosexuals have a right to marry one another. And at that point, people like Mr. Larsen cannot render a reason as to what could possibly be bad about that. Prediction: Sometime soon our society will be debating polygamy, pedophilia, bestiality, and necrophilia, and those who (for whatever reason) disapprove of such things but who have no anchor will find themselves as tongue-tied as the hapless Mr. Larsen was.  Who's to say what's wrong, after all? Without God as the anchor for morals, you will have no morals.  He made the world where it can't be any other way.  And yes, He did that on purpose.  Morals, like the rights stated in the Declaration of Independence, are derived.  And just as God created us equal and endowed us with rights, so he also created us with the social, civic, and religious obligations we refer to as morality. When you pull up the anchor, you don't just lose your morals.  You'll start losing your rights, too.  Same anchor, same God.  Say good-bye to life, liberty, and the pursuit of happiness.  Godless men cannot comprehend, let alone respect, the Bill of Rights.  They have no clue where such things came from, no idea of what makes them special, and no sense of a higher Authority to whom all earthly authorities must give account.  You can no more have rights without morality than you can have a stream without water.  Both flow from the same spring, the Eternal God. God deliver us from leaders who do not know their Maker, or even that they are made.

Lance and Oprah

The embarrassing spectacle of Lance Armstrong confessing to Oprah has failed to capture the popular imagination. For one thing, Lance is not a sympathetic character.  Americans are not prone to soaring eloquence, so people call him a jerk.  British writer Geoffrey Wheatcroft said of him, "Mr. Armstrong has 'a voice like ice cubes,' as one French journalist puts it, and I have to admit that he reminds me of what Daniel O'Connell said about Sir Robert Peel: He has a smile like moonlight playing on a gravestone." Another thing is that Lance's confession came too late.  And it was lame.  And it was tacky.  But it fits the pattern now so familiar in no-fault America in which a famous person commits a sin, gets caught, lies about it till the lie becomes ridiculous, then finally stages a theatrical confession.  The staging is usually in proportion to the fame and ego of the perpetrator.  Thus, Lance. Scroll through the mental list of publicly groveling miscreants from Lance back through Anthony Weiner, Bill Clinton, South Carolina governor Mark Sanford, gay/doping preacher Ted Haggard, and a host of others. The spiritual man can see what this is all about.  Adam remains banished from Eden.  The occasional rite of public humiliation is just a couple of the exiles passing by the gate and wishing for a way back in.  But the gate is shut.  The cherub with the flaming sword still bars the road to paradise. A final thing about Lance's confession is that we can all see it does no good.  The public, momentarily curious, watches the ritual confessions and is vaguely aware of the hopelessness of it all.  To confess seems required.  A wrong was done.  To admit it is demanded.  We all feel the pressure of the demand.  Some of us help exert it.  At the same time, it's inadequate.  It's watering a dead tree, and all the same to the tree whether it's water or tears.
The Secular Man, two-dimensional being that he is, confesses to himself and to his peers.  Who else is there?  To the carnal mind, what is paradise but the pleasure he felt before his sin was found out?  A degrading confession seems to be how you shake up the Etch-A-Sketch and redraw the picture. The confession has to feature humiliation and suffering.  Part of the suffering involves the rest of us smirking at the poor dumb schmuck locked in the pillory.  But even when we humiliate ourselves as Lance did, the sin remains.  And even if you suffer to the point of death, you’re just dead and guilty.  Whether you’re confessing to Oprah or CNN, it’s still just praying to a god that cannot save. (Isa 45:20) The riddle is solved at the cross.  It is Christ’s humiliation, not ours, and His suffering and death, that brings remission of sins.  It is our confession to Him, not to Oprah nor to a public filled with critics and voyeurs, that brings peace.
In his book "Einstein's Mistakes", the author H. C. Ohanian holds that Einstein delivered 7 proofs for $E=mc^2$ in his life that all were in some way incorrect. This despite the fact that correct proofs had been published and mistakes in his proofs were sometimes pointed out to him. The first proof, for example, contains a circular line of thought in that it falsely assumes special relativity to be compatible with rigid bodies. Reference: stated by V. Icke, prof. theoretical astronomy at Leiden University in his (Dutch) book 'Niks relatief': "Einstein made an error in his calculation of 1905". I found a reference on the internet discussing rigid body motion in special relativity. It quotes from Einstein's 1905 paper: "Let there be given a stationary rigid rod ...". The referenced paper shows that dealing with rigid bodies in special relativity is at least really complicated, so one could argue that a proof for $E=mc^2$ should not use a rigid body. Do you think Ohanian's statement is true or was/is he biased in his opinion?

closed as not constructive by Manishearth May 23 '13 at 21:11

Terence Tao has written on Einstein's derivation here terrytao.wordpress.com/2007/12/28/einsteins-derivation-of-emc2 –  j.c. Nov 24 '10 at 22:12

(alert: rhetorical question) do you think Ohanian was right? And more importantly, can you back up your opinion with a reasoned argument? If you have a specific objection to one of Einstein's derivations of $E = mc^2$, you can certainly ask about that here, but this is not the place to poll the community to see who agrees with such-and-such an opinion. Also, asking whether a certain person was biased or not is not a physics question; it strikes me as more of a history question. –  David Z Nov 27 '10 at 4:14

I'm not sure of the merits of this question, but it does seem that the word "arrogant" is being thrown around a lot in the answers. I think we can all agree that all physicists, as a rule of thumb, are arrogant to some degree (or considered to be so by the general public) if only by virtue of the fact that our resumes contain references to such things as grand unification and theory of everything. So it would be nice if we could stick to debating this question on its merits alone. Just my contribution from the peanut gallery. –  user346 Jan 13 '11 at 18:18

I'm downvoting the question. It tells us there's a book that contains a specific argument, and asks us for our opinions on that argument. But we don't have access to the book, and most of us don't speak Dutch, so we wouldn't be able to read the book if we did have access to it. The question can't be answered unless Gerard tells us what the mysterious argument is. In general, I have not been impressed with the material I've seen from the Ohanian book. Specifically, the discussion of length contraction and W.F.G. Swann is totally bogus. –  Ben Crowell Aug 14 '11 at 21:38

There are some strange inconsistencies in the question. "The first proof" would have to refer to the 1905 paper titled "Does the inertia of a body depend upon its energy content?," but this date would contradict the 1906 date in the title of the Icke book.
–  Ben Crowell Aug 15 '11 at 0:00

6 Answers

I will exaggerate a bit, but in physics, proof in the sense of mathematical proof is irrelevant. Even if all of Einstein's deductions of the formula were wrong, it still turns out that empirical evidence supports $E=mc^2$. Now, without the exaggeration, mathematical deduction is important in physical theories because it shows us how conclusions and principles hang together. This can be important when elaborating further theories. Imagine, for the sake of argument, it turns out that the relativity principle is not correct, yet $E=mc^2$. Since usually we deduce the latter from the former, there is something interesting here: it would mean that $E=mc^2$ is more fundamental than the principle of relativity. Special relativity itself arose from this kind of consideration: the realization that the invariance of the speed of light is more fundamental than the invariance of time, the latter being only approximately true at very low speeds. EDIT: I still wasn't able to read what Ohanian is saying in particular, but it is no secret that Einstein was not a great mathematician. For instance, if it had not been for the help of his friend Marcel Grossmann, Einstein might never have been able to develop the theory of general relativity. From his intuition about the equivalence principle in 1905 to the actual GR in 1915, he had to toil for 10 years with non-Euclidean geometry. In the meantime, he nearly got overtaken by David Hilbert. (See Marek's comment.)

Corrections: Einstein was far from a lousy mathematician, he just wasn't a great one. Also, Hilbert was nowhere close. He was a (rather arrogant) mathematician, and a somewhat mediocre physicist. –  Noldorin Nov 24 '10 at 19:12

@Raskolnikov: My oh my, what is this with all the anti-Einstein and anti-Hilbert sentiment? Almost nothing that has been said here is true. Einstein wasn't overtaken by Hilbert at all. Hilbert just wrote the E-H action, but this was only in 1915 when he was already familiar with all of Einstein's work. Not only that but he himself admitted all the credit to Einstein. –  Marek Nov 26 '10 at 0:35

@Noldorin: Seems to me that you're quite quick to call people arrogant without actually knowing anything about them. What Hilbert said is actually common knowledge and he obviously meant only the fact that many physicists are not really able to handle the mathematics required for modern physics and that they don't care for proofs and formal correctness. Actually, it can also be said that "mathematics is too hard for mathematicians" because many of them don't have the physical intuition :-) In any case, take it easy and don't be so quick to judge others ;-) –  Marek Nov 26 '10 at 0:44

@Noldorin: nobody's telling you what to do. I am just saying that you're a little uptight and it wouldn't hurt if you took things a little easier. It is just my advice and you're certainly free to ignore it; but you can count on me arguing with you again simply because I don't like your judgemental behavior. I don't think one has to be a moderator to point out obvious flaws in your argumentation. –  Marek Nov 26 '10 at 17:43

And just a little note, @Noldorin: when I told Hilbert's statement to my theoretical physics friends some time ago, all of them laughed and agreed. I assume the same situation happened long ago with Hilbert's physicist friends. It's quite a pity you see arrogance here.
But of course, you are free to hate whomever you want, me and Hilbert included :-) –  Marek Nov 26 '10 at 17:46

The argument is summarized on the Wikipedia page "Mass/Energy equivalence", and it goes like this: imagine a body at rest which then emits two equal photons, one to the right and one to the left. In the rest frame, the body is still at rest because the photons have equal and opposite momentum. Now shift to a frame moving to the right. In this frame, the photon moving to the left is blueshifted and carries more momentum, and the photon moving to the right is redshifted and carries less momentum. This means that the object has lost some leftward momentum after the emission. The velocity is the same before and after, because the velocity is the same in the rest frame, so how could the body lose momentum without changing its velocity? It must have lost mass. If you calculate the lost mass, it equals the energy of the photons divided by $c^2$. (This bookkeeping is spelled out to first order after the list of mistakes below.) This argument is obviously correct, essentially rigorous (it requires a precise framework to state rigorously, but there are no imprecise assumptions). The bickering came because Einstein demonstrated this and not anyone else; everyone else thought that the mass/energy relation was $E=\frac{4}{3}mc^2$. Soon after, Poincaré realized what everyone's mistake was. Planck also derived $E=mc^2$, and published after Einstein (but I would bet his work was independent). He just refused to accept that Einstein's argument was correct, and said his own argument was the correct one. This is possibly because of his bitterness at being scooped on such an important result. From this attack came all future error claims. Ohanian's book in general gets everything wrong. Here is a complete list of Einstein's mistakes (I put an expanded version of this on Wikipedia years ago, but it slowly got reworded, watered down, and moved. That gradual process, of course, was the work of Satan):

Einstein's mistakes

• 1905: In the original German version of the special relativity paper, Einstein gives the transverse mass as $ m/(1 - v^2/c^2)$, while the actual value is $ m/\sqrt{1 - v^2/c^2}$ (Max Planck corrected this).

• 1905: In his PhD dissertation, the friction in dilute solutions has a miscalculated numerical prefactor, which makes the estimate of Avogadro's number off by a factor of 3. The mistake is corrected by Einstein in a later publication.

• 1905: An expository paper explaining how airplanes fly includes an example which is incorrect. There is a wing which he claims will generate lift. This wing is flat on the bottom, and flat on the top, with a small bump at the center. It is designed to generate lift by Bernoulli's principle, and Einstein claims that it will. Simple action-reaction considerations, though, show that the wing will not generate lift, at least if it is long enough.

• 1913: Einstein started writing papers based on his belief that the hole argument made general covariance impossible in a theory of gravity. Einstein realized he was wrong in 1915, and found General Relativity.

• 1922: Einstein published a qualitative theory of superconductivity based on the vague idea of electrons quantum-mechanically shared in orbits. This paper predated modern quantum mechanics, and is well understood to be completely wrong. Einstein's paper is more of an old-quantum-mechanical version of the modern explanation of ordinary conductivity.
• 1937: Einstein believed that the focusing properties of geodesics in general relativity would lead to an instability which causes plane gravitational waves to collapse in on themselves. While this is true to a certain extent in some limits, because gravitational instabilities can lead to a concentration of energy density into black holes, for plane waves of the type Einstein and Rosen considered in their paper, the instabilities are under control. Einstein retracted this position a short time later, but his collaborator Nathan Rosen maintained that gravitational waves are unstable until his death.

• 1939: Einstein denied several times that black holes could form, the last time in print. He published a paper that argues that a star collapsing would spin faster and faster, spinning at the speed of light with infinite energy well before the point where it is about to collapse into a black hole. This paper received no citations, and the conclusions are well understood to be wrong. Einstein's argument itself is inconclusive, since he only shows that stable spinning objects have to spin faster and faster to stay stable before the point where they collapse. But it is well understood today (and was understood well by some even then) that collapse cannot happen through stationary states the way Einstein imagined.

There are other mistakes that are not mistakes, but philosophical things:

• In the Bohr-Einstein debates and the papers following this, Einstein tries to poke holes in the uncertainty principle, ingeniously, but unsuccessfully.

• In the EPR paper, Einstein concludes that quantum mechanics must be replaced by local hidden variables. The measured violations of Bell's inequality show that hidden variables, if they exist, must be nonlocal.

Einstein considered the cosmological constant a mistake, but the cosmological constant is necessary within general relativity as it is currently understood, and it is widely believed to have a nonzero value today. He had lapses in taste too, usually quickly corrected:

• Einstein briefly flirted with transverse and longitudinal mass concepts, before rejecting them.

• Einstein initially opposed Minkowski's geometrical formulation of special relativity, changing his mind completely a few years later.

• Based on his cosmological model, Einstein rejected expanding universe solutions by Friedmann and Lemaître as unphysical, changing his mind when the universe was shown to be expanding a few years later.

• Finding it too formal, Einstein believed that Heisenberg's matrix mechanics was incorrect. He changed his mind when Schrödinger and others demonstrated that the formulation in terms of the Schrödinger equation, based on Einstein's wave-particle duality, was equivalent to Heisenberg's matrices.

• Einstein rejected work on black holes by Chandrasekhar, Oppenheimer, and others, believing, along with Eddington, that collapse past the horizon (then called the 'Schwarzschild singularity') would never happen. So big was his influence that this opinion was not rejected until the early 1960s, almost a decade after his death.

• Einstein believed that some sort of nonlinear instability could lead to a field theory whose solutions would collapse into pointlike objects which would behave like quantum particles. This is impossible by Bell's inequality.

It is sometimes claimed that the general line of Einstein's reasoning in the 1905 relativity paper is flawed, or the photon paper, or one or another of the most famous papers. Those claims are all ridiculous.
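To spell out the two-photon bookkeeping from the top of this answer (this is my own expansion to first order in $v/c$, added for concreteness; it is not a quotation from Einstein's paper): in the rest frame the body emits two photons of energy $E/2$ each, one to the left and one to the right, and stays at rest. In a frame moving to the right with small speed $v$, the Doppler shift changes the photon energies to $\frac{E}{2}(1 \pm v/c)$ to first order, so the net photon momentum is

$$p_{\gamma} = -\frac{E}{2c}\left(1+\frac{v}{c}\right) + \frac{E}{2c}\left(1-\frac{v}{c}\right) = -\frac{Ev}{c^2}.$$

In this frame the body moves to the left with the same speed $v$ before and after the emission, so momentum conservation gives

$$-mv = -m'v - \frac{Ev}{c^2} \qquad\Rightarrow\qquad m - m' = \frac{E}{c^2},$$

i.e. emitting energy $E$ costs the body a mass $E/c^2$, independently of the (arbitrarily small) velocity $v$ used to probe it.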
Just a small point about wings. Pilots learn to think in terms of the momentum of the downdraft created by a wing. Bernoulli is only the reason why the air over the top of the wing is sucked downward. –  Mike Dunlavey Sep 13 '11 at 13:42
@Mike: yes, of course, this is why Einstein's example is not very good. It is remarkable that he wasn't thinking this way in 1905. –  Ron Maimon Nov 28 '11 at 8:44
Einstein's proof did not rely on having a rigid body. It relied only on having a body with mass (obviously). To be more clear:
• The paper only says body.
• It does not rely on any rigid-body property (such as the size).
• It does not rely on any relativistic speed or condition on the body.
The proof merely involves how energy is measured by an observer stationary with the body and one stationary with the emitted waves. Let there be a stationary body in the system $(x, y, z)$, and let its energy--referred to the system $(x, y, z)$--be $E_0$. Let the energy of the body relative to the system $(\xi,\eta,\zeta)$, moving as above with the velocity $v$, be $H_0$. In fact the paper is really easy and clear... :-)
nonsense, the paper is not really easy and clear, hence the controversy. –  Physiks lover Nov 14 '12 at 0:25
Oh, you think it's really easy and clear? I suggest you have a proper read of it! –  Larry Harson Jan 28 '13 at 16:20
In special relativity, it's only accelerating bodies which are not allowed to be rigid. Non-accelerating bodies don't have any forces on them, so there is no obstacle to their retaining the same shape. I haven't checked, but I believe Einstein's intuitive derivation of relativity didn't involve any accelerating bodies.
His proof involves a photon that hits a rigid body. –  Gerard Nov 25 '10 at 18:18
... which then accelerates it an infinitesimal amount, thus making it infinitesimally non-rigid? If you stick in some $\epsilon$'s and $\delta$'s, this should even be a good enough proof for mathematicians. Of course, he didn't, so I guess it's not quite a rigorous proof, but by physics standards it definitely passes. –  Peter Shor Nov 25 '10 at 19:46
See my edit for a reference from a professor in theoretical astronomy and strong advocate of A.E. Can you prove him wrong? –  Gerard Nov 26 '10 at 8:15
@Gerard: you should restate the argument of Vincent Icke rather than arrogantly pointing @Peter to a book written in Dutch, expecting that @Peter learns Dutch (if he doesn't already speak it), reads the book, finds the place where you think the argument is stated, and refutes it. He already gave you a quite explicit explanation of why the rigid-body problem is not so difficult. I don't think your bad usage of the authority argument has a place here, and I'm definitely sure it has no place against Peter. –  Frédéric Grosshans Dec 3 '10 at 13:49
@Gerard: "His proof involves a photon that hits a rigid body." You (and Icke) seem to be referring to a different argument, not the 1905 one. See my comments on your question. –  Ben Crowell Aug 15 '11 at 1:09
The answers do not address what Ohanian said. His paper is a free download. As far as I know, Ohanian has not been refuted.
His paper is so trivial to refute, it is hardly worth the bother: he is saying that Einstein is wrong to use the nonrelativistic expression for energy in his argument, but the velocity of the object that Einstein is considering is entirely due to shifting reference frame, and the velocity of the shift can be infinitesimally small. In addition, while Einstein might have asked "what is the kinetic energy of the body" to determine the loss of mass, the system also loses mass when you ask "what is the linear momentum of the body" (as explained in my answer). The only relativistic thing is the light. –  Ron Maimon Aug 16 '11 at 4:19
I remember reading Einstein's original paper on this, and it seemed to be argued pretty clearly. I believe he considers a scenario involving the emission and absorption of photons, and uses the length/time dilation factors to get an expression for energy, which he then takes to the classical (Newtonian) limit and equates with $\frac{1}{2}mv^2$ to show the relation.
The proof in the original paper assumes the existence of rigid bodies, so that proof cannot suffice! –  Gerard Nov 24 '10 at 22:57
It's easily extended to any sort of body though. –  Noldorin Nov 25 '10 at 0:33
A circular line of thought is quite a serious error in a proof. It can be fixed, but still ... –  Gerard Nov 25 '10 at 15:25
You're accusing Einstein of circular thought? Wow, you have nerve. –  Noldorin Nov 25 '10 at 17:46
@Gerard: I've gone through the argument and haven't found any reference to a rigid body. Several other people here have also failed to find any reference to a rigid body. Nobody has posted a specific description of where in the paper any such assumption is made. If you detect such a hidden assumption, please tell us where you think it is. –  Ben Crowell Aug 14 '11 at 21:31
Statistical Mechanics/The Foundations
The goal of statistical mechanics is to bridge the gap that exists between the microscopic and the macroscopic world. Take for example the equation governing the behaviour of microscopic particles: the quantum mechanical Schrödinger equation. Based on this equation, a quantum physicist will tell you everything you want to know about a particle by itself. If you ask, he will even be able to give you a solution for the wave function of a system of two particles. However, once you add a third or more, things start to get more complicated. There is no longer an analytic solution, and one must turn to computers to solve the problem numerically - and the results the computer will spit out are quite accurate for systems of 3, 4 or more particles. Even when considering classical Hamiltonian mechanics, there is no analytical solution for many-body problems such as the motion of the planets, although we can simulate them accurately numerically. But what happens when the number of particles you have is much larger - not just ten or twenty, or even a thousand - what happens when you have a cup of water for example, with ~10^25 particles? Each of the 10^25 particles interacts with every single one of the other 10^25 particles - that's a total number of interactions of the order of 10^50 that would have to be computed at every instant! Even a computer that could perform a trillion calculations per second would take 10^20 times longer than the age of the universe to compute the exact state of your cup of water for a single instant in time. Clearly such a computation is not possible. It is therefore not possible, in practice, to solve the equations governing macroscopic systems. Statistical mechanics provides the tools required to take the information given by quantum physics and use it to describe macroscopic systems and predict how they will evolve in time. By far the most important of these tools is probability theory. Instead of saying that a physical system is in exactly one or the other configuration, we will talk about the probability of it being in a certain configuration. For example, in a room filled with gas, it is far more probable that the gas is spread evenly rather than being bunched up in one corner. This may seem to be nothing more than common sense, but it has profound implications, especially when the probabilities involved are studied quantitatively. Through the use of this and other tools, statistical mechanics enables physicists to gain fundamental insight into the workings of the macroscopic world.
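The arithmetic behind that estimate is easy to reproduce. The short sketch below simply counts the distinct pairs among 10^25 particles and converts the count into compute time; the flop rate (one trillion operations per second, as in the text) and the assumed age of the universe (about 4.3 x 10^17 s) are the only inputs, and the snippet is an added illustration, not part of the original page.

```python
# Rough back-of-the-envelope check of the pairwise-interaction count quoted above.
# The age of the universe used here (~4.3e17 s) is an assumed round value.

N = 1e25                      # particles in a cup of water (order of magnitude)
pairs = N * (N - 1) / 2       # distinct interacting pairs, about 5e49, i.e. of order 10^50

flops_per_second = 1e12       # a computer doing a trillion calculations per second
seconds_needed = pairs / flops_per_second

age_of_universe_s = 4.3e17    # ~13.8 billion years in seconds (assumption)
print(f"pairs of particles      : {pairs:.1e}")
print(f"time for one snapshot   : {seconds_needed:.1e} s")
print(f"in ages of the universe : {seconds_needed / age_of_universe_s:.1e}")
# The last number comes out around 1e20, matching the factor quoted in the text.
```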
quantum mechanics quantum mechanics, science dealing with the behaviour of matter and light on the atomic and subatomic scale. It attempts to describe and account for the properties of molecules and atoms and their constituents—electrons, protons, neutrons, and other more esoteric particles such as quarks and gluons. These properties include the interactions of the particles with one another and with electromagnetic radiation (i.e., light, X-rays, and gamma rays). The behaviour of matter and radiation on the atomic scale often seems peculiar, and the consequences of quantum theory are accordingly difficult to understand and to believe. Its concepts frequently conflict with common-sense notions derived from observations of the everyday world. There is no reason, however, why the behaviour of the atomic world should conform to that of the familiar, large-scale world. It is important to realize that quantum mechanics is a branch of physics and that the business of physics is to describe and account for the way the world—on both the large and the small scale—actually is and not how one imagines it or would like it to be. The study of quantum mechanics is rewarding for several reasons. First, it illustrates the essential methodology of physics. Second, it has been enormously successful in giving correct results in practically every situation to which it has been applied. There is, however, an intriguing paradox. In spite of the overwhelming practical success of quantum mechanics, the foundations of the subject contain unresolved problems—in particular, problems concerning the nature of measurement. An essential feature of quantum mechanics is that it is generally impossible, even in principle, to measure a system without disturbing it; the detailed nature of this disturbance and the exact point at which it occurs are obscure and controversial. Thus, quantum mechanics attracted some of the ablest scientists of the 20th century, and they erected what is perhaps the finest intellectual edifice of the period. Historical basis of quantum theory Basic considerations At a fundamental level, both radiation and matter have characteristics of particles and waves. The gradual recognition by scientists that radiation has particle-like properties and that matter has wavelike properties provided the impetus for the development of quantum mechanics. Influenced by Newton, most physicists of the 18th century believed that light consisted of particles, which they called corpuscles. From about 1800, evidence began to accumulate for a wave theory of light. At about this time Thomas Young showed that, if monochromatic light passes through a pair of slits, the two emerging beams interfere, so that a fringe pattern of alternately bright and dark bands appears on a screen. The bands are readily explained by a wave theory of light. According to the theory, a bright band is produced when the crests (and troughs) of the waves from the two slits arrive together at the screen; a dark band is produced when the crest of one wave arrives at the same time as the trough of the other, and the effects of the two light beams cancel. Beginning in 1815, a series of experiments by Augustin-Jean Fresnel of France and others showed that, when a parallel beam of light passes through a single slit, the emerging beam is no longer parallel but starts to diverge; this phenomenon is known as diffraction. 
Given the wavelength of the light and the geometry of the apparatus (i.e., the separation and widths of the slits and the distance from the slits to the screen), one can use the wave theory to calculate the expected pattern in each case; the theory agrees precisely with the experimental data. Early developments Planck’s radiation law By the end of the 19th century, physicists almost universally accepted the wave theory of light. However, though the ideas of classical physics explain interference and diffraction phenomena relating to the propagation of light, they do not account for the absorption and emission of light. All bodies radiate electromagnetic energy as heat; in fact, a body emits radiation at all wavelengths. The energy radiated at different wavelengths is a maximum at a wavelength that depends on the temperature of the body; the hotter the body, the shorter the wavelength for maximum radiation. Attempts to calculate the energy distribution for the radiation from a blackbody using classical ideas were unsuccessful. (A blackbody is a hypothetical ideal body or surface that absorbs and reemits all radiant energy falling on it.) One formula, proposed by Wilhelm Wien of Germany, did not agree with observations at long wavelengths, and another, proposed by Lord Rayleigh (John William Strutt) of England, disagreed with those at short wavelengths. In 1900 the German theoretical physicist Max Planck made a bold suggestion. He assumed that the radiation energy is emitted, not continuously, but rather in discrete packets called quanta. The energy E of the quantum is related to the frequency ν by E = hν. The quantity h, now known as Planck’s constant, is a universal constant with the approximate value of 6.62607 × 10−34 joule∙second. Planck showed that the calculated energy spectrum then agreed with observation over the entire wavelength range. Einstein and the photoelectric effect In 1905 Einstein extended Planck’s hypothesis to explain the photoelectric effect, which is the emission of electrons by a metal surface when it is irradiated by light or more-energetic photons. The kinetic energy of the emitted electrons depends on the frequency ν of the radiation, not on its intensity; for a given metal, there is a threshold frequency ν0 below which no electrons are emitted. Furthermore, emission takes place as soon as the light shines on the surface; there is no detectable delay. Einstein showed that these results can be explained by two assumptions: (1) that light is composed of corpuscles or photons, the energy of which is given by Planck’s relationship, and (2) that an atom in the metal can absorb either a whole photon or nothing. Part of the energy of the absorbed photon frees an electron, which requires a fixed energy W, known as the work function of the metal; the rest is converted into the kinetic energy meu2/2 of the emitted electron (me is the mass of the electron and u is its velocity). Thus, the energy relation is If ν is less than ν0, where hν0 = W, no electrons are emitted. Not all the experimental results mentioned above were known in 1905, but all Einstein’s predictions have been verified since. Bohr’s theory of the atom A major contribution to the subject was made by Niels Bohr of Denmark, who applied the quantum hypothesis to atomic spectra in 1913. The spectra of light emitted by gaseous atoms had been studied extensively since the mid-19th century. It was found that radiation from gaseous atoms at low pressure consists of a set of discrete wavelengths. 
This is quite unlike the radiation from a solid, which is distributed over a continuous range of wavelengths. The set of discrete wavelengths from gaseous atoms is known as a line spectrum, because the radiation (light) emitted consists of a series of sharp lines. The wavelengths of the lines are characteristic of the element and may form extremely complex patterns. The simplest spectra are those of atomic hydrogen and the alkali atoms (e.g., lithium, sodium, and potassium). For hydrogen, the wavelengths λ are given by the empirical formula where m and n are positive integers with n > m and R, known as the Rydberg constant, has the value 1.097373157 × 107 per metre. For a given value of m, the lines for varying n form a series. The lines for m = 1, the Lyman series, lie in the ultraviolet part of the spectrum; those for m = 2, the Balmer series, lie in the visible spectrum; and those for m = 3, the Paschen series, lie in the infrared. Bohr started with a model suggested by the New Zealand-born British physicist Ernest Rutherford. The model was based on the experiments of Hans Geiger and Ernest Marsden, who in 1909 bombarded gold atoms with massive, fast-moving alpha particles; when some of these particles were deflected backward, Rutherford concluded that the atom has a massive, charged nucleus. In Rutherford’s model, the atom resembles a miniature solar system with the nucleus acting as the Sun and the electrons as the circulating planets. Bohr made three assumptions. First, he postulated that, in contrast to classical mechanics, where an infinite number of orbits is possible, an electron can be in only one of a discrete set of orbits, which he termed stationary states. Second, he postulated that the only orbits allowed are those for which the angular momentum of the electron is a whole number n times ℏ (ℏ = h/2π). Third, Bohr assumed that Newton’s laws of motion, so successful in calculating the paths of the planets around the Sun, also applied to electrons orbiting the nucleus. The force on the electron (the analogue of the gravitational force between the Sun and a planet) is the electrostatic attraction between the positively charged nucleus and the negatively charged electron. With these simple assumptions, he showed that the energy of the orbit has the form where E0 is a constant that may be expressed by a combination of the known constants e, me, and ℏ. While in a stationary state, the atom does not give off energy as light; however, when an electron makes a transition from a state with energy En to one with lower energy Em, a quantum of energy is radiated with frequency ν, given by the equation Inserting the expression for En into this equation and using the relation λν = c, where c is the speed of light, Bohr derived the formula for the wavelengths of the lines in the hydrogen spectrum, with the correct value of the Rydberg constant. Bohr’s theory was a brilliant step forward. Its two most important features have survived in present-day quantum mechanics. They are (1) the existence of stationary, nonradiating states and (2) the relationship of radiation frequency to the energy difference between the initial and final states in a transition. Prior to Bohr, physicists had thought that the radiation frequency would be the same as the electron’s frequency of rotation in an orbit. Scattering of X-rays Soon scientists were faced with the fact that another form of radiation, X-rays, also exhibits both wave and particle properties. 
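Before following the article into X-rays, it is worth checking Bohr's result numerically. The sketch below evaluates the empirical hydrogen formula referred to above, 1/λ = R(1/m² − 1/n²), for the visible Balmer series (m = 2). The value of R is the one given in the text; the choice of series and the code itself are added for illustration and are not part of the original article.

```python
# Hydrogen line wavelengths from the empirical Rydberg formula:
#   1/lambda = R * (1/m^2 - 1/n^2),  with n > m.

R = 1.097373157e7  # Rydberg constant, per metre (value quoted in the text)

def wavelength_nm(m: int, n: int) -> float:
    """Wavelength in nanometres of the hydrogen line for the transition n -> m."""
    inv_lambda = R * (1.0 / m**2 - 1.0 / n**2)
    return 1e9 / inv_lambda

# Balmer series (m = 2): the visible lines of hydrogen.
for n in range(3, 7):
    print(f"n = {n} -> m = 2 : {wavelength_nm(2, n):.1f} nm")
# Expected output: roughly 656, 486, 434 and 410 nm.
```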
Max von Laue of Germany had shown in 1912 that crystals can be used as three-dimensional diffraction gratings for X-rays; his technique constituted the fundamental evidence for the wavelike nature of X-rays. The atoms of a crystal, which are arranged in a regular lattice, scatter the X-rays. For certain directions of scattering, all the crests of the X-rays coincide. (The scattered X-rays are said to be in phase and to give constructive interference.) For these directions, the scattered X-ray beam is very intense. Clearly, this phenomenon demonstrates wave behaviour. In fact, given the interatomic distances in the crystal and the directions of constructive interference, the wavelength of the waves can be calculated. In 1922 the American physicist Arthur Holly Compton showed that X-rays scatter from electrons as if they are particles. Compton performed a series of experiments on the scattering of monochromatic, high-energy X-rays by graphite. He found that part of the scattered radiation had the same wavelength λ0 as the incident X-rays but that there was an additional component with a longer wavelength λ. To interpret his results, Compton regarded the X-ray photon as a particle that collides and bounces off an electron in the graphite target as though the photon and the electron were a pair of (dissimilar) billiard balls. Application of the laws of conservation of energy and momentum to the collision leads to a specific relation between the amount of energy transferred to the electron and the angle of scattering. For X-rays scattered through an angle θ, the wavelengths λ and λ0 are related by the equation The experimental correctness of Compton’s formula is direct evidence for the corpuscular behaviour of radiation. Broglie’s wave hypothesis Faced with evidence that electromagnetic radiation has both particle and wave characteristics, Louis-Victor de Broglie of France suggested a great unifying hypothesis in 1924. Broglie proposed that matter has wave as well as particle properties. He suggested that material particles can behave as waves and that their wavelength λ is related to the linear momentum p of the particle by λ = h/p. In 1927 Clinton Davisson and Lester Germer of the United States confirmed Broglie’s hypothesis for electrons. Using a crystal of nickel, they diffracted a beam of monoenergetic electrons and showed that the wavelength of the waves is related to the momentum of the electrons by the Broglie equation. Since Davisson and Germer’s investigation, similar experiments have been performed with atoms, molecules, neutrons, protons, and many other particles. All behave like waves with the same wavelength-momentum relationship. Basic concepts and methods Bohr’s theory, which assumed that electrons moved in circular orbits, was extended by the German physicist Arnold Sommerfeld and others to include elliptic orbits and other refinements. Attempts were made to apply the theory to more complicated systems than the hydrogen atom. However, the ad hoc mixture of classical and quantum ideas made the theory and calculations increasingly unsatisfactory. Then, in the 12 months started in July 1925, a period of creativity without parallel in the history of physics, there appeared a series of papers by German scientists that set the subject on a firm conceptual foundation. The papers took two approaches: (1) matrix mechanics, proposed by Werner Heisenberg, Max Born, and Pascual Jordan, and (2) wave mechanics, put forward by Erwin Schrödinger. 
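Both relations just discussed are easy to evaluate numerically. The sketch below computes the Compton shift (h/mₑc)(1 − cos θ) at an assumed scattering angle of 90 degrees, and the de Broglie wavelength λ = h/p of an electron with an assumed kinetic energy of 54 eV (roughly the energy range used by Davisson and Germer). The constants are standard values; the specific numbers chosen and the code are added for illustration only.

```python
import math

h   = 6.62607e-34      # Planck's constant, J s (value given earlier in the text)
m_e = 9.109e-31        # electron mass, kg
c   = 2.998e8          # speed of light, m/s
eV  = 1.602e-19        # joules per electron volt

# de Broglie wavelength lambda = h / p of an electron with kinetic energy 54 eV (assumed).
E_kin = 54 * eV
p = math.sqrt(2 * m_e * E_kin)            # nonrelativistic momentum
print(f"de Broglie wavelength : {h / p * 1e9:.3f} nm")   # about 0.17 nm, atomic scale

# Compton shift  delta_lambda = (h / m_e c) * (1 - cos(theta))  at theta = 90 degrees (assumed).
theta = math.radians(90)
delta_lambda = (h / (m_e * c)) * (1 - math.cos(theta))
print(f"Compton shift at 90 deg: {delta_lambda * 1e12:.2f} pm")  # about 2.4 pm
```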
The protagonists were not always polite to each other. Heisenberg found the physical ideas of Schrödinger’s theory “disgusting,” and Schrödinger was “discouraged and repelled” by the lack of visualization in Heisenberg’s method. However, Schrödinger, not allowing his emotions to interfere with his scientific endeavours, showed that, in spite of apparent dissimilarities, the two theories are equivalent mathematically. The present discussion follows Schrödinger’s wave mechanics because it is less abstract and easier to understand than Heisenberg’s matrix mechanics.
Schrödinger’s wave mechanics
Schrödinger expressed Broglie’s hypothesis concerning the wave behaviour of matter in a mathematical form that is adaptable to a variety of physical problems without additional arbitrary assumptions. He was guided by a mathematical formulation of optics, in which the straight-line propagation of light rays can be derived from wave motion when the wavelength is small compared to the dimensions of the apparatus employed. In the same way, Schrödinger set out to find a wave equation for matter that would give particle-like propagation when the wavelength becomes comparatively small. According to classical mechanics, if a particle of mass $m_e$ is subjected to a force such that its potential energy is $V(x,y,z)$ at position x, y, z, then the sum of $V(x,y,z)$ and the kinetic energy $p^2/2m_e$ is equal to a constant, the total energy E of the particle. Thus,
$\frac{p^2}{2m_e} + V(x,y,z) = E.$
It is assumed that the particle is bound—i.e., confined by the potential to a certain region in space because its energy E is insufficient for it to escape. Since the potential varies with position, two other quantities do also: the momentum and, hence, by extension from the Broglie relation, the wavelength of the wave. Postulating a wave function $\Psi(x,y,z)$ that varies with position, Schrödinger replaced p in the above energy equation with a differential operator that embodied the Broglie relation. He then showed that Ψ satisfies the partial differential equation
$-\frac{\hbar^2}{2m_e}\left(\frac{\partial^2\Psi}{\partial x^2}+\frac{\partial^2\Psi}{\partial y^2}+\frac{\partial^2\Psi}{\partial z^2}\right)+V(x,y,z)\,\Psi = E\,\Psi.$
This is the (time-independent) Schrödinger wave equation, which established quantum mechanics in a widely applicable form. An important advantage of Schrödinger’s theory is that no further arbitrary quantum conditions need be postulated. The required quantum results follow from certain reasonable restrictions placed on the wave function—for example, that it should not become infinitely large at large distances from the centre of the potential. Schrödinger applied his equation to the hydrogen atom, for which the potential function, given by classical electrostatics, is proportional to $-e^2/r$, where −e is the charge on the electron. The nucleus (a proton of charge e) is situated at the origin, and r is the distance from the origin to the position of the electron. Schrödinger solved the equation for this particular potential with straightforward, though not elementary, mathematics. Only certain discrete values of E lead to acceptable functions Ψ. These functions are characterized by a trio of integers n, l, m, termed quantum numbers. The values of E depend only on the integers n (1, 2, 3, etc.) and are identical with those given by the Bohr theory. The quantum numbers l and m are related to the angular momentum of the electron; $\sqrt{l(l+1)}\,\hbar$ is the magnitude of the angular momentum, and mℏ is its component along some physical direction. The square of the wave function, Ψ2, has a physical interpretation.
Schrödinger originally supposed that the electron was spread out in space and that its density at point x, y, z was given by the value of Ψ2 at that point. Almost immediately Born proposed what is now the accepted interpretation—namely, that Ψ2 gives the probability of finding the electron at xyz. The distinction between the two interpretations is important. If Ψ2 is small at a particular position, the original interpretation implies that a small fraction of an electron will always be detected there. In Born’s interpretation, nothing will be detected there most of the time, but, when something is observed, it will be a whole electron. Thus, the concept of the electron as a point particle moving in a well-defined path around the nucleus is replaced in wave mechanics by clouds that describe the probable locations of electrons in different states. Electron spin and antiparticles In 1928 the English physicist Paul A.M. Dirac produced a wave equation for the electron that combined relativity with quantum mechanics. Schrödinger’s wave equation does not satisfy the requirements of the special theory of relativity because it is based on a nonrelativistic expression for the kinetic energy (p2/2me). Dirac showed that an electron has an additional quantum number ms. Unlike the first three quantum numbers, ms is not a whole integer and can have only the values +1/2 and −1/2. It corresponds to an additional form of angular momentum ascribed to a spinning motion. (The angular momentum mentioned above is due to the orbital motion of the electron, not its spin.) The concept of spin angular momentum was introduced in 1925 by Samuel A. Goudsmit and George E. Uhlenbeck, two graduate students at the University of Leiden, Neth., to explain the magnetic moment measurements made by Otto Stern and Walther Gerlach of Germany several years earlier. The magnetic moment of a particle is closely related to its angular momentum; if the angular momentum is zero, so is the magnetic moment. Yet Stern and Gerlach had observed a magnetic moment for electrons in silver atoms, which were known to have zero orbital angular momentum. Goudsmit and Uhlenbeck proposed that the observed magnetic moment was attributable to spin angular momentum. The electron-spin hypothesis not only provided an explanation for the observed magnetic moment but also accounted for many other effects in atomic spectroscopy, including changes in spectral lines in the presence of a magnetic field (Zeeman effect), doublet lines in alkali spectra, and fine structure (close doublets and triplets) in the hydrogen spectrum. The Dirac equation also predicted additional states of the electron that had not yet been observed. Experimental confirmation was provided in 1932 by the discovery of the positron by the American physicist Carl David Anderson. Every particle described by the Dirac equation has to have a corresponding antiparticle, which differs only in charge. The positron is just such an antiparticle of the negatively charged electron, having the same mass as the latter but a positive charge. Identical particles and multielectron atoms Because electrons are identical to (i.e., indistinguishable from) each other, the wave function of an atom with more than one electron must satisfy special conditions. The problem of identical particles does not arise in classical physics, where the objects are large-scale and can always be distinguished, at least in principle. 
There is no way, however, to differentiate two electrons in the same atom, and the form of the wave function must reflect this fact. The overall wave function Ψ of a system of identical particles depends on the coordinates of all the particles. If the coordinates of two of the particles are interchanged, the wave function must remain unaltered or, at most, undergo a change of sign; the change of sign is permitted because it is Ψ2 that occurs in the physical interpretation of the wave function. If the sign of Ψ remains unchanged, the wave function is said to be symmetric with respect to interchange; if the sign changes, the function is antisymmetric. The symmetry of the wave function for identical particles is closely related to the spin of the particles. In quantum field theory (see below Quantum electrodynamics), it can be shown that particles with half-integral spin (1/2, 3/2, etc.) have antisymmetric wave functions. They are called fermions after the Italian-born physicist Enrico Fermi. Examples of fermions are electrons, protons, and neutrons, all of which have spin 1/2. Particles with zero or integral spin (e.g., mesons, photons) have symmetric wave functions and are called bosons after the Indian mathematician and physicist Satyendra Nath Bose, who first applied the ideas of symmetry to photons in 1924–25. The requirement of antisymmetric wave functions for fermions leads to a fundamental result, known as the exclusion principle, first proposed in 1925 by the Austrian physicist Wolfgang Pauli. The exclusion principle states that two fermions in the same system cannot be in the same quantum state. If they were, interchanging the two sets of coordinates would not change the wave function at all, which contradicts the result that the wave function must change sign. Thus, two electrons in the same atom cannot have an identical set of values for the four quantum numbers n, l, m, ms. The exclusion principle forms the basis of many properties of matter, including the periodic classification of the elements, the nature of chemical bonds, and the behaviour of electrons in solids; the last determines in turn whether a solid is a metal, an insulator, or a semiconductor (see atom; matter). The Schrödinger equation cannot be solved precisely for atoms with more than one electron. The principles of the calculation are well understood, but the problems are complicated by the number of particles and the variety of forces involved. The forces include the electrostatic forces between the nucleus and the electrons and between the electrons themselves, as well as weaker magnetic forces arising from the spin and orbital motions of the electrons. Despite these difficulties, approximation methods introduced by the English physicist Douglas R. Hartree, the Russian physicist Vladimir Fock, and others in the 1920s and 1930s have achieved considerable success. Such schemes start by assuming that each electron moves independently in an average electric field because of the nucleus and the other electrons; i.e., correlations between the positions of the electrons are ignored. Each electron has its own wave function, called an orbital. The overall wave function for all the electrons in the atom satisfies the exclusion principle. Corrections to the calculated energies are then made, which depend on the strengths of the electron-electron correlations and the magnetic forces. 
Time-dependent Schrödinger equation
At the same time that Schrödinger proposed his time-independent equation to describe the stationary states, he also proposed a time-dependent equation to describe how a system changes from one state to another. By replacing the energy E in Schrödinger’s equation with a time-derivative operator, he generalized his wave equation to determine the time variation of the wave function as well as its spatial variation. The time-dependent Schrödinger equation reads
$i\hbar\,\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m_e}\left(\frac{\partial^2\Psi}{\partial x^2}+\frac{\partial^2\Psi}{\partial y^2}+\frac{\partial^2\Psi}{\partial z^2}\right)+V(x,y,z)\,\Psi.$
The quantity i is the square root of −1. The function Ψ varies with time t as well as with position x, y, z. For a system with constant energy, E, Ψ has the form
$\Psi(x,y,z,t)=\psi(x,y,z)\exp(-iEt/\hbar),$
where exp stands for the exponential function, and the time-dependent Schrödinger equation reduces to the time-independent form. The probability of a transition between one atomic stationary state and some other state can be calculated with the aid of the time-dependent Schrödinger equation. For example, an atom may change spontaneously from one state to another state with less energy, emitting the difference in energy as a photon with a frequency given by the Bohr relation. If electromagnetic radiation is applied to a set of atoms and if the frequency of the radiation matches the energy difference between two stationary states, transitions can be stimulated. In a stimulated transition, the energy of the atom may increase—i.e., the atom may absorb a photon from the radiation—or the energy of the atom may decrease, with the emission of a photon, which adds to the energy of the radiation. Such stimulated emission processes form the basic mechanism for the operation of lasers. The probability of a transition from one state to another depends on the values of the l, m, ms quantum numbers of the initial and final states. For most values, the transition probability is effectively zero. However, for certain changes in the quantum numbers, summarized as selection rules, there is a finite probability. For example, according to one important selection rule, the l value changes by unity because photons have a spin of 1. The selection rules for radiation relate to the angular momentum properties of the stationary states. The absorbed or emitted photon has its own angular momentum, and the selection rules reflect the conservation of angular momentum between the atoms and the radiation. The phenomenon of tunneling, which has no counterpart in classical physics, is an important consequence of quantum mechanics. Consider a particle with energy E in the inner region of a one-dimensional potential well V(x), as shown in Figure 1 (the phenomenon of tunneling: classically, a particle is bound in the central region C if its energy E is less than V0, but in quantum theory the particle may tunnel through the potential barrier and escape). A potential well is a potential that has a lower value in a certain region of space than in the neighbouring regions. In classical mechanics, if E < V0 (the maximum height of the potential barrier), the particle remains in the well forever; if E > V0, the particle escapes. In quantum mechanics, the situation is not so simple. The particle can escape even if its energy E is below the height of the barrier V0, although the probability of escape is small unless E is close to V0. In that case, the particle may tunnel through the potential barrier and emerge with the same energy E. The phenomenon of tunneling has many important applications.
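Before turning to the alpha-decay example discussed next, here is a minimal numerical sketch of how sensitively the escape probability depends on E. It uses a toy rectangular barrier with the crude estimate T ≈ exp(−2κL), κ = √(2m(V0 − E))/ℏ; the barrier height and width, the choice of an electron mass, and the use of this simplified formula are all assumptions made for the illustration (the actual alpha-decay potential is not rectangular).

```python
import math

hbar = 1.0546e-34      # J s
m    = 9.109e-31       # electron mass, kg (toy choice for the example)
eV   = 1.602e-19       # joules per electron volt

V0 = 10.0 * eV         # barrier height (assumed)
L  = 1.0e-9            # barrier width, 1 nm (assumed)

def transmission(E_eV: float) -> float:
    """Crude tunneling probability T ~ exp(-2*kappa*L) for a particle with E < V0."""
    kappa = math.sqrt(2 * m * (V0 - E_eV * eV)) / hbar
    return math.exp(-2 * kappa * L)

for E_eV in (2.0, 5.0, 8.0, 9.5):
    print(f"E = {E_eV:4.1f} eV  ->  T ~ {transmission(E_eV):.2e}")
# Raising E toward V0 changes T by many orders of magnitude; the same qualitative
# sensitivity is what makes alpha-decay lifetimes depend so strongly on the emitted energy.
```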
For example, it describes a type of radioactive decay in which a nucleus emits an alpha particle (a helium nucleus). According to the quantum explanation given independently by George Gamow and by Ronald W. Gurney and Edward Condon in 1928, the alpha particle is confined before the decay by a potential of the shape shown in . For a given nuclear species, it is possible to measure the energy E of the emitted alpha particle and the average lifetime τ of the nucleus before decay. The lifetime of the nucleus is a measure of the probability of tunneling through the barrier—the shorter the lifetime, the higher the probability. With plausible assumptions about the general form of the potential function, it is possible to calculate a relationship between τ and E that is applicable to all alpha emitters. This theory, which is borne out by experiment, shows that the probability of tunneling, and hence the value of τ, is extremely sensitive to the value of E. For all known alpha-particle emitters, the value of E varies from about 2 to 8 million electron volts, or MeV (1 MeV = 106 electron volts). Thus, the value of E varies only by a factor of 4, whereas the range of τ is from about 1011 years down to about 10−6 second, a factor of 1024. It would be difficult to account for this sensitivity of τ to the value of E by any theory other than quantum mechanical tunneling. Axiomatic approach Although the two Schrödinger equations form an important part of quantum mechanics, it is possible to present the subject in a more general way. Dirac gave an elegant exposition of an axiomatic approach based on observables and states in a classic textbook entitled The Principles of Quantum Mechanics. (The book, published in 1930, is still in print.) An observable is anything that can be measured—energy, position, a component of angular momentum, and so forth. Every observable has a set of states, each state being represented by an algebraic function. With each state is associated a number that gives the result of a measurement of the observable. Consider an observable with N states, denoted by ψ1, ψ2, . . . , ψN, and corresponding measurement values a1, a2, . . . , aN. A physical system—e.g., an atom in a particular state—is represented by a wave function Ψ, which can be expressed as a linear combination, or mixture, of the states of the observable. Thus, the Ψ may be written as For a given Ψ, the quantities c1, c2, etc., are a set of numbers that can be calculated. In general, the numbers are complex, but, in the present discussion, they are assumed to be real numbers. The theory postulates, first, that the result of a measurement must be an a-value—i.e., a1, a2, or a3, etc. No other value is possible. Second, before the measurement is made, the probability of obtaining the value a1 is c12, and that of obtaining the value a2 is c22, and so on. If the value obtained is, say, a5, the theory asserts that after the measurement the state of the system is no longer the original Ψ but has changed to ψ5, the state corresponding to a5. A number of consequences follow from these assertions. First, the result of a measurement cannot be predicted with certainty. Only the probability of a particular result can be predicted, even though the initial state (represented by the function Ψ) is known exactly. Second, identical measurements made on a large number of identical systems, all in the identical state Ψ, will produce different values for the measurements. 
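The measurement rule just stated is easy to simulate for a toy example. The sketch below prepares many copies of the same state Ψ = c1ψ1 + c2ψ2 + c3ψ3, draws each outcome a_i with probability c_i², and tallies the results; the particular coefficients and a-values are invented for the illustration and are not from the article.

```python
import random
from collections import Counter

# Toy simulation of the measurement postulate: outcome a_i occurs with probability c_i^2,
# and the state after the measurement has changed to psi_i (here we only record outcomes).

c = [0.6, 0.0, 0.8]                 # real coefficients with c1^2 + c2^2 + c3^2 = 1 (assumed)
a = [1.0, 2.0, 3.0]                 # the possible measurement results a1, a2, a3 (assumed)
probs = [ci**2 for ci in c]

def measure() -> int:
    """Return the index i of the outcome, drawn with probability c_i^2."""
    return random.choices(range(len(c)), weights=probs)[0]

counts = Counter(a[measure()] for _ in range(100_000))
for value in a:
    print(f"result {value}: observed fraction {counts[value] / 100_000:.3f}")
# Identically prepared systems give different individual results, but the observed
# fractions reproduce c_i^2 (here 0.36, 0.00 and 0.64), exactly as the theory asserts.
```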
This is, of course, quite contrary to classical physics and common sense, which say that the same measurement on the same object in the same state must produce the same result. Moreover, according to the theory, not only does the act of measurement change the state of the system, but it does so in an indeterminate way. Sometimes it changes the state to ψ1, sometimes to ψ2, and so forth. There is an important exception to the above statements. Suppose that, before the measurement is made, the state Ψ happens to be one of the ψs—say, Ψ = ψ3. Then c3 = 1 and all the other cs are zero. This means that, before the measurement is made, the probability of obtaining the value a3 is unity and the probability of obtaining any other value of a is zero. In other words, in this particular case, the result of the measurement can be predicted with certainty. Moreover, after the measurement is made, the state will be ψ3, the same as it was before. Thus, in this particular case, measurement does not disturb the system. Whatever the initial state of the system, two measurements made in rapid succession (so that the change in the wave function given by the time-dependent Schrödinger equation is negligible) produce the same result. The value of one observable can be determined by a single measurement. The value of two observables for a given system may be known at the same time, provided that the two observables have the same set of state functions ψ1, ψ2, . . . , ψN. In this case, measuring the first observable results in a state function that is one of the ψs. Because this is also a state function of the second observable, the result of measuring the latter can be predicted with certainty. Thus the values of both observables are known. (Although the ψs are the same for the two observables, the two sets of a values are, in general, different.) The two observables can be measured repeatedly in any sequence. After the first measurement, none of the measurements disturbs the system, and a unique pair of values for the two observables is obtained. Incompatible observables The measurement of two observables with different sets of state functions is a quite different situation. Measurement of one observable gives a certain result. The state function after the measurement is, as always, one of the states of that observable; however, it is not a state function for the second observable. Measuring the second observable disturbs the system, and the state of the system is no longer one of the states of the first observable. In general, measuring the first observable again does not produce the same result as the first time. To sum up, both quantities cannot be known at the same time, and the two observables are said to be incompatible. A specific example of this behaviour is the measurement of the component of angular momentum along two mutually perpendicular directions. The Stern-Gerlach experiment mentioned above involved measuring the angular momentum of a silver atom in the ground state. In reconstructing this experiment, a beam of silver atoms is passed between the poles of a magnet. The poles are shaped so that the magnetic field varies greatly in strength over a very small distance (Figure 2: Magnet in Stern-Gerlach experiment. N and S are the north and south poles of a magnet. The knife-edge of S results in a much stronger magnetic field at the point P than at Q.Encyclopædia Britannica, Inc.). The apparatus determines the ms quantum number, which can be +1/2 or −1/2. No other values are obtained. 
Thus in this case the observable has only two states—i.e., N = 2. The inhomogeneous magnetic field produces a force on the silver atoms in a direction that depends on the spin state of the atoms. The result is shown schematically in Figure 3: Measurements of the x and y components of angular momentum for silver atoms, S, in the ground state. A, B, and C are magnets with inhomogeneous magnetic fields. The arrows show the average direction of each magnetic field.Encyclopædia Britannica, Inc.. A beam of silver atoms is passed through magnet A. The atoms in the state with ms = +1/2 are deflected upward and emerge as beam 1, while those with ms = −1/2 are deflected downward and emerge as beam 2. If the direction of the magnetic field is the x-axis, the apparatus measures Sx, which is the x-component of spin angular momentum. The atoms in beam 1 have Sx = +ℏ/2 while those in beam 2 have Sx = −ℏ/2. In a classical picture, these two states represent atoms spinning about the direction of the x-axis with opposite senses of rotation. The y-component of spin angular momentum Sy also can have only the values +ℏ/2 and −ℏ/2; however, the two states of Sy are not the same as for Sx. In fact, each of the states of Sx is an equal mixture of the states for Sy, and conversely. Again, the two Sy states may be pictured as representing atoms with opposite senses of rotation about the y-axis. These classical pictures of quantum states are helpful, but only up to a certain point. For example, quantum theory says that each of the states corresponding to spin about the x-axis is a superposition of the two states with spin about the y-axis. There is no way to visualize this; it has absolutely no classical counterpart. One simply has to accept the result as a consequence of the axioms of the theory. Suppose that, as in , the atoms in beam 1 are passed into a second magnet B, which has a magnetic field along the y-axis perpendicular to x. The atoms emerge from B and go in equal numbers through its two output channels. Classical theory says that the two magnets together have measured both the x- and y-components of spin angular momentum and that the atoms in beam 3 have Sx = +ℏ/2, Sy = +ℏ/2, while those in beam 4 have Sx = +ℏ/2, Sy = −ℏ/2. However, classical theory is wrong, because if beam 3 is put through still another magnet C, with its magnetic field along x, the atoms divide equally into beams 5 and 6 instead of emerging as a single beam 5 (as they would if they had Sx = +ℏ/2). Thus, the correct statement is that the beam entering B has Sx = +ℏ/2 and is composed of an equal mixture of the states Sy = +ℏ/2 and Sy = −ℏ/2—i.e., the x-component of angular momentum is known but the y-component is not. Correspondingly, beam 3 leaving B has Sy = +ℏ/2 and is an equal mixture of the states Sx = +ℏ/2 and Sx = −ℏ/2; the y-component of angular momentum is known but the x-component is not. The information about Sx is lost because of the disturbance caused by magnet B in the measurement of Sy. Heisenberg uncertainty principle The observables discussed so far have had discrete sets of experimental values. For example, the values of the energy of a bound system are always discrete, and angular momentum components have values that take the form mℏ, where m is either an integer or a half-integer, positive or negative. On the other hand, the position of a particle or the linear momentum of a free particle can take continuous values in both quantum and classical theory. 
The mathematics of observables with a continuous spectrum of measured values is somewhat more complicated than for the discrete case but presents no problems of principle. An observable with a continuous spectrum of measured values has an infinite number of state functions. The state function Ψ of the system is still regarded as a combination of the state functions of the observable, but the sum in equation (10) must be replaced by an integral. Measurements can be made of the position x of a particle and the x-component of its linear momentum, denoted by px. These two observables are incompatible because they have different state functions. The phenomenon of diffraction noted above illustrates the impossibility of measuring position and momentum simultaneously and precisely. If a parallel monochromatic light beam passes through a slit (Figure 4: (A) parallel monochromatic light incident normally on a slit; (B) variation in the intensity of the light with direction after it has passed through the slit. If the experiment is repeated with electrons instead of light, the same diagram would represent the variation in the intensity, i.e., relative number, of the electrons.), its intensity varies with direction, as shown in part (B) of that figure. The light has zero intensity in certain directions. Wave theory shows that the first zero occurs at an angle θ0, given by sin θ0 = λ/b, where λ is the wavelength of the light and b is the width of the slit. If the width of the slit is reduced, θ0 increases—i.e., the diffracted light is more spread out. Thus, θ0 measures the spread of the beam. The experiment can be repeated with a stream of electrons instead of a beam of light. According to Broglie, electrons have wavelike properties; therefore, the beam of electrons emerging from the slit should widen and spread out like a beam of light waves. This has been observed in experiments. If the electrons have velocity u in the forward direction (i.e., the y-direction in the figure), their (linear) momentum is $p = m_e u$. Consider px, the component of momentum in the x-direction. After the electrons have passed through the aperture, the spread in their directions results in an uncertainty in px of an amount
$\Delta p_x \approx p\sin\theta_0 = \frac{p\lambda}{b},$
where λ is the wavelength of the electrons and, according to the Broglie formula, equals h/p. Thus, $\Delta p_x \approx h/b$. Exactly where an electron passed through the slit is unknown; it is only certain that an electron went through somewhere. Therefore, immediately after an electron goes through, the uncertainty in its x-position is Δx ≈ b/2. Thus, the product of the uncertainties is of the order of ℏ. More exact analysis shows that the product has a lower limit, given by
$\Delta x\,\Delta p_x \ge \frac{\hbar}{2}. \qquad (12)$
This is the well-known Heisenberg uncertainty principle for position and momentum. It states that there is a limit to the precision with which the position and the momentum of an object can be measured at the same time. Depending on the experimental conditions, either quantity can be measured as precisely as desired (at least in principle), but the more precisely one of the quantities is measured, the less precisely the other is known. The uncertainty principle is significant only on the atomic scale because of the small value of h in everyday units. If the position of a macroscopic object with a mass of, say, one gram is measured with a precision of 10^−6 metre, the uncertainty principle states that its velocity cannot be measured to better than about 10^−25 metre per second. Such a limitation is hardly worrisome.
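The two order-of-magnitude estimates quoted here and in the next paragraph are quick to verify from the lower bound Δx·Δp ≥ ℏ/2, i.e. Δv ≥ ℏ/(2mΔx). The sketch below is an added check, not part of the article; using the lower-bound form of the relation is the only assumption.

```python
hbar = 1.0546e-34   # J s

def min_velocity_uncertainty(mass_kg: float, delta_x_m: float) -> float:
    """Minimum velocity uncertainty from delta_x * delta_p >= hbar / 2."""
    return hbar / (2 * mass_kg * delta_x_m)

# A one-gram object located to within 1e-6 metre:
print(f"1 g object : dv >= {min_velocity_uncertainty(1e-3, 1e-6):.1e} m/s")
# roughly 5e-26 m/s, consistent with the ~1e-25 m/s quoted above.

# An electron confined to an atom about 1e-10 metre across (the case discussed next):
print(f"electron   : dv >= {min_velocity_uncertainty(9.109e-31, 1e-10):.1e} m/s")
# roughly 6e5 m/s, i.e. of order 1e6 m/s.
```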
However, if an electron is located in an atom about 10−10 metre across, the principle gives a minimum uncertainty in the velocity of about 106 metre per second. The above reasoning leading to the uncertainty principle is based on the wave-particle duality of the electron. When Heisenberg first propounded the principle in 1927 his reasoning was based, however, on the wave-particle duality of the photon. He considered the process of measuring the position of an electron by observing it in a microscope. Diffraction effects due to the wave nature of light result in a blurring of the image; the resulting uncertainty in the position of the electron is approximately equal to the wavelength of the light. To reduce this uncertainty, it is necessary to use light of shorter wavelength—e.g., gamma rays. However, in producing an image of the electron, the gamma-ray photon bounces off the electron, giving the Compton effect (see above Early developments: Scattering of X-rays). As a result of the collision, the electron recoils in a statistically random way. The resulting uncertainty in the momentum of the electron is proportional to the momentum of the photon, which is inversely proportional to the wavelength of the photon. So it is again the case that increased precision in knowledge of the position of the electron is gained only at the expense of decreased precision in knowledge of its momentum. A detailed calculation of the process yields the same result as before (equation [12]). Heisenberg’s reasoning brings out clearly the fact that the smaller the particle being observed, the more significant is the uncertainty principle. When a large body is observed, photons still bounce off it and change its momentum, but, considered as a fraction of the initial momentum of the body, the change is insignificant. The Schrödinger and Dirac theories give a precise value for the energy of each stationary state, but in reality the states do not have a precise energy. The only exception is in the ground (lowest energy) state. Instead, the energies of the states are spread over a small range. The spread arises from the fact that, because the electron can make a transition to another state, the initial state has a finite lifetime. The transition is a random process, and so different atoms in the same state have different lifetimes. If the mean lifetime is denoted as τ, the theory shows that the energy of the initial state has a spread of energy ΔE, given by This energy spread is manifested in a spread in the frequencies of emitted radiation. Therefore, the spectral lines are not infinitely sharp. (Some experimental factors can also broaden a line, but their effects can be reduced; however, the present effect, known as natural broadening, is fundamental and cannot be reduced.) Equation (13) is another type of Heisenberg uncertainty relation; generally, if a measurement with duration τ is made of the energy in a system, the measurement disturbs the system, causing the energy to be uncertain by an amount ΔE, the magnitude of which is given by the above equation. Quantum electrodynamics The application of quantum theory to the interaction between electrons and radiation requires a quantum treatment of Maxwell’s field equations, which are the foundations of electromagnetism, and the relativistic theory of the electron formulated by Dirac (see above Electron spin and antiparticles). The resulting quantum field theory is known as quantum electrodynamics, or QED. 
QED accounts for the behaviour and interactions of electrons, positrons, and photons. It deals with processes involving the creation of material particles from electromagnetic energy and with the converse processes in which a material particle and its antiparticle annihilate each other and produce energy. Initially the theory was beset with formidable mathematical difficulties, because the calculated values of quantities such as the charge and mass of the electron proved to be infinite. However, an ingenious set of techniques developed (in the late 1940s) by Hans Bethe, Julian S. Schwinger, Tomonaga Shin’ichirō, Richard P. Feynman, and others dealt systematically with the infinities to obtain finite values of the physical quantities. Their method is known as renormalization. The theory has provided some remarkably accurate predictions. According to the Dirac theory, two particular states in hydrogen with different quantum numbers have the same energy. QED, however, predicts a small difference in their energies; the difference may be determined by measuring the frequency of the electromagnetic radiation that produces transitions between the two states. This effect was first measured by Willis E. Lamb, Jr., and Robert Retherford in 1947. Its physical origin lies in the interaction of the electron with the random fluctuations in the surrounding electromagnetic field. These fluctuations, which exist even in the absence of an applied field, are a quantum phenomenon. The accuracy of experiment and theory in this area may be gauged by two recent values for the separation of the two states, expressed in terms of the frequency of the radiation that produces the transitions: An even more spectacular example of the success of QED is provided by the value for μe, the magnetic dipole moment of the free electron. Because the electron is spinning and has electric charge, it behaves like a tiny magnet, the strength of which is expressed by the value of μe. According to the Dirac theory, μe is exactly equal to μB = eℏ/2me, a quantity known as the Bohr magneton; however, QED predicts that μe = (1 + aB, where a is a small number, approximately 1/860. Again, the physical origin of the QED correction is the interaction of the electron with random oscillations in the surrounding electromagnetic field. The best experimental determination of μe involves measuring not the quantity itself but the small correction term μe − μB. This greatly enhances the sensitivity of the experiment. The most recent results for the value of a are Since a itself represents a small correction term, the magnetic dipole moment of the electron is measured with an accuracy of about one part in 1011. One of the most precisely determined quantities in physics, the magnetic dipole moment of the electron can be calculated correctly from quantum theory to within about one part in 1010. The interpretation of quantum mechanics Although quantum mechanics has been applied to problems in physics with great success, some of its ideas seem strange. A few of their implications are considered here. The electron: wave or particle? Young’s aforementioned experiment in which a parallel beam of monochromatic light is passed through a pair of narrow parallel slits (Figure 5: (A) Monochromatic light incident on a pair of slits gives interference fringes (alternate light and dark bands) on a screen, (B) variation in the intensity of the light at the screen when both slits are open. 
With a single slit, there is no interference pattern; the intensity variation is shown by the broken line. As with Figure 4B, the same diagram would give the variation in the intensity of electrons in the corresponding electron experiment.) has an electron counterpart. In Young’s original experiment, the intensity of the light varies with direction after passing through the slits (). The intensity oscillates because of interference between the light waves emerging from the two slits, the rate of oscillation depending on the wavelength of the light and the separation of the slits. The oscillation creates a fringe pattern of alternating light and dark bands that is modulated by the diffraction pattern from each slit. If one of the slits is covered, the interference fringes disappear, and only the diffraction pattern (shown as a broken line in ) is observed. Young’s experiment can be repeated with electrons all with the same momentum. The screen in the optical experiment is replaced by a closely spaced grid of electron detectors. There are many devices for detecting electrons; the most common are scintillators. When an electron passes through a scintillating material, such as sodium iodide, the material produces a light flash which gives a voltage pulse that can be amplified and recorded. The pattern of electrons recorded by each detector is the same as that predicted for waves with wavelengths given by the Broglie formula. Thus, the experiment provides conclusive evidence for the wave behaviour of electrons. If the experiment is repeated with a very weak source of electrons so that only one electron passes through the slits, a single detector registers the arrival of an electron. This is a well-localized event characteristic of a particle. Each time the experiment is repeated, one electron passes through the slits and is detected. A graph plotted with detector position along one axis and the number of electrons along the other looks exactly like the oscillating interference pattern in . Thus, the intensity function in the figure is proportional to the probability of the electron moving in a particular direction after it has passed through the slits. Apart from its units, the function is identical to Ψ2, where Ψ is the solution of the time-independent Schrödinger equation for this particular experiment. If one of the slits is covered, the fringe pattern disappears and is replaced by the diffraction pattern for a single slit. Thus, both slits are needed to produce the fringe pattern. However, if the electron is a particle, it seems reasonable to suppose that it passed through only one of the slits. The apparatus can be modified to ascertain which slit by placing a thin wire loop around each slit. When an electron passes through a loop, it generates a small electric signal, showing which slit it passed through. However, the interference fringe pattern then disappears, and the single-slit diffraction pattern returns. Since both slits are needed for the interference pattern to appear and since it is impossible to know which slit the electron passed through without destroying that pattern, one is forced to the conclusion that the electron goes through both slits at the same time. In summary, the experiment shows both the wave and particle properties of the electron. The wave property predicts the probability of direction of travel before the electron is detected; on the other hand, the fact that the electron is detected in a particular place shows that it has particle properties. 
Therefore, the answer to the question of whether the electron is a wave or a particle is that it is neither. It is an object exhibiting either wave or particle properties, depending on the type of measurement that is made on it. In other words, one cannot talk about the intrinsic properties of an electron; instead, one must consider the properties of the electron and measuring apparatus together. Hidden variables A fundamental concept in quantum mechanics is that of randomness, or indeterminacy. In general, the theory predicts only the probability of a certain result. Consider the case of radioactivity. Imagine a box of atoms with identical nuclei that can undergo decay with the emission of an alpha particle. In a given time interval, a certain fraction will decay. The theory may tell precisely what that fraction will be, but it cannot predict which particular nuclei will decay. The theory asserts that, at the beginning of the time interval, all the nuclei are in an identical state and that the decay is a completely random process. Even in classical physics, many processes appear random. For example, one says that, when a roulette wheel is spun, the ball will drop at random into one of the numbered compartments in the wheel. Based on this belief, the casino owner and the players give and accept identical odds against each number for each throw. However, the fact is that the winning number could be predicted if one noted the exact location of the wheel when the croupier released the ball, the initial speed of the wheel, and various other physical parameters. It is only ignorance of the initial conditions and the difficulty of doing the calculations that makes the outcome appear to be random. In quantum mechanics, on the other hand, the randomness is asserted to be absolutely fundamental. The theory says that, though one nucleus decayed and the other did not, they were previously in the identical state. Many eminent physicists, including Einstein, have not accepted this indeterminacy. They have rejected the notion that the nuclei were initially in the identical state. Instead, they postulated that there must be some other property—presently unknown, but existing nonetheless—that is different for the two nuclei. This type of unknown property is termed a hidden variable; if it existed, it would restore determinacy to physics. If the initial values of the hidden variables were known, it would be possible to predict which nuclei would decay. Such a theory would, of course, also have to account for the wealth of experimental data which conventional quantum mechanics explains from a few simple assumptions. Attempts have been made by de Broglie, David Bohm, and others to construct theories based on hidden variables, but the theories are very complicated and contrived. For example, the electron would definitely have to go through only one slit in the two-slit experiment. To explain that interference occurs only when the other slit is open, it is necessary to postulate a special force on the electron which exists only when that slit is open. Such artificial additions make hidden variable theories unattractive, and there is little support for them among physicists. The orthodox view of quantum mechanics—and the one adopted in the present article—is known as the Copenhagen interpretation because its main protagonist, Niels Bohr, worked in that city. The Copenhagen view of understanding the physical world stresses the importance of basing theory on what can be observed and measured experimentally.
It therefore rejects the idea of hidden variables as quantities that cannot be measured. The Copenhagen view is that the indeterminacy observed in nature is fundamental and does not reflect an inadequacy in present scientific knowledge. One should therefore accept the indeterminacy without trying to “explain” it and see what consequences come from it. Attempts have been made to link the existence of free will with the indeterminacy of quantum mechanics, but it is difficult to see how this feature of the theory makes free will more plausible. On the contrary, free will presumably implies rational thought and decision, whereas the essence of the indeterminism in quantum mechanics is that it is due to intrinsic randomness. Paradox of Einstein, Podolsky, and Rosen In 1935 Einstein and two other physicists in the United States, Boris Podolsky and Nathan Rosen, analyzed a thought experiment to measure position and momentum in a pair of interacting systems. Employing conventional quantum mechanics, they obtained some startling results, which led them to conclude that the theory does not give a complete description of physical reality. Their results, which are so peculiar as to seem paradoxical, are based on impeccable reasoning, but their conclusion that the theory is incomplete does not necessarily follow. Bohm simplified their experiment while retaining the central point of their reasoning; this discussion follows his account. The proton, like the electron, has spin 1/2; thus, no matter what direction is chosen for measuring the component of its spin angular momentum, the values are always +ℏ/2 or −ℏ/2. (The present discussion relates only to spin angular momentum, and the word spin is omitted from now on.) It is possible to obtain a system consisting of a pair of protons in close proximity and with total angular momentum equal to zero. Thus, if the value of one of the components of angular momentum for one of the protons is +ℏ/2 along any selected direction, the value for the component in the same direction for the other particle must be −ℏ/2. Suppose the two protons move in opposite directions until they are far apart. The total angular momentum of the system remains zero, and if the component of angular momentum along the same direction for each of the two particles is measured, the result is a pair of equal and opposite values. Therefore, after the quantity is measured for one of the protons, it can be predicted for the other proton; the second measurement is unnecessary. As previously noted, measuring a quantity changes the state of the system. Thus, if measuring Sx (the x-component of angular momentum) for proton 1 produces the value  +ℏ/2, the state of proton 1 after measurement corresponds to Sx = +ℏ/2, and the state of proton 2 corresponds to Sx = −ℏ/2. Any direction, however, can be chosen for measuring the component of angular momentum. Whichever direction is selected, the state of proton 1 after measurement corresponds to a definite component of angular momentum about that direction. Furthermore, since proton 2 must have the opposite value for the same component, it follows that the measurement on proton 1 results in a definite state for proton 2 relative to the chosen direction, notwithstanding the fact that the two particles may be millions of kilometres apart and are not interacting with each other at the time. Einstein and his two collaborators thought that this conclusion was so obviously false that the quantum mechanical theory on which it was based must be incomplete. 
They concluded that the correct theory would contain some hidden variable feature that would restore the determinism of classical physics. A comparison of how quantum theory and classical theory describe angular momentum for particle pairs illustrates the essential difference between the two outlooks. In both theories, if a system of two particles has a total angular momentum of zero, then the angular momenta of the two particles are equal and opposite. If the components of angular momentum are measured along the same direction, the two values are numerically equal, one positive and the other negative. Thus, if one component is measured, the other can be predicted. The crucial difference between the two theories is that, in classical physics, the system under investigation is assumed to have possessed the quantity being measured beforehand. The measurement does not disturb the system; it merely reveals the preexisting state. It may be noted that, if a particle were actually to possess components of angular momentum prior to measurement, such quantities would constitute hidden variables. Does nature behave as quantum mechanics predicts? The answer comes from measuring the components of angular momenta for the two protons along different directions with an angle θ between them. A measurement on one proton can give only the result +ℏ/2 or −ℏ/2. The experiment consists of measuring correlations between the plus and minus values for pairs of protons with a fixed value of θ, and then repeating the measurements for different values of θ, as in Figure 6 (an experiment to determine the correlation in measured angular momentum values for a pair of protons with zero total angular momentum; the two protons are initially at the point 0 and move in opposite directions toward the two magnets). The interpretation of the results rests on an important theorem by the Irish-born physicist John Stewart Bell. Bell began by assuming the existence of some form of hidden variable with a value that would determine whether the measured angular momentum gives a plus or minus result. He further assumed locality—namely, that measurement on one proton (i.e., the choice of the measurement direction) cannot affect the result of the measurement on the other proton. Both these assumptions agree with classical, commonsense ideas. He then showed quite generally that these two assumptions lead to a certain relationship, now known as Bell’s inequality, for the correlation values mentioned above. Experiments have been conducted at several laboratories with photons instead of protons (the analysis is similar), and the results show fairly conclusively that Bell’s inequality is violated. That is to say, the observed results agree with those of quantum mechanics and cannot be accounted for by a hidden variable (or deterministic) theory based on the concept of locality. One is forced to conclude that the two protons are a correlated pair and that a measurement on one affects the state of both, no matter how far apart they are. This may strike one as highly peculiar, but such is the way nature appears to be. It may be noted that the effect on the state of proton 2 following a measurement on proton 1 is believed to be instantaneous; the effect happens before a light signal initiated by the measuring event at proton 1 reaches proton 2.
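A short calculation makes the violation of Bell's inequality concrete. The sketch below uses the quantum correlation E = −cos θ for a pair of spin-1/2 particles in a singlet state and the CHSH form of the inequality, a commonly used variant of the relationship mentioned above (not necessarily the exact form Bell derived); the measurement angles are the standard choice that maximizes the quantum violation.

```python
# Compare the quantum singlet correlation with the CHSH (Bell-type) bound.
# E(a, b) = -cos(a - b) is the quantum prediction for two spin-1/2 particles
# in a singlet state; any local hidden-variable model must satisfy |S| <= 2.
import math

def E(a, b):
    """Quantum correlation of spin measurements along directions a and b (radians)."""
    return -math.cos(a - b)

# Standard angle choices that maximize the violation.
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)

print(f"S (quantum)  = {S:.4f}")     # -2*sqrt(2), about -2.828
print(f"|S|          = {abs(S):.4f}")
print(f"local hidden-variable bound: |S| <= 2  ->  violated: {abs(S) > 2}")
```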
Alain Aspect and his coworkers in Paris demonstrated this result in 1982 with an ingenious experiment in which the correlation between the two angular momenta was measured, within a very short time interval, by a high-frequency switching device. The interval was less than the time taken for a light signal to travel from one particle to the other at the two measurement positions. Einstein’s special theory of relativity states that no message can travel with a speed greater than that of light. Thus, there is no way that the information concerning the direction of the measurement on the first proton could reach the second proton before the measurement was made on it. Measurement in quantum mechanics The way quantum mechanics treats the process of measurement has caused considerable debate. Schrödinger’s time-dependent wave equation (equation [8]) is an exact recipe for determining the way the wave function varies with time for a given physical system in a given physical environment. According to the Schrödinger equation, the wave function varies in a strictly determinate way. On the other hand, in the axiomatic approach to quantum mechanics described above, a measurement changes the wave function abruptly and discontinuously. Before the measurement is made, the wave function Ψ is a mixture of the ψs as indicated in equation (10). The measurement changes Ψ from a mixture of ψs to a single ψ. This change, brought about by the process of measurement, is termed the collapse or reduction of the wave function. The collapse is a discontinuous change in Ψ; it is also unpredictable, because, starting with the same Ψ represented by the right-hand side of equation (10), the end result can be any one of the individual ψs. The Schrödinger equation, which gives a smooth and predictable variation of Ψ, applies between the measurements. The measurement process itself, however, cannot be described by the Schrödinger equation; it is somehow a thing apart. This appears unsatisfactory, inasmuch as a measurement is a physical process and ought to be the subject of the Schrödinger equation just like any other physical process. The difficulty is related to the fact that quantum mechanics applies to microscopic systems containing one (or a few) electrons, protons, or photons. Measurements, however, are made with large-scale objects (e.g., detectors, amplifiers, and meters) in the macroscopic world, which obeys the laws of classical physics. Thus, another way of formulating the question of what happens in a measurement is to ask how the microscopic quantum world relates and interacts with the macroscopic classical world. More narrowly, it can be asked how and at what point in the measurement process does the wave function collapse? So far, there are no satisfactory answers to these questions, although there are several schools of thought. One approach stresses the role of a conscious observer in the measurement process and suggests that the wave function collapses when the observer reads the measuring instrument. Bringing the conscious mind into the measurement problem seems to raise more questions than it answers, however. As discussed above, the Copenhagen interpretation of the measurement process is essentially pragmatic. It distinguishes between microscopic quantum systems and macroscopic measuring instruments. 
The initial object or event—e.g., the passage of an electron, photon, or atom—triggers the classical measuring device into giving a reading; somewhere along the chain of events, the result of the measurement becomes fixed (i.e., the wave function collapses). This does not answer the basic question but says, in effect, not to worry about it. This is probably the view of most practicing physicists. A third school of thought notes that an essential feature of the measuring process is irreversibility. This contrasts with the behaviour of the wave function when it varies according to the Schrödinger equation; in principle, any such variation in the wave function can be reversed by an appropriate experimental arrangement. However, once a classical measuring instrument has given a reading, the process is not reversible. It is possible that the key to the nature of the measurement process lies somewhere here. The Schrödinger equation is known to apply only to relatively simple systems. It is an enormous extrapolation to assume that the same equation applies to the large and complex system of a classical measuring device. It may be that the appropriate equation for such a system has features that produce irreversible effects (e.g., wave-function collapse) which differ in kind from those for a simple system. One may also mention the so-called many-worlds interpretation, proposed by Hugh Everett III in 1957, which suggests that, when a measurement is made for a system in which the wave function is a mixture of states, the universe branches into a number of noninteracting universes. Each of the possible outcomes of the measurement occurs, but in a different universe. Thus, if Sx = +ℏ/2 is the result of a Stern-Gerlach measurement on a silver atom (see above Incompatible observables), there is another universe identical to ours in every way (including clones of people), except that the result of the measurement is Sx = −ℏ/2. Although this fanciful model solves some measurement problems, it has few adherents among physicists. Because the various ways of looking at the measurement process lead to the same experimental consequences, trying to distinguish between them on scientific grounds may be fruitless. One or another may be preferred on the grounds of plausibility, elegance, or economy of hypotheses, but these are matters of individual taste. Whether one day a satisfactory quantum theory of measurement will emerge, distinguished from the others by its verifiable predictions, remains an open question. Applications of quantum mechanics As has been noted, quantum mechanics has been enormously successful in explaining microscopic phenomena in all branches of physics. The three phenomena described in this section are examples that demonstrate the quintessence of the theory. Decay of the kaon The kaon (also called the K0 meson), discovered in 1947, is produced in high-energy collisions between nuclei and other particles. It has zero electric charge, and its mass is about one-half the mass of the proton. It is unstable and, once formed, rapidly decays into either 2 or 3 pi-mesons. The average lifetime of the kaon is about 10⁻¹⁰ second. In spite of the fact that the kaon is uncharged, quantum theory predicts the existence of an antiparticle with the same mass, decay products, and average lifetime; the antiparticle is denoted by K̄0. During the early 1950s, several physicists questioned the justification for postulating the existence of two particles with such similar properties.
In 1955, however, Murray Gell-Mann and Abraham Pais made an interesting prediction about the decay of the kaon. Their reasoning provides an excellent illustration of the quantum mechanical axiom that the wave function Ψ can be a superposition of states; in this case, there are two states, the K0 and K̄0 mesons themselves. A K0 meson may be represented formally by writing the wave function as Ψ = K0; similarly Ψ = K̄0 represents a K̄0 meson. From the two states, K0 and K̄0, the following two new states are constructed: K1 = (K0 + K̄0)/√2 and K2 = (K0 − K̄0)/√2. From these two equations it follows that K0 = (K1 + K2)/√2 and K̄0 = (K1 − K2)/√2. The reason for defining the two states K1 and K2 is that, according to quantum theory, when the K0 decays, it does not do so as an isolated particle; instead, it combines with its antiparticle to form the states K1 and K2. The state K1 (called the K-short [K0S]) decays into two pi-mesons with a very short lifetime (about 9 × 10⁻¹¹ second), while K2 (called the K-long [K0L]) decays into three pi-mesons with a longer lifetime (about 5 × 10⁻⁸ second). The physical consequences of these results may be demonstrated in the following experiment. K0 particles are produced in a nuclear reaction at the point A (Figure 7: decay of the K0 meson). They move to the right in the figure and start to decay. At point A, the wave function is Ψ = K0, which, from the relations above, can be expressed as the sum of K1 and K2. As the particles move to the right, the K1 state begins to decay rapidly. If the particles reach point B in about 10⁻⁸ second, nearly all the K1 component has decayed, although hardly any of the K2 component has done so. Thus, at point B, the beam has changed from one of pure K0 to one of almost pure K2, which the relations above show is an equal mixture of K0 and K̄0. In other words, K̄0 particles appear in the beam simply because K1 and K2 decay at different rates. At point B, the beam enters a block of absorbing material. Both the K0 and K̄0 are absorbed by the nuclei in the block, but the K̄0 are absorbed more strongly. As a result, even though the beam is an equal mixture of K0 and K̄0 when it enters the absorber, it is almost pure K0 when it exits at point C. The beam thus begins and ends as K0. Gell-Mann and Pais predicted all this, and experiments subsequently verified it. The experimental observations are that the decay products are primarily two pi-mesons with a short decay time near A, three pi-mesons with longer decay time near B, and two pi-mesons again near C. (This account exaggerates the changes in the K1 and K2 components between A and B and in the K0 and K̄0 components between B and C; the argument, however, is unchanged.) The phenomenon of generating the K̄0 and regenerating the K1 decay is purely quantum. It rests on the quantum axiom of the superposition of states and has no classical counterpart.
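The regeneration argument above can be followed numerically. In the sketch below K1 and K2 are treated simply as components that decay exponentially with the lifetimes quoted in the text; the K1–K2 mass difference and CP violation are ignored (as they are in the text), and the absorber transmission factors at point B are illustrative assumptions.

```python
# Follow a kaon beam from A (pure K0) to B (mostly K2) and through the absorber,
# treating K1 and K2 as exponentially decaying components.  The K1-K2 mass
# difference and CP violation are ignored, as in the text; the absorber
# transmission factors are illustrative assumptions.
import math

tau1 = 9e-11        # K1 (K-short) lifetime, s  (from the text)
tau2 = 5e-8         # K2 (K-long)  lifetime, s  (from the text)
t_AB = 1e-8         # flight time from A to B, s (from the text)

inv_sqrt2 = 1 / math.sqrt(2)

# Point A: pure K0 = (K1 + K2)/sqrt(2)
c1, c2 = inv_sqrt2, inv_sqrt2

# Free flight A -> B: each amplitude decays as exp(-t / (2*tau))
c1 *= math.exp(-t_AB / (2 * tau1))
c2 *= math.exp(-t_AB / (2 * tau2))

a_K0    = (c1 + c2) * inv_sqrt2     # K0 amplitude at B
a_K0bar = (c1 - c2) * inv_sqrt2     # K0-bar amplitude at B
print(f"at B:  P(K0) = {a_K0**2:.3f}   P(K0-bar) = {a_K0bar**2:.3f}")

# Absorber at B: K0-bar is absorbed more strongly (assumed transmission factors).
a_K0    *= 0.8
a_K0bar *= 0.3

# Re-express the emerging beam in the K1/K2 basis: a K1 component has reappeared.
c1 = (a_K0 + a_K0bar) * inv_sqrt2
c2 = (a_K0 - a_K0bar) * inv_sqrt2
print(f"after absorber:  P(K1) = {c1**2:.3f}   P(K2) = {c2**2:.3f}")
```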
Cesium clock The cesium clock is the most accurate type of clock yet developed. This device makes use of transitions between the spin states of the cesium nucleus and produces a frequency which is so regular that it has been adopted for establishing the time standard. Like electrons, many atomic nuclei have spin. The spin of these nuclei produces a set of small effects in the spectra, known as hyperfine structure. (The effects are small because, though the angular momentum of a spinning nucleus is of the same magnitude as that of an electron, its magnetic moment, which governs the energies of the atomic levels, is relatively small.) The nucleus of the cesium atom has spin quantum number 7/2. The total angular momentum of the lowest energy states of the cesium atom is obtained by combining the spin angular momentum of the nucleus with that of the single valence electron in the atom. (Only the valence electron contributes to the angular momentum because the angular momenta of all the other electrons total zero. Another simplifying feature is that the ground states have zero orbital momenta, so only spin angular momenta need to be considered.) When nuclear spin is taken into account, the total angular momentum of the atom is characterized by a quantum number, conventionally denoted by F, which for cesium is 4 or 3. These values come from the spin value 7/2 for the nucleus and 1/2 for the electron. If the nucleus and the electron are visualized as tiny spinning tops, the value F = 4 (7/2 + 1/2) corresponds to the tops spinning in the same sense, and F = 3 (7/2 − 1/2) corresponds to spins in opposite senses. The energy difference ΔE of the states with the two F values is a precise quantity. If electromagnetic radiation of frequency ν0, where hν0 = ΔE, is applied to a system of cesium atoms, transitions will occur between the two states. An apparatus that can detect the occurrence of transitions thus provides an extremely precise frequency standard. This is the principle of the cesium clock. The apparatus is shown schematically in Figure 8 (cesium clock). A beam of cesium atoms emerges from an oven at a temperature of about 100 °C. The atoms pass through an inhomogeneous magnet A, which deflects the atoms in state F = 4 downward and those in state F = 3 by an equal amount upward. The atoms pass through slit S and continue into a second inhomogeneous magnet B. Magnet B is arranged so that it deflects atoms with an unchanged state in the same direction that magnet A deflected them. The atoms follow the paths indicated by the broken lines in the figure and are lost to the beam. However, if an alternating electromagnetic field of frequency ν0 is applied to the beam as it traverses the centre region C, transitions between states will occur. Some atoms in state F = 4 will change to F = 3, and vice versa. For such atoms, the deflections in magnet B are reversed. The atoms follow the solid lines in the diagram and strike a tungsten wire, which gives electric signals in proportion to the number of cesium atoms striking the wire. As the frequency ν of the alternating field is varied, the signal has a sharp maximum for ν = ν0. The length of the apparatus from the oven to the tungsten detector is about one metre. Each atomic state is characterized not only by the quantum number F but also by a second quantum number mF. For F = 4, mF can take integral values from 4 to −4. In the absence of a magnetic field, these states have the same energy. A magnetic field, however, causes a small change in energy proportional to the magnitude of the field and to the mF value. Similarly, a magnetic field changes the energy for the F = 3 states according to the mF value which, in this case, may vary from 3 to −3. The energy changes are indicated in Figure 9 (variation of energy with magnetic-field strength for the F = 4 and F = 3 states in cesium-133). In the cesium clock, a weak constant magnetic field is superposed on the alternating electromagnetic field in region C. The theory shows that the alternating field can bring about a transition only between pairs of states with mF values that are the same or that differ by unity.
However, as can be seen from the figure, the only transitions occurring at the frequency ν0 are those between the two states with mF = 0. The apparatus is so sensitive that it can discriminate easily between such transitions and all the others. If the frequency of the oscillator drifts slightly so that it does not quite equal ν0, the detector output drops. The change in signal strength produces a signal to the oscillator to bring the frequency back to the correct value. This feedback system keeps the oscillator frequency automatically locked to ν0. The cesium clock is exceedingly stable. The frequency of the oscillator remains constant to about one part in 10¹³. For this reason, the device is used to define the second. This base unit of time in the SI system is defined as equal to 9,192,631,770 cycles of the radiation corresponding to the transition between the levels F = 4, mF = 0 and F = 3, mF = 0 of the ground state of the cesium-133 atom. Prior to 1967, the second was defined in terms of the motion of Earth. The latter, however, is not nearly as stable as the cesium clock. Specifically, the fractional variation of Earth’s rotation period is a few hundred times larger than that of the frequency of the cesium clock. A quantum voltage standard Quantum theory has been used to establish a voltage standard, and this standard has proven to be extraordinarily accurate and consistent from laboratory to laboratory. If two layers of superconducting material are separated by a thin insulating barrier, a supercurrent (i.e., a current of paired electrons) can pass from one superconductor to the other. This is another example of the tunneling process described earlier. Several effects based on this phenomenon were predicted in 1962 by the British physicist Brian D. Josephson. Demonstrated experimentally soon afterwards, they are now referred to as the Josephson effects. If a DC (direct-current) voltage V is applied across the two superconductors, the energy of an electron pair changes by an amount of 2eV as it crosses the junction. As a result, the supercurrent oscillates with frequency ν given by the Planck relationship (E = hν); thus hν = 2eV, or ν = 2eV/h. This oscillatory behaviour of the supercurrent is known as the AC (alternating-current) Josephson effect. Measurement of V and ν permits a direct verification of the Planck relationship. Although the oscillating supercurrent has been detected directly, it is extremely weak. A more sensitive method of investigating the relation ν = 2eV/h is to study effects resulting from the interaction of microwave radiation with the supercurrent. Several carefully conducted experiments have verified the relation to such a high degree of precision that it has been used to determine the value of 2e/h. This value can in fact be determined more precisely by the AC Josephson effect than by any other method. The result is so reliable that laboratories now employ the AC Josephson effect to set a voltage standard. The numerical relationship between V and ν is ν/V = 2e/h, approximately 483.6 megahertz per microvolt. In this way, measuring a frequency, which can be done with great precision, gives the value of the voltage. Before the Josephson method was used, the voltage standard in metrological laboratories devoted to the maintenance of physical units was based on high-stability Weston cadmium cells. These cells, however, tend to drift and so caused inconsistencies between standards in different laboratories.
The Josephson method has provided a standard giving agreement to within a few parts in 10⁸ for measurements made at different times and in different laboratories. The experiments described in the preceding two sections are only two examples of high-precision measurements in physics. The values of the fundamental constants, such as c, h, e, and me, are determined from a wide variety of experiments based on quantum phenomena. The results are so consistent that the values of the constants are thought to be known in most cases to better than one part in 10⁶. Physicists may not know what they are doing when they make a measurement, but they do it extremely well.
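To make the frequency-to-voltage conversion behind the Josephson standard (and, for comparison, the energy of the cesium clock transition) concrete, here is a minimal numerical sketch; the 10 GHz drive frequency is an arbitrary illustrative choice, while the constants and the cesium frequency are the standard values quoted above.

```python
# Josephson frequency-to-voltage conversion (nu = 2eV/h) and the energy of the
# cesium-133 clock transition (Delta E = h * nu0).  The 10 GHz drive frequency
# is an arbitrary illustrative value.
h = 6.62607015e-34          # Planck constant, J*s
e = 1.602176634e-19         # elementary charge, C

K_J = 2 * e / h             # Josephson constant, Hz per volt
print(f"2e/h = {K_J:.6e} Hz/V  (~{K_J * 1e-12:.1f} MHz per microvolt)")

nu_drive = 10e9                       # applied microwave frequency, Hz (assumed)
V = nu_drive / K_J                    # corresponding junction voltage
print(f"a {nu_drive/1e9:.0f} GHz drive corresponds to V = {V*1e6:.3f} microvolts")

nu_cs = 9_192_631_770                 # cesium clock transition frequency, Hz
delta_E = h * nu_cs                   # hyperfine energy splitting, J
print(f"cesium Delta E = {delta_E:.3e} J = {delta_E / e * 1e6:.3f} micro-eV")
```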
When tackling a physics problem, an engineer will manipulate the axes/coordinate system, whereas a mathematician or physicist will use the original coordinate system and math. Why do engineers think differently? I know it's likely because that is how they are taught, but why are they taught that way?
closed as off topic by David Z May 31 '11 at 21:25
Who told you that this is special to engineers? I learned it both ways and I teach it both ways, because the important thing is that each person be able to do it in a way that makes sense to them and still be able to follow when someone else does it another way. – dmckee May 31 '11 at 18:26
My calculus professor made this general statement about engineers, and my engineering professors agree. – Dale May 31 '11 at 18:45
I'm with dmckee. Physicists will seek out coordinate systems to greatly simplify problems. – Jerry Schirmer May 31 '11 at 19:36
This strikes me as a question about engineers, not about physics. – David Z May 31 '11 at 21:26
Such statements are good for jokes, eventually! The span of very different attitudes of engineers (civil, electronics, surveillance, etc.) to math and physics shows that this statement on "engineers" is silly. – Georg Jun 1 '11 at 8:58
4 Answers
Choosing an appropriate coordinate system often vastly simplifies a problem. Anyone who wants to solve a problem expediently will try to find a coordinate system that simplifies the problem. If your professors told you that physicists do not do this, then your professors told you a falsehood.
Engineers and physicists have different requirements, so they use different tools, and sometimes use the same tools with different approaches. Engineers are usually after solving differential equations, or doing resonance analysis on some structure, which mostly involves doing Laplace transforms of complicated systems of equations; these equations might become significantly easier to solve in specific coordinate systems. Some coordinate systems are better than others for certain problems. Physicists also use this for solving equations (think how much easier it is to solve the Schrödinger equation for the hydrogen atom in spherical coordinates rather than, say, Cartesian). However, in theoretical physics one usually does not want to focus on how the equations look in specific coordinates; one actually wants to see what part of an equation does not change (or changes in a prescribed manner) when the coordinate system is changed, since the most interesting theoretical quantities are usually the ones that transform in particularly simple and elegant ways.
Because engineers like making things simple - if it's easier to work in the coordinate system of the aircraft (rather than galactic coordinates) then they will. On the other hand the physicists will redefine all the constants to 1 to simplify the sums.
There are a few reasons. The first is that most engineers do project work, so a coordinate system is usually developed to suit the project, making everything simple and easy to input into calculations. The second reason is that engineers like to look at solutions to problems by comparing the results of calculations and designs with other designs at different stages. It is far easier to compare the dimensions of items when the units and origin of the coordinate system are suited to the problems.
For example, if you had to compare the depth of bridge girders, but the girders were measured as offsets from an origin at the support of the bridge rather than simply by the depth of the girder, it would be far more difficult to do a simple comparison. The final reason is simply that one engineering structure often interfaces with another. If you take the example of the London Underground, it has its own coordinate system for x, y and z coordinates. This means that it is easy for a new project to connect to an existing project.
Every particle physics experiment I've worked on had its own coordinate system. Some had multiple systems (i.e. accelerator coordinates, electron spectrometer coordinates, hadron spectrometer coordinates, etc.). Sometimes these were reasonably obvious, other times ... "whaddya mean the z-axis runs 3.4 degrees below the horizontal?!?". Each set was chosen for a good reason. – dmckee Jun 1 '11 at 1:16
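The point running through the question and answers, namely that a well-chosen coordinate system simplifies the bookkeeping without changing the physics, can be illustrated with a toy calculation. The sketch below (with made-up numbers) follows a block sliding from rest down a frictionless incline: in axes rotated to lie along the slope the motion is one-dimensional with acceleration g·sin θ, and rotating the result back to lab axes reproduces the answer obtained there directly.

```python
# A block slides from rest down a frictionless incline of angle theta.
# Solve it in incline-aligned axes (trivial, 1-D) and in lab axes, and check
# that the two descriptions agree.  Numbers are illustrative.
import math

g = 9.81            # m/s^2
theta = math.radians(30.0)
t = 2.0             # elapsed time, s

# Incline-aligned axes: the only nonzero coordinate is the distance s along the slope.
s = 0.5 * g * math.sin(theta) * t**2

# Rotate back to lab axes (x horizontal, y vertical, sliding in the +x, -y sense).
x_from_rotation = s * math.cos(theta)
y_from_rotation = -s * math.sin(theta)

# Lab axes directly: acceleration along the slope is g*sin(theta)*(cos(theta), -sin(theta)).
ax = g * math.sin(theta) * math.cos(theta)
ay = -g * math.sin(theta) ** 2
x_direct = 0.5 * ax * t**2
y_direct = 0.5 * ay * t**2

print(f"incline axes:  s = {s:.3f} m along the slope")
print(f"rotated back:  (x, y) = ({x_from_rotation:.3f}, {y_from_rotation:.3f}) m")
print(f"lab axes:      (x, y) = ({x_direct:.3f}, {y_direct:.3f}) m")
```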
Thursday, December 30, 2010 Physics First ?  The way it ought to be. Thanks, Tennessee. Inverted curriculum makes physics first science in high school CHATTANOOGA (AP) -   Some educators are starting to turn the way they teach high school science upside down. Rather than starting off in ninth and 10th grades with biology and chemistry, they are going to begin teaching physics first. The idea is to teach physics -- normally a course for later grades -- to freshmen in an effort to get them familiar with scientific concepts. Teachers then will help students apply those concepts as they teach chemistry and biology. “The physics is really the underlying science for biology and chemistry,” said Robert Marlowe, a professor of physics at the University of Tennessee at Chattanooga, who is helping secure grant money from the National Science Foundation to certify more local teachers in physics and chemistry. “The benefit for students is they will see how strongly physics is tied into biology and chemistry,” he said. “They will get a sense for how it is not the case that physics lies down one path and chemistry is behind a different door and biology is behind a different door. That’s nuts! It’s never been that way.” The grant involves eight universities and 30 school districts as well as the Tennessee Department of Education. Regardless of whether Hamilton County receives $875,000 of the $10 million total grant, officials say local teachers will move toward offering an inverted curriculum. The money partly would go toward summer institutes to get more science teachers certified in chemistry and physics, one of which is required to teach the new freshman-level physics class. The physical world concepts class, which a handful of schools already have begun teaching to ninth-graders, is a lower-level conceptual class, Marlowe said, which still leaves room for a senior-level physics class in 12th grade. Since Tennessee’s academic standards have become more rigorous in the last year, ninth-graders are starting high school with more experience in math, which, in turn, makes conceptual physics easier to understand, he said. “It’s always been that physics is more abstract than chemistry and biology and has laid a heavier emphasis on math, so the students gear up with chemistry and biology and take physics their senior year, when they can better handle the math,” Marlowe said. “But if you concentrate on the concepts, you can get away with just algebra and geometry.” Jamie Parris, Hamilton County Schools’ director of secondary math and science, said the new freshman class will be very hands-on. “They will be doing more experimentation instead of being told what to do,” he said. For instance, rather than studying a two-day lesson on the motion of a pendulum, students may spend an entire week investigating the concept, Marlowe said. “They would ask ... ’What influences the motion of a pendulum? What kind of data do we need to gather to learn about pendulums?’ They’ll analyze and graph their results,” he said. “This is going to take a little time. It could be taught faster, but what would students walk away with? Typically not much. They’re developing a method to study all types of scientific phenomenon.” Jack Pickett teaches physical world concepts at Chattanooga Center for the Creative Arts, one of the few Hamilton County schools that already has made the switch to an inverted science curriculum. 
The ninth-graders are not ready for some of the more complex physics concepts, he said, so he picks and chooses the ones he thinks are important. The class includes lessons on electricity, Newton’s law of gravity and the properties of light and sound. Kelley Kuhn, head of the science department at Chattanooga Center for the Creative Arts, said she is particularly excited to teach biology next year to upperclassmen for the first time. “Biology is tough, but we’ve tended to water it down so we could teach it to freshmen,” she said. In addition to offering students what many teachers consider a more natural progression through the sciences, officials said they’re also hoping to get more students to take physics in high school and possibly later in college. “There haven’t been many students (majoring) in physics or physics education, especially,” Marlowe said. “We have been almost wholly focused on research-based physics and not getting them into the work force as teachers. And if you don’t have good high school preparation in physics, then you are hurting.” Whether or not she decides to pursue a career in physics, Chattanooga Center for the Creative Arts freshman Caitlyn Clear, 14, said she’s learned more in physical world concepts than in traditional science classes. “We use models and things. It’s more visual and more hands-on,” she said. “You can only learn so much with notes.” as reported December 26th, 2010 (AP). Tuesday, December 28, 2010 Happy New Decade !! Has anyone seen Schrödinger's Cat ? Were you a weenie like me on December 31, 1999 screaming from the rafters that the new century/millennium was NOT going to begin the next day but rather a year later on January 1, 2001 ? After all, there was no such thing as a Year Zero.* December 31st 1 B.C. (B.C.E. for you Atheists and Chinese, same difference) turned into January 1st, 1 A.D. (C.E. = Common Era), by our reckoning. Well, I was 43 on that day so I only screamed that in my head, not to others. Were I 23 though .... I would have been QUITE vocal, and probably lost friends in the process. Young people! Youth is a temporary affliction, but fortunately, Father Time has the cure. :-) But what does it matter, anyway? Every day is but one day later than the day before. A man who turns 30 should look on the bright side, for example, that he's tied with few others as the youngest man in his 30's on the planet, rather than the depressing thought that the single most exciting decade of his life is behind him. You are, after all, just one day older. Attitude is Everything. On Jan. 1, 2001 I was 44 with a svelte 43-year-old wife and 4 kids ages 11, 10, 7, and 5. Good times. Now add 10 to those numbers, and a few pounds all around. Still good times, just a bit crazier, as Science has proven that the raising/expense of teenagers is the source of gray hair. :-) So, what has happened since Jan. 1, 2001, the dawn of a new day/month/year/decade/century/millennium? Two new American Presidents, and 2 new wars, still ongoing. Terrorism of the worst sort. A once balanced budget that isn't anymore (thanks to Dumbya and his fellow "Legal Thieves" of the U.S. Treasury, World's largest piggy bank), and probably won't be for the foreseeable future, and the rise of The People's Republic of China, which slowly but surely recognized that Communism is a dead end. Its rise is ongoing, and I don't see the economic inertia changing direction anytime soon. What happened in Mathematical Physics?
Again, mostly War (what the Hell is WRONG with our Species?!), especially between SuperString Theory and Loop Quantum Gravity. Lee Smolin and Peter Woit published books that had the people questioning the direction of Theoretical Physics to the point that not only were funds to ST reduced, but funds to very badly needed research in the Quantum Field Theories of Quantum Electrodynamics and Quantum Chromodynamics were reduced as well. ALSO, String Theorists engaged in an ongoing Civil War amongst themselves over "The Anthropic Landscape" and the number 10 raised to the power of 500, large but finite, with Leonard Susskind of Stanford and Joseph Polchinski of Kavli taking the pro-Anthropic view, Nobel laureate and Kavli Director David Gross championing the anti-Anthropic view, and Edward Witten of IAS-Princeton taking the diplomatic moderate stance. And Lubos Motl got his PhD and unleashed his webblog upon the world, for good or for ... ill. But whatever he was and is, you can't say he's not entertaining ... in a Howard Stern kind of way. Nature abhors a vacuum, so into the fray stepped the highly speculative field of Cosmology, to the point that Dark Matter Phenomenology has replaced Strings as the primary choice of specialization amongst the grad students and post-docs at top physics research institutions in the USA, at least. But The Standard Model of Particle Physics still rules, and for this the first decade of the 21st century will likely best be known for the start-up of the LHC at CERN in Switzerland/France. Great expectations, wonderful results and thus good times are around the corner, with Nobels awarded on the one hand and careers crushed on the other, as  results both expected and unexpected are forthcoming soon from the greatest machine built by Humanity to-date. Of course, NOTHING advanced in the last decade as much as Biology, Astronomy, and the too often forgotten fields of Social Anthropology (sometimes called Cultural Anthropology) and Psychology. Well, Astronomy's advancement was almost pre-ordained, given the great results born of the great astronomical observatories, both in space and on Earth, planned in decades past and now up and doing their jobs. More yet to come. Biology has taken off like a bat out of Hell, so much so that 60% of ALL Science blogs are Biology/Medicine-based. But remember: we don't have Biology without Chemistry, and we don't have Modern Chemistry without Physics, thanks mostly to Wolfgang Pauli and Erwin Schrodinger. Cultural Anthropology and Psychology are very broad and open fields of study, but mostly they are very young, so sure there is much work to be done, and good news, it's being done. Overseeing ALL of this and most important all is the great tool that is Computers. Computer Science is ... everywhere, in every field. I can't believe that it was only 1995 when the milestone of half of American households became internet-wired, and then through the ONLY significant portal of the time that was AOL. That would make this past decade the FIRST of the full decades when we became more wired than ... not. And now, one last look at the LHC, specifically the CERN/LHC scientists celebrating the startup of same: Geez, are there ANY non-White people working at CERN and the LHC ?? If you enjoyed those pics, there are many more available from the source material that you can find by clicking here. * - ADDENDUM: "Year zero" does not exist in the widely used Gregorian calendar or in its predecessor, the Julian calendar. 
Under those systems, the year 1 BC is followed by AD 1. However, there is a year zero in astronomical year numbering (where it coincides with the Julian year 1 BC) and in ISO 8601:2004 (where it coincides with the Gregorian year 1 BC) as well as in all Buddhist and Hindu calendars. (from Wikipedia) Yes, more useless yet mildly interesting information to help you impress others with "the size of your intellectual penis" at your local Mensa gathering or at your Math Department's Pizza Friday seminar. Saturday, December 25, 2010 Happy Christmas from The Spirit of John Lennon, Steve Martin, John Malkovich, and Me The Timeless Message from the only person who could say then and forever that he was the leader of the most successful band of all time: Steve Martin's 5 Christmas Wishes: John Malkovich reads 'Twas the Night Before Christmas to children, in which he explains The Physics of Sleighs, and why The Santa of Portugal is the Most Feared: Merry Christmas from Multiplication by Infinity to you and yours. John Lennon performing Earth Science in the 1970's Friday, December 24, 2010 IceCube Neutrino Detector Now Finished WELLINGTON (AFP) – An extraordinary underground observatory for subatomic particles has been completed in a huge cube of ice one kilometre on each side deep under the South Pole, researchers said. Building the IceCube, the world's largest neutrino observatory, has taken a gruelling decade of work in the Antarctic tundra and will help scientists study space particles in the search for dark matter, invisible material that makes up most of the Universe's mass. The observatory, located 1,400 metres underground near the US Amundsen-Scott South Pole Station, cost more than 270 million dollars, according to the US National Science Foundation (NSF). NSF said the final sensor was installed in the cube, which is one kilometre (0.62 miles) long in each direction, on December 18. Once in place, the sensors will be forever embedded in the permafrost as the drill holes fill with ice. The point of the exercise is to study neutrinos, subatomic particles that travel at close to the speed of light but are so small they can pass through solid matter without colliding with any molecules. "Antarctic polar ice has turned out to be an ideal medium for detecting neutrinos," the NSF said in a statement announcing the project's completion. "It is exceptionally pure, transparent and free of radioactivity." Scientists have hailed the IceCube as a milestone for international research and say studying neutrinos will help them understand the origins of the Universe. "From its vantage point at the end of the world, IceCube provides an innovative means to investigate the properties of fundamental particles that originate in some of the most spectacular phenomena in the Universe," NSF said. Most of the IceCube's funding came from the NSF, with contributions from Germany, Belgium and Sweden. It is operated by the University of Wisconsin-Madison. From here
7 Laws to Bring Them All and In the Brightness Bind Them
1. Newton's First Law of Motion
2. Newton's Second Law of Motion
3. Newton's Third Law of Motion
4. The First Law of Thermodynamics
Energy can be transformed, i.e. changed from one form to another, but cannot be created nor destroyed. It is usually formulated by stating that the change in the internal energy of a system is equal to the amount of heat supplied to the system, minus the amount of work performed by the system on its surroundings.
5.
The Second Law of Thermodynamics
An expression of the tendency that, over time, differences in temperature, pressure, and chemical potential equilibrate in an isolated physical system. From the state of thermodynamic equilibrium, the law deduces the principle of the increase of entropy and explains the phenomenon of irreversibility in nature. The second law declares the impossibility of machines that generate usable energy from the abundant internal energy of nature by processes called perpetual motion of the second kind. The second law may be expressed in many specific ways, but the first formulation is credited to the German scientist Rudolf Clausius. The law is usually stated in physical terms of impossible processes. In classical thermodynamics, the second law is a basic postulate applicable to any system involving measurable heat transfer, while in statistical thermodynamics, the second law is a consequence of unitarity in quantum theory. In classical thermodynamics, the second law defines the concept of thermodynamic entropy, while in statistical mechanics entropy is defined from information theory, known as the Shannon entropy.
6. The Third Law of Thermodynamics
A statistical law of nature regarding entropy and the impossibility of reaching absolute zero, the null point of the temperature scale. The most common enunciation of the third law of thermodynamics is: As a system approaches absolute zero, all processes cease and the entropy of the system approaches a minimum value. This minimum value, the residual entropy, is not necessarily zero, although it is always zero for a perfect crystal in which there is only one possible ground state.
7. The Wheeler-DeWitt Equation
A functional differential equation. It is ill defined in the general case, but very important in theoretical physics, especially in quantum gravity. It is a functional differential equation on the space of three-dimensional spatial metrics. The Wheeler–DeWitt equation has the form of an operator acting on a wave functional; in cosmological models the functional reduces to an ordinary function. Contrary to the general case, the Wheeler–DeWitt equation is well defined in mini-superspaces like the configuration space of cosmological theories. An example of such a wave function is the Hartle–Hawking state. Bryce DeWitt first published this equation in 1967 under the name “Einstein–Schrödinger equation”; it was later renamed the “Wheeler–DeWitt equation”. Simply speaking, the Wheeler–DeWitt equation says $\hat{H}(x)\,|\psi\rangle = 0$, where $\hat{H}(x)$ is the Hamiltonian constraint in quantized general relativity. Unlike ordinary quantum field theory or quantum mechanics, the Hamiltonian is a first-class constraint on physical states. We also have an independent constraint for each point in space. Although the symbols $\hat{H}$ and $|\psi\rangle$ may appear familiar, their interpretation in the Wheeler–DeWitt equation is substantially different from non-relativistic quantum mechanics. $|\psi\rangle$ is no longer a spatial wave function in the traditional sense of a complex-valued function that is defined on a 3-dimensional space-like surface and normalized to unity. Instead it is a functional of field configurations on all of spacetime. This wave function contains all of the information about the geometry and matter content of the universe.
$\hat{H}$ is still an operator that acts on the Hilbert space of wave functions, but it is not the same Hilbert space as in the nonrelativistic case, and the Hamiltonian no longer determines evolution of the system, so the Schrödinger equation $\hat{H}\,|\psi\rangle = i\hbar\,\partial|\psi\rangle/\partial t$ no longer applies. This property is known as timelessness. The reemergence of time requires the tools of decoherence and clock operators. We also need to augment the Hamiltonian constraint with momentum constraints $\vec{\mathcal{P}}(x)\,|\psi\rangle = 0$ associated with spatial diffeomorphism invariance. In minisuperspace approximations, we only have one Hamiltonian constraint (instead of infinitely many of them). In fact, the principle of general covariance in general relativity implies that global evolution per se does not exist; t is just a label we assign to one of the coordinate axes. Thus, what we think of as time evolution of any physical system is just a gauge transformation, similar to the one induced in QED by a local U(1) gauge transformation $\psi \rightarrow e^{i\theta(\vec{r})}\psi$, where $\theta(\vec{r})$ plays the role of local time. The role of a Hamiltonian is simply to restrict the space of the "kinematic" states of the Universe to that of "physical" states - the ones that follow gauge orbits. For this reason we call it a "Hamiltonian constraint." Upon quantization, physical states become wave functions that lie in the kernel of the Hamiltonian operator. In general, the Hamiltonian vanishes for a theory with general covariance or time-scaling invariance. Thursday, December 23, 2010 Evil Santa Claus - With Love From Finland Each year, Sweden gives us the Nobel Prizes, except the one for Peace. Norway awards the Nobel Peace Prize, sometimes pre-emptively, sometimes to terrorists like Yassar Arafat, and sometimes just to piss off The People's Republic of China, like this year. But what of Finland, the forgotten Scandinavian country? Can they play too? Well fret no more, boys and girls! The Finns are back, especially this year, with "Rare Exports", a new film from Finland, in which Santa Claus is revealed to be an old demon (he eats children, so in a way you can say sure, he "likes" them), imprisoned in a mountain long ago, in Russia just across the Finnish border. And guess who breaks him out? Yup, the Americans. Why not? Who ya gonna call? :-) Here's the trailer: You better watch out. You better not cry. If you meet up with Santa, You surely will die! Please Santa, please don't eat little Timmy! Well, I put that up as a background on our computer, but my family thought Santa to be TOO evil looking, so I was forced to replace it, with this: Tuesday, December 21, 2010 Announcing SIAL: The Somerset Institute for Advanced Logic in Rocky Hill, NJ A group of wealthy charitable persons and deep intellectuals in Somerset County, NJ, who wish to remain anonymous for the time being, are pleased to announce the formation of yet another "Advanced Institute" to be called The Somerset Institute for Advanced Logic in the borough of Rocky Hill, NJ, in Somerset County, NJ, four miles north of Princeton. The purpose of SIAL is as follows: To Advance Humanity. More specifically: To Advance Humanity by assisting Physics and Physicists. More specifically: To Advance Humanity by assisting Physics and Physicists, by assembling in one place, the finest Applied Mathematicians and the finest Applied Computer Scientists on our planet.
And also, any future full-time employees will be well compensated, unlike most intellectuals. Minimum $150,000 for first-year employees (a doctorate in Mathematics or Computer Science required), and future salary to be determined. We'll see how this goes. Ironically, NO Physics PhDs will be invited, as the purpose of SIAL will be to assist Physics, not to employ physicists. In both Academia and other Advanced Institutes, such as IAS at Princeton Township, there are plenty of jobs to employ them. However, we are most interested in the opinions of the world's greatest Physicists as to how to proceed. SIAL is currently in the planning stages, and will not start up until the year 2015, at the soonest. Toward that end, there are nine individuals on Earth whose opinions we seek on how to go forward. All nine individuals are unaware of this announcement, today, on December 21st, 2010, yet we seek their opinion greatly. If not them, then we are open to the next best choices. Those individuals are:
- Paul Allen
- Steve Wozniak
- Jaron Lanier
- Andrew Wiles
- Shing-Tung Yau
- Edward Witten
- Steven Weinberg
- Gerardus 't Hooft
- Garrett Lisi
If any of you reading this can think of better choices, do tell. For example, we would also like to involve John Baez and Greg Egan somehow; indeed, they are on our shortlist as initial co-directors (Roman Republic consul style). The current state of affairs at SIAL is that we are researching a farm to buy in either The Borough of Rocky Hill or The Township of Montgomery, which surrounds it. In order to have SIAL, a non-profit institution, sustain itself into the future and beyond the initial investment, we intend the farm to be large enough to host an amusement park across the street from the Institute (so maybe we'll have to buy two farms ... what the heck, it's only money). Classical Newtonian Mechanics is cool, and roller coasters are the greatest "hook" in our opinion to attract people to that currently low-paying yet ultimately important field (if we are to advance our species) that is Science, specifically the "gold standard" of Science, that is Physics. Our "patron saint", if you will, will be Aristotle, and it is hoped we will one day succeed in erecting a twice-lifesize statue of that man at the entrance of our Institute. Onward and upward, Humanity!
Marble bust of Aristotle. Roman copy after a Greek bronze original by Lysippus c. 330 BC.
Sunday, December 19, 2010 Mathematical Physics Basics From here. The language of physics is mathematics. In order to study physics seriously, one needs to learn mathematics that took generations of brilliant people centuries to work out. Algebra, for example, was cutting-edge mathematics when it was being developed in Baghdad in the 9th century. But today it's just the first step along the journey.
Algebra provides the first exposure to the use of variables and constants, and experience manipulating and solving linear equations of the form y = ax + b and quadratic equations of the form y = ax² + bx + c.
Geometry at this level is two-dimensional Euclidean geometry. Courses focus on learning to reason geometrically, to use concepts like symmetry, similarity and congruence, and to understand the properties of geometric shapes in a flat, two-dimensional space.
Trigonometry begins with the study of right triangles and the Pythagorean theorem. The trigonometric functions sin, cos, tan and their inverses are introduced and clever identities between them are explored.
Calculus (single variable)
Calculus begins with the definition of an abstract function of a single variable, and introduces the ordinary derivative of that function as the slope of the tangent to its graph at a given point along the curve. Integration is derived from looking at the area under a curve, which is then shown to be the inverse of differentiation.

Calculus (multivariable)
Multivariable calculus introduces functions of several variables f(x,y,z...), and students learn to take partial and total derivatives. The ideas of directional derivative, integration along a path and integration over a surface are developed in two- and three-dimensional Euclidean space.

Analytic Geometry
Analytic geometry is the marriage of algebra with geometry. Geometric objects such as conic sections, planes and spheres are studied by means of algebraic equations. Vectors in Cartesian, polar and spherical coordinates are introduced.

Linear Algebra
In linear algebra, students learn to solve systems of linear equations of the form $a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n = c_i$ and express them in terms of matrices and vectors. The properties of abstract matrices, such as inverse, determinant and characteristic equation, and of certain types of matrices, such as symmetric, antisymmetric, unitary or Hermitian, are explored.

Ordinary Differential Equations
This is where the physics begins! Much of physics is about deriving and solving differential equations. The most important differential equation to learn, and the one most studied in undergraduate physics, is the harmonic oscillator equation, ax'' + bx' + cx = f(t), where x' means the time derivative of x(t).

Partial Differential Equations
For doing physics in more than one dimension, it becomes necessary to use partial derivatives and hence partial differential equations. The first partial differential equations students learn are the linear, separable ones that were derived and solved in the 18th and 19th centuries by people like Laplace, Green, Fourier, Legendre, and Bessel.

Methods of approximation
Most of the problems in physics can't be solved exactly in closed form. Therefore we have to learn technology for making clever approximations, such as power series expansions, saddle point integration, and small (or large) perturbations.

Probability and statistics
Probability became of major importance in physics when quantum mechanics entered the scene. A course on probability begins by studying coin flips and the counting of distinguishable vs. indistinguishable objects. The concepts of mean and variance are developed and applied in the cases of Poisson and Gaussian statistics.

Here are some of the topics in mathematics that a person who wants to learn advanced topics in theoretical physics, especially string theory, should become familiar with.

Real analysis
In real analysis, students learn abstract properties of real functions as mappings, isomorphism, fixed points, and basic topology such as sets, neighborhoods, invariants and homeomorphisms.

Complex analysis
Complex analysis is an important foundation for learning string theory. Functions of a complex variable, complex manifolds, holomorphic functions, harmonic forms, Kähler manifolds, Riemann surfaces and Teichmüller spaces are topics one needs to become familiar with in order to study string theory.

Group theory
Modern particle physics could not have progressed without an understanding of symmetries and group transformations. Group theory usually begins with the group of permutations on N objects, and other finite groups.
Concepts such as representations, irreducibility, classes and characters are then developed.

Differential geometry
Einstein's General Theory of Relativity turned non-Euclidean geometry from a controversial advance in mathematics into a component of graduate physics education. Differential geometry begins with the study of differentiable manifolds, coordinate systems, vectors and tensors. Students should learn about metrics and covariant derivatives, and how to calculate curvature in coordinate and non-coordinate bases.

Lie groups
A Lie group is a group defined as a set of mappings on a differentiable manifold. Lie groups have been especially important in modern physics. The study of Lie groups combines techniques from group theory and basic differential geometry to develop the concepts of Lie derivatives, Killing vectors, Lie algebras and matrix representations.

Differential forms
The mathematics of differential forms, developed by Elie Cartan at the beginning of the 20th century, has been powerful technology for understanding Hamiltonian dynamics, relativity and gauge field theory. Students begin with antisymmetric tensors, then develop the concepts of exterior product, exterior derivative, orientability, volume elements, and integrability conditions.

Homology
Homology concerns regions and boundaries of spaces. For example, the boundary of a two-dimensional circular disk is a one-dimensional circle. But a one-dimensional circle has no edges, and hence no boundary. In homology this case is generalized to "the boundary of a boundary is zero." Students learn about simplexes, complexes, chains, and homology groups.

Cohomology
Cohomology and homology are related, as one might suspect from the names. Cohomology is the study of the relationship between closed and exact differential forms defined on some manifold M. Students explore the generalization of Stokes' theorem, de Rham cohomology, the de Rham complex, de Rham's theorem and cohomology groups.

Homotopy
Loosely speaking, homotopy is the study of the hole in the donut. Homotopy is important in string theory because closed strings can wind around donut holes and get stuck, with physical consequences. Students learn about paths and loops, homotopic maps of loops, contractibility, the fundamental group, higher homotopy groups, and the Bott periodicity theorem.

Fiber bundles
Fiber bundles comprise an area of mathematics that studies spaces defined on other spaces through the use of a projection map of some kind. For example, in electromagnetism there is a U(1) vector potential associated with every point of the spacetime manifold. Therefore one could study electromagnetism abstractly as a U(1) fiber bundle over some spacetime manifold M. Concepts developed include tangent bundles, principal bundles, Hopf maps, covariant derivatives, curvature, and the connection to gauge field theories in physics.

Characteristic classes
The subject of characteristic classes applies cohomology to fiber bundles to understand the barriers to untwisting a fiber bundle into what is known as a trivial bundle. This is useful because it can reduce complex physical problems to math problems that are already solved. The Chern class is particularly relevant to string theory.

Index theorems
In physics we are often interested in knowing about the space of zero eigenvalues of a differential operator. The index of such an operator is related to the dimension of that space of zero eigenvalues.
The subject of index theorems and characteristic classes is concerned with relating this analytic information to the topology of the underlying manifold, as in the Atiyah–Singer index theorem.

Supersymmetry and supergravity
The mathematics behind supersymmetry starts with two concepts: graded Lie algebras, and Grassmann numbers. A graded algebra is one that uses both commutation and anti-commutation relations. Grassmann numbers are anti-commuting numbers, so that x times y = −y times x. The mathematical technology needed to work in supersymmetry includes an understanding of graded Lie algebras, spinors in arbitrary spacetime dimensions, covariant derivatives of spinors, torsion, Killing spinors, and Grassmann multiplication, derivation and integration, and Kähler potentials.

These are topics in mathematics at the current cutting edge of superstring research:

K-theory
Cohomology is a powerful mathematical technology for classifying differential forms. In the 1960s, work by Sir Michael Atiyah, Isadore Singer, Alexandre Grothendieck, and Friedrich Hirzebruch generalized cohomology from differential forms to vector bundles, a subject that is now known as K-theory.

Witten has argued that K-theory is relevant to string theory for classifying D-brane charges. D-brane objects in string theory carry a type of charge called Ramond-Ramond charge. Ramond-Ramond fields are differential forms, and their charges should be classified by ordinary cohomology. But gauge fields propagate on D-branes, and gauge fields give rise to vector bundles. This suggests that D-brane charge classification requires a generalization of cohomology to vector bundles, hence K-theory.

Overview of K-theory Applied to Strings by Edward Witten
D-branes and K-theory by Edward Witten

Noncommutative geometry (NCG for short)
Geometry was originally developed to describe physical space that we can see and measure. After modern mathematics was freed from Euclid's Fifth Axiom by Gauss and Bolyai, Riemann added to modern geometry the abstract notion of a manifold M with points that are labeled by local coordinates that are real numbers, with some metric tensor that determines an extremal length between two points on the manifold.

Much of the progress in 20th century physics was in applying this modern notion of geometry to spacetime, or to quantum gauge field theory.

In the quest to develop a notion of quantum geometry, as far back as 1947, people were trying to quantize spacetime so that the coordinates would not be ordinary real numbers, but somehow elevated to quantum operators obeying some nontrivial quantum commutation relations. Hence the term "noncommutative geometry," or NCG for short.

The current interest in NCG among physicists of the 21st century has been stimulated by work by the French mathematician Alain Connes.

Two Lectures on D-Geometry and Noncommutative Geometry by Michael R. Douglas
Noncommutative Geometry and Matrix Theory: Compactification on Tori by Alain Connes, Michael R. Douglas, Albert Schwarz
String Theory and Noncommutative Geometry by Edward Witten and Nathan Seiberg
Non-commutative spaces in physics and mathematics by Daniela Bigatti
Noncommutative Geometry for Pedestrians by J. Madore

Friday, December 17, 2010

Some Effects of Human Overpopulation

The Catholic Church's former attitude re Birth Control, now changed for 2010 and years thereafter.

Some problems associated with or exacerbated by human overpopulation:

• Inadequate fresh water[144] for drinking water use as well as sewage treatment and effluent discharge.
Some countries, like Saudi Arabia, use energy-expensive desalination to solve the problem of water shortages.[168][169]
• Depletion of natural resources, especially fossil fuels[170]
• Deforestation and loss of ecosystems[172] that sustain global atmospheric oxygen and carbon dioxide balance; about eight million hectares of forest are lost each year.[173]
• Changes in atmospheric composition and consequent global warming[174][175]
• Irreversible loss of arable land and increases in desertification[176] Deforestation and desertification can be reversed by adopting property rights, and this policy is successful even while the human population continues to grow.[177]
• Mass species extinctions[178] from reduced habitat in tropical forests due to slash-and-burn techniques that sometimes are practiced by shifting cultivators, especially in countries with rapidly expanding rural populations; present extinction rates may be as high as 140,000 species lost per year.[179] As of 2008, the IUCN Red List lists a total of 717 animal species as having gone extinct during recorded human history.[180]
• High infant and child mortality.[181] High rates of infant mortality are caused by poverty. Rich countries with high population densities have low rates of infant mortality.[182]
• Increased chance of the emergence of new epidemics and pandemics.[183] For many environmental and social reasons, including overcrowded living conditions, malnutrition and inadequate, inaccessible, or non-existent health care, the poor are more likely to be exposed to infectious diseases.[184]
• Starvation, malnutrition[143] or poor diet with ill health and diet-deficiency diseases (e.g. rickets). However, rich countries with high population densities do not have famine.[185]
• Low life expectancy in countries with the fastest growing populations[187]
• Unhygienic living conditions for many, based upon water resource depletion, discharge of raw sewage[188] and solid waste disposal. However, this problem can be reduced with the adoption of sewers. For example, after Karachi, Pakistan installed sewers, its infant mortality rate fell substantially.[189]
• Elevated crime rate due to drug cartels and increased theft by people stealing resources to survive[190]
• Conflict over scarce resources and crowding, leading to increased levels of warfare[191]
• Less Personal Freedom / More Restrictive Laws. Laws regulate interactions between humans. Law "serves as a primary social mediator of relations between people." The higher the population density, the more frequent such interactions become, and thus there develops a need for more laws and/or more restrictive laws to regulate these interactions. It is even speculated that democracy is threatened due to overpopulation, and could give rise to totalitarian-style governments.

Some economists, such as Thomas Sowell[192] and Walter E. Williams[193], argue that third world poverty and famine are caused in part by bad government and bad economic policies. Most biologists and sociologists see overpopulation as a serious threat to the quality of human life.[10][194]

From the Wikipedia article on Overpopulation.

Shut Up About "Climate Change." What Are We Doing To The Oceans?

As you can see from the following photographs, the crap that Industry (which we cannot exclusively blame as long as we use their products, which we all do) puts into the oceans far exceeds the crap put into the atmosphere. Since the dawn of the Industrial Revolution, the acidity of the seas has increased significantly.
What will be the result? We know more about the surface of the Moon than we do about our own oceans. The residents of Japanese fishing villages are well aware of what happens: giant Nomura's jellyfish, which have destroyed their local economies, below:

Blessed be the jellyfish and blessed be the sea cucumbers, for they shall inherit the earth.

Originally posted on Apr. 4, 2010. Back by popular demand. Mine.

Thursday, December 16, 2010

Space Colonization and Transhumanism - Inevitable?

I didn't write the following (source given at end):

Space colonies will become necessary to house the many billions of individuals that will be born in the future as our population continues to expand at a lazy exponential. In his book, The Millennial Project, Marshall T. Savage estimates that the Asteroid Belt could hold 7,500 trillion people, if thoroughly reshaped into O'Neill colonies. At a typical population growth rate for developed countries of 1% per annum (doubling every 72 years), it would take us 1,440 years to fill that space. Siphoning light gases off Jupiter and Saturn and fusing them into heavier elements for construction of further colonies seems plausible in the longer term as well.

Why expand into space? For many, the answers are blatantly obvious, but the easiest is that the alternatives are limiting the human freedom to reproduce, or mass murder, both of which are morally unacceptable. Population growth is not inherently antithetical to a love of the environment; in fact, by expanding outwards into the cosmos in all directions, we'll be able to seed every star system with every species of plant and animal imaginable. The genetic diversity of the embryonic home planet will seem tiny by comparison.

Space colonization is closely related to transhumanism through the mutual association of futurist philosophy, but also more directly because the embrace of transhumanism will be necessary to colonize space. Human beings aren't designed to live in space. Our physiological issues with it are manifold, from deteriorating muscle mass to uncontrollable flatulence. On the surface of Venus we would melt; on the surface of Mars we'd freeze. The only reasonable solution is to upgrade our bodies. Not terraform the cosmos, but cosmosform ourselves.

From The Top Ten Transhumanist Technologies at The Lifeboat Foundation

Steve here. I just found out about this website, so I haven't explored it much yet and thus have no comment at this time about the subject, except this. I must say going in that I guess I am a creature of my times, because I find "transhumanism" as spooky in an uncomfortable way as I find it inevitable, assuming we don't drive ourselves extinct in the meantime.

Division By Zero

In mathematics, division by zero is a term used if the divisor (denominator) is zero. Such a division can be formally expressed as a/0, where a is the dividend (numerator). Whether this expression can be assigned a well-defined value depends upon the mathematical setting. In ordinary (real number) arithmetic, the expression has no meaning, as there is no number which, multiplied by 0, gives a (for a ≠ 0). In computer programming, an attempt to divide by zero may, depending on the programming language and the type of number being divided by zero, generate an exception, generate an error message, crash the program being executed, generate either positive or negative infinity, or result in a special not-a-number value (see below).
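As a minimal illustration of the first behaviour in that list, here is a plain Python sketch (the exact error messages are an implementation detail): the interpreter raises an exception for both integer and floating-point operands. The IEEE-style infinities and NaN mentioned at the end of the paragraph are shown further below, in the section on computer arithmetic.

# A minimal sketch of the "generate an exception" behaviour listed above:
# plain Python raises ZeroDivisionError for both integer and float operands.
for a, b in [(1, 0), (1.0, 0.0)]:
    try:
        print(a / b)
    except ZeroDivisionError as exc:
        print(f"{a} / {b} -> ZeroDivisionError: {exc}")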
Historically, one of the earliest recorded references to the mathematical impossibility of assigning a value to a/0 is contained in George Berkeley's criticism of infinitesimal calculus in The Analyst; see Ghosts of departed quantities.

In elementary arithmetic

When division is explained at the elementary arithmetic level, it is often considered as a description of dividing a set of objects into equal parts. As an example, consider having ten apples, and these apples are to be distributed equally to five people at a table. Each person would receive $\frac{10}{5} = 2$ apples. Similarly, if there are 10 apples and only one person at the table, that person would receive $\frac{10}{1} = 10$ apples.

So for dividing by zero: what is the number of apples that each person receives when 10 apples are evenly distributed amongst 0 people? Certain words can be pinpointed in the question to highlight the problem. The problem with this question is the "when". There is no way to distribute 10 apples amongst 0 people. In mathematical jargon, a set of 10 items cannot be partitioned into 0 subsets. So $\frac{10}{0}$, at least in elementary arithmetic, is said to be meaningless, or undefined.

Similar problems occur if one has 0 apples and 0 people, but this time the problem is in the phrase "the number". A partition is possible (of a set with 0 elements into 0 parts), but since the partition has 0 parts, vacuously every set in our partition has a given number of elements, be it 0, 2, 5, or 1000.

If there are, say, 5 apples and 2 people, the problem is in "evenly distribute". In any integer partition of a 5-set into 2 parts, one of the parts of the partition will have more elements than the other.

In all of the above three cases, $\frac{10}{0}$, $\frac{0}{0}$ and $\frac{5}{2}$, one is asked to consider an impossible situation before deciding what the answer will be, and that is why the operations are undefined in these cases.

To understand division by zero, one must check it with multiplication: multiply the quotient by the divisor to get back the original number. However, no number multiplied by zero produces a product other than zero. For division by zero to be satisfied, the quotient would have to be bigger than all other numbers, i.e., infinity. This connection of division by zero to infinity takes us beyond elementary arithmetic (see below).

A recurring theme even at this elementary stage is that for every undefined arithmetic operation, there is a corresponding question that is not well-defined. "How many apples will each person receive under a fair distribution of ten apples amongst three people?" is a question that is not well-defined, because there can be no fair distribution of ten apples amongst three people.

There is another way, however, to explain the division: if one wants to find out how many people, each satisfied with half an apple, can be satisfied by dividing up one apple, one divides 1 by 0.5. The answer is 2. Similarly, if one wants to know how many people, each satisfied with nothing, can be satisfied with 1 apple, one divides 1 by 0. The answer is infinite; one can satisfy infinitely many people who are satisfied with nothing with 1 apple. Clearly, one cannot extend the operation of division based on the elementary combinatorial considerations that first define division, but must construct new number systems.
Early attempts

The Brahmasphutasiddhanta of Brahmagupta (598–668) is the earliest known text to treat zero as a number in its own right and to define operations involving zero.[1] The author failed, however, in his attempt to explain division by zero: his definition can be easily proven to lead to algebraic absurdities. According to Brahmagupta, zero divided by zero is zero. In 830, Mahavira tried unsuccessfully to correct Brahmagupta's mistake in his book Ganita Sara Samgraha: "A number remains unchanged when divided by zero."[1] Bhaskara II tried to solve the problem by defining (in modern notation) $\frac{n}{0}=\infty$.[1] This definition makes some sense, as discussed below, but can lead to paradoxes if not treated carefully. These paradoxes were not treated until modern times.

In algebra

It is generally regarded among mathematicians that a natural way to interpret division by zero is to first define division in terms of other arithmetic operations. Under the standard rules for arithmetic on integers, rational numbers, real numbers, and complex numbers, division by zero is undefined. Division by zero must be left undefined in any mathematical system that obeys the axioms of a field. The reason is that division is defined to be the inverse operation of multiplication. This means that the value of a/b is the solution x of the equation bx = a whenever such a value exists and is unique. Otherwise the value is left undefined. For b = 0, the equation bx = a can be rewritten as 0x = a or simply 0 = a. Thus, in this case, the equation bx = a has no solution if a is not equal to 0, and has any x as a solution if a equals 0. In either case, there is no unique value, so $\frac{a}{b}$ is undefined. Conversely, in a field, the expression $\frac{a}{b}$ is always defined if b is not equal to zero.

Division as the inverse of multiplication

The concept that explains division in algebra is that it is the inverse of multiplication. For example, $\frac{6}{3} = 2$ since 2 is the value for which the unknown quantity in $?\times 3 = 6$ is true. But the expression $\frac{6}{0}$ requires a value to be found for the unknown quantity in $?\times 0 = 6$. Any number multiplied by 0 is 0, and so there is no number that solves the equation. The expression $\frac{0}{0}$ requires a value to be found for the unknown quantity in $?\times 0 = 0$. Again, any number multiplied by 0 is 0, and so this time every number solves the equation, instead of there being a single number that can be taken as the value of 0/0. In general, a single value can't be assigned to a fraction where the denominator is 0, so the value remains undefined (see below for other applications).

Fallacies based on division by zero

It is possible to disguise a special case of division by zero in an algebraic argument,[1] leading to spurious proofs that 1 = 2 such as the following. With the following assumptions:
$0\times 1 = 0$ and $0\times 2 = 0$.
The following must be true:
$0\times 1 = 0\times 2$.
Dividing by zero gives:
$\frac{0\times 1}{0} = \frac{0\times 2}{0}$.
Simplified, this yields:
$1 = 2$.
The fallacy is the implicit assumption that dividing by 0 is a legitimate operation.

In calculus

Extended real line

At first glance it seems possible to define a/0 by considering the limit of a/b as b approaches 0. For any positive a, the limit from the right is $\lim_{b \to 0^+} \frac{a}{b} = +\infty$; however, the limit from the left is $\lim_{b \to 0^-} \frac{a}{b} = -\infty$, and so $\lim_{b \to 0} \frac{a}{b}$ is undefined (the limit is also undefined for negative a).
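Before moving on to 0/0, here is a quick numerical illustration of the one-sided limits just described (a Python sketch with arbitrarily chosen step sizes): a/b grows without bound as b shrinks toward 0 from the right, and tends to minus infinity from the left.

# Numerical illustration of the one-sided limits described above.
a = 1.0
for k in range(1, 6):
    b = 10.0 ** (-k)
    print(f"b = {b:.0e}:  a/b = {a / b:>10.1f},  a/(-b) = {a / (-b):>11.1f}")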
Furthermore, there is no obvious definition of 0/0 that can be derived from considering the limit of a ratio. The limit $\lim_{(a,b) \to (0,0)} \frac{a}{b}$ does not exist. Limits of the form $\lim_{x \to 0} \frac{f(x)}{g(x)}$, in which both f(x) and g(x) approach 0 as x approaches 0, may equal any real or infinite value, or may not exist at all, depending on the particular functions f and g (see l'Hôpital's rule for discussion and examples of limits of ratios). These and other similar facts show that the expression 0/0 cannot be well-defined as a limit.

Formal operations

A formal calculation is one carried out using rules of arithmetic, without consideration of whether the result of the calculation is well-defined. Thus, it is sometimes useful to think of a/0, where a ≠ 0, as being $\infty$. This infinity can be either positive, negative, or unsigned, depending on context. For example, formally:
$\lim_{x \to 0} \frac{1}{x} = \frac{\lim_{x \to 0} 1}{\lim_{x \to 0} x} = \frac{1}{0} = \infty$,
$\lim_{x \to 0^+} \frac{1}{x} = \frac{1}{0^+} = +\infty$ and $\lim_{x \to 0^-} \frac{1}{x} = \frac{1}{0^-} = -\infty$.
(Since the one-sided limits are different, the two-sided limit does not exist in the standard framework of the real numbers. Also, the fraction 1/0 is left undefined in the extended real line, therefore it and $\frac{\lim_{x \to 0} 1}{\lim_{x \to 0} x}$ are meaningless expressions.)

Real projective line

The set $\mathbb{R}\cup\{\infty\}$ is the real projective line, which is a one-point compactification of the real line. Here $\infty$ means an unsigned infinity, an infinite quantity that is neither positive nor negative. This quantity satisfies $-\infty = \infty$, which is necessary in this context. In this structure, $a/0 = \infty$ can be defined for nonzero a, and $a/\infty = 0$. It is the natural way to view the range of the tangent and cotangent functions of trigonometry: tan(x) approaches the single point at infinity as x approaches either $+\pi/2$ or $-\pi/2$ from either direction. This definition leads to many interesting results. However, the resulting algebraic structure is not a field, and should not be expected to behave like one. For example, $\infty + \infty$ is undefined in the projective line.

Riemann sphere

The set $\mathbb{C}\cup\{\infty\}$ is the Riemann sphere, which is of major importance in complex analysis. Here too $\infty$ is an unsigned infinity, or, as it is often called in this context, the point at infinity. This set is analogous to the real projective line, except that it is based on the field of complex numbers. In the Riemann sphere, $1/0=\infty$, but $0/0$ is undefined, as is $0\times\infty$.

Extended non-negative real number line

The negative real numbers can be discarded, and infinity introduced, leading to the set [0, ∞], where division by zero can be naturally defined as a/0 = ∞ for positive a. While this makes division defined in more cases than usual, subtraction is instead left undefined in many cases, because there are no negative numbers.

In higher mathematics

The question reappears in several further settings, among them non-standard analysis, distribution theory, and linear algebra.

Linear algebra

In matrix algebra (or linear algebra in general), one can define a pseudo-division by setting $a/b = ab^+$, in which $b^+$ represents the pseudoinverse of b. It can be proven that if $b^{-1}$ exists, then $b^+ = b^{-1}$. If b equals 0, then $0^+ = 0$; see Generalized inverse.
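To make the pseudo-division just described concrete, here is a small NumPy sketch (np.linalg.pinv computes the Moore-Penrose pseudoinverse; the 1×1 matrices are arbitrary examples): for an invertible b it reproduces ordinary division, and for b = 0 it follows the convention 0⁺ = 0, so "a/0" comes out as 0.

# Pseudo-division a/b = a b+ using the Moore-Penrose pseudoinverse.
import numpy as np

b = np.array([[2.0]])
zero = np.zeros((1, 1))
print(np.linalg.pinv(b))         # [[0.5]]  -- agrees with the ordinary inverse
print(np.linalg.pinv(zero))      # [[0.]]   -- the convention 0+ = 0

a = np.array([[6.0]])
print(a @ np.linalg.pinv(b))     # [[3.]]   -- "a / b"
print(a @ np.linalg.pinv(zero))  # [[0.]]   -- "a / 0" becomes 0 here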
Abstract algebra

Any number system that forms a commutative ring (for instance, the integers, the real numbers, and the complex numbers) can be extended to a wheel in which division by zero is always possible; however, in such a case, "division" has a slightly different meaning.

The concepts applied to standard arithmetic are similar to those in more general algebraic structures, such as rings and fields. In a field, every nonzero element is invertible under multiplication; as above, division poses problems only when attempting to divide by zero. This is likewise true in a skew field (which for this reason is called a division ring). However, in other rings, division by nonzero elements may also pose problems. Consider, for example, the ring Z/6Z of integers mod 6. The meaning of the expression $\frac{2}{2}$ should be the solution x of the equation 2x = 2. But in the ring Z/6Z, 2 is not invertible under multiplication. This equation has two distinct solutions, x = 1 and x = 4, so the expression $\frac{2}{2}$ is undefined.

In field theory, the expression $\frac{a}{b}$ is only shorthand for the formal expression $ab^{-1}$, where $b^{-1}$ is the multiplicative inverse of b. Since the field axioms only guarantee the existence of such inverses for nonzero elements, this expression has no meaning when b is zero. Modern texts include the axiom 0 ≠ 1 to avoid having to consider the trivial ring or a "field with one element", where the multiplicative identity coincides with the additive identity.

In computer arithmetic

In the SpeedCrunch calculator application, when a number is divided by zero the answer box displays "Error: Divide by zero". Most calculators, such as the Texas Instruments TI-86, will halt execution and display an error message when the user or a running program attempts to divide by zero.

The IEEE floating-point standard, supported by almost all modern floating-point units, specifies that every floating-point arithmetic operation, including division by zero, has a well-defined result. The standard supports signed zero, as well as infinity and NaN (not a number). There are two zeroes, +0 (positive zero) and −0 (negative zero), and this removes any ambiguity when dividing. In IEEE 754 arithmetic, a ÷ +0 is positive infinity when a is positive, negative infinity when a is negative, and NaN when a = ±0. The infinity signs change when dividing by −0 instead.

Integer division by zero is usually handled differently from floating point, since there is no integer representation for the result. Some processors generate an exception when an attempt is made to divide an integer by zero, although others will simply continue and generate an incorrect result for the division. The result depends on how division is implemented, and can be either zero or sometimes the largest possible integer.

Because of the improper algebraic results of assigning any value to division by zero, many computer programming languages (including those used by calculators) explicitly forbid the execution of the operation and may prematurely halt a program that attempts it, sometimes reporting a "Divide by zero" error. In these cases, if some special behavior is desired for division by zero, the condition must be explicitly tested (for example, using an if statement). Some programs (especially those that use fixed-point arithmetic where no dedicated floating-point hardware is available) will use behavior similar to the IEEE standard, using large positive and negative numbers to approximate infinities.
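The IEEE 754 rules described above can be seen directly from NumPy, which follows the standard (a minimal sketch; np.errstate is used only to silence the warnings NumPy would otherwise print): dividing by +0.0 and −0.0 gives infinities of opposite sign, and 0/0 gives NaN.

# IEEE 754 division by signed zero, as described above.
import numpy as np

with np.errstate(divide="ignore", invalid="ignore"):
    a = np.float64(1.0)
    print(a / np.float64(+0.0))                # inf
    print(a / np.float64(-0.0))                # -inf
    print(-a / np.float64(+0.0))               # -inf
    print(np.float64(0.0) / np.float64(0.0))   # nan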
In some programming languages, an attempt to divide by zero results in undefined behavior. In two's complement arithmetic, attempts to divide the smallest signed integer by −1 are attended by similar problems, and are handled with the same range of solutions, from explicit error conditions to undefined behavior.

Most calculators will either return an error or state that 1/0 is undefined; however, some TI and HP graphing calculators will evaluate (1/0)² to ∞. More advanced computer algebra systems will return an infinity as a result for division by zero; for instance, Microsoft Math and Mathematica will show a ComplexInfinity result.

Historical accidents

• On September 21, 1997, a divide-by-zero error on board the USS Yorktown (CG-48) Remote Data Base Manager brought down all the machines on the network, causing the ship's propulsion system to fail.[2]

References

1. Kaplan, Robert (1999). The Nothing That Is: A Natural History of Zero. New York: Oxford University Press. pp. 68–75. ISBN 0195142373.
2. "Sunk by Windows NT". Wired News, 1998-07-24. ,1282,13987,00.html