Sunday, December 28, 2014 Best Indie Album Of The Year First let's be clear about what this music is not about: pickup trucks, booze in plastic cups, or lovers that treat you bad. It's not about anything normally discussed in pop culture. It's substantial. If you like a little substance in your music, you may find this to be just your cup of tea. The music on "Western Front Of Dreams" has been described by fans as having elements of Pink Floyd, Peter Gabriel, Robert Fripp, The Beatles, 10CC, and David Lindley. It's related to pop and rock...but not exactly the same thing. The key to really appreciating the album is the layers...the songs all have layers to them, both musically and conceptually. There is immediate gratification, but there are subtle hues and complete twists that reveal themselves only with repeated listens. Conceptually it's a search for some sort of detente where the matters of the heart and the matters of the mind coexist precariously, but coexist nonetheless. Ben New is indeed a masterful songwriter and performer with a unique style that fuses elements of seemingly diametrical concepts into cohesive and ultimately beautiful wholes. The lyrics are guaranteed not to insult your intelligence. The music is crafty and expressive, covering a remarkably wide range of genres. You'll be treated to unexpected but truly satisfying orchestrations and arrangements with dynamic twists wrapped around surprisingly rich timbres featuring everything from banjos, to lap steel guitars, to wood flutes and melodica. Yet the guitar arrangements, chord voicings, and soloing are definitely worth the price of admission in themselves. Ben's virtuosity on guitar is evident even when not the focus of a piece. Be prepared for a memorable experience exploring this music. It's deep and iconoclastic while remaining wholly accessible. You can get yours at By the time this is published, the album should be available on iTunes and Amazon as well.
If you like digging deeper, and who doesn't really, there is a lyrics page here. (Click on the text below the breaking news banner that says Western Front Of Dreams). Also check out Ben New on Facebook here. Need a teaser video? Say no more! Wednesday, December 24, 2014 Tuesday, December 23, 2014 I love the smell of frankincense in the morning By far the longest running war, the War on Christmas...and all the counter wars against counter wars associated with this sordid affair...is the most lengthy conflict in the history of mankind. Its origins are sketchy, even among those on the front lines, but the modern conflict (though rooted in older events, Cromwellian and other skirmishes) actually started in 1959 when the John Birch Society released a pamphlet called “There Goes Christmas,” in which they claimed there was a new communist plot to take the Christ out of Christmas and replace the baby Jesus with United Nations decorations. In this enlightened era of course we know that it isn’t Communists or the U.N. trying to destroy Christmas; it’s those vexatious, sinister, secular liberal types who are obviously the minions of Satan! This holiday terrorism has apparently been raging during my entire lifetime, and one wonders just what effects the war has on everyday citizens. I mean, when one grows up in war, one becomes accustomed to it, immune to its horrors, right? I’m completely chagrined to confess that while my fellow citizens have been fighting this battle year after year, neither my family nor I personally have felt the effects. Why, there is a Christmas tree lit up right now as I type, and no one has stormed in to arrest me or rip the tree away to take it (or me) to the public square for burning. The soldiers on the front lines, Christian Soldiers... Commander in Chief Bill O’Reilly and his Chief Propaganda Minister Sarah Palin have been instrumental in keeping the public informed on this matter.
I was astonished to learn there were two new fronts in the war, one on either coast. Not surprising; the coasts are heavily populated with the secular liberals. Why, they even have a glitzy interactive map that shows where every battle is raging! Apparently a new front line had moved to my own backyard: Teaneck, New Jersey, where for the last few years during the holiday season, Teaneck Township has had a menorah and various decorations and lights scattered around their town hall, but according to at least one community activist, they refuse to appropriately acknowledge Christmas. Christian Soldier Hector Ferrer has been fighting to get a nativity scene and Christmas tree displayed on the municipal building's property, but the township has resisted!!! “The former city manager stated that it was too controversial of an issue,” Ferrer explained. The township finally agreed a few years ago, but Ferrer said that it caused a very contentious relationship. The nativity scene was placed in a lower visibility area and the spotlight for that display was seemingly aimed away... and the Christmas tree lights allegedly burned out and hadn't been replaced in over three years! The Township has attempted to placate Ferrer with numerous excuses... first they said the nativity scene and tree were controversial. Then they said there were electrical issues that led to the Christmas tree not being lit; however, Ferrer took photos which showed that the lights simply weren't plugged in! To which the township responded just yesterday, saying they have no obligation to provide any decorations or lights and these things just take time. Wanting to get a firsthand look at how the war was affecting people, I chose to venture there, since there was no budget to go to the other front on the west coast. As I ventured closer to the site of this latest battle I expected to see a city ravaged by intense fighting.
I was certain to see hordes of out-of-work elves feasting on the carcasses of downed reindeer, the bodies of blow-up Santas strewn across the road, their suits made crimson by their own blood; yet I saw none of this. In fact, everything looked normal; even festive. Every shop I passed had Christmas displays, Christmas lights sparkled on nearly every house, and giant decorated trees adorned the malls and government buildings. How could this be, Christmas decorations in public? Were these people mad? Surely bands of black, gay, atheist, Jewish, Muslim, Wiccan, Festivus worshippers would descend at any minute and destroy everything. I was beginning to think I’d been misinformed about the war for all these years. Yes, when I reached the front lines there in Teaneck my suspicions were confirmed. The park was filled with people caroling, strolling with Christmas packages; there was a Santa on a skateboard....but there were no Nativity scenes. I asked a passerby what had happened to them and was astonished to hear that there had been a battle, a year or two ago, about having 14 Nativity scenes displayed on land owned by the city. OMFG!!! Was Fox news right?? I asked a passing stranger what had happened to the nativity scenes. Were they fire bombed? Was baby Jesus hurled into the ocean? Were the 3 Wise Men napalmed? “No, no, no,” she responded. “They moved them to the front lawn of the Lutheran Church across the street.” Of course! Well, we do have a separation of church and state, in theory, so that one group's religious beliefs cannot be forced on another. And to be fair, government, local or otherwise, really does have a responsibility to deal with other issues besides seeing to it that religious icons are placed about to the satisfaction of every citizen. So this war really wasn’t much of a war after all. No reports to be filed today from the front. There is no front. As I stood there puzzled, Christmas was everywhere I looked. I couldn’t escape it.
No matter where I went. Even if I couldn’t see displays of Christmas, I would hear the nauseating sound of the “Little Drummer Boy” droning on in elevators and supermarkets. Commander O’Reilly and Chief Propaganda Minister Palin were simply talking shit. They were particularly incensed by people saying “Happy Holidays” rather than “Merry Christmas,” claiming that saying Happy Holidays pulls the Christ out of Christmas and infringes on the rights of Christians. I always thought saying “Happy Holidays” was a nice way to include everyone, since many religions celebrate this time of year, and for those that aren’t religious it’s a nice holiday as well. Apparently, however, being inclusive and pleasant to one another is unchristian. The latest strategy in the war is for good Christians to shank anyone who says “Happy Holidays.” In related news, the image of Cthulhu has appeared on toast. And...the image of Satan has appeared on Rupert Murdoch's head. As well as Santa! In summary, it's a miracle... not that these images appear to us on toast or grilled cheese sandwiches...but that a species that is obviously so mentally deficient has survived this long. Sunday, August 31, 2014 Oedipus In The 21st Century-- The Primrose Path To Democracy's End By Without Shoes Correspondent Ben New You know, to say democracy is struggling in America is almost a cliché today, and we are tempted to shrug and move on about our personal business when we see evidence of this or hear someone speak up about it. I suggest, however, that we have unwittingly already lost this age-old conflict. There is no denying that in nearly every society throughout human history, there have been people who have tried to constitute themselves as an aristocracy by imposing an internalized psychological condition of deference that crushes people's minds, often employing the destruction of reason and logic, replacing it with a marginally tangential, skewed version of reality and deference.
Democracy, despite any faults one may find in it, was the only solution to emerge to prevent the domination of society by the few sociopaths who believe they have 'divine' theochristic rights to rule over the rest of us. The alternative to democracy, unless and until something 'better' is imagined, is always some form of totalitarianism, an aristocracy, or an oligarchy of some kind. Alternatives to democracy have always led directly to tyranny. I'm sorry to report that our nation (and many of our political allies in the world are following suit, walking down the same path to the proverbial slaughterhouse) simply isn't a democracy at all any longer. In an unknowing Oedipus-like scenario, you and I have been inadvertently duped into inglorious ignominy. It is we, our generations {the ones alive during the last 35 years or so}, that will receive the blame and shame in history for this contemptible disgrace, because it's happened on our watch. We now live in a completely faked, managed democracy. A deception where the illusion is maintained on the surface, while a sort of inverted totalitarianism conjoins economic and state powers. We have become a society of politically uninterested and submissive pawns, with self-proclaimed elites eager to keep us that way and the tools to accomplish this. In this "managed democracy" the public is shepherded; it is not sovereign. Corporate power no longer answers to any type of state controls. Today's America may not be morally or politically comparable to totalitarian states of the past like Nazi Germany, but clearly unchecked economic power combined with state power has its own unnerving pathologies that do in fact align with the goals of fascism. (Fascism was defined by Mussolini, who coined the term, as a bundling of state and corporate [economic] powers.)
It's always a minefield to bring up fascism, as it becomes a tired banality and is so often misused, but it would be a mistake of the highest order to ignore the similar dynamics at work in this scenario. We simply can't intellectually or realistically avoid it. Myth makers alone dominate today's politics, as the quest for an insufferable, impossibly self-defeating, ever-expanding economy combines with the deplorably perverse attraction to endless, indefinable war on obliquely indefinable "terror". This is the diagnosis of the condition. What is the solution? This is a brave new world...though in a sense it's the same old struggle. The difference is the self-proclaimed Pharaohs of today have finally found the method that keeps them hidden from plain view while dismantling democracy on a staggering scale quite effectively. None of this relates to the past. The "left-right" political dynamics of the past are completely irrelevant, and have been for some time. It's just a prop on the stage. It's bait to get the people who do get involved away from the real mechanisms, and to further divorce the rest of the population from the process. Sure, the product on the left or right will throw their supporters a bone once in a while. So voting does matter, to a degree, in order to get a bone or scrap that the real powers don't care about. But in the big picture, on serious matters of policy that truly have impact in YOUR world, you and I will have no say whatsoever. That's a fact. The solution may not be a political one at all, actually. In a machine that is powered by wealth and mythology alone, unless you have endless supplies of cash to pay the piper, you will hear no music. However you CAN teach people how to deconstruct the myths. Teach logic. Actual democracy requires that a good majority of citizens be capable of logical thought. Starting with the Greeks, logic has been taught in a fairly narrow way.
Logic of course does include syllogisms, but it also includes a great deal of savoir faire about what actually constitutes a good argument and conversely a good counterargument... as well as a good counterargument to that. In a sense, it's like visualizing a 3-dimensional map of the arguments. Our existing curriculum regarding "critical thinking" is generally weak. And I propose this is not incidental. (It's well known that people with reasoning skills are much harder to manipulate.) It's become another banal cliché to lambaste public education; the truth is there are far more triumphs than failures, and the quality of one's educational experience is very localized, as each district has its own set of socio-economic problems to contend with. In other words, there is no blanket solution, no magic political bullet to solve the issues that need improvement in general education. However, one ingredient in the curriculum that is missing in action and needs to be addressed across the board is teaching logic. And the first thing to address in teaching logic SHOULD be close analysis of irrationality. Understand that I'm not suggesting that the purpose of reason is to petition authorities more effectively; rather it's to disarm their obfuscations and myths, to help others cut through the darkness of deception. Education about logic is critical to this. Reason is not merely the property of the elite. That is part of the myth, part of the deception. This notion is sold as part of the package that deprives the common people of the capacity to engage in democracy. Reason itself is the tool that dismantles this hybrid tyranny we face. You may argue that politics based on reason tilts the playing field in favor of the elite. This may be historically true; however, as I said earlier, this is a brave new world and the dynamics of the past are irrelevant. In the past only those who could afford to purchase advanced education would have access to the knowledge and history of logic.
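There really is nothing esoteric about checking a syllogism; validity can even be tested mechanically. As a toy illustration (a hypothetical sketch, not anything from this essay), here is a truth-table test in Python: an argument is valid exactly when no assignment of truth values makes every premise true while the conclusion is false.

```python
from itertools import product

def valid(premises, conclusion, variables):
    """An argument is valid iff no assignment of truth values makes
    all premises true while the conclusion is false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample
    return True

# Modus ponens: "If P then Q; P; therefore Q" -- a valid form.
print(valid(
    premises=[lambda e: (not e["P"]) or e["Q"], lambda e: e["P"]],
    conclusion=lambda e: e["Q"],
    variables=["P", "Q"],
))  # True

# Affirming the consequent: "If P then Q; Q; therefore P" -- a fallacy.
print(valid(
    premises=[lambda e: (not e["P"]) or e["Q"], lambda e: e["Q"]],
    conclusion=lambda e: e["P"],
    variables=["P", "Q"],
))  # False
```

The fallacy fails because the assignment P = false, Q = true makes both premises true and the conclusion false; spotting exactly that kind of counterexample is the skill being argued for here.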
This MUST not be the case anymore, as there is nothing esoteric or terribly difficult about critical thinking; it can be taught to everyone, and in fact it would likely improve the overall ability to perform better in all areas of academics, including testing. I would also respond that politics based on money has tipped the field far more drastically and dangerously. The reality is that democracy needs the citizenry to be educated, and the skills of reasoning are the most basic foundation of democratic education. Democracy simply cannot function in any other way. Aristocratic rule is not reinforced by the use of reason. The reality is quite the reverse: in order to dominate society, the would-be Pharaoh must simulate reason, and pretend that their deception is itself reasonable when it in fact is not. How many pundits (George Will and Thomas Sowell come to mind) make their living by saying illogical things in a reasonable tone of voice? We will be subjugated and dominated unless the great majority of citizens can identify just how this trick works. Maybe you can't afford to buy your own senator, lobbying firm, or political party...but you can teach others about logic. Especially the young. Because it will take time to thwart the last 35-40 years of dismantling democracy, and the torch is passed to the young. Their role is critical and the consequences couldn't be more momentous for the survival of civilization. Sunday, August 24, 2014 If Schroedinger's Cat Enters The Forest, & No One Is Around To Observe It, Is The Future A Porkchop? By Benjamin New I often find myself thinking about the nature of the time-space bubble we find ourselves in. Perhaps you do as well. Certainly this is the primary function of science: to study the nature...of nature. According to the Copenhagen interpretation of quantum humor, the state of your reaction to my jokes is undefined until you observe them written here.
Then you collapse into one of the two humor eigenstates: either rolling on the floor laughing or frowning and groaning. There are a few assumptions being made on my part here, basically that my dear readers have a working familiarity with the basics of Quantum Mechanics. Fear not if you don't think you do! Here are a few links that will do the trick...if, for instance, you think that "the collapse of the wave function" is a condition over-advertised by pharmaceutical companies that's cured with a blue pill, or if you think "discrete position and momentum" might be an illustration in some new sort of hipster Kama Sutra (Oh come on now, obviously sex is a semi-classical process---the quantum corrections are WAY too small to matter!), then click this link to clarify things in regards to the rest of this article. Quantum Mechanics is the most battle-tested theory in all of science. And one-third of our economy involves products designed using it. The device you are reading this on is directly a result of quantum mechanics. The principles of Q.M. work for conducting fundamental science as well as for very practical applications. Some folks suggest that is all we need to it works, so why question a good thing? However, this consistently dependable and useful physics model absolutely challenges any reasonable worldview, as it either denies the existence of a physically real world independent of its observation... or it suggests an unimaginable, infinitely unfolding series of worlds forming from every possible event that takes place. Today friends, we are going to concern ourselves with the two most widely accepted views regarding the interpretation of quantum mechanics: the Copenhagen Interpretation, and the Many Worlds interpretation.
Copenhagen comes down to this: “everything we can measure ultimately behaves like a quantum wave, but this doesn't apply to me, so what are the implications of that?” Many Worlds boils down to saying: “everything we measure ultimately behaves like a quantum wave, what are the implications of that?” The Copenhagen interpretation says what it says, and it clearly does not have apodictic dents in its armor. It has no internal inconsistencies and it is not in contradiction at all with any observation done as of today...but the same can be said of Everett's Many Worlds interpretation. Copenhagen is considered the standard model, while Many Worlds has steadily gained in acceptance. What we know is that particles are in fact in a state of superposition; that is to say, there is empirical evidence that simultaneous states co-exist in matter. Paradoxes seem to occur when we try to understand why or how. These undefined superpositions have been observed in larger quantum systems as well...and this is where the rub comes into play. Laws of physics as we understand them don't have exceptions. Gravity exerts force on galaxies, planets, bowling balls, beads, or molecules in a predictable and identical manner. Events in our macroscopic world all seem to have a causality. It's when we try to comprehend what our observations of very small particles mean in the macroscopic world that we run into what seem to be contradictions with what we think we know of this objective world we are accustomed to. However, it may well be that these paradoxes only exist in our brains. In other words, these paradoxes are only conflicts between reality and our feelings about what reality ought to be as opposed to what it actually is. Though physicists seek to understand the nature of the physical universe, many would rather stick to the mathematical calculations that represent physical reality and leave the interpretations and implications out of the picture.
(Professor Stephen Hawking and probably most physicists would rather not concern themselves with the interpretations, while Einstein, Heisenberg, Bohr, & others certainly did. They discussed the possible philosophical implications as well as the physical implications.) There is an implicit ontology that no one really wants to discuss but everyone ponders when it comes to quantum mechanics. As far as can be tested, the Copenhagen model works, but comes at a price that Einstein & Schrodinger suspected was too high. And that price is acausal randomness and the implication that there is ultimately no objective reality. Our Newtonian reality intuitively does not seem to be that way. (Yet it probably is.) As Einstein said, "the moon is not there if you don't look at it"...though he was being wry, applying principles observed in the micro system to the macro system. One of the key concepts of the C.I. is the wave function collapse: the idea that every event exists as a “wave function” which contains every possible outcome of that event, which “collapses”—becoming an actual outcome, once it is observed. A thought experiment that illustrates this (similar to Schrodinger's Cat) is that if a room is unobserved, anything and everything that could possibly be in that room exists in “quantum superposition”—an indeterminate state, full of every possibility, until someone enters the room and observes it, thereby collapsing the wave function and solidifying the reality. The problem with this is that observation is given some sort of vaguely defined superpower, in a way. It seems to suggest that we are somehow 'different' than the quantum systems we observe, or outside of them...and I'm not sure I find that entirely feasible. Though as Niels Bohr, the father of the orthodox 'Copenhagen Interpretation' of quantum physics, once said, "Anyone who is not shocked by quantum theory has not understood it".
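The collapse recipe described above can be sketched in a few lines of Python: each outcome occurs with probability equal to the squared magnitude of its amplitude (the Born rule), and after the measurement only that one outcome survives. This is a hypothetical toy, not a simulation of any real experiment; the function name `measure` and the state layout are my own.

```python
import random

def measure(amplitudes):
    """Born rule: outcome k occurs with probability |amplitude_k|^2;
    the state then 'collapses' so only outcome k remains."""
    probs = [abs(a) ** 2 for a in amplitudes]
    total = sum(probs)               # ~1.0 for a normalized state
    r = random.uniform(0, total)
    acc = 0.0
    for k, p in enumerate(probs):
        acc += p
        if r <= acc:
            collapsed = [0.0] * len(amplitudes)
            collapsed[k] = 1.0       # post-measurement: pure outcome k
            return k, collapsed
    return len(probs) - 1, amplitudes  # guard against rounding

# The unobserved room: an equal superposition of two possibilities.
state = [2 ** -0.5, 2 ** -0.5]       # each with probability 0.5
outcome, state = measure(state)
print(outcome, state)
```

Note the asymmetry this toy makes visible: before the call, both possibilities are present in the state; after it, re-measuring the collapsed state returns the same outcome forever. That privileged, irreversible role of "observation" is exactly what the essay goes on to question.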
Some people have come to the conclusion that consciousness itself and particle physics are inter-related. There was a legitimate paper on this in 1997 by Dr. Henry Stapp at the University of California, in which he suggested that the synapses in your brain are so small that quantum effects are occurring. He suggested that there is quantum uncertainty about whether a neuron will fire or not, and this degree of freedom that nature has allows for the interaction of mind and matter. (I'll just say that this really doesn't seem intuitively correct to me, but that is just my opinion...and in the realm of QM, intuitions based on experiences in the classical or Newtonian world can't be taken all that seriously, it seems.) He's written several books on the subject, and he is a quantum physicist who worked with both Wolfgang Pauli and Werner Heisenberg. (Unfortunately this sort of suggestion has given rise to all sorts of ludicrous quackery...which is clearly NOT legitimately related to QM...for instance, NO legitimate science claims the result of the cat in the quantum box can be manipulated by wishful thinking...there's a ton of new-age quantum quackery proliferating out there on the web; avoid it if possible... except for a laugh.) I have a problem with this interpretation, actually; it does not strike me as consistent or reasonable within what we know of nature...even at the quantum level. A law of physics should apply across the board...if there is an exception, this would be a first. I mean to say that the Newtonian world we are used to interacting in isn't, and shouldn't be, separate in terms of physical laws from the quantum world. Rather, isn't it more likely that everything is a quantum system, every combination of particles a quantum machine of sorts... including ourselves? Even the entire universe....a quantum machine. This is why I tend to think that Hugh Everett's Many Worlds interpretation is, well,....less wrong. Welcome to Many Worlds!
Many Worlds was proposed by Hugh Everett in 1957. Essentially it states that there is no wave collapse, but when we observe something the universe splits or branches off into an alternate timeline or world, so to speak. Max Tegmark, the well known and respected astrophysicist, says this about Everett's theory: "Everett’s theory is simple to state but has complex consequences, including parallel universes. The theory can be summed up by saying that the Schrödinger equation applies at all times; in other words, that the wavefunction of the Universe never collapses. That's it - no mention of parallel universes or splitting worlds, which are implications of the theory rather than postulates. His brilliant insight was that this collapse-free quantum theory is, in fact, consistent with observation." We have observed larger and larger objects that can be in multiple states, using the same double slit experiment or variations of it. The same wave-particle duality was exhibited in a molecule which had fully 108 atoms, made up of 2,424 protons, neutrons, and electrons. The entire molecule (actually, thousands of them) interfered with itself, demonstrating that it was in multiple states. It seems that everything that can indeed be tested has demonstrated quantum superposition, so why not just extend that to "everything obeys the same quantum mechanical laws, including superposition"? This is the essence and indeed the beauty of Everett's interpretation. It applies to everything. The math is elegant. It makes logical sense, requires no particular special provisions, no dividing the universe into an indeterministic microscopic world and a deterministic macroscopic world. Many would say no, "the physics at small scales is just different!" Maybe so...but as I've stated earlier, there are no physical laws that work differently on different scales. In our experience, the same physical laws (see the Navier-Stokes equation) govern everything.
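The "interfered with itself" claim above rests on one simple piece of arithmetic: in quantum mechanics you add the amplitudes from the two slits before squaring, which produces a cross term that the classical sum of squared amplitudes lacks. A minimal numeric sketch of that arithmetic (toy units; `d`, `wavelength`, and `L` are made-up illustrative values, not the parameters of the molecule experiment):

```python
import math

def intensity(x, d=1.0, wavelength=0.5, L=100.0):
    """Amplitudes from two slits a distance d apart, arriving at screen
    position x a distance L away. The quantum intensity |psi1 + psi2|^2
    carries an interference (cross) term missing from the classical
    sum |psi1|^2 + |psi2|^2."""
    # Phase of each path is 2*pi * (path length) / wavelength.
    phi1 = 2 * math.pi * math.hypot(L, x - d / 2) / wavelength
    phi2 = 2 * math.pi * math.hypot(L, x + d / 2) / wavelength
    psi1 = complex(math.cos(phi1), math.sin(phi1))
    psi2 = complex(math.cos(phi2), math.sin(phi2))
    quantum = abs(psi1 + psi2) ** 2                 # amplitudes add first
    classical = abs(psi1) ** 2 + abs(psi2) ** 2     # probabilities add: always 2
    return quantum, classical

for x in (0.0, 12.5, 25.0):
    q, c = intensity(x)
    print(f"x={x:5.1f}  quantum={q:4.2f}  classical={c:4.2f}")
```

At the center of the screen (x = 0) the two path lengths are equal, so the amplitudes add constructively and the quantum intensity is double the classical value; moving along the screen, it oscillates between zero and that maximum. The classical column never varies, which is why observing fringes demonstrates superposition.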
Generally, in science, all laws apply at all scales; it's just a question of degree. Relativity is at work at all velocities, you just do not notice the effects until you move really fast. Should this "size argument" (that larger objects somehow have different laws) turn out to be true, it would be the very first instance of such a thing. The quantum world builds the classical world. Everything in the classical, macroscopic world is composed of microscopic particles acting in unison. The problem is, if it's a problem at all, that the consequence of not collapsing the wave function is that the universe is constantly splitting into alternate worlds which accommodate each possible outcome of every single event. It's likely that for many years before Everett wrote his paper, people had thought about this problem, but Everett was the first to propose a logically consistent way of removing the barrier, backed up by convincing equations to support it. Schrödinger had said at a physics conference in Dublin, a few years before Everett published his paper, that physicists fear that if we don't have the collapse, "We should find our surroundings rapidly turning into a quagmire, or sort of featureless jelly or plasma, all contours becoming blurred, we ourselves probably becoming jellyfish." In other words, if there is no collapse, then all these possibilities are going to start propagating all over the place, and there won't be any cause or effect anymore. We ourselves, our physical beings, being quantum systems, become duplicated, and every possible position that a human body can be in will suddenly exist in classical reality. Schrödinger looked at that and dismissed the idea...the implications being too mind-boggling for even a quantum physicist. Even a "mystical" explanation that makes no logical sense seemed better than that to him...and many others!
Everett, who was very much a realist, simply could not accept that consciousness was "privileged," or that the universe would not exist without it. He assumed human consciousness is a quantum-mechanical system like any other quantum-mechanical system. Everett had an advantage the earlier theorists didn't, though: information theory. We have called the era in which we live the Information Age. Do you know why? We entered the Information Age in 1948 thanks largely to Norbert Wiener (the father of cybernetics) & Claude Shannon. Shannon and Wiener proposed remarkable theories that said that information actually has a physical reality that is independent of any kind of meaning that you might want to give it. (Which is synchronous with observations made in QM.) The development of understanding information itself as a physical thing has given birth literally to all modern technology, including the internet. Well, Everett began to calculate using information theory, which had just been invented. He developed a mathematical argument showing how data correlates within itself, which is what happens in a superposition. Essentially he showed that the Schrödinger equation never stops applying, including in the classical world. In Everett's theory, what happens is that when a human actually looks at a clock or any other object, he or she splits like an amoeba. (In his view, the observation interaction is just an exchange of energy. A person looking at the clock, in our example, is an energetic interaction, with photons of light bouncing off the clock and going into the person's eye.) According to Everett's view, when the human correlates him- or herself (interacts or exchanges energy with the clock or whatever is interacted with), then he or she splits into copies of him- or herself, one for each element in the superposition. This split creates the 'many worlds'.
As bizarre as this sounds—a person splitting into numerous copies of him- or herself—Everett's theory has stood the test of time and peer review. It has not been shown to be mathematically incorrect. And to be sure, people have tried very hard. They have found some minor mathematical gaps, but no one has been able to fault the basic mathematical logic, which has made a very convincing case that every time there is an interaction anywhere in the universe, one of the systems splits in order to accommodate all of the elements or superpositions that are contained in the wave function that describes the observed system. So the basis for having multiple universes emerges from this solution to the measurement problem in QM. The universal wave function of Everett's theory ultimately describes a series of branching universes that make up what David Deutsch has called the "multiverse," and recent discoveries in Astrophysics as well as in Neuroscience hint at this as well. (Hence the idea of titling this section "synchronicity".) In these branching universes, there are many trillions of copies of you, of your neighbor, of Everett. There are branches in which Everett is still alive, others where he died at birth. Everything that is physically possible happens in some branch of the multiverse. The implications of this are nothing less than astonishing. There is no ceiling of improbability, other than the laws of physics, which may actually operate differently in other universes. Whatever could possibly occur does. So when you are confronted with circumstances that appear to be impossible, like a missing woman unknowingly standing in the background of a photo being taken of her family for a newspaper story about her own disappearance, remember that nothing is impossible on a large enough scale—indeed, given an infinite number of chances, literally anything you can imagine is not only possible, but inevitable.
Consider what we know about how probability works in a Multiverse; on a personal level, the implications become somewhat overwhelming. There are trillions of versions of you—all of which are undeniably you—but many of which are very, very different from the “you” of this world-line. The differences between the various versions of you are as vast as your imagination can allow... or more. There’s a world-line where you’re the worst dictator ever and an architect of genocide. Conversely, there’s another where you dedicate yourself to world peace. Your crappy band in high school became the dominant musical force on the planet, somewhere. Ladies, in one timeline Johnny Depp is your lover. Men can find some solace in knowing that they are sleeping with Scarlett Johansson in at least one of these timelines... though conversely, perhaps in another they are married to their former cell mate Bubba, or a blow-up doll. Well, you get the idea. Multiplicity might explain a lot of things: feelings of déjà vu, a close connection with someone you’ve never actually met, the sense of synchronicity itself. Perhaps there is some type of resonance or distant memory of a previous timeline that explains this. Think of Buddhism or Hinduism in a divergent light for a moment. Both suggest reincarnation. They posit that we manifest physically on Earth multiple times, and that we can learn from our past and future “paths”. Indulge me for a moment, but might these belief systems be an intuitive understanding of the Multiverse idea? (I'm thinking about the previous assertion that you were, or are, an evil mass-murdering dictator in one timeline... it can be comforting to know that in those belief systems the experience of all possible facets of human nature is explicitly required for growth.)
This is not to suggest you should kill people or engage in any other immoral behaviors, mind you, but the alleged purpose of this cycle of learning is to eventually learn all that there is to learn, to actually transcend physical existence. Of course you, dear reader, are highly evolved and learned many lifetimes (world-lines) ago all there was to learn from indulging the dark side of human nature. Of course you did! While some folks seem to believe that our destination is some type of eventual meta-godhood, where we are thrown from our carriages off of the great Ferris Wheel Of Karma or preside over a universe of our own creation... others believe that the cycle simply repeats. If the whole thing runs down, plays out, or heat death ends all realities, perhaps the cycle simply restarts and the next Multiverse begins. Perhaps this has already happened trillions of times? Wiz Bang, expansion, contraction, collapse, Wiz Bang again! Bazoomy! Off we go! As Johnny Carson used to say, "I'll be right back!" ...And perhaps he will. (But not falling from grace.) In this Multiverse model with its infinite world-lines, you have existed before. In fact, all the infinite versions of you have existed before, and will exist again and again. The same goes for Kurt Vonnegut, Billy the Kid, Thomas Jefferson, Jimi Hendrix, and Attila the Hun... along with every possible idea, creation, and situation throughout all of our past and future, across all realities. Of course, you know that if everything that exists or will exist has already existed, there is nothing new and nothing original. Not exactly a concept that hasn't been touched on before.
• From a 19th Century BC Egyptian poem: "What has been said has been said."
• From the Old Testament: "It has all been done before; there is nothing new under the sun."
• From The Beatles' "All You Need Is Love": "There's nothing you can do that can't be done."
• From John Steinbeck: "There are as many worlds as there are kinds of days."
Many writers, artists, and musicians (myself included, on occasion) describe a sense that the pieces we craft are in some manner already existing, fully formed, merely waiting for us to come along and excavate them like fossils in some ethereal tar pit. In an infinite Multiverse, there is a possibility this may be exactly what the pieces are. Creating and interacting with art is a very unique human experience. There are aspects of the human condition that are difficult or impossible to communicate by any other means. (Often physicists, too, say their ideas occur fully formed in dreams... and that language simply does not exist to express them adequately.) This quality has been ascribed to the observations of QM, especially the "observer" portion of the Copenhagen interpretation (in which an observation is an interaction of any kind, and it's said that there is no known language to properly describe this). While it is not possible to accurately describe in any language what love is, or what it feels like, there are plenty of ways to communicate this in art. Most often it is through artistic expressions that resonate with us that many of us develop our first notions of the nature of love or other complex human experiences that elude description in other forms of communication.
I am reminded of the lyrics of the Who's song "905" from the album "Who Are You", written by John Entwistle:

"...Now I'm to begin
The life that I'm assigned
A life that's been used before
A thousand times
I have a feeling deep inside
That something is missing
It's a feeling in my soul
And I can't help wishing
That one day I'll discover
That we're living a lie
And I'll tell the whole world
The reason why
But, until then, all I know is what I need to know
And everything I do's been done before
Every idea in my head
Someone else has said
At each end of my life is an open door"

While there are actually many interpretations of QM, the two most accepted are the Copenhagen and the Many Worlds; most others are tweaks on these two and are pretty much just variations on the two main ideas. There are, however, others that are genuinely different:

Pilot Waves, Hidden Variables and the Implicate Order

David Bohm (1917-1992) came up with an elegant but more complicated theory to explain the same set of phenomena (normally, more complicated theories are disqualified by the principle known as Ockham's Razor). Bohm's theory follows original insights by Prince Louis de Broglie (1892-1987), who first studied the wave-like properties of particles in 1924. De Broglie suggested that, in addition to the normal wavefunction of the Copenhagen Interpretation, there is a second wave that determines a precise position for the particle at any particular time. In this theory, there is some 'hidden variable' that determines the precise position of the photon. John von Neumann (1903-1957) wrote a paper in 1932 claiming that this theory was impossible. Von Neumann was such a math wiz that nobody actually bothered to check his calculations until 1966, when John Bell (1928-1990) proved he'd bungled it, and voilà! ...there could actually be hidden variables after all... but only if particles could communicate faster than light (this is called 'nonlocality' in physics).
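For reference, de Broglie's 1924 insight fits in one line of standard textbook notation (h is Planck's constant, p the particle's momentum):

```latex
\lambda = \frac{h}{p} = \frac{h}{mv}
% Every particle with momentum p has an associated wavelength lambda.
% For everyday objects p is enormous, so lambda is immeasurably tiny --
% which is why we never notice the wave behavior of baseballs.
```

Bohm's later pilot-wave theory keeps this wave picture but adds a guiding wave on top of it, which is where the extra "luggage" discussed below comes in.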
In 1982 Alain Aspect demonstrated that this superluminal correlation did appear to exist, but then David Mermin came along and showed that you could not actually signal anything with it. There is still some argument about whether this means much of anything. Bohm's theory was that this second wave was indeed faster than light, and instantly permeated the entire universe, acting as a guide for the movement of the photon. He called it a 'pilot wave'. Although this theory explains the paradoxes of quantum physics, it also introduces a new faster-than-light wave and some hidden mechanism for deciding where it goes to create an 'implicate order'. That's quite a lot of luggage to carry, and science generally likes to travel light. Worse still, Bohm became something of a mystic, identifying his 'implicate order' with Eastern spirituality. It's all very interesting, and we should keep in mind that "many worlds" was not broadly accepted at first either... in fact it was more or less dismissed, at least privately.

Consistent Histories

This interpretation analyzes sequences of states of a system (which may or may not include the whole universe) to find what questions can be consistently answered about the system, such as “was the particle at A or B at time T?” The measurement problem, however, is not resolved: the question of which histories actually happen remains a matter of probabilities, just as with the standard Copenhagen interpretation. I guess it's really just a variant on Copenhagen.

Shut Up and Calculate!

David Mermin said in a lecture: "If I were forced to sum up in one sentence what the Copenhagen interpretation says to me, it would be 'Shut up and calculate!'" Some physicists talk of the “shut up and calculate interpretation,” which is to ignore entirely the philosophical puzzle of how the classical and the quantum coexist.

Transactional Interpretation

This interpretation was proposed by John G.
Cramer in 1986, and it has waves traveling forward and backward in time, setting up standing waves between an emitter of a particle and its detector.

Other Odd Considerations

Although not technically interpretations of QM, there are some other theories related to it that frankly... give me a headache, largely because they seem so far-fetched yet undeniably plausible at the same time. One of these would be that the superposition is observed because we are actually in a computer simulation. When you open a simulation, what happens? The various components of the simulation are in an undetermined state, and then software tells them to form a certain way to create the simulation. It does explain the wave collapse, doesn't it? Far-fetched? Certainly... wait... maybe not. After all, if our technology were sufficiently advanced to actually simulate a universe, we likely would. If we could simulate one... why not another? Or many? ...all existing right next to each other. An advanced civilization could likely do such a thing... geez, maybe even our descendants are that advanced and are running one right now! Maybe we are actually simulations of their ancestors in some computer somewhere. We currently have supercomputers performing lattice quantum chromodynamics calculations, which essentially divide space-time into a four-dimensional grid, allowing researchers to examine what we call the strong force, one of the four fundamental forces of nature and the one that binds quarks and gluons together into neutrons and protons at the core of atoms. Martin Savage, of the University of Washington, has said, “If you make the simulations big enough, something like our universe should emerge,” and by studying that, we could then look for a “signature” in our universe that has an analog in the current small-scale simulations. We can test this.
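A rough sketch of why a lattice would even be testable (my own back-of-the-envelope illustration, not the researchers' actual calculation): a grid with spacing a cannot represent wavelengths shorter than about 2a, which caps the momentum, and hence the energy, that any particle on it can carry:

```latex
p_{\max} \sim \frac{\pi \hbar}{a}
\quad \Longrightarrow \quad
E_{\max} \sim \frac{\pi \hbar c}{a}
% If our universe ran on a lattice of spacing a, the most energetic
% cosmic rays should pile up against a hard ceiling near E_max,
% rather than following a smooth, unbounded spectrum.
```

The smaller the lattice spacing, the higher the ceiling; the point is that a finite grid leaves a fingerprint in principle, even if the spacing is fantastically small.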
Savage and his colleagues Silas Beane and Zohreh Davoudi, a UW physics graduate student, suggest that this signature could show up as a limitation in the energy of cosmic rays. At any rate, suffice it to say some very intelligent and learned scholars do not find this notion ridiculous. Then there's the idea that the universe is a hologram. Leonard Susskind and other highly respected physicists posit this idea... see this video... So, I know what you are saying right now: "What's the bottom line on this stuff? What happens to Schrödinger's Cat?"

In Copenhagen: The cat's dead and alive simultaneously until you look at it... then it's one or the other.
In Many Worlds: The cat's dead in one world, alive in another.
In Pilot Waves, Hidden Variables and the Implicate Order: It's either dead or alive, of course! (But becomes a Buddhist monk.)
In Consistent Histories: Don't ask, don't tell.
In Shut Up And Calculate: No one cares about the cat.
In Transactional: The cat is both alive and dead, but it's in the past as well as the future simultaneously.
In a computer simulation: The cat's a cartoon... but so are you!
In a hologram: The cat's a 3D cartoon and the future is indeed now a pork chop.

In 1957, when Everett's thesis was published in Reviews of Modern Physics, the editor of that issue was Bryce DeWitt. Initially, DeWitt was unimpressed with Everett's theory. He wrote to Everett and asked, "If the universe is splitting, then why don't I feel myself split?" Everett responded by asking whether DeWitt felt the Earth's motion. DeWitt, who was well aware of the Newtonian reasons why he wouldn't, simply said "Touché". In the late 1960s, when he was working seriously in quantum cosmology, DeWitt was attracted to the universal wave function as an interpretive method of dealing with what was going on, and he started writing about it. In 1970, he published an article in Physics Today that set off a firestorm of letters back and forth in that publication, debating the theory.
Then, in 1973, he went to Everett and said, "I know that there's a longer version of your thesis than the one that I printed in 1957. I'd like to publish it." The full dissertation was 137 pages, while the thesis that was published in 1957 was only about nine pages. [You can read the full dissertation here if you like.] Apparently John Wheeler (the noted physicist who worked with Bohr, was a professor at Princeton University, and was influential in mentoring a generation of physicists who made notable contributions to quantum mechanics and gravitation, including Everett) had cut three quarters of Everett's original thesis. Wheeler is said to have had a dream that Bohr was somehow going to approve it, so he made Everett remove his direct attacks on the Copenhagen Interpretation as well as his provocative metaphors about splitting observers like amoebas and bifurcating cannonballs (as well as, for some unknown reason, a whole chapter on information and probability theory). As a result, a lot of the explanation for things that people saw as weaknesses in Everett's theory was cut out of the version that people read. In 1973, DeWitt published the long version, along with the short version and some other papers, including one by himself, in a book called The Many Worlds Interpretation of Quantum Mechanics. He used the phrase "many worlds" because he thought it would be provocative and catchy, and it was! The name stuck. Incidentally, Hugh Everett did not call his theory "Many Worlds"... he called it "The Relative-State Formulation of Quantum Mechanics." If you have read all this material and the links, congratulations! You are now a certifiable quantum mechanic! Send me the 1200 dollars and I'll print you out a certificate with a 3-dimensional hologram of a cat in a box eating a pork chop! Of course, no electrons were harmed in the making of this blog.

Monday, June 16, 2014

The Age Of Anxiety

Ah yes, the Age of Anxiety!
The times in which modern Fascism and Totalitarianism made their melodramatic debut on the historical stage (and they refuse to go away... they merely transform and masquerade as something less identifiable or more palatable). The age of the Atom, the Age of Information, the Age of World Wars... the Age of Grotesque Materialism, Unmitigated Greed, Institutionalized Racketeering, Glorified Moral Bankruptcy and Wholesale Looting of Nations. Does this sketch of the world you find yourself in ring true? If so, or even if not (perhaps you disagree and think everything is coming up roses), still read on. We are here and it is now... this much is indisputable, and may be the only thing one can actually fully know. But how did we get here, to this lamentable yet reversible condition? Why can we not purge ourselves of the self-defeating delusions and failed, ruinous notions of the past? Before actually looking at the times we exist in, it's important to understand a few facts about the history of mankind leading up to our Age Of Anxiety. No matter how far back you look, human history is the story of one group of people or another attempting to set themselves up as superior to others, armed with a plethora of bogus reasons why they should dominate everyone else. Whether it's the "divine right" of Pharaohs in ancient Egypt, the self-regarding thuggery of ancient Romans, the glorified warlords of medieval and absolutist Europe, or assorted Shahs, Caliphs, Sultans, Imams, Popes, Spurious Holy This or That Alliances, Tzars, Robber Barons, or other types of Grand Poohbahs, it's all the same in the end: in nearly every urbanized society throughout human history, some group of people has tried to constitute itself as an aristocracy. These people, their allies, sycophants, and apologists have been the scourge of humanity since time immemorial.
The 20th and 21st centuries, however, have provided tools and technology to the would-be Pharaoh that dwarf the potential of any previous despotic oligarchical nightmare. By the year 1939, liberal democracies in Britain, France, Scandinavia and Switzerland were realities. Unfortunately, elsewhere across the continent, assorted dictators had also imposed their ugly mugs into the picture (photo-bombed the process, so to speak). Dictatorship seemed to be the wave of the future. Many people were resigned to accept this, while others were looking for ways to 'get in on it'. It also seemed to be the wave of the present. After all, didn't Mussolini proclaim that this century would be a century of the right? Of Fascism? This is what disturbed such writers as Arthur Koestler (1905-1983), Yevgeny Zamyatin (1884-1937), Aldous Huxley (1894-1963), Karel Capek (1890-1938) and George Orwell (1903-1950). This was a nightmarish world in which human dignity, individuality, and the innate value of a human being were trampled under the might of totalitarianism. These early modern totalitarian states rejected liberal values of human dignity and fairness. They exercised total control over all aspects of the lives of their subjects. Totalitarianism in these states became a new religion... a system of required beliefs... a political religion for the Age of Anxiety. It goes without saying that the governments of Europe had been conservative and anti-democratic throughout their long histories. The leaders, whether monarchs or autocrats, WERE the government, and by their very nature prevented any social or political change that might endanger the existing social order they benefited from. There have been enlightened monarchs, but few so enlightened as to have removed themselves from the sinews of power.
Before the 19th century, these monarchs all legitimized their rule by recourse to the divine right theory of kingship, an idea which appeared in Europe during medieval times. In France, you may recall, this was the case until the late 18th century, when French revolutionaries decided to end the Bourbon 'divine' claim to the throne by cutting off the head of Louis XVI. Of course, France ended up with Napoleon, who ironically also claimed the divine right of kingship. The difference was merely that this divine right emanated from Napoleon himself. One might say "same shit, different shitter". And one would be correct. After all, the would-be Pharaoh will simply rule by whatever mechanism of ascension to a throne is available. It could be divine, it could be military, it could be the political rhetoric of the left or right; it doesn't matter... some people insist on being crowned. Where's my crown? I want my crown! It's a type of sociopathy really, isn't it? In contrast to these dictatorships, a country such as England underwent twenty years of civil war in the 17th century, as well as the Glorious Revolution of 1688, which produced a constitutional monarchy. In the 19th century, both the Industrial and French Revolutions created forces of social change which monarchs, enlightened or not, could not safely ignore. A large middle class had emerged in the 18th century but had lacked political status. Now, in the 19th century, this large class of entrepreneurs, factory owners, civil servants, teachers, lawyers, doctors, merchants and other professionals wanted their voices heard by their governments, and there were a lot of them. They were largely at least somewhat educated, unlike the peasants of previous times, who were much easier to frighten, manipulate, and fool into submission. The middle class became a force which had to be reckoned with, and governments began to utilize their talents by creating large but obedient bureaucracies.
In this way, government seemed to reflect the interests of all, when in actual fact it represented the interests of the same sociopath assholes it always had. There was, however, the illusion of democracy, and every once in a while a bone was thrown to the middle class to keep them believing in the illusion. So European governments maintained order by giving the middle classes a stake in the welfare of the nation. Governments also built strong police forces and armies of loyal soldiers, largely to protect themselves. Meanwhile, of course, in reality the great mass of people, the "swinish multitude," remained completely unrepresented. Radicals were imprisoned, murdered, or exiled because of their liberal, democratic, socialist, communist or anarchist inclinations. Despite these measures, and there were others as well, traditional authoritarian governments were not completely successful. Their power and their objectives were actually limited. Why, you ask? Well, these governments lacked modern communications and modern transportation. They lacked, in other words, the ability to totally control their subject populations. Not until the twentieth century -- thanks to improved technology -- would this change. In fact, true totalitarian regimes are limited only by the extent to which mass communications have been made a reality. And, of course, with mass communications comes "mass man", and the capability of total and complete control. In this latter Age of Anxiety, humanity faces its greatest challenge, as the would-be dictators have learned they can't use the older, more obvious methods of domination. The modern threat hides behind logos and holding companies and is difficult to discern. It's obvious that a handful of multinational corporations have seized control of mass communication globally... well frankly, of every industry. But who actually owns these multinational cartels and monopolies?
It's cloaked and difficult to discern, but if you follow the tangled web to its source you find that roughly 85 people own virtually everything. There is no way Orwell could have predicted that there would be larger economic entities than states... but today more than half of the largest 'economies' in the world are not in fact states at all. They are corporate entities that exist only in an abstract manner, on paper. "Big Brother" is no political entity at all. States now are openly bought and sold, and the corrupt purchase of government is open and in plain sight... it's institutionalized. It is reversible for the same reason the aristocracies of the past had to relinquish some of their powers. It is reversible if middle class demands change... the poor are powerless, and it's by design. But the middle class has power, because it potentially might be smart enough to see through the trickery. It may not be demoralized and degraded enough yet to simply go along with the plans of the modern fascists hiding behind these corporate facades because it's easier than objecting. These modern fascists are well aware of this and are doing everything they can now to eliminate this middle class. So there is a time limit on the possibility of change. There is likely a tipping point where, if this middle class shrinks enough in numbers, it will lose what power it has to refuse this new branded corporate fascism. The salvation of mankind lies in this alone: only by analyzing this deception will it become possible to revive democracy. Democracy may be imperfect, but of all the possible means of governance and organizing society man has invented, a manageable representative democracy is the only one that attempts in any way to limit obscene concentrations of power. All others promote it in one way or another. Fight the power. But understand where that power lies, and that its concentration is the problem...
and above all else, don't be conned into enabling these modern, cosmetically altered brands of fascism. To impose its order on society, the modern corporatist must destroy what we non-fascists view as civilization. In particular, in order to thrive it must destroy conscience, democracy, reason, and language. The goal, in the name of humanity, decency, and survival, is to reverse this concentration of power. Don't believe that anyone has the right, divine or through accumulation of wealth, to dominate society. It's a crock! It's the big lie. We can end The Age of Anxiety in one of two ways: either by waking up and refusing to allow this concentration of power, just as Dr. Martin Luther King refused to stand by and accept racism, or by rolling over in submission... going out with a whimper. The choice, dear reader, is yours.

Tuesday, April 29, 2014

The Phenomenon Of Creeping Fascism

These words belong to Sebastian Haffner, who was a young lawyer in Berlin during the 1930s. He experienced the Nazi takeover and wrote a first-hand account of it. It was not published while he was alive, but his children found the manuscript when he died in 1999 and published it the following year as "Geschichte eines Deutschen" (The Story of a German). The book became an immediate bestseller and has been translated into 20 languages. In English it is published as "Defying Hitler." This will likely have a disconcerting resonance for anyone familiar with the Nazi ascendancy, noting how "odd" it is that the frontal attack on Constitutional and human rights as well as civil liberties is met with "calm, superior indifference" in our own times. First, what is fascism, actually? It is described by Benito Mussolini (who is credited with coining the term and ought to know what he meant by it) as the combining of corporate and state powers. It's derived from Latin... fasces, meaning a bundle of sticks. A single stick can be snapped and broken, where a bundle of sticks cannot easily be snapped or broken.
Fascism is growing in many modern democracies today... the citizens may have trouble recognizing it for what it is... but rest assured it did not disappear after WWII. Many of the nations that fought fascism and sacrificed life and limb preventing it from global domination in the mid-20th century now see their own countries engaging in it fully a few generations later. It isn't obvious to the majority; there are no literal goose-stepping parades or brownshirted thugs... not in public, anyway. No, that wouldn't do in today's world... people might recognize that too easily. The methods have to be more stealthy and wrapped in the local flag... not a swastika. It is the fulfillment of the prediction attributed to Sinclair Lewis, not 'if' but "when fascism comes to America, it will be wrapped in the flag, carrying a cross."

Nazis and Those Who Enable Them

Well, what we can learn from Haffner's account of the fascists' rise to power in Germany is that you don't have to be a Nazi. You can just be, well, for lack of a better description, a sheep. Do nothing. In his account, Sebastian Haffner describes what he calls the "sheepish submissiveness" with which the German people reacted to a 9/11-like event, the burning of the German Parliament (Reichstag) on Feb. 27, 1933. Haffner suggests it quite telling that none of his acquaintances "saw anything out of the ordinary in the fact that, from then on, one's telephone would be tapped, one's letters opened, and one's desk might be broken into." His most virulent condemnation is reserved for the cowardly politicians. Do you see any contemporary parallels here? In the elections of March 5, 1933, shortly after the Reichstag fire, the Nazi party garnered only 44 percent of the vote. Only the "cowardly treachery" of the Social Democrats and other parties, to whom 56 percent of the German people had entrusted their votes, made it possible for the Nazis to seize full power.
Haffner explains: "It is in the final analysis only this betrayal that explains the almost inexplicable fact that a great nation, which cannot have consisted entirely of cowards, fell into ignominy without a fight." The Social Democratic leaders betrayed their followers, "for the most part decent, unimportant individuals." In May they sang the Nazi anthem; by June the party was dissolved. The middle-class Catholic party Zentrum folded in less than a month, and in the end actually supplied the votes necessary for the two-thirds majority that "legalized" Hitler's dictatorship. As for the right-wing conservatives and German nationalists: "Oh God," writes Haffner, "what an infinitely dishonorable and cowardly spectacle their leaders made in 1933 and continued to make afterward.... They went along with everything: the terror, the persecution of Jews.... They were not even bothered when their own party was banned and their own members arrested." In summary he says: "There was not a single example of energetic defense, of courage or principle. There was only panic, flight, and desertion. In March 1933 millions opposed the Nazis but overnight they found themselves without leaders... At the moment of truth, when other nations rose spontaneously to the occasion, the Germans collectively and limply collapsed. They yielded and capitulated, and suffered a nervous breakdown.... The result is today the nightmare of the rest of the world." In the U.S., the Founding Fathers were not oblivious to this general behavior and its danger. We cannot say we weren't warned. Ignorance. Fear. Greed. Selfishness. In that order, those are the reasons that explain the phenomenon of creeping fascism. And this applies to the nascent fascism in the United States and other democratic nations today.
Predictably and rightfully, the majority of the population in every society strives to be safe, to have a "normal" life by local standards, to count on certain basic things like employment, family ties, entertainment, friendship. So people go about their daily lives influenced by what they see around them: their experiences in the shops where they buy things, who they might talk to, maybe some house of worship they attend, or a gathering, a party, a funeral, a baby shower, etc. The conditions that lead to creeping fascism and its eventual establishment are essentially invisible to most folks (until it's too late). For these folks who are unaware of it, well, their sin is ignorance, so they are arguably less culpable. (Dear reader, if you happen to be in this category, understand that further reading will make you fully aware of it, and should you decide to do nothing, then your personal level of culpability will go up.) The biggest threat to the establishment of fascism is education. A populace with a collective high intellect is not prone to be easily duped. The tell-tale heart of creeping fascism is the rise of anti-intellectualism, such as the one we have in the U.S. today. Ignorance is literally elevated to be the false equivalent of intellectual curiosity. The dimwit is elevated publicly to hero status (think of Sarah Palin and numerous others). The surefire technique to prevent the populace from developing their collective intellect is to discourage people from engaging in any sort of deep thinking or analysis about the world around them, government and its institutions, issues related to power or wealth hierarchies, income disparity, etc.
The best way to do this is to create a situation where people are made to work at a subsistence level (hand-to-mouth, paycheck-to-paycheck), to put up roadblocks to attaining a proper education, and to bombard people with, as in Roman times, "bread and circuses," which in today's world happens through the bombardment of the human mind by an incredibly effective propaganda machine in the form of the corporate-owned U.S. media. Think of all the 'reality' shows, and the fantasy of an obscure and unknown person making it big by winning American Idiot, or any of the other mind-rotting shows. Think of how a news network actually fought for and won in court the right to misinform... the right to lie, as a free speech issue. In the ignorance category we can also include the religious Right, the nationalists, and the racists, and how easy they are to incorporate into creeping fascism. This is because fear is the other classic way of manipulating the population. When it comes to the middle class, you have a combination of factors, including ignorance and fear (to a greater extent), and selfishness (to a lesser extent). The first priority of the middle class is to keep what they have, and to dream of possibly having something better or more. So when fascism and oppression creep in, they succeed if the middle class remains mainly dormant and docile through most of the process. Again, until it is too late. Usually during the first stages of fascism, it directly affects certain maligned groups, such as the poor (the most maligned and defenseless target) and certain minority groups; the nascent "baby" fascist state needs to practice with minority groups in order to perfect its system of domination before consolidating its power and applying its techniques to the general population. The politicians, business people, the leaders of most liberal and progressive groups, and unions cave and cower.
At this stage, acquiescence to creeping fascism is mainly the result of pure greed and selfishness. It is a willful blindness. These people in the 'establishment' possess the intellectual capacity to understand what's going on, but choose to do nothing (or to make minimalist, don't-rock-the-boat, ineffectual gestures) out of pure short-term self-interest. Greed. Like the middle class, they are more interested in keeping what they have, and possibly gaining more: cushy jobs and positions, grants and money from donors, corporations, and employers, being connected to the expanding power structure and benefiting from it. Have you ever wondered about the dismal lack of leadership from most of the top ranks of unions in modern times? Or the lack of any real leadership in liberal and progressive organizations? Well, wonder no more: creeping fascism has taken them. At this level, the so-called leaders share more culpability and responsibility for allowing fascism to creep in, because at an intellectual level they know full well it's happening, yet they choose to look the other way for purely greedy and selfish reasons. The only antidote is the type of leader who is totally, one hundred percent driven by duty, love of humanity, and the concept of justice, not by self-interest or greed. Do such people exist? Yes, they do. They are rare, but they exist. In India, for instance, the anti-corruption campaign of activist Anna Hazare has brought the entire Indian government to its knees with the force of his conviction. I firmly believe there are leaders (in waiting) like that in the U.S., the U.K., and other democratic nations, but the manipulation and influence of the corporate-owned media is so total that at this point it is nearly impossible for them to get any traction. If they ever make it onto the public radar, they are vilified, ridiculed, demonized, attacked, spied on, and so forth.
But I really believe those leaders will emerge once a significant number of people are able to break through the mental shackles imposed by the nascent fascist regimes. There are some signs that a large enough number of people are "waking up" from the corporate stupor and realizing what's happening, as exemplified by the Occupy protests last year. If you have read this, you can't really claim ignorance any more. Democracy is under attack from creeping fascism. Corporations are NOT people and money is NOT free speech. In a democracy, we consent to be governed in our own COMMON interest, not the interests of the few who can buy their own senators. We are interested in the general prosperity of all our people, not just the few who already own most everything. There are no scapegoats, folks; there is no one to blame but ourselves. It's time to do something to deter creeping fascism. If not now, when?

Tuesday, February 4, 2014

Henry Wallace On American Fascists

Henry A. Wallace and Franklin D. Roosevelt

Henry A. Wallace was the U.S. Secretary of Agriculture from 1933 to 1940, during the incredibly difficult years of the Great Depression, and Vice President from 1941 to 1945, at the height of World War II. Wallace was one of FDR's closest and most trusted associates, a huge supporter of the New Deal, and a man determined to fashion a better world out of the ashes of the war. Wallace was born on an Iowa farm in 1888. After graduating from Iowa State College in 1910, he went to work for the family paper, Wallaces' Farmer, which was widely read throughout agricultural circles and brought the Wallace family considerable prestige among the nation's farming community. In the early 1920s, Wallace became the editor of the Farmer after his father, Henry C. Wallace, accepted an offer to serve as Secretary of Agriculture in the Harding and Coolidge administrations.
A longstanding Republican, the younger Wallace broke with his father's party in 1928 over the issues of farm relief and high tariffs, campaigning for the Democrat, Al Smith, in his run for the White House. This brought Wallace to the attention of FDR, who, four years later, asked him to follow in his father's footsteps and become his Secretary of Agriculture. He later served as FDR's Vice President, and as Secretary of Commerce. Following FDR's death, and after resigning as Secretary of Commerce in 1946, Wallace became a leading advocate of post-war cooperation with the Soviet Union and one of the most prominent critics of the Truman Doctrine and the containment policies that became the Cold War. He ran an unsuccessful third-party campaign for the presidency in 1948 that was tainted by false reports that he was a tool of Moscow. Roosevelt once said that "no man was more of the American soil than Wallace," and in the wake of his 1948 defeat, Wallace decided to return to his roots and retire to his beloved New York farm. For the next seventeen years he devoted himself to scientific farming, genetics, and gardening. He died on November 18, 1965. His writing has been out of print for years, but I believe modern ears deserve to hear this wise man's common-sense voice, and I believe they will find it quite relevant. The following is taken from an article in The New York Times, April 29th, 1944.

Fascism is a worldwide disease. Its greatest threat to the United States will come after the war ... within the United States itself. In order for democracy to crush fascism internally it must demonstrate its capacity to "make the trains run on time." It must develop the ability to keep people fully employed and at the same time balance the budget. It must put human beings first and dollars second. It must appeal to reason and decency and not to violence and deceit.
We must tolerate neither oppressive government nor industrial oligarchy in the form of monopolies and cartels.
124120 – Principles of Chemistry

Recommended reading:
1. General Chemistry, Petrucci, Harwood and Herring, 8th (or 9th) ed., Prentice Hall, 2002.
2. Chemistry, Raymond Chang, 6th ed., McGraw-Hill, Inc., 1998.
3. Chemistry, Jones and Atkins, 4th ed., W.H. Freeman, 2000.

An approximate schedule of the topics to be covered:

Week 1: Basic chemistry. Introduction to chemistry, terminology, stoichiometry, formulas and reactions.
Week 2-3: Electronic structure of the atom. Electromagnetic radiation, the photoelectric effect, the Bohr model, wave-particle duality (de Broglie wavelength), the Schrödinger equation, atomic orbitals, quantum numbers, multielectron atoms, electron configuration (Hund's rule and the Pauli principle).
Week 4: Periodic trends. The periodic table: trends, blocks and atomic properties: atomic and ionic radii, ionization energy and electron affinity.
Week 5-6: The chemical bond. Chemical bonding, Lewis structures, dipoles, resonance, VSEPR, hybridization and molecular geometry.
Week 7: Gases. Pressure, partial pressure, gas laws, ideal and nonideal gases and the kinetic theory of gases.
Week 8: Thermochemistry. Open/closed systems, heat, heat capacity, work, enthalpy, endothermic and exothermic reactions and Hess's Law.
Week 9: Phases. Inter-/intramolecular forces, dipole interactions, H-bonding, surface tension, capillary effect, phase transitions (vaporization, condensation, fusion, sublimation, boiling) and phase diagrams.
Week 10: Chemical equilibrium. Reversible reactions, the equilibrium constant, the reaction quotient, Le Chatelier's principle and sparingly soluble salts.
Week 11-12: Acids and bases. Water self-dissociation, Kw, pH, pOH, pKa, pKb, buffers, neutralization reactions and titrations.
Week 13: Oxidation and reduction. Oxidation states and balancing redox equations.
Week 14: Relating chemistry to other fields. Chemistry in engineering, biotechnology and biology.
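As a worked illustration of the Week 7 material, the ideal gas law PV = nRT can be checked in a few lines of Python. This is a hedged sketch of my own; the function name and the STP example are illustrative and not part of the course notes:

```python
# Sketch: molar volume of an ideal gas from PV = nRT.
# Names and numbers are illustrative, not from the syllabus.

R = 8.314  # universal gas constant, J/(mol*K)

def ideal_gas_volume(n_mol: float, t_kelvin: float, p_pascal: float) -> float:
    """Return the volume in m^3 occupied by n_mol of an ideal gas."""
    return n_mol * R * t_kelvin / p_pascal

# One mole at STP (0 degrees C, 1 atm) should occupy about 22.4 L,
# the classic molar-volume result.
v = ideal_gas_volume(1.0, 273.15, 101325.0)
print(f"{v * 1000:.1f} L")  # ~22.4 L
```

The same function also answers the reverse questions covered in that week (solve for P, n or T) by simple algebraic rearrangement.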
Period 1 element

From Wikipedia, the free encyclopedia

A period 1 element is one of the chemical elements in the first row (or period) of the periodic table of the chemical elements. The periodic table is laid out in rows to illustrate periodic (recurring) trends in the chemical behaviour of the elements as their atomic number increases: a new row is begun when chemical behaviour begins to repeat, meaning that elements with similar behaviour fall into the same vertical columns. The first period contains fewer elements than any other row in the table, with only two: hydrogen and helium. This situation can be explained by modern theories of atomic structure. In a quantum mechanical description of atomic structure, this period corresponds to the filling of the 1s orbital. Period 1 elements obey the duet rule in that they need two electrons to complete their valence shell. The maximum number of electrons that these elements can accommodate is two, both in the 1s orbital. Therefore, period 1 can have only two elements.

Periodic trends

All other periods in the periodic table contain at least 8 elements, and it is often helpful to consider periodic trends across the period. However, period 1 contains only two elements, so this concept does not apply here. In terms of vertical trends down groups, helium can be seen as a typical noble gas at the head of Group 18, but as discussed below, hydrogen's chemistry is unique and it is not easily assigned to any group.

Position of period 1 elements in the periodic table

Although both hydrogen and helium are in the s-block, neither of them behaves similarly to other s-block elements.
Their behaviour is so different from the other s-block elements that there is considerable disagreement over where these two elements should be placed in the periodic table. Hydrogen is sometimes placed above lithium,[1] above carbon,[2] above fluorine,[2][3] above both lithium and fluorine (appearing twice),[4] or left floating above the other elements and not assigned to any group[4] in the periodic table. Helium is almost always placed above neon (which is in the p-block) in the periodic table as a noble gas,[1] although it is occasionally placed above beryllium due to their similar electron configuration.[5]

Chemical element | Chemical series | Electron configuration
1 H Hydrogen | Diatomic nonmetal | 1s1
2 He Helium | Noble gas | 1s2

Hydrogen (H) is the chemical element with atomic number 1. At standard temperature and pressure, hydrogen is a colorless, odorless, nonmetallic, tasteless, highly flammable diatomic gas with the molecular formula H2. With an atomic mass of 1.00794 amu, hydrogen is the lightest element.[6]

3. ^ Vinson, Greg (2008). "Hydrogen is a Halogen". Retrieved January 14, 2012.
4. ^ a b Kaesz, Herb; Atkins, Peter (November–December 2003). "A Central Position for Hydrogen in the Periodic Table". Chemistry International. International Union of Pure and Applied Chemistry. 25 (6): 14. Retrieved January 19, 2012.
5. ^ Winter, Mark (1993–2011). "Janet periodic table". WebElements. Retrieved January 19, 2012.
6. ^ "Hydrogen – Energy". Energy Information Administration. Retrieved 2008-07-15.
8. ^ Staff (2007). "Hydrogen Basics — Production". Florida Solar Energy Center. Retrieved 2008-02-05.
10. ^ "hydrogen". Encyclopædia Britannica. 2008.
11. ^ Eustis, S. N.; Radisic, D.; Bowen, K. H.; Bachorz, R. A.; Haranczyk, M.; Schenter, G. K.; Gutowski, M. (2008-02-15). "Electron-Driven Acid-Base Chemistry: Proton Transfer from Hydrogen Chloride to Ammonia". Science. 319 (5865): 936–939.
Bibcode:2008Sci...319..936E. doi:10.1126/science.1151614. PMID 18276886.
12. ^ "Time-dependent Schrödinger equation". Encyclopædia Britannica. 2008.
18. ^ "Helium: the essentials". WebElements. Retrieved 2008-07-15.
19. ^ "Helium: physical properties". WebElements. Retrieved 2008-07-15.
20. ^ "Pierre Janssen". MSN Encarta. Retrieved 2008-07-15.
23. ^ Copel, M. (September 1966). "Helium voice unscrambling". Audio and Electroacoustics. 14 (3): 122–126. doi:10.1109/TAU.1966.1161862.
24. ^ "helium dating". Encyclopædia Britannica. 2008.
25. ^ Brain, Marshall. "How Helium Balloons Work". How Stuff Works. Retrieved 2008-07-15.
27. ^ "When good GTAW arcs drift; drafty conditions are bad for welders and their GTAW arcs". Welding Design & Fabrication. 2005-02-01.
31. ^ "Helium: geological information". WebElements. Retrieved 2008-07-15.
33. ^ "Helium supply deflated: production shortages mean some industries and partygoers must squeak by". Houston Chronicle. 2006-11-05.

Further reading
Thursday, June 30, 2016

1- Change is not merely necessary to life - it is life. (Alvin Toffler)
2- Change is the process by which the future invades our lives. (Alvin Toffler)
3- Man has a limited biological capacity for change. When this capacity is overwhelmed, the capacity is in future shock. (Alvin Toffler)
4- The illiterate of the 21st Century are not those who cannot read and write but those who cannot learn, unlearn and relearn. (Alvin Toffler)
5- The future always comes too fast and in the wrong order. (Alvin Toffler)
6- Knowledge is the most democratic source of power. (Alvin Toffler)
7- One of the definitions of sanity is the ability to tell real from unreal. Soon we'll need a new definition. (Alvin Toffler)
8- The great growling engine of change - technology. (Alvin Toffler)
9- Our technological powers increase, but the side effects and potential hazards also escalate. (Alvin Toffler)
10- Technology feeds on itself. Technology makes more technology possible. (Alvin Toffler)
11- It is better to err on the side of daring than the side of caution. (Alvin Toffler)
12- Rational behavior ... depends upon a ceaseless flow of data from the environment. It depends upon the power of the individual to predict, with at least a fair success, the outcome of his own actions. To do this, he must be able to predict how the environment will respond to his acts. Sanity, itself, thus hinges on man's ability to predict his immediate, personal future on the basis of information fed him by the environment. (Alvin Toffler)
13- Change is the only constant. (Heidi Toffler)

a) In memoriam Alvin Toffler

So Alvin Toffler died last Monday, and I remembered with deep nostalgia reading, first, "Future Shock" and later "The Third Wave", both important and influential. Toffler's role is explained by the following: we (I am speaking about my wife and me, plus our closer circles) liked these writings very much; Toffler had a huge literary talent and great persuasiveness.
However, back then we were living in an anomalous society (here the term is perfect; for LENR it must be avoided!), see my Septoe: "42. The Future Shock was amortized by irrationality." Then, after 1990, our world became a bit more rational politically, socially and economically speaking, and the Future has arrived, is accelerating, and we can participate in it more actively. It did not happen exactly as Toffler predicted, but we have understood that in predictions there exist things fundamentally more important than inerrancy, such as catching the Spirit of the time, and Toffler has done this masterfully. Has he predicted the Internet/Web? It seems yes, in a way! Perhaps Toffler exaggerated a bit with the SHOCK; I asked my grand-daughter Nora if it was difficult to advance from Mama's laptop to her own tablet and then to the smartphone she received for her birthday. Not at all; it was much more painful to learn to read, write and the basics of math. IT is more human and rational.

b) LENR's specific shock(s)

The case of Cold Fusion/LENR: its past was shocking enough, its present is a real shock, and its future has to be made so too, but in the best sense! I was repeatedly shocked by the slow development of the LENR field, in contrast with the Tofflerian future in action. Now, when this has started to change, dark forces conspire to kill the LENR technology dream. Please complete the details yourself.

Of Mice, Materials and Men
Who is talking about LENR on social media forums?
A poem for IH and their silence at the death of their LENR dreams

Do not go gentle into that good night

Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.

Though wise men at their end know dark is right,
Because their words had forked no lightning they
Do not go gentle into that good night.
Good men, the last wave by, crying how bright
Their frail deeds might have danced in a green bay,
Rage, rage against the dying of the light.

Wild men who caught and sang the sun in flight,
And learn, too late, they grieved it on its way,
Do not go gentle into that good night.

Grave men, near death, who see with blinding sight
Blind eyes could blaze like meteors and be gay,
Rage, rage against the dying of the light.

And you, my father, there on the sad height,
Curse, bless, me now with your fierce tears, I pray.
Do not go gentle into that good night.
Rage, rage against the dying of the light.

Space team discovers universe is self-cleaning: it is also about a source of cosmic energy

A good slogan: "Face your fears." Fear comes from a deep-seated place of feeling rather than thinking. (Jeff Wise)

Tensions of modern learning

Wednesday, June 29, 2016

The Rhino Principle. A rhino is not a particularly subtle or intelligent creature, yet it has managed to dominate the savanna through sheer determination and aim. It takes initiative when it sees something it wants and puts everything into what it does best: charge!

I've always been suspicious of collective truths. (Eugene Ionesco)

a) Why is IH fabricating so many memes?

More readers have asked why the supporters of IH try to create so many anti-Rossi and, again, so many pro-IH memes. A first aspect showing they are not good in the art of Memetics is the "optimal density" of memes. Exactly as in agriculture, where there is an optimal density of plants in a crop, too-dense memes start to mutually annihilate each other; just remember the incompatible "the plant is impossible" (the non-removable heat would have killed people) and the other, "the plant does not work at all." It is unclear if a list of memes is made and used according to a plan, or if an "everything goes" ineffective mentality rules without control. Why do they do this? Because, very probably, they do not have anything better! If they did, they would not have made another motion to dismiss. The situation is: Rossi wants to go to Court, IH wants to escape.
You can guess three causes why this is so... please!

b) The stoppable subtleties of Jed Rothwell's personal memes

Admirably hyperactive for his followers, Jed Rothwell has created his own discussion thread, serving simultaneously as a meme generator, a guillotine for everything Andrea Rossi, and a training camp for his very specific subtleties. Here it is:

I was wrong about Rossi, but what I fear most is that I might be partly right

Subtle as a rhinoceros, Jed writes here about the very document that has irreversibly convinced him that there was no excess heat in the 1MW experiment; he has received this from Andrea Rossi. Subtle as a rhinoceros, he seemingly suggests here: "Andrea Rossi is a collection of sins and evilness; however, even he is able to appreciate a genius, a genuine high-class expert, and this must be the reason that this otherwise very secret document has found me. Unfortunately, contrary to Rossi's expectation, the paper has convinced me quasi-instantly, but beyond any doubt, that the plant does not work at all; the Test is worse than a disaster." The text is imagined, but this must be the idea. A possible (but morally impossible!) alternative is that Jed has invented the ghost-document story as, again, a subtle trap for Rossi, who, desperate at seeing that Jed is convincing everybody that there was no excess heat, will come out with his data, and Jed will officially massacre and annihilate it: an easy prey for him! Rossi did not want to comment about Rothwell's calorimetric genius; he knows why! I have to confess that I have not searched the rothwellisms thoroughly (surely I will not be one of his biographers); however, the following statement, a strong pro-IH meme, has an even higher degree of rhinocerian subtlety. It is text and context, not an extract:

"Our enemies will put fraud front and center. This will be another blow against cold fusion, thanks to Rossi. By the grace of God we may still have money from I.H., without which this field would be dead, dead, dead."
Some critics have found that it needs a few definitions; then again, Jed has never spoken about the grace of God, as far as I remember. The message is crystal clear: "IH is the savior of LENR!" Isn't it too much to say that without money from IH, LENR would be three times dead; simply dead is not sufficient? Would money from SKINR, possibly from the ARMY, money outside the US as in Japan, India, Russia, China, the EU, etc., not be able to keep LENR in at least a half-dead state? For Jed, is the idea that LENR needs new ideas and young researchers even more than (micro-)funding too subtle? He says directly to all LENR researchers: "Be against Rossi, be with IH; otherwise your research will die, die, die!" Further, no comment!

c) Re-read Eugene Ionesco's play: we LENR fighters have to avoid rhinocerization!

There is plenty of absurdity in this process of creating the memes of IH. I well remember this play about people losing their humanity. I am terrified and will not tell more; my readers can decide if the meme factory has something rhinocerian in it or if, on the contrary, it is a congregation of angels! However, I will finish in Jed's NEW style, asking: "For God's sake, IH, please go boldly and openly to the Trial!"

1) Is Clueless Jed Rothwell Paid or Played to Slander Penon and the ERV Reports on the MW COP~50 E-Cat Plant?

2) Andrea Rossi answers

Gerard McEk, June 28, 2016 at 8:11 AM: Dear Andrea, You recently said that the light of the QuarkX has given you an idea how the Rossi-effect may work. (In other words: you may have seen the light.) 1. Do you make any progress with the theory and 2. Do you expect it to lead to new patents? In the past you said that you were preparing many patents. 3. Do you expect some of these to be published soon? 4. Is there any progress in the domestic QuarkX or 5. Do you expect the lower-temperature E-Cat to be the most suitable solution? Thank you for answering our questions. Kind regards, Gerard

Andrea Rossi, June 28, 2016 at 3:55 PM: Gerard McEk: 1. yes 2. yes 3. no 4.
yes 5. I do not know yet. Thank you for your attention, Warm Regards,

3) Andrea Rossi does not answer and does not comment:

June 28, 2016 at 1:52 PM: Dear Dr Andrea Rossi: sifferkoll link given here

My comment: IH again tries to escape from the litigation. If 1/100 of the slanders and lies deposited in the blogs by the mad dogs of IH were true, IH would be eager to go to court… the fact that they are trying to delay and suffocate the litigation makes clear that they are afraid of it. Evidently they know that you have evidence that will defeat them in Court, where what counts is not the chattering of the mad dogs, but the real evidence. In fact it appears that you are fighting to go to Court, while they are trying to run away.

4) Russian-language video: "News re LENR and CNF philosophical storm": Seminar "Philosophical Storm," June 28, 2016, presentation of Igor Iurievich Danilov. First part: QuarkX of Andrea Rossi. Second part: Microbes of Tamara Vladimirovna Sahno and Viktor Mihailovich Kurashov.

5) Did Jed Rothwell Admit Being an IH Contracted Spin Doctor with a Freudian Slip?

The correct link to the Calaon paper is this:

Understanding of molecular hydrogen has implications from industry to medicine

Tuesday, June 28, 2016

The rule or domination by a meme or memes, which are cultural practices or ideas that are transmitted verbally or by repeated actions from one person's mind to the minds of other people. My Septoe: "20. We live in memecracies, ideas dominate us."

In the original introduction to the word meme in the last chapter of 'The Selfish Gene,' I did actually use the metaphor of a 'virus.' So when anybody talks about something going viral on the Internet, that is exactly what a meme is, and it looks as though the word has been appropriated for a subset of that.
(Richard Dawkins)

a) IH's plan seems to be based on memes: killer memes for Rossi and friendly ones for themselves

'Meme', the cultural equivalent of 'gene', is a concept and word of vital importance; however, paradoxically, it is not a strong meme itself, being a bit too intellectual. Yet you cannot think well if you do not consider the existence of memes. I have written a lot about them, including in this Blog. If you are not familiar with memes, please read at least:

It is my pleasure to announce that now, again, memes have helped me to solve a problem I found at first very difficult; in retrospect I was slow, non-creative and rigid in thinking. It is about the enigma of the furious and seemingly senseless character, plant and technology assassination campaign of the IH propagandists led by Jed Rothwell (see a new opus by him below). Why, for Hermes's sake, if they are right and can automatically win the Trial? Why, for Minerva's sake, if they are wrong, does it help when facts speak at the Trial? First, it is obvious that IH manifests a totally negative enthusiasm toward the Trial and tries very hard to escape from it; see the papers at 4) Legal battle. No traces of the noble spirit of "Fiat Justitia, pereat mundus",_et_pereat_mundus Justice at any price; but perhaps the cost is too high and the chances to win not so very high. So what they actually do is clear: stay calm but angry, inventive, efficient, and make MEMES of two types: A) killer anti-Rossi (and anti-whatever-belongs-to-Rossi) memes; B) friendly, nice, pro-IH memes. A-memes are cheap, free, but B-memes have a cost and need more fantasy... and money. PLEASE read for that the opinions of Doug Marker. The plan is to disseminate these memes on the Web and make them contagious; the Press, the public opinion and perhaps even the jurors of the Court will be memefied, so the 'obviously good' will increase tremendously its chances to defeat the 'evidently malefic.' We live in memecracies. Indeed?
b) Jed Rothwell's new opus

"Did Rossi and IH have a valid contract that states that if the general performance test were successful . . ." I have not read the contract carefully, and I know little about contracts. Here is what I know: the performance was not successful. The data from Rossi proves that the machine did not work.

"IH has not paid and said the Test was not good; where is the first written document with serious warnings from IH to Rossi saying this; was it after the 1st, 2nd or 3rd ERV report?" This is not a technical question. This has no bearing on calorimetry or science. This question illustrates how you have missed the point. You cannot judge a technical question by looking at people's behavior, or by examining business contracts. This question is fluff. The dispute between Rossi and I.H. is about calorimetry. It is about flow rates, temperatures, steam quality and instruments. Your questions are irrelevant. Even if you knew the answers, they would not bring you one millimeter closer to knowing whether the machine worked or not. Instead of waiting to learn the technical details, you obsess over these unrelated, non-scientific questions and gossip that has no bearing on the technical issues. I do not understand how a person with a technical background could make such a mistake.

So, dear Jed, there are only technical questions, yet later you say we have to apply the Scientific Method. Please apply it to Rossi's question regarding the persuasion of the investors, OK?

1) Ok, So What Did Really Happen When Industrial Heat F*cked Up the Deal with Leonardo/Rossi? And Why?

2) Jones Day Lawyer Drones on Repeat in Another MTD. However, again Showing the Malicious Intent of IH!
6) The mystery of the irrational withdrawal of the E-Cat support
7) TheNewFire - LENR News
8) Yet Another LENR Theory: Electron-mediated Nuclear Reactions (EMNR)
9) Andrea Calaon, Independent Researcher, Monza, Italy

An attempt is made to build an LENR theory that does not contradict any basic principle of physics and gives a relatively simple explanation of the plethora of experimental results. A single unconventional assumption is made, namely that nuclei are kept together by a magnetic attraction mechanism, as proposed in the 1980s by Valerio Dallacasa and Norman Cook. This assumption contradicts a non-proven detail of the standard model, which instead attributes the nuclear force to a residual effect of the strong interaction. The theory is based also on a property of the electron which has long been known but has rarely been used: the Zitterbewegung (ZB). This property should allow the magnetic attraction mechanism that binds nucleons together to manifest also between the electron and any isotope of hydrogen, leading to the formation of three neutral pseudo-particles (the component particles remain separate entities), collectively named here Hydronions (or Hyd). These pseudo-particles can then couple with other nuclei and lead to a fusion reaction "inside" the electron. The Coulomb barrier is not overcome kinetically, but through what could be interpreted as a range extension of the nuclear force itself, realized by the electron when some specific conditions are satisfied. The most important of these necessary conditions is that the electron has to "orbit" the hydrogen nucleus at a frequency of 2.055 × 10^16 Hz. This frequency corresponds to photons with an energy of about 85 eV, or equivalently a wavelength of 14.6 nm, in the Extreme Ultra Violet (EUV). So the large quanta of nuclear energy fractionate into EUV photons during the formation of the Hydronions and during the coupling of Hydronions to other nuclei.
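The numbers quoted in the abstract (85 eV and 14.6 nm from a frequency of 2.055 × 10^16 Hz) can be cross-checked with the standard Planck relation E = hf and λ = c/f. A minimal sketch of my own, using standard physical constants rather than anything from the paper:

```python
# Cross-check of the abstract's figures: a photon at f = 2.055e16 Hz
# should carry roughly 85 eV and have a wavelength near 14.6 nm.
# Constants are standard CODATA values, not taken from the paper.

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

f = 2.055e16                 # orbit frequency stated in the abstract, Hz
energy_ev = H * f / EV       # photon energy, eV
wavelength_nm = C / f * 1e9  # photon wavelength, nm

print(f"{energy_ev:.1f} eV, {wavelength_nm:.1f} nm")  # ~85.0 eV, ~14.6 nm
```

Both figures agree with those stated in the abstract, so the three quantities (frequency, energy, wavelength) are at least internally consistent.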
The formation of Hydronions requires the so-called Nuclear Active Environment (NAE), which is what makes LENR so rare and difficult to reproduce. The numbers suggest that the NAE forms when an unshielded atomic core electron orbital that has an "orbital frequency" near to the coupling frequency is struck by a naked Hydrogen Nucleus (HNu). This theory therefore implies that the NAE is not inside the metal matrix, but in its immediate neighbourhood. The best candidate atoms for a NAE are listed, based on their ionization energies. The coincidence with the most common LENR materials appears noteworthy. The Electron Mediated Nuclear Reactions (EMNR) theory can also explain very rapid runaway conditions, radio emissions, biological NAE, and the so-called "strange radiation". © 2016 ISCMNS. All rights reserved. ISSN 2227-3123. Keywords: EMNR theory, Extreme ultra violet, Hydronion,

10) Electron Deep Orbits of the Hydrogen Atom

J. L. Paillet (1), A. Meulenberg (2); (1) Aix-Marseille University, France; (2) Science for Humanity Trust, Inc., USA

This work continues our previous work [1] (and, in a more developed form, [2]) on electron deep orbits of the hydrogen atom. An introduction shows the importance of the deep orbits of hydrogen (H or D) for research in the LENR domain, and gives some general considerations on the EDO (Electron Deep Orbits) and on other works about deep orbits. A first part recalls the known criticism against the EDO and how we face it. On this occasion we highlight the difference in the resolution of these problems between the relativistic Schrödinger equation and the Dirac equation, which leads, for the latter, to considering a modified Coulomb potential with finite value inside the nucleus. In the second part, we consider the specific work of Maly and Va'vra ([3], [4]) on deep orbits as solutions of the Dirac equation, the so-called Deep Dirac Levels (DDLs).
As a result of some criticism about the matching conditions at the boundary, we verified their computation, but by using a more complete ansatz for the "inside" solution. We can confirm the approximate size of the mean radii of DDL orbits, and that they decrease when the Dirac angular quantum number k increases. This latter finding is a self-consistent result since (as distinct from the atomic-electron orbitals) the binding energy of the DDL electron increases (in absolute value) with k. We observe that the essential element for obtaining deep-orbit solutions is special relativity.

Some thoughts on why Jed Rothwell is so surprisingly persistent with his comments and claims that the entire Rossi 12-month test was a failure (and, by extension, his blunt claims that Andrea Rossi and his claims are dishonest). Firstly, those of us who made published comments about IH's highly questionable behavior (such as my own comments published here) homed in on these aspects:

1) That IH had paid two lots of money to Rossi for eCat tech.
2) That IH had been in a relationship with Rossi for close on 3 years and had not given any clear indication before the 12-month test of any issues with that relationship.
3) Rossi's claim that IH used the early phases of the 12-month test to do fundraising (this was very damning to IH's position).
4) Especially, that when this all exploded, the loudest anti-Rossi voices were almost all known anti-LENR people as well, and thus were clearly biased and opportunistically leaping in to exploit the Rossi-IH rift while still using the situation to attack LENR in general.

What I am seeing is that IH have adjusted their tactics in their publicity battle with Andrea Rossi by privately enlisting the support of people we all know are pro-LENR (such as Jed Rothwell, and 1 or 2 others). Jed is currently, and without doubt, a champion for IH's position.
The issue you (Peter) raise regarding this is how Jed is able to be so certain when few others have access to whatever it was/is that he has been given access to. The outcome: Jed is proclaiming (probably with some sense of justification) that we now need to accept 'Jed says' vs 'Rossi says'. IMHO, it is clear that IH have set out to enlist the support of recognized pro-LENR identities to counter the quite effective anti-IH messages that Andrea Rossi put out around the time he filed his lawsuit. But the questions you (Peter) raise are valid questions and deserve to be answered. Clearly there will be no answers from Jed, who argues he knows nothing other than the material passed to him to enlist his active anti-Rossi remarks. So, by enlisting people who are known to be pro-LENR, IH are effectively countering the harsh criticism some of us directed at them, and also those people who we know are both anti-LENR and anti-Rossi and who leaped on the IH bandwagon as champions of their battle with Rossi. It seems to me IH saw itself in difficulty in the word war until it was able to associate its position with pro-LENR people and break away from being supported only by the opportunity-grabbing anti-LENR voices we all know so well. In regard to IH actively enlisting pro-LENR people to publicly join in their defense: if I were in their shoes (they are clearly in a difficult position) I would do what they are doing too. But it does raise the question as to what kind of inducements or assistance IH might be offering these people to go public on their behalf and in their defense against Andrea Rossi. I know that all LENR researchers need and seek support, so when pro-LENR people clearly become vociferous supporters of the IH position against Andrea Rossi, it justifiably raises the question as to what 'rewards', tangible or intangible, these pro-IH voices are being offered. All such questions are valid and deserve answers. 
Doug Marker Monday, June 27, 2016 What is impossible in LENR? That Andrea Rossi will give it up before everybody will be able to get energy from it. That is the only impossibility I can be sure of. (Andrea Rossi) “Citius, Altius, Fortius. Faster, Higher, Stronger.” (Olympic motto) In soccer, LENR and Life, Citius always wins! Yesterday my wife and I were watching soccer, UEFA EURO 2016, Belgium vs. Hungary. After a few minutes, remembering my career as an apprentice timekeeper for athletics in the 1950s (my father then acting as coach), I told my wife: "See, the Belgians are running so much faster, under 10.5 seconds per 100 meters, while the Hungarians are slower than 11 seconds per 100 meters; it will be a catastrophe." Something similar happened a few days ago when the Albanians "outran" the Romanian team. Speed is so important in other sports too. I remember that as a kid I was sent to maestro Pellegrini's fencing school and he was content with me: clumsy, but very fast-moving hands. It was in our period of reading Dumas and Michel Zevaco etc., so there was a cult of fencing and duelling, but that is over; now I have some duels with LENR people, as below. It is not easy to speak about speed in classic LENR. However, Rossi obviously loves speed, including in development. b) Facts can be understood only in their context. Jed Rothwell: You have not seen the data, so you have no basis to be convinced. Or not convinced. This is a technical issue. Opinions don't count. Everything hinges on flow rates, temperatures, instrument specifications, and so on. Based on these factors, experts at I.H. concluded that the reactor is not producing any excess heat. I am far less capable than those experts, but to the best of my ability, looking at a sample of that data, I too reached that conclusion. 
You, Peter Gluck and everyone else will have to wait to see the data, and also the analysis of it from Rossi and from I.H. You cannot decide anything until then. You cannot even have an opinion. The rules of engineering and science say that every judgement must be grounded in facts, and you have no facts. I think it is a grave mistake for Peter to assume he knows what is going on, and to assume that Rossi is right in this dispute, and that I.H. and I are lying. Since he has no facts, this reaction is purely emotional. It is irrational. Since he has no engineering details, he trots out all kinds of half-baked notions about business contracts, or the timing of announcements, or he quotes lies spread by Rossi -- as if you can draw a technical conclusion from such fluff! It is pathetic. Peter is wrong. He will regret it if the facts are ever revealed. In science, you must never let your emotions or wishful thinking overrule rational, objective, fact-based analysis. Jed stubbornly continues not to answer my 5 stupid, nosy and irrelevant questions and, as a symptom of something I still do not want to define exactly, he answers an imaginary question I have never put. This question, his, not mine, can be formulated as: "Rossi says test good, IH says test bad. Being a Rossi fan and having ab ovo great prejudices against IH, I believe Rossi. Why, on what basis, are you, Jed, certain that IH is right?" IT IS A NONEXISTENT QUESTION! I will repeat and explain my questions in a form a bit more accessible for you, supposing you are 100% right: Rossi wrong, IH right. We will state together whether this manoeuvre makes up for the missing IQ of the questions, makes them less impertinent and gives them a minimum of sense and relevance. NOTE. 
I see logical, rational, straight thinking and discussion are not on your list of strengths, so I must have more patience with you; just first I want to tell you about FACTS, which are your privilege and not given to ordinary people: facts have significance only in context. A first fast example. You read: "Edmond Dantes has mercilessly ruined the lives of three rich and happy men." "What a sadistic rascal!" is the natural reaction to this, but if you put the fact in its proper context, the story of The Count of Monte Cristo by Alexandre Dumas, it changes the understanding of the facts completely, doesn't it? Now, your facts being OK, let's return, for the last time, to the lowly evaluated questions. 1- Did Rossi and IH have a valid contract stating that if the general performance test were successful, they should pay a great sum to Rossi? Possible answers: Yes, and No, the contract was broken by IH. Seemingly, facts missing, it was not, and it has opened Rossi's way to a Trial. Jed, please do not tell me that IH is happy with the Trial; I am stupid, but not sooo stupid! It means, if Rossi's results were indeed such a total and continuous catastrophe, why maintain the contract.. for the sake of (the Greek God of Greed)? What could be confidential or secret in such a document of angelic honesty: "you are in trouble, we do not see excess heat even with the magnifying glass!" Harmony between thoughts, words and action is essential even for a company. It did not happen even at the end of the test or at the receipt of ERV report no. 4. It happened when the Trial started. What is your fact, in what context? 3- IH employees participated in the test in parallel with Rossi's men; is there a written document showing they were in any way discontented with the test, the test being “a disaster”? Is this a toxic question? Rossi says they were there; what is the fact you know and those who ask do not? 4- When was the total incompetence of the ERV discovered; i.e. 
the inadequacy of the measuring instruments, and when was it stated that the measurements are fatally flawed? (a document dated in 2015?) As far as I can understand, the methods of measurement were the same, dreadful, from start to finish; they were never good, then. This is a sad but explosive fact in the context of a valid 94 million contract for a successful test. 5- Rossi claims: “All I know is that Darden and JT Vaughn collected $150 million after the test of the 1 MW E-Cat began, using the first and second report of the ERV as a tool to get the money, then after the 4th report (equal to the former ones) they said what they said and did not pay.” Is this slander or a false accusation? This question deserves its color; it can be an infamous accusation, but it comes from Rossi, and who knows the facts related to the 1MW plant better? Is it a stupid question? Not at all, because it is disturbing. It is nosy only if it is completely false. It is not relevant for Jed, but it can be relevant for many people, some of them quite influential, due to its deeper significance. So, Jed, I ask you not to invent my questions, to retract at least "nosy", and to feel free to play with your facts, which are flawed like ultraviolet unicorns: invisible, intangible, unverifiable, missing birth certificates; and.. prepare to get facts from the Trial. My own sources, whose identity I cannot reveal, say that the trial will take place in the first 5 days of September. A new rule: all the witnesses will be obliged to perform an IQ test before testifying. You have arranged this? 1) Excess Heat Generation in Ni + LiAlH4 System (New Report by I.N. Stepanov and V.A. Panchelyuga) 2) LENR afternoon with Ubaldo Mastromatteo - more videos: Pomeriggio LENR, Ubaldo Mastromatteo (5), Claudio Pace. I have to ask Ubaldo to send us the text! 
It is not yet clear to what extent it is about LENR in the frame of rational mysticism. 3) An interesting paper signalled by EGO OUT on June 25 is discussed here: [Vo]:Ukrainian Paper on the active particle of LENR 4) A cold fusion paper in Dutch: 5) Andrey Illich Fursov (Russian historian, sociologist, politologist and publicist): About the Nuclear Cold Fusion of Ivan Stepanovich Filimonenko 6) On June 21, 2016, in Geneva, Switzerland, there was a press conference about an epochal discovery of transmutation of chemical elements by a biochemical method. Participating in the press conference were Tamar Sahno and Viktor Kutashov, the scientists who made this discovery, and Vladislav Karabanov, administrator and leader of this project. Link to the patent for this invention. Very interesting; I started to discuss this with Vladimir Vysotskii, the greatest specialist in biochemical transmutations. 7) Also see the above info, here: Russian Team “Actinides” Announces Discovery of Industrial Biochemical Method of Elemental Transmutation (Press Conference and Press Release) 8) Greg Goble Energy 54+ Black Swans listed by Paul Maher Umair Haque: "The Art of Awakening" It is time for a LENR awakening! Why rudeness at work is contagious and difficult to stop
Quantum Mechanics and Philosophy II: Measurement and Interpretations Author: Thomas Metcalf Category: Philosophy of Science Word Count: 1000 Editor’s Note: This essay is the second in a series authored by Tom on the topic of quantum mechanics and philosophy. Read the first essay here and the third essay here. I. Measurement The story in the previous article in this series corresponds to real experiments about properties of microscopic particles.1 Recall that these experiments seem to show that particles can be partly in one position and partly in others, and that measuring their positions seems to change other properties about them. Thus there seems to be something very strange about measuring the properties of these particles. Let’s talk about what happens, physically, when someone makes a measurement. Suppose you’ve flipped a coin at t1 and haven’t looked at the result yet; it’s apparently in a superposition2 of Heads and Tails.3 You’ll look at the result at t2. Here’s what an analogue of the Schrödinger equation would say about what happens: t1: The coin is in a superposition: a combination of 50%-Heads and 50%-Tails. Then … t2: You are in a superposition: a combination of 50%-observing-Heads and 50%-observing-Tails. Of course, no one has ever seemed to find herself in a superposition of two observations.4 It turns out that there are roughly three5 things we could say as the physical story about what happens when you make the observation. When you look at the coin … (Copenhagen) … the superposition “collapses” (indeterministically!)6 into 100%-Heads or 100%-Tails, but not both.7 (Many-Worlds) … the universe branches into: U1:      100%-you observes 100%-Heads. 
U2:      100%-you observes 100%-Tails.8 (Bohm) … you observe what was true all along: the coin was 100%-Heads (or 100%-Tails) even before you looked at it.9 Again, the Schrödinger equation predicts that measurement will not collapse a superposition; it predicts that the observer will now be in a superposition. But we don’t find ourselves to be in superpositions. So what is measurement, and does it really violate the best-confirmed equation we’ve ever used? II. Interpretations There are several ways of interpreting measurement itself, corresponding to several hypotheses about what’s actually going on in the physical world with these particles. A. Copenhagen When we look for particles, we don’t seem to find them in superpositions. But of course when we don’t look for them, they seem to stay in superpositions. So there must be something special about measurement; it must cause superposed things to stop being superposed. Copenhagen-theorists say that observation “collapses” superpositions, and as noted, this collapse is indeterministic; nothing predicts or can predict whether the particle will be found here or there.10 A nice thing about this interpretation is that it seems very much like classical physics. There are particles, and they might do strange things when we’re not looking for them, but when we do look for them, they “become” classical: they just are in a particular place. The coin just is ‘Heads’ or ‘Tails.’ A not-so-nice thing about this interpretation is that there is simply no direct experimental evidence that collapse ever actually happens.11 Indeed, collapse is incompatible with the Schrödinger equation. Copenhagen-theorists conclude that collapse must have happened (since otherwise, we’d see the particles in superpositions), but we actually don’t have a mathematical or a physical story that tells us how or why it happens. This interpretation also makes measurement mysterious. How does the coin “know” I’m looking at it? 
Could a cat’s observation “cause” this collapse? A bacterium’s?12 It would be better overall if we didn’t have to say that observation itself causes physical changes in the thing observed. B. Many-Worlds Roughly speaking, the Many-Worlds interpretation says that superpositions remain after observation. When you look at the coin, the world evolves into a superposition of you observing ‘Heads’ and you observing ‘Tails.’ A nice thing about this interpretation is that the mathematical side is completely straightforward.13 The best-confirmed equation we have turns out to be true. Measurement and observation aren’t really “special”; they’re simply further ways of the world evolving. Nothing collapses. A not-so-nice thing about this interpretation is that it’s incompatible with our experience unless we say that the universe itself is branching into an outcome for every observation. We don’t ever see superpositions, so it must be that each branch of the universe gets its own “outcome” of the observation. This conclusion seems very strange to many people. As it happens, this interpretation also makes probability very mysterious.14 C. Bohm The third interpretation to consider is the most “classical” of the lot. According to David Bohm and his followers, the coin was definitely ‘Heads’ or definitely ‘Tails’ before you measured it. The reason is that in addition to the coin, there was also another thing: a sort of guiding probability-wave that caused the coin to land on ‘Heads’ or ‘Tails.’ The world evolves deterministically, and superpositions, in a sense, aren’t real.15 Particles just seem to behave in a “superposition” way because we don’t have a way of monitoring everything about them. A nice thing about this interpretation is, as mentioned, that it’s very classical. Most of the mystery in quantum mechanics evaporates. There’s nothing special about observation. The Schrödinger equation merely tells us how to predict how deterministic systems evolve. 
A not-so-nice thing about this interpretation is that in the details, it turns out to need nonlocality.16 Basically, that means that things can affect each other at faster than the speed of light, even if they’re nowhere near each other. I observe ‘Heads’ on a coin here, and instantly, somehow, a coin ten light years away “becomes” ‘Tails.’ And there’s no obvious particle or mechanism to convey that causal signal, if it is a causal signal. Another thing some people don’t like about this interpretation is that it seems to require the existence of an object we have no way of empirically detecting: the “pilot wave” that guides the particles to do what they do.17 IV. Next Steps We don’t have any empirical tests that can easily decide between these and other interpretations. We might never.18 So again, the choice between interpretations is at least partly a philosophical choice. It turns out that the choice between interpretations also has many other implications for traditional philosophical questions. The last article in this series will take a look at some of those questions. 1Usually photons and electrons, and most commonly, spin-properties; cf. Albert 1992: 1, n. 1. 2In the “party” metaphor, this is like watching a guest arrive (at t1) through the front door before you’ve seen which item they brought, and then looking (at t2) at which item they brought. 3Coins in the real world don’t actually end up in superpositions. The reason is something called ‘decoherence’: big objects such as coins interact with their environments in lots of ways, constantly, enough to push them out of superpositions. On this, see Polkinghorne 2002: 43-44 and Ghirardi 2014: § 5. However, our best physics says that in principle, a coin could be placed in a superposition of ‘Heads’ and ‘Tails.’ See, e.g., O’Connell et al. 2010. 4What would it look like, to the observer? I have no idea. If the coin is an American quarter, would you be seeing 50% of George Washington’s face and 50% of an eagle? 
Would it look like a double-exposed photograph? Cf. Albert 1992: 112 ff. and Greene 2011: 207-08. 5There are different ways of dividing things up, but this sort of division is one of the most common in introductory-level works. See, e.g., Polkinghorne 2002: 46-56. 6Indeterminism can be construed as the thesis that a particular state of the universe does not physically entail any future state. See, e.g., Hoefer 2014; Haramia 2014: § 3; and Nagashima 2014: § 2. On the indeterminism in the quantum world, see Greene 2011: 191-192. 7This is sometimes called the ‘Copenhagen’ interpretation, after its main proponent, Niels Bohr (Bohr 1987a; Bohr 1987b; Bohr 1987c; Greene 2011: 208-09). Cf. Albert 1992: 80 ff. Notably, no one has ever found any direct experimental evidence that collapse of this sort actually happens (Albert 1992: 110-11). 8This is sometimes called the ‘Many-Worlds,’ ‘Everett,’ or ‘Everett-De Witt’ interpretation, after its main proponents, Hugh Everett and Bryce De Witt (Everett 1957; De Witt 1970; Albert 1992: 112-13). 9This is sometimes called the ‘Bohm’ interpretation, after its main proponent, David Bohm (Bohm and Hiley 1993; Albert 1992: ch. 7). 10Albert 1992: 36; Polkinghorne 2002: 24-25. 11Albert 1992: 110-11. It’s possible to chart the evolution of a system that looks the way it would if the wavefunction is collapsing, but this is not an observation of collapse; see Murch et al. 2013. See also Greene 2011: 201-02 on the incompatibility of this interpretation with the mathematical formalism. 12For some discussion of measurement and this “macro-objectification problem,” see especially Ghirardi 2014: § 3. See also Albert 1992: 79 on the “measurement problem” and Greene 2011: 202 for the “bacterium” example. 13Albert 1992: 112-13; Greene 2011: 203-09 and 212. 14Suppose a certain quantum-mechanical process is known (empirically) to be 10% likely to result in outcome X and 90% likely to result in outcome Y. 
Now we run the process 1,000 times, and sure enough, about 100 times, the outcome is X, and about 900 times, the outcome is Y. But according to Many-Worlds, each of those 1,000 iterations caused the universe to branch into two universes: one for X and one for Y. Why, then, did we not observe about 500 Xs and about 500 Ys? See Greene 2011: 228-37 and Greaves 2007. 15Polkinghorne 2002: 53-54. 16Bell 1964; Albert 1992: 155 ff. 17Polkinghorne 2002: 54-55. 18Polkinghorne 2002: 55-56; Ghirardi 2014: § 13. Albert, David Z. (1992). Quantum Mechanics and Experience. Cambridge, MA: Harvard University Press. Bell, John. (1964). “On the Einstein Podolsky Rosen Paradox.” Physics 1: 195-200. Bohm, David and Basil J. Hiley. (1993). The Undivided Universe: An Ontological Interpretation of Quantum Theory. Oxford and New York: Routledge. Bohr, Niels. (1987a). The Philosophical Writings of Niels Bohr, Vol. I: Atomic Theory and the Description of Nature. Woodbridge, CT: Ox Bow Press. ———-. (1987b). The Philosophical Writings of Niels Bohr, Vol. II: Essays 1932-1957 on Atomic Physics and Human Knowledge. Woodbridge, CT: Ox Bow Press. ———-. (1987c). The Philosophical Writings of Niels Bohr, Vol. III: Essays 1958-1962 on Atomic Physics and Human Knowledge. Woodbridge, CT: Ox Bow Press. De Witt, Bryce Seligman. (1970). “Quantum Mechanics and Reality,” Physics Today 23(9): 30-35. Everett, Hugh. (1957). “Relative State Formulation of Quantum Mechanics,” Review of Modern Physics 29: 454-62. Ghirardi, Giancarlo. “Collapse Theories.” In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2014 edition), URL =<http://plato.stanford.edu/archives/spr2014/entries/qm-collapse> Greaves, Hilary. (2007). “Probability in the Everett Interpretation.” Philosophy Compass 2: 109-28. Greene, Brian. (2011). The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos. New York: Alfred A. Knopf. Haramia, Chelsea. (2014). “Free Will and Moral Responsibility.” In Andrew Chapman (ed.) 
1000-Word Philosophy, URL = <https://1000wordphilosophy.wordpress.com/2014/06/02/free-will-and-moral-responsibility/> Hoefer, Carl. (2014). “Causal Determinism.” In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2014 edition), URL =<http://plato.stanford.edu/archives/spr2014/entries/determinism-causal/> Ismael, Jenann. (2014). “Quantum Mechanics.” In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2014 edition), URL = <http://plato.stanford.edu/archives/spr2014/entries/qm> Murch, K. W. et al. (2013). “Observing Single Quantum Trajectories of a Superconducting Quantum Bit.” Nature 502: 211-14. Nagashima, Jonah. (2014). “Free Will and Free Choice.” In Andrew Chapman (ed.), 1000-Word Philosophy, URL = <https://1000wordphilosophy.wordpress.com/2014/04/03/free-will-and-free-choice/> O’Connell, A. D. et al. (2010). “Quantum Ground State and Single-Phonon Control of a Mechanical Resonator.” Nature 464: 697-703. Polkinghorne, John. (2002). Quantum Theory: A Very Short Introduction. New York: Oxford University Press. About the Author Tom is a visiting assistant professor at Spring Hill College in Mobile, AL. He received his PhD in philosophy from the University of Colorado, Boulder. He specializes in ethics, metaethics, epistemology, and the philosophy of religion. Tom has two cats whose names are Hesperus and Phosphorus. Website: http://colorado.academia.edu/ThomasMetcalf
Science News Curated by RSF Research Staff A novel computational technique to solve the many-particle Schrödinger equation One of the goals of electronic structure theory is to precisely describe increasingly complex polyatomic systems. The most difficult question in this kind of many-body problem is how to describe electron correlation. While 99% of the question is solved by the classical variational principle, a lot of important physics happens in the remaining 1%. To look into this 1%, there are three known methods, and one of them is named coupled-cluster theory. The basic assumption of coupled-cluster theory is that the exact many-electron wavefunction may be generated by the operation of an exponential operator on a single determinant, Ψ = e^T Φ0, where Ψ is the exact wavefunction, T is an excitation operator and Φ0 is the reference determinant defining the Fermi vacuum, usually the Hartree-Fock determinant. The excitation operator can be written as a linear combination of single, double, triple, etc. excitations, up to N-fold excitations for an N-electron system. Interestingly, scientists recently discovered a new approach to solve this problem in a much more efficient way. It's like playing chess and being able to predict the outcome of the game after the initial few moves. The team led by Piotr Piecuch just proposed a new approach to the determination of accurate electronic energies that are equivalent to the results of high-level coupled-cluster calculations. The approach is based on merging a formalism for correcting energies with stochastic configuration interaction. They showed that this combination allows one to recover high-level energetics even when electronic quasidegeneracies become substantial. This new stochastic approach is opening up new possibilities in the way high-level coupled-cluster calculations are carried out. 
In the case of nuclei, instead of being concerned with electrons, one would use our new approach to solve the Schrödinger equation for protons and neutrons. […] The mathematical and computational issues are similar. Just like chemists want to understand the electronic structure of a molecule, nuclear physicists want to unravel the structure of the atomic nucleus. Once again, solving the many-particle Schrödinger equation holds the key. Piotr Piecuch, Department of Physics and Astronomy, Michigan State University The advantages of the proposed methodology are illustrated by molecular examples, where the goal is to recover the energetics obtained in the coupled-cluster calculations with a full treatment of singly, doubly, and triply excited clusters. Continue reading at: https://phys.org/news/2017-12-approach-paradigm-electronic-theory.html
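A toy numerical illustration of the exponential ansatz described above (not the Piecuch group's actual method, and with made-up amplitudes): in a minimal three-configuration basis (reference, single excitation, double excitation), the excitation operator T is strictly lower triangular and hence nilpotent, so the power series for e^T terminates exactly, and acting on the reference it automatically generates the "disconnected" t1²/2 contribution to the doubles:

```python
import numpy as np

# Toy "Fermi vacuum": the reference determinant Phi0 in a 3-configuration
# basis (reference, singly excited, doubly excited).
phi0 = np.array([1.0, 0.0, 0.0])

# Excitation operator with illustrative (invented) amplitudes t1 and t2.
t1, t2 = 0.2, 0.05
T = np.zeros((3, 3))
T[1, 0] = t1   # single excitation out of the reference
T[2, 0] = t2   # connected double excitation
T[2, 1] = t1   # a single excitation on top of a single

# e^T via its power series; T is nilpotent here, so T^3 = 0 and the
# series terminates exactly after the quadratic term.
expT = np.eye(3) + T + T @ T / 2.0

psi = expT @ phi0
# psi = [1, t1, t2 + t1**2 / 2]: the doubles coefficient contains both the
# connected amplitude t2 and the disconnected product of two singles.
print(psi)
```

The same mechanism is what lets truncated coupled-cluster expansions (e.g. keeping only T1 and T2) still capture higher excitations through products of lower-rank amplitudes.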
b8a5454af82c9da8
Article | Open Transient unidirectional energy flow and diode-like phenomenon induced by non-Markovian environments • Scientific Reports 5, Article number: 15332 (2015) • doi:10.1038/srep15332 Relying on an exact time evolution scheme, we identify a novel transient energy transfer phenomenon in an exactly-solvable quantum microscopic model consisting of a three-level system coupled to two non-Markovian zero-temperature bosonic baths through two separable quantum channels. The dynamics of this model can be solved exactly using the quantum-state-diffusion equation formalism, demonstrating finite intervals of unidirectional energy flow across the system, typically from the non-Markovian environment towards the more Markovian bath. Furthermore, when introducing a spatial asymmetry into the system, an analogue of the rectification effect is realized. In the long time limit, the dynamics arrives at a stationary state and the effects recede. Understanding the temporal characteristics of directional energy flow will aid in designing microscopic energy transfer devices. The analysis of simple prototype quantum energy transfer problems1,2,3 assists in elucidating fundamental thermodynamic concepts in open quantum systems4. In the common construction, energy flow through a quantum system is generated by coupling it to two macroscopic objects, thermal reservoirs of different temperatures5,6. Alternatively, directional energy flow can be attained by supplying work into an asymmetric system7. Energy transfer problems are interesting for exploring the foundations of classical statistical mechanics, quantum dynamics, and the crossover between the classical and quantum worlds8,9. For example, understanding the emergence of Fourier’s law of heat conduction from the principles of open quantum systems is a long-standing problem10,11,12,13,14. 
Moreover, achieving control over energy flow is of enormous importance in many areas of science and technology, including energy management in functional nanoscale devices15,16,17,18, realization of information processing and computation in open quantum systems19,20, control over molecular reactivity and dynamics21, and refrigeration in metal-superconductor junctions22. In microscopic devices, introducing a spatial asymmetry within an anharmonic structure can result in different magnitudes for the forward and backward currents, under the application of a reversed temperature bias23,24. This diode-like behavior has recently attracted considerable theoretical and experimental attention, including the demonstration of phononic25, electronic26,27,28,29 and photonic23,24,30,31,32,33,34,35 rectifications. Given these developments, it is highly desirable to identify minimal conditions under which a diode-like behavior can be obtained, controlled, and enhanced3,21. Studies of rectification and unidirectional energy flow in quantum devices2,3,16 were typically performed under certain standard approximations (semiclassical operation, neglecting coherences in the subsystem, assuming a unique steady state), adopting quantum master equation approaches. Few works have considered design principles based on exactly-solvable quantum models36. This problem is fundamentally important: Can we derive, from microscopic quantum theories, sufficient or necessary boundary conditions for realizing a certain nonlinear energy transport in an atomic, molecular or nanoscale system? In this work, we aim at achieving unidirectional energy flow in an open quantum system. While our results correspond to a temporal behavior, they expose ingredients for asymmetric dissipation and thus, potentially, asymmetric nonlinear transport. We employ reservoirs with different (non-Markovian) spectral properties rather than with different temperatures. 
A structured, non-Markovian environment is characterized by the correlation timescale of its fluctuations37,38, while in a Markovian bath the memory time is shorter than any other characteristic timescale of the system of interest. The bath memory function dictates the manner in which information and energy flow from the system to the attached macroscopic bath, and the back-action of the bath on the system. A finite memory time is crucial for achieving control over the state of an open quantum system (see, e.g.,39 and references therein). Essentially, there is nearly no revival of the system’s fidelity when it is attached to a memoryless Markovian bath (see, e.g.,40 and references therein). Therefore, if the system is coupled to two baths with different memory functions, a unidirectional flow can emerge: energy is fed back from the non-Markovian bath to the system, and simultaneously, the system is releasing its energy to the more Markovian bath. We investigate the dynamics of our model by employing a nonperturbative master equation, derived from the quantum-state-diffusion (QSD) equation41,42,43,44 (see Method). We show that, due to the assignment of distinct memory properties to the baths, a transient unidirectional energy flow develops in a prototype model for energy flow across an open quantum system. The model consists of a three-level system45,46, one of the simplest realizations of quantum engines47, and two uncorrelated bosonic baths, bulk objects. Furthermore, we demonstrate that we can control the magnitude of the temporal energy flow by introducing a spatial asymmetry into the system, coupling it with different strengths to the contacts. This asymmetry yields an effect which can be categorized as a “transient diode effect”: the magnitude of the energy flow is different under forward and reversed operations, upon interchanging the channels connecting to the reservoirs. 
Unidirectional energy flow

To investigate the flow of energy in our system, purely induced by the distinct environmental memory functions, the two baths are assumed to be at zero temperature and the three-level system is assumed to be of a degenerate Λ-type, where the energy splitting between the high level, |3⟩, and the two lower levels, |1⟩ and |2⟩, is set as ω. The setup is shown in Fig. 1, where the energy current across the system is determined by the two energy flows between the system and the two baths. We use the following form for the correlation functions of the two baths, αj(t, s) = (Γjγj/2)exp(−γj|t − s|), with j = 1, 2, where Γj is the coupling strength of the system to the jth bath. This form corresponds to a Lorentzian spectrum. When γj → 0, the jth bath is strongly non-Markovian, with a long memory time as desired. On average, the dissipation rate of the system to the jth bath is greatly suppressed with decreasing γj41,42. In contrast, when γj → ∞, the jth bath is memoryless. In this case, energy flow from the system to the bath is fully irreversible. Therefore, 1/γj can be used as a measure of the environmental memory time.

Figure 1: Schematic representation of our model, a degenerate Λ-type three-level system coupled to two zero-temperature uncorrelated baths characterized by different memory parameters γ and coupling constants Γ.

As our initial condition, we only excite state-3, ρ33(t = 0) = 1. We demonstrate now the transient energy transfer behavior in the system when α1(t, s) ≠ α2(t, s), before the stationary long-time solution ρ33 = 0 is reached. In Fig. 2 we use γ1 = 0.2ω and γ2 = 10ω and show that initially energy is released into both baths simultaneously. However, around ωt = 2.2 the system begins to absorb energy from bath-1. As presented in Method, energy flow then becomes unidirectional, directed from the left reservoir (bath-1) towards the right side (bath-2). We end our simulation when ρ33 becomes extremely small and ρ11 and ρ22 reach their stationary values.
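As a quick numerical check of the statements above (an illustrative sketch assuming the standard Ornstein-Uhlenbeck form αj(t, s) = (Γjγj/2)exp(−γj|t − s|) for a Lorentzian-spectrum correlation function; this is not code from the paper), one can verify that the integrated weight of αj is set by Γj alone, while 1/γj sets the memory time:

```python
import numpy as np

def alpha(tau, Gamma, gamma):
    # Assumed Ornstein-Uhlenbeck (Lorentzian-spectrum) bath correlation function:
    # alpha_j(tau) = (Gamma_j * gamma_j / 2) * exp(-gamma_j * |tau|)
    return 0.5 * Gamma * gamma * np.exp(-gamma * np.abs(tau))

dtau = 1e-4
tau = np.arange(0.0, 60.0, dtau)
for gamma in (0.2, 1.0, 10.0):
    # Left Riemann sum of the kernel over tau >= 0: approaches Gamma/2 for any gamma
    weight = alpha(tau, 1.0, gamma).sum() * dtau
    print(f"gamma/omega = {gamma:5.1f}: memory time = {1/gamma:5.2f}, "
          f"integrated weight = {weight:.4f}")
```

The integrated weight, which fixes the overall (Markovian) dissipation rate, equals Γj/2 in all three cases; only the memory time 1/γj differs, and it is this difference that distinguishes the two baths in this work.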
The region of this unidirectional flow is marked by a green-dashed frame.

Figure 2: Population dynamics of the three states in the Λ system, as well as our measure for energy flow, with Γ1 = Γ2, γ1 = 0.2ω and γ2 = 10ω. The L → R notation indicates that the unidirectional flow proceeds from the left bath to the right one within the marked frames.

The dynamics can be made more involved if both reservoirs are highly non-Markovian. In Fig. 3 we use γ1 = 0.2ω and γ2 = 1.0ω, resulting in multiple alternating regions of bi-directional and unidirectional energy flow. In particular, we observe three intervals (distinguished by the green-dashed frames) of unidirectional flow of energy, of nearly the same duration yet shrinking amplitude, before full relaxation of the excited state is reached. As expected, the directional flow takes place from the reservoir with the longer memory time to the side with the shorter memory time, since at zero temperature a completely Markovian bath can only absorb energy. The comparison of Figs 2 and 3 also reveals that, as expected, in the latter case the total evolution time towards the stationary solution is longer than in the first case.

Figure 3

A finite difference between the memory parameters, γ1 ≠ γ2, is a necessary yet insufficient condition for the emergence of unidirectional flow of energy. This is shown in Fig. 4, where we display the time duration of unidirectional flow in the first interval. Note that energy is flowing in opposite directions (R → L or L → R) in the regions below and above the diagonal in Fig. 4. Recall that in Fig. 3 we show that there may be more than one occurrence of unidirectional transfer in the overall dynamics. We find that to observe the effect, it is necessary to employ a reservoir with a long memory time, for example, γ1/ω < 0.5, and a second reservoir with a markedly shorter memory time.
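The mechanism can be illustrated with a minimal toy model (our own sketch, not the paper's QSD implementation): for a single initial excitation decaying into two zero-temperature baths with exponential memory kernels αj(τ) = (Γjγj/2)exp(−γjτ), the excited-state amplitude c3 obeys a convolution equation which the auxiliary variables zj(t) = ∫ αj(t − s)c3(s)ds reduce to local ODEs; the instantaneous flow into bath j (in units of ω) is then Jj = 2Re[c3*zj], with Jj < 0 signalling back-flow:

```python
import numpy as np

def simulate(Gamma=(1.0, 1.0), gamma=(0.2, 10.0), dt=1e-3, T=12.0):
    """Toy single-excitation dynamics with two exponential memory kernels."""
    c3 = 1.0 + 0.0j                    # excited-state amplitude, rho33 = |c3|^2
    z = np.zeros(2, dtype=complex)     # z_j(t) = int_0^t alpha_j(t-s) c3(s) ds
    n = int(T / dt)
    ts = np.arange(n) * dt
    J = np.empty((n, 2))               # J_j = 2 Re[c3^* z_j], flow into bath j
    for i in range(n):
        J[i] = [2.0 * (np.conj(c3) * z[j]).real for j in (0, 1)]
        # exponential kernels make the convolution local:
        # dz_j/dt = (Gamma_j * gamma_j / 2) c3 - gamma_j z_j,  dc3/dt = -(z1 + z2)
        dz = [0.5 * Gamma[j] * gamma[j] * c3 - gamma[j] * z[j] for j in (0, 1)]
        c3 = c3 + dt * (-(z[0] + z[1]))   # simple Euler step
        z = z + dt * np.asarray(dz)
    return ts, J, abs(c3) ** 2

ts, J, rho33 = simulate()
# With gamma1 << gamma2 the flow J1 eventually turns negative: the system
# re-absorbs energy from the long-memory bath while still releasing to bath 2.
print("rho33(T) =", round(rho33, 4), "  min J1 =", round(J[:, 0].min(), 4))
```

With γ1 = 0.2ω and γ2 = 10ω this toy model reproduces the qualitative behavior described in the text: both flows start positive, and the flow into the long-memory bath later reverses sign.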
Figure 4: Duration time of unidirectional energy flow (in its first occurrence) as a function of the inverse memory times γ1 and γ2. The dark blue region corresponds to cases with vanishing unidirectional flow (zero duration).

Diode-like phenomenon

So far, we have demonstrated that within a certain time interval energy may flow in a unidirectional manner due only to differences in the memory capabilities of the two reservoirs. In Figs 2, 3 and 4, we used Γ1 = Γ2, and it is obvious that the direction and magnitude of the flow will be fully reversed upon the interchange of the quantum channels indicated by L1 and L2, which is equivalent to an interchange of the values γ1 and γ2. However, distinct memory times alone cannot induce a diode-like phenomenon, i.e., an asymmetry in the magnitude of the unidirectional energy flow under opposite “polarities”, as we explain next. We now show that the dynamics may be further controlled by including an asymmetry in the coupling strengths of the three-level system to the baths, Γ1 ≠ Γ2. The resulting behavior corresponds to the thermal diode effect, see Figs 5 and 6. We consider the following two setups: (i) a “forward” configuration with γ1 > γ2 and Γ1 < Γ2; (ii) a reversed geometry, in which we exchange the values of the memory times but keep the interaction energies as in (i), thus γ1 < γ2 with the Γj’s held fixed. We emphasize that the condition γ1 ≠ γ2 allows a unidirectional flow of energy, while the spatial asymmetry Γ1 ≠ Γ2 provides the transient diode effect, yielding different magnitudes for the energy flow in setups (i) and (ii).

Figure 5: Transient unidirectional energy flow and a diode-like effect in two configurations: (i) (blue lines) γ1 = 5ω > γ2 = 0.2ω and Γ1 = Γ2/2 = ω/2, and (ii) (red lines) γ1 = 0.2ω < γ2 = 5ω and Γ1 = Γ2/2 = ω/2.

Figure 6: Diagram of the diode-like time (the first period of unidirectional occurrence) in units of ωt in the parameter space of γ1 and γ2.

We first examine geometry (i) in Fig. 5.
We use γ1/ω = 5 > γ2/ω = 0.2 and Γ1 = Γ2/2. Energy flows unidirectionally towards the more Markovian bath-1, as we found before, during the so-called “diode-like behavior interval” 2.5 < ωt < 5. In geometry (ii) we employ γ2/ω = 5 > γ1/ω = 0.2 while keeping Γ1 = Γ2/2. Energy now flows (during almost the same interval) towards bath-2. In geometry (ii) the magnitude of the energy flow towards the more Markovian bath (γ = 5ω) is larger than in case (i), given the stronger coupling to this bath. It is interesting to note that when we modify Γ we largely affect the flow of energy into the more Markovian bath, as compared to changes in the flow to the highly non-Markovian bath. We can explain this phenomenon by noting that Γ, the coupling strength of the system to the reservoirs, is the only parameter which determines the rate of energy flow to a Markovian bath. In contrast, the effectiveness of flow to a non-Markovian reservoir is dominated by the memory time of the bath, characterizing how effective it is in dissipating excess energy. The duration of the diode-like behavior is plotted in Fig. 6 as a function of γ1 and γ1 − γ2. The calculation is performed on a configuration similar to geometry (i), with γ1 always taken larger than γ2, providing a unidirectional flow from bath-2 to bath-1. We find that the interval of the diode effect (considering the first interval) is highly sensitive to the memory time of the non-Markovian bath, while the difference γ1 − γ2 has a weaker overall effect. A long diode-effect time is attainable when the memory time of the non-Markovian bath is long; the overall duration of the diode effect is essentially fixed once this memory time is set.

We considered an exactly solvable model with two dissipation channels directed towards two reservoirs. We demonstrated that when adopting non-Markovian baths with different memory properties, the transient energy flow can become unidirectional, typically flowing from the highly non-Markovian to the more Markovian bath.
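A toy single-excitation sketch (our illustration, assuming exponential memory kernels αj(τ) = (Γjγj/2)exp(−γjτ); not the paper's calculation) can display the asymmetry between configurations (i) and (ii): swapping the memory parameters while keeping Γ1 = Γ2/2 changes the magnitude of the peak flow into the more Markovian bath.

```python
import numpy as np

def peak_flow_to_markovian(Gamma, gamma, dt=1e-3, T=12.0):
    # Toy single-excitation dynamics: c3 is the excited amplitude; z_j is the
    # convolution of the exponential kernel alpha_j with c3, so that
    # dz_j/dt = (Gamma_j*gamma_j/2) c3 - gamma_j z_j and dc3/dt = -(z1 + z2).
    # Returns the peak instantaneous flow 2 Re[c3^* z_j] (in units of omega)
    # into whichever bath has the shorter memory time.
    c3, z = 1.0 + 0.0j, np.zeros(2, dtype=complex)
    markov = int(gamma[1] > gamma[0])    # index of the more Markovian bath
    peak = 0.0
    for _ in range(int(T / dt)):
        peak = max(peak, 2.0 * (np.conj(c3) * z[markov]).real)
        dz = [0.5 * Gamma[j] * gamma[j] * c3 - gamma[j] * z[j] for j in (0, 1)]
        c3 = c3 + dt * (-(z[0] + z[1]))  # simple Euler step
        z = z + dt * np.asarray(dz)
    return peak

# (i) forward:  bath-1 Markovian (gamma = 5), weaker coupling Gamma1 = Gamma2/2;
# (ii) reversed: memory parameters swapped, couplings held fixed.
fwd = peak_flow_to_markovian(Gamma=(0.5, 1.0), gamma=(5.0, 0.2))
rev = peak_flow_to_markovian(Gamma=(0.5, 1.0), gamma=(0.2, 5.0))
print("peak flow to Markovian bath: forward", round(fwd, 3), " reversed", round(rev, 3))
```

In this sketch the reversed geometry, in which the strongly coupled bath is the Markovian one, carries the larger flow, in line with the trend stated above.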
This is the case as long as the difference between the memory parameters γ1 and γ2 is larger than a certain threshold. Our analysis departs from the regular thermodynamic setup, in which energy flow is driven by a temperature gradient across the system, to consider zero-temperature situations with non-Markovian baths. Moreover, we showed that the magnitude of the energy flow can be controlled, to achieve an effect reminiscent of the diode phenomenon, by coupling the system to the two contacts with different strengths. In conclusion, a sufficiently large difference in the memory parameters yields the effect of transient unidirectional energy flow; the same difference, with the additional condition of asymmetric coupling strengths between system and baths, results in a diode-like phenomenon. Our three-level system could be realized in the triplet ground electronic state of a nitrogen-vacancy (NV) center48,49,50, with ω regarded as the zero-field splitting. Transitions from the lower degenerate levels to the upper state can be selectively addressed via optical fields51. Besides optical relaxation, one of the two channels could be realized through the vibrations of the diamond lattice and the atoms comprising the NV-center point defect. In this case, the photonic field would serve as the strongly non-Markovian bath while the phononic environment would act as the more Markovian one. Transient unidirectional flow of energy can be achieved in open quantum systems prepared in a non-stationary state, when coupled to two different structured environments. This behavior, obtained in our work from a microscopic quantum model without further assumptions and approximations, can be exploited for estimating the relative memory capabilities and non-Markovianity of competing baths, for constructing nonlinear quantum devices for the transport of energy, and for controlling unidirectional energy transfer, and potentially reactivity, in molecules52.
Finally, the principles governing the dynamics of the present system at zero temperature could be employed for exploring the dynamics of a finite-temperature, driven three-level system, to study the combined role of anharmonicity, non-Markovianity, driving, and asymmetry in energy transport phenomena53,54.

Method

The system takes a Λ-type configuration, with one excited state, |3⟩, and two degenerate lower levels, |1⟩ and |2⟩. The excited state may decay to either of these lower levels, and these dissipation processes (referred to as “channels”) are directed by two different baths55. The baths are set at zero temperature. The total Hamiltonian is given by Eq. (1). Here, ω is the energy splitting between the upper level and the lower two states. The system-environment coupling operators are represented by Lj, where j = 1, 2. They open up energy transfer channels from the system into the jth bath. a†jk (ajk) is the creation (annihilation) operator for the independent mode k in the jth bath, and the corresponding coupling constant quantifies the interaction between the system operator Lj and the kth mode in the reservoirs. In our design, transient unidirectional flow is achieved by utilizing reservoirs with different two-time correlation functions αj(t, s), defined below Eq. (2). Furthermore, when Γ1 ≠ Γ2, a process analogous to thermal rectification can be realized. The total wavefunction denotes the solution of the Schrödinger equation with the total Hamiltonian (1), in the interaction picture. We define the stochastic wavefunction by projecting the total state onto the tensor product of Bargmann coherent states for the environment modes. The exact QSD equation41,42 for this stochastic wave-function is Eq. (2). Here, correlated Gaussian processes describe the stochastic influence of the jth bath. They are built from individual Gaussian-distributed complex random variables, whose ensemble averages reproduce the bath correlation functions αj(t, s). (In our model, the two dissipation channels correspond to different bath operators; thus, cross-correlations between the two noise processes are absent.
The case in which all dissipation channels couple to the same bath is considered in ref. 55.) An operator Oj (the O-operator of the QSD approach) encodes the effect of the jth bath on the system dynamics. The associated function fj(t) satisfies Eq. (3), with fj = 1 as its boundary condition, for j = 1, 2. The corresponding exact master equation for the reduced density matrix can be constructed via the Novikov theorem56; this is Eq. (4). Equation (4) immediately yields the time-dependent populations of the three levels. Since the system is prepared in its excited state, ρ33(0) = 1 and ρ11(0) = ρ22(0) = 0, the time-dependent energy current, defined positive when flowing from the system towards bath-1 and bath-2, is governed by the real parts Re[F1(t)] and Re[F2(t)] of the dimensionless coefficients F1(t) and F2(t), respectively. These expressions identify the current as the population relaxation rate times the energy difference for the transition. For example, state-3 decays to state-1 by giving up energy through channel-1 to bath-1. Population decay from level-3 to level-1 thus directly relates to the amount of energy flowing from the system to the attached bath. The transitory energy transfer can therefore be measured, without ambiguity, by the real parts of the dimensionless coefficients Fj(t), j = 1, 2, before ρ33 vanishes. Placing bath-1 (2) at the left (right) side of the system, we now identify different transport situations: (A) when Re[F1(t)] > 0 and Re[F2(t)] > 0, the three-level system is releasing energy to both sides; (B) the system is releasing energy to the left while absorbing energy from the right side when Re[F1(t)] > 0 and Re[F2(t)] < 0; (C) energy flows towards the right bath in the opposite scenario, Re[F1(t)] < 0 and Re[F2(t)] > 0; (D) the system is absorbing energy from both reservoirs at the same time if Re[F1(t)] < 0 and Re[F2(t)] < 0. Demonstrating the development of scenarios (B) and (C), i.e., a (transient) unidirectional energy flow with Re[F1(t)] and Re[F2(t)] of opposite signs, is the objective of our work. If the memory functions have the same spectral form, α1(t, s) ∝ α2(t, s), then, by Eq. (3), f1(t) and f2(t) behave identically in time. As a result, Re[F1(t)] and Re[F2(t)] acquire the same sign at all times, and thus a unidirectional energy flow across the system cannot be realized, even when Γ1 ≠ Γ2.
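The four sign scenarios enumerated above can be collected into a small helper (a trivial sketch; the function name and its inputs, the real parts of F1 and F2, are our own labels):

```python
def transport_scenario(re_F1, re_F2):
    # Classify the instantaneous transport situation from the signs of the
    # real parts of the dimensionless flow coefficients F1(t), F2(t).
    # Sign convention from the text: F_j > 0 means the system releases
    # energy into bath j; F_j < 0 means it absorbs energy from bath j.
    if re_F1 > 0 and re_F2 > 0:
        return "A: releasing energy to both baths"
    if re_F1 > 0 and re_F2 < 0:
        return "B: unidirectional flow, right to left (R -> L)"
    if re_F1 < 0 and re_F2 > 0:
        return "C: unidirectional flow, left to right (L -> R)"
    if re_F1 < 0 and re_F2 < 0:
        return "D: absorbing energy from both baths"
    return "boundary case (a coefficient vanishes)"
```

Scenarios B and C, in which the two coefficients carry opposite signs, are the unidirectional windows marked by the green-dashed frames in Figs 2 and 3.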
Evidently, to achieve our goal we should employ reservoirs with distinct memory properties.

Additional Information

How to cite this article: Jing, J. et al. Transient unidirectional energy flow and diode-like phenomenon induced by non-Markovian environments. Sci. Rep. 5, 15332; doi: 10.1038/srep15332 (2015).

References

1. Quantum Thermodynamics, Springer: Berlin Heidelberg (2009).
2. Heat rectification in molecular junctions, J. Chem. Phys. 122, 194704 (2005).
3. Sufficient conditions for thermal rectification in hybrid quantum structures, Phys. Rev. Lett. 102, 095503 (2009).
4. Quantum electromechanical systems, Phys. Rep. 395, 159 (2004).
5. Quantized thermal conductance of dielectric quantum wires, Phys. Rev. Lett. 81, 232 (1998).
6. Measurement of the quantum of thermal conductance, Nature 404, 974 (2000).
7. Dissipationless directed transport in rocked single-band quantum dynamics, Phys. Rev. A 75, 033602 (2007).
8. Decoherence and the transition from quantum to classical, Phys. Today 44(10), 36 (1991).
9. Equivalence of quantum and classical coherence in electronic energy transfer, Phys. Rev. E 83, 051911 (2011).
10. Fourier’s law from Schrödinger dynamics, Phys. Rev. Lett. 95, 180602 (2005).
11. Transition from diffusive to ballistic dynamics for a class of finite quantum models, Phys. Rev. Lett. 99, 150601 (2007).
12. Fourier’s law of heat conduction: Quantum mechanical master equation analysis, Phys. Rev. E 77, 060101(R) (2008).
13. Heat flux operator, current conservation and the formal Fourier’s law, J. Phys. A 42, 025302 (2009).
14. Dynamics and thermodynamics of linear quantum open systems, Phys. Rev. Lett. 110, 130406 (2013).
15. Molecular heat pump, Phys. Rev. E 73, 026109 (2006).
16. Heat flow in nonlinear molecular junctions: Master equation analysis, Phys. Rev. B 73, 205415 (2006).
17. Dynamic control of quantum geometric heat flux in a nonequilibrium spin-boson model, Phys. Rev. B 87, 144303 (2013).
18. et al. Nanoscale thermal transport II: 2003-2012, Appl. Phys. Rev. 1, 011305 (2014).
19. Thermal logic gates: Computation with phonons, Phys. Rev. Lett. 99, 177208 (2007).
20. Heat conduction in simple networks: The effect of interchain coupling, Phys. Rev. E 76, 051118 (2007).
21. Theories of intramolecular vibrational energy transfer, Phys. Rep. 199, 73 (1991).
22. Thermodynamics of bipartite systems: Application to light-matter interactions, Phys. Rev. A 74, 063823 (2006).
23. Controlling the energy flow in nonlinear lattices: A model for a thermal rectifier, Phys. Rev. Lett. 88, 094302 (2002).
24. Thermal diode: Rectification of heat flux, Phys. Rev. Lett. 93, 184301 (2004).
25. Solid-state thermal rectifier, Science 314, 1121 (2006).
26. et al. Quantum dot as thermal rectifier, New J. Phys. 10, 083016 (2008).
27. An oxide thermal rectifier, Appl. Phys. Lett. 95, 171905 (2009).
28. Thermal rectification of electrons in hybrid normal metal-superconductor nanojunctions, Appl. Phys. Lett. 103, 242602 (2013).
29. Rectification of electronic heat current by a hybrid thermal diode, Nat. Nanotechnol. 10, 303 (2015).
30. Near-field thermal transistor, Phys. Rev. Lett. 112, 044301 (2014).
31. Thermal rectification at silicon-amorphous polyethylene interface, Appl. Phys. Lett. 92, 211908 (2008).
32. Thermal rectification in asymmetric graphene ribbons, Appl. Phys. Lett. 95, 033107 (2009).
33. Thermal conductivity and thermal rectification in graphene nanoribbons: A molecular dynamics study, Nano Lett. 9, 2730 (2009).
34. et al. Phonon lateral confinement enables thermal rectification in asymmetric single-material nanostructures, Nano Lett. 14, 592 (2014).
35. A review of thermal rectification observations and models in solid materials, Int. J. Therm. Sci. 50, 648 (2011).
36. A model of heat conduction, J. Stat. Phys. 18, 161 (1978).
37. Measures of non-Markovianity: Divisibility versus backflow of information, Phys. Rev. A 83, 052128 (2011).
38. Quantification of memory effects in the spin-boson model, Phys. Rev. A 86, 012115 (2012).
39. Nonperturbative dynamical decoupling with random control, Sci. Rep. 4, 6229 (2014).
40. Overview of quantum memory protection and adiabaticity induction by fast signal control, Sci. Bull. 60, 328 (2015).
41. The non-Markovian stochastic Schrödinger equation for open systems, Phys. Lett. A 235, 569 (1997).
42. Non-Markovian quantum state diffusion, Phys. Rev. A 58, 1699 (1998).
43. Open system dynamics with non-Markovian quantum trajectories, Phys. Rev. Lett. 82, 1801 (1999).
44. Non-Markovian relaxation of a three-level system: Quantum trajectory approach, Phys. Rev. Lett. 105, 240403 (2010).
45. Time-dependent treatment of a general three-level system, Phys. Rev. A 71, 063822 (2005).
46. Geometric phases and Bloch-sphere constructions for SU(N) groups with a complete description of the SU(4) group, Phys. Rev. A 78, 022331 (2008).
47. Energy transfer using unitary transformations, Entropy 15, 5121 (2013).
48. Coherent dynamics of a single spin interacting with an adjustable spin bath, Science 320, 352 (2008).
49. Gigahertz dynamics of a strongly driven single quantum spin, Science 326, 1520 (2009).
50. et al. Strong magnetic coupling between an electronic spin qubit and a mechanical resonator, Phys. Rev. B 79, 041302(R) (2009).
51. et al. Single spin states in a defect center resolved by optical spectroscopy, Appl. Phys. Lett. 81, 2160 (2002).
52. Unidirectional vibrational energy flow in nitrobenzene, J. Phys. Chem. A 117, 6066 (2013).
53. Non-Markovian dynamics of a nanomechanical resonator measured by a quantum point contact, Phys. Rev. B 83, 115439 (2011).
54. Broken symmetries, zero-energy modes, and quantum transport in disordered graphene: From supermetallic to insulating regimes, Phys. Rev. Lett. 110, 196601 (2013).
55. Solving non-Markovian open quantum systems with multi-channel reservoir coupling, Ann. Phys. 327, 1962 (2012).
56. Non-Markovian quantum-state diffusion: Perturbation approach, Phys. Rev. A 60, 91 (1999).

Acknowledgements

We acknowledge grant support from the Basque Country University UFI (Project No. 11/55-01-2013), the Basque Government (grant IT472-10), the Spanish MICINN (No. FIS2012-36673-C03-03), the NSFC No. 11175110, and the Science and Technology Development Program of Jilin Province of China (20150519021JH). DS acknowledges support from an NSERC discovery grant and the Canada Research Chair Program. BWL acknowledges support from the Ministry of Education, Singapore, by Grant No. MOE2012-T2-1-114.

Author information

1. Institute of Atomic and Molecular Physics and Jilin Provincial Key Laboratory of Applied Atomic and Molecular Spectroscopy, Jilin University, Changchun 130012, Jilin, China • Jun Jing
2. Department of Theoretical Physics and History of Science, The Basque Country University (EHU/UPV), PO Box 644, and Ikerbasque, Basque Foundation for Science, 48011 Bilbao, Spain • Jun Jing & Lian-Ao Wu
3. Chemical Physics Theory Group, Department of Chemistry, University of Toronto, 80 St. George St., Toronto, Ontario M5S 3H6, Canada • Dvira Segal
4. Department of Physics and Centre for Computational Science and Engineering, National University of Singapore, Singapore 117542, Republic of Singapore • Baowen Li

J.J. performed numerical simulations, analyzed results, and prepared figures. L.-A.W.
contributed to the conception and development of the research problem. All authors (J.J., D.S., B.L. and L.-A.W.) discussed the results and physical implications, and wrote the manuscript.

Competing interests

The authors declare no competing financial interests.

Corresponding author

Correspondence to Lian-Ao Wu.
Follow Slashdot blog updates by subscribing to our blog RSS feed Forgot your password? Pure Math, Pure Joy 315 e271828 writes "The New York Times is carrying a nice little piece entitled Pure Math, Pure Joy about the beauty and applicability of pure math as carried out at the Mathematical Sciences Research Institute. There is an accompanying slideshow of pictures of mathematicians in action; I particularly loved the picture titled Waging Mental Battle with a Proof." Pure Math, Pure Joy Comments Filter: • by wmspringer ( 569211 ) on Sunday June 29, 2003 @01:41PM (#6325892) Homepage Journal It doesn't actually have to be useful for anything now; in the academic setting you can research from obscure branch of mathematics just because you find it interesting. • by Manhigh ( 148034 ) on Sunday June 29, 2003 @01:45PM (#6325921) I think that Mathematicians largely arent the philanthropists that scientists are. However, seeing as how every science consists largely of mathematical models, the ends justify the means, so to speak. In other words, while a mathematician isnt looking for a way to make a longer lasting lightbulb, his or her ideas eventually work their way into science and engineering applications, even if it takes decades to happen. • by Jaalin ( 562843 ) on Sunday June 29, 2003 @01:46PM (#6325929) Homepage Mathematicians do it for the beauty. Society funds them because what is beautiful to a mathematician often turns out to be useful in many other ways. The NSF is paying me to do math research this summer, and honestly I don't care if what I'm doing has any relevance to anything -- I'm just doing it because what I'm studying is really cool and beautiful. But it may turn out that something I find is useful for something else that I never even thought of. This is what happened in large part with number theory -- many of the underlying results were discovered i nthe 1800's and early 1900's, and only later turned out to be useful in cryptography. 
You can't predict what will be useful and what won't. • by andy666 ( 666062 ) on Sunday June 29, 2003 @01:46PM (#6325931) could someone please explain the point of this article ? like most nytimes science article it seems to have zero content. it would be nice if for a change they explained something about mathematics • by Ella the Cat ( 133841 ) on Sunday June 29, 2003 @01:50PM (#6325940) Homepage Journal If mathematicans aren't really interested in helping understand the world, why should society fund them? Because they're able to create beauty, like artists and writers and musicians do. Not all human activity should be measured with money, even if money is needed to make it happen • by foonf ( 447461 ) on Sunday June 29, 2003 @01:52PM (#6325953) Homepage These are two separate things. Many people are attracted to the natural sciences, and even engineering disciplines, not because of a desire to improve the world, but because they find pleasure and abstract beauty in those fields. Yet undeniably work in those areas can lead to benefits for "society", and therefore people doing research in those areas are funded, even if their personal reasons for doing the work have nothing to do with those benefits. Likewise with mathematics, many ideas thought of as purely abstract and disconnected from practical application have turned out, later on, to be useful tools in understanding various real-world phenomena. It is totally unscientific and ultimately counter-productive to close off areas of inquiry because at the time they are undertaken no one can know exactly what the consequences will be. And ultimately the motivations of the people involved are irrelevant; we know based on history that there could turn out to be uses for it in the future, even if neither "we" (the society making the decision to support the research), nor those doing the research, can see any at this time, and this potentiality alone should justify providing support. 
• by k98sven ( 324383 ) on Sunday June 29, 2003 @01:53PM (#6325957) Journal I sure hope this isn't really true. If mathematicans aren't really interested in helping understand the world, why should society fund them? I certainly know that a major motivation for my career in science is that understanding the world through science will help people, cure diseases, etc. Guess what? It gets worse.. it's not only the mathematicians, but just about anyone and everyone involved in fundamental research. I know I am.. I do theoretical chemistry.. and although I'd love to see something useful come out of what I do, I cannot see any immediate uses for my work. The point is: It's the foundation research, the fundamentals, that lead to the big, *big* innovations. Although it might not seem useful at the time, it may (or may not) turn out to be very very important in the future. However, by it's nature, we can't know which research is going to pay off in practical terms. Einsteins work on stimulated emission probably didn't look very useful back in 1910 either, but it lead to the devlopment of the laser, which noone could've predicted at that time. That's why we need to fund this stuff. • by Sprunkys ( 237361 ) on Sunday June 29, 2003 @01:54PM (#6325960) For the sheer beauty of it. Asking why you should fund mathematics is asking why you should fund art. Who ever got cured by art? I certainly know that a major motivation for my career in science is the beauty of it. It's like the sunset outside my window, it's like Dido's new single emerging from my speakers. Today I spent studying for my thermodynamics exam and even the simple mathematics used therein is beautiful. Wednesday is my Quantum Mechanics exam and if it weren't for the beauty of the mathematics of the Schrödinger equation it would be a whole lot less intruiging. I make that exam for the joy and beauty I find in the mathematics and physics, not because it makes your cd player work. Beauty. 
That is why you should fund mathematics. The fact that it helps society is a secondary concern. But hey, that's just my opinion. And that of the Pythagoreans, to name a few. Beauty can be found in more things than a painting or Natalie Portman. It's in logic, in mathematics, hell, it's even in code. It's in patterns, it's in reason, it's in deduction as much as it's in nature, an individual or a thought. • OK, not in it's entirety, and not it is a serious problem, but it would be nice if the editors could make sure that each Sunday, we don't see so many postings from a single news source. Maybe some sort of summary each Sunday on interesting stories in the NYT Sunday Edition. Pure Math, Pure Joy [] Does Google = God? [] Harry Potter and the Entertainment Industry [] • by somethinsfishy ( 225774 ) on Sunday June 29, 2003 @02:02PM (#6325993) I'd never studied linear algebra until recently when I had to learn just enough to work through the inverse kinematics of a robot arm. Actually, I never really got along with Mathematics very well anyway. But looking at how matrices can solve all kinds of problems just by drawing zig-zags through rows and columns of numbers made me wonder whether the problems they model or the problems themselves came first. As I was learning the little bit of this math that I did, it started to seem to me that the Math has an independent existence, and a somewhat mysterious set of relationships of correlations and causalities connected to but not dependant on physical nature. • by Anonymous Coward on Sunday June 29, 2003 @02:03PM (#6325997) How do we know that this "math" thing they write about even exists? • by Zork the Almighty ( 599344 ) on Sunday June 29, 2003 @02:08PM (#6326019) Journal For the most part, we're in it because we want to know. Maybe you think that's a selfish reason, and maybe it is, but when we discover something we immediately share it with the world. 
The enduring gifts of mathematics are that it extends the boundaries of what is possible with current technology, while presenting us with direction for the future. • by Roelof ( 5340 ) on Sunday June 29, 2003 @02:09PM (#6326025) Homepage I think that Mathematicians largely arent the philanthropists that scientists are. Thus mathematicians aren't scientists. • by xant ( 99438 ) on Sunday June 29, 2003 @02:10PM (#6326031) Homepage "Being interested in helping the world" is not the same thing as "helping the world". An ox is not interested in helping plow the farmer's field, but the farmer still feeds it. • by KDan ( 90353 ) on Sunday June 29, 2003 @02:19PM (#6326081) Homepage Very large prime numbers are the basis of the RSA asymmetric encryption algorithms which you trust your credit card numbers and other private information to. Anyway, I'm almost thinking you're trolling because the rest of your post demonstrates some sort of keen-ness for over-simplification. Maybe you're just not out of secondary school yet, but for your information, trig, calculus and the rest are useful for a lot more stuff than what you mention. All the different areas of maths often intermingle in any physical subject. For the interesting tidbit of information, there has yet to be a mathematical discovery which has not found practical applications. Even group theory, which at first was thought to have nothing to do with physics or any engineering sciences, was found to be very applicable to some extremely interesting problems of fundamental physics (describing the symmetries of fundamental particles). • by GoofyBoy ( 44399 ) on Sunday June 29, 2003 @02:42PM (#6326193) Journal How arbitrary is that? How is e) (prime) less valid than the solution? How about g) (The only number greater than 29)? How about a) because its the "bad luck" number in Chinese culture (Too bad you missed out on that one, "white devil")? How about j) (Because today is Sunday and I feel like its the correct answer)? 
• by TheRaven64 ( 641858 ) on Sunday June 29, 2003 @02:44PM (#6326202) Journal How about this one: What is the next in the sequence of: My answer was . The sequence is the largest number of separate enclosed areas it is possible to make by adding a single straight line to a circle. (i.e. 1 for no lines, 2 for one line, 4 for two lines) I hate this kind of question, because it is possible to design a sequence such that any number comes next, so any test which includes the possibility of incorrect answers is just plain wrong. Of course you should have to justify your answer, but since the IQ tests are multiple choice... • by f97tosc ( 578893 ) on Sunday June 29, 2003 @02:46PM (#6326206) Which is the odd one out: (a) 4 (b) 15 (c) 9 (d) 12 (e) 5 (f) 8 (g) 30 (h) 18 (i) 24 (j) 10 Well, anyone who knows a prime from a hole in the ground would choose (e), but the correct answer was (f), 8. And why? Because it is the only "symmetrical" number, as printed on the page! Well, according to Ockhams razor I would argue that Mensa is right. The concept of symmetry is much simpler than the concept of prime numbers. • by BWJones ( 18351 ) on Sunday June 29, 2003 @02:46PM (#6326210) Homepage Journal So, this is the deal with science and making it attractive to folks, so they see the importance of it. How do you impart the feeling of accomplishment and how efforts of pure thought impact the world? I thought this photo essay did an admirable job of conveying what thinking for a living is like, yet how does one make this approachable to the general population? I had a conversation with a film director once sitting in an airport (forget his name), but he was asking me what it was like to be a scientist and how one would impart that feeling in film. 
I responded that he would probably do best by following a scientist for a couple of weeks and shooting lots of time with rather tired looking individuals who have much passion for what they do but who spend lots of time thinking, applying for grants, staring through microscopes, writing code, writing papers, giving talks and talking with colleagues, and above all, no matter what they are doing (eating, running, showering etc...), they are thinking. How do you impart that on film? I had some ideas, but he was probably thinking of an action movie. All told however, this article with the accompanying photo essay was well worth the time spent; it would have been nicer to have a more in-depth article, however.

• by samhalliday ( 653858 ) on Sunday June 29, 2003 @02:57PM (#6326266) Homepage Journal
i am a PhD student in maths... and obviously i will disagree with you. but i have a reason... we may not WANT to change/understand the world; but it happens!!! surprise surprise, but the maths we create is used by physicists (about a 50->100 year time lag), which in turn is applied and picked up by engineers/chemists/biologists (another 10->50 year lag), which ends up being some new device or revolution for society to play with. you kill off maths, you kill off science as a whole. perfect examples involve ANY piece of electrical equipment, communications, medical care and transport. parent is a troll and is very VERY short sighted (see his home page ;-)).

• by f97tosc ( 578893 ) on Sunday June 29, 2003 @03:33PM (#6326428)
Can you point us to the authoritative "hierarchy of simplicity"? No. I think the best way is to imagine that you have to explain both alternatives to somebody who is completely clueless, and see which is quicker and easier to explain. Of course this method does not always work, but I think that in this case most would agree that the symmetry alternative is simpler. "See, if you turn the paper, the 8 still looks the same.
It is the same if you look at it from either direction. If you put a mirror in the middle it does not change. If you look at the other numbers, this does not happen; look!" "See, the 5 is a prime number. That means that it can only be divided evenly by itself, and one. Division means that... [lengthy explanation]. Even division means that [lengthier explanation]. The reason that one is not included in the definition is that [....]. Now we can look at all the other numbers in turn and see that they are not prime numbers [lengthy calculations, or even lengthier explanations on how they can be identified quickly]. Etc. Etc."

• by backdoorstudent ( 663553 ) on Sunday June 29, 2003 @03:41PM (#6326456)
It is correct that any number can come next in that sequence or any other. This is called the Matiyasevich-Robinson theorem.

• by drooling-dog ( 189103 ) on Sunday June 29, 2003 @03:45PM (#6326476)
Oh, I wouldn't argue that they were wrong; in fact I think that they set up the question this way deliberately to smack mathematically literate people who see numbers and assume that it's about number theory. They're measuring some function of intelligence minus education.

• Pure Math (Score:3, Insightful) by MimsyBoro ( 613203 ) on Sunday June 29, 2003 @03:54PM (#6326511) Journal
I'm a second year college student of pure math. I just wanted to tell all you non-believers that it's true. There is something amazingly beautiful in pure math. And in the way it is almost "above" reality. Math is applied philosophy. And if you've ever tried tackling a hard philosophical problem you know what it's like trying to understand a principle in math...

• by Wavicle ( 181176 ) on Sunday June 29, 2003 @04:28PM (#6326671)
If they are deliberately creating questions that have a "correct but not the answer we were looking for" solution, then they are knowingly creating poor tests of intelligence. What they are really looking for then is "people who think like we do", not "very intelligent people".
It's sort of like the old biased college aptitude tests and the cup/saucer question, where kids from well off white families would know that cup and saucer go together, but poor minority kids had probably never encountered a saucer in their life.

• by Anonymous Coward on Sunday June 29, 2003 @04:48PM (#6326763)
"That's why we need to fund this stuff." It's a good point; even if you believe that mathematics needs to yield real world applications in order to be justified, it would be short-sighted to restrict research to topics with anticipated applications. However, I think research in mathematics should be encouraged for more ideological reasons. We enrich our culture whenever we add to our knowledge of anything. This is why we support the study of fine arts, literature, history, anthropology etc. We do not demand applications from these subjects; the payback is less tangible than that. Pure mathematics gives us beautiful truths that are valuable in themselves even if they don't penetrate into the popular culture. The fact that pure mathematics provides a rich reservoir of knowledge that is heavily exploited by all fields of science and engineering should not be construed as its sole justification. Anyway, when it comes to funding, you'll find it much easier to get support for research under the banner of applied mathematics or engineering than for research in pure math. The money available for the latter is probably more akin to that of the humanities than it is to that of the applied sciences. And that is fine, but there is no cause to whine about money being wasted on research in pure mathematics.

• by dpbsmith ( 263124 ) on Sunday June 29, 2003 @05:42PM (#6327050) Homepage
Euclid alone has looked on Beauty bare.
Let all who prate of Beauty hold their peace,
And lay them prone upon the earth and cease
To ponder on themselves, the while they stare
At nothing, intricately drawn nowhere
In shapes of shifting lineage; let geese
Gabble and hiss, but heroes seek release
From dusty bondage into luminous air.
O blinding hour, O holy, terrible day,
When first the shaft into his vision shone
Of light anatomized! Euclid alone
Has looked on Beauty bare. Fortunate they
Who, though once only and then but far away,
Have heard her massive sandal set on stone.
--Edna St. Vincent Millay

• by f97tosc ( 578893 ) on Sunday June 29, 2003 @06:00PM (#6327137)
If you a) write the number in binary it is not symmetric. Mind you, it is :) OK. Scratch that. b) If you use an OCR font it is not (the top part of the glyph is skew and smaller). c) If you do not write down the number but represent it in, for instance, a binary set of charges in capacitors in a dynamic RAM device, I am not sure that the concept of symmetry applies at all. d) If you write it as a Maya numeral (which would be one line and 3 dots on top of it) it would only be symmetrical in one axis, but so would some of the other numbers. e) Put your computer in a font which displays numbers with different glyphs and wham, no more symmetry. Try Adobe WoobBlock or something weird. So symmetry is NOT a property of the number itself. Primeness is, though. Yes, but the whole issue here was whether the symbol should be just a character or treated as an abstraction for a numerical quantity. All these points assume that we have decided that it is an abstraction for a numerical quantity (and that the symmetric property should hold for other ways of writing the same numerical quantity). If the figure 8 is just a meaningless character, then you write it as 8, with the same font, in Maya as well. You cannot assume the mathematical-abstraction interpretation to prove itself.
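The representation-dependence argued over above is easy to check mechanically. A small illustrative sketch (not from the thread) testing, for each option in the Mensa question quoted earlier, whether it is prime and whether its binary spelling is palindromic:

```python
def is_prime(n):
    """Trial-division primality test; fine for numbers this small."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def binary_palindrome(n):
    """Is the binary representation of n the same read backwards?"""
    b = format(n, "b")
    return b == b[::-1]

options = [4, 15, 9, 12, 5, 8, 30, 18, 24, 10]
print([n for n in options if is_prime(n)])           # → [5]
print([n for n in options if binary_palindrome(n)])  # → [15, 9, 5]
```

Note that 5 is the only prime, while 8 = 0b1000 is not a binary palindrome but 5, 9 and 15 are — which is exactly the point about "symmetry" depending on the representation.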
• Re:Pure Math (Score:1, Insightful) by Anonymous Coward on Sunday June 29, 2003 @06:12PM (#6327183)
Ah, yes, but /everything/ but math is applied math. I'm a physicist; I'm only a notch lower than the mathematicians on the totem pole. Everything but math and physics is applied physics. :)

• by hobit ( 253905 ) on Sunday June 29, 2003 @07:39PM (#6327596)
I work in the maths department of a university, and yes... it's very much like this. We sit around all day in small groups, staring at blackboards, "battling with proofs". Just like in that wonderful movie with the violent Australian, "A Beautiful Mind". I'm a computer scientist who does a bit of theory. By far the very best, most enjoyable and most rewarding thing I've done as a graduate student is work on proofs. Usually in small groups, often on a blackboard (although I prefer having colors, so a whiteboard is much preferred). There is a fair amount of reading involved but it can be fun... Nowadays I teach, which I enjoy, but occasionally do some math where all I do is sit around and think. Now if I could just find someone to do the write-ups (which I hate). I don't do anything horribly insightful (although some of it has been published) but it is fun!

• by Anonymous Coward on Sunday June 29, 2003 @08:36PM (#6327854)
Generally, most classical mathematics was inspired by real-world problems. Geometry, for instance (literally "earth measure"), came about as a way to mark off crop boundaries that got washed away after the river periodically flooded. But I'd say that since the golden age of mathematics (about the 18th century), new mathematics has been created primarily for its own sake. Often the only "applications" are in proving theorems in other areas of mathematics, as opposed to real-world problems.

• by Wavicle ( 181176 ) on Sunday June 29, 2003 @11:59PM (#6328602)
Well, I am probably extrapolating it beyond what he would ever have done; but I am not the first to realize its applicability to this type of problem.
So you are saying because numerical symbols are simpler to explain as shapes than as a field of philosophy, that any problem involving numbers should first consider their shape, since any solution involving that would be simpler to explain? No, you haven't realized a valid use of Ockham's razor. You are simply using the validity given to it, and twisting its meaning to make your argument seem more valid. Ockham's razor, as it applies to philosophy, eliminates one of two theories trying to explain the same thing. For example, why do planets in the sky move in such a peculiar way? One theory says "the sun is at the center and we and the other 8 are going around it"; the other theory spends a few pages of explanation about the earth being at the center and the planets going around it, and on another sub-orbit on their major orbit... all kinds of craziness. Clearly one requires fewer multiplications than the other. If you want to apply Ockham's razor here, you must have two theories explaining the same thing. But they don't. One theory says "8", the other says "5". By your logic, 1 + 1 = X, because you can make an "X" by crossing the two shapes, and it is much easier to explain two shapes overlapping than elementary arithmetic. Just because there is an easier explanation to get a different answer doesn't mean the easier explanation is right, or that Ockham's razor is in any way involved. This is a circular argument. The whole point with the other solution is that "8" can be analyzed by just the properties of the symbol itself, and not by the properties of the mathematical abstraction. You assume it is a mathematical abstraction, and then use that assumption to prove itself. Please quote me proving that it is a mathematical abstraction. I assume that they are numbers and not shapes, and then using that assumption evaluate that one and only one is prime. But that doesn't prove that they are abstractions, merely that there is a valid answer if they are.
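TheRaven64's and backdoorstudent's point earlier in the thread — that any continuation of a finite sequence can be justified — has a concrete construction behind it: Lagrange interpolation fits a polynomial through 1, 2, 4 and then any next value you like. A quick illustrative sketch (not from the thread):

```python
from fractions import Fraction

def lagrange(points):
    """Return a callable evaluating the unique polynomial through the (x, y) points."""
    def p(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            term = Fraction(yi)
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return p

# The sequence 1, 2, 4 can be continued by any value we like:
for nxt in (7, 8, 42):
    p = lagrange([(1, 1), (2, 2), (3, 4), (4, nxt)])
    print([int(p(x)) for x in (1, 2, 3, 4)])
# → [1, 2, 4, 7], then [1, 2, 4, 8], then [1, 2, 4, 42]
```

Exact rational arithmetic via `Fraction` avoids any floating-point doubt that the fitted polynomial really passes through every given point.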
• by njj ( 133128 ) on Monday June 30, 2003 @05:24AM (#6329379)
This is an important question, and in my opinion has two particularly valid answers. The first of these is the one that usually gets advanced - that (as with other pure scientific disciplines) we just don't know what 'useless' knowledge might turn out to be useful or vital in fifty years' time. This is all well and good, and a perfectly decent reason to study something. The other one, which I've come to believe more strongly over the past few years, is that which is often advanced in support of arts funding - that it benefits a society greatly (often in intangible and undefinable ways) to study and research things whether or not they have any practical use. This is a point which, in the UK at least, a succession of education ministers have either missed or fundamentally disagreed with over the past few decades. Last month, Charles Clarke, the current Secretary of State for Education, made some very disturbing comments about how he didn't see the point in spending taxpayers' money on maintaining a group of "mediaeval seekers after truth". He was initially misquoted as saying he didn't see the point in the study of mediaeval history, which rightly got a lot of historians angry, but a later statement clarified that he actually didn't see the point in studying any subject which didn't have a direct positive contribution to UK industrial or economic interests. Which I find even more disturbing - it's understandable (even ok) for the Chancellor of the Exchequer to have such a viewpoint, but I like to think that the Secretary for Education should at least see some worth in all of the education system.
A friend of mine (an eminent evolutionary and reproductive biologist who's also helped design aliens for people like Anne McCaffrey, Larry Niven and Jerry Pournelle, and co-written a couple of books with Terry Pratchett) once said "Most people think that the end-product of a PhD is a neatly-typeset hardback thesis. It's not - the end product of the PhD is the person who's done the PhD", which I rather agree with. Studying or researching any subject changes the way you look at the world - often for the better. It teaches you new or variant modes of thought which you can then apply (often unconsciously) to other areas of interest. For example: a former office-mate of mine now works for the NHS Breast Cancer Screening Service. The topic of her thesis (permutation group theory) is irrelevant to what she does now. But I find it tremendously reassuring to know that there are people that well-educated, and who have been trained to such a high level in thinking clearly and carefully, involved in something that important and worthwhile.

• by Pig Bodine ( 195211 ) on Monday June 30, 2003 @05:53AM (#6329441)
In most cases society doesn't fund them to do mathematical research. Research grants among pure mathematicians are not so prevalent. They earn their keep teaching math to (mostly) scientists and engineers and then prove theorems in whatever time that leaves open. Even aside from the argument that mathematics is intrinsically beautiful like music, art or literature, it doesn't make practical sense to expect everyone to have an eye on applications of their work. People have to specialize if they hope to learn enough to accomplish anything these days, and a mathematician who also becomes enough of an expert in curing diseases to let that guide new mathematical research probably won't have time to prove new theorems. Letting mathematicians do math so that everyone can pull out what theorems they might apply in their own field has been pretty effective historically.
<p>Emergent Locality in Quantum Systems with Long Range Interactions</p> Gauss Centre for Supercomputing e.V.
Principal Investigator: Fabien Alet (1) and David J. Luitz (2)
(1) Centre national de la recherche scientifique (CNRS), Toulouse University, France; (2) Max Planck Institute for the Physics of Complex Systems (MPIPKS), Dresden, Germany
HPC Platform used: Hazel Hen of HLRS

How fast can information travel in a quantum system? While special relativity yields the speed of light as a strict upper limit, many quantum systems at low energies are in fact described by nonrelativistic quantum theory, which does not contain any fundamental speed limit. Interestingly enough, there is an emergent speed limit in quantum systems with short-range interactions which is far slower than the speed of light. Fundamental interactions between particles are, however, often of long range, such as dipolar interactions or Coulomb interactions. A very large-scale computational study performed on Hazel Hen revealed that there is no instantaneous information propagation even in the presence of extremely long-range interactions, and that most signals are contained in a spatio-temporal light cone for dipolar interactions.

Full Report

The best quantum theory for high energy particles, such as those appearing in cosmic radiation or in particle accelerators like CERN's Large Hadron Collider, is based on Einstein's theory of special relativity and includes the speed of light as a strict speed limit for the transmission of information. However, most quantum systems of many particles which can be produced in laboratories are at much lower energies and therefore can exhibit different physics.
It turned out that there is an additional quantum speed limit for quantum systems with short-range interactions between many particles, for example the interaction between electrons in solids, which is screened by the presence of many other electrons. For such systems, Lieb and Robinson proved in a seminal work in 1972 that there is an emergent speed limit slower than the speed of light, which limits the maximal information transport in quantum many-body systems. This plays a crucial role for the buildup of correlations of particles, for how fast a quantum system can reach thermal equilibrium, as well as for practical implementations of quantum computers, as this bound limits the loss of quantum information. Today there is an increasing interest in quantum systems which exhibit long-range interactions between their constituents, since they can be manufactured in experiments with ultracold quantum gases of atoms with dipolar interactions. One recent example is given by experiments with exotic dysprosium atoms, which have a large magnetic moment and exhibit long-range dipolar interactions. For such systems, we currently have limited knowledge of how fast quantum information can travel. The current computational study addressed this issue by an exact numerical simulation of two generic models of strongly interacting quantum systems with long-range interactions. Models of quantum matter with many-body interactions represent a formidable challenge, since they are not analytically solvable and experiments are currently not precise enough to provide a universal answer. Therefore it is of crucial importance to solve these models numerically with state-of-the-art computational techniques. This is the aim of the STIDS project. The main numerical challenge is that the complexity of the calculation grows exponentially with the number of particles in the system: in a nutshell, the complexity is (at best) doubled when adding one more particle.
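This exponential growth can be made concrete with a counting sketch. Assuming a hard-core lattice model (the article does not specify the models; only the 15-particles-on-31-sites system size is taken from the reported runs), the number of basis states is the number of ways to place the particles on the sites, which roughly doubles with each added site at fixed filling:

```python
from math import comb

def sector_dim(sites, particles):
    """Basis states in the fixed-particle-number sector of a hard-core
    lattice model: one state per placement of the particles."""
    return comb(sites, particles)

# At half filling, each extra pair of sites roughly quadruples the basis:
for L in (24, 26, 28, 30):
    print(L, sector_dim(L, L // 2))

dim = sector_dim(31, 15)   # largest reported runs: 15 particles on 31 sites
print(dim)                 # → 300540195
print(dim * 16 / 1e9)      # ~4.8 GB just to store one state vector of complex doubles
```

The full memory footprint of a time-evolution run is much larger than a single state vector, which is consistent with the multi-terabyte requirements reported below.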
In the precise study of information propagation, the long-range nature of the interactions leads to very fast dynamics and requires simulating systems as large as possible. The work of the STIDS team has pushed calculations on HLRS's Hazel Hen supercomputer located in Stuttgart to the limit of what is currently feasible, reaching 15 quantum particles on 31 lattice sites. These simulations are converged in terms of system size, meaning that the results do not change if the system size is further increased. The complete description of the problem is encoded in the wave function of the quantum many-body system, whose time evolution is obtained through the solution of the Schrödinger equation. Storing and computing this wave function requires a massive amount of computer time and memory (RAM) for these large systems. The largest calculations for this project required more than 10 TB of memory and 100 nodes of the Hazel Hen supercomputer in parallel for a single calculation. These resources were crucial for reaching the largest system sizes to prove that the findings are converged with the number of particles. The main findings for the one-dimensional systems of this study are:

1. There is an emergent speed limit for systems with long-range interactions which decay with distance r faster than 1/r. This leads to a "light cone", a region of causality in space-time outside of which no quantum communication can occur.

2. For interactions which decay slower than 1/r with distance, there is a causal region with a power-law envelope, which excludes immediate quantum communication even for very long-range interactions.

3. All quantum speed limits known so far for long-range interactions are not tight, i.e., the actual limits are even slower than previous work suggested.

In conclusion, the present numerically exact study represents considerable progress on the question of how fast quantum information can travel in solids or ultracold atomic gases.
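The "light cone" of finding 1 is characterized by an emergent velocity: a correlation front reaches distance r after a time t ≈ r/v. A toy sketch of extracting such a velocity from arrival-time data — the numbers here are made up for illustration; the actual study measures arrival times from the exact time-evolved wave function:

```python
# Hypothetical arrival times t(r) of a correlation front; a linear
# cone t = r / v corresponds to an emergent velocity v.
distances = [1.0, 2.0, 3.0, 4.0, 5.0]
arrivals = [0.5, 1.0, 1.5, 2.0, 2.5]  # invented data, consistent with v = 2

# Least-squares slope of t versus r through the origin: s = sum(r*t) / sum(r^2)
s = sum(r * t for r, t in zip(distances, arrivals)) / sum(r * r for r in distances)
v = 1.0 / s
print(v)  # → 2.0
```

For interactions decaying slower than 1/r (finding 2), the front is not linear but follows a power law, so the same fit would be done in log-log coordinates instead.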
These results are of fundamental importance for a deeper understanding of thermalization and of the time scales for the generation of quantum entanglement, the resource of quantum computation. Reference for the present research: Emergent locality in systems with power-law interactions. David J. Luitz and Yevgeny Bar Lev. Phys. Rev. A 99, 010105(R) – Published 30 January 2019 -- DOI: https://doi.org/10.1103/PhysRevA.99.010105 Contact for the present research: David J. Luitz Max Planck Institute for the Physics of Complex Systems Nöthnitzer Straße 38, D-01187 Dresden (Germany) e-mail: dluitz [at] pks.mpg.de NOTE: This project was made possible by PRACE (Partnership for Advanced Computing in Europe) allocating a computing time grant on GCS HPC system Hazel Hen of the High Performance Computing Center Stuttgart (HLRS), Germany. HLRS project ID: PP16153659 February 2019 Tags: Universite de Toulouse HLRS Materials Science
Structure and zero-dimensional polariton spectrum of natural defects in GaAs/AlAs microcavities
Joanna M Zajac [email protected]    Wolfgang Langbein
School of Physics and Astronomy, Cardiff University, The Parade, Cardiff CF24 3AA, United Kingdom
September 30, 2020

We present a correlative study of structural and optical properties of natural defects in planar semiconductor microcavities grown by molecular beam epitaxy, which show a localized polariton spectrum as reported in Zajac et al., Phys. Rev. B 85, 165309 (2012). The three-dimensional spatial structure of the defects was studied using combined focussed ion beam (FIB) and scanning electron microscopy (SEM). We find that the defects originate from a local increase of the GaAs layer thickness. Modulation heights of up to 140 nm for oval defects and 90 nm for round defects are found, while the lateral extension is about 2 μm for oval and 4 μm for round defects. The GaAs thickness increase is attributed to Ga droplets deposited during growth due to Ga cell spitting. Following the droplet deposition, the thickness modulation expands laterally while reducing its height, yielding oval to round mounds of the interfaces and the surface. With increasing growth temperature, the ellipticity of the mounds decreases and their size increases. This suggests that the expansion is related to the surface mobility of Ga, which increases with temperature while its anisotropy between the and crystallographic directions decreases. Comprehensive data consisting of surface profiles of defects measured using differential interference contrast (DIC) microscopy, volume information obtained using FIB/SEM, and characterization of the resulting confined polariton spectrum are presented.

I Introduction

Fundamental physics was demonstrated in planar semiconductor microcavities over the last 10 years, including Bose-Einstein condensation of exciton-polaritons Kasprzak et al. (2006), formation of vortices Lagoudakis et al.
(2008) and superfluidity Amo et al. (2009). To understand these two-dimensional inhomogeneous non-equilibrium systems, it is important to understand and harness spatial disorder in these structures. A significant contribution to polariton disorder in molecular beam epitaxy (MBE) grown microcavities is photonic disorder, including a cross-hatched dislocation pattern Zajac et al. (2012a); Abbarchi et al. (2012) and point-like defects (PDs) Zajac et al. (2012b). Such disorder creates a potential landscape for the in-plane motion of polaritons, which results in inhomogeneous broadening, enhanced backscattering Langbein et al. (2002); Gurioli et al. (2005), and creation of localized polariton condensates and polariton vortices Lagoudakis et al. (2008); Krizhanovskii et al. (2009). In this work, we present a correlative study of the structural and optical properties of natural defects in planar semiconductor GaAs/AlAs microcavities grown by MBE. The paper is organized as follows: in Sec. II we review the literature on defects in MBE grown GaAs structures, in Sec. III we discuss the samples and experimental methods used, in Sec. IV we present the experimental results, followed by a discussion of the defect formation in Sec. V and conclusions in Sec. VI.

II Point-like defects in GaAs heterostructures

Point-like defects observed in MBE-grown GaAs based heterostructures are classified into several types, with the most common being oval defects. It was reported that oval defects originate from an excess of Ga, Ga droplets, or surface contamination Chand and Chu (1990); Fujiwara et al. (1987), or low growth temperatures Orme et al. (1994); Kreutzer et al. (1999). Oval defects were extensively studied in the 1990s as they were responsible for the failure of electronic devices such as field-effect transistors Chand and Chu (1990). The defects have typical sizes in the order of several μm and a roughly 3:1 aspect ratio along the : crystal directions Fujiwara et al.
(1987). Their height on the surface was found to be several tens of nanometers. Another type of defect observed in this work were round defects, having similar diameters and heights as oval defects. They were attributed Kawada et al. (1993) to Ga oxide or effusion cell spitting. We found that the defects investigated in our work originate from a GaAs thickness modulation, which we attribute to Ga droplets with sizes in the order of 100 nm emitted by the Ga source during growth. The droplet formation was previously ascribed Chand and Chu (1990) to an inhomogeneous temperature distribution in the Ga crucible of the effusion cell. Specifically, Ga cools near the orifice of the crucible and, since it does not wet the pyrolytic boron nitride (PBN) crucible surface, forms droplets which can fall back into the liquid Ga, causing a spatter of smaller Ga droplets. Recommended methods to reduce this mechanism include 1) using solid instead of liquid Ga; 2) modification of the orifice geometry of the Ga cell to inhibit condensed Ga droplets entering into the Ga source; 3) creating a positive axial temperature gradient toward the orifice to prevent condensation of Ga; 4) treating the crucible with Al, which forms an AlN layer which Ga wets, suppressing the formation of droplets. Another mechanism for the formation of oval defects is suggested in Ref. Brunemeier, 1991 and referred to as Ga source spitting. During heating of the Ga source up to C, within the range of Ga evaporation Herman and Sitter (1989), explosions in the Ga liquid were observed, which resulted in the deposition of gallium droplets on the MBE chamber walls. It was speculated that these explosions were due to GaO shells encapsulating Ga and creating an effusion barrier. Another possible mechanism Clarke is that particulates released from the walls of the MBE chamber enter the molten Ga in the crucible, causing a turbulent reaction.
Summarizing, a number of mechanisms for the Ga nano-droplet formation have been suggested, and the mechanism dominant in a given growth is not obvious.

III Samples and Experimental methods

In this work we investigated two microcavity samples, MC1 and MC2, grown in a VG Semicon V90 MBE machine with a hot-lip Veeco 'SUMO' cell as Ga source, with structures given in Table 1. Sample MC1 was studied in Ref. Zajac et al., 2012b.

Table 1: Parameters of samples MC1 and MC2.

  Sample                        MC1      MC2
  cavity length                 1        2
  DBR periods top(bottom)       24(27)   23(26)
  growth temperature (°C):
    DBR AlAs                    715      590
    DBR GaAs                    660      590
    cavity GaAs                 630      590

During the growth of MC1, the wafer temperature was ramped up to 715 °C for the AlAs Bragg layers and down to 660 °C for the GaAs Bragg layers, while the cavity layer was grown at 630 °C. During the growth of MC2 instead, the growth temperature was 590 °C for all layers. The two samples show a significantly different aspect ratio of defects on their surface. In MC1 they are essentially round (see Fig. 4), while in MC2 they have an aspect ratio between 3:1 and 2:1 along the : direction, as shown in Fig. 8. At a temperature of T=80 K, the cavity mode energy in the center of the wafer of MC1 (MC2) is at  eV, respectively, while the bulk GaAs exciton resonance of the cavity layer is at 1.508 eV.

Figure 1: Sketch of the optical imaging spectroscopy setup used to measure the localized polariton states. M1: Gimbal mounted mirror, L1-L5: Lenses, MC: Microcavity sample, LS1,LS2: movable lenses for imaging, dashed lines: removable mirrors, BS: Beam-splitter.

The low temperature optical measurements were performed using the optical setup sketched in Fig. 1.
The samples were mounted strain-free on a mechanical translation stage moving along the sample surface in a bath cryostat at  K in nitrogen gas at 100-300 mbar. Two aspheric lenses of 8 mm focal length and 0.5 numerical aperture (NA) were mounted at the opposing faces of the sample inside the cryostat to focus the excitation and collimate the emission, respectively, providing a diffraction limited resolution of m. The axial positions of both lenses were adjustable at low temperatures to control the focus of excitation and detection. The excitation was provided by a mode-locked Ti:Sapphire laser (Coherent Mira) delivering 100 fs pulses at 76 MHz repetition rate and a spectral width of approximately 20 meV. The transmission of the samples, excited from the substrate side and detected from the epi-side, was imaged onto the input slit of an imaging spectrometer with a spectral resolution of eV. Scanning of the sample image across the spectrometer input slit, while keeping the directional image on the spectrometer grating fixed, for two-dimensional hyperspectral imaging was achieved by moving the lenses LS1 and LS2 appropriately Langbein (2010). Spatial height profiles of defects on the sample surface were measured with differential interference contrast (DIC) microscopy using an Olympus BX-50 microscope with a 20x 0.5 NA objective. DIC images in green light (wavelength range 525-565 nm) were taken by a Canon EOS 500D camera with an array of 4752 x 3168 pixels of m size in the intermediate image plane. The resulting image resolution was  nm in the plane of the sample, and about  nm in the plane perpendicular to the sample surface using quantitative DIC. Details on the procedure used to extract height profiles of defects using DIC microscopy are given in the Appendix.
Figure 2: SEM images of oval defect PD3. a) prior to milling; green lines indicate the defect extension on the surface. b) after milling the well, exposing the epilayer cross-section at its side wall. On the lower part of the images, the edge of the alignment photomask is visible.

Figure 3: Hyperspectral imaging of polariton states bound to PD1 in MC1, measuring the spatially and spectrally resolved transmission intensity. First and fourth column: . The energy is shown relative to the polariton band edge at  eV. Intensity on a logarithmic color scale as indicated. Second and fifth column: Real space intensity maps of individual states. The state number and the relative energy are given, and the orange lines indicate the related peaks in . Third and sixth column: Real space distributions of corresponding Mathieu functions, labeled by their parity ('e': even, 'o': odd), radial () and angular () order, and parameter (see Eq.(1) and Eq.(2)). Color scale as for measured data indicated in the first column.

To investigate the sample structure below the surface, we used a Carl Zeiss XB1540 Cross-Beam focussed-ion-beam (FIB) microscope which combines an ion-beam milling/imaging column with a field-emission scanning electron microscope (FESEM) Giannuzzi and Stevie (2005), providing an imaging resolution of about  nm for the samples studied. The internal epi-layer structure was exposed by FIB milling. Smooth cross-sections were obtained using a two stage milling procedure. In the first step, a high beam current of nA was used, resulting in fast milling but leaving a rough and inhomogeneous interface due to sputtering and redeposition of material.
In the second step, the surface was polished with a lower beam current of pA, removing a layer of about nm per cut and resulting in a negligible surface roughness. In the next steps, layers with a thickness of m or less were removed using the low beam current. Different stages of this milling procedure are shown in Fig. 2. Before milling, the oval defect (PD3) is seen in SEM (Fig. 2a), with the vertical image scale corrected for the viewing angle of to the sample surface. A rectangular well of about m width and m depth is milled into the surface to one side of the defect (Fig. 2b), exposing a cross-section through the epi-layers to be measured at its side walls. After imaging the exposed cross-section with SEM, the subsequent cross-section at a controlled distance further into the structure is milled. Step sizes of about m were used at the outskirts of the defect, reducing down to nm at its center to resolve the defect source. The cross-sections were m deep to expose the complete epitaxial structure, and their width was adjusted to the lateral extension of the defect observed at the surface. In order to mark defects on the sample surface for the correlative studies, a gold alignment mask was fabricated on the surface by photolithography. The mask consisted of a grid of mm squares with column and row indexing. Considering the defect density in the order of /cm, this mask allows us to trace individual defects through the different measurement techniques used in the present investigation.

IV Results

IV.1 Sample MC1

The localized polariton states in the round defects of sample MC1 were examined previously in Ref. Zajac et al., 2012b. Here we report on the correlation between the states and the three-dimensional structure of these defects, using two defects referred to as PD1 and PD2 as examples. The localized polariton states of PD1 are observed in the hyperspectral transmission images shown in Fig. 3.
The defect shows 16 localized modes down to  meV below the polariton band edge. It can be noted that the lowest observed state is of -type symmetry. We expect 2 states with lower energies, another -state and an -state; these were not recorded.

Figure 4: Structural characterization of PD1. a) DIC image of the sample surface (linear grey scale) and resulting height profile across the defect center (green line). b) SEM images of cross-sections through the epitaxial structure, taken along the black lines indicated in a). The relative distances from the edge of the defect are indicated.

To qualitatively understand the states bound to this defect, we compare them with solutions of the two-dimensional time-dependent wave equation with the velocity for elliptical boundary conditions. Using elliptical coordinates , with the focus distance , the ansatz results in the ordinary and modified Mathieu equations for the angular part and radial part , respectively Abramowitz and Stegun (1965), where is the square of the normalized frequency. The solutions of and are angular and radial Mathieu functions. For a given , the periodic boundary condition of the angular part in Eq.(1) determines a series of with ascending number of nodes , each of which except is a doublet, having an odd (o) or even (e) symmetry for inversion of . The elliptical boundary is given by a unique and , at which the boundary condition, for example , holds. This condition and Eq.(2) using determine the mode frequencies corresponding to modes with ascending number of nodes in the radial direction. Analytic expressions for the solutions are given in Ref. McLachlan, 1947.
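The product ansatz above can be evaluated numerically: SciPy provides both the angular and the modified (radial) Mathieu functions. The following minimal sketch builds the intensity pattern of one even mode on an elliptical-coordinate grid; the order n, parameter q and boundary coordinate u0 are illustrative inputs, not values fitted to the measured states.

```python
import numpy as np
from scipy.special import mathieu_cem, mathieu_modcem1

def even_mathieu_mode(n, q, u0, nu=50, nv=181):
    """Even mode psi(u, v) = Mc1_n(u; q) * ce_n(v; q) of an elliptical well.

    u is the radial elliptical coordinate (0 <= u <= u0), v the angular one.
    Returns |psi|^2 on the (u, v) grid; n, q, u0 are illustrative inputs.
    """
    u = np.linspace(0.0, u0, nu)
    v = np.linspace(0.0, 360.0, nv)       # SciPy's angular Mathieu functions take degrees
    radial = mathieu_modcem1(n, q, u)[0]  # Mc1_n(u; q); index 0 is the value, 1 the derivative
    angular = mathieu_cem(n, q, v)[0]     # ce_n(v; q)
    return np.abs(np.outer(radial, angular)) ** 2

# Example: a low-order mode with angular nodes (illustrative parameters).
density = even_mathieu_mode(n=2, q=1.5, u0=1.0)
```

For a hard-wall ellipse one would additionally impose the boundary condition Mc1_n(u0; q) = 0 to select the allowed q values, mirroring the quantization condition described in the text.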
Since the polaritons show a quadratic in-plane dispersion for small wavevectors, their in-plane motion is well described by the Schrödinger equation. Kaitouni et al. (2006) By modifying the definition of , the above solutions of the Helmholtz equation also solve the Schrödinger equation for elliptical boundary conditions, such as the elliptical quantum well with infinite barriers. Waalkens et al. (1997) Another family of analytic solutions of the Schrödinger equation with elliptical symmetry is given by the Hermite-Gaussian modes of an anisotropic two-dimensional harmonic oscillator. However, we found that they do not describe the observed distributions well since, as we will see later, the confining potential of PD1 is not similar to a parabolic potential but rather to an elliptical well. The solutions Coisson et al. (2009) with the mode orders , assigned to the measured states of intensities , are shown in Fig. 3, yielding a qualitative agreement. For each energy state, was adjusted in order to reproduce the experimental patterns. The structural characterization of PD1 by DIC and FIB/SEM is given in Fig. 4. The DIC data shows a diameter of the defect on the surface of about m, similar to the extension of the spatially resolved transmission from this defect, and a height up to about  nm. The FIB/SEM cross-sections show the origin of the defect: a thickened GaAs layer with a center depression in the third DBR period above the cavity, extending over m in , and adding about  nm in thickness in the center, visible in the m cross-section. Further discussion of the defect growth dynamics will be given later.

Figure 5: Hyperspectral imaging of polariton states bound to PD2 in MC1 with  eV, detailed description as for Fig. 3.

We now move to the second defect PD2 on MC1, for which the hyperspectral transmission images are shown in Fig. 5.
The states are arranged along a ring of about m diameter, with the lowest state localized at small , and higher states gradually extending along the ring, as in a one-dimensional harmonic oscillator, until the whole ring is filled. The underlying potential for the polaritons appears to be a ring-shaped well, with a depth decreasing with increasing . The analysis of the potential from the states shown in Sec. IV.3 confirms this interpretation.

Figure 6: Structural characterization of PD2, detailed description as for Fig. 4.

The structural characterization of PD2 by DIC and FIB/SEM is given in Fig. 6. The surface profile is similar to PD1, with a size of about m diameter. The defect source is much deeper in the structure than for PD1, in the 23rd period of the DBR below the cavity, and has a larger height of about nm. This height is about twice the nominal height of the GaAs layer, and leads to discontinuities of the DBRs for about 4 layers above the defect. The deep center depression of the defect is consistent with the ring-shaped polariton confinement potential shown in Fig. 11.

IV.2 Sample MC2

Figure 7: Polariton spectrum of PD3 with  eV, detailed description as for Fig. 3.

Figure 8: Structural characterization of PD3, detailed description as for Fig. 4.

This sample was grown at a lower temperature than MC1 (see Table 1), and shows oval-shaped defects. Two examples of defects for this sample are given here, referred to as PD3 and PD4. The polariton states of PD3 are visible in the hyperspectral transmission images in Fig. 7. The states come in nearly degenerate pairs with point-reflected wavefunctions, extended along the x-axis (), for example the state pairs (1,2), (3,4), and (5,6). The states also have an approximate mirror symmetry about the x axis.
Similar "double-well" eigenstates were observed for several other oval defects in this sample. All of them exhibit the same sequence of states, while the number of confined states varied. Localization of the states close to the center of the defect, and the absence of mixed states of different parities, indicate a very high potential barrier in the middle and deep wells on both sides; the corresponding potential in the direction can be written as with the scaling constants . The resulting states of the two sides do not mix significantly. The states for are shifted by  meV to higher energies. In both wells we observe a ground state, followed by the first excited state some  meV above with one node, the second excited state some  meV further with two nodes, the third excited state some  meV further with 3 nodes, and higher states. The structural characterization of PD3 is given in Fig. 8. The surface profile is oval, with the extension along of m, reduced by a factor of two compared to that of PD1, while the extension along of m is similar to that of PD1. The height of the surface modulation is about 140 nm, twice the value seen for PD1 and PD2, and about twice the nominal height of the GaAs Bragg layer. The height increase is consistent with the reduced lateral size when accommodating the same volume. The defect source is in the 20th DBR period below the cavity.

Figure 9: Polariton spectrum of PD4 with  eV, detailed description as for Fig. 3.

Figure 10: Structural characterization of PD4, detailed description as for Fig. 4.

We now move to the second defect PD4 on MC2, for which the hyperspectral transmission images are given in Fig. 9. It shows the deepest localized states of all PDs studied, with the ground state meV below the continuum. The states show an approximate mirror symmetry along the and axes.
We can model them with Mathieu functions as shown in Fig. 9, using a strong ellipticity. The structural characterization of PD4 is given in Fig. 10. The surface profile is oval like PD3, but with a 20% smaller extension and a three times smaller height. The defect source is in the second period of the DBR below the cavity, and has a height of about nm. Being so close to the cavity, the additional GaAs has a strong influence on the polariton states, and the crater in the middle gives rise to a confinement potential with a barrier between the center and the peripheral area, as evidenced by the spatial distribution of the confined wavefunctions.

Figure 11: Potentials of PDs calculated using Eq.(4). The color scale is given, covering 0 to -24 meV for PD1 and PD2 and 0 to -46 meV for PD3 and PD4.

IV.3 Potential Reconstruction

The observed localized polariton states can be related to an effective confinement potential for the in-plane polariton motion. We can estimate using the spectrally integrated density of states created by below the continuum edge, as introduced in Ref. Zajac et al., 2012b. We use , where the bound state probability densities are taken as the normalized measured intensities . This expression assumes that the emitted field is proportional to the polariton wavefunction, which is valid for a cavity lifetime which is constant over the in-plane wavevector components of the bound states. This is a good approximation for small in-plane wavevectors, less than 10% of the light wavevector in the cavity of about m. Some of the strongly localized states in our study with small feature sizes are likely to deviate from this approximation. is given by the integral of the free density of states from zero kinetic energy at the potential floor to the continuum, neglecting the spatial variation of the confinement potential, i.e. in the limit of small level splitting compared to the confinement potential.
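In practice this estimate amounts to dividing the summed bound-state probability density by the constant two-dimensional density of states rho = m/(2*pi*hbar^2). A minimal sketch follows; the polariton effective mass below is a placeholder, since the measured value appears only in the text, and the input maps are assumed to be normalized probability densities per unit area.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # kg
M_POL = 1e-4 * M_E       # placeholder polariton effective mass (not the measured value)

RHO_2D = M_POL / (2 * np.pi * HBAR**2)   # constant 2D density of states, 1/(J m^2)

def confinement_potential(state_densities):
    """Estimate V(r) from the normalized bound-state densities |psi_i(r)|^2.

    Each array in state_densities is one state's probability density per unit
    area (1/m^2). Their sum is the areal density of bound states, which for a
    constant DOS equals RHO_2D * (-V); hence V = -sum / RHO_2D, in joules.
    """
    n_bound = np.sum(state_densities, axis=0)
    return -n_bound / RHO_2D
```

A deeper well thus simply shows up as a larger integrated bound-state density at that position, with negative V denoting attraction.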
In two dimensions the density of states is constant and given by , and we find . We use the effective mass of the polaritons from the measured dispersion , where is the free electron mass. The resulting confinement potentials for the investigated PDs are shown in Fig. 11. The symmetry of the potentials reflects the symmetry of the localized states.

IV.4 Surface Reconstruction

The series of SEM cross-section images taken at various (see Figs. 4, 6, 8, 10) provides volume information on the defects. To reconstruct the shape of the defects in three dimensions, we determine the positions of the interfaces between the GaAs and AlAs layers in the SEM images. We use PD3 (see Fig. 8) as an example here. The SEM images show a signal , which is proportional to the detected secondary electron current, differing by about 10% between AlAs and GaAs surfaces. The RMS noise in was about 5% of the GaAs signal. SEM images were taken with a nominal magnification between 65600 and 79500. We calibrated the vertical () axis to match the nominal Bragg period, yielding pixel sizes between  nm and  nm with 2% error. The noise of limits the precision with which the layer interface positions can be determined. To enable a reliable fit of the interface positions, we have averaged the data over 5 pixels (60 nm) along , orthogonal to the growth direction . The resulting was fitted with a model function of the epitaxial structure along . The model assumes a Gaussian resolution of the imaging with a variance of , and a constant thickness of the AlAs layers, not affected by the defect, which is motivated by the small surface diffusion length of Al compared to Ga. The sequence of AlAs layers in GaAs is then described by . The polynomial coefficients describe the background, is half the signal difference between AlAs and GaAs, and are the positions of the lower interfaces of the AlAs layers. The layer index is the AlAs layer number in growth direction.
The topmost 3-4 layers were excluded from the fitted region as the background varied strongly due to the change in secondary electron collection efficiency (see e.g. Fig. 4b). The resolution parameter was 40 nm, corresponding to a FWHM of 67 nm. The fitted layer positions show a noise of a few nm. An example of such a fit is given in Fig. 12.

Figure 12: Example of a fit (blue line) to the SEM profile (black line) of PD3, position mm in Fig. 8. A linear offset has been subtracted for better visibility.

Using the for the different cross-sections , we can reconstruct height maps of the AlAs layers within the structure across and . A linear slope and an offset along were subtracted from each individual cross-section to reproduce the nominal position outside the defect.

Figure 13: Height maps of AlAs layers in PD3. The sequential numbers of the layers are given. Linear grey scale from -20 nm (black) to +140 nm (white) relative to the nominal layer position. Layer 7 is the first above the GaAs layer containing the droplet, layer 27 is the first layer above the GaAs cavity layer, and layer 46 is the last fitted layer.

The height maps of PD3 reconstructed from 15 cross-sections (see lines in Fig. 8, not all shown) are displayed in Fig. 13. The evolution of the surface modulation can be followed. The first layer above the defect source () shows the center depression of the GaAs, similar to what is observed in liquid droplet epitaxy Mano et al. (2005). Two maxima of the thickness are observed along the preferential surface diffusion direction . With increasing layer number, first the depression disappears (), followed by a general extension and flattening of the structure. By integrating the height profiles of the defect for different cuts, we have determined the volume of the additional GaAs material, which is constant within the error.
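The interface fit behind these height maps can be sketched as a Gaussian-broadened stack of constant-thickness AlAs layers on a slowly varying background, fitted with non-linear least squares; curve_fit infers the number of interface positions from the length of the initial guess. The layer thickness and signal contrast below are illustrative, not the calibrated values from the measurement.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

D_ALAS = 60.0   # assumed constant AlAs layer thickness in nm (illustrative)
SIGMA = 40.0    # Gaussian resolution parameter in nm, as quoted in the text

def sem_profile(z, a0, a1, amp, *z_j):
    """Linear background plus Gaussian-broadened AlAs layers at lower interfaces z_j."""
    s = a0 + a1 * z
    for zj in z_j:
        # top-hat of width D_ALAS convolved with a Gaussian of standard deviation SIGMA
        s += 0.5 * amp * (erf((z - zj) / (np.sqrt(2) * SIGMA))
                          - erf((z - zj - D_ALAS) / (np.sqrt(2) * SIGMA)))
    return s

# Synthetic example: recover four interface positions from a noisy profile.
rng = np.random.default_rng(0)
z = np.linspace(0, 900, 1800)
true_pos = [200.0, 350.0, 500.0, 650.0]
signal = sem_profile(z, 1.0, 1e-4, -0.1, *true_pos) \
    + 0.005 * rng.standard_normal(z.size)
p0 = [1.0, 0.0, -0.1, 190.0, 360.0, 490.0, 660.0]
popt, _ = curve_fit(sem_profile, z, signal, p0=p0)
fitted_pos = np.sort(popt[3:])
```

Even with noise at a few percent of the contrast, the interface positions are recovered to within a few nanometers, consistent with the few-nm position noise quoted above.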
From this volume, we can deduce the radius of the deposited Ga droplet, which is  nm.

V Discussion

For all of the 15 PDs investigated in this work, of which four have been shown as examples, we find a similar origin: a local increase of a GaAs layer thickness with a depression in the center. The additional GaAs volume can only be created by a local deposition of Ga, as the growth is limited by the group III element, while the group V element As is provided in a much larger amount, given by the V/III flux ratio of about 50, and desorbs if not bound to the surface with a Ga atom to form GaAs. The only available source for this excess Ga deposition is Ga droplets from the Ga source. The shapes of the polariton potentials created by the PDs are a consequence of the Ga droplet size and its deposition position relative to the cavity layer. In order to simulate the 0-dimensional polariton states quantitatively, a full three-dimensional simulation of the mode structure in the cavity would be needed, which is beyond the scope of the present work. For a qualitative argument, one can use a first-order perturbation picture. The polariton intensity decays exponentially into the Bragg mirror with a decay length of about 400 nm. For PD2, the GaAs layer thickening is nm, but it is separated by 23 DBR periods, about m, or 8 decay lengths from the cavity, where the polariton intensity has decayed to 0.02%. This results in a small influence on the polaritons and a small localization energy of the ground state of 5 meV. For this defect we observe a large central crater and DBR layer discontinuities, as can be seen in the central cut in Fig. 6. This could give rise to a repulsive central part of the PD2 potential as observed in Fig. 11. In PD1, the Ga droplet had a similar size as in PD2 (GaAs thickening 90 nm), but it hit the surface only three DBR periods above the cavity layer.
Even though the induced structural perturbation propagates away from the cavity layer, it has a much larger influence, with the third excited state at -9 meV and an estimated ground state confinement energy of 20 meV. The significantly smaller lateral extension of the defects in MC2 leads to a larger height of the perturbation, which in turn results in stronger confinement of polariton states with shapes as seen for PD3 and PD4. The evolution of the defect structure during growth can be pictured as follows. After a Ga droplet was deposited on the surface, Ga diffuses over the surface from the droplet to the surrounding areas. Due to the large V/III flux ratio, there is a sufficient surplus of As impinging onto the surface to convert the diffusing Ga into GaAs, leading to an additional GaAs thickness which decays with the distance from the deposition spot according to the Ga diffusion length. The depression in the center of the resulting profile is due to reduced GaAs growth below the Ga droplet, which requires the diffusion of As through the Ga droplet to the GaAs surface. Once the droplet has been consumed, the subsequent GaAs growth generally tends to smooth the surface due to the Ga surface diffusion and the preferential attachment of Ga at monolayer steps, which have a density proportional to the surface gradient for gradients exceeding the gradient due to monolayer islands (for an island size of 20 nm, a gradient of 1%). Al instead has a much shorter diffusion length, and therefore the surface profile is essentially conserved during the growth of the AlAs layers. For GaAs grown on a (001)-oriented substrate at C at a V/III flux ratio of 2 and a growth rate of m/h, the diffusion length was reported Koshiba et al. (1994) to be m and m for Ga and Al, respectively. The observed PD anisotropy of 1:2 to 1:3 along the : directions for MC2 grown at a temperature of C reduces to less than 1:1.1 for MC1 grown at C.
This finding can be explained by temperature-dependent diffusion lengths for these two crystallographic directions. In Ref. Ohta et al., 1989, was found, resulting in diffusion lengths of for a V/III flux ratio of 1.5 and growth temperatures in the range of C, in agreement with the aspect ratio of the defects found in MC2. The reduction of the anisotropy for the higher growth temperature of MC1 indicates an activated diffusion with different activation energies in the two directions. At higher temperatures, the thermal energy exceeds the activation energies, and a kinetically limited isotropic diffusion is recovered. The presence of different activation energies for diffusion in the two crystallographic directions is plausible, as during growth the GaAs surface shows a reconstruction Biegelsen et al. (1990) giving rise to a channel-like structure along .

VI Conclusions

We have shown that oval or round defects in MBE-grown GaAs microcavities create zero-dimensional polariton states of narrow linewidths. We have revealed their three-dimensional structure and their formation mechanism, an impinging Ga droplet during growth. While we have deduced effective confinement potentials for the defects, a quantitative modeling of the polariton spectra from the three-dimensional structural information obtained by the FIB/SEM data is presently missing. In the context of polaritonic devices, Liew et al. (2011) our work indicates an approach to manufacture two-dimensional polaritonic traps by intentional creation of Ga droplets at specific positions during the MBE growth of a microcavity, rather than by ex-situ etching as described in Ref. Cerna et al., 2009. One could also use Ga droplet epitaxy Mantovani et al. (2004) with a low density to create well-defined localized polariton states in microcavities. The narrow linewidths of the polariton states formed in this way are favorable for zero-dimensional polariton switches. Paraiso et al.
(2010)

VII Acknowledgments

The samples were grown at the EPSRC National Centre for III-V Technologies, Sheffield, UK, by Maxime Hugues and Mark Hopkinson (MC1), and Edmund Clarke (MC2). The FIB/SEM investigations were conducted at the London Centre for Nanotechnology, and funded by the EPSRC Access to Materials Research Equipment Initiative under grant EP/F019564/1. We thank Suguo Huo and Paul Warburton for training and assistance with the FIB/SEM. We acknowledge discussions with Paola Atkinson and Edmund Clarke on the growth kinetics, and help with the photolithography by Phil Buckle and Karen Barnett. This work was supported by the EPSRC under grant no. EP/F027958/1.

VIII Appendix

VIII.1 Quantitative Differential Interference Contrast Microscopy

Differential interference contrast (DIC) microscopy, also known as Nomarski microscopy, was used in reflection in this experiment. A Nomarski prism assembly (DIC Slider U-DICT with Polarizer U-ANT) is mounted in an Olympus BX-50 upright microscope. The illumination from a mercury arc lamp is split by the Nomarski prism into two beams 1,2 shifted by the shear displacement in the object plane, with linear polarizations along and orthogonal to . The reflected beams are recombined by the prism, creating a polarization state depending on their relative phase . The transmission through the polarizer depends on the polarization state, such that the intensity depends on the relative phase as , where the offset phase is introduced by an adjustable spatial offset of the Nomarski prism along the optical axis from its nominal position, for which the beams are not displaced in the directional space (objective back focal plane). The shear is similar to the optical resolution of the microscope objective, which allows us to approximate the phase difference between the two beams in first order as the shear times the phase gradient at the observed position, , such that Eq.(6) can be written as .
Choosing , and developing up to first order in the phase difference, we get . Measuring for both offset phases, we determine the contrast .

Figure 14: DIC contrast (black line) as a function of the sample position along the shear direction, and resulting height profile (red line) calculated using Eq.(8).

We can now integrate the contrast along the shear direction to retrieve the phase . In reflection, the phase is related to the surface height by , with the wavelength of the light, such that we arrive at . We assumed here that the sample is not birefringent and that the phase shift of the reflected light is given by the height of the sample surface only, neglecting internal interfaces. The latter is justified as the green light is absorbed strongly by the structure. The height was determined using Eq.(8) with along the direction of the shear . In the measurements presented in this work we used a UplanFL 20x/0.5NA objective, for which the shear was determined to be m using a calibration slide in transmission DIC consisting of a PMMA pattern of 200 nm thickness on a glass coverslip, in which case the phase is given by , with the refractive index difference between PMMA and air. To compensate for systematic errors, the measured across the center of the defect was corrected by the along a line displaced perpendicular to the shear, just outside of the defect. An example of a measured and the resulting height profile for defect PD4 is shown in Fig. 14.
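The procedure above condenses into a few lines: form the contrast from the two offset-phase images, integrate it along the shear direction, and scale by lambda/(4*pi) for reflection. The shear and pixel size in this sketch are placeholders that would be replaced by the calibrated values.

```python
import numpy as np

LAM = 545e-9      # center wavelength of the green illumination (m)
SHEAR = 0.5e-6    # shear s (m): placeholder, must come from calibration
DX = 0.1e-6       # pixel size along the shear direction (m): placeholder

def dic_height(i_plus, i_minus):
    """Surface height from two DIC line scans at offset phases +pi/2 and -pi/2.

    The contrast C = (I+ - I-) / (I+ + I-) approximates s * dphi/dx; summing it
    along the shear direction recovers phi, and in reflection phi = 4*pi*h/lambda.
    """
    c = (i_plus - i_minus) / (i_plus + i_minus)
    phi = np.cumsum(c) * DX / SHEAR
    return phi * LAM / (4 * np.pi)
```

In practice the same integration would be applied to a reference line just outside the defect and subtracted, as described above, to remove systematic offsets.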
Mysterious Quantum Rule Reconstructed From Scratch

By Philip Ball

The Born rule, which connects the math of quantum theory to the outcomes of experiments, has been derived from simpler physical principles. The new work promises to give researchers a better grip on the core mystery of quantum mechanics. Everyone knows that quantum mechanics is an odd theory, but they don’t necessarily know why. The usual story is that it’s the quantum world itself that’s odd, with its superpositions, uncertainty and entanglement (the mysterious interdependence of observed particle states). All the theory does is reflect that innate peculiarity, right? Not really. Quantum mechanics became a strange kind of theory not with Werner Heisenberg’s famous uncertainty principle in 1927, nor when Albert Einstein and two colleagues identified (and Erwin Schrödinger named) entanglement in 1935. It happened in 1926, thanks to a proposal from the German physicist Max Born. Born suggested that the right way to interpret the wavy nature of quantum particles was as waves of probability. The wave equation presented by Schrödinger the previous year, Born said, was basically a piece of mathematical machinery for calculating the chances of observing a particular outcome in an experiment. In other words, Born’s rule connects quantum theory to experiment. It is what makes quantum mechanics a scientific theory at all, able to make predictions that can be tested. “The Born rule is the crucial link between the abstract mathematical objects of quantum theory and the world of experience,” said Lluís Masanes of University College London. The problem is that Born’s rule was not really more than a smart guess — there was no fundamental reason that led Born to propose it. “It was an intuition without a precise justification,” said Adán Cabello, a quantum theorist at the University of Seville in Spain.
“But it worked.” And yet for the past 90 years and more, no one has been able to explain why. Without that knowledge, it remains hard to figure out what quantum mechanics is telling us about the nature of reality. “Understanding the Born rule is important as a way to understand the picture of the world implicit in quantum theory,” said Giulio Chiribella of the University of Hong Kong, an expert on quantum foundations. Several researchers have attempted to derive the Born rule from more fundamental principles, but none of those derivations have been widely accepted. Now Masanes and his collaborators Thomas Galley of the Perimeter Institute for Theoretical Physics in Waterloo, Canada, and Markus Müller of the Institute for Quantum Optics and Quantum Information in Vienna have proposed a new way to pull it out of deeper axioms about quantum theory, an approach that might explain how, more generally, quantum mechanics connects to experiment through the process of measurement. “We derive all the properties of measurements in quantum theory: what the questions are, what the answers are, and what the probability of answers occurring are,” Masanes said. It’s a bold claim. And given that the question of what measurement means in quantum mechanics has plagued the theory since the days of Einstein and Schrödinger, it seems unlikely that this will be the last word. But the approach of Masanes and colleagues is already winning praise. “I like it a lot,” Chiribella said. The work “is a sort of ‘cleaning’ exercise,” Cabello said — a way of ridding quantum mechanics of redundant ingredients. “And that is absolutely an important task.
These redundancies are a symptom that we don’t fully understand quantum theory.”

Where the Puzzle Is

Schrödinger wrote down his equation in 1925 as a formal description of the proposal by the French physicist Louis de Broglie the previous year that quantum particles such as electrons could behave like waves. The Schrödinger equation ascribes to a particle a wave function (denoted ψ) from which the particle’s future behavior can be predicted. The wave function is a purely mathematical expression, not directly related to anything observable. The question, then, was how to connect it to properties that are observable. Schrödinger’s first inclination was to suppose that the amplitude of his wave function at some point in space — equivalent to the height of a water wave, say — corresponds to the density of the smeared-out quantum particle at that point. But Born argued instead that the amplitude of the wave function is related to a probability — specifically, the probability that you will find the particle at that position if you detect it experimentally. In the lecture given for his 1954 Nobel Prize for this work, Born claimed that he had simply generalized from photons, the quantum “packets of light” that Einstein proposed in 1905. Einstein, Born said, had interpreted “the square of the optical wave amplitudes as probability density for the occurrence of photons. This concept could at once be carried over to the ψ-function.” “Born got quantum theory to work using wire and bubble gum,” said Mateus Araújo, a quantum theorist at the University of Cologne in Germany. “It’s ugly, we don’t really know why it works, but we know that if we take it out, the theory falls apart.” Yet the arbitrariness of the Born rule is perhaps the least odd thing about it.
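For readers who like to see the rule rather than read about it: Born's prescription is compact enough to state in two lines of code. The probability of each measurement outcome is the squared magnitude of the corresponding complex amplitude. A toy illustration for a single qubit, with an arbitrarily chosen state vector:

```python
import numpy as np

# An arbitrary normalized qubit state: complex amplitudes for outcomes |0> and |1>.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)

# Born rule: outcome probabilities are the squared moduli of the amplitudes.
probs = np.abs(psi) ** 2   # -> [0.5, 0.5]
```

The simplicity is exactly the point of contention: nothing in the rest of the formalism says why the square, rather than some other function of the amplitudes, is the right choice.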
In most physics equations, the variables refer to objective properties of the system they are describing: the mass or velocity of bodies in Newton’s laws of motion, for instance. But according to Born, the wave function is not like this. It’s not obvious whether it says anything about the quantum entity itself — such as where it is at any moment in time. Rather, it tells us what we might see if we choose to look. It points in the wrong direction: not down toward the system being studied, but up toward the observer’s experience of it. “What makes quantum theory puzzling is not so much the Born rule as a way of computing probabilities,” Chiribella said, “but the fact that we cannot interpret the measurements as revealing some pre-existing properties of the system.” What’s more, the mathematical machinery for unfolding these probabilities can only be written down if you stipulate how you’re looking. If you do different measurements, you might calculate different probabilities, even though you seem to be examining the same system in both cases. That’s why Born’s prescription for turning wave functions into measurement outcomes contains all of the reputed paradoxical nature of quantum theory: the fact that observable properties of quantum objects emerge, in a probabilistic way, from the act of measurement itself. “Born’s probability postulate is where the puzzle really is,” Cabello said. So if we could understand where the Born rule comes from, we might finally understand what the vexed concept of measurement really means in quantum theory.

The Argument

That’s what has largely motivated efforts to explain the Born rule — rather than simply to learn and accept it. One of the most celebrated attempts, presented by the American mathematician Andrew Gleason in 1957, shows that the rule follows from some of the other components of the standard mathematical structure of quantum mechanics. In other words, it’s a tighter package than it originally seemed.
All the same, Gleason’s approach assumes some key aspects of the mathematical formalism needed to connect quantum states to specific measurement outcomes.

One very different approach to deriving the Born rule draws on the controversial many-worlds interpretation of quantum mechanics. Many-worlds is an attempt to solve the puzzle of quantum measurements by assuming that, instead of selecting just one of the multiple possible outcomes, an observation realizes all of them — in different universes that split off from our own. In the late 1990s, many-worlds advocate David Deutsch asserted that apparent quantum probabilities are precisely what a rational observer would need to use to make predictions in such a scenario — an argument that can be used to derive the Born rule. Meanwhile, Lev Vaidman of Tel Aviv University in Israel, and independently Sean Carroll and Charles Sebens of the California Institute of Technology, suggested that the Born rule is the only one that assigns correct probabilities in a many-worlds multiverse during the instant after a split has occurred but before any observers have registered the outcome of the measurement. In that instant the observers do not yet know which branch of the universe they are on — but Carroll and Sebens argued that “there is a uniquely rational way to apportion credence in such cases, which leads directly to the Born Rule.”

The many-worlds picture leads to its own problems, however — not least the issue of what “probability” can mean at all if every possible outcome is definitely realized. The many-worlds interpretation “requires a radical overhaul of many fundamental concepts and intuitions,” said Thomas Galley, a physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada.
What’s more, some say that there is no coherent way to connect an observer before a split to the same individual afterward, and so it is logically unclear what it means for an observer to apply the Born rule to make a prediction “before the event.” For such reasons, many-worlds derivations of the Born rule are not widely accepted.

Masanes and colleagues have now set out an argument that does not require Gleason’s assumptions, let alone many universes, to derive the Born rule. While the rule is typically presented as an add-on to the basic postulates of quantum mechanics, they show that the Born rule follows from those postulates themselves once you admit that measurements generate unique outcomes. That is, if you grant the existence of quantum states, along with the “classical” experience that just one of them is actually observed, you’ve no choice but to square the wave function to connect the two. “Our result shows that not only is the Born rule a good guess, but it is the only logically consistent guess,” Masanes said.

To reach that conclusion, the researchers need just a few basic assumptions. The first is that quantum states are formulated in the usual way: as vectors, possessing both a size and a direction. It’s not that different from saying that each place on Earth can be represented as a point assigned a longitude, latitude and altitude.

The next assumption is also a completely standard one in quantum mechanics: So long as no measurement is made on a particle, it changes in time in a way that is said to be “unitary.” Crudely speaking, this means that the changes are smooth and wavelike, and they preserve information about the particle. This is exactly the behavior that the Schrödinger equation prescribes, and it is in fact unitarity that makes measurement such a headache — because measurement is a non-unitary process, often dubbed the “collapse” of the wave function.
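What unitarity buys can be checked numerically in a toy model: a smooth one-parameter family of rotations on a qubit never changes the total probability, no matter how far the state evolves. A sketch (the particular unitary family and the state are assumptions of the example):

```python
import math

# Unitary evolution preserves total probability. U(t) below is the standard
# one-parameter rotation exp(-i t X) on a qubit; the state is illustrative.
def u(t):
    c, s = math.cos(t), math.sin(t)
    return [[c + 0j, -1j * s], [-1j * s, c + 0j]]

def apply(matrix, vec):
    return [sum(matrix[i][j] * vec[j] for j in range(2)) for i in range(2)]

def total_probability(vec):
    return sum(abs(a) ** 2 for a in vec)

psi = [0.6 + 0j, 0.8j]                  # normalized state
for t in (0.0, 0.3, 1.2, 2.5):
    evolved = apply(u(t), psi)
    assert abs(total_probability(evolved) - 1.0) < 1e-12
```

Measurement, by contrast, throws information away — no single unitary matrix can map many possible pre-measurement states onto one definite outcome.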
In a measurement, only one of several potential states is observed: Information is lost.

The researchers also assume that, for a system of several parts, how you group those parts should make no difference to a measurement outcome. “This assumption is so basic that it is in some sense a precondition of any reasoning about the world,” Galley said. Suppose you have three apples. “If I say, ‘There are two apples on the right and one on the left,’ and you say, ‘There are two apples on the left and one on the right,’ then these are both valid ways of describing the apples. The fact of where we place the dividing line of left and right is a subjective choice, and these two descriptions are equally correct.”

The final assumption embraces measurement itself — but in the most minimal sense conceivable. Simply, a given measurement on a quantum system must produce a unique outcome. There’s no assumption about how that happens, or about how the quantum formalism must be used to predict the probabilities of the outcomes. Yet the researchers show that this process has to follow the Born rule if the postulate about uniqueness of measurement is to be satisfied. Any alternatives to the Born rule for deriving probabilities of observed outcomes from the wave function won’t satisfy the initial postulates.

The result goes further than this: It could also clear up what the measurement machinery of quantum mechanics is all about. In short, there’s a whole technical paraphernalia of requirements in that mechanism: mathematical functions called Hermitian operators that “operate on” the wave function to produce things called eigenvalues that correspond to measurement probabilities, and so on. But none of that is assumed from the outset by Masanes and colleagues. Rather, they find that, like the Born rule, all of these requirements are implicit in the basic assumptions and aren’t needed as extras.
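That paraphernalia, in its simplest concrete form: a measurement probability comes from projecting the state onto an eigenvector of a Hermitian operator, and the expectation value is the probability-weighted sum of eigenvalues. A dependency-free sketch for the 2×2 operator X = [[0, 1], [1, 0]], whose eigen-decomposition (eigenvalues ±1) is standard linear algebra hard-coded here; the state is illustrative:

```python
import math

# Measurement of a Hermitian observable: probabilities are squared overlaps
# with its eigenvectors; the expectation value weights eigenvalues by them.
s = 1 / math.sqrt(2)
eigen = [(+1.0, [s, s]), (-1.0, [s, -s])]    # eigenpairs of X = [[0,1],[1,0]]

psi = [0.6 + 0j, 0.8 + 0j]                   # illustrative normalized state

def overlap(b, v):
    return sum(bi.conjugate() * vi for bi, vi in zip(b, v))

probs = {lam: abs(overlap(vec, psi)) ** 2 for lam, vec in eigen}
expectation = sum(lam * p for lam, p in probs.items())

# Direct check: <psi|X|psi> = 2 * 0.6 * 0.8 = 0.96
assert abs(expectation - 0.96) < 1e-12
```

Masanes and colleagues’ point is that none of this machinery needs to be postulated: it falls out of the more primitive assumptions above.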
“We just assume that there are questions, and when asked these return a single answer with some probability,” Galley said. “We then take the formalism of quantum theory and show that the only questions, answers and probabilities are the quantum ones.”

The work can’t answer the troublesome question of why measurement outcomes are unique; rather, it makes that uniqueness axiomatic, turning it into part of the very definition of a measurement. After all, Galley said, uniqueness “is required for us to be able to even begin to do science.”

However, what qualifies as a “minimal” assumption in quantum theory is rarely if ever straightforward. Araújo thinks that there may be more lurking in these assumptions than meets the eye. “They go far beyond assuming that a measurement exists and has a unique outcome,” he said. “Their most important assumption is that there is a fixed set of measurements whose probabilities are enough to completely determine a quantum state.” In other words, it’s not just a matter of saying measurements exist, but of saying that measurements — with corresponding probabilities of outcomes — are able to tell you everything you can know. That might sound reasonable, but it is not self-evidently true. In quantum theory, few things are. So while Araújo calls the paper “great work,” he adds, “I don’t think it really explains the Born rule, though, any more than noticing that without water we die explains what water is.” And it leaves hanging another question: Why does the Born rule only specify probabilities, and not definite outcomes?

Law Without Law

The project pursued here is one that has become popular with several researchers exploring the foundations of quantum mechanics: to see whether this seemingly exotic but rather ad hoc theory can be derived from some simple assumptions that are easier to intuit. It’s a program called quantum reconstruction.
Cabello has pursued that aim too, and has suggested an explanation of the Born rule that is similar in spirit but different in detail. “I am obsessed with finding the simplest picture of the world that enforces quantum theory,” he said.

His approach starts with the challenging idea that there is in fact no underlying physical law that dictates measurement outcomes: Every outcome may take place so long as it does not violate a set of logical-consistency requirements that connect the outcome probabilities of different experiments. For example, let’s say that one experiment produces three possible outcomes (with particular probabilities), and a second independent experiment produces four possible outcomes. The combined number of possible outcomes for the two experiments is three times four, or 12 possible outcomes, which form a particular, mathematically defined set of combined possibilities.

Such a lawless reality sounds like an unlikely recipe for producing a quantitatively predictive theory like quantum mechanics. But in 1983 the American physicist John Wheeler proposed that statistical regularities in the physical world might emerge from such a situation, as they sometimes do from unplanned crowd behavior. “Everything is built higgledy-piggledy on the unpredictable outcomes of billions upon billions of elementary quantum phenomena,” Wheeler wrote. But there might be no fundamental law governing those phenomena — indeed, he argued, that was the only scenario in which we could hope to find a self-contained physical explanation, because otherwise we’re left with an infinite regression in which any fundamental equation governing behavior needs to be accounted for by some even more fundamental principle.
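The three-times-four bookkeeping for independent experiments is ordinary probability theory, and one of the consistency requirements is visible already at this level: joint probabilities of independent outcomes multiply, and the twelve combined probabilities must still sum to one. A sketch with illustrative outcome distributions:

```python
from itertools import product

# Two independent experiments with illustrative outcome distributions.
p_first = [0.5, 0.3, 0.2]            # three possible outcomes
p_second = [0.4, 0.3, 0.2, 0.1]      # four possible outcomes

# Joint probabilities of independent experiments multiply.
joint = {(i, j): p_first[i] * p_second[j]
         for i, j in product(range(3), range(4))}

assert len(joint) == 12                        # 3 x 4 combined outcomes
assert abs(sum(joint.values()) - 1.0) < 1e-12  # still a valid distribution
```

Cabello’s requirements go beyond this classical case, but the flavor is the same: constraints that tie the probabilities of different experiments together.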
“In contrast to the view that the universe is a machine governed by some magic equation, … the world is a self-synthesizing system,” Wheeler argued. He called this emergence of the lawlike behavior of physics “law without law.”

Cabello finds that, if measurement outcomes are constrained to obey the behaviors seen in quantum systems — where for example certain measurements can be correlated in ways that make them interdependent (entangled) — they must also be prescribed by the Born rule, even in the absence of any deeper law that dictates them. “The Born rule turns out to be a logical constraint that should be satisfied by any reasonable theory we humans can construct for assigning probabilities when there is no law in the physical reality governing the outcomes,” Cabello said. The Born rule is then dictated merely by logic, not by any underlying physical law. “It has to be satisfied the same way as the rule that the probabilities must be between 0 and 1,” Cabello said. The Born rule itself, he said, is thus an example of Wheeler’s “law without law.”

But is it really that? Araújo thinks that Cabello’s approach doesn’t sufficiently explain the Born rule. Rather, it offers a rationale for which quantum correlations (such as those seen in entanglement) are allowed. And it doesn’t eliminate all possible laws governing them, but only those that are forbidden by the consistency principles. “Once you’ve determined which [correlations] are the forbidden ones, everything that remains is allowed,” Araújo said. So it could be lawless down there in the quantum world — or there could be some other self-consistent but still law-bound principle behind what we see.
Any Possible Universe

Although the two studies pull out the Born rule from different origins, the results are not necessarily inconsistent, Cabello said: “We simply have different obsessions.” Masanes and colleagues are looking for the simplest set of axioms for constructing the operational procedures of quantum mechanics — and they find that, if measurement as we know it is possible at all, then the Born rule doesn’t need to be added in separately. There’s no specification of what kind of underlying physical reality gives rise to these axioms. But that underlying reality is exactly where Cabello starts from. “In my opinion, the really important task is figuring out which are the physical ingredients common to any universe in which quantum theory holds,” he said. And if he’s right, those ingredients lack any deep laws.

Evidently that remains to be seen: Neither of these papers will settle the matter. But what both studies have in common is that they aim to show how at least some of the recondite, highly mathematical and apparently rather arbitrary quantum formalism can be replaced with simple postulates about what the world is like. Instead of saying that “probabilities of measurement outcomes are equal to the modulus squared of the wave function,” or that “observables correspond to eigenvalues of Hermitian operators,” it’s enough to say that “measurements are unique” or that “no fundamental law governs outcomes.” It might not make quantum mechanics seem any less strange to us, but it could give us a better chance of understanding it.

This article was first published in Quanta Magazine.
@article{2651, abstract = {The GABAergic system, a major inhibitory regulator in the central nervous system, may also play important roles in peripheral nonneuronal tissues and cells. Recent studies showed that GABAB receptor is expressed in testis and sperm. To understand the role of the GABAergic system in spermiogenesis, we examined cellular localization of GABA and GABAB receptor subunits in rat spermatids by immunocytochemistry. Immunoreactivity for GABA was detected around acrosomal granules of spermatids during the Golgi and cap phases. GABAB(1) immunoreactivity was observed in the acrosomal vesicle of spermatids in Golgi phase, and during cap phase, this reactivity expanded to the entire region of the acrosome covering the nuclear membrane. The level of reactivity decreased gradually with maturation of spermatids. In contrast, GABAB(2) immunoreactivity was not observed in spermatids during Golgi phase but was detected in the equatorial region during cap phase. Both GABA immunoreactivity and GABAB(2) immunoreactivity were transferred to the residual cytoplasm during the release of spermatozoa. Electron microscopic immunocytochemistry revealed that, during cap phase, GABA and GABAB(1) were distributed within the whole acrosomal vesicle but not in the acrosomal granule. GABAB(2) immunoreactivity was observed in the narrow space between the inner acrosomal and nuclear membrane and was limited to the equatorial region of the spermatid head. 
These results indicate that the GABAergic system might be involved in regulation of spermiogenesis.}, author = {Kanbara, Kiyoto and Okamoto, Keiko and Nomura, Sakashi and Kaneko, Takeshi and Ryuichi Shigemoto and Azuma, Haruhito and Katsuoka, Yoji and Watanabe, Masahiko}, journal = {Journal of Andrology}, number = {4}, pages = {485 -- 493}, publisher = {American Society of Andrology}, title = {{Cellular localization of GABA and GABAB receptor subunit proteins during spermiogenesis in rat testis}}, doi = {10.2164/jandrol.04185}, volume = {26}, year = {2005}, } @article{2652, abstract = {We studied neurogliaform neurons in the stratum lacunosum moleculare of the CA1 hippocampal area. These interneurons have short stellate dendrites and an extensive axonal arbor mainly located in the stratum lacunosum moleculare. Single-cell reverse transcription-PCR showed that these neurons were GABAergic and that the majority expressed mRNA for neuropeptide Y. Most neurogliaform neurons tested were immunoreactive for α-actinin-2, and many stratum lacunosum moleculare interneurons coexpressed α-actinin-2 and neuropeptide Y. Neurogliaform neurons received monosynaptic, DNQX-sensitive excitatory input from the perforant path, and 40 Hz stimulation of this input evoked EPSCs displaying either depression or initial facilitation, followed by depression. Paired recordings performed between neurogliaform neurons showed that 85% of pairs were electrically connected and 70% were also connected via GABAergic synapses. Injection of sine waveforms into neurons during paired recordings resulted in transmission of the waveforms through the electrical synapse. Unitary IPSCs recorded from neurogliaform pairs readily fatigued, had a slow decay, and had a strong depression of the synaptic response at a 5 Hz stimulation frequency that was antagonized by the GABA B antagonist (2S)-3-[[(1S)-1-(3,4-dichlorophenyl)ethyl]amino-2-hydroxypropyl](phenylmethyl) phosphinic acid (CGP55845). 
The amplitude of the first IPSC during the 5 Hz stimulation was also increased by CGP55845, suggesting a tonic inhibition of synaptic transmission. A small unitary GABA B-mediated IPSC could also be detected, providing the first evidence for such a component between GABAergic interneurons. Electron microscopic localization of the GABA B1 subunit at neurogliaform synapses revealed the protein in both presynaptic and postsynaptic membranes. Our data disclose a novel interneuronal network well suited for modulating the flow of information between the entorhinal cortex and CA1 hippocampus.}, author = {Price, Christopher J and Cauli, Bruno and Kovács, Endre R and Kulik, Ákos and Lambolez, Bertrand and Ryuichi Shigemoto and Capogna,Marco}, journal = {Journal of Neuroscience}, number = {29}, pages = {6775 -- 6786}, publisher = {Society for Neuroscience}, title = {{Neurogliaform neurons form a novel inhibitory network in the hippocampal CA1 area}}, doi = {10.1523/JNEUROSCI.1135-05.2005}, volume = {25}, year = {2005}, } @article{2653, abstract = {Synaptic vesicle release occurs at a specialized membrane domain known as the presynaptic active zone (AZ). Several membrane proteins are involved in the vesicle release processes such as docking, priming, and exocytotic fusion. Cytomatrix at the active zone (CAZ) proteins are structural components of the AZ and are highly concentrated in it. Localization of other release-related proteins including target soluble N-ethylmaleimide-sensitive-factor attachment protein receptor (t-SNARE) proteins, however, has not been well demonstrated in the AZ. Here, we used sodium dodecyl sulfate-digested freeze-fracture replica labeling (SDS-FRL) to analyze quantitatively the distribution of CAZ and t-SNARE proteins in the hippocampal CA3 area. The AZ in replicated membrane was identified by immunolabeling for CAZ proteins (CAZ-associated structural protein [CAST] and Bassoon). 
Clusters of immunogold particles for these proteins were found on the P-face of presynaptic terminals of the mossy fiber and associational/commissural (A/C) fiber. Co-labeling with CAST revealed distribution of the t-SNARE proteins syntaxin and synaptosomal-associated protein of 25 kDa (SNAP-25) in the AZ as well as in the extrasynaptic membrane surrounding the AZ (SZ). Quantitative analysis demonstrated that the density of immunoparticles for CAST in the AZ was more than 100 times higher than in the SZ, whereas that for syntaxin and SNAP-25 was not significantly different between the AZ and SZ in both the A/C and mossy fiber terminals. These results support the involvement of the t-SNARE proteins in exocytotic fusion in the AZ and the role of CAST in specialization of the membrane domain for the AZ.}, author = {Hagiwara, Akari and Fukazawa, Yugo and Deguchi-Tawarada, Maki and Ohtsuka, Toshihisa and Ryuichi Shigemoto}, journal = {Journal of Comparative Neurology}, number = {2}, pages = {195 -- 216}, publisher = {Wiley-Blackwell}, title = {{Differential distribution of release-related proteins in the hippocampal CA3 area as revealed by freeze-fracture replica labeling}}, doi = {10.1002/cne.20633}, volume = {489}, year = {2005}, } @article{2654, abstract = {Presynaptic metabotropic glutamate receptors (mGluRs) show a highly selective expression and subcellular location in nerve terminals modulating neurotransmitter release. We have demonstrated that alternatively spliced variants of mGluR8, mGluR8a and mGluR8b, have an overlapping distribution in the hippocampus, and besides perforant path terminals, they are expressed in the presynaptic active zone of boutons making synapses selectively with several types of GABAergic interneurons, primarily in the stratum oriens. Boutons labeled for mGluR8 formed either type I or type II synapses, and the latter were GABAergic. Some mGluR8-positive boutons also expressed mGluR7 or vasoactive intestinal polypeptide. 
Interneurons strongly immunopositive for the muscarinic M2 or the mGlu1 receptors were the primary targets of mGluR8-containing terminals in the stratum oriens, but only neurochemically distinct subsets were innervated by mGluR8-enriched terminals. The majority of M2-positive neurons were mGluR8 innervated, but a minority, which expresses somatostatin, was not. Rare neurons coexpressing calretinin and M2 were consistently targeted by mGluR8-positive boutons. In vivo recording and labeling of an mGluR8-decorated and strongly M2-positive interneuron revealed a trilaminar cell with complex spike bursts during theta oscillations and strong discharge during sharp wave/ripple events. The trilaminar cell had a large projection from the CA1 area to the subiculum and a preferential innervation of interneurons in the CA1 area in addition to pyramidal cell somata and dendrites. The postsynaptic interneuron type-specific expression of the high-efficacy presynaptic mGluR8 in both putative glutamatergic and in identified GABAergic terminals predicts a role in adjusting the activity of interneurons depending on the level of network activity.}, author = {Ferraguti, Francesco and Klausberger,Thomas and Cobden, Philip M and Baude, Agnès and Roberts, John D and Szűcs, Péter and Kinoshita, Ayae and Ryuichi Shigemoto and Somogyi, Péter and Dalezios, Yannis}, journal = {Journal of Neuroscience}, number = {45}, pages = {10520 -- 10536}, publisher = {Society for Neuroscience}, title = {{ Metabotropic glutamate receptor 8-expressing nerve terminals target subsets of GABAergic neurons in the hippocampus}}, doi = {10.1523/JNEUROSCI.2547-05.2005}, volume = {25}, year = {2005}, } @article{2655, abstract = {Input-dependent left-right asymmetry of NMDA receptor ε2 (NR2B) subunit allocation was discovered in hippocampal Schaffer collateral (Sch) and commissural fiber pyramidal cell synapses (Kawakami et al., 2003). 
To investigate whether this asymmetrical ε2 allocation is also related to the types of the postsynaptic cells, we compared postembedding immunogold labeling for ε2 in left and right Sch synapses on pyramidal cells and interneurons. To facilitate the detection of ε2 density difference, we used ε1 (NR2A) knock-out (KO) mice, which have a simplified NMDA receptor subunit composition. The labeling density for ε2 but not ζ1 (NR1) and subtype 2/3 glutamate receptor (GluR2/3) in Sch-CA1 pyramidal cell synapses was significantly different between the left and right hippocampus with opposite directions in strata oriens and radiatum; the left to right ratio of ε2 labeling density was 1:1.50 in stratum oriens and 1.44:1 in stratum radiatum. No significant difference, however, was detected in CA1 stratum radiatum between the left and right Sch-GluR4-positive (mostly parvalbumin-positive) and Sch-GluR4-negative interneuron synapses. Consistent with the anatomical asymmetry, the amplitude ratio of NMDA EPSCs to non-NMDA EPSCs in pyramidal cells was approximately two times larger in right than left stratum radiatum and vice versa in stratum oriens of ε1 KO mice. Moreover, the amplitude of long-term potentiation in the Sch-CA1 synapses of left stratum radiatum was significantly larger than that in the right corresponding synapses. 
These results indicate that the asymmetry of ε2 distribution is target cell specific, resulting in the left-right difference in NMDA receptor content and plasticity in Sch-CA1 pyramidal cell synapses in ε1 KO mice.}, author = {Wu, Yue and Kawakami, Ryosuke and Shinohara, Yoshiaki and Fukaya, Masahiro and Sakimura, Kenji and Mishina, Masayoshi and Watanabe, Masahiko and Ito, Isao and Ryuichi Shigemoto}, journal = {Journal of Neuroscience}, number = {40}, pages = {9213 -- 9226}, publisher = {Society for Neuroscience}, title = {{Target-cell-specific left-right asymmetry of NMDA receptor content in Schaffer collateral synapses in ε1/NR2A knock-out mice}}, doi = {10.1523/JNEUROSCI.2134-05.2005}, volume = {25}, year = {2005}, } @article{2656, abstract = {Previous studies have shown that neurons in the sacral dorsal commissural nucleus (SDCN) express neurokinin-1 receptor (NK1R) and can be modulated by the co-release of GABA and glycine (Gly) from single presynaptic terminal. These results raise the possibility that GABA/Gly-cocontaining terminals might make synaptic contacts with NK1R-expressing neurons in the SDCN. In order to provide morphological evidence for this hypothesis, the triple-immunohistochemical studies were performed in the SDCN. Triple-immunofluorescence histochemical study showed that some axon terminals in close association with NK1R-immunopositive (NK1R-ip) neurons in the SDCN were immunopositive for both glutamic acid decarboxylase (GAD) and glycine transporter 2 (GlyT2). In electron microscopic dual- and triple-immunohistochemistry for GAD/GlyT2, GAD/NK1R, GlyT2/NK1R, or GAD/GlyT2/NK1R also revealed dually labeled (GAD/GlyT2-ip) synaptic terminals upon SDCN neurons, as well as GAD- and/or GlyT2-ip axon terminals in synaptic contact with NK1R-ip SDCN neurons. 
These results suggested that some synaptic terminals upon NK1R-expressing SDCN neurons co-released both GABA and Gly.}, author = {Feng, Yu-Peng and Li, Yun-Qing and Wang, Wen and Wu, Sheng-Xi and Chen, Tao and Ryuichi Shigemoto and Mizuno, Noboru}, journal = {Neuroscience Letters}, number = {3}, pages = {144 -- 148}, publisher = {Elsevier}, title = {{Morphological evidence for GABA/glycine-cocontaining terminals in synaptic contact with neurokinin-1 receptor-expressing neurons in the sacral dorsal commissural nucleus of the rat}}, doi = {10.1016/j.neulet.2005.06.068}, volume = {388}, year = {2005}, } @article{2658, abstract = {Enhanced glutamatergic neurotransmission via the subthalamopallidal or subthalamonigral projection seems crucial for developing parkinsonian motor signs. In the present study, the possible changes in the expression of metabotropic glutamate receptors (mGluRs) were examined in the basal ganglia of a primate model for Parkinson's disease. When the patterns of immunohistochemical localization of mGluRs in monkeys administered systemically with 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) were analysed in comparison with normal controls, we found that expression of mGluR1α, but not of other subtypes, was significantly reduced in the internal and external segments of the globus pallidus and the substantia nigra pars reticulata. To elucidate the functional role of mGluR1 in the control of pallidal neuron activity, extracellular unit recordings combined with intrapallidal microinjections of mGluR1-related agents were then performed in normal and parkinsonian monkeys. In normal awake conditions, the spontaneous firing rates of neurons in the pallidal complex were increased by DHPG, a selective agonist of group I mGluRs, whereas they were decreased by AIDA, a selective antagonist of group I mGluRs, or LY367385, a selective antagonist of mGluR1. 
These electrophysiological data strongly indicate that the excitatory mechanism of pallidal neurons by glutamate is mediated at least partly through mGluR1. The effects of the mGluR1-related agents on neuronal firing in the internal pallidal segment became rather obscure after MPTP treatment. Our results suggest that the specific down-regulation of pallidal and nigral mGluR1α in the parkinsonian state may exert a compensatory action to reverse the overactivity of the subthalamic nucleus-derived glutamatergic input that is generated in the disease.}, author = {Kaneda, Katsuyuki and Tachibana, Yoshihisa and Imanishi, Michiko and Kita, Hitoshi and Ryuichi Shigemoto and Nambu, Atsushi and Takada, Masahiko}, journal = {European Journal of Neuroscience}, number = {12}, pages = {3241 -- 3254}, publisher = {Wiley-Blackwell}, title = {{Down-regulation of metabotropic glutamate receptor 1α in globus pallidus and substantia nigra of parkinsonian monkeys}}, doi = {10.1111/j.1460-9568.2005.04488.x}, volume = {22}, year = {2005}, } @article{2743, abstract = {We consider the supersymmetric quantum mechanical system which is obtained by dimensionally reducing d = 6, N = 1 supersymmetric gauge theory with gauge group U(1) and a single charged hypermultiplet. Using the deformation method and ideas introduced by Porrati and Rozenberg [1], we present a detailed proof of the existence of a normalizable ground state for this system.}, author = {László Erdös and Hasler, David G and Solovej, Jan P}, journal = {Annales Henri Poincare}, number = {2}, pages = {247 -- 267}, publisher = {Birkhäuser}, title = {{Existence of the D0-D4 bound state: A detailed proof}}, doi = {10.1007/s00023-005-0205-0}, volume = {6}, year = {2005}, } @article{2744, abstract = {We study the long time evolution of a quantum particle interacting with a random potential in the Boltzmann-Grad low density limit. 
We prove that the phase space density of the quantum evolution defined through the Husimi function converges weakly to a linear Boltzmann equation. The Boltzmann collision kernel is given by the full quantum scattering cross-section of the obstacle potential.}, author = {Eng, David and László Erdös}, journal = {Reviews in Mathematical Physics}, number = {6}, pages = {669 -- 743}, publisher = {World Scientific Publishing}, title = {{The linear Boltzmann equation as the low density limit of a random Schrödinger equation}}, doi = {10.1142/S0129055X0500242X}, volume = {17}, year = {2005}, } @article{2788, abstract = {We present the results of an experimental investigation into the nature and structure of turbulent pipe flow at moderate Reynolds numbers. A turbulence regeneration mechanism is identified which sustains a symmetric traveling wave within the flow. The periodicity of the mechanism allows comparison to the wavelength of numerically observed exact traveling wave solutions and close agreement is found. The advection speed of the upstream turbulence laminar interface in the experimental flow is observed to form a lower bound on the phase velocities of the exact traveling wave solutions. Overall our observations suggest that the dynamics of the turbulent flow at moderate Reynolds numbers are governed by unstable nonlinear traveling waves.}, author = {Björn Hof and van Doorne, Casimir W and Westerweel, Jerry and Nieuwstadt, Frans T}, journal = {Physical Review Letters}, number = {21}, publisher = {American Physical Society}, title = {{Turbulence regeneration in pipe flow at moderate reynolds numbers}}, doi = {10.1103/PhysRevLett.95.214502}, volume = {95}, year = {2005}, }
Ab initio quantum chemistry methods

Ab initio quantum chemistry methods are computational chemistry methods based on quantum chemistry.[1] The term ab initio indicates that the calculation is from first principles and that no empirical data is used. Robert Parr claims in an interview that the term was first used in a letter to him by David Craig and was put into the manuscript of their paper on the excited states of benzene published in 1950.[2][3]

The simplest type of ab initio electronic structure calculation is the Hartree-Fock (HF) scheme, in which the instantaneous Coulombic electron-electron repulsion is not specifically taken into account. Only its average effect (mean field) is included in the calculation. This is a variational procedure; therefore the obtained approximate energies, expressed in terms of the system's wave function, are always equal to or greater than the exact energy, and tend to a limiting value called the Hartree-Fock limit as the size of the basis is increased.[4]

Many types of calculations begin with a Hartree-Fock calculation and subsequently correct for electron-electron repulsion, referred to also as electronic correlation. Møller-Plesset perturbation theory (MPn) and coupled cluster theory (CC) are examples of these post-Hartree-Fock methods.[5][6] In some cases, particularly for bond-breaking processes, the Hartree-Fock method is inadequate and this single-determinant reference function is not a good basis for post-Hartree-Fock methods.
It is then necessary to start with a wave function that includes more than one determinant, such as multi-configurational self-consistent field (MCSCF), and methods have been developed that use these multi-determinant references for improvements.[5] Almost always the basis set (which is usually built from the LCAO ansatz) used to solve the Schrödinger equation is not complete, and does not span the Hilbert space associated with ionization and scattering processes (see continuous spectrum for more details). In the Hartree-Fock method and the configuration interaction method, this approximation allows one to treat the Schrödinger equation as a "simple" eigenvalue equation of the electronic molecular Hamiltonian, with a discrete set of solutions.

Classes of methods

The most popular classes of ab initio electronic structure methods:

Hartree-Fock methods
• Hartree-Fock (HF)
• Restricted Open-shell Hartree-Fock (ROHF)
• Unrestricted Hartree-Fock (UHF)

Post-Hartree-Fock methods

Multi-reference methods

Example: Is Si2H2 like acetylene (C2H2)?

Accuracy and scaling

Ab initio electronic structure methods have the advantage that they can be made to converge to the exact solution when all approximations are sufficiently small in magnitude. In particular, configuration interaction where all possible configurations are included (called "full CI") tends to the exact non-relativistic solution of the Schrödinger equation. The convergence, however, is usually not monotonic, and sometimes the smallest calculation gives the best result for some properties. The downside of ab initio methods is their computational cost; they often take enormous amounts of computer time, memory, and disk space. The HF method scales nominally as N^4 (N being the number of basis functions); i.e., a calculation twice as big takes 16 times as long to complete. However, in practice it can scale more favorably, since the program can identify and neglect zero and extremely small integrals.
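These nominal exponents translate directly into cost multipliers when the basis set grows. A back-of-envelope sketch (the exponents are the formal ones quoted in this section, not measured timings, and the helper name is ours):

```python
# Formal scaling exponents quoted in this section (nominal, not measured).
SCALING = {"HF": 4, "MP2": 5, "MP4": 6, "CC": 7}

def cost_ratio(method, factor=2):
    """Relative cost increase of an O(N^k) method when the basis grows by `factor`."""
    return factor ** SCALING[method]

for m in SCALING:
    print(f"{m}: doubling the basis multiplies the cost by {cost_ratio(m)}")
# HF -> 16, MP2 -> 32, MP4 -> 64, CC -> 128
```

This is why a basis-set increase that is harmless at the HF level can make a coupled cluster calculation intractable.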
Correlated calculations scale even less favorably: MP2 as N^5, MP4 as N^6, and coupled cluster as N^7. DFT methods scale in a similar manner to Hartree-Fock but with a larger proportionality term; thus DFT calculations are always more expensive than an equivalent Hartree-Fock calculation.

Linear scaling approaches

The problem of computational expense can be alleviated through simplification schemes.[13] In the density fitting scheme, the four-index integrals used to describe the interaction between electron pairs are reduced to simpler two- or three-index integrals by treating the charge densities they contain in a simplified way. This reduces the scaling with respect to basis set size. Methods employing this scheme are denoted by the prefix "df-"; for example, density fitting MP2 is df-MP2 (lower-case is advisable to prevent confusion with DFT). In the local approximation, the molecular orbitals are first localized by a unitary rotation in the orbital space (which leaves the reference wave function invariant, i.e., is not an approximation), and subsequently the interactions of distant pairs of localized orbitals are neglected in the correlation calculation. This sharply reduces the scaling with molecular size, a major problem in the treatment of biologically sized molecules. Methods employing this scheme are denoted by the prefix "L", e.g. LMP2. Both schemes can be employed together, as in the recently developed df-LMP2 and df-LCCSD(T0) methods. In fact, df-LMP2 calculations are faster than df-Hartree-Fock calculations and thus are feasible in nearly all situations in which DFT is.

Valence bond methods

Valence bond (VB) methods are generally ab initio, although some semi-empirical versions have been proposed.[1]

Quantum Monte Carlo methods

A method that avoids the variational overestimation of HF in the first place is quantum Monte Carlo (QMC), in its variational, diffusion, and Green's function forms.
These methods work with an explicitly correlated wave function and evaluate integrals numerically using Monte Carlo integration. Such calculations can be very time-consuming, but they are probably the most accurate methods known today.

See also

1. ^ a b Levine, Ira N. (1991). Quantum Chemistry. Englewood Cliffs, New Jersey: Prentice Hall, 455-544. ISBN 0-205-12770-3.
2. ^ History of Quantum Chemistry: Robert G. Parr
3. ^ Parr, Robert G.; Craig, D. P.; and Ross, I. G. (1950). "Molecular Orbital Calculations of the Lower Excited Electronic Levels of Benzene, Configuration Interaction Included". Journal of Chemical Physics 18: 1561-1563.
4. ^ Cramer, Christopher J. (2002). Essentials of Computational Chemistry. Chichester: John Wiley & Sons, Ltd., 153-189. ISBN 0-471-48552-7.
5. ^ a b Cramer, Christopher J. (2002). Essentials of Computational Chemistry. Chichester: John Wiley & Sons, Ltd., 191-232. ISBN 0-471-48552-7.
6. ^ Jensen, Frank (2007). Introduction to Computational Chemistry. Chichester, England: John Wiley and Sons, 98-149. ISBN 0470011874.
7. ^ Colegrove, B. T.; Schaefer, Henry F. III (1990). "Disilyne (Si2H2) revisited". Journal of Physical Chemistry 94: 5593.
8. ^ Grev, R. S.; Schaefer, Henry F. III (1992). "The remarkable monobridged structure of Si2H2". Journal of Chemical Physics 97: 7990.
9. ^ Palágyi, Zoltán; Schaefer, Henry F. III; Kapuy, Ede (1993). "Ge2H2: A molecule with a low-lying monobridged equilibrium geometry". Journal of the American Chemical Society 115: 6901-6903.
10. ^ Stephens, J. C.; Bolton, E. E.; Schaefer, H. F. III; and Andrews, L. (1997). "Quantum mechanical frequencies and matrix assignments to Al2H2". Journal of Chemical Physics 107: 119-223.
11. ^ Palágyi, Zoltán; Schaefer, Henry F. III; Kapuy, Ede (1993). "Ga2H2: planar dibridged, vinylidene-like, monobridged and trans equilibrium geometries". Chemical Physics Letters 203: 195-200.
12. ^ DeLeeuw, B. J.; Grev, R. S.; and Schaefer, Henry F. III (1992).
"A comparison and contrast of selected saturated and unsaturated hydrides of group 14 elements". Journal of Chemical Education 69: 441. 13. ^ Jensen, Frank (2007). Introduction to Computational Chemistry. Chichester, England: John Wiley and Sons, 80 - 81. ISBN 0470011874.  This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Ab_initio_quantum_chemistry_methods". A list of authors is available in Wikipedia.
Web-Schrödinger 3.2 (C) 2007-2020 G. I. Márk, Ph. Lambin, L. P. Biró, MTA MFA Budapest, Hungary -- FUNDP Namur, Belgium

Subscribe to the mailing list to receive e-mail news about Web-Schrödinger (new versions, etc.). Watch introductory videos from the Web-Schrödinger YouTube channel.

Web-Schrödinger is a program for the interactive solution of the stationary (time independent) and time dependent two dimensional (2D) Schrödinger equation. The program itself runs on our server and can be used through the Internet with a simple Web browser (Internet Explorer, Mozilla, Opera, and Chrome were tested). Nothing is installed on the user's computer. The user can load, run, and modify ready-made example files, or prepare her/his own configuration(s), which can be saved on her/his own computer for later use. See [1] for a detailed description of the program.

Theoretical background

Time dependent Schrödinger equation

The time evolution of the quantum mechanical wave function ψ(r; t) is governed by the time dependent Schrödinger equation:

iħ ∂ψ(r; t)/∂t = H ψ(r; t)

where r = (x, y) is the position coordinate, t is the time, and H = K + V is the Hamilton operator, K is the operator of the kinetic energy, and V = V(x, y) is the operator of the potential energy. When the potential function V(x, y) and the initial wave function ψ(x, y, t0) = ψ0(x, y) are known, the time dependent Schrödinger equation determines the wave function ψ(x, y, t) for any time value. We can calculate all observables from the wave function, for example the probability density ρ(x, y, t) and the probability current density j(x, y, t).

Stationary Schrödinger equation

ρ(x, y, t) gives the probability of finding the quantum mechanical particle around the point (x, y) at time t. States ψ(x, y, t) = ψ(x, y), where ψ(x, y) is independent of time, are called stationary states. The stationary (time independent) states are given by the stationary Schrödinger equation:

H ψ(r) = E ψ(r)

where E is the energy of the state.
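The stationary equation Hψ = Eψ can be solved on a mesh exactly as described above. Here is a minimal 1D finite-difference sketch in atomic units (ħ = m = 1); this is an illustration of the general idea, not the program's own algorithm:

```python
import numpy as np

# Solve H psi = E psi on a 1D mesh by finite differences for a particle
# in a box of length L = 1 with hard walls (Dirichlet boundary conditions).
N = 200                       # number of interior mesh points
L = 1.0
dx = L / (N + 1)

# Kinetic energy -(1/2) d^2/dx^2 as a tridiagonal matrix.
main = np.full(N, 1.0 / dx**2)        # diagonal of -(1/2) * second difference
off = np.full(N - 1, -0.5 / dx**2)    # off-diagonal entries
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)      # sorted eigenvalues = stationary energies
exact = np.pi**2 / 2                  # analytic ground state: pi^2 hbar^2 / (2 m L^2)
print(energies[0], exact)             # the two values agree to within ~1e-3
```

Adding a potential V(x) is just a matter of adding its values to the diagonal; the 2D case used by the program works the same way on an Nx × Ny mesh.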
User Guide

All functions of the program are available through a menu system. Upon starting the program a default configuration is loaded; the user can immediately run this through the Calculation menu, or load another configuration with the Load Example or Load menu points. All parameters can be modified in the Edit menu, and the current setup can be saved at any time with the help of the Save function.

Menu system

Load Example

We have prepared several characteristic examples illustrating the most important phenomena of quantum mechanics, including the spreading of the wave packet, tunneling, bound states, etc. The current list of the examples is given in Appendix A. The example library is continuously expanding; see Appendix A for the up to date status. After loading an example setup the user can study and modify the parameters through the Edit menu, or go straight to Calculation to calculate the time development and/or the stationary states.

Load

This function makes it possible to load the user's own configuration files from her/his own computer. Such parameter files can be created either by saving a (possibly modified) example configuration (or the default configuration) or by writing a configuration file from scratch with a text editor or any other program.

Save

The current state of the parameters can be saved at any time to the user's own computer.

The wave function and the potential are represented on a 2D mesh. Here you can specify the number of mesh points (Nx, Ny) in the x and y directions and the size of the calculation region in Ångström (sx, sy). For typical applications the Δx = sx/Nx, Δy = sy/Ny values should be between 0.1 and 1 Å. The origin of the coordinate system is in the middle of the calculation region. The numerical algorithm uses a periodic boundary condition, i.e. what goes out of the calculation region at the right side comes in at the left side. It is as if the whole plane were "tiled" with the calculation region.
As a consequence, when the wave packet approaches the boundary of the calculation box it "meets" its copy in the neighboring box, and this causes unphysical interference effects to appear in the probability density. The parameters of the calculation (spatial and temporal mesh, potential, and initial state) should be carefully chosen to avoid this effect. V0 gives the default value of the potential in eV (electronvolt). Note: due to the difference of the algorithms used for the solution of the time dependent and stationary Schrödinger equations, generally a finer mesh is necessary for the time dependent calculation; e.g. Nx = 256 is a typical value for the time dependent calculation, and Nx = 64 for the stationary calculation.

The potential V(x, y) can be interactively assembled from objects of several types: circle, rectangle, and plane. Any number of these objects can be given. For each object the user can specify its geometrical parameters and its potential value. For pixels where several objects overlap, the object given most recently determines the pixel potential value. The program shows the potential function generated from the current set of objects as a grayscale image.

Initial state

Here the user can specify the initial wave function ψ0(x, y), which is the input of the time dependent calculation (it is not used in the stationary calculation). Its general form is a so-called truncated plane wave [8] wave packet, i.e. a Gaussian wave packet convolved with a 2D square window function. The program displays the chosen initial state together with the potential function as a composite color image. In order to ensure that the wave packet has its ideal form (minimal size and flat envelope) when it hits the potential, a time retardation procedure is included in the initial state preparation. The user can specify the retardation time by giving the bx, by distance values, which mean that after proceeding such distances in x and y the wave packet should have its "ideal" form.
ax, ay give the spatial width of the wave packet. The initial state should be specified in such a way that its overlap with the potential objects is negligible.

The user can place horizontal or vertical line segments (detectors) into the calculation window. The program calculates the probability current I(t) passing through each line segment during the time evolution of the wave packet, and also its time integral T for the whole calculation time. T is called the transmission, because it gives the probability that the quantum particle crosses the given line segment (detector).

Calculation parameters

Here we can specify the parameters of the time dependent and the stationary calculation. Parameters used for the time evolution calculation: the number of time points is Nt, and Δt gives the calculation time step. Δt has to be given in atomic time units; 1 au of time = 0.0242 fs (femtosecond). The numerical algorithm imposes a condition on the maximal Δt value that can be used: Δt < (4/π)(Δx)²/D, where D is the number of dimensions, D = 2 in 2D. (This formula is valid in atomic units, i.e. one has to insert Δx in Bohr, 1 Bohr = 0.529 Å. For the default Δx = 0.3 Å, Δt = 0.2 au is suitable, and this is the default time step.) It is not necessary, however, to display the results on such a fine time scale. Therefore the user can input the "display timestep", i.e. the number of calculation time steps between displayed frames of the wave function. Parameters used for the stationary calculation: Nstat gives the number of states calculated.

Time development

When the user hits the "RUN" button, the time development calculation starts on the server. The progress of the calculation is shown by small thumbnail images. For typical parameters the time development calculation takes 1-2 minutes. (If there are more concurrent jobs on the server – either from this user or from others – the calculation may be somewhat slower.
The program writes out the number of concurrent jobs – if there are any – after hitting the "RUN" button.)

Stationary states

When the user hits the "RUN" button, the calculation of the stationary states starts on the server. It takes several seconds or minutes, depending on the mesh size and the number of orbitals requested. (If there are more concurrent jobs on the server – either from this user or from others – the calculation may be somewhat slower. The program writes out the number of concurrent jobs – if there are any – after hitting the "RUN" button.) When the calculation is completed, the program displays the energies and the wave functions of the stationary states.

Results

After the time development calculation is completed on the server, the time development of the probability density is displayed in composite color images. The program first calculates the global maximum of the probability and normalizes each frame using this value. A nonlinear color scale (γ = 2.5) is used in order to facilitate presentation. If the user placed detectors into the calculation window before the start of the calculation, the program also displays the I(t) probability current functions and T transmission values for each of the detectors.

Appendix A: Examples

The examples are divided into two groups: examples for the time development calculation and examples for the stationary states calculation. Nothing prevents the user from performing both a time evolution and a stationary states calculation for the same example, but the examples listed under "time development" demonstrate interesting cases of time development, while those listed under "stationary states" demonstrate interesting cases of eigenstates. For some cases, however, e.g. for a potential box, both the time evolution and the stationary states give instructive results. The examples were carefully designed to prevent the effect of the periodic boundary condition.
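The Δt stability bound quoted under Calculation parameters can be checked numerically when designing a setup. A minimal sketch (the helper name is ours, not part of the program):

```python
import math

# Check the stability bound dt < (4/pi) * dx^2 / D (atomic units),
# converting the mesh spacing from Angstrom to Bohr first, as the manual says.
BOHR_IN_ANGSTROM = 0.529

def max_timestep(dx_angstrom, dims=2):
    """Largest allowed time step (atomic units) for mesh spacing dx (in Angstrom)."""
    dx_bohr = dx_angstrom / BOHR_IN_ANGSTROM
    return (4 / math.pi) * dx_bohr**2 / dims

print(max_timestep(0.3))   # ~0.205 au: the default dt = 0.2 au sits just under the bound
```

This confirms the manual's statement that Δt = 0.2 au is a suitable default for Δx = 0.3 Å in 2D.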
For the time evolution examples, this was accomplished by halting the time development calculation before the wave packet reaches the edge of the calculation box. For the stationary states calculation, we applied a potential wall at the edges in each example.

Examples for the time development calculation

A wave packet is approaching a periodic potential with energy in the allowed band. The wave packet passes through the potential.
A wave packet is approaching a periodic potential with energy in the forbidden band. The wave packet is reflected from the potential.
Wave packet scattering on a potential forming a Christmas tree.
Quantum analogue of projectile motion: wave packet scattering on a linearly increasing potential. The "Results" menu shows the transferred probabilities and probability densities crossing the detectors shown by the red line segments.
Scattering of a wave packet on a circular hard-core potential. Note the circular component of the final state.
Demonstration of the "quantum revival" phenomenon.
Simulation of Scanning Tunneling Microscope imaging of a carbon nanotube. See [4] for details.
Tunneling of a wave packet through a potential wall of V > E. The wave packet hits the wall at a 75° angle.
Tunneling of a wave packet through a potential wall of V > E. The wave packet hits the wall at a 90° angle.

Examples for the stationary states calculation

Eigenstates of a rectangular potential box.
Eigenstates of a circular potential box.
Eigenstates of a two-dimensional radial quadratic potential.
Eigenstates of a simple model for a diatomic molecule. Note that the two lowest orbitals are "s"-like orbitals, similar to the atomic orbitals, the third orbital is a "sigma" orbital, and the fourth and fifth orbitals are "pi" orbitals.
A potential step inside a potential box: the left half of the potential has a slightly higher potential value than the right half.

Example file contest

Develop your own example files demonstrating interesting quantum phenomena!
You can send the files saved with the Save function to mark@mfa.kfki.hu. The best example files will be included in the Web-Schrödinger "Examples" directory. Please also attach a brief description of the example!

Mailing list

We have a mailing list for announcing new features and examples. The mailing list is hosted by Google Groups.

1. Márk, Géza I.: Web-Schrödinger: Program for the interactive solution of the time dependent and stationary two dimensional (2D) Schrödinger equation; arXiv:2004.10046 [physics.ed-ph] (2020)
2. Schrödinger equation (in several languages)
3. Time development of quantum mechanical systems (1995-) (English and Hungarian)
4. Márk, Géza I.; Biró, László P.; Gyulai, József: Simulation of STM images of 3D surfaces and comparison with experimental data: carbon nanotubes; Phys. Rev. B 58, 12645 (1998).
5. Márk, Géza I.; Biró, László P.; Gyulai, József; Thiry, Paul A.; Lucas, Amand A.; Lambin, Philippe: Simulation of scanning tunneling spectroscopy of supported carbon nanotubes; Phys. Rev. B 62, 2797 (2000).
6. Lambin, Philippe; Márk, Géza I.; Meunier, Vincent; Biró, László P.: Computation of STM images of carbon nanotubes; Int. J. Quantum Chem. 95, 495 (2003).
7. Márk, Géza I.; Biró, László P.; Lambin, Philippe: Calculation of axial charge spreading in carbon nanotubes and nanotube Y-junctions during STM measurement; Phys. Rev. B 70, 115423-1 (2004).
8. Géza I. Márk, PhD Thesis, FUNDP Namur, 2006.
9. Márk, Géza I.; Vancsó, Péter; Hwang, Chanyong; Lambin, Philippe; Biró, László P.: Anisotropic dynamics of charge carriers in graphene; Phys. Rev. B 85, 125443-1 (2012).
10. Vancsó, Péter; Márk, Géza István; Hwang, Chanyong; Lambin, Philippe; Biró, László P.: Time and energy dependent dynamics of the STM tip – graphene system; European Physical Journal B 85, 142-1 (2012)
11.
Márk, Géza I.; Vancsó, Péter; Lambin, Philippe; Hwang, Chanyong; Biró, László P.: Forming electronic waveguides from graphene grain boundaries; Journal of Nanophotonics 6, 061719-1 (2012)
12. S. Janecek, E. Krotscheck: A fast and simple program for solving local Schrödinger equations in two and three dimensions; Comput. Phys. Comm. 178 (11) (2008) 835-842.
13. S. A. Chin, S. Janecek, and E. Krotscheck: An arbitrary order diffusion algorithm for solving Schrödinger equations; Computer Physics Communications 180 (2009) 1700-1708.

Last updated: February 4, 2021 by Géza I. Márk, mark@mfa.kfki.hu
Atomic Theory III: Wave-Particle Duality and the Electron

by Adrian Dingle, B.Sc., Anthony Carpi, Ph.D.

As discussed in our Atomic Theory II module, at the end of 1913 Niels Bohr facilitated the leap to a new paradigm of atomic theory: quantum mechanics. Bohr's new idea that electrons could only be found in specified, quantized orbits was revolutionary (Bohr, 1913). As is consistent with all new scientific discoveries, a fresh way of thinking about the universe at the atomic level would only lead to more questions, the need for additional experimentation and collection of evidence, and the development of expanded theories. As such, at the beginning of the second decade of the 20th century, another rich vein of scientific work was about to be mined.

Periodic trends lead to the distribution of electrons

In the late 19th century, the father of the periodic table, Russian chemist Dmitri Mendeleev, had already determined that the elements could be grouped together in a manner that showed gradual changes in their observed properties. (This is discussed in more detail in our module The Periodic Table of Elements.) By the early 1920s, other periodic trends, such as atomic volume and ionization energy, were also well established.

The German physicist Wolfgang Pauli made a quantum leap by realizing that in order for there to be differences in ionization energies and atomic volumes among atoms with many electrons, there had to be a way that the electrons were not all placed in the lowest energy levels. If multi-electron atoms did have all of their electrons placed in the lowest energy levels, then very different periodic patterns would have resulted from what was actually observed. However, before we reach Pauli and his work, we need to establish a number of more fundamental ideas.
Wave-particle duality The development of early quantum theory leaned heavily on the concept of wave-particle duality. This simultaneously simple and complex idea is that light (as well as other particles) has properties that are consistent with both waves and particles. The idea had been first seriously hinted at in relation to light in the late 17th century. Two camps formed over the nature of light: one in favor of light as a particle and one in favor of light as a wave. (See our Light I: Particle or Wave? module for more details.) Although both groups presented effective arguments supported by data, it wasn’t until some two hundred years later that the debate was settled. At the end of the 19th century the wave-particle debate continued. James Clerk Maxwell, a Scottish physicist, developed a series of equations that accurately described the behavior of light as an electromagnetic wave, seemingly tipping the debate in favor of waves. However, at the beginning of the 20th century, both Max Planck and Albert Einstein conceived of experiments which demonstrated that light exhibited behavior that was consistent with it being a particle. In fact, they developed theories that suggested that light was a wave-particle – a hybrid of the two properties. By the time of Bohr’s watershed papers, the time was right for the expansion of this new idea of wave–particle duality in the context of quantum theory, and in stepped French physicist Louis de Broglie. de Broglie says electrons can act like waves In 1924, de Broglie published his PhD thesis (de Broglie, 1924). He proposed the extension of the wave-particle duality of light to all matter, but in particular to electrons. The starting point for de Broglie was Einstein’s equation that described the dual nature of photons, and he used an analogy, backed up by mathematics, to derive an equation that came to be known as the “de Broglie wavelength” (see Figure 1 for a visual representation of the wavelength). 
The de Broglie wavelength equation is, in the grand scheme of things, a profoundly simple one that relates two variables and a constant: momentum, wavelength, and Planck's constant. There was support for de Broglie's idea since it made theoretical sense, but the very nature of science demands that good ideas be tested and ultimately demonstrated by experiment. Unfortunately, de Broglie did not have any experimental data, so his idea remained unconfirmed for a number of years.

Figure 1: Two representations of a de Broglie wavelength (the blue line) using a hydrogen atom: a radial view (A) and a 3D view (B).

It wasn't until 1927 that de Broglie's hypothesis was demonstrated via the Davisson-Germer experiment (Davisson, 1928). In their experiment, Clinton Davisson and Lester Germer fired electrons at a piece of nickel metal and collected data on the diffraction patterns observed (Figure 2). The diffraction pattern of the electrons was entirely consistent with the pattern already measured for X-rays and, since X-rays were known to be electromagnetic radiation (i.e., waves), the experiment confirmed that electrons had a wave component. This confirmation meant that de Broglie's hypothesis was correct.

Figure 2: A drawing of the experiment conducted by Davisson and Germer, where they fired electrons at a piece of nickel metal and observed the diffraction patterns. image © Roshan220195

Interestingly, it was the experimental efforts of others (Davisson and Germer) that led to de Broglie winning the Nobel Prize in Physics in 1929 for his theoretical discovery of the wave nature of electrons. Without the proof that the Davisson-Germer experiment provided, de Broglie's 1924 hypothesis would have remained just that – a hypothesis. This sequence of events is a quintessential example of a theory being corroborated by experimental data.
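The relation itself is λ = h/p. As a quick illustration (not part of the original article), the wavelength of an electron with the 54 eV kinetic energy used in the Davisson-Germer experiment comes out at atomic dimensions, which is why nickel's crystal lattice could diffract it:

```python
import math

# de Broglie relation lambda = h / p for a non-relativistic electron.
H = 6.626e-34      # Planck's constant, J s
M_E = 9.109e-31    # electron mass, kg
EV = 1.602e-19     # joules per electron volt

def de_broglie_wavelength(energy_ev):
    """Wavelength in meters of an electron with the given kinetic energy (eV)."""
    p = math.sqrt(2 * M_E * energy_ev * EV)   # momentum from E = p^2 / (2m)
    return H / p

print(de_broglie_wavelength(54))   # ~1.67e-10 m, i.e. about 1.67 Angstrom
```

A wavelength of roughly 1.7 Å is comparable to the atomic spacing in a nickel crystal, so the metal acts as a natural diffraction grating for the electron beam.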
Schrödinger does the math

In 1926, Erwin Schrödinger derived his now famous equation (Schrödinger, 1926). For approximately 200 years prior to Schrödinger's work, the infinitely simpler F = ma (Newton's second law) had been used to describe the motion of particles in classical mechanics. With the advent of quantum mechanics, a completely new equation was required to describe the properties of subatomic particles. Since these particles were no longer thought of as classical particles but as particle-waves, Schrödinger's partial differential equation was the answer. In the simplest terms, just as Newton's second law describes how the motion of physical objects changes with changing conditions, the Schrödinger equation describes how the wave function (Ψ) of a quantum system changes over time (Equation 1). The Schrödinger equation was found to be consistent with the description of the electron as a wave, and to correctly predict the parameters of the energy levels of the hydrogen atom that Bohr had proposed.

Equation 1: The Schrödinger equation.

Schrödinger's equation is perhaps most commonly used to define a three-dimensional area of space where a given electron is most likely to be found. Each area of space is known as an atomic orbital and is characterized by a set of three quantum numbers. These numbers represent values that describe the coordinates of the atomic orbital, including its size (n, the principal quantum number), shape (l, the angular or azimuthal quantum number), and orientation in space (m, the magnetic quantum number). There is also a fourth quantum number that is exclusive to a particular electron rather than a particular orbital (s, the spin quantum number; see below for more information). Schrödinger's equation allows the calculation of each of these three quantum numbers.
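The rules relating these three quantum numbers can be made concrete with a short sketch (ours, not the article's): for a principal quantum number n, the azimuthal number l runs from 0 to n-1, and for each l the magnetic number m runs from -l to +l.

```python
# Enumerate the allowed (l, m) combinations for a given principal quantum
# number n.  The letters s, p, d, f are the conventional orbital labels.
def orbitals(n):
    letters = "spdf"
    return [(l, letters[l], list(range(-l, l + 1))) for l in range(n)]

for l, letter, ms in orbitals(3):
    print(f"n=3, l={l} ({letter}): m values {ms}")
# l=0 gives 1 orbital, l=1 gives 3, l=2 gives 5: nine orbitals in the n=3 shell
```

Counting the m values shows why the s, p, and d subshells contain 1, 3, and 5 orbitals respectively.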
This equation was a critical piece in the quantum mechanics puzzle, since it brought quantum theory into sharp focus via what amounted to a mathematical demonstration of Bohr's fundamental quantum idea. The Schrödinger wave equation is important since it bridges the gap between classical Newtonian physics (which breaks down at the atomic level) and quantum mechanics. The Schrödinger equation is rightfully considered to be a monumental contribution to the advancement and understanding of quantum theory, but there are three additional considerations, detailed below, that must also be understood. Without these, we would have an incomplete picture of our non-relativistic understanding of electrons in atoms.

Max Born further interprets the Schrödinger equation

German mathematician and physicist Max Born made a very specific and crucially important contribution to quantum mechanics relating to the Schrödinger equation. Born took the wave functions that Schrödinger produced and said that the solutions to the equation could be interpreted as three-dimensional probability "maps" of where an electron may most likely be found around an atom (Born, 1926). These maps have come to be known as the s, p, d, and f orbitals (Figure 3).

Figure 3: Based on Born's theories, these are representations of the three-dimensional probabilities of an electron's location around an atom. The four orbitals, in increasing complexity, are: s, p, d, and f. Additional information is given about the orbital's magnetic quantum number (m). image © UC Davis/ChemWiki

Werner Heisenberg's uncertainty principle

In the year following the publication of Schrödinger's work, the German physicist Werner Heisenberg published a paper that outlined his uncertainty principle (Heisenberg, 1927). He realized that there were limitations on the extent to which the momentum of an electron and its position could be described.
The Heisenberg uncertainty principle places a limit on the accuracy of simultaneously knowing the position and momentum of a particle: as the certainty of one increases, the uncertainty of the other also increases. The crucial thing about the uncertainty principle is that it fits with the quantum mechanical model in which electrons are not found in very specific, planetary-like orbits – the original Bohr model – and it also dovetails with Born's probability maps. The two contributions (Born's and Heisenberg's), taken together with the solution to the Schrödinger equation, reveal that the position of the electron in an atom can only be accurately predicted in a statistical way. That is to say, we know where the electron is most likely to be found in the atom, but we can never be absolutely sure of its exact position.

Angular momentum, or "spin"

In 1922 the German physicists Otto Stern, an assistant of Born's, and Walther Gerlach conducted an experiment in which they passed silver atoms through a magnetic field and observed the deflection pattern. In simple terms, the results yielded two distinct possibilities related to the single 5s valence electron in each atom. This was an unexpected observation, and implied that a single electron could take on two very distinct states. At the time, nobody could explain the phenomenon that the experiment had demonstrated, and it took a number of scientists, working both independently and in unison with earlier experimental observations, to work it out over a period of several years. In the early 1920s, Bohr's quantum model and the various spectra that had been produced could be adequately described by the use of only three quantum numbers. However, there were experimental observations that could not be explained via only three mathematical parameters.
In particular, as far back as 1896, the Dutch physicist Pieter Zeeman noted that the single valence electron present in the sodium atom could yield two different spectral lines in the presence of a magnetic field. The same phenomenon was observed with other atoms with odd numbers of valence electrons. These observations were problematic because they failed to fit the working model. In 1925, the Dutch physicist George Uhlenbeck and his graduate student Samuel Goudsmit proposed that these odd observations could be explained if electrons possessed angular momentum, a property that Wolfgang Pauli later called "spin." As a result, the existence of a fourth quantum number was revealed, one that is independent of the orbital in which the electron resides, but unique to an individual electron.

By considering spin, the observations by Stern and Gerlach made sense. If an electron could be thought of as a rotating, electrically charged body, it would create its own magnetic moment. If the electron had two different orientations (one right-handed and one left-handed), it would produce two different "spins," and these two different states would explain the anomalous behavior noted by Zeeman. This meant that a fourth quantum number, ultimately known as the "spin quantum number," was needed to fully describe electrons. Later it was determined that the spin number was indeed needed, but for a different reason – either way, a fourth quantum number was required.

Comprehension Checkpoint

Some experimental observations could not be explained mathematically using three parameters because

Spin and the Pauli exclusion principle

In 1922, Niels Bohr visited his colleague Wolfgang Pauli at Göttingen, where Pauli was working. At the time, Bohr was still wrestling with the idea that there was something important about the number of electrons that were found in "closed shells" (shells that had been filled).
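As a side note for modern readers (not part of the historical account above): in today's formalism the electron's spin projection along z is represented by the operator S_z = (ħ/2)σ_z, and its two eigenvalues are exactly the two states behind the Stern–Gerlach result. A minimal sketch, with ħ set to 1:

```python
import numpy as np

# Pauli z-matrix; the spin operator is S_z = (hbar/2) * sigma_z (here hbar = 1).
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
S_z = 0.5 * sigma_z

# A measurement of the spin projection can only return an eigenvalue of S_z:
# the two values -1/2 and +1/2, matching the two beams Stern and Gerlach saw.
eigenvalues = np.linalg.eigvalsh(S_z)  # returned in ascending order
print(eigenvalues)  # [-0.5  0.5]
```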
In his own later account (1946), Pauli describes how, building upon Bohr's ideas and drawing inspiration from others' work, he proposed the idea that only two electrons (with opposite spins) should be allowed in any one quantum state. He called this "two-valuedness" – a somewhat inelegant translation of the German Zweideutigkeit (Pauli, 1925). The consequence was that once a pair of electrons occupies a low-energy quantum state (orbital), any subsequent electrons would have to enter higher-energy quantum states, also restricted to pairs at each level. Using this idea, Bohr and Pauli were able to construct models of the electronic structures of all the atoms from hydrogen to uranium, and they found that their predicted electronic structures matched the periodic trends that were known to exist from the periodic table – theory met experimental evidence once again.

Pauli ultimately formulated what came to be known as the exclusion principle (1925), which used a fourth quantum number (introduced by others) to distinguish between the two electrons that make up the maximum number of electrons that could occupy any given quantum level. In its simplest form, the Pauli exclusion principle states that no two electrons in an atom can have the same set of four quantum numbers. The first three quantum numbers for any two electrons can be the same (which places them in the same orbital), but the fourth number must be either +½ or -½, i.e., they must have different "spins" (Figure 4). This is what Uhlenbeck and Goudsmit's research suggested, following Pauli's original publication of his theories.

Spin angular momentum

Figure 4: A model of the fourth quantum number, spin (s). Shown here are models for particles with spin (s) of ½, or half angular momentum.

The period described here was rich in the development of the quantum theory of atomic structure.
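The closed-shell counting that Bohr and Pauli carried out by hand can be sketched in a few lines of code: list every distinct (n, l, m_l, m_s) combination the exclusion principle allows and count them per shell. (The function name `states_in_shell` is just an illustrative label, not anything from the original papers.)

```python
# Count electron states allowed by the Pauli exclusion principle: every
# electron in a shell must carry a unique (n, l, m_l, m_s) combination.
def states_in_shell(n):
    count = 0
    for l in range(n):                # l runs from 0 to n - 1
        for m_l in range(-l, l + 1):  # m_l runs from -l to +l
            count += 2                # m_s is either +1/2 or -1/2
    return count

for n in range(1, 5):
    print(n, states_in_shell(n))
# 1 2
# 2 8
# 3 18
# 4 32
```

These are the familiar 2n² closed-shell capacities (2, 8, 18, 32) that match the row lengths of the periodic table.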
Literally dozens of individuals, some mentioned throughout this module and others not, contributed to this process by providing theoretical insights or experimental results that helped shape our understanding of the atom. Many of these individuals worked in the same laboratories, collaborated together, or communicated with one another during the period, allowing the rapid transfer of ideas and refinements that would shape modern physics. All of these contributions can certainly be seen as an incremental building process, where one idea leads to the next, each adding to the refinement of thinking and understanding and advancing the science of the field.

The 20th century was a period rich in advancing our knowledge of quantum mechanics, shaping modern physics. Tracing developments during this time, this module covers ideas and refinements that built on Bohr's groundbreaking work in quantum theory. Contributions by many scientists highlight how theoretical insights and experimental results revolutionized our understanding of the atom. Concepts include the Schrödinger equation, Born's three-dimensional probability maps, the Heisenberg uncertainty principle, and electron spin.

Key Concepts

• Electrons, like light, have been shown to be wave-particles, exhibiting the behavior of both waves and particles.

• The Schrödinger equation describes how the wave function of a wave-particle changes with time, in a similar fashion to the way Newton's second law describes the motion of a classical particle. Using quantum numbers, one can write the wave function and find a solution to the equation that helps to define the most likely position of an electron within an atom.

• Max Born's interpretation of the Schrödinger equation allows for the construction of three-dimensional probability maps of where electrons may be found around an atom. These "maps" have come to be known as the s, p, d, and f orbitals.
• The Heisenberg uncertainty principle establishes that an electron's position and momentum cannot both be precisely known; instead, we can only calculate the statistical likelihood of an electron's location.

• The discovery of electron spin defines a fourth quantum number, independent of the electron orbital but unique to each electron. The Pauli exclusion principle states that no two electrons with the same spin can occupy the same orbital.

NGSS

• HS-C1.4, HS-C4.4, HS-PS1.A2, HS-PS2.B3

References

• Bohr, N. (1913). On the constitution of atoms and molecules. Philosophical Magazine (London), Series 6, 26, 1-25.
• Born, M. (1926). Zur Quantenmechanik der Stoßvorgänge. Zeitschrift für Physik, 37(12), 863-867.
• Davisson, C. J. (1928). Are electrons waves? Franklin Institute Journal, 205(5), 597-623.
• de Broglie, L. (1924). Recherches sur la théorie des quanta. Annales de Physique, 10(3), 22-128.
• Heisenberg, W. (1927). Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. Zeitschrift für Physik, 43(3-4), 172-198.
• Pauli, W. (1925). Über den Einfluss der Geschwindigkeitsabhängigkeit der Elektronenmasse auf den Zeeman-Effekt. Zeitschrift für Physik, 31(1), 373-385.
• Pauli, W. (1946). Remarks on the history of the exclusion principle. Science, New Series, 103(2669), 213-215.
• Schrödinger, E. (1926). Quantisierung als Eigenwertproblem. Annalen der Physik, 384(4), 273-376.
• Stoner, E. C. (1924). The distribution of electrons among atomic energy levels. The London, Edinburgh and Dublin Philosophical Magazine (6th series), 48(286), 719-736.

Adrian Dingle, B.Sc., Anthony Carpi, Ph.D. "Atomic Theory III" Visionlearning Vol. CHE-3 (6), 2015.
Science X Newsletter
Wednesday, Mar 31

Dear ymilog,

Here is your customized Science X Newsletter for March 31, 2021:

Spotlight Stories Headlines

A new strategy to enhance the performance of silicon heterojunction solar cells
Neuroscientists have identified a brain circuit that stops mice from mating with others that appear to be sick
Snakes, rats and cats: the trillion dollar invasive species problem
Researchers achieve world's first manipulation of antimatter by laser
Deep diamonds contain evidence of deep-Earth recycling processes
450-million-year-old sea creatures had a leg up on breathing
New study discovers ancient meteoritic impact over Antarctica 430,000 years ago
Scientists create the next generation of living robots
'Sweat sticker' diagnoses cystic fibrosis on the skin in real time
Indian astronomers probe X-ray pulsar 2S 1417–624
Small-molecule therapeutics: Big data dreams for tiny technologies
Quantum material's subtle spin behavior proves theoretical predictions
Decades of hunting detects footprint of cosmic ray superaccelerators in our galaxy
Greenland caves: Time travel to a warm Arctic
Scientists discover unique Cornish 'falgae'

Physics news

Researchers achieve world's first manipulation of antimatter by laser

Researchers with the CERN-based ALPHA collaboration have announced the world's first laser-based manipulation of antimatter, leveraging a made-in-Canada laser system to cool a sample of antimatter down to near absolute zero. The achievement, detailed in an article published today and featured on the cover of the journal Nature, will significantly alter the landscape of antimatter research and advance the next generation of experiments.
Quantum material's subtle spin behavior proves theoretical predictions Using complementary computing calculations and neutron scattering techniques, researchers from the Department of Energy's Oak Ridge and Lawrence Berkeley national laboratories and the University of California, Berkeley, discovered the existence of an elusive type of spin dynamics in a quantum mechanical system. Lab-made hexagonal diamonds stiffer than natural diamonds 'Agricomb' measures multiple gas emissions from... cows After the optical frequency comb made its debut as a ruler for light, spinoffs followed, including the astrocomb to measure starlight and a radar-like comb system to detect natural gas leaks. And now, researchers have unveiled the "agricomb" to measure, ahem, cow burps. Super-precise Fermilab experiment carefully analyzing the muon's magnetic moment Modern physics is full of the sort of twisty, puzzle-within-a-puzzle plots you'd find in a classic detective story: Both physicists and detectives must carefully separate important clues from unrelated information. Both physicists and detectives must sometimes push beyond the obvious explanation to fully reveal what's going on. New theory suggests uranium 'snowflakes' in white dwarfs could set off star-destroying explosion A pair of researchers with Indiana University and Illinois University, respectively, has developed a theory that suggests crystalizing uranium "snowflakes" deep inside white dwarfs could instigate an explosion large enough to destroy the star. In their paper published in the journal Physical Review Letters, C. J. Horowitz and M. E. Caplan describe their theory and what it could mean to astrophysical theories about white dwarfs and supernovas. Heat conduction record with tantalum nitride A thermos bottle has the task of preserving the temperature—but sometimes you want to achieve the opposite: Computer chips generate heat that must be dissipated as quickly as possible so that the chip is not destroyed. 
This requires special materials with particularly good heat conduction properties. A successful phonon calculation within the quantum Monte Carlo framework The focus and ultimate goal of computational research in materials science and condensed matter physics is to solve the Schrödinger equation—the fundamental equation describing how electrons behave inside matter—exactly (without resorting to simplifying approximations). While experiments can certainly provide interesting insights into a material's properties, it is often computations that reveal the underlying physical mechanism. However, computations need not rely on experimental data and can, in fact, be performed independently, an approach known as "ab initio calculations." The density functional theory (DFT) is a popular example of such an approach. Study shows promise of quantum computing using factory-made silicon chips The qubit is the building block of quantum computing, analogous to the bit in classical computers. To perform error-free calculations, quantum computers of the future are likely to need at least millions of qubits. The latest study, published in the journal PRX Quantum, suggests that these computers could be made with industrial-grade silicon chips using existing manufacturing processes, instead of adopting new manufacturing processes or even newly discovered particles. Development of a broadband mid-infrared source for remote sensing A research team of the National Institutes of Natural Sciences, National Institute for Fusion Science and Akita Prefectural University has successfully demonstrated a broadband mid-infrared (MIR) source with a simple configuration. This light source generates highly-stable broadband MIR beam at 2.5-3.7 μm wavelength range maintaining the brightness owing to its high-beam quality. 
Such a broadband MIR source facilitates a simplified environmental monitoring system by constructing a MIR fiber-optic sensor, which has the potential for industrial and medical applications. Astronomy and Space news Indian astronomers probe X-ray pulsar 2S 1417–624 Using the Neutron Star Interior Composition Explorer (NICER) instrument aboard the International Space Station (ISS) and NASA's Swift spacecraft, astronomers from India have investigated an X-ray pulsar known as 2S 1417–624. Results of the study, published March 24, provide important information about the evolution of different timing and spectral properties of this source during its recent outburst. Decades of hunting detects footprint of cosmic ray superaccelerators in our galaxy An enormous telescope complex in Tibet has captured the first evidence of ultrahigh-energy gamma rays spread across the Milky Way. The findings offer proof that undetected starry accelerators churn out cosmic rays, which have floated around our galaxy for millions of years. The research is to be published in the journal Physical Review Letters on Monday, April 5. NASA tests mixed reality, scientific know-how and mission operations for exploration Mixed reality technologies, like virtual reality headsets or augmented reality apps, aren't just for entertainment—they can also help make discoveries on other worlds like the Moon and Mars. By traveling on Earth to extreme environments—from Mars-like lava fields in Hawaii to underwater hydrothermal vents—similar to destinations on other worlds, NASA scientists have tested out technologies and tools to gain insight into how they can be used to make valuable contributions to science. Two strange planets: Neptune and Uranus remain mysterious after new findings Uranus and Neptune both have a completely skewed magnetic field, perhaps due to the planets' special inner structures. But new experiments by ETH Zurich researchers now show that the mystery remains unsolved.
New study sows doubt about the composition of 70 percent of our universe Until now, researchers have believed that dark energy accounted for nearly 70 percent of the ever-accelerating, expanding universe. First X-rays from Uranus discovered Astronomers have detected X-rays from Uranus for the first time, using NASA's Chandra X-ray Observatory. This result may help scientists learn more about this enigmatic ice giant planet in our solar system. US, China consulted on safety as their crafts headed to Mars As their respective spacecraft headed to Mars, China and the U.S. held consultations earlier this year in a somewhat unusual series of exchanges between the rivals. NASA's Webb Telescope General Observer scientific programs selected Mission officials for NASA's James Webb Space Telescope have announced the selection of the General Observer programs for the telescope's first year of science, known as Cycle 1. These specific programs will provide the worldwide astronomical community with one of the first extensive opportunities to investigate scientific targets with Webb. Venus plots a comeback In terms of space exploration, Mars is all the rage these days. This has left our closest neighbor, Venus—previously the most attractive planet to study because of its proximity and similar atmosphere to Earth—in the lurch. A new article in Chemical & Engineering News, the weekly newsmagazine of the American Chemical Society, highlights how scientists and space agencies are turning their eyes back toward Venus to learn more about its atmosphere and geology. Technology news A new strategy to enhance the performance of silicon heterojunction solar cells Crystalline silicon (c-Si) solar cells are among the most promising solar technologies on the market. These solar cells have numerous advantageous properties, including a nearly optimum bandgap, high efficiency and stability. Notably, they can also be fabricated using raw materials that are widely available and easy to attain.
Scientists create the next generation of living robots The global race to develop 'green' hydrogen It's seen as the missing link in the race for carbon-neutrality: "green" hydrogen produced without fossil fuel energy is a popular buzzword in competing press releases and investment plans across the globe. Roboreptile climbs like a real lizard While a Mars rover can explore where no person has gone before, a smaller robot at the University of the Sunshine Coast in Australia could climb to new heights by mimicking the movements of a lizard. Scientists design 'smart' device to harvest daylight Thermal power nanogenerator created without solid moving parts As environmental and energy crises become increasingly more common occurrences around the world, a thermal energy harvester capable of converting abundant thermal energy—such as solar radiation, waste heat, combustion of biomass, or geothermal energy—into mechanical energy appears to be a promising energy strategy to mitigate many crises. Assessing how much data iOS and Android share with Apple and Google The School of Computer Science and Statistics at Trinity College Dublin, Ireland, has begun investigating how much user data iOS and Android send to Apple and Google, respectively. Overall, they discovered that, even when the devices are idle or minimally configured, each shares data with its respective company on average every 4.5 minutes. Even without a brain, these metal-eating robots can search for food When it comes to powering mobile robots, batteries present a problematic paradox: the more energy they contain, the more they weigh, and thus the more energy the robot needs to move. Energy harvesters, like solar panels, might work for some applications, but they don't deliver power quickly or consistently enough for sustained travel.
Volkswagen hoaxes media with fake news release as a joke Volkswagen of America issued false statements this week saying it would change its brand name to "Voltswagen," as a way to stress its commitment to electric vehicles, only to reverse course Tuesday and admit that the supposed name change was just a joke. A hydrogen future for planes, trains and factories Hydrogen could potentially power trains, planes, trucks and factories in the future, helping the world rid itself of harmful emissions. A physical party to prove you're a real virtual person The ease of creating fake virtual identities plays an important role in shaping the way information—and misinformation—circulates online. Could 'pseudonym' parties, that would verify proof of personhood not proof of identity, resolve this tension? New AI tool 85% accurate for recognizing and classifying wind turbine blade defects Demand for wind power has grown, and with it the need to inspect turbine blades and identify defects that may impact operation efficiency. ESAIL captures 2 million messages from ships at sea The ESAIL microsatellite for making the seas safer has picked up more than two million messages from 70 000 ships in a single day. Facebook's new tool lets users control what they see, share on their News Feeds Facebook is launching new updates that allows users to control their News Feed algorithm, according to a statement by the tech giant. Tesla's range put to the test Edmunds' test team recently published the results of its real-world range testing for electric vehicles. Notably, every Tesla the team tested in 2020 came up short of matching the EPA's range estimate. Almost all other EVs Edmunds tested met or exceeded those estimates. High production rates for fuel cells To create a sustainable road traffic system, hundreds of thousands of fuel cells will be needed for hydrogen-powered cars in the future. Until now, though, fuel cell production has been complex and too slow. 
The Fraunhofer team is therefore developing a continuous production line that will be able to process fuel cell components in cycles lasting just seconds. The pilot line is set to be presented at the Hannover Messe Digital Edition from April 12 to April 16, 2021. Smart algorithms make packaging of meat products more efficient In supermarkets you can find a large variety of poultry products, all conveniently packaged in fixed-weight quantities. However, poultry processing plants face numerous challenges due to these fixed-weight batches, growing throughput requirements and small profit margins. To assist the poultry processing plant industry, TU/e-researcher Kay Peeters has developed new production control and planning strategies that reduce operational costs. Will you be paying with a Visa, Mastercard or Bitcoin? Spotify acquires Clubhouse competitor Betty Labs as live audio popularity grows Spotify is entering the live audio market after it announced Tuesday its acquisition of Betty Labs, the creators of the live audio app Locker Room. Advocacy groups urge FTC to be tougher on Google with protecting kids privacy on apps Two advocacy groups want the Federal Trade Commission to take a tougher stance against Google, accusing its app store of recommending apps that transmit kids' personal information such as location without their parents' consent in violation of a 1998 law that protects children online. How many countries are ready for nuclear-powered electricity? As demand for low-carbon electricity rises around the world, nuclear power offers a promising solution. But how many countries are good candidates for nuclear energy development? New OnePlus models take the flagship phone game up a notch There isn't much in the tech world that makes me happier than a day when we get new flagship phones. 
New Hampshire coastal recreationists support offshore wind As the Biden administration announces a plan to expand the development of offshore wind energy development (OWD) along the East Coast, research from the University of New Hampshire shows significant support from an unlikely group, coastal recreation visitors. From boat enthusiasts to anglers, researchers found surprisingly widespread support with close to 77% of coastal recreation visitors supporting potential OWD along the N.H. Seacoast. Microsoft wins $22 billion deal making headsets for US Army Microsoft won a nearly $22 billion contract to supply U.S. Army combat troops with its augmented reality headsets. Japan's Hitachi acquires GlobalLogic for $9.6 billion Hitachi Ltd. is buying U.S. digital engineering services company GlobalLogic Inc. for $9.6 billion, the Japanese industrial, electronic and construction conglomerate said Wednesday. Deliveroo skids on stock market debut Deliveroo skidded on its stock market launch Wednesday, with its share price slumping by almost a third in value after the app-driven meals delivery company faced criticism from institutional investors over its treatment of self-employed riders. Counting begins in vote on first Amazon labor union Counting of votes cast by Amazon employees at an Alabama warehouse began Tuesday to determine whether it would become the first union shop at the e-commerce colossus. Huawei posts record profit but US pressure, pandemic hit revenue Chinese telecom giant Huawei said Wednesday it achieved the latest in a string of record profits last year, but revenue growth slowed sharply because of the pandemic and tightening US pressure that has pushed it into new business lines to survive. Sports cards have gone virtual, and in a big way Maybe the Luka Doncic rookie basketball card that recently sold at auction for a record $4.6 million was a bit rich for your blood. 
Perhaps you'd be interested in a more affordable alternative—say, a virtual card of the Dallas Mavericks forward currently listed for a mere $150,000? Delta joins other US airlines in ending empty middle seats Delta Air Lines, the last U.S. airline still blocking middle seats, will end that policy in May as air travel recovers and more people become vaccinated against COVID-19.
Spin-wave oscillations in gradient ferromagnets: Exactly solvable models
Ignatchenko, V. A.; Tsikalov, D. S.
Journal of Magnetism and Magnetic Materials. https://doi.org/10.1016/j.jmmm.2020.166643

The method of searching for the profiles of the gradient dependence of the material parameters of matter on the coordinates that allow the exact solution of wave equations, developed previously for electromagnetic and elastic waves, was generalized to spin waves in gradient ferromagnets. Such profiles were found and exact solutions of the wave equations for a ferromagnet with uniaxial magnetic anisotropy β(z) or exchange α(z) varying in space were obtained. The obtained solutions were used to develop the theory of spin-wave resonance in gradient thin magnetic films. The dependences of the eigenfunctions mn(z), the frequencies of the discrete spectrum ωn, and the high-frequency susceptibility χn on the number of spectral levels n were found. The cardinal differences between the spin-wave spectra of films with gradients β(z) and α(z) are shown. The variable anisotropy β(z) changes the shape of the energy potential of the magnetic film and leads to a change in the discrete spectrum for frequencies ωn(n) lower than the frequency of the gradient potential well or potential barrier ωc. The variable exchange α(z) does not change the shape of the energy potential. Spin-wave oscillations occur in a rectangular potential well created by the surfaces of the film, regardless of profile α(z). The discrete frequency spectrum ωn(n) is quadratic in n, or has negligible deviations from the quadratic, for all n. An analytical expression for the effective exchange parameter is obtained. Exact solutions of the Schrödinger equation with spatially dependent effective mass m(z) were found for the profile of m(z) inverse to the function of α(z).
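As an illustration of the quadratic discrete spectrum described in the abstract, consider the simplest limiting case: a uniform film, where standing spin waves in the rectangular potential well formed by the film surfaces have wavenumbers k_n = nπ/d and frequencies ω_n = ω_0 + η·k_n². The parameter values below are arbitrary placeholders for illustration, not numbers taken from the paper:

```python
import numpy as np

# Standing spin-wave modes in a uniform film of thickness d: the film
# surfaces form a rectangular potential well, selecting k_n = n*pi/d,
# and the exchange dispersion gives omega_n = omega_0 + eta * k_n**2.
# omega_0, eta and d are arbitrary illustrative values.
omega_0 = 1.0   # uniform-precession frequency (arbitrary units)
eta = 1e-3      # effective exchange parameter (arbitrary units)
d = 1.0         # film thickness (arbitrary units)

n = np.arange(1, 6)
omega_n = omega_0 + eta * (n * np.pi / d) ** 2

# The shifts from omega_0 scale as n**2 -- the quadratic spectrum omega_n(n).
shifts = omega_n - omega_0
print(np.round(shifts / shifts[0]).astype(int))  # [ 1  4  9 16 25]
```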
Frontiers in Optics: T,W,Th One of the things that happens to me as the years go by is that I spend less time at meetings listening to talks and more time talking to friends and colleagues and planning new research collaborations.  From discussions with said colleagues, I get the feeling that this shift in emphasis is not unique to me.  (I suppose this is why young professionals make better conference bloggers.) So for my discussion of the last three days of the conference, let me just point out a few general observations that I had while attending. First, there were some unconventional and very interesting talks at the conference.  On Tuesday, I attended a session on “Rogue waves and related phenomena.”  Rogue waves, also known as “freak waves”, are highly dangerous waves which can arise in open ocean, often against the prevailing winds and currents and in the absence of storms, and can attain heights of 100 ft (extreme ocean storm waves typically are no higher than 50 ft).  These waves were only positively confirmed by science in 1995, though mariners had spoken of them for at least a hundred years.  Rogue waves can sink even the largest ship in minutes, and are now thought to occur with some regularity. Peter Janssen of the European Center for Medium-Range Weather Forecasts discussed modeling used to estimate the likelihood of rogue waves.  He noted in his talk that very little photographic evidence exists of rogue waves; I had a chance to ask him about this during the meeting, and he pointed out that buoy readings provide most of the data relating to rogue wave behavior. What is the connection to optics?  Rogue waves are modeled by the nonlinear Schrödinger equation, which also can be used to describe nonlinear effects in optical systems.  A study of one system therefore gives some insight into the other. Plasmonics and metamaterials research remains quite popular; there were no fewer than 10 sessions on the topics.  
Plasmonics sessions seemed to be much more applications oriented (“plasmonic emitters and resonators”, “plasmonic sensors”, “plasmonic waveguides and devices”), which suggested that the field has matured enough that we may start to see some really interesting technological output related to plasmons in the near future. There were also a surprising number of sessions on X-ray generation and imaging, somewhat unusual for an “optics” meeting!  It seems that new and improved methods of doing imaging with X-rays are leading to a resurgence in popularity of the subject. Two other imaging concepts seemed to be very “hot” at this meeting, and are worth saying a few more words about: compressive sensing and “ghost” imaging.  I knew relatively little about either topic before going to the meeting, an oversight I’m now working to correct! Compressive sensing refers to the measurement of an image at a resolution much higher than the resolution of the measuring device.  For example, a “compressive imaging camera” might be able to record a 200 by 200 pixel image using only a 100 by 100 pixel detector. The genesis of this idea comes from image compression, such as the jpeg compression done by digital cameras: my Kodak Z1012IS camera, for instance, has 10 million pixels, but produces an image which is only 2 million bytes in size.  Since a single color pixel requires at least three bytes of storage, this suggests that the stored image is roughly a factor of fifteen smaller than the amount of data actually recorded by the camera.  How is this possible?  Most images contain a very large amount of redundancy in them: as a crude example, if I took a picture of a perfectly white wall (or a polar bear in a snowstorm), my image could be characterized by a single RGB color: the particular shade of white of the wall.  
Since most scenes we photograph have some amount of redundancy (forests are green, the sky is blue, goth clubs are black), a standard camera is typically recording more information than it needs. The philosophy of compressive sensing (or compressive imaging) is to design an optical system that is, in a sense, optimally efficient.  Instead of measuring too much data and throwing out the redundant information, one measures the minimal amount of data and uses signal processing techniques to reconstruct an image of much higher resolution.  Strategies for doing so seem to involve a combination of developing novel optical devices which collect data in unusual ways and computational techniques to analyze said data. “Ghost” imaging is a technique that is in some sense an extreme version of compressive sensing: the main detector has only a single pixel!  The crux of the technique is the comparison of the intensities of two optical beams, one of which has interacted with the object to be imaged.  The original version of the experiment utilized the quantum correlations associated with entangled photons, as shown in the schematic below: Light from a thermal source is split into two paths by a beam splitter, one of which illuminates the object to be imaged and the other of which illuminates a CCD camera.  It should be noted that the CCD camera does not have any view of the object.  Light scattered from the object is recorded by a “bucket” detector, and signals from this “bucket” are correlated with photons arriving at the CCD camera.  By only keeping CCD signals which are correlated with the “bucket” signals, one remarkably finds that an image of the object can be reconstructed on the CCD camera! I’ll come back and describe “ghost” imaging in more detail in a future post.  It should be noted, however, that researchers have determined that quantum effects are not strictly necessary, and a classical version of ghost imaging has been demonstrated.  
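A toy numerical version of the classical (thermal-light) variant is easy to sketch: random reference patterns, a single-pixel "bucket" detector, and a correlation between the two. Everything below (the pattern counts, the slit-shaped object, the variable names) is an invented illustration, not a reproduction of any experiment discussed at the meeting:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D classical ghost imaging: random speckle patterns I_k(x) illuminate an
# object T(x); a single-pixel "bucket" detector records B_k = sum_x I_k(x)*T(x).
npix, nshots = 64, 20000
T = np.zeros(npix)
T[20:30] = 1.0                    # a simple slit-like transmission object

I = rng.random((nshots, npix))    # reference patterns (these never "see" T)
B = I @ T                         # bucket readings, one number per pattern

# Correlate the bucket signal with the reference arm:
# G(x) = <B I(x)> - <B><I(x)> is proportional to T(x).
G = (B[:, None] * I).mean(axis=0) - B.mean() * I.mean(axis=0)
G /= G.max()

# The reconstruction peaks on the slit and hovers near zero elsewhere.
print(G[20:30].mean() > 5 * np.abs(G[:10]).mean())  # True
```

The key point the sketch makes concrete: the reference patterns alone carry no information about the object, and only the correlation with the bucket signal recovers the image.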
The consequences and applications of such imaging strategies are not immediately obvious to me, but it is a very clever idea. The OSA meeting seemed rather quiet this year, overall.  I suspect that attendance was down due to the ongoing financial crisis.  I still had a great time and had a lot of productive discussions, but here’s hoping that next year’s meeting, in Rochester, will be back up to speed. P.S. I should give a shout-out to Maceió Brazilian Steakhouse, which I can highly recommend if you happen to be in downtown San Jose!  There’s one thing on the menu: the rotational dinner, which involves the servers bringing around skewers of 14 different types of meat until you beg them to stop!  Nancy, the proprietor, is a very nice lady and made our group feel right at home. This entry was posted in Optics, Science news. Bookmark the permalink. 4 Responses to Frontiers in Optics: T,W,Th • Hi Ori, Thanks for the comment! As it turns out, I was at your talk – very nice work, and very nicely presented! I’ll hopefully come back and take a closer look at the research and blog about it in the near future. 1. Wade Walker says: Interesting! The October 2009 issue of Physics Today has a mini-article on ghost imaging, but I didn’t make the connection with the Hanbury Brown and Twiss effect until I saw your diagram, which looks similar to the usual HBT diagram. The two effects look very similar to my untrained eye, but there seems to be some controversy in the references I could find on the subject. Did the presenters have anything to say about the subject? • I didn’t hear of any explicit controversy from the talks I attended, but my impression is that there was an early argument about whether or not there was something inherently quantum-mechanical about the effect. It is now clear, and has been demonstrated, that one doesn’t need quantum mechanics to do a form of ghost imaging, but it is not clear (at least to me) whether quantum effects add something to the mix.
This sort of controversy is very similar to that which appeared when the HBT experiment was first reported. Researchers attempted to reproduce the HBT experiment with laser light, with negative results. It turns out that natural light is necessary to get meaningful HBT data; it seems that the same is true for ghost imaging. I’ll try and sort through this in more detail in a future post; now I’m curious…
6.5: Various Approaches to Electron Correlation There are numerous procedures currently in use for determining the best Born-Oppenheimer electronic wave function, which is usually expressed in the form: \[\psi = \sum_I C_I \Phi_I,\] where \(\Phi_I\) is a spin- and space-symmetry-adapted configuration state function (CSF) that consists of one or more determinants \(| \phi_{I1}\phi_{I2}\phi_{I3}\cdots \phi_{IN}|\) combined to produce the desired symmetry. In all such wave functions, there are two kinds of parameters that need to be determined: the CI coefficients \(C_I\) and the LCAO-MO coefficients describing the \(\phi_{Ik}\) in terms of the AO basis functions. The most commonly employed methods used to determine these parameters include: The CI Method In this approach, the LCAO-MO coefficients are determined first, usually via a single-configuration HF SCF calculation. The CI coefficients are subsequently determined by making the expectation value \(\langle \psi | H | \psi \rangle / \langle \psi | \psi \rangle\) variationally stationary with \(\psi\) chosen to be of the form \[\psi = \sum_I C_I \Phi_I.\] As with all such linear variational problems, this generates a matrix eigenvalue equation \[\sum_J \langle\Phi_I|H|\Phi_J\rangle C_J=EC_I\] to be solved for the optimum {\(C_I\)} coefficients and for the optimal energy \(E\). The CI wave function is most commonly constructed from spin- and spatial-symmetry adapted combinations of determinants called configuration state functions (CSFs) \(\Phi_J\) that include: 1. The so-called reference CSF that is the SCF wave function used to generate the molecular orbitals \(\phi_i\). 2. CSFs generated by carrying out single, double, triple, etc. level excitations (i.e., orbital replacements) relative to the reference CSF.
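The linear variational step can be made concrete with a tiny numerical example. In the sketch below, an arbitrary symmetric 4×4 matrix stands in for the CSF-basis Hamiltonian matrix elements \(\langle\Phi_I|H|\Phi_J\rangle\) (the numbers are invented for illustration, not taken from any real molecule); diagonalizing it yields the optimal {\(C_I\)} coefficients and energies:

```python
import numpy as np

# Schematic CI step: given H[I,J] = <Phi_I|H|Phi_J> over a small, made-up
# CSF basis, diagonalization solves sum_J H[I,J] C[J] = E C[I] for the
# variationally optimal CI coefficients and energies.

H = np.array([[-2.0,  0.3,  0.1,  0.0],
              [ 0.3, -1.0,  0.2,  0.1],
              [ 0.1,  0.2, -0.5,  0.3],
              [ 0.0,  0.1,  0.3,  0.2]])

E, C = np.linalg.eigh(H)       # eigenvalues ascending, eigenvectors in columns
ground_energy = E[0]           # lowest variational energy
ground_coeffs = C[:, 0]        # CI coefficients {C_I} of the ground state
print("E0 =", ground_energy)
print("dominant CSF weight:", ground_coeffs[0] ** 2)
```

In a realistic calculation the reference (SCF) CSF typically carries the dominant weight \(C_0^2\), with the excited CSFs contributing the correlation corrections.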
CI wave functions limited to include contributions through various levels of excitation are denoted S (singly), D (doubly), SD (singly and doubly), SDT (singly, doubly, and triply) excited. The orbitals from which electrons are removed can be restricted to focus attention on correlations among certain orbitals. For example, if excitations out of core orbitals are excluded, one computes a total energy that contains no core correlation energy. The number of CSFs included in the CI calculation can be large. CI wave functions including 5,000 to 50,000 CSFs are routine, and functions with one to several billion CSFs are within the realm of practicality. The need for such large CSF expansions can be appreciated by considering (i) that each electron pair requires at least two CSFs to form the polarized orbital pairs discussed earlier in this Chapter, (ii) there are of the order of \(\dfrac{N(N-1)}{2} = X\) electron pairs for a molecule containing \(N\) electrons, hence (iii) the number of terms in the CI wave function scales as \(2^X\). For a molecule containing ten electrons, there could be \(2^{45} = 3.5 \times 10^{13}\) terms in the CI expansion. This may be an overestimate of the number of CSFs needed, but it demonstrates how rapidly the number of CSFs can grow with the number of electrons. The Hamiltonian matrix elements \(H_{I,J}\) between pairs of CSFs are, in practice, evaluated in terms of one- and two-electron integrals over the molecular orbitals. Prior to forming the \(H_{I,J}\) matrix elements, the one- and two-electron integrals, which can be computed only for the atomic (e.g., STO or GTO) basis, must be transformed to the molecular orbital basis. This transformation step requires computer resources proportional to the fifth power of the number of basis functions, and thus is one of the more troublesome steps in most configuration interaction (and most other correlated) calculations.
To transform the two-electron integrals \(\langle \chi_a(r)\chi_b(r')|\dfrac{1}{|r-r'|}|\chi_c(r)\chi_d(r')\rangle\) from this AO basis to the MO basis, one proceeds as follows: 1. First one utilizes the original AO-based integrals to form a partially transformed set of integrals \[\langle \chi_a(r)\chi_b(r')|\dfrac{1}{|r-r'|}|\chi_c(r)\phi_l(r')\rangle = \sum_d C_{l,d} \langle \chi_a(r)\chi_b(r')|\dfrac{1}{|r-r'|}|\chi_c(r)\chi_d(r')\rangle.\] This step requires of the order of \(M^5\) operations. 2. Next one takes the list \(\langle \chi_a(r)\chi_b(r')|\dfrac{1}{|r-r'|}|\chi_c(r)\phi_l(r')\rangle\) and carries out another so-called one-index transformation \[\langle \chi_a(r)\chi_b(r')|\dfrac{1}{|r-r'|}|\phi_k(r)\phi_l(r')\rangle = \sum_c C_{k,c} \langle \chi_a(r)\chi_b(r')|\dfrac{1}{|r-r'|}|\chi_c(r)\phi_l(r')\rangle.\] 3. This list \(\langle \chi_a(r)\chi_b(r')|\dfrac{1}{|r-r'|}|\phi_k(r)\phi_l(r')\rangle\) is then subjected to another one-index transformation to generate \(\langle \chi_a(r)\phi_j(r')|\dfrac{1}{|r-r'|}|\phi_k(r)\phi_l(r')\rangle\), after which 4. \(\langle \chi_a(r)\phi_j(r')|\dfrac{1}{|r-r'|}|\phi_k(r)\phi_l(r')\rangle\) is subjected to the fourth one-index transformation to form the final MO-based integral list \(\langle \phi_i(r)\phi_j(r')|\dfrac{1}{|r-r'|}|\phi_k(r)\phi_l(r')\rangle\). In total, these four transformation steps require \(4M^5\) computer operations. A variant of the CI method that is sometimes used is called the multi-configurational self-consistent field (MCSCF) method. To derive the working equations of this approach, one minimizes the expectation value of the Hamiltonian for a trial wave function consisting of a linear combination of CSFs In carrying out this minimization process, one varies both the linear {\(C_I\)} expansion coefficients and the LCAO-MO coefficients {\(C_{J,\mu}\)} describing those spin-orbitals that appear in any of the CSFs {\(\Phi_I\)}. This produces two sets of equations that need to be solved: 1. 
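The four one-index (quarter) transformations can be written compactly with numpy's einsum. In the sketch below, ao_ints and Cmat are random placeholders for the AO-basis two-electron integrals and the LCAO-MO coefficients (so the values are meaningless; only the structure of the contractions is being illustrated). Each einsum contracts one AO index at a cost of order \(M^5\), giving \(4M^5\) work in total, versus \(M^8\) for a single direct four-index contraction:

```python
import numpy as np

# Four quarter-transformations of the two-electron integrals, AO -> MO basis.
# ao_ints[a,b,c,d] stands in for <chi_a chi_b | 1/|r-r'| | chi_c chi_d>, and
# Cmat[a,i] for the LCAO-MO coefficients; both are random stand-ins here.

M = 6                                   # number of basis functions (toy size)
rng = np.random.default_rng(2)
ao_ints = rng.normal(size=(M, M, M, M))
Cmat = rng.normal(size=(M, M))

step1 = np.einsum("abcd,dl->abcl", ao_ints, Cmat)   # transform 4th index, ~M^5
step2 = np.einsum("abcl,ck->abkl", step1, Cmat)     # transform 3rd index, ~M^5
step3 = np.einsum("abkl,bj->ajkl", step2, Cmat)     # transform 2nd index, ~M^5
mo_ints = np.einsum("ajkl,ai->ijkl", step3, Cmat)   # transform 1st index, ~M^5

# Same result as one (much more expensive, ~M^8) direct contraction:
direct = np.einsum("abcd,ai,bj,ck,dl->ijkl", ao_ints, Cmat, Cmat, Cmat, Cmat,
                   optimize=False)
print("max difference:", np.abs(mo_ints - direct).max())
```

The stepwise and direct routes agree to machine precision; the whole point of the quarter-transformation trick is that the stepwise route scales five powers of \(M\) better.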
A matrix eigenvalue equation of the same form as arises in the CI method, and 2. equations that look very much like the HF equations \[\sum_\mu \langle\chi_\nu |h_e| \chi_\mu\rangle C_{J,\mu} = \epsilon_J \sum_\mu \langle\chi_\nu|\chi_\mu\rangle C_{J,\mu} \] but in which the \(h_e\) matrix element is \[\langle\chi_\nu| h_e| \chi_\mu\rangle = \langle\chi_\nu| -\dfrac{\hbar^2}{2m} \nabla^2 |\chi_\mu\rangle + \langle\chi_\nu| -\frac{Ze^2}{r} |\chi_\mu\rangle\] \[+ \sum_{\eta,\gamma} \Gamma_{\eta,\gamma} [\langle\chi_\nu(r) \chi_\eta(r') |\frac{e^2}{|r-r'|} | \chi_\mu(r) \chi_\gamma(r')\rangle - \langle\chi_\nu(r) \chi_\eta(r') |\frac{e^2}{|r-r'|} | \chi_\gamma(r) \chi_\mu (r')\rangle].\] Here \(\Gamma_{\eta,\gamma}\) replaces the sum \(\sum_K C_{K,\eta} C_{K,\gamma}\) that appears in the HF equations, with \(\Gamma_{\eta,\gamma}\) depending on both the LCAO-MO coefficients {\(C_{K,\eta}\)} of the spin-orbitals and on the {\(C_I\)} expansion coefficients. These equations are solved through a self-consistent process in which initial {\(C_{K,\eta}\)} coefficients are used to form the Hamiltonian matrix and solve for the {\(C_I\)} coefficients, after which the \(\Gamma_{\eta,\gamma}\) can be determined and the HF-like equations solved for a new set of {\(C_{K,\eta}\)} coefficients, and so on until convergence is reached. Perturbation Theory This method uses the single-configuration SCF process to determine a set of orbitals {\(\phi_i\)}. Then, with a zeroth-order Hamiltonian equal to the sum of the \(N\) electrons’ Fock operators \(H_0 = \sum_{i=1}^N h_e(i)\), perturbation theory is used to determine the CI amplitudes for the other CSFs. The Møller-Plesset perturbation (MPPT) procedure is a special case in which the above sum of Fock operators is used to define \(H_0\). The amplitude for the reference CSF is taken as unity and the other CSFs' amplitudes are determined by using \(H-H_0\) as the perturbation.
This perturbation is the difference between the true Coulomb interactions among the electrons and the mean-field approximation to those interactions: \[V=H-H^{(0)}=\frac{1}{2}\sum_{i \ne j}^N \frac{1}{r_{i,j}}-\sum_{k=1}^N[J_k(r)-K_k(r)]\] where \(J_k\) and \(K_k\) are the Coulomb and exchange operators defined earlier in this Chapter and the sum over \(k\) runs over the \(N\) spin-orbitals that are occupied in the Hartree-Fock wave function that forms the zeroth-order approximation to \(\psi\). In the MPPT method, once the reference CSF is chosen and the SCF orbitals belonging to this CSF are determined, the wave function \(\psi\) and energy \(E\) are determined in an order-by-order manner as is the case in the RSPT discussed in Chapter 3. In fact, MPPT is just RSPT with the above fluctuation potential as the perturbation. The perturbation equations determine what CSFs to include through any particular order. This is one of the primary strengths of this technique; it does not require one to make further choices, in contrast to the CI treatment where one needs to choose which CSFs to include. For example, the first-order wave function correction \(\psi_1\) is: \[\psi_1 = - \sum_{i < j,m < n} \dfrac{\langle i,j |\dfrac{1}{r_{12}}| m,n \rangle -\langle i,j |\dfrac{1}{r_{12}}| n,m \rangle}{ \varepsilon_m-\varepsilon_i +\varepsilon_n-\varepsilon_j} | \Phi_{i,j}^{m,n} \rangle,\] where the SCF orbital energies are denoted \(\varepsilon_k\) and \(\Phi_{i,j}^{m,n}\) represents a CSF that is doubly excited (\(\phi_i\) and \(\phi_j\) are replaced by \(\phi_m\) and \(\phi_n\)) relative to the SCF wave function \(\Phi\). The denominators \([ \varepsilon_m-\varepsilon_i +\varepsilon_n-\varepsilon_j]\) arise from \(E_0-E_k^0\) because each of these zeroth-order energies is the sum of the orbital energies for all spin-orbitals occupied in the corresponding CSF. The excited CSFs \(\Phi_{i,j}^{m,n}\) are the zeroth-order wave functions other than the reference CSF.
Only doubly excited CSFs contribute to the first-order wave function; the fact that the contributions from singly excited configurations vanish in \(\psi_1\) is known as the Brillouin theorem. The Brillouin theorem can be proven by considering Hamiltonian matrix elements coupling the reference CSF \(\Phi\) to singly-excited CSFs \(\Phi_i^m\). The rules for evaluating all such matrix elements are called Slater-Condon rules and are given later in this Chapter. If you don’t know them, this would be a good time to go read the subsection on these rules before returning here. From the Slater-Condon rules, we know that the matrix elements in question are given by \[\langle \Phi|H|\Phi_i^m\rangle= \langle \phi_i(r)| -\frac{1}{2}\nabla^2 - \sum_a \dfrac{Z_a}{|r-R_a|} |\phi_m(r)\rangle + \sum_{j=1(\ne i,m)}^N \langle \phi_i(r) \phi_j(r')|\dfrac{1-P_{r,r'}}{|r-r'|}| \phi_m(r) \phi_j(r')\rangle.\] Here, the factor \(P_{r,r'}\) simply permutes the coordinates \(r\) and \(r'\) to generate the exchange integral. The sum of two-electron integrals on the right-hand side above can be extended to include the terms arising from \(j =i\) because that term vanishes (its Coulomb and exchange parts cancel). As a result, the entire right-hand side can be seen to reduce to the matrix element of the Fock operator \(h_{\rm HF}(r)\): \[\langle \Phi|H|\Phi_i^m\rangle=\langle \phi_i|h_{\rm HF}(r)|\phi_m(r)\rangle=\varepsilon_m\delta_{i,m}=0.\] The matrix elements vanish because the spin-orbitals are eigenfunctions of \(h_{\rm HF}(r)\) and are orthogonal to each other. The MPPT energy \(E\) is given through second order as in RSPT by \[E = E_{SCF} - \sum_{i < j,m < n} \frac{| \langle i,j | \dfrac{1}{r_{12}} | m,n \rangle -\langle i,j | \dfrac{1}{r_{12}} | n,m \rangle |^2}{ \varepsilon_m-\varepsilon_i +\varepsilon_n-\varepsilon_j }\] and again only contains contributions from the doubly excited CSFs.
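The second-order energy expression translates directly into a short script. In the sketch below the MO-basis integrals \(\langle i,j|\frac{1}{r_{12}}|m,n\rangle\) and the orbital energies are random stand-ins (so the resulting number is meaningless); the point is only to show the structure of the sum over occupied pairs \(i<j\) and virtual pairs \(m<n\):

```python
import numpy as np

# Direct transcription of the MP2 (second-order MPPT) energy formula above.
# v[i,j,m,n] stands in for <i,j|1/r12|m,n>; eps_occ/eps_vir are stand-in
# occupied and virtual spin-orbital energies.  Random numbers, illustrative only.

rng = np.random.default_rng(3)
nocc, nvirt = 4, 6
eps_occ = np.sort(rng.uniform(-2.0, -0.5, nocc))   # occupied orbital energies
eps_vir = np.sort(rng.uniform(0.2, 2.0, nvirt))    # virtual orbital energies
v = rng.normal(scale=0.05, size=(nocc, nocc, nvirt, nvirt))

e2 = 0.0
for i in range(nocc):
    for j in range(i + 1, nocc):            # occupied pairs, i < j
        for m in range(nvirt):
            for n in range(m + 1, nvirt):   # virtual pairs, m < n
                num = v[i, j, m, n] - v[i, j, n, m]   # direct minus exchange
                den = eps_vir[m] - eps_occ[i] + eps_vir[n] - eps_occ[j]
                e2 -= num ** 2 / den
print("second-order energy correction:", e2)
```

Note that the correction is necessarily negative here: every denominator \(\varepsilon_m-\varepsilon_i+\varepsilon_n-\varepsilon_j\) is positive when the virtual orbitals lie above the occupied ones, and each numerator is a square.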
Both \(\psi\) and \(E\) are expressed in terms of two-electron integrals \(\langle i,j | \frac{1}{r_{12}} | m,n \rangle\) (that are sometimes denoted \(\langle i,j|k,l\rangle\)) coupling the virtual spin-orbitals \(\phi_m\) and \(\phi_n\) to the spin-orbitals from which electrons were excited \(\phi_i\) and \(\phi_j\) as well as the orbital energy differences \([ \varepsilon_m-\varepsilon_i +\varepsilon_n-\varepsilon_j ]\) accompanying such excitations. Clearly, major contributions to the correlation energy are made by double excitations into virtual orbitals \(\phi_m \phi_n\) with large \(\langle i,j | \frac{1}{r_{12}} | m,n \rangle\) integrals and small orbital energy gaps \([\varepsilon_m-\varepsilon_i +\varepsilon_n-\varepsilon_j]\). In higher order corrections, contributions from CSFs that are singly, triply, etc. excited relative to the HF reference function \(\Phi\) appear, and additional contributions from the doubly excited CSFs also enter. The various orders of MPPT are usually denoted MPn (e.g., MP2 means second-order MPPT). The Coupled-Cluster Method As noted above, when the Hartree-Fock wave function \(\psi_0\) is used as the zeroth-order starting point in a perturbation expansion, the first (and presumably most important) corrections to this function are the doubly-excited determinants. In early studies of CI treatments of electron correlation, it was observed that double excitations had the largest \(C_J\) coefficients (after the SCF wave function, which has the very largest \(C_J\)). Moreover, in CI studies that included single, double, triple, and quadruple level excitations relative to the dominant SCF determinant, it was observed that quadruple excitations had the next largest \(C_J\) amplitudes after the double excitations.
And, very importantly, it was observed that the amplitudes \(C_{abcd}^{mnpq}\) of the quadruply excited CSFs \(\Phi_{abcd}^{mnpq}\) could be very closely approximated as products of the amplitudes \(C_{ab}^{mn} C_{cd}^{pq}\) of the doubly excited CSFs \(\Phi_{ab}^{mn}\) and \(\Phi_{cd}^{pq}\). This observation prompted workers to suggest that a more compact and efficient expansion of the correlated wave function might be realized by writing \(\psi\) as: \[\psi = \exp(T) \Phi,\] where \(\Phi\) is the SCF determinant and the operator \(T\) appearing in the exponential is taken to be a sum of operators \[T = T_1 + T_2 + T_3 + \cdots + T_N \] that create single (\(T_1\)), double (\(T_2\)), etc. level excited CSFs when acting on \(\Phi\). As I show below, this so-called coupled-cluster (CC) form for \(\psi\) then has the characteristic that the dominant contributions from quadruple excitations have coefficients nearly equal to the products of the coefficients of their constituent double excitations. In any practical calculation, this sum of \(T_n\) operators would be truncated to keep the calculation practical. For example, if excitation operators higher than \(T_3\) were neglected, then one would use \(T \approx T_1 + T_2 + T_3\). However, even when \(T\) is so truncated, the resultant \(\psi\) would contain excitations of higher order. For example, using the truncation just introduced, we would have \[\psi = \left(1 + T_1 + T_2 + T_3 + \frac{1}{2} (T_1 + T_2 + T_3)^2 + \frac{1}{6} (T_1 + T_2 + T_3)^3 + \cdots\right) \Phi. \] This function contains single excitations (in \(T_1\Phi\)), double excitations (in \(T_2\Phi\) and in \(T_1T_1\Phi\)), triple excitations (in \(T_3\Phi\), \(T_2T_1\Phi\), \(T_1T_2\Phi\), and \(T_1T_1T_1\Phi\)), and quadruple excitations in a variety of terms including \(T_3 T_1\Phi\) and \(T_2 T_2\Phi\), as well as even higher level excitations.
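This bookkeeping can be checked with a toy calculation (my own illustration, not part of any CC code). Track the excitation level with a formal variable \(x\) and represent a pure double-excitation operator \(T_2\) with amplitude \(t\) as \(t\,x^2\); expanding \(\exp(T_2)\) as a power series then shows that the coefficient of \(x^4\) — the quadruple excitations — is \(t^2/2\), a product of double-excitation amplitudes:

```python
import math

# Formal power-series bookkeeping: coefficient of x**k tracks the amplitude
# at excitation level k.  T2 ~ t*x^2; expanding exp(T2) shows the quadruple
# (x^4) amplitude is t^2/2, a product of double-excitation amplitudes.

def poly_mul(p, q, max_order):
    """Multiply two coefficient lists, truncating beyond x**max_order."""
    out = [0.0] * (max_order + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= max_order:
                out[i + j] += a * b
    return out

def poly_exp(p, max_order):
    """Coefficients of exp(p(x)) through x**max_order."""
    result = [0.0] * (max_order + 1)
    result[0] = 1.0
    term = [0.0] * (max_order + 1)
    term[0] = 1.0
    for n in range(1, max_order + 1):
        term = poly_mul(term, p, max_order)       # term = p(x)**n, truncated
        for k in range(max_order + 1):
            result[k] += term[k] / math.factorial(n)
    return result

t = 0.1
T2 = [0.0, 0.0, t, 0.0, 0.0]      # T2 represented as t * x^2
wave = poly_exp(T2, 4)            # exp(T2) through quadruple excitations
print("doubles amplitude:", wave[2])       # t
print("quadruples amplitude:", wave[4])    # t**2 / 2
```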
By the design of this wave function, the quadruple excitations \(T_2 T_2\Phi\) will have amplitudes given as products of the amplitudes of the double excitations \(T_2\Phi\), just as were found by earlier CI workers to be most important. Hence, in CC theory, we say that quadruple excitations include unlinked products of double excitations arising from the \(T_2 T_2\) product; the quadruple excitations arising from \(T_4\Phi\) would involve linked terms and would have amplitudes that are not products of double-excitation amplitudes. After writing \(\psi\) in terms of an exponential operator, one is faced with determining the amplitudes of the various single, double, etc. excitations generated by the \(T\) operator acting on \(\Phi\). This is done by writing the Schrödinger equation as: \[H \exp(T) \Phi = E \exp(T) \Phi,\] and then multiplying on the left by \(\exp(-T)\) to obtain: \[\exp(-T) H \exp(T) \Phi = E \Phi.\] The CC energy is then calculated by multiplying this equation on the left by \(\Phi^*\) and integrating over the coordinates of all the electrons: \[\langle\Phi| \exp(-T) H \exp(T) |\Phi\rangle = E.\] In practice, the combination of operators appearing in this expression is rewritten and dealt with as follows: \[E = \langle\Phi| H + [H,T] + \frac{1}{2} [[H,T],T] + \frac{1}{6} [[[H,T],T],T] + \frac{1}{24} [[[[H,T],T],T],T] |\Phi\rangle;\] this so-called Baker-Campbell-Hausdorff expansion of the exponential operators can be shown to truncate exactly after the fourth power term shown here, because the Hamiltonian contains at most two-electron operators. So, once the various operators and their amplitudes that comprise \(T\) are known, \(E\) is computed using the above expression that involves various powers of the \(T\) operators.
The equations used to find the amplitudes (e.g., those of the \(T_2\) operator \(\sum_{a,b,m,n} t_{ab}^{mn}T_{ab}^{mn}\), where the \(t_{ab}^{mn}\) are the amplitudes and \(T_{ab}^{mn}\) are the excitation operators that promote two electrons from \(\phi_a\) and \(\phi_b\) into \(\phi_m\) and \(\phi_n\)) of the various excitation levels are obtained by multiplying the above Schrödinger equation on the left by an excited determinant of that level and integrating. For example, the equation for the double excitations is: \[0 = \langle\Phi_{ab}^{mn}| H + [H,T] + \frac{1}{2} [[H,T],T] + \frac{1}{6} [[[H,T],T],T] + \frac{1}{24} [[[[H,T],T],T],T] |\Phi\rangle. \] The zero arises from the right-hand side of \(\exp(-T) H \exp(T) \Phi = E \Phi\) and the fact that \(\langle\Phi_{ab}^{mn}|\Phi\rangle = 0 \); that is, the determinants are orthonormal. The number of such equations is equal to the number of doubly excited determinants \(\Phi_{ab}^{mn}\), which is equal to the number of unknown \(t_{ab}^{mn}\) amplitudes. So, the above quartic equations must be solved to determine the amplitudes appearing in the various \(T_J\) operators. Then, as noted above, once these amplitudes are known, the energy \(E\) can be computed using the earlier quartic equation. Having to solve many coupled quartic equations is one of the most severe computational challenges of CC theory. Clearly, the CC method contains additional complexity as a result of the exponential expansion form of the wave function \(\psi\) and the resulting coupled quartic equations that need to be solved to determine the \(t\) amplitudes. However, it is this way of writing \(\psi\) that allows us to automatically build in the fact that products of double excitations are the dominant contributors to quadruple excitations (and \(T_2 T_2 T_2\) is the dominant component of six-fold excitations, not \(T_6\)).
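The flavor of solving such nonlinear amplitude equations can be conveyed with a deliberately over-simplified scalar toy model (my own illustration — real CC codes solve enormous coupled systems, not one scalar equation). Suppose a single amplitude \(t\) obeys \(D\,t + v + w\,t^2 = 0\), where \(v\), \(D\), and \(w\) stand in for an integral, an orbital-energy denominator, and a nonlinear coupling; one starts from the perturbation-theory-like guess \(t_0 = -v/D\) and iterates to self-consistency:

```python
# Toy scalar analogue of a CC amplitude equation: D*t + v + w*t**2 = 0.
# Start from the MP2-like guess t = -v/D and apply fixed-point iteration
# t <- -(v + w*t**2)/D until the update is negligible.

v, D, w = 0.1, 2.0, 0.5      # made-up "integral", "denominator", coupling

t = -v / D                   # first-order (perturbative) starting guess
for _ in range(50):
    t_new = -(v + w * t * t) / D
    if abs(t_new - t) < 1e-12:
        break
    t = t_new

print("converged amplitude:", t)
print("residual:", D * t + v + w * t * t)
```

Because the nonlinear term is small relative to the denominator, the iteration converges in a handful of steps; production codes accelerate the analogous (vastly larger) iteration with techniques such as DIIS.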
In fact, the CC method is today one of the most accurate tools we have for calculating molecular electronic energies and wave functions. The Density Functional Method These approaches provide alternatives to the conventional tools of quantum chemistry, which move beyond the single-configuration picture by adding to the wave function more configurations (i.e., excited determinants) whose amplitudes they each determine in their own way. As noted earlier, these conventional approaches can lead to a very large number of CSFs in the correlated wave function, and, as a result, a need for extraordinary computer resources. The density functional approaches are different. Here one solves a set of orbital-level equations \[\left[- \frac{\hbar^2}{2m_e} \nabla^2 - \sum_a \frac{Z_ae^2}{|\textbf{r}-\textbf{R}_a|} + \int \rho(\textbf{r}')\frac{e^2}{|\textbf{r}-\textbf{r}'|} d^3r' + U(\textbf{r})\right] \phi_i = \varepsilon_i \phi_i\] in which the orbitals {\(\phi_i\)} feel potentials due to the nuclear centers (having charges \(Z_a\)), Coulombic interaction with the total electron density \(\rho(\textbf{r}')\), and a so-called exchange-correlation potential denoted \(U(\textbf{r})\). The particular electronic state for which the calculation is being performed is specified by forming a corresponding density \(\rho(\textbf{r}')\) that, in turn, is often expressed as a sum of squares of occupied orbitals multiplied by orbital occupation numbers. Before going further in describing how DFT calculations are carried out, let us examine the origins underlying this theory.
The so-called Hohenberg-Kohn theorem states that the ground-state electron density \(\rho(\textbf{r})\) of the atom or molecule or ion of interest uniquely determines the potential \(V(\textbf{r})\) in the molecule’s electronic Hamiltonian (i.e., the positions and charges of the system’s nuclei) \[H = \sum_j \left[-\frac{\hbar^2}{2m_e} \nabla_j^2 + V(r_j) + \frac{e^2}{2} \sum_{k\ne j} \frac{1}{r_{j,k}} \right],\] and, because \(H\) determines all of the energies and wave functions of the system, the ground-state density \(\rho(\textbf{r})\) therefore determines all properties of the system. One proof of this theorem proceeds as follows: 1. \(\rho(\textbf{r})\) determines the number of electrons \(N\) because \(\int \rho(\textbf{r}) d^3r = N\). 2. Assume that there are two distinct potentials (aside from an additive constant that simply shifts the zero of total energy) \(V(\textbf{r})\) and \(V'(\textbf{r})\) which, when used in \(H\) and \(H'\), respectively, to solve for a ground state produce \(E_0\), \(\psi(r)\) and \(E_0'\), \(\psi'(r)\) that have the same one-electron density: \(\int |\psi|^2 dr_2 dr_3 ... dr_N = \rho(\textbf{r})= \int |\psi'|^2 dr_2 dr_3 ... dr_N \). 3. If we think of \(\psi'\) as a trial variational wave function for the Hamiltonian \(H\), we know that \(E_0 < \langle \psi'|H|\psi'\rangle = \langle \psi'|H'|\psi'\rangle + \int \rho(\textbf{r}) [V(\textbf{r}) - V'(\textbf{r})] d^3r = E_0' + \int \rho(\textbf{r}) [V(\textbf{r}) - V'(\textbf{r})] d^3r\). 4. Similarly, taking \(\psi\) as a trial function for the \(H'\) Hamiltonian, one finds that \(E_0' < E_0 + \int \rho(\textbf{r}) [V'(\textbf{r}) - V(\textbf{r})] d^3r\). 5. Adding the inequalities in steps 3 and 4 gives \[E_0 + E_0' < E_0 + E_0',\] a clear contradiction unless the electronic state of interest is degenerate. Hence, there cannot be two distinct potentials \(V\) and \(V'\) that give the same non-degenerate ground-state \(\rho(\textbf{r})\).
So, the ground-state density \(\rho(\textbf{r})\) uniquely determines \(N\) and \(V\), and thus \(H\). Furthermore, because the eigenfunctions of \(H\) determine all properties of the ground state, \(\rho(\textbf{r})\), in principle, determines all such properties. This means that even the kinetic energy and the electron-electron interaction energy of the ground state are determined by \(\rho(\textbf{r})\). It is easy to see that \(\int \rho(\textbf{r}) V(r) d^3r = V[\rho]\) gives the average value of the electron-nuclear (plus any additional one-electron additive potential) interaction in terms of the ground-state density \(\rho(\textbf{r})\). However, how are the kinetic energy \(T[\rho]\) and the electron-electron interaction energy \(V_{ee}[\rho]\) expressed in terms of \(\rho\)? There is another point of view that I find sheds even more light on why it makes sense that the ground-state electron density \(\rho(\textbf{r})\) contains all the information needed to determine all properties. It was shown many years ago, by examining the mathematical character of the Schrödinger equation, that the ground-state wave function \(\psi_0(r)\) has certain so-called cusps in the neighborhoods of the nuclear centers \(R_a\). In particular, \(\psi_0(r)\) must obey \[\frac{\partial \psi_0(r_1,r_2,\cdots,r_N)}{\partial r_k}=-\frac{m_eZ_ae^2}{\hbar^2}\psi_0(r_1,r_2,\cdots,r_N)\text{ as }\textbf{r}_k \rightarrow \textbf{R}_a.\] That is, the derivative or slope of the natural logarithm of the true ground-state wave function must be \(-m_eZ_ae^2/\hbar^2\) as any of the electrons’ positions approach the nucleus of charge \(Z_a\) residing at position \(R_a\). Because the ground-state electron density can be expressed in terms of the ground-state wave function as \(\rho(\textbf{r}) = N\int |\psi_0(\textbf{r},r_2,\cdots,r_N)|^2 dr_2\cdots dr_N\), it can be shown that the ground-state density also displays cusps at the nuclear centers: \[\frac{\partial \rho(\textbf{r})}{\partial r}=-2\frac{m_eZ_ae^2}{\hbar^2}\rho(\textbf{r})\text{ as }\textbf{r} \rightarrow \textbf{R}_a,\] where \(m_e\) is the electron mass and \(e\) is the unit of charge. So, imagine that you knew the true ground-state density at all points in space.
You could integrate the density over all space to determine how many electrons the system has. Then, you could explore over all space to find points at which the density had sharp points characterized by non-zero derivatives in the natural logarithm of the density. The positions \(R_a\) of such points specify the nuclear centers, and by measuring the slopes in \(\ln(\rho(\textbf{r}))\) at each location, one could determine the charges of these nuclei through \[{\rm slope}=\left(\dfrac{\partial\ln(\rho(r))}{\partial r}\right)_{r\rightarrow R_a}=-2\frac{m_eZ_ae^2}{\hbar^2}.\] This demonstrates why the ground-state density is all one needs to fully determine the locations and charges of the nuclei as well as the number of electrons and thus the entire Hamiltonian \(H\). The main difficulty with DFT is that the Hohenberg-Kohn theorem shows the values of \(T\), \(V_{ee}\), \(V\), etc. are all unique functionals of the ground-state \(\rho\) (i.e., that they can, in principle, be determined once \(\rho\) is given), but it does not tell us what these functional relations are. To see how it might make sense that a property such as the kinetic energy, whose operator \(-\hbar^2 /2m_e \nabla^2\) involves derivatives, can be related to the electron density, consider a simple system of \(N\) non-interacting electrons moving in a three-dimensional cubic box potential. The energy states of such electrons are known to be \[E = \frac{h^2}{8m_eL^2} (n_x^2 + n_y^2 +n_z^2 ),\] where \(h\) is Planck's constant, \(L\) is the length of the box along the three axes, and \(n_x\), \(n_y\), and \(n_z\) are the quantum numbers describing the state. We can view \(n_x^2 + n_y^2 +n_z^2 = R^2\) as defining the squared radius of a sphere in three dimensions, and we realize that the density of quantum states in this space is one state per unit volume in the \(n_x\), \(n_y\), \(n_z\) space.
Because \(n_x\), \(n_y\), and \(n_z\) must be positive integers, the volume covering all states with energy less than or equal to a specified energy \(E = (h^2/8m_eL^2) R^2\) is 1/8 the volume of the sphere of radius \(R\): \[\Phi(E) = \frac{1}{8} \frac{4\pi}{3} R^3 = \frac{\pi}{6} \left(\frac{8m_eL^2E}{h^2}\right)^{3/2}. \] Since there is one state per unit of such volume, \(\Phi(E)\) is also the number of states with energy less than or equal to \(E\), and is called the integrated density of states. The number of states \(g(E) dE\) with energy between \(E\) and \(E+dE\), the density of states, is the derivative of \(\Phi\): \[g(E) = \frac{d\Phi}{dE} = \frac{\pi}{4} \left(\frac{8m_eL^2}{h^2}\right)^{3/2} \sqrt{E} .\] If we calculate the total energy for these non-interacting \(N\) electrons that doubly occupy all states having energies up to the so-called Fermi energy \(E_F\) (i.e., the energy of the highest occupied molecular orbital HOMO), we obtain the ground-state energy: \[E_0=2\int_0^{E_F} g(E)E\,dE = \frac{8\pi}{5} \left(\frac{2m_e}{h^2}\right)^{3/2} L^3 E_F^{5/2}.\] The total number of electrons \(N\) can be expressed as \[N = 2\int_0^{E_F} g(E)\,dE = \frac{8\pi}{3} \left(\frac{2m_e}{h^2}\right)^{3/2} L^3 E_F^{3/2},\] which can be solved for \(E_F\) in terms of \(N\) to then express \(E_0\) in terms of \(N\) instead of in terms of \(E_F\): \[E_0 = \frac{3h^2}{10m_e} \left(\frac{3}{8\pi}\right)^{2/3} L^3 \left(\frac{N}{L^3}\right)^{5/3} .\] This gives the total energy, which is also the kinetic energy in this case because the potential energy is zero within the box and because the electrons are assumed to have no interactions among themselves, in terms of the electron density \(\rho (x,y,z) = \dfrac{N}{L^3}\).
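The octant-of-a-sphere counting argument can be verified directly. In units chosen so that \(h^2/8m_eL^2 = 1\) (so \(E = n_x^2+n_y^2+n_z^2\)), the script below counts the positive-integer triples with energy at most \(E\) and compares the count to the continuum estimate \(\Phi(E) = \frac{\pi}{6}E^{3/2}\); the agreement improves as \(E\) grows (the residual few-percent deficit comes from the surface of the octant, which the volume estimate ignores):

```python
import math

# Count particle-in-a-box states (nx, ny, nz positive integers) with
# nx^2 + ny^2 + nz^2 <= E and compare to Phi(E) = (pi/6) * E**1.5,
# the one-eighth-sphere volume estimate derived in the text.

def count_states(E):
    nmax = math.isqrt(int(E)) + 1
    count = 0
    for nx in range(1, nmax + 1):
        for ny in range(1, nmax + 1):
            for nz in range(1, nmax + 1):
                if nx * nx + ny * ny + nz * nz <= E:
                    count += 1
    return count

E = 2000.0
exact_count = count_states(E)
continuum = math.pi / 6.0 * E ** 1.5
print(exact_count, continuum, exact_count / continuum)
```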
It therefore may be plausible to express kinetic energies in terms of electron densities \(\rho(\textbf{r})\), but it is still by no means clear how to do so for real atoms and molecules with electron-nuclear and electron-electron interactions operative. In one of the earliest DFT models, the Thomas-Fermi theory, the kinetic energy of an atom or molecule is approximated using the above kind of treatment on a local level. That is, for each volume element in \(\textbf{r}\) space, one assumes the expression given above to be valid, and then one integrates over all \(\textbf{r}\) to compute the total kinetic energy: \[ T_{\rm TF}[\rho] = \int \frac{3h^2}{10m_e} \left(\frac{3}{8\pi}\right)^{2/3} [\rho(\textbf{r})]^{5/3} d^3r = C_F \int [\rho(\textbf{r})]^{5/3} d^3r ,\] where the last equality simply defines the \(C_F\) constant. Ignoring the correlation and exchange contributions to the total energy, this \(T\) is combined with the electron-nuclear \(V\) and Coulombic electron-electron potential energies to give the Thomas-Fermi total energy: \[E_{\rm 0,TF} [\rho] = C_F \int [\rho(\textbf{r})]^{5/3} d^3r + \int V(r) \rho(\textbf{r}) d^3r + \frac{e^2}{2} \int \frac{\rho(\textbf{r}) \rho(\textbf{r}')}{|r-r'|} d^3r\, d^3r'.\] This expression is an example of how \(E_0\) is given as a local density functional approximation (LDA). The term local means that the energy is given as a functional (i.e., a function of \(\rho\)) that depends on \(\rho(\textbf{r})\) only through its value at each point in space, not on its values at more than one point or on spatial derivatives of \(\rho(\textbf{r})\). Unfortunately, the Thomas-Fermi energy functional does not produce results that are of sufficiently high accuracy to be of great use in chemistry. What is missing in this theory are the exchange energy and the electronic correlation energy.
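How crude the local kinetic-energy approximation is can be seen by evaluating it on a density we know exactly. The sketch below (my own numerical illustration, in atomic units where \(\hbar = m_e = e = 1\) and \(C_F = \frac{3}{10}(3\pi^2)^{2/3} \approx 2.871\)) applies \(T_{\rm TF}[\rho]\) to the hydrogen 1s density \(\rho(r) = e^{-2r}/\pi\); the exact kinetic energy is 0.5 hartree, but the local estimate comes out near 0.29 hartree:

```python
import numpy as np

# Evaluate the Thomas-Fermi kinetic-energy functional on the hydrogen 1s
# density rho(r) = exp(-2r)/pi (atomic units) by radial quadrature.
# Exact kinetic energy for this state: 0.5 hartree.

C_F = 0.3 * (3.0 * np.pi ** 2) ** (2.0 / 3.0)   # ~2.871 in atomic units

r = np.linspace(1e-6, 30.0, 200_000)
dr = r[1] - r[0]
rho = np.exp(-2.0 * r) / np.pi

# T_TF = C_F * integral of rho^(5/3) over all space (4*pi*r^2 radial weight)
T_TF = C_F * np.sum(rho ** (5.0 / 3.0) * 4.0 * np.pi * r ** 2) * dr
print("T_TF =", T_TF, "hartree (exact kinetic energy: 0.5 hartree)")
```

The roughly 40% underestimate for even the simplest atom is one concrete reason the Thomas-Fermi functional is not accurate enough for chemistry.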
Moreover, the kinetic energy is treated only in the approximate manner described earlier (i.e., for non-interacting electrons within a spatially uniform potential). Dirac was able to address the exchange energy for the uniform electron gas (\(N\) Coulomb-interacting electrons moving in a uniform positive background charge whose magnitude balances the total charge of the \(N\) electrons). If the exact expression for the exchange energy of the uniform electron gas is applied on a local level, one obtains the commonly used Dirac local density approximation to the exchange energy: \[E_{\rm ex,Dirac}[\rho] = - C_x \int [\rho(\textbf{r})]^{4/3} d^3r,\] with \(C_x = (3/4) (3/\pi)^{1/3}\). Adding this exchange energy to the Thomas-Fermi total energy \(E_{\rm 0,TF} [\rho]\) gives the so-called Thomas-Fermi-Dirac (TFD) energy functional. Because electron densities vary rather strongly spatially near the nuclei, corrections to the above approximations to \(T[\rho]\) and \(E_{\rm ex,Dirac}\) are needed. One of the more commonly used so-called gradient-corrected approximations is that invented by Becke, and referred to as the Becke88 exchange functional: \[E_{\rm ex}({\rm Becke88}) = E_{\rm ex,Dirac}[\rho] -\gamma \int \frac{x^2 \rho^{4/3}}{1+6 \gamma x \sinh^{-1}(x)} dr,\] where \(x =\rho^{-4/3} |\nabla\rho|\), and \(\gamma\) is a parameter chosen so that the above exchange energy can best reproduce the known exchange energies of specific electronic states of the inert gas atoms (Becke finds \(\gamma\) to equal 0.0042). A common gradient correction to the earlier local kinetic energy functional \(T[\rho]\) is the Weizsacker correction, given by \[\delta{T_{\rm Weizsacker}} = \frac{1}{72} \frac{\hbar^2}{m_e} \int \frac{ | \nabla \rho(\textbf{r})|^2}{\rho(\textbf{r})} dr.\] Although the above discussion suggests how one might compute the ground-state energy once the ground-state density \(\rho(\textbf{r})\) is given, one still needs to know how to obtain \(\rho\). 
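The Dirac exchange formula can be exercised the same way. The sketch below (again my illustration on the hydrogen 1s density, in atomic units) evaluates \(E_{\rm ex,Dirac}[\rho]\); for this one-electron density the exact exchange (self-interaction) energy is \(-5/16 = -0.3125\) hartree, so the local approximation comes out noticeably too small in magnitude, which is part of what gradient corrections such as Becke88 are designed to repair:

```python
import math

C_x = 0.75 * (3 / math.pi) ** (1 / 3)  # Dirac exchange constant, atomic units

def rho_1s(r):
    """Hydrogen 1s density (atomic units), used only as a test density."""
    return math.exp(-2 * r) / math.pi

# E_x[Dirac] = -C_x * integral of rho^{4/3} * 4*pi*r^2 dr, midpoint rule
dr = 1e-3
E_x = -sum(
    C_x * rho_1s(r) ** (4 / 3) * 4 * math.pi * r * r * dr
    for r in (i * dr + dr / 2 for i in range(30000))
)
print(E_x)  # approximately -0.213 hartree; exact 1s self-exchange is -0.3125
```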
Kohn and Sham (KS) introduced a set of so-called KS orbitals obeying the following equation: \[\left[-\dfrac{\hbar^2}{2m} \nabla^2 + V(r) + e^2 \int \frac{\rho(\textbf{r}')}{|r-r'|} dr' + U_{\rm xc}(r) \right]\phi_j = \varepsilon_j \phi_j ,\] where the so-called exchange-correlation potential \(U_{\rm xc}(r) = \delta E_{\rm xc}[\rho]/\delta\rho(\textbf{r})\) could be obtained by functional differentiation if the exchange-correlation energy functional \(E_{\rm xc}[\rho]\) were known. KS also showed that the KS orbitals \(\{\phi_j\}\) could be used to compute the density \(\rho\) by simply adding up the orbital densities multiplied by orbital occupancies \(n_j\): \[\rho(\textbf{r}) = \sum_j n_j |\phi_j(r)|^2\] (here \(n_j =0,1,\) or 2 is the occupation number of the orbital \(\phi_j\) in the state being studied) and that the kinetic energy should be calculated as \[T = \sum_j n_j \langle \phi_j(r)| -\dfrac{\hbar^2}{2m} \nabla^2 |\phi_j(r)\rangle .\] The same investigations of the idealized uniform electron gas that identified the Dirac exchange functional found that the correlation energy (per electron) could also be written exactly as a function of the electron density \(\rho\) for this model system, but only in two limiting cases: the high-density limit (large \(\rho\)) and the low-density limit. There still exists no exact expression for the correlation energy, even for the uniform electron gas, that is valid at arbitrary values of \(\rho\). Therefore, much work has been devoted to creating efficient and accurate interpolation formulas connecting the low- and high-density limits of the uniform electron gas. One such expression is \[E_C[\rho] = \int \rho(\textbf{r}) \varepsilon_c(r) dr,\] where \[\varepsilon_c(r) = \dfrac{A}{2}\left[\ln\Big(\dfrac{x^2}{X}\Big) + \dfrac{2b}{Q} \tan^{-1}\dfrac{Q}{2x+b} -\dfrac{bx_0}{X_0} \left(\ln\Big(\dfrac{(x-x_0)^2}{X}\Big) +\dfrac{2(b+2x_0)}{Q} \tan^{-1}\dfrac{Q}{2x+b}\right)\right]\] is the correlation energy per electron. 
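A direct transcription of this interpolation formula is straightforward. The sketch below (my illustration) uses the parameter values \(A\), \(x_0\), \(b\), and \(c\) quoted in the next paragraph of the text; with them, \(\varepsilon_c\) comes out as a small negative energy per electron that tends toward zero as \(r_s\) grows:

```python
import math

# Parameter values as quoted in the text
A, x0, b, c = 0.0621814, -0.409286, 13.0720, 42.7198
Q = math.sqrt(4 * c - b * b)
X0 = x0 * x0 + b * x0 + c

def eps_c(rs):
    """Correlation energy per electron of the uniform gas at density
    parameter r_s, via the interpolation formula in the text."""
    x = math.sqrt(rs)
    X = x * x + b * x + c
    atan = math.atan(Q / (2 * x + b))
    return (A / 2) * (
        math.log(x * x / X)
        + (2 * b / Q) * atan
        - (b * x0 / X0)
        * (math.log((x - x0) ** 2 / X) + (2 * (b + 2 * x0) / Q) * atan)
    )

for rs in (0.5, 1.0, 5.0, 20.0):
    print(rs, eps_c(rs))  # negative, shrinking in magnitude as r_s grows
```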
Here \(x = \sqrt{r_s}\), \(X=x^2 +bx+c\), \(X_0 =x_0^2 +bx_0+c\), and \(Q=\sqrt{4c - b^2}\), with \(A = 0.0621814\), \(x_0= -0.409286\), \(b = 13.0720\), and \(c = 42.7198\). The parameter \(r_s\) is how the density \(\rho\) enters, since \(\frac{4}{3}\pi r_s^3 = 1/\rho\); that is, \(r_s\) is the radius of a sphere whose volume is the effective volume occupied by one electron. A reasonable approximation to the full \(E_{\rm xc}[\rho]\) would contain the Dirac (and perhaps gradient-corrected) exchange functional plus the above \(E_C[\rho]\), but there are many alternative approximations to the exchange-correlation energy functional. Currently, many workers are doing their best to cook up functionals for the correlation and exchange energies, but no one has yet invented functionals that are so reliable that most workers agree to use them. To summarize, in implementing any DFT, one usually proceeds as follows: 1. An atomic orbital basis is chosen in terms of which the KS orbitals are to be expanded. Most commonly, this is a Gaussian basis or a plane-wave basis. 2. Some initial guess is made for the LCAO-KS expansion coefficients \(C_{j,a}\) of the occupied KS orbitals: \(\phi_j = \sum_a C_{j,a} \chi_a\). 3. The density is computed as \[\rho(\textbf{r}) = \sum_j n_j |\phi_j(r)|^2 .\] Often, \(\rho(\textbf{r})\) itself is expanded in an atomic orbital basis, which need not be the same as the basis used for the \(\phi_j\), and the expansion coefficients of \(\rho\) are computed in terms of those of this new basis. It is also common to use an atomic orbital basis to expand \(\rho^{1/3}(r)\), which, together with \(\rho\), is needed to evaluate the exchange-correlation functional’s contribution to \(E_0\). 4. 
The current iteration’s density is used in the KS equations to determine the Hamiltonian \[\left[-\dfrac{\hbar^2}{2m} \nabla^2 + V(r) + e^2 \int \frac{\rho(\textbf{r}')}{|r-r'|} dr' + U_{\rm xc}(r) \right],\] whose new eigenfunctions \(\{\phi_j\}\) and eigenvalues \(\{\varepsilon_j\}\) are found by solving the KS equations. 5. These new \(\phi_j\) are used to compute a new density, which, in turn, is used to solve a new set of KS equations. This process is continued until convergence is reached (i.e., until the \(\phi_j\) used to determine the current iteration’s \(\rho\) are the same \(\phi_j\) that arise as solutions on the next iteration). 6. Once the converged \(\rho(\textbf{r})\) is determined, the energy can be computed using the earlier expression \[E [\rho] = \sum_j n_j \langle \phi_j(r)| -\dfrac{\hbar^2}{2m} \nabla^2|\phi_j(r)\rangle + \int V(r) \rho(\textbf{r}) dr + \frac{e^2}{2} \int \frac{\rho(\textbf{r})\rho(\textbf{r}')}{|r-r'|}dr\, dr'+ E_{\rm xc}[\rho].\]

Energy Difference Methods

In addition to the methods discussed above for treating the energies and wave functions as solutions to the electronic Schrödinger equation, there exists a family of tools that allow one to compute energy differences directly rather than by finding the energies of pairs of states and subsequently subtracting them. Various energy differences can be so computed: differences between two electronic states of the same molecule (i.e., electronic excitation energies \(\Delta E\)), and differences between energy states of a molecule and the cation or anion formed by removing or adding an electron (i.e., ionization potentials (IPs) and electron affinities (EAs)). In the early 1970s, the author developed one such tool for computing EAs (J. Simons and W. D. Smith, Theory of Electron Affinities of Small Molecules, J. Chem. Phys. 58, 4899-4907 (1973)), which he called the equations of motion (EOM) method. 
Throughout much of the 1970s and 1980s, his group advanced and applied this tool to studies of molecular EAs and electron-molecule interactions. Because of space limitations, we will not be able to elaborate on these methods in great detail. However, it is important to stress that: 1. These so-called EOM or Green’s function or propagator methods utilize essentially the same input information (e.g., atomic orbital basis sets) and perform many of the same computational steps (e.g., evaluation of one- and two-electron integrals, formation of a set of mean-field molecular orbitals, transformation of integrals to the MO basis, etc.) as do the other techniques discussed earlier. 2. These methods are now rather routinely used when \(\Delta E\), IP, or EA information is sought. The basic ideas underlying most, if not all, of the energy-difference methods are: 1. One forms a reference wave function \(\psi\) (this can be of the SCF, MPn, CI, CC, DFT, etc. variety); the energy differences are computed relative to the energy of this function. 2. One expresses the final-state wave function \(\psi'\) (i.e., that describing the excited, cation, or anion state) in terms of an operator \(\Omega\) acting on the reference \(\psi\): \(\psi' = \Omega \psi\). Clearly, the \(\Omega\) operator must be one that removes or adds an electron when one is attempting to compute IPs or EAs, respectively. 3. One writes equations which \(\psi\) and \(\psi'\) are expected to obey. For example, in the early development of these methods, the Schrödinger equation itself was assumed to be obeyed, so \(H\psi = E \psi \) and \(H\psi' = E' \psi'\) are the two equations. 4. One combines \(\Omega\psi = \psi'\) with the equations that \(\psi\) and \(\psi'\) obey to obtain an equation that \(\Omega\) must obey. 
In the above example, one (a) uses \(\Omega\psi = \psi'\) in the Schrödinger equation for \(\psi'\), (b) allows \(\Omega\) to act from the left on the Schrödinger equation for \(\psi\), and (c) subtracts the resulting two equations to achieve \((H\Omega - \Omega H) \psi = (E' - E) \Omega \psi \), or, in commutator form, \([H,\Omega] \psi = \Delta E\, \Omega \psi\). 5. One can, for example, express \(\psi\) in terms of a superposition of configurations \(\psi = \sum_J C_J \phi_J\) whose amplitudes \(C_J\) have been determined from a CI or MPn calculation, and express \(\Omega\) in terms of operators \(\{O_K\}\) that cause single-, double-, etc. level excitations (for the IP (EA) cases, \(\Omega\) is given in terms of operators that remove (add), remove and singly excite (add and singly excite), etc. electrons): \(\Omega = \sum_K D_K O_K\). 6. Substituting the expansions for \(\psi\) and for \(\Omega\) into the equation of motion (EOM) \([H,\Omega] \psi = \Delta E\, \Omega \psi\), and then projecting the resulting equation on the left against a set of functions (e.g., \(\{O_{K'} |\psi\rangle\}\)) gives a matrix eigenvalue-eigenvector equation \[\sum_K \langle O_{K'}\psi| [H,O_K] \psi \rangle D_K = \Delta E \sum_K \langle O_{K'}\psi|O_K\psi\rangle D_K\] to be solved for the \(D_K\) operator coefficients and the excitation (or IP or EA) energies \(\Delta E\). Such are the working equations of the EOM (or Green’s function or propagator) methods. In recent years, these methods have been greatly expanded and have reached a degree of reliability where they now offer some of the most accurate tools for studying excited and ionized states. In particular, the use of time-dependent variational principles has allowed a much more rigorous development of equations for energy differences and non-linear response properties. 
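Numerically, the last equation is a generalized eigenvalue problem \(\mathbf{M}\mathbf{D} = \Delta E\, \mathbf{S}\mathbf{D}\) with \(M_{K'K} = \langle O_{K'}\psi|[H,O_K]\psi\rangle\) and metric \(S_{K'K} = \langle O_{K'}\psi|O_K\psi\rangle\). The sketch below (my illustration, with small random symmetric matrices standing in for the true matrix elements, which in general need not be symmetric) shows the standard \(\mathbf{S}^{-1/2}\) reduction used to solve such a problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the EOM matrices (not real matrix elements):
# M ~ <O_K' psi|[H, O_K] psi> (taken symmetric here), S ~ overlap (SPD).
n = 4
M = rng.normal(size=(n, n)); M = 0.5 * (M + M.T)
B = rng.normal(size=(n, n)); S = B @ B.T + n * np.eye(n)  # positive definite

# Reduce M D = dE S D to an ordinary eigenproblem using S^{-1/2}
w, V = np.linalg.eigh(S)
S_inv_half = V @ np.diag(w ** -0.5) @ V.T
dE, C = np.linalg.eigh(S_inv_half @ M @ S_inv_half)
D = S_inv_half @ C  # back-transform to the original (non-orthogonal) basis

# Residual of the generalized eigenvalue equation for all columns at once
resid = np.abs(M @ D - S @ D @ np.diag(dE)).max()
print(dE, resid)
```

Each column of \(D\) satisfies \(M D = \Delta E\, S D\) to machine precision, and the eigenvectors come out \(S\)-orthonormal.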
In addition, the extension of the EOM theory to include coupled-cluster reference functions now allows one to compute excitation and ionization energies using some of the most accurate ab initio tools.
If quantum mechanics is the solution, what should the problem be? Posted on 01.05.2020, 15:04 by Vasil Penchev (Bulgarian Academy of Sciences), in Advance: Social Sciences & Humanities. The paper addresses the problem that quantum mechanics in fact resolves. Its viewpoint suggests that the crucial link of time and its course is omitted in understanding the problem. The common interpretation, underlain by the history of quantum mechanics, sees discreteness only on the Planck scale, which is transformed into continuity and even smoothness on the macroscopic scale. That approach is fraught with a series of seeming paradoxes. It suggests that the present mathematical formalism of quantum mechanics is only partly relevant to its problem, which is ostensibly known. The paper accepts just the opposite: the mathematical solution is absolutely relevant and serves as an axiomatic base, from which the real and yet hidden problem is deduced. Wave-particle duality, Hilbert space, both the probabilistic and many-worlds interpretations of quantum mechanics, quantum information, and the Schrödinger equation are included in that base. The Schrödinger equation is understood as a generalization of the law of energy conservation to past, present, and future moments of time. The deduced real problem of quantum mechanics is: “What is the universal law describing the course of time in any physical change, therefore including any mechanical motion?”
Schrödinger equation The Schrödinger equation is a linear partial differential equation published by Erwin Schrödinger in 1926. It describes the wave function or state function of a quantum-mechanical system. • Just four weeks after the first paper (Q1) the Annalen received on February 23 the second paper (Q2) in the series 'Quantization as an Eigenvalue Problem'. ... It consists of a detailed exploration of the Hamiltonian analogy between mechanics and optics leading to a new derivation of the wave equation, an analysis of the relations between geometrical and undulatory mechanics, and applications of the wave equation to the harmonic oscillator and the diatomic molecule. • With the growing importance of models in statistical mechanics and in field theory, the path integral method of Feynman was soon recognized to offer frequently a more general procedure of enforcing the first quantization instead of the Schrödinger equation. To what extent the two methods are actually equivalent, has not always been understood. ... the Coulomb potential and the harmonic oscillator ... point the way: For scattering problems the path integral seems particularly convenient, whereas for the calculation of discrete eigenvalues the Schrödinger equation. • Interactions that look instantaneous are well suited to Schrödinger’s equation, which requires the potential between particles at equal times. It would be quite awkward to explicitly describe finite-velocity forces in the Schrödinger equation because the potential for one particle at a time t would depend on the positions of the others at the retarded times, and one would need the past histories of all the particles to propagate the system forward in time.
The nature of splitting worlds in the Everett interpretation This post is about an aspect of the Everett many-worlds interpretation of quantum mechanics. I’ve given brief primers of the interpretation in earlier posts (see here or here), in case you need one. Sean Carroll, as he does periodically, did an AMA on his podcast. He got a number of questions on the Everett interpretation, one of which in particular I want to look at, because it’s about an issue that bugged me for a long time. From the transcript: 0:26:50.1 SC: David H says, “When the universe splits a la Everett, is the split instantaneous across the whole pre-existing universe, or does it propagate at the speed of light?” So the nice answer is, it’s up to you. And this goes exactly back to what we were talking about, about Laplace’s demon earlier. The branching of the wave function of the universe into separate worlds is not part of the fundamental theory. The fundamental theory is, there’s a wave function and it evolves according to the Schrödinger equation. That’s the entire theory. The splitting into worlds is something that we human beings do for our convenience. So, the right way to ask this question is, is it more convenient to imagine the world splitting all at once across all of space, or propagating at the speed of light? 0:27:31.0 SC: And for that, it’s completely dependent on what your purpose is, right? I actually tend to think of it as simpler just to imagine the universe splitting all at once, pre-existing, simultaneously across the whole pre-existing universe. That bothers some people, because they say, “Well, that’s not compatible with special relativity, which says that signals can’t travel faster than the speed of light.” But there’s no signal traveling faster than the speed of light; it’s just our description is traveling faster than the speed of light, and that’s perfectly okay. While this answer makes sense to me now, I don’t think it would have when I was struggling with it. 
This post is my attempt to explore the answer in such a way that someone who doesn’t yet get it, might. Let’s start with an analogy: the Louisiana Purchase. In 1803 France sold a large chunk of territory in North America to the United States. Consider this question. When did the territory become part of the US? From a legal perspective, that would have been when the US Senate ratified the purchase agreement with France, which happened on October 20, 1803. On that ratification, all of the territory became part of the US, and all of the inhabitants became US residents. Of course, news of the purchase took time to spread. There was a ceremony in New Orleans on December 20, 1803. But the news took longer to reach many residents. In particular, no one had really bothered to consult or inform most of the Native Americans living in the territory. So while the legal transfer happened instantly, the social results took time, years in fact, to be felt throughout the territory. Which way is the right way to look at when the Louisiana territory became part of the US? The legal transfer date? The boots on the ground occupation? Or the overall assimilation into US culture? There isn’t really a fact of the matter here. Borders and nationality are human conventions. The land is the land. Nature doesn’t care. So we can validly talk about it in different ways. That’s what Carroll is trying to get at when he talks about the raw theory, the universal wave function, versus our ways of talking about worlds or universes splitting. Similar to the transfer of the Louisiana territory, there are multiple ways of looking at and talking about the same reality. Here are three: 1. On a quantum measurement, the world begins splitting at the time and location of the measurement. The split propagates out at the speed of quantum interactions. The propagation can happen no faster than the speed of light. 2. 
On a quantum measurement, previously existing worlds, which had until then been identical, begin to diverge from each other at the time and location of the measurement. The divergence propagates out at the speed of quantum interactions, no faster than light. 3. On a quantum measurement, what we considered one world, we now instantly consider split into multiple whole worlds, which had until then been identical. They begin to diverge from each other at the time and location of the measurement, propagating out via quantum interactions no faster than light. The thing to remember here is that a “world” or “universe” in Everett is a slice of the universal wavefunction. But our divvying up of the wavefunction is a human convention. In nature it’s just a continuum. So we can talk about the slice we’re on “splitting” into two or more slices, or nearby slices “diverging” from each other, or even decide that what we once divvied up as one slice we’re considering multiple slices. It’s all different ways of talking about the same reality. Option 1 has historically made the most sense to me. It was how I needed to think of the Everett interpretation to consider it a viable possibility. It also makes more sense when considering something like an isolated quantum system, such as a quantum computer, which has qubit circuits in combined superposition. Under 1, these could be seen as world splits that are contained for a time, until the measurement magnifies the quantum state differences into the universe. But 1, which is us constantly being split into multiple people, is an existentially disconcerting way to think about this. It also makes the probability of observed measurement outcomes awkward to talk about since all possible outcomes happen. And each split effectively divides up the energy of the world among the new worlds, which many find difficult to accept. Option 2 is David Deutsch’s preferred way of looking at it. 
In this view, we are who we are, and there are other people in parallel worlds identical to us but diverging away anytime a quantum event is magnified, so we can see ourselves as having a classical timeline. Isolated quantum superpositions are basically the conditions necessary to detect the interference between worlds. Talking about probabilities is much easier since we’re now talking about the probabilities of outcomes in this world. And the energy of this world is what it is. It’s also easier to understand why Bell’s theorem isn’t an issue for Everett within this view, because within any one world, the correlations can exist from the beginning. The drawback of this option is it requires more explanation. Option 3 is Carroll’s preference, and this is the way Everett is usually presented in quick summaries, although without the explanation of why it doesn’t violate relativity. It also seems to inherit the existential angst and other issues from 1. I’m not sure why Carroll prefers it. It might be because the existential issues can also be seen as exciting. And the hybrid model can be seen as preserving that while also making clear why Bell isn’t an issue. But it seems to have the highest explanatory burden. Of course, all of this is about a theory that already requires a lot of explanation, one most people won’t wait on before summarily dismissing the whole thing as absurd and outrageous. So maybe worrying about additional explanatory burden isn’t productive. Which option works for you? Or is the least problematic? Is there another way of looking at it? 83 thoughts on “The nature of splitting worlds in the Everett interpretation” 1. “…a theory that already requires a lot of explanation, one most people won’t wait on before summarily dismissing the whole thing as absurd and outrageous.” What about those who’ve given it considerable thought and analysis and still find it absurd? 
Doesn’t the mere fact that proponents can’t even say for sure how the splitting works say something about how absurd the theory is? (Or, for that matter, define how energy can be “thinned”?) 1. I wasn’t describing you with that passage Wyrd. You’ve at least read about it and have often been willing to talk about it. On your question about the splitting, it seems clear I didn’t get my point across in the post, at least not to you. Oh well, maybe next time. (Doesn’t energy get thinned all the time in physics? What else is an explosion? Or the big bang?) 1. While I agree I’m not “most people” the way it’s written verges on the evangelistic ‘if you don’t agree with this, you don’t get it’ mode that I see as making MWI something of a case of groupthink. If your point is that it’s dealer’s choice, I got it, and it’s what I’m suggesting makes this not even a theory but a metaphysical belief. Too much is undefined, and there isn’t any math for any of it. Energy is never “thinned” out in the sense I think you know I mean (especially in light of any number of previous conversations). In physics, energy is conserved. I noticed each of your three options starts with “On a quantum measurement” but what really is a measurement under MWI? Measurements collapse the wave-function, which MWI explicitly denies. 1. Wyrd, my friend, based on our other conversations on this, I feel like if I address your points, things are just going to get progressively more heated. I acknowledge you think this theory has zero merit and is utterly misguided. Can we just agree to disagree on this particular topic? 2. I must read back into quantum mechanics to be able to comment better on your interesting speculations. What I am wondering is whether it doesn’t all result from the amplitude of the wave function being available to us but the phase always being unavailable and probabilistic. 
When we make a measurement, the phase of the wave function gets translated into an amplitude accessible to us. A measurement is then just an interaction that is accessible to us. Is it then necessary for some ‘wiring up’ behind the scenes to track which particles are entangled (= have correlated phase?), or does that drop out of the universal wave function, evolving according to Schrödinger’s equation. 1. I have to admit your first paragraph is pushing beyond my understanding. My reading about the phase is that it’s a factor in maintaining coherence, and when it gets disrupted, we lose that coherence, that is, we get decoherence and the disappearance of quantum effects. That might match up with what you’re describing, but I’m not sure. It took me a while to appreciate how thoroughly entanglement features in the Everettian view. As I understand it, the wavefunction collapse in Copenhagen and other collapse interpretations ends entanglement. But under Everett, there is no collapse, just the evolution of the wavefunction. Decoherence is the quantum system becoming entangled with the environment. So with a universal wave function, entanglement is pervasive. When we talk about the entanglement, under Everett, it seems like we’re talking about systems more entangled than the background levels. It’s so pervasive that Carroll, working with others on their own theory of quantum gravity, has proposed that space may be emergent from entanglement. 1. You may regret asking… Given the canonical “zero” state, |0⟩, defined as: It’s the case that: The |0⟩ state is indistinguishable for any global phase angle theta. The reason, as PJMartin mentioned, is that the magnitude of that exponential is always 1.0, so the state always looks like the |0⟩ state. But given the states |+⟩ and |-⟩, defined as: Which differ by a relative phase, we can apply a rotation operator such that the states become |0⟩ and |1⟩, which we can distinguish. 2. Thanks. 
I think I follow the mathematics, but not sure if I follow the concept. Would it be accurate to say the global phase is the overall background phase of everything in the environment, and the relative phase is the local variance? If so, it makes sense that global phase could never be detected, since anything used to detect it would have the same phase which would just cancel out. 3. Both global and relative phase are properties of the quantum system and have nothing to do with the background. A physical intuition might be something like: Imagine a rotating ball. The vector pointing along the axis of rotation is, in some sense, rotating, but since its coordinates never change, there’s no way to detect that rotation. For the vectors not aligned with the axis, their coordinates do change under rotation, and we can detect that change. 3. The way I read the quote you provided from Carroll is that there is no effective difference, at the level of physics we can do, between universes in which a split happens everywhere at once and worlds in which it spreads at the speed of light. It makes zero difference to the physics. If you want to imagine the whole universe splits everywhere at once, have a ball. Or if you want to imagine the split having a fixed location in space and spreading at the speed of light, knock yourself out. But… if you start actually doing physics and you want to know which point on a detection screen a photon hit, and you’re ten light-years away, you’ll have to wait ten years to find out. Doesn’t matter if you think you split instantly with the photon, or you split when it arrives. Neither depiction matters because they’re indistinguishable in practice. When the radio signal reaches you with the information, then you’ll know! With your Louisiana Purchase example, imagine that what all those different people you described who are rambling around the Territory “know” about the purchase defines their phase correlation. 
And imagine that the moment of congressional ratification was actually a moment that could have gone either way. That is the quantum system we’re curious about. Let’s say the world “splits” everywhere instantly. So then everyone in the Territory is replicated instantly: in one world there is a version of themselves who know the purchase was ratified, and in the other world there is a version who knows the purchase was repudiated. But these two sets of people never interact because they know different things. OR… the replication of all those people doesn’t occur until the news actually reaches them, (traveling at the speed of light), since prior to this news reaching them Louisiana was owned by both the US and France simultaneously (in the quantum sense). But when news reaches them, THEN a version of them takes up residence in both worlds, since the news must be one way or the other. But at the end of the day, it doesn’t matter which version of the splitting worlds story is “right” because there’s no way for anyone in the territory to actually distinguish them… There’s also a bunch of people who think it was ratified and a bunch of otherwise identical people who think it split. Right? 1. Hi Michael, You parsed it well! I think I agree with everything you wrote here, with a few minor but important quibbles. (Which might come down to just word choices.) The first is I think the word “replicate” gives the wrong impression. It implies a copy is being made. But that’s wrong. “Split” really is the right word for what’s happening, if we want to think about one world becoming two. Think of it that every world has a certain “thickness”, a certain energy. When a split happens, the resulting worlds are thinner. (Which raises the question of how thin things can get. Carroll says it may be infinite, but if not, based on a maximum entropy calculation, he estimates it should allow at least e^10^122 slices of the observable universe.) 
Or we can think about it as two worlds that were always there with the thinner thickness. They were identical, running side by side, until the ratification vote, then they started having differences after the vote went different ways in each one. Where we draw the boundaries and when we change them is really up to us, because the boundaries are just accounting, something to make it easier for us to think about it. No matter which way we do it, the actual dynamics only propagate under the speed of light (or the speed of early 1800s mail in the analogy). So, on the second quibble, it’s important to understand that it’s not a matter of not knowing which of different ontologies is “right” but all of them being compatible with the mathematics. It’s that both versions are the same. The underlying ontology (if Everett is correct) is identical. The variance is just in how we choose to slice up the universal wave function in our accounting. Hope that makes sense (and I got it all right on my end). 1. I have no quibbles with your quibbles, Mike. Replication was meant to suggest that when a split occurs there’s potentially two of me now–one where the purchase was ratified and one where it wasn’t. But I realized after I hit ‘Send’ that this was based on only one of the three scenarios you had described. Both could have been there all along in some of the others. So no issue. On the issue of an ontology being “right” or not, this gets interesting to me in the following sense: I think what you’re saying is that both scenarios are fictional representations of processes that don’t exist quite as imagined to begin with, and because both are compatible with the observable processes that do exist, they are the “same.” But to one disinterested in physics, they seem like they could be different. To the non-technical part of me, for instance, it sure seems like a split that happens everywhere at once is not the same as one that propagates in time. 
I understand it is a difference without distinction, but maybe the pause that arises when we consider this is worth attending to… If the mathematics equates two scenarios which common sense tells us are not in all ways equal, then what is happening? This was prompted by thinking further about this equality of conditions that don’t seem equal–but ultimately are in terms of how they cash out. It’s not an objection… more of a curiosity. Liked by 1 person 1. I understand the difficulty, Michael. Remember, I did a whole post questioning Deutsch’s view and had a hard time for months thinking of it as the same theory as Everett’s. The idea that they’re discussing the same reality isn’t obvious. What is a fiction is the idea that worlds are definite things in Everettian theory. It’s more of a continuum, of which we can only interact with a narrow slice. I think about a post Chad Orzel did on the Everett interpretation, which I linked to in my post about Carroll’s book. At the time, I misinterpreted his post as taking an anti-real stance toward the worlds because he used the word “metaphor”. But when I recently went back to it, I realized that I (and many other people) had missed his meaning. He meant the same thing that Carroll meant. The main reality is the evolution of the wave function. What we call “worlds” or “universes” are just a convenient way for us to think about that reality, to relate it to our experiences. Lev Vaidman, in the SEP article on the many-worlds interpretation, describes the theory as having two components: The MWI consists of two parts: i. A mathematical theory which yields the time evolution of the quantum state of the (single) Universe. ii. A prescription which sets up a correspondence between the quantum state of the Universe and our experiences. Part (i) is essentially summarized by the Schrödinger equation or its relativistic generalization. It is a rigorous mathematical theory and is not problematic philosophically.
Part (ii) involves “our experiences” which do not have a rigorous definition. It’s funny that we use the word “interpretation” to refer to theories like Copenhagen, de Broglie-Bohm, and Everett, when they have different postulates and make different predictions. They really are different theories. (I think the word “interpretation” in this case arose for historical reasons, an attempt to get these alternate theories past the old guard.) But what Vaidman calls Part (ii) is actually an interpretation of Everettian physics, and there are multiple. But unlike what we normally call “interpretations”, these really are interpretations, all with exactly the same cash-out predictions. I should also note that there are plenty of Everettians who do take either an anti-real stance toward the worlds, or an agnostic one. Stephen Hawking was one. He was an Everettian, but also an instrumentalist. His attitude was that Part (i) was the important part and that it was predictive of our observations. He stopped there. As someone with instrumentalist sympathies, it’s a view I can understand. Liked by 1 person 1. Hi Mike, Thanks for the link; I enjoyed Chad Orzel’s article. What it reinforced for me personally is that MWI suffers from the same problem every other form of QM suffers: there is no explicit connection between the mathematical theory and our experience of the world, which is to say, something in addition to the core mathematical theory is required to derive the world we experience. And that something in addition is always a little wonky compared to the underlying mathematical structure. This is Vaidman’s point I think. If you posit the wave equation is describing what is real, then our collective, objective perception of a classical world is a shared hallucinatory negation of everything else, and some physical vehicles or mechanisms are required to explain how this “filtering” occurs. And it isn’t just a filtering of conscious perceptions, but something more extensive.
We know this because the “me” on one branch doesn’t bump into the “me” on another branch in the hallway. So if all branches are equally real, a mechanism for physical differentiation or divisibility is required. I’m not aware of any hypothetical means by which this shared hallucinatory negation or physical divisibility of elements of reality occurs. And I’m probably missing something because physicists don’t seem too bothered by this. In this notion, the “we” that we think we are, are ghosts. We pass through everything else. It seems more likely to me, as an explanatory position, that the wave equation is describing a universe of possibilities that are all quite real to one another at the level they exist, and that they can interact on this level, but that only a subset of particular conditions or branches are then physically instantiated somehow through a process we have yet to even imagine. And that produces the collective, objective reality. In this notion what is “real” is only what is instantiated, and the wave equation is actually describing a realm of ghosts. The “we” that we think we are is what is “real” and everything else is a ghost. Not sure it matters which is correct, but how do you reconcile the basic claim of MWI and the human experience without needing to define different notions of what is “real”? Liked by 1 person Nor am I. (Normally the Fermi exclusion principle prohibits matter from coinciding.) The claim is that some magical form of decoherence is responsible, but decoherence as we know it does the opposite. You don’t sink through the chair you’re sitting in because you and the chair are both decohered. Liked by 1 person 3. Hi Michael, I’m not wild about the word “hallucinatory”, but I think from your full description you don’t mean something that should be perceptible by the nervous system. It happens at a much lower level. I wouldn’t say that physicists aren’t bothered by the concept you’re describing.
In fact, the first physicist to be bothered by it was Albert Einstein, since the basic mechanism which would allow this to work is entanglement. The physicists have just been wrestling with it for a lot longer than we have. It’s old hat to them. It’s entanglement that allows for multiple particles to be in a combined superposition. The Everett interpretation is that entanglement doesn’t end on measurement, but propagates into the environment (decoherence results from the system becoming entangled with the environment). Consider quantum computers. A 50-qubit circuit can be in up to 2^50 concurrent states (over a quadrillion). When changes ripple through the circuits, how do the qubit states in each version of the circuit “know” which version they’re in? Because they’re still in a coherent state, there is detectable (and usable) interference between the versions, but each version is still distinct. Under collapse interpretations, when the circuit is measured, it collapses to one classical state. Under Everett, the entanglement instead spreads into the environment. The success of quantum computing is actually one of the things that led me to take another look at this stuff a few years ago. On only a subset of the branches being real, well, that’s the rub. Collapse interpretations say only one is real, but no one can identify a mechanism to explain why any particular one should be more real than any other, except to just say it’s random. But there’s nothing in the raw quantum formalism, the part of QM that has been validated through almost a century of experiments, to indicate any outcome should be any more real than the others. Doesn’t mean some experiment might not find one tomorrow, and so falsify Everett, but that’s where we are. Liked by 1 person 4.
“It’s entanglement that allows for multiple particles to be in a combined superposition.” Except that entangled particles are distinct particles with their own energy/mass, and are subject to the Fermi exclusion principle. They cannot physically coincide, which, I believe, is what Michael is getting at. In a Stern-Gerlach experiment, for instance, under MWI there are suddenly two silver atoms where there was only one entering the apparatus. If the claim is there were two silver atoms all along, then how did they coincide? If the branch split one atom into two atoms, how does that happen? Either way you seem to need new physics. re QC: In all the reading I’ve done, most texts don’t mention MWI in the context of QC. I finally did find a reference to it. Deutsch believes the power of QC comes from the myriad branches, which really raised my eyebrows. For one, how do other branches return the result of their computations? MWI suggests branches cannot affect each other. For another, QC is fully explained in its own mathematics. Deutsch seems to treat QC like binary computing, but it’s not, it’s a form of analog computing, hence its ability to have those myriad superposed states. It’s like saying we need multiple worlds to explain the different timbres of diverse musical instruments playing the same note. The notes sound different because they are different superpositions of harmonics. The notes, and QC, are analog and fully capable of having myriad wave forms combined. Liked by 1 person 5. Do you mean the Pauli exclusion principle? As I understand it, that states that no two fermions can be in the same quantum state at the same time. Since the various states of a particle in superposition are, by definition, different states, I don’t think there’s an issue here. I’ve said it before, but when we think we’ve found a cheap way to dismiss Everett, we’re almost certainly missing basic stuff. 
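For concreteness, the superposition at issue in the Stern-Gerlach example can be written down with standard textbook quantum mechanics (nothing in this sketch is specific to any interpretation):

```python
import numpy as np

# A silver atom's valence spin prepared along +x, then measured along z.
# In the z basis the prepared state is an equal superposition of the two
# Stern-Gerlach outcomes.
up_z   = np.array([1, 0], dtype=complex)
down_z = np.array([0, 1], dtype=complex)
plus_x = (up_z + down_z) / np.sqrt(2)

# Born probabilities for the two beams:
p_up   = abs(np.vdot(up_z, plus_x)) ** 2
p_down = abs(np.vdot(down_z, plus_x)) ** 2
print(p_up, p_down)   # ~0.5 each; the interpretations differ only on what
                      # happens to the branch you don't observe
```

Collapse views say one term survives; Everett says both do. The formalism above is shared by all of them, which is why the debate is so hard to settle experimentally.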
On QC, most books on it don’t go into quantum interpretations because they’re controversial and it’s not needed to explain the techniques. But many of the theoreticians, like Deutsch or John Preskill, thought through it within the Everettian paradigm. That, in and of itself, doesn’t make it the only paradigm it can work in, just the one it does most straightforwardly. In any case, I can’t imagine anything Deutsch might say about the Everett interpretation that you wouldn’t have a strong reaction against. 🙂 I’ll repeat what I said above: over a quadrillion concurrent states, each one able to do its own calculations. At 300 qubits, there will be more states than there are particles in the observable universe. When we get into the thousands and higher, the alternative explanations to quantum states are going to get increasingly strained. As to how the branches return their results, remember that Deutsch is looking at this from Option 2 in the post. The main thing is these branches aren’t yet decohered from each other. They still have coherent interference. Under option 2, that interference is between worlds / universes. I think you know the interference is utilized and manipulated to promote the correct answer so that it has a high probability of being in the measured version. Liked by 1 person 6. Oops, yes, Pauli, not Fermi. I jumped from fermion to Fermi there! This is why I mentioned silver atoms (which are made of fermions). In particular, the electrons already occupy all available quantum states, except for the lone valence electron in the 5s shell. It’s that electron that allows a silver atom to have an overall spin. The other 46 electrons pair off in spin-up+spin-down pairs. And those 46 electrons are fully described within the silver atom; there are no extra quantum states they can have to differentiate from supposedly superposed “identical” electrons.
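(The state-count arithmetic quoted above is easy to verify; the ~10^80 figure for particles in the observable universe is the usual rough estimate.)

```python
# Verifying the state counts quoted in the discussion above.
states_50  = 2 ** 50
states_300 = 2 ** 300

print(states_50)              # 1125899906842624 -- over a quadrillion
print(states_300 > 10 ** 80)  # True: 2**300 is roughly 2e90, dwarfing the
                              # usual ~10^80 estimate for particles in the
                              # observable universe
```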
Think of it this way: When Sean Carroll gives a lecture about MWI and uses his beam-splitter and then jumps one way or the other depending on the result, the implication is he, the podium, the stage, the audience, and the auditorium, all branch into closely identical versions that physically coincide. What magical quantum state allows all those fermions to do that in violation of Pauli? The usual explanation is “decoherence” but that’s magical, too, at least in terms of how we currently understand decoherence. Or the outcomes. Exactly. It’s not the controversy. It’s that there’s no need for it. It’s just one calculation — one set of operations performed on the qubits. The thing about interference, which yes, is where the QC power comes from, is that, as in the two-slit experiment, which we previously agreed didn’t seem to invoke MWI except for where the particle actually gets measured (as in a beam-splitter experiment), interference is a single-world phenomenon that, while we don’t fully understand it, doesn’t seem to require, or even suggest, multiple worlds. Liked by 1 person 7. Mike, Wyrd seems to understand pretty well the question I was trying to ask. I wasn’t sure from your answer you fully understood what I was driving at, or if you did, your own reading may have given you a perspective on this I don’t grasp, which results in our talking past one another just a bit. Imagine I am in a room observing a double-slit experiment. And there are various possibilities for the outcome I might observe. If I understand MWI, they all occur. And in popular writing about this, it seems to imply there is a “me” who sees one outcome, as well as a “me” in another branch that sees another. Let’s say I’m sitting behind a desk where a computer is telling me what the detector in “my” branch of the wave function registered.
Presumably, in another branch, a completely independent instantiation of “me”, seated at the same desk (albeit an independent instantiation of the desk), registers a different result for where the photon landed. Now, the entanglement that allows the double slit experiment to create the interference pattern presumably is a physical process occurring within this room. So there are all these instantiations of me seated at this desk, but they do not “bump” into one another or know of one another or interact in any way. So it’s a lot like the Exclusion Principle problem, only we’re talking about entire portfolios of nearly identical physical systems that would seem to exist in the same physical space but don’t interact. When I asked if this issue concerned physicists, I wasn’t speaking about entanglement itself, which I know Einstein objected to–I’m wondering what it is I’m missing about all these nearly identical physical systems along separate branches that would intuitively be in the same room. If they are all in parallel “worlds” then where are those worlds? This seems like a straightforward question to ask if the premises are right: the key premise being that in branches with different outcomes than the one I know about, there is a version of “me” there also who witnesses the other outcomes. Where do these versions of “me” reside that all witness different outcomes of the same physical experiment, but never physically interact? Because it seems an obvious question, and because many people smarter than me don’t seem worried about it, I am wondering if I’m misunderstanding something essential about the MWI to begin with. Liked by 2 people 8. Michael: You’re right on point with the coincidence issue.
I think the understanding required is this: MWI places the Schrödinger equation as central to its ontology, and proponents have faith the coincidence issue, the energy issue, the probability issue, the preferred basis issue, and the Hilbert space ontology issue, all have reasonable explanations we’ll someday understand based on the central notion that the Schrödinger equation explains everything. Those on the Copenhagen side of things have faith that wave-function “collapse” has a reasonable explanation we’ll someday understand based on, or extending, QM principles. (As I tried to illustrate with the spin experiments, even MWI experiences sudden changes to the wave-function in experiments, so it actually does include a form of “collapse” — that wave-function vector suddenly jumps to a known eigenstate.) The irony to me is that MWI is often claimed as the more parsimonious view based on the simplicity of the premise. I think the consequences of a premise need to be considered as well, and as total views there is far more physics unexplained under MWI and it is therefore the less parsimonious view overall. Liked by 1 person 9. Michael and Wyrd, On the exclusion principle, I don’t have a researched answer. However, I’ll note again that the Everett interpretation is not going to be dismissed on the cheap. If it was incompatible with something as fundamental as the Pauli Exclusion Principle, Hugh Everett wouldn’t have gotten it past John Wheeler, or his thesis committee, or the peer review for publication, not to mention all the people who’ve attacked the theory over the decades. So my answer here might not be right, but if it isn’t, it just means we’re overlooking something a first year physics graduate student probably knows. I think the answer is that the exclusion principle is based on interactions, on bosons being exchanged by fermions. 
However, in a group of entangled particles, such as all the elementary particles in an atom or molecule in superposition, those types of interactions can only happen between versions of the particles in the same element of the composite superposition. In other words, an electron in one version of an atom in superposition isn’t going to exchange photons with the same electron in another version of that atom. Remember that the photons are part of the entanglement too, so there will be versions for each element of the overall entangled superposition. (I wish I knew less awkward language to express this.) I’ll admit I’m not sure how interference factors into this, except to say it’s only a factor until decoherence. Wyrd laid the entire explanation on decoherence, but I’m not sure that’s true. I think there is already a separation before then. It’s just that interference is gone (or well, no longer significant) after decoherence. Anyway, that’s my amateur (possibly very wrong) shot at the answer. It’s the way I’ve assumed it worked for a while. I might do some digging around to find out how the exclusion principle and superpositions relate to each other. I think it’s where the answer lies. Liked by 1 person 10. Mike, It is precisely because I agree MWI won’t be dismissed on the cheap that I’m wondering what I’m missing. I think the focus/discussion above on the Pauli Exclusion Principle has perhaps led you away from the bigger picture, even simpler question I was asking. I think to your point, it’s easy enough to deal with the Exclusion Principle. Might we note for instance that the Pauli Exclusion Principle holds in any given branch or world, and that when we deal with entanglement all the “versions” of an electron, say, in MWI, have something unique about them (a different spin or position or momentum), which is why they’re in another branch to begin with. 
I’m less concerned about such a specific and technical nuance of the theory, and more curious about where the physicists think all the various branches of the wave function reside such that they are all equally “real” but utterly hidden from one another in the large. Liked by 1 person 11. Michael, I think the principle remains the same on broader considerations. Our ability to detect something depends on interactions. For example, we only see something by having photons from it strike our retina. When we touch something, it’s electromagnetic interactions that stop our hand from going through it, etc. This is one of the reasons dark matter is supposed to be so hard to detect, because it only seems to interact gravitationally. We could think of the other “worlds” as dark matter without the gravitational interactions. (Although each world obviously interacts with itself.) We can only interact with the slice of the wave function we’re on, essentially with the stuff in the same element of the superposition of the entangled environment we’re a part of. The other worlds are all right here, but we can’t interact with them, and they can’t interact with us. (At least aside from interference that is so fragmentary and canceled out that detecting it would require knowledge of all the relevant microstates.) Liked by 1 person 12. “The other worlds are all right here, but we can’t interact with them, and they can’t interact with us.” Nothing in physics explains how that can be true of normal matter. It’s an unfounded assertion. Liked by 1 person 13. I think that shoe is actually on the other foot. It’s my logic you have consistently denied in all these conversations. MWI doesn’t really have logic so much as assertions based on the notion that the Schrödinger equation must be the whole and entire truth. Liked by 1 person 14. I’m recalling now we had this discussion once before, Mike.
I understand the notion that dark matter doesn’t interact with us except gravitationally so it’s in essence right here all the time though we never sense it. But I think what you’re suggesting here is that not only is red different from blue in the branch of reality in which we’re having this conversation, but that in 10^(10^120 something) co-located branches of reality, there is a red that is different from every other red in some way that doesn’t reduce its redness. I can imagine ways of describing this, but I think it requires additional properties of matter, and a HUGE range of them. I guess the question is: are these properties part of the wave equation? I think this is part of the extra stuff that is needed to relate the theory to our experiments. All the versions of QM have a problem with that specific issue I think. When you say we only interact with the stuff in the same element of the superposition of the entangled environment we’re a part of, I don’t really know how to parse that. I think of the double slit experiment again, and understand at some conceptual level that the entanglement between possible outcomes of the experiment is replaced by new entangled relationships that spread through the environment. But where this gets confusing is that if I’m listening to channel 96.5 on the FM band, everything on this station must be somehow related in a way that everything else is not. When we do a double slit experiment, the entanglement passes from all possible electron states to a specific electron and the detector, and then it bangs around the detector as a whole as atoms interact or what have you. Point being: the baton of entanglement is passed through specific interactions, is it not? Two particles collide and now we don’t know which one has more of the energy. Entanglement doesn’t just get broadcast to every atom in our light cone once the electron hits the detector, right?
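The “baton” picture can actually be run as a toy model (a standard quantum-information exercise, my own sketch rather than anything from the thread): entanglement spreads only through pairwise interactions, yet the resulting correlations end up involving every particle in the chain.

```python
import numpy as np

def cnot(n, control, target):
    """CNOT on an n-qubit register, built as a 2^n x 2^n permutation matrix."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1.0
    return U

# Qubit 0 starts in (|0> + |1>)/sqrt(2); qubits 1 and 2 start in |0>.
state = np.zeros(8)
state[0b000] = 1 / np.sqrt(2)
state[0b100] = 1 / np.sqrt(2)

state = cnot(3, 0, 1) @ state   # qubit 0 interacts with qubit 1...
state = cnot(3, 1, 2) @ state   # ...which passes the "baton" to qubit 2

# Only |000> and |111> remain: all three qubits are now correlated, even
# though qubits 0 and 2 never interacted directly.
nonzero = [format(i, '03b') for i in np.flatnonzero(np.abs(state) > 1e-12)]
print(nonzero)   # ['000', '111']
```

So no broadcast is needed; chained local interactions are enough to correlate particles that never met.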
So if I’m correct that entanglement disperses through chains of interaction, then at some level it seems like we’re saying every element of matter/energy touched by this chain has to obtain or activate some underlying property that unifies them on the one hand, and differentiates them from all the other chains going on out there, right? I just don’t see how such a world practically works, or even is contained in the wave equation if there are no variable properties that are shared to unify all the matter and energy contained in a particular branch. Liked by 2 people 15. Michael, From what I’ve read, entanglement is a complex topic. It can exist on various properties (like spin) while not on others. And there can be different degrees of it. One of the sources that helped me think about it was this post. But at a fundamental level, I generally take entanglement to be correlation, which makes sense when you think about how correlations form and that they can exist to greater or lesser degrees. Of course, under collapse interpretations, it must be something stronger than that. And even under Everett, it feels like that isn’t sufficient. This feels particularly true when we’re talking about a quantum circuit in a superposition of quadrillions of composite states, much less of a whole environment that, under Everett, is also in a superposition of some unfathomable number of composite states. The feeling that there must be something else, some hidden variables to keep everything straight, is very strong. But there’s a good chance our intuitions here are simply not reliable. My understanding is that entanglement, under normal conditions, is constantly being “broadcast”. Remember that this is often described as information about the quantum system leaking into the environment. But what we’re really saying is that the system in question is having causal effects on the environment, while the environment is also having causal effects on it. 
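The point above about different degrees of entanglement can be made quantitative with a standard measure, the entanglement entropy of one qubit’s reduced state (a textbook calculation, not something from the post linked earlier):

```python
import numpy as np

def entanglement_entropy(psi):
    """Von Neumann entropy (in bits) of qubit A for a two-qubit pure state."""
    m = psi.reshape(2, 2)                   # amplitudes as a 2x2 matrix
    s = np.linalg.svd(m, compute_uv=False)  # Schmidt coefficients
    p = s ** 2
    p = p[p > 1e-15]                        # drop zero weights before the log
    return float(-(p * np.log2(p)).sum())

product = np.array([1.0, 0, 0, 0])                      # |00>: no entanglement
partial = np.array([np.sqrt(0.9), 0, 0, np.sqrt(0.1)])  # weakly entangled
bell    = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)       # maximally entangled

for name, psi in [("product", product), ("partial", partial), ("bell", bell)]:
    print(name, entanglement_entropy(psi))
# entropy climbs from ~0 (product) through ~0.47 (partial) to ~1 bit (Bell)
```

“Entangled or not” is thus really a continuum, which fits the idea of a pervasive background level with experiments sitting at the strongly correlated end.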
A lot of the effort in keeping quantum circuits coherent involves inhibiting those causal interactions with the environment as much as possible until the desired result is ready. When it is ready, it is then allowed to causally cascade into the environment (i.e. be measured). What this means is that, under Everett, there’s a background level of entanglement, which we don’t notice because it’s everywhere. When we discuss whether or not particles are entangled, we’re really discussing whether they’re more entangled than that background level. All of this makes sense when you remember that we’re talking about a universal wavefunction. But as I mentioned somewhere else on this thread, entanglement is so pervasive that there are physicists now thinking that space could be emergent from it. Conversely, in the context of multiple worlds, it might be that entanglement’s broad-ranging correlations depend on space itself branching. That’s something we haven’t discussed here. Everett requires gravity to eventually be brought into the quantum fold. Which might help with keeping all those reds apart from each other. Although when we remember that red only comes about through interactions, I’m not sure it’s strictly necessary. This reply feels somewhat rambling. Hopefully somewhere in it your concerns were addressed. Liked by 1 person 16. Thanks for additional info. This is a very interesting topic… Focusing on your paragraph that begins, “What this means is that, under Everett, there’s a background level of entanglement…,” there are definitely questions that arise. I’ve read a few books on entanglement–Amir Aczel’s and Louisa Gilder’s–but it’s been a while. I don’t recall either of them spending much of any time on widespread universal entanglement networks, as they were focused more on the “more entangled” situations of experiments.
In the experiments and in quantum computing, the entanglement is very fragile and has to be kept isolated, but I think what you’re saying is that in the most general case, entanglement is a pervasive condition of things. I want to say something like, “I can see that…,” but the truth is I’m pretty fuzzy on what that really means. Doesn’t mean I’m opposed to it. It’s just that the properties of such a reality would have to be explained a bit to me I think so I could understand it better. What I understand entanglement to mean is this: two or more particles are said to be entangled when a) they are in a superposition and haven’t interacted with the environment or otherwise been “measured” and b) when conservation of spin or momentum or something requires that their states, whenever they are actually determined, are mathematically related such that if I know the state of one I also know the state of the other. Perhaps as I ramble here in return, an important element of what you’re describing is noting that in an Everettian universe, the wave function never collapses, so the entanglement never really dissolves. Particles are never released from their obligations to one another, although they can trade those obligations with one another. Say particles A and B are entangled. Particle B could have a drink with Particle C, and they could agree to share somehow in the fulfillment of the obligation Particle B originally had with Particle A. This could go on and on and on. It’s kind of like those financial securities that got us in so much trouble in 2008. Pretty soon everyone has an obligation to everyone else and no one knows who owes whom. But, it seems to me that all these tradings of mutual obligations are actually moments when the wave function branches–since it doesn’t collapse, it must branch–and the question in my mind remains: how does one speak meaningfully about the “classical” world we experience in this case? So what keeps coming up for me, Mike, is this.
There’s an intriguing “truth” I’ve encountered in a number of contexts: everything and nothing are indistinguishable. There is nothing interesting about either one. They are the alpha and the omega. Things only get interesting when one thing happens and other things do not, so I cannot help but think this notion of an ever-evolving wave function in which everything happens is only one part of what’s really happening, and that there are very likely selection processes at work. It’s just a suspicion. Otherwise, this notion of extended entanglement networks, which is a lot like an economy as Orzel noted, doesn’t quite explain how you could have trillions of such economies that are mutually exclusive. But it makes for interesting thought experiments and I’m inclined to run a few before saying anything more. Haha. Liked by 1 person 17. Thanks Michael. I hadn’t heard of those books. Interesting. I picked up a couple myself late last year, but was disappointed in them. They were fairly shallow pop-science books, and only lightly touched on Everett. One source that gave me a little insight about the relationship between entanglement and Everett was briefly discussed by Matt O’Dowd in this video. (Hopefully I got the timestamp right. He takes the option 1 approach from the post.) Sean Carroll occasionally veers into this on his podcast, particularly on the solo eps, although most of it is him interviewing others about their ideas. Thinking through scenarios is the way to approach this. Every time I think I’ve found a fatal flaw, it turns out to have a solution. As long as the raw quantum formalism continues to be validated in experiments, it’s hard to dismiss. Of course, that could change at any time with new evidence. Liked by 1 person 18. I skipped ahead to the eight minute mark that Wyrd pointed out. It was interesting and it was consistent with what I’ve heard on this topic before I think. 
Statements like this, at the 10:20 mark, are the ones I think require additional assumptions on top of the wave equation: “The evolution of the wave function is deterministic. That means all future branching of the wave function of your present, by which I mean the entanglement network that you currently belong to, is pre-defined. What isn’t defined is your own experience of that future branching. You will be the thread of conscious experience that travels one of those branches. You’ll also travel the others, but each version of you will only feel like you travel one of them. (emphasis added)” This gets quickly into the relationship of the wave function to conscious experience that I said earlier is tricky. More scenarios to ponder… 🙂 Liked by 1 person 19. FWIW, the first paragraph of the “Meaning of entanglement” section of the Wiki article for quantum entanglement does a fair job of describing it: (In general, Wiki is a pretty good resource for QM. It’s one of the first places I check when I have a question about some aspect of it.) Liked by 1 person 20. As I said, this isn’t a researched answer. It’s possible bosons aren’t involved, but the relation with superposition still applies. Or not. I think whatever it is, it’s standard physics that we’re simply missing. 21. Okay, good, I would have been surprised. I’m pretty sure Fermi-Dirac statistics (which may be why I confused the names) are due entirely to fermions having half-integer spin. It’s a fundamental part of how such particles behave, and it comes from their mathematics. The thing about all these interpretations of QM is that there’s a metaphysics aspect to them, and metaphysical positions are easy to believe in and hard to refute. Ask why billions believe in some form of God. The only available tool is logic, and its value depends on people accepting the premises involved.
I have long suspected the commitment to MWI comes, in part, from seeing its viability on the quantum scale and, from the premise “everything is quantum,” assuming it scales up to the classical world. As such, I’ve also long suspected the key to refuting MWI lies in figuring out the Heisenberg cut.

22. It seems like fermions have to interact with each other in some manner, otherwise how does one “know” what to avoid? If a Heisenberg cut were ever found, it would falsify Everett. As would evidence for an objective collapse. I also understand Everett needs gravity to be quantized.

23. Ha, yeah, we all need gravity to be quantized! To answer your question (as best I can), the mathematics of 1/2 integer spin only allow for a fixed set of quantum states. Recall that particles act like waves, and it’s in the interaction of those matter waves that the particles “know” what they can, or cannot, do. The matter waves for fermions act differently than the matter waves for bosons. (In fact, because of the dynamics of the wave behavior, bosons like to clump together. I think that pop sci series you’ve mentioned got into that in one of the essays.) Tunneling, for instance, happens because when a particle is near a barrier its wave-function extends beyond the barrier and, because the wave-function determines the probability of finding the particle in a given position, there is therefore some probability the particle is on the other side of the barrier. With particles, the wave description always obtains until the particle is somehow observed. (FWIW, my abiding belief is that the Heisenberg cut will be figured out. We’re currently vexed because it all takes place down on the Planck level which we can’t see.)

24. Thus sayeth Schrödinger: “He who hath an ear let him hear. My equation was from the beginning; it is the premise upon which all understanding of the natural world rests.
The great Schrödinger speaks; my equation is a probabilistic mathematical synthesis, an Immortal Law derived from a quantum wave that has never been demonstrated to exist. He who hath an ear, let him hear…” As mother calls: “Children, it’s time to quit playing in the sandbox of discourse; it’s nappy time…”

25. This snippet from last week’s Ars Technica article on quantum physics is worth noting:

Quantum mechanics is not only written in math, but there are three completely different versions of the math in widespread use: the Schrödinger wave approach, the Dirac formulation, and Feynman’s path integrals. The Schrödinger approach emphasizes the waviness of particles and uses differential equations. The Dirac formulation focuses on quantum mechanics’ sensitivity to measurement order and uses the language of linear algebra. Feynman’s path integrals also have a wavy point of view and can be seen as an extension of the Huygens–Fresnel principle of wave propagation. This leads to some truly terrifying path integrals, covering all possible paths and possibilities. Feynman diagrams are a shorthand for keeping track of the approximations you need to make to actually solve things. While the mental models behind the three mathematical traditions are quite distinct, they always give the same answers. So why are there three equivalent versions of quantum mechanics? Depending on the problem you are worrying about, it turns out that it can be easier to get the answer using one of the three approaches. And physicists are all about using the path of least resistance.

So it’s not really about the Schrödinger equation in and of itself, but about what it models. But yes, the Everett view is that we live in a quantum universe. If right, it’s far from the first time science would be shifting our view of reality out from under our feet.

4. Hmm.
I think that what Sean Carroll is hinting at when he playfully says, “So the nice answer is, it’s up to you,” is that the universe isn’t really splitting in the way MWI is popularly presented. If I understand the first thing about MWI, it’s that the “world” doesn’t “split” when a “quantum measurement” occurs, but is constantly accessing multiple states. To put it in mathematical terms, there is no measurement in the Schrödinger equation. It simply describes a time-dependent system whose solution is a superposition of eigenstates. And your assertion that “And each split effectively divides up the energy of the world among the new worlds, which many find difficult to accept,” is something we’ve discussed before and I thought you’d moved on from that misconception. The energy is not divided up between worlds. There aren’t different universes.

1. The main thing to understand about the Everett interpretation is that the core theory is simply the evolution of the universal wave function, the raw quantum formalism applied to the whole universe, a deterministic theory with local dynamics. Everything else is us interpreting the interpretation. So yes, Carroll is making clear that that’s the core theory, the main reality. I don’t think it’s accurate to say there’s no measurement in Everett; it just doesn’t have the ontological role it does in Copenhagen. Any magnification of an individual quantum outcome to macroscopic scale is a measurement-like event. So when a cosmic ray knocks an atom loose in DNA resulting in a mutation, that is a measurement-type event, even though there’s no conscious observer. When I talk about splits and dividing up the energy, that is about the interpretation of the interpretation. You can interpret it in different ways. Whether there are different universes, or portions of the same universe which don’t have access to each other, is just semantics.
It’s like when other galaxies were discovered in the 1920s, they were often referred to as “island universes”, before the term “universe” got reserved for all of space. So if it makes you feel better to think of it all as one universe, that’s fine. It was the approach that Everett himself seemed to prefer. I use the term “world”, in the sense that there are many classical worlds in the universe. Others prefer to go with explicit multiverse language. Or you can think of it as the one universe in a superposition of an ever-growing number of quantum states. These are all compatible ways of thinking about the core theory.

1. The problem is that measurement necessarily alters — “collapses” — the wave-function, so figuring out what “measurement” actually means under MWI is one of the many undefined things about it. Which is why I pointed to your three options that all start with: “On a quantum measurement…” Under MWI, what does it mean to “measure” something? If Alex does a spin experiment and branches into Alex-Up and Alex-Down, both versions have a different wave-function than prior to the experiment.

1. There is a phenomenological collapse. That’s true in every interpretation. But if you want to say there’s an ontological collapse, then that’s not accepting the most fundamental thing about Everett, and I wouldn’t expect the theory to make much sense from there. From what I’ve read, the best way to think of a measurement under Everett is the magnifying of the effects of a quantum event. As I mentioned to Steve, there are natural measurement events. Certainly the wave-function evolves and changes, and measurement has effects (such as decoherence). In the Alex scenario, each branch of Alex is dealing with a different element of the superposition of the spin of the particle in question.

1.
“In the Alex scenario, each branch of Alex is dealing with a different element of the superposition of the spin of the particle in question.” The problem is that the experiment “picks out” a specific part of that superposition, the up and down on the selected axis, and now they each have a wave-function in a suddenly altered state. They can demonstrate this by repeating the same measurement and with 100% probability getting the same result they got the first time. Their respective shares of the wave-function have superpositions of possible measurements on other axes. If they first measured the Z-axis, both would expect “random” results on the X-axis, because the Z-axis measurement eliminates any knowledge of the state of the X-axis. Doing such a test would cause further branching, whereas repeating the Z-axis test would not.

Say they measure the Z-axis, branch into Alex-Z.up and Alex-Z.down, and both now measure the X-axis. Now there is: Alex-… Z.up-X.up, Z.up-X.down, Z.down-X.up, and Z.down-X.down. For each of the four branches, Alex now has knowledge of X-axis spin and has eliminated knowledge of the Z-axis spin. Considering just one Alex, say the one who got Z.up-X.up, what do you think they would get if they measured the Z-axis again? Note that these experiments are set up so only the final result is actually measured. The various branches of Alex only ever see the final result (e.g. Z.up-X.down-Z.up), although they know the path the particle took through the system and, hence, the outcome of each step along the way. (Note also that the tests I’m describing are physically possible and have been done and verified.)

2. One of the physicists I read, possibly the Ask a Physicist guy, said that a definite spin result on a particular axis just is a superposition of the other perpendicular axes. (I know it’s more complicated for the diagonal ones.) So I think the sequence would happen as you describe. Every time the superposition gets measured there is branching.
Note that we could actually think of it as every time the same axis gets remeasured with no other axes measured in between, there’s also branching, but the branches are all the same, so it’s not usually thought of as branching. On running the experiment so the results aren’t measured until the end, are you saying they could know the intermediate spin results? I think I’d want more details on how that works. Speculating a bit (and possibly getting it very wrong), I suppose you could keep all the particles involved (electrons, photons, etc) isolated so that the changes to spin happen. But all the particles that interact would end up entangled with each other, and when information from the system did finally spread into the environment, it would all become entangled with the environment, with every element in the composite superposition of the entangled particles having its own branch.

3. Yes, we treat a Z-axis measurement as an equal superposition of up-down on other orthogonal axes because knowledge of spin on orthogonal axes is mutually exclusive (very similar to position and momentum being mutually exclusive). Spin of particles can be measured by a Stern-Gerlach experiment. Essentially, a magnetic field causes a deflection of the particle such that there is one path into the “spin box” and two paths out, one representing spin-up and one representing spin-down. Under MWI we’d say that the particle interacting with the magnetic field causes a superposition (branch) and the particle follows both exit paths. When the particle hits a detection screen, it’s “measured” and we only see it in one place. (Not unlike a beam-splitter experiment.) Detection, of course, prevents further tests of the particle’s spin because it’s been splatted against the screen. But we can direct the output paths into a second-stage pair of “spin boxes”. For instance, we could measure the Z-axis in both stages.
If we do, we find the spin-up path from the first stage results in 100% spin-up particles and the spin-down path results in 100% spin-down. Or we can measure a different axis the second time. If we measure the X-axis, we see a 50/50 split from both second stages. In the first case, Z-Z, we see a [50%, 0; 0, 50%] distribution. [Z-up+Z-up, Z-up+Z-dn; Z-dn+Z-up, Z-dn+Z-dn] In the second, Z-X, the distribution we see is [25%, 25%; 25%, 25%]. The final result tells us what path the particle had to take, so we know what its spin was at different stages of the experiment. Note that, until the detection screen at the end, the particle does not interact with other particles, only the magnetic field of the S-G device. So let me ask my question about the results of a Z-X-Z experiment again. The particle distribution after two stages is, as mentioned, [25%, 25%; 25%, 25%]. After the third stage, a second Z-axis test, there would be eight outcomes (branches). My question is: What is the final distribution?

4. BTW, with regard to a known eigenstate such as Z-up being a superposition, note that such a superposition is different if the known eigenstate is Z-down. For Z-up:

|0⟩ = (1/√2)(|+⟩ + |−⟩)

But for Z-down it’s:

|1⟩ = (1/√2)(|+⟩ − |−⟩)

Note the plus-minus difference between them. Both superpositions give a 50/50 probability for X-axis measurements. The difference means there are certain unitary operations that can change the state, and further such operations can return it to the original state. (Measuring the spin state would not be such an operation.)

5. I should maybe emphasize that, after measurement, the definite eigenstate itself is considered to be just |0⟩ for spin up and |1⟩ for spin-down. The superposition applies to the possibility of a measurement on some other axis. There are, in fact, infinite superpositions of measurements on other possible axes. In general:

|0⟩ = α|a₊⟩ + β|a₋⟩

Where α and β are normalized coefficients that depend on the angle of the axis a, and there is an implicit such superposition for every possible angle.
The superpositions I showed you above involve an orthogonal axis where both coefficients are 1/sqrt(2). (Remember that we square the coefficient to get the probability of seeing that result, and that the sum of the squared coefficients must be 1.0.)

6. On the experiment, thanks. I had forgotten about that setup in the MIT lecture. On your question, not sure what you’re looking for. I agree there would be eight outcomes and so eight branches.

7. That there are eight outcomes is a given. My question involves the distribution of outcomes. In the two-stage versions, in the Z-Z version, the distribution is [50%, 0%; 0%, 50%]. In the Z-X version, it’s [25%, 25%; 25%, 25%]. I’m asking about the three-stage version comprised of Z-X-Z. [?, ?; ?, ?;; ?, ?; ?, ?]

8. Nope, exactly right. The point is that, after the first Z measurement the particle is in a known state, either |0⟩ or |1⟩, but in a superposition of measurements on other axes. In particular, the orthogonal X-axis is a 50/50 superposition so the second test on the X-axis has a “random” (uncorrelated) result. That second test gives us a definite state for the X-axis, often thought of as |+⟩ and |−⟩ in contrast to the Z-axis. This again puts the particle’s wave-function into a superposition of states for other axes, and again the orthogonal Z-axis is uncorrelated so there is again a 50/50 chance of measuring |0⟩ or |1⟩. For a single particle going through the apparatus, its wave-function changes as a result of each test. Importantly, the first state, either |0⟩ or |1⟩, is erased during the second test, which is why the third test has 50/50 odds.

2. I like this 4th option, “there aren’t different universes.” But to me it’s just one option; each option is useful for understanding certain aspects and each poses a danger of misleading when taken overly literally. This 4th option is good for correcting some of those dangers of the other three.
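The branch counting in the Z-X-Z discussion above is easy to check numerically. Below is a minimal numpy sketch (all names and the input state are mine); it enumerates measurement branches with Born-rule weights, assuming an input particle with 50/50 Z odds:

```python
import numpy as np

# Measurement bases: eigenvectors for spin along Z and along X (labels are mine).
Z = {"z_up": np.array([1.0, 0.0]), "z_dn": np.array([0.0, 1.0])}
X = {"x_up": np.array([1.0, 1.0]) / np.sqrt(2), "x_dn": np.array([1.0, -1.0]) / np.sqrt(2)}

def branch(state, bases):
    """Enumerate measurement branches and Born-rule probabilities for a basis sequence."""
    branches = {(): (1.0, state)}
    for basis in bases:
        nxt = {}
        for outcomes, (p, psi) in branches.items():
            for label, vec in basis.items():
                prob = abs(np.vdot(vec, psi)) ** 2  # Born rule
                if prob > 1e-12:
                    # Each surviving outcome carries its eigenstate forward.
                    nxt[outcomes + (label,)] = (p * prob, vec)
        branches = nxt
    return {k: p for k, (p, _) in branches.items()}

# Input with 50/50 Z odds, then the three-stage Z-X-Z experiment.
psi0 = np.array([1.0, 1.0]) / np.sqrt(2)
dist = branch(psi0, [Z, X, Z])  # eight branches, 12.5% each
```

Running `branch(psi0, [Z, Z])` instead reproduces the repeated-measurement case: only the two matching branches survive, at 50% each, matching the [50%, 0%; 0%, 50%] distribution quoted above.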
And when the final curtain falls, materialists sit around scratching their asses wondering why idealists think that materialism is such a screwed up metaphysical position!!??!!☹️ Party on Sean Carroll…..

1. Same difference, right Mike? MWI is materialism’s archetype of idealism’s M@L. Sean Carroll professes to be a physicist; one would think that Sean would use his high-profile celebrity status as an academic in the profession of adult day care for more productive means other than promoting himself so he can write and market more ridiculous books.

1. M@L is Mind at Large; an imbecile god who has Dissociative Identity Disorder (DID) and splits off into multiple personalities…. Sound familiar? M@L, MWI……. pick your ridiculous construct, only you can choose.

2. Sounds like Kastrup’s theory. We’re all one big mind with multiple personalities. Kastrup is pretty vehement in his opposition to the Everett interpretation. I can understand why, since it undercuts the quantum physics justifications for his philosophy.

6. Is this a case of perspective? Outside looking in vs being inside, living it? And then the thought (and it’s only a thought) of instantaneity? Some human transaction or statement that theoretically impacts the subject instantly. “I’m king of the world!” would travel at the speed of thought, instantly. And it’s due to perspective how such a declaration is evaluated. It all sounds like fun mind-games to me. (Until you unscroll the deed to The People’s land and tell them to leave, ‘cuz you now own it.)

1. It definitely is a case of perspective. You also just reminded me about this from one of Terry Pratchett’s stories:

1. Now that you mention it, I do believe I’ve used it fairly recently. It’s one of the many, many bits I love about Pratchett — his twisted use of physics, of which he seems to have a very good grasp.

7.
FWIW: I was looking for a good explanation of global versus relative phase, and I found the following, which nails it. It’s inescapably mathematical, but you said you were doing okay with the math. We can define a two-state quantum system like this:

|ψ⟩ = r_1 e^(iθ_1)|0⟩ + r_2 e^(iθ_2)|1⟩

Where r_i are normalized real-valued constants and θ_i is the phase. Then we can have:

|ψ⟩ = e^(iθ_1)(r_1|0⟩ + e^(−iθ_1) r_2 e^(iθ_2)|1⟩)

Doing the math:

|ψ⟩ = e^(iθ_1)(r_1|0⟩ + r_2 e^(i(θ_2−θ_1))|1⟩)

And then, dropping the leading term:

|ψ′⟩ = r_1|0⟩ + r_2 e^(i(θ_2−θ_1))|1⟩

As I showed you before, that leading term (the global phase) isn’t something we can detect, but the relative phase, θ_2−θ_1, is significant and accounts for interference. Mathematically it doesn’t get more clear than that. Intuitionally is another matter… 🙂

8. Much of the distinction seems irrelevant. Anything changing in our galaxy, for example, is irrelevant to things happening in other galaxies. So, whether the result of the change spreads instantaneously or at the speed of light matters little. The amount of change or repercussion of a change fades with an inverse square law, no? So, this “transmission” of the world split is a local affair. Basically, what would happen if the effect of a split here on Earth weren’t noticed in a galaxy 100,000 light years away for 100,000 years? I argue, nada.

1. It’s definitely true that under all the options, the dynamics are always local. An analogy might be if we decided to change the name of the Andromeda Galaxy to Ralph’s Galaxy. In our mind, the change would be instantaneous. But if we sent a signal to that galaxy telling any inhabitants what we’d decided, it wouldn’t have any causal effects for at least 2.5 million years. So the various options could be seen as how we decide to account for the name change. Whatever we decide, it’s irrelevant to the physics.

9.
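The global-versus-relative phase point in the first comment above can also be checked numerically. This is a quick sketch (state, function names, and angles are all arbitrary choices of mine): multiplying the whole state by a phase changes no measurement probability, while shifting only the relative phase changes the X-basis (interference) probability:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)  # X-basis "up" state

def born(psi):
    """Born-rule probabilities for the Z-up and X-up outcomes."""
    return (abs(np.vdot(ket0, psi)) ** 2, abs(np.vdot(plus, psi)) ** 2)

r1 = r2 = 1 / np.sqrt(2)
t1, t2 = 0.3, 1.1  # arbitrary phases
psi = r1 * np.exp(1j * t1) * ket0 + r2 * np.exp(1j * t2) * ket1

# Global phase: same factor on both terms -- undetectable.
shifted = np.exp(1j * 0.7) * psi
# Relative phase: shift only one term -- changes interference.
rel = r1 * np.exp(1j * t1) * ket0 + r2 * np.exp(1j * (t2 + 0.7)) * ket1
```

With these numbers the X-up probability is (1 + cos(θ_2−θ_1))/2, so the relative-phase shift moves it from about 0.85 to about 0.54, while the Z-up probability stays at 0.5 in every case.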
I don’t know why I’ve never thought about this, but I just realized there’s a conundrum in MWI regarding “particles” — observing a particle requires collapsing the aspect of the wave-function that describes the position of the “particle.” In the two-slit experiment, for instance, the (unobserved) particle in flight is described by its momentum (its energy), which means its position is unknown. Until it hits something, and then its position is known. Even if we assume branching, each branch sees a “particle.” But that “collapses” the wave-function, so how can there ever be point-like interactions (“particles”) unless MWI does have wave-function collapse? Even positing a universal wave-function comprised only of interactions still seems to require abrupt changes to the state vector.

1. So tell me if this is crazy or not, but is this another way of expressing what I see as a fundamental question of QM in all its forms, and that is: how does the wave equation–which regardless of interpretation clearly has some part of the picture pretty well nailed–relate to the reality we experience? And none of the QM theories can explain this without some assumptions that are in addition to the fundamental mathematical theory, unless I’m mistaken. Thinking about Newton’s equations of motion is (perhaps?) helpful as a view of a theory where this is not an issue. When we define “x” as the distance of a flying cannonball from the cannon, there are really no additional steps required to relate the math to the world we experience. If we use the equations to show that the cannonball is “x” = 73.5 meters from the cannon when “t” (time) = 0.9 seconds of flight, we know exactly what that means. There really aren’t additional assumptions required, just our definition of “x.” In QM, we have the wave equation, but it doesn’t describe a single outcome like Newton’s equations of motion do.
So the rub in all QM interpretations is that we only see one thing, and the math predicts many things, no? And I think your point is related: we don’t see waves; whenever we measure something, what we see are discrete quanta, or particles. So there are a number of ways it seems challenging to relate the fundamental mathematics to what is actually observed.

1. You’re not crazy. QM is the only branch of science I know that requires interpretation, even though it has very precise and extremely well-tested mathematics. I suspect that speaks to our ignorance of it. Its complete lack of compatibility with GR is another indicator we’re missing a big part of the picture. Comparisons with Newton’s F=ma are quite apt, and, as you say, seem complete at the classical level. And, also as you say, classical calculations predict single future results — the cannonball will strike here with this much force. The wave equation says, well, if you decide to look for a free particle here, there’s this probability of seeing it here, but that much probability of seeing it there, if you look there. And because the wave equation implicitly includes all possible locations (in the universe) there is some (vanishingly small) probability of finding it a zillion miles away.

2. I think it’s worse than that. We see interference effects, or what we infer to be interference effects, and from that infer waves. But we also never see a particle. Ever. We infer their existence as well, through instruments we hope work according to our theories. Niels Bohr made the point that the quantum realm is inaccessible. Our data comes from the macroscopic effects of our interactions with it. But really, this just calls attention to something that always exists, because our senses work by inferring things in the world as well. We just feel like it’s more concrete at the classical level. There may be fewer levels of inference at classical scales, but all observation is inescapably theory laden.
1. “But we also never see a particle.” They’re too small to be seen by any instrument, but devices such as cathode ray tubes give us the same inference about, at least, point-like interactions, that interference gives us about waves. Einstein’s Nobel was another strong inference about the existence of particle-like behavior.

1. Schrödinger was inspired by de Broglie’s discovery. He intended his equation to model how the waves worked. But from what I’ve read, he couldn’t complete it until spin was discovered. (The Copenhagen camp played down the physicality of the waves. Schrödinger never agreed with that move. Obviously the Everettians agreed with him.)
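The Newton-versus-wave-equation contrast a few comments up (one definite outcome versus a distribution over outcomes) can be made concrete in a few lines. This sketch compares a cannonball's single predicted position with position probabilities from a Gaussian distribution of the kind a wave packet's Born rule would give; all numbers and names are made up for illustration:

```python
import numpy as np

def cannonball_x(v, t):
    """Newton: one definite position at time t (constant velocity, no drag)."""
    return v * t

def packet_prob(a, b, x0, sigma):
    """Quantum-style prediction: probability of finding the particle in [a, b],
    assuming a Gaussian position distribution centered at x0 with spread sigma."""
    xs = np.linspace(a, b, 10001)
    pdf = np.exp(-((xs - x0) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    dx = xs[1] - xs[0]
    return float(((pdf[:-1] + pdf[1:]) / 2).sum() * dx)  # trapezoid rule
```

The classical function returns the single number 73.5 for the post's cannonball (v = 73.5/0.9 m/s at t = 0.9 s); the quantum-style function only ever returns a probability for a region, which is less than 1 for any finite window and nonzero (if vanishingly small) arbitrarily far away.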
Leonard explains the user interface. The Lenwoloppali Differential Equation Scanner was the handwriting-recognition, differential-equation-solving smartphone app created by Leonard, Howard, and Raj in The Bus Pants Utilization. One simply takes a picture of a differential equation, such as the Schrödinger equation, with their smartphone, and at the press of a button the app returns the solution. The app would use handwriting recognition and then run the equation through a symbolic evaluation engine. One button allows for scanning a new equation, while another substitutes new values for the coefficients. Not only can users store their favorite equations, they can also forward them to their friends or post them on Facebook right from the app. Twenty people from the university signed up for a private beta.

The equation Howard takes a picture of is the spherical Bessel differential equation, which is satisfied by the spherical Bessel functions. (The spherical Bessel functions are often invoked to solve the Schrödinger equation.) A solution that is regular at the origin is given by the proper spherical Bessel functions; a solution that is irregular at the origin is given by the spherical Neumann functions, or the Hankel functions of the first and second kind. Howard cites "spherical Hankel function" from the app.

The development project was codenamed "Project Lenwoloppali" (as opposed to "Koothranardowitz"), which probably led to the naming of the application itself. Howard worked on much of the programming and on an install-time problem, trying to have the app pick up from the libraries dynamically. Sheldon was fired twice from the project for his actions, and he was excluded from the secret code designation and app name.
At one point he began working as an independent contractor on "Project NODLEHS", later asking Howard and Raj to join him in a rival company, as well as attempting to sabotage the guys' project by playing his theremin. He finally joined Penny in "Project Shoe".
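As a side note, the spherical Bessel equation the article mentions is easy to verify numerically. This sketch (my own, using the standard n = 0 closed forms and a finite-difference step I chose) checks that both the regular and irregular solutions satisfy x²f″ + 2xf′ + (x² − n(n+1))f = 0:

```python
import numpy as np

def j0(x):
    """Spherical Bessel function of the first kind, n = 0 (regular at the origin)."""
    return np.sin(x) / x

def y0(x):
    """Spherical Neumann function, n = 0 (irregular at the origin)."""
    return -np.cos(x) / x

def ode_residual(f, x, n=0, h=1e-5):
    """Residual of x^2 f'' + 2x f' + (x^2 - n(n+1)) f, via central differences."""
    d1 = (f(x + h) - f(x - h)) / (2 * h)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    return x**2 * d2 + 2 * x * d1 + (x**2 - n * (n + 1)) * f(x)
```

For a true solution the residual should be near zero (it won't be exactly zero because of the finite-difference approximation), while plugging in an unrelated function like cos(x) gives a residual of order one.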
Tuesday, March 31, 2009

Vitamin C

Vitamin C, also called ascorbic acid, has a number of important roles, including tissue growth and repair and acting as an antioxidant. It also increases iron absorption, so large doses of ferrous supplementation may need to be decreased in order to avoid iron toxicity. Vitamin C deficiencies can lead to depression. And scurvy. The proposed theory as to why supplementation of vitamin C might aid in alleviating depressive symptoms is that ascorbic acid is involved in the pathways making the neurotransmitters dopamine, serotonin, and norepinephrine. Since serotonin is low in depressed patients, increased ascorbic acid intake may help increase the levels. Despite the large number of sites on the interweb claiming vitamin C fights depression, I had a difficult time locating articles on the subject in the medical database. The recommended daily intake is 75-2000 mg/day (the Vitamin C Foundation recommends 3000 mg/day and other sources recommend doses in the several thousands). Good plant sources are red pepper (190 mg each), broccoli (90 mg/cup), large orange (100 mg each), and spinach (90 mg/cup). References: 1, 2, 3

Monday, March 30, 2009

I find these lists of “How To” get better irritating when I read them because it’s never as easy as the bullet point notes imply. This article has been on my waitlist for a while as I wasn’t entirely sure what to do with it. Then this morning, very early, at an hour I would never usually be awake at if I didn’t try, I had one of those really obvious epiphanies which you are certain everybody else knew about all along. Motivation is not synonymous with eagerness (I checked in the thesaurus just to be sure). Eagerness is excitement, fervour, impatience. A motive is an incentive, reason, purpose, inspiration. Certainly eagerness aids in motivation, but it is not necessary.
Rather, eagerness is something that is developed through motivation; the more you work at something, the more you enjoy it, and the more you want to continue doing it. Yet, I think eagerness can be what motivates a person with a mood or personality disorder as excitement is sometimes easier to conjure up than purpose, which tends to be lacking. But eagerness needs motivation to be sustained and if it fades too quickly, it becomes a reason to not engage in an activity; if you’re not excited about it, why should you bother, right? So instead your depression has increased your motivation to remain in that state. This is not an easy cycle to break. It takes time and work. That said, for those with more severe depressive symptoms who do not have a reason to do anything, eagerness might be a necessary starting point – motivation through exposure. Depression is not as constant as it appears to be. The immutable state is an illusion of the depression. When you find yourself feeling better, make note of it, without thinking about it. This will give you evidence that there were times when things were better and material on which to build motivation. Of the following list, accepting setbacks is probably the most important. If you accept setbacks as a natural occurrence in the process of any undertaking, then your illness will have a more difficult time convincing you that the regression is an indicator of failure.

Staying Motivated:

* Find a way to personalise the activity. If your goal is to go running every morning but you really hate running, try cycling or rollerblading instead. Or try running in different locations, explore other neighbourhoods, run along the beach or on trails, run at a track, or use a treadmill.
* Recognise progress, celebrate success, and reward yourself. The reward does not have to be material, maybe you allow yourself to read and have tea for one hour instead of cleaning house.
* Ask for help.
Get some tutoring, take classes, discuss on a forum, talk to your therapist, and ask friends or strangers questions (people like to help others with the same interests. Asking a stranger for help, even if you make up the question, can help you develop your social network).
* Share your goals, efforts, and progress with others.
* Have a role model. When you feel you need a bit of a push, ask yourself, “What would (insert name of role model here) do?”
* Have short and long term goals. Break larger goals into smaller pieces. Accept setbacks and readjust short term goals as necessary.
* Keep a daily priority list of things to do and do at least 3 of them a day.
* Help others in the same area. You will gain a new perspective on things when approached from a different angle.
* Don’t overdo it. Zealousness is great, but keep your goals realistic. This doesn’t mean settling for less, only don’t make becoming an Olympic athlete a top priority if you don’t know how to swim.
* Think of the activity as a prescription, a medicine you take regularly to help manage symptoms.
* Be cautious of excuses which may seem more dramatic than they are (your depression will try to convince you otherwise). Do the activity anyway. Don’t ignore your feelings but work through them in a mindful manner.
* Keep a journal of your efforts and outcomes. Make note of time of day, day of week, extenuating circumstances, how long you spent on the activity, perceptions before you start and after you’ve completed the task.
* Enforce a ten minute rule. If you’re having one of those days where you want to do something, but are having trouble finding something or deciding what to do, allow yourself ten minutes to contemplate the options and when that ten minutes is up, if you haven’t made a choice yet, pick any activity and just do it without any more consideration.
* Add more desired activities, or devote more time to a favourite, to your daily schedule as you gain more energy and start to feel better.
Sunday, March 29, 2009

Psychology GRE Study Guide Page 6

Oh my god, I’m bored. Do you realise the majority of the GRE is based on first year material? I’m too lazy to link all of the previous pages, but you can find them in the archives and under the label “GRE”. Page 1 has a link to the official Practice GRE from which this guide is developed.

a. Resisting persuasion is to defend oneself against attempts at manipulation.
b. Group polarisation is when a group’s dominant view, usually determined by majority, increases in strength over time. Also related is groupthink, where members of a group so intensely seek consensus that they ignore other views.
c. Fear arousal is frightening people into compliance or into a desired behaviour.
d. Halo effect is when an opinion about one object influences opinions in the same direction on related objects. For example, if a kitten is furry and I like kittens, then if I dress in kitten fur, I will be liked.
e. Two-sided arguments state two different points of view, pro and con. A one-sided argument presents only the pro side.

44. Binet and Simon were commissioned by the French government way back when to develop a test that would identify children who were more likely to encounter difficulties with the school curriculum. As such, the test was designed to measure memory, reasoning, and verbal comprehension in order to determine the child’s mental age by comparing each individual score with the average of an age group. The Binet and Simon test eventually evolved into the Stanford-Binet IQ test.
a. Crystallised intelligence is knowledge gained through experience. Fluid intelligence is a person’s innate ability to reason and problem solve.

45. The law of effect, developed by Thorndike, is that when an event is followed by a rewarding experience, the event will be carried out more quickly in subsequent sessions (such as a rat receiving food for successfully completing a maze).
a.
Skinner was an American psychologist, a behaviourist, who focused on operant conditioning, which is when a subject (rat) operates on a mechanical device (lever) and an event occurs (pellets come out). b. Thorndike developed animal intelligence experiments leading to the idea of instrumental conditioning (the animal learns behaviour because of a reward and this behaviour is performed more quickly each time). (In classical conditioning, the animal does not need to perform a behaviour, but is presented with an external stimulus, to receive a reward). c. Dewey was a founder of pragmatism and functional psychology, which regards mental health as an active adaptation to the environment. He also worked with visual perceptions. d. Wertheimer was one of the founders of Gestalt psychology. e. J.B. Watson was a behaviourist. 46. Lewis Terman’s study of gifted children found that children with a higher IQ were generally taller and in better physical and mental health. He developed the Stanford-Binet IQ test and believed IQ was inherited. a. A longitudinal study is a design such that the same people are studied repeatedly. b. A cross-sectional study is a design in which people of different ages are compared. c. An experimental study is such that the investigator alters aspects of the test in order to observe the result. The aspect of the environment/test that is altered is the independent variable. The measured behaviour is the dependent variable. d. A quasi-experimental study is an experimental study that does not include random assignment to groups. e. A qualitative study is an intellectual inquiry without quantitative, numerical, scientific, evidence. 47. The neurodevelopmental hypothesis states that impaired cognitive abilities lead to impairment of second-order cognitive processes (memory, emotion) and may result in schizophrenic traits. 48. 
Eleanor Gibson is known for the visual cliff experiment in which infants demonstrated depth perception by avoiding an apparent cliff (a table with a glass extension); even when encouraged by their mothers, the infants would not cross onto the glass. She concluded perception to be a learning mechanism. Similar studies were done with kittens! 49. The procedure of determining attachment developed by Ainsworth is called the strange situation test and can be used to classify children (10-24 months) into three groups – secure, resistant, and avoidant attachment. The test is conducted in a clinical setting by observing child and parent in a secure environment and then adding stressors and observing how the child responds (when strangers are near, parent leaves). 50. Yay, brain stuff!!! a. The foramen of Monro is a channel connecting the lateral ventricles to the third ventricle. b. The medulla and the pons are located in the hindbrain and serve as pathways for neural impulses from the spinal cord to the brain. Functions include life support such as sneezing, heart rate, vomiting, breathing, and blood pressure. c. Broca’s area is concerned with language. d. The hippocampus is associated with emotions and memory. e. The thalamus relays sensory information to the cerebral cortex, regulates sleep cycles, and regulates arousal. Friday, March 27, 2009 Art Appreciation Metaphor If you stand too close to a painting, you only get to see a small piece of it. If you stand too far away, you never get to appreciate the detail in the work. You might approach a particular painting and, from a distance, anticipate that you won’t enjoy it when you get up closer, but when you do get closer you start to see there are more colours in it than you could see from further away. And as you get closer still, you start to see the fine detail in the brush strokes and you can start to understand, or muse on, why the artist chose to paint the picture in that way. 
The more time you spend observing the painting, from all different distances and angles, the more colour and detail you begin to see: small animal tracks in the snow, a bird hidden behind some leaves in a tree. One thing about art, and life, is that the process and the movements of the process need not always be intentional and deliberate in order to achieve a work of beauty; sometimes mistakes and errors can take the painting, or life, in a new and surprisingly pleasant direction. But a great painting can’t be created on impulse alone; even the most abstract works need some intention behind them. You have to choose your colours, brushes, medium, size, and surface. And it’s the small, intentional details of a painting, those three tiny red dots in the corner that don’t quite fit with the rest of the colour scheme, that can evoke the most intimate feelings. And that’s just one painting. The entirety of a life is composed of a whole gallery of artworks. On the first floor of your gallery might be more traditional and classical works, a homage to your past, perhaps. On the second floor might be very contemporary works, a place where new things and ideas in and about your life are developing. On the third floor you might house your abstract works, paintings of non-literally expressed emotions which are more mature in content and method than the second floor, but lack the constraints of the classical works on the first floor. And up on the top floor is your studio; a place full of ideas, materials, and potential. In regard to the fourth floor, I think cognitive behavioural therapy could stand in for ideas, pharmacotherapy or other treatments for materials, and potential, well, that’s inherent in everyone. Thursday, March 26, 2009 Psychology GRE Study Guide Page 5 35. Children begin to form two-word sentences at about 1yr. a. Pretend play is a fantastical, symbolic method of play beginning at about 2yrs. b. 
Conservation of volume relates to understanding the concept behind an idea like fluid displacement and usually begins at about 11yrs. (Conservation of number is achieved at about 6yrs and is concerned with understanding that the number of objects remains the same when oriented differently.) c. Metalinguistic awareness is the interaction between language and written text, especially in bilingual literacy development. In children, it refers to the ability to concentrate on sounds and language patterns, usually beginning at school age. d. Visually guided reaching is the control of reaching behaviour towards an object of particular size and depth. It is said to occur in infancy, but I came across one study questioning the actual mechanism of infant reaching. e. The palmar grasp reflex occurs the first week after birth and is so strong the baby can support its own weight. a. An ill-defined problem has no clear goal, start, or evaluation method, such as the search for happiness. b. A systematic random search is when possible solutions are tested in relation to sets of rules. c. A confirmation bias is when one seeks out evidence to support their hypothesis. d. Functional fixedness is the tendency to see objects and their functions in a fixed and typical way. A person with this view might see a spoon as only an eating utensil and not as a shovel that can be used to bury a dead cat. Mental sets are established patterns of perception and thought which are usually an effective problem solving strategy. e. The framing effect is when an option presented in a different way alters a person’s decision. a. Automatisation is the process of learning by which the subject is first learned with full, conscious effort and then later using connections to make recall automatic. b. Second-order conditioning is using a conditioned stimulus to condition a second signal. 
An example is pairing a bell with food for a dog, which conditions salivation, and then pairing a rock with the bell, which will also induce salivation. Finally, when the dog is presented with the rock alone, salivation will occur. Second-order associations are used in marketing by pairing something desirable, a box full of kittens, with something else, a used tissue. c. Sensory preconditioning refers to an association between two (or more) stimuli before conditioning. An example would be exposing a kitten to both a light and a bell. Afterwards, the kitten is conditioned to clean its own cat box at the sound of the bell. However, upon being exposed to a flash of light, without direct conditioning, the kitten will start scooping its own waste. d. Chaining is the reinforcement of behaviours occurring in a subsequent fashion. e. Autoshaping is a Pavlovian-type conditioning experiment, but done with a light and pigeons instead of a tone and dogs. 38. Schemata are clusters of information and facts that are organised in a knowledgeable way into structured relations. An example is understanding that a towel is not only a piece of cloth, but is used to dry oneself after showering or swimming, needs to be hung to dry properly, and is usually found in the bathroom. In terms of memory, schemata make recall more efficient since you only have to remember general knowledge. However, they can also lead to false memories because of these general associations. a. Connectionism models behaviour based on the emergent processes of neural networks. b. Information processing systems are physiological brain structures that have developed to process environmental information in order to solve problems. Cognitive psychologists use this system to explain behaviours. c. The ACT model (Anderson’s Adaptive Control of Thought) is a cognitive architecture that models memory as networks of declarative and procedural knowledge. d. 
The Atkinson-Shiffrin model states there are two types of memory storage, long-term and short-term. Sensory memory was added later as a third category. e. The encoding specificity theory states that for material that is to be learned for later recall, there must be a connection made between the cue and the material at the time of learning. If you are trying to remember the Schrödinger equation and are using a picture as a learning aid, it might be helpful to think of a cat in a box. Or, if you are trying to remember the word ‘vase,’ you may associate that word with the word ‘flower.’ Whatever the cue is, it has to be learned at the time of memorisation, otherwise there is no specific connection between the cue and the material in the brain. a. Eidetic imagery is often called photographic memory and represents total recall of a previous experience. b. Proactive inhibition is a theory of forgetting in which old memories inhibit the recall of new ones. Retroactive inhibition is when new memories interfere with the recall of older ones. c. The complexity of expression phenomenon is the relation of an event using more complicated language structure and incorporating more aspects into the description. (I made that up, but it sounds right enough.) d. The tip of the tongue phenomenon is knowing something that can not be recalled. e. The template model is the storage of knowledge in easily accessible templates, representations of object categories. 41. A morpheme is the smallest unit that carries meaning, usually consisting of single words, prefixes, and suffixes (‘tie’ has one morpheme and ‘untie’ and ‘ties’ have two). A phoneme is the smallest significant sound unit in speech. Babies have many more sounds than adults and phonemes vary between languages (in English, a speaker does not differentiate between the ‘p’ sound in ‘paw’ or ‘stamp’). 42. 
The bystander effect is the reluctance to help another when others are around because of diffusion of responsibility and the belief that someone has already, or will, assist. A related idea is social loafing, which is the tendency to put less effort into a task when working in a group rather than alone. Monday, March 23, 2009 Vagus Nerve Stimulation The vagus nerve (left and right), or Cranial Nerve X, is the tenth cranial nerve (there are twelve in total). It begins in the brain and travels down through the chest to the organs. The functions of the vagus nerve include carrying afferent sensory and motor information and autonomic control of viscera (digestion, heart rate). VNS is another electrical brain stimulation technique. The difference between this method and ECT, TMS, and MST is that the device sending the electrical impulse is surgically implanted in the chest. A wire connects this device, the pulse generator, to the left vagus nerve in the neck. The device is activated by a physician after implantation (a short surgery) to deliver frequent, short impulses (30 seconds of stimulation every 5 minutes) automatically. A magnetic device is also supplied so that the patient can turn off the stimulation by holding the magnet over the device when necessary. It is used primarily to treat epilepsy and treatment resistant depression. Many research studies suggest promising effects of VNS on TRD, especially over the long term. As well, there is evidence that VNS improves sleep cycles, which are associated with mood. These studies each have their problems, as usual, causing hesitancy in the medical community. Side effects include obstructive sleep apnea, laryngeal problems (changes in voice), coughing, pharyngitis, and bradyarrhythmias. There have been no documented cognitive side effects. The procedure is also safe to use during pregnancy. Sunday, March 22, 2009 Psychology GRE Study Guide Page 4 26. 
A sign stimulus is an evolutionary, external, environmental stimulus that elicits a specific patterned behaviour, such as behavioural imprinting. 27. Primary prevention is education on how to prevent certain ailments from occurring. Secondary prevention is the early identification of risk factors, screening. Tertiary prevention is the treatment and containment of an illness once it has begun. a & d. Antidepressants are not a guarantee of long-term recovery; their effects vary with dose, interactions with other drugs (including vitamins), and the individual. Relapse is common in treatment resistant depression. Many people need to try a few different antidepressants before finding one that is suitable. c. Personality is a complicated collection of beliefs, experiences, and personal history. e. Side effects vary between the different medications, but are common. 29. A behavioural approach is designed to change behaviours, as opposed to thoughts, through the application of learning principles (e.g. desensitisation for phobias). Only one of the answers is non-cognitive. a. The expectancy theory explains the choice-making process of an individual; it predicts that employees will be more motivated when they believe more effort will result in better performance and better performance will lead to work-related and personally valued rewards. b. The balance theory is proposed to understand a person’s drive towards psychological balance. It involves assigning a negative symbol to disliked objects and a positive symbol to liked objects and then multiplying these signs for inter-related objects. c. The social comparison theory explains self-evaluation processes of an individual in comparison to a desirable social group. d. The equity theory concerns an individual’s perception of fairness in relationship/social exchanges. e. A drive is a psychological state arising from a physiological need, such as thirst, in order to restore homeostasis. 31. 
A preposition is a word that comes before a noun or a pronoun. a. Descriptive means to describe, outline. b. Prescriptive is a rule or guideline. c. Orthographic means concerned with spelling. d. Pragmatic means concerned with practical considerations or consequences. e. Semantics is the study of meaning in language. a. Contextual retrieval cues include visual aids such as graphs and punctuation as well as linguistic and semantic clues. b. Retroactive interference is when the formation of new memories inhibits the recovery of older memories. Proactive interference is when old memories interfere with the formation of new memories. c. Decay is the idea that memories are forgotten with the passing of time. d. Learning is learning, not remembering. e. Motivated forgetting is synonymous with repression and is a defence mechanism used to push unwanted, traumatic memories out of consciousness. a. Afferent means leading towards the CNS from an organ or part. b. Efferent means leading away from the CNS towards an organ or part. c. Dorsal describes a position towards the back; dorsal fin. d. Ventral describes a position towards the abdomen. e. Frontal, anterior, describes a position towards the front; forehead. Posterior is towards the rear; tail. 34. Erikson’s stages of personal identity development are (the slashes indicate ‘vs.’): i. Infancy/childhood: trust/mistrust (birth-1yr), autonomy/shame (1-3yrs), initiative/guilt (development and decision making regarding the carrying out of plans, 3-6yrs), industry/inferiority (ability and competency issues, 6-12yrs). ii. Adolescence/young adulthood: identity/role confusion (adolescence), intimacy/isolation (young adulthood). iii. Adulthood: generativity/stagnation (concerns over contributions to younger generations), integrity/despair (regarding one’s life, successes and failures). Friday, March 20, 2009 Quantum Tunnelling Metaphor I very much dislike when people use quantum physics to explain…whatever they want. 
But I think I can get away with it here as I am using quantum tunnelling only as a metaphor. To start off, imagine a potential barrier of width L and height V₀. Now consider a particle of energy E described by wave mechanics, with wavelength λ = 2πħ/√(2m(E − V₀)). For E > V₀, the wavelength is real (because the quantity under the square root is positive). However, the change in potential causes an increase in wavelength, and a change in wavelength means that the particle is both reflected from and transmitted across the barrier. For E less than V₀, the wavelength in the region of the barrier (region II) is imaginary (the quantity under the square root is negative). Because the wavelength is imaginary, the wave function decreases exponentially inside the barrier. If L is large (infinite), the wave will not cross the barrier. But if L is small, the wave will resume at the other end of the barrier, although with a decreased amplitude. In my metaphor, the wave function is the patient and the barrier is the depression (or other illness). More severe depressions will have larger barriers. Classically speaking, a particle would not be able to pass through the barrier. However, we have just seen that the seemingly impossible is possible from a different perspective. I would say the classical perspective is the way a patient views their situation through their depression. Inside the barrier is where the therapy occurs, working in and through the depression. While in the depression, it may seem like there is no way out as the wave function decreases, especially if the barrier is large. However, both psychotherapy and pharmacotherapy can decrease the width of the barrier, decrease the length and intensity of the depression. 
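The barrier-width dependence that drives the metaphor is easy to check numerically. Below is a minimal Python sketch of my own (not from the original post), using the standard thin-barrier approximation T ≈ e^(−2κL) with κ = √(2m(V₀ − E))/ħ, in units where ħ = m = 1; all the specific numbers are arbitrary.

```python
import math

def transmission(E, V0, L):
    """Approximate tunnelling probability T ~ exp(-2 * kappa * L)
    through a rectangular barrier of height V0 and width L,
    valid only for E < V0 (units with hbar = m = 1)."""
    kappa = math.sqrt(2.0 * (V0 - E))  # decay constant inside the barrier
    return math.exp(-2.0 * kappa * L)

E, V0 = 1.0, 2.0          # particle energy below the barrier height
wide, narrow = 5.0, 1.0   # a "severe" vs. a "treated" barrier, per the metaphor

# Narrowing the barrier raises the odds of getting through it.
print(transmission(E, V0, wide))    # very small
print(transmission(E, V0, narrow))  # several orders of magnitude larger
```

Shrinking L from 5 to 1 here raises T enormously, which is the whole point of the metaphor: therapy does not remove the barrier, it narrows it.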
And when the patient does emerge on the other side, because L isn’t infinite, they may interpret the attenuated amplitude as meaning they are less of their original self when, in fact, the variations in their mood are now more stable instead of fluctuating into extreme values as they did before they passed through the barrier of depression. Physics saves lives. Oh, there is also an energy barrier in a game called Rune Village which keeps unwanted souls from entering the port, but to pass through that barrier all you have to do is give the guards some money. Thursday, March 19, 2009 Five-Factor Model The FFM is a tool used to describe personality based on five different categories. It was developed in the 1930s using the lexical hypothesis, which states that all of human emotion can be encoded into language. Some scientists went through the English language and extracted all words relating to emotion and then reduced this list to words they felt were most descriptive and pertinent to describing human emotion, and this list was further reduced over the years to the Big Five. The Big Five categories are: 1. Openness/Intellect – interest and appreciation of ideas, art, experience 2. Agreeableness – compassionate, cooperative 3. Conscientiousness – self-reliant, purposeful, sense of duty, planned experiences 4. Extraversion – energetic, socially outgoing, positive emotions 5. Neuroticism – emotional instability, negative emotions Criticisms of the FFM include that it is not theoretically based, that it does not describe all of human emotion, that the Big Five are not linearly independent, and problems with the methodology. Some other scientists compared five-factor profiles to the ten personality disorder categories in the DSM-IV and concluded there was a correlation between a particular five-factor profile and each of the personality disorders, in the DSM – FFM direction. 
But a study just released in Am J Psych (1) examining the clinical utility of the FFM shows that psychiatric workers (psychiatrists, clinical psychologists, and social workers) had a more difficult time making a DSM diagnosis based on the FFM. “We emphasize that our goal was not to compare the DSM-IV and the FFM in the exact format proposed to be adopted and determine which system excels… We acknowledge that the current methods do not experimentally control for all possible differences between the DSM-IV and FFM (e.g., clinicians’ familiarity with the systems)…. by not overcontrolling for practicing clinicians’ current understanding of the FFM, the results identify consequences that normal clinicians would face if the FFM replaced the DSM-IV axis-II diagnoses. Overall, any potential descriptive system to be incorporated into the DSM-V should take into account not only validity, but also clinicians’ ability to reason with the system.” References: 1, 2 Wednesday, March 18, 2009 Types of Love Depending on the source, there are 3, 4, 6, or 9 types of love. But my list first: 1. Requited love: mutual, romantic, sexual, committed, secure. This is probably the best. 2. Unrequited love: one-sided, possibly sexual, non-secure. This one breaks hearts. But on the positive side, you loved, and something good almost always comes out of that. 3. Enough to not tell my/your significant other love: because monogamy is unrealistic (I have heard of people who have done it though). Healthy, caring, non-committed. 4. Transference love (not therapists): because you remind me of the person I would rather be with. Can still be happy, healthy, and committed. 5. Associative love: using a person to get closer to another or make someone jealous. 6. Pet love: that just sounds wrong. 7. Math love: ∫, ∏,µ, √, ф, ɛ, Ω, ∆, ℮ 8. Fantasy love: expressed towards an imaginary person, celebrity, or someone you don’t really know but think you do. 9. Self love: VERY important. Seriously. 
Confidence and self-esteem are two very powerful, positive forces for setting and achieving goals. 10. Instinctual love: when there’s no reason you can find for loving a particular person. 11. Google love: because the interweb will give you the answers to the questions the person you are searching won’t. 12. Thrill love: passionate, risk-taking, addictive. Not necessarily directed towards a person, e.g. skydiving. 13. Obligatory love: usually applied to family members. Pretending to love a person which therefore indicates care, but deep feelings are absent. 14. Desperate love: derived from loneliness and/or aging. May or may not develop into requited love. 15. Abusive love: when a person says they love you but is physically or psychologically abusive. 16. Summer Glau love: see #11. Also leads to hours of addictively viewing really bad Fox programming (just Fox, not Joss Whedon). *** see footnote. 17. Sandwich love: see the Qwantz comic. I call it Ice Cream love. May also apply to other foods, significant material objects, and immaterial sensory stimuli. The 3 types of love are: 1. Eros love is based on passion, sexual attraction and desire. 2. Philia is an interested, affectionate, strong liking with an emotional connection. 3. Agape love has its roots in Christianity and is a spiritual (not sexual), gentle, selfless love towards all. The 4 types of love are: 1. Security love: nurturing and caring; ideally, this love is the type parents have for their children. 2. Friendship love: this is the same as Philia love. 3. Romantic love: similar to Eros love; butterflies-in-your-stomach-love-at-first-sight love. 4. Unconditional love: this is a sort of romantic-agape love, unconditional. The 6 types of love are: 1. Eros 2. Ludus: an uncommitted, conquest-driven love which may include lying. 3. Storge: a friendship love similar to Philia which may include dispassionate sex. 4. 
Pragma: a pragmatic, practical, mutually beneficial relationship where sex may be viewed as a technical requirement. 5. Mania: an obsessive, jealous love. 6. Agape The 9 types of love are: 1. Affection: non-sexual, but touching and kissing may occur; caring and secure. 2. Sexual: sex and sex-related feelings; short-term. 3. Platonic: non-sexual, contented, trusting; like affection but without the kissing. 4. Romantic: rose petals and sunset walks on the beach love. 5. Puppy: youthful, innocent, short-term, infatuation. 6. Friendship: not sure how this is different from platonic love. 7. Committed: respectful, long-term, sexual. 8. Passionate: lust, sex, euphoria. 9. Infatuation: obsessive, blinded. *** typical conversation while watching The Sarah Connor Chronicles: “I don’t understand why they would do that. They obviously changed the story mid-shoot. They’re not even wearing the same clothes. Am I seriously supposed to believe that these time-travelling robot killers haven’t figured out that they shouldn’t use cell phones for communicating sensitive information?” “Just watch the show.” “But I have all my disbelief suspended, and this still doesn’t make any sense. How does a story even get this bad? And you know it’s not the writers, it’s just Fox ruining what would otherwise be a kick-ass show.” “Do you wanna watch another episode?” “Yeah. Of course.” Tuesday, March 17, 2009 Psychology GRE Study Guide, Page 3 Also, feel free to email me with questions. Page 1 - question #1 updated Page 2 a. Synchrony means occurring at the same time. b. Proximodistal development means that physical development occurs from the inside (near) out (distal); the spinal cord develops before outside regions of the body, and arms develop before hands. c. Reciprocal socialisation is a bidirectional process whereby parents and children socialise each other. d. Symbiosis is two organisms living together with either one or both members benefiting from the attachment (parasites). e. 
Insecure attachment can be either avoidant (child is not concerned with the coming or going of the parent and may avoid contact) or resistant (child is very upset when the parent leaves and is not easily comforted upon reunion). Secure attachment is when a child is slightly distressed in the absence of a caregiver and is easily reassured upon reunion. 18. Jean Piaget believed everyone is born with a tendency to organise their environment in a meaningful way. He also believed that children think differently than adults. In particular, children’s views of the world are inaccurate, and their schemata (models of the world) change with children’s reasoning errors. Piaget suggested that the ability to correct errors in schemata is based on assimilation (fitting new experiences into existing schemata) and accommodation (modification of schemata based on new experiences). Piaget believed there were four stages of childhood development. These were sensorimotor (birth-2yrs, schemata based on sensory and motor information), preoperational (2-7yrs, more abstract and symbolic thought-absent objects), concrete operational (7-11yrs, verbalising, visualising, and mental manipulation), and formal operational (11-adult, mastery of abstract thinking). Piaget allowed for variations in the timing of how children progress through these stages, but held that the stages occur in sequence. 19. This one relates to #18, where a young child’s view of the world is still developing and they are still in the process of accommodating new experiences. In assimilation, the child will take something new, say a bowl, and fit it into their current view by categorising it as a cup because both are round with an open top. 20. Epstein believed that to get an accurate estimate of a personality trait, you need many observations. The psychometric approach aims to identify stable individual differences by analysing large groups of people with various tests. 21. Heterozygous means carrying two different alleles. 
In this case, one allele from each parent must be recessive. In order for a recessive trait to be expressed, the person must have two of the recessive alleles (one from each parent). There are four possible combinations of two alleles from each parent. The probability that a specific trait is expressed is (the number of possible combinations of alleles allowing for that expression)/(the total number of possible allele combinations). In the figure below, 'A' is the dominant allele and 'a' is the recessive allele. The probability that the dominant trait is expressed is 3/4. 22. These are pretty straightforward terms. One thing to notice is that A, C, D, and E are all types of learning, which is what is needed for development in any area. 23. Psychosexual stages of psychoanalytic theory are: oral (birth-1yr; pleasure from suckling), anal (1-2yrs; pleasure from defecation), phallic (3-5yrs; attention to genitals), latency (5-puberty; suppressed sexual feelings), genital (puberty; appropriate sexuality). 24. The right hemisphere of the brain is involved in spatial tasks, such as assembling a puzzle or orienting an object in an environment, and in emotional processing. It is more engaged in fantasy and music. The left hemisphere is involved in verbal tasks. It is more engaged in analytic and rational tasks such as math. 25. The reticular formation is situated in the hindbrain. It is linked to life-support functions such as control of breathing, heart rate, vomiting, sneezing, blood pressure, and coughing. b. Olfaction and gustation refer to smell and taste, respectively. e. Homeostasis is the process by which the body maintains a steady state and includes temperature regulation and fluid balances, among other things. Monday, March 16, 2009 Vitamin B Vitamin B comes in many varieties. Three of those, B6, B12, and folate (B9), have been related to depression. B6 in its active form is called pyridoxal 5′-phosphate (PLP). 
PLP, through some biochemical reactions, is related to serotonin levels, and serotonin is related to moods and sleep cycles. It has been shown that depressed patients have lower levels of vitamin B12, PLP, and folate. However, B6 studies are still relatively new in the exploration of effects on depression. A couple of studies (5, 6) have shown positive, but inconclusive, effects of B6 treatment, as an antidepressant augmentation, in patients with schizophrenia and depression. A recent study of depression in the SUN cohort (1) demonstrated an association of depression with low folate levels in smoking men and low B12 levels in women. The study did not find any significant association with B6 levels. Similar results were found in another study (2) where low folate levels were associated with depression in Japanese males. Yet another study (3) also showed a significant association between low folate levels and depression; this study further hypothesises that insufficient folate levels may be a consequence, rather than a cause, of depression and that there may be some sort of negative feedback cycle with low folate further decreasing appetite. Folate and B12 are involved in biochemical reactions which affect neurological functioning (7). In the relationship between these vitamins and depression, deficiencies are markers for elevated homocysteine levels, and elevated homocysteine levels are associated with depression. A 2008 study (4) of the effects of B6 on depression suggested that the intake method of B6, dietary versus supplemental, may be significant, but that more studies need to be done to verify the hypothesis. The recommended daily intakes of B6, B12, and folate are 1.3-2.0mg, 2.4 µg, and 400 – 1000 µg, respectively (for both men and women). Sources of B12 include beef (3oz-2.1µg), salmon (3oz-2.4µg), milk (8oz-0.9µg), and cheese (brie, 1oz-0.5µg). 
Sources of B6 include fortified cereal (3/4c-2mg), baked potato (0.7mg), banana (0.68mg), chick-peas (1/2c-0.57mg), and chicken (half breast-0.52mg). Sources of folate include fortified cereal (3/4c-400µg), spinach (1/2c, cooked-100µg), and black-eyed peas (1/2c-105µg). Sunday, March 15, 2009 Vitamin A Vitamin A comes in both fat-soluble (from animal sources) and water-soluble (from plant sources) varieties. Lipid-soluble vitamin A is stored in fat cells and in the liver, and excess can lead to toxicity, whereas water-soluble provitamin A is more easily excreted. Vitamin A can be found as retinol (an alcohol), retinal (an aldehyde), or retinoic acid (RA). Deficiencies or excesses of vitamin A can be detrimental to neurological health. Isotretinoin, a synthetic retinoid used to treat severe acne (and other dermatological symptoms), can cause psychiatric side effects including depression and suicidality, especially in teens, who are more likely to be taking the drug. A recent study (1) which reviewed publications linking psychopathology to isotretinoin found many studies with convincing evidence of the link, though with flawed methodologies. Other studies showed no correlation. The author does point out the obvious fact that a dermatological condition can also be a cause of psychiatric symptoms. It is believed that isotretinoin’s effect on psychopathology works through its many effects on neurotransmitter systems (2). The pathways are complicated, but evidence indicates a correlation between RA pathways and Alzheimer’s, schizophrenia, and the serotonergic system (sleep and mood). The recommended daily intake of vitamin A is 900 – 3000 µg for adult males and 700 – 3000 µg for adult females. Some sources of vitamin A are liver (6500 µg, and gross), carrots (835 µg), sweet potatoes (709 µg, I still don’t know the difference between a sweet potato and a yam), spinach (496 µg), and broccoli (31 µg, though the leaves have much more content, 800 µg).
References: 1, 2 Saturday, March 14, 2009 Happy Pi Day! Today is pi day. I thought I should write something relating to psychiatry and numbers, but I have a pi-related event to attend, so I will just say that everything in life can be represented by numbers. Each individual has a series of numbers associated with their identity – driver’s license, hospital records, phone number, age… For the record, I met the goal set in my earlier post and memorised the first 100 digits of pi. It turned out to be a pretty easy task. Also, a colleague of mine has devoted an episode of his web comic, The Skeleton Show, to pi day. Happy Pi Day Everyone! Thursday, March 12, 2009 Therapeutic Writing There have been a few studies (two published quite recently) which have shown that patients who completed writing homework exercises exhibited fewer anxiety and depression symptoms, as well as greater therapeutic progress, than patients in the writing control groups. One of these studies also showed an efficacy of structured writing comparable to that of CBT alone. There have also been studies investigating the physiological effects of expressive writing; some of those findings include a lower heart rate and lower blood pressure in the emotionally expressive groups. Writing can help identify thematic stressors as well as give insight into thought processes and problem-solving strategies, which may be impaired in the presence of a psychiatric disorder but can be remedied once they are recognised. Writing can also be a way of divulging secrets while still keeping them safe. Writing is common in CBT, but it is usually highly structured. The reason for this format is that it forces the patient to spend only a certain amount of page space in their usual thinking patterns and then to use the rest of the space to analyse and restructure their thought patterns.
A very simple exercise is to divide a page in half by drawing a line down the middle; on the left side of the page one can record a negative thought, but this thought must be challenged on the right-hand side of the page. For example, on the left side there might be a comment such as, “I am always late for work,” and on the right side this might be balanced with, “I have only been late for work twice this year and both times were due to unusually high traffic.” This kind of task might be very difficult in the beginning. If someone has suffered chronic depression, even the positive side might come out sounding negative or cynical, but it will get easier the more the exercise is done. It is important for the therapist to be supportive during this learning phase and acknowledge the patient’s effort. It is also important for the patient to listen to criticism on how their homework can be improved. I certainly do not argue against the benefits of structured CBT writing, but I also know that thoughts don’t come in discrete units. Sure, it’s nice to put things neatly into columns, but there is always that possibility of a something-is-missing feeling. As a balance to other therapies, where the patient may feel disconnected from their work and therefore abandon it, journal or creative writing can be helpful. While writing can be a healthy, cathartic process giving the person a sense of control, I still think there needs to be some structure to the writing, at least in terms of content. The patient should spend an equal amount of time on positive writing, but the positive passages can be placed anywhere in the piece. As well, I do not believe it to be particularly healthy for someone to write about the same issue repetitively or at length. I have my reservations regarding the Pennebaker protocol, simply because of the intensity of negative emotions that will likely be experienced, and there is no balance during those four days of analytic catharsis.
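The two-column exercise can even be sketched as a tiny data structure. This is my own toy illustration, not part of any CBT manual: each negative thought is stored only together with its balancing challenge, so an "unchallenged" thought simply cannot enter the record.

```python
# Toy sketch (my own, not from any CBT manual) of the two-column thought
# record described above: every negative thought must be paired with a
# balancing challenge before it can be added to the record.
def format_record(pairs):
    """pairs: list of (negative_thought, balanced_response) tuples."""
    return "\n".join(f"{neg} | {pos}" for neg, pos in pairs)

record = [
    ("I am always late for work.",
     "I have only been late twice this year, both times due to traffic."),
]
print(format_record(record))
```

The point of the pairing is structural: the data type itself enforces the rule that no negative thought stands alone.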
For those in psychiatric treatment, I would recommend only one day of writing before their next session, and I would recommend not tackling events that are too traumatic to deal with now (you can come back to these situations later if you decide to). And, as I have said, any negative writing should be balanced with something positive, even if it’s fiction. Tuesday, March 10, 2009 Well, not quite, though the literal interpretation could be applied to the metaphor. It’s not my metaphor; someone said to me today that a person who engages in a variety of short-term activities is akin to a promiscuous man sleeping with many women because he’s trying to find the best mate. My replies to any conversation have a significant delay period, so I didn’t respond, but it bothers me when people are wrong. First off, the con-arguer in this debate is implying that pre-marital (or marital, if you really want to push the boundaries here) sexual promiscuity is a negative behaviour. That would be like me saying that waiting until you’re married before engaging in sexual intercourse is wrong. I might not think it’s the best approach, but there’s certainly nothing wrong with it. Just as there is nothing wrong with enjoying consensual sex with multiple partners. Sexual compatibility is extremely important in a relationship. And I would say that sexual compatibility is even more important when one or both people have experienced a trauma. Now, the metaphor. Similarly, I believe it is healthy for a person to venture, non-committedly, into different interests. This is certainly the attitude towards academics; take lots of different classes during your first year or two to find out what you like. Would you call this person an academic slut? (I would, but only in jest.) There are so many things to experience in life, it seems a shame to pick one right off the bat and fully commit to it; that’s what leads to disappointed monogamy.
Likewise, with the traumatised person, and here the trauma can be any psychiatric illness, they may not have had many positive experiences in any activity. In which case, they may need to try smaller, safer activities, without the pressure of full commitment, in order to build confidence that they can enjoy, not just one or two, but many of the experiences life has to offer. Some difficulties may arise with the borderline patient or any avoidance behaviour, which is probably what the con-arguer was getting at, and in such cases I agree, but the metaphor from their point of view doesn’t hold up under detailed analysis. Monday, March 9, 2009 Psychology GRE Study Guide Page 2 My “free” internet signal has left me despondent these past couple of weeks. Yesterday, the signal returned, but only for a few hours. As a result, I was forced to use one of those book-thingies on my shelf. I had to reference things in the index and flip the pages (by hand!), but it was still faster, and more enjoyable and relaxing, than retrieving information from the interweb. Wait, unless you are studying for the GRE, in which case it’s easier to come here. Page 1 Page 2: 9. Protection-motivation theory was developed by Rogers. It describes coping strategies for perceived health threats using either a threat or a coping appraisal. The theory proposes that intended self-protection depends upon: perceived severity of the event, perceived probability or vulnerability, the efficacy of the recommended prevention (response efficacy), and perceived self-efficacy. 10. An attitude, according to social psychologists, is a positive or negative evaluation of or belief about something. It may affect behaviour. Attitudes can be broken down into cognitive, affective, and behavioural components. 11. Elaine Hatfield did some studies about the types of love. a. Agape love has its roots in Christianity and is a spiritual (not sexual) and selfless love towards all. b.
Companionate love is described more by trust and warmth than intense emotions. It tends to start later in the relationship and to be more enduring. c. Erotic love (eros) is sexual attraction and desire. d. Passionate love is an intense emotional state in which the individual is influenced by a powerful longing to be with the other person. e. Friendship love (philia) is an interested, affectionate, strong liking with an emotional connection. a. Problem-solving heuristics refer to learning by experiment. b. Linguistics is the study of language. c. Self-monitoring behaviour is the tendency to change behaviour in accordance with a situation. High self-monitors are more likely than low self-monitors to change their actions and beliefs in order to best suit their needs. d. Intrinsic motivation refers to situations where there is no external reward; the behaviour is rewarding in itself. e. The fundamental attribution error is the overestimation of internal factors and the underestimation of external forces when interpreting another person’s behaviour. 13. Self-esteem is a person’s own evaluation of their worth. There is only one answer here that is concerned directly with the individual and cannot be attributed to other environmental factors. 14. Adrenaline (epinephrine) is involved in autonomic responses to stressors, particularly the fight-or-flight response, increasing heart rate and available energy for a short period of time. Schachter and Singer found that the experience of emotion was determined by expectation and developed the two-factor theory of emotion, which includes autonomic arousal as well as individual interpretation of that arousal. Dopamine and serotonin affect sleep, mood, attention, and learning. 15. The correlation between two variables is related to the slope of the best-fit line; for standardised variables, the slope equals the correlation coefficient. 16. Paul Ekman did an experiment where people were asked to match photographs of provocative facial expressions to a number of emotion labels and found agreement on six.
(For the record, I don’t buy into this theory; I have more emotions than that in ten minutes). Sunday, March 8, 2009 Iceberg Metaphor Why is it that icebergs always represent problems? I guess problems need their metaphors too, but I’m going to change the symbolism anyway. The iceberg is you, in a solid, contained form. The tip of the iceberg is what depression allows you to see. The surface of the ocean is the depression; it doesn’t allow you to see how much more there is to you below, instead reflecting back what is above the surface, thereby convincing you that all that lies below the surface-depression is just more of what is above the surface. And so the tip of the iceberg, which is the negative distortions of depression, looks to be only a small amount of much more depression, when in fact there is only that small amount of negativity, while the rest of the iceberg, which is much larger, contains everything else. Actually, the tip of the iceberg is only about 1/8 of the total size. Icebergs melt in about a year, so you might not recover from your depression immediately, but each positive thing you do can aid in the process. The water inside the iceberg is fresh and not likely to be polluted. Though devoid of pollutants, icebergs do usually carry impurities such as volcanic dust and other terrestrial material. These impurities might be viewed as undesirable aspects of the self, when in fact, as the iceberg melts, these materials aid in significantly increasing biological life for up to two miles around the iceberg. So there are actually vast quantities of life below the surface of your depression, both contained within yourself and in the surrounding ocean, which can be thought of as everything else in your life.
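As a nerdy aside of my own: the commonly quoted "tip of the iceberg" fractions (anywhere from 1/8 to 1/10, depending on the ice) follow straight from Archimedes' principle, since the submerged fraction equals the ratio of the densities. A quick check with standard density figures (my numbers, not from the post):

```python
# Buoyancy check of the "tip of the iceberg" fraction.
# Standard density figures (my own assumption; real icebergs vary with
# trapped air and impurities).
RHO_ICE = 917.0        # kg/m^3, glacial ice
RHO_SEAWATER = 1025.0  # kg/m^3

# Archimedes: submerged fraction = rho_ice / rho_seawater,
# so the visible tip is the remainder.
tip_fraction = 1.0 - RHO_ICE / RHO_SEAWATER
print(f"fraction above water ~ {tip_fraction:.3f}")  # roughly one tenth
```

Denser, air-poor ice sits lower and purer figures closer to 1/8 come from lighter, bubbly ice; either way, most of the iceberg is below the surface, which is rather the point of the metaphor.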
And furthermore, the life within you, when released, will have dramatic positive effects on everything surrounding you - work, friendships, health, “thriving communities of seabirds above and a web of phytoplankton, krill, and fish below.” References: 1 Saturday, March 7, 2009 CBT with Euler’s Number In honour of the number e (because pi is getting so much attention these days, and also because e has larger digits in its beginning than does pi), here is a traditional priority-ranking exercise, but using the digits of e. The way it works is simple; for each digit of e, in sequence, you assign an activity of either pleasure or obligation to the number, with 0 being the least enjoyable or easiest to accomplish and 9 being the most enjoyable or difficult to accomplish. The catch is you have to use each digit in order. So if you have three activities that rank at level 1, you have to fill in all of the digits up to the third 1 (that’s 27 digits if you skip the leading 2). And no cheating by changing the rank of an activity to match the digit you need to fill. The point of the exercise is to help you recognise what you need and want to do during your day/week/month, as well as to help you recognise how much of your time is spent on less enjoyable activities. The next step in this type of exercise would be to track each activity throughout the week. You will probably notice some activities during this tracking period that were not included in your original list. If you do, make sure to take note of them so that you can place them with all the others. Eventually, you will move on to replacing the less enjoyable activities in your schedule with enjoyable ones. But for now, all you are doing is taking an inventory. I will post an example schedule soon. The activity you list can also be something you have wanted to do but haven’t for whatever reason. If you’ve always wanted to go skydiving, you can use that as one of your high-ranking activities.
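For the programmers among you, the digit-matching rule is easy to check mechanically. Here is a little helper of my own (obviously not part of any standard CBT toolkit): activity ranks, taken in order, must follow the digits of e after the leading 2.

```python
# Sketch of the digits-of-e ranking rule described above (my own helper).
# Activity ranks, in order, must match the digits of e, skipping the
# leading 2.
E_DIGITS = "71828182845904523536028747135266249775724709"  # after "2."

def follows_e(ranks):
    """True if the sequence of ranks matches the digits of e in order."""
    return all(int(E_DIGITS[i]) == r for i, r in enumerate(ranks))

print(follows_e([7, 1, 8, 2, 8, 1, 8, 2, 8, 4]))  # the first ten digits
print(follows_e([7, 1, 9]))                        # third rank breaks the rule
```

No cheating: the helper will happily expose a rank you quietly changed to match the digit you needed to fill.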
However, when you use desired activities to fill in the blanks, you have to make an honest effort to actually do them. From a philosophical point of view, the infinite digits of e can be thought to represent the infinite number of possibilities of things you may accomplish. So the exercise is also to force you to recognise the more important aspects of your life. And because e is irrational (its digits never settle into a repeating pattern), a very significant event can occur after many less significant ones, a metaphor for change and improvement from your current disposition. Most CBT exercises of this type encourage you to find ten activities, but allow for fewer. I think I am going to introduce the rule that you have to use at least the first 20 digits, but I challenge you to do more. 7 – Go for run in morning 1 – Check email 8 – Brush teeth before bed 2 – Brush teeth in morning 8 – Eat a proper dinner 1 – Make coffee to go 8 – Learn to do the splits 2 – Water plants 8 – Spend time with Summer Glau 4 – Read a novel That’s ten digits already, so I know you can easily fill at least twenty (you can break large tasks into smaller components if you want). e = 2.71828 18284 59045 23536 02874 71352 66249 77572 47093 69995 95749 66967 62772 40766 30353 54759 45713 82178 52516 64274 27466 39193 20030 59921 81741 35966 29043 57290 03342 95260 59563 07381 32328 62794 34907 63233 82988 07531 Thursday, March 5, 2009 Is it OK for Therapists to Cry (with Patients)? No. That was easy. Here’s why: • It blurs the line of the doctor-patient relationship, potentially putting the patient in a very awkward position. Does the patient now need to comfort the therapist? Of course not, but the patient might not be able to discern that, especially in their emotional turmoil, and this is a question they will inevitably be asking themselves. • Not crying during a distressing session is a demonstration to the patient that intense emotions can be managed and tolerated without being overruled by them.
• For the sake of the therapist, it is not healthy to be getting caught up in patients’ problems on a regular basis. • It de-stabilises the therapeutic alliance. If the doctor is emotionally shaken at one declaration by the patient, the patient might be more reluctant to share other information, whether because they are uncomfortable with the doctor’s crying or because they don’t want to upset the doctor. An exception occurs when the therapist makes an effort not to become emotionally caught up with the patient, but is physically unable to prevent tears (however, if this happens frequently, I would say there is an issue with the therapist). Crying is like sneezing; you might not be able to stop it (sometimes you can), but you can at least minimise its display. I myself have been deeply, and sadly, moved by some patients’ stories, and I have cried. But I did not cry in the presence of that person, because it was not about me, it was about them. Wednesday, March 4, 2009 Psychology GRE Study Guide Page 1 This is for all those psychology students out there who want to be professional psychology students. And since the GRE is required for most graduate programs, I will be going through the practice test page by page, explaining not only what the question is asking, but also each of the answer options. I will not be posting the actual answer, but this may still be something you want to look at after you’ve done the practice test, though I won’t be doing it that way. I figure by the time I’ve completed all 31 pages, I will have forgotten the earlier ones, and I would rather do a 31-page assessment of my knowledge after studying rather than before. 1. Transformational grammar uses grammar in a logical way to convey the meaning and thought behind a sentence rather than just as a structural tool. a. Roger Brown was an American social psychologist who studied paediatric linguistics, flashbulb memories, and the tip-of-the-tongue phenomenon. b. Alan Turing was a British mathematician, logician, and cryptanalyst. c.
Jerry Fodor is an American philosopher and cognitive scientist who philosophised much on language, believing that communication is achieved by a ‘language of thought’: cognition and related processes are plausible only when expressed as representational systems, and thought follows the same rules as language. d. B.F. Skinner was an American psychologist who focused on operant conditioning, which is when a subject (rat) operates on a mechanical device (lever) and an event occurs (pellets come out). e. Noam Chomsky is an American linguist, philosopher, cognitive scientist, political activist, author, and lecturer, and is known for being the father of modern linguistics. Chomsky is also known for critiquing the beliefs of Skinner, saying that there exists a language instinct in each individual and that language learning is more complicated than behavioural teaching, citing cases of development of language in the absence of structured or unstructured teaching. 2. A mnemonic device is an association between one object/word/poem and the relevant data. b. A teaching machine is a mechanical device to aid in learning, developed by Sidney Pressey and built again later by B.F. Skinner. d. The Gricean method and Illocutionary Force Indicating Devices are used to convey sentence meaning. 3. Countertransference is a Freudian concept describing when a therapist develops an emotional attachment to a patient. a. Psychodrama is, usually, a group therapy where personal conflicts are acted out. b. Psychoanalysis is a Freudian technique used to examine a patient’s issues by means of verbal communication (free associations, dreams…). d. Client-centred therapy is a non-directive, supportive, and validating approach developed by Carl Rogers. e. Behaviour modification is the modification of behaviours – reinforcing desired behaviours and punishing negative behaviours. a. Projection is when a person ascribes their emotions onto another person. b.
Reaction formation is when a person avoids a position by assuming the opposite position. c. Displacement is the deflection of an affect from one target to another. d. Compensation is when a person covers up perceived negative aspects of the self by excelling in another area. e. Rationalisation is when a person justifies behaviours/emotions through logical means. 5. Systematic desensitisation is the use of an increasing grade of agitating stimuli to accustom a person to the actual agitation-inducing stimulus. a & b. Rods are more sensitive to light than cones, are more numerous, and are not sensitive to colour perception. c. Cones do enable greater visual acuity (but remember, only when the lights are on). d & e. Foveal acuity is better than peripheral acuity. The fovea also has no rods and a high density of cones. a. Equilibrium is when competing forces in a system are balanced. b. Enervation means to weaken or destroy the strength or vitality of something. It can also mean to remove a section of a nerve, or a complete nerve. c. Myelinisation is the development of a myelin (electrically insulating) sheath around the axon of a nerve. d. Sensitisation is the amplification of a response to a stimulus. e. Hyperpolarisation is when the potential across a membrane increases to greater than its resting potential. The resting potential is negative, therefore the hyperpolarisation potential is more negative. Hyperpolarisation occurs after the firing of a neuron. a. Preoperational thinking occurs between ages 2 – 6 and is characterised by language development. b. Cognitive perspective-taking is theorised to be important in intentional moral and proper social behaviour. c. Play patterns are methods of social learning during the pre-school years. Boys tend to show more functional play (simple, repetitive movement) than do girls, who display more constructive play (building and manipulating objects). d. & e. I couldn’t find any gender differences in the literature.
Tuesday, March 3, 2009 Ethics and Palliative Volunteers An article I found interesting in a 2009 issue of the American Journal of Hospice and Palliative Medicine concerned hospice volunteers who get very little training before going onto the ward. This was a US study, but the same thing applies, at least here in Vancouver. The four ethical challenges outlined in the paper from Canadian palliative volunteers are the easy ones to answer, at least for a volunteer, since in that position you defer responsibility for medically related enquiries to the medical staff. That takes care of challenges 1 and 3 (communication with anyone other than the patient regarding their status), and 4 (personal medical concerns about the patient). Challenge 2 (being asked for opinions by the patient) is also pretty straightforward; don’t give one if you don’t want to. If a patient asks which funeral home you think they should go with, list a few different places and let them decide that way. Or ask them what they were thinking of going with and how they came to that decision. Or say that you don’t really know much about the topic and refer them to the nurses or family. The article also talks about accepting gifts and listening to suicide talk/requests, which are more complicated issues, but still manageable. In the introduction, more interesting examples were given, which I will answer with extreme brevity here: “…whether to address honestly the patient’s questions about whether she was dying while also respecting the family’s wishes that she not be told…” Defer to the doctor – “I’m afraid I’m not given any medical information. You should ask your doctor next time you see him.” “…whether to help a patient go to his garage (at some physical risk and with great difficulty) to destroy materials he did not want his wife to see…” You should never put yourself, or the patient, at risk. “…whether to write a letter from the patient to someone the caregiver would not approve of…” Yes.
“…whether to speak up when the volunteer believed that the patient was seriously overmedicated…” Of course you should speak up, just remember that staff might not listen to you. “…how to address issues of morality raised by the patient herself regarding a long-held secret about a pregnancy before her marriage.” This would be much like any therapy situation; ask questions (don’t give opinions if uncomfortable), don’t judge, and listen attentively. Sunday, March 1, 2009 Pi Day!!!!!!! I am growing increasingly excited for pi day. March 14th, for those who don’t know (3.14). Apparently, it’s tradition to eat pie on pi day, but I will be baking a cake in the shape of the pi symbol, and at 1:59:26 (3.1415926) I will make the first cut. Sure, I have a very important birthday just a few days before (I will be (3*14) - (the square root of the sum of the first 9 digits after the decimal) + 2*(the difference of the next two digits, that’s the 10th and 11th)), but what I am really looking forward to is pi day. My business card has the digits of e framed by the digits of pi. Because that’s how geeky I am. I already have the first 46 digits memorised and would like to break 100 by pi day, but that might be a bit optimistic. A Japanese man who, in 2006, recited 100,000 digits from memory kinda puts me to shame. For the record, the article I got this fun fact from said that pi “is usually written out to a maximum of three decimal places, as 3.141, in math textbooks.” Let the laughter ensue. This is why I can’t trust the media. Oh, and there’s pi approximation day on July 22 (pi is roughly equal to 22/7). And on April 26 (the length of Earth’s orbit divided by the distance it has travelled by April 26, which is 2 radians’ worth). And on November 10 (the 314th day of the year). And on December 21 (the 355th day of the year) at 1:13 (355/113). The fun never stops. It really doesn’t…A pi rap song and a pi(ano) song which is written with the digits of pi mapped to a melody.
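A digits-to-melody mapping is trivially easy to play with yourself. Here is a toy mapping of my own devising (the actual pi(ano) song surely uses its own, more musical, scheme): each digit of pi lands on a note of the C-major scale, digit mod 7.

```python
# Toy sketch: map the digits of pi onto the C-major scale to make a melody.
# This is my own illustrative mapping, not the one used in the pi(ano) song.
SCALE = ["C", "D", "E", "F", "G", "A", "B"]  # 7 notes; digits wrap modulo 7

def digits_to_melody(digits):
    """Map each digit character 0-9 to a scale note (digit mod 7)."""
    return [SCALE[int(d) % 7] for d in digits if d.isdigit()]

print(digits_to_melody("314159"))  # ['F', 'D', 'G', 'D', 'A', 'E']
```

Non-digit characters (the decimal point, spaces) are simply skipped, so you can paste in the digits however you have them memorised.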
It’s not stopping yet…I stole this fun pi day activity from here: Convert things into pi. This step is absolutely necessary for two reasons: to utterly confuse people who have no idea what you are talking about (thus opening the door for enlightenment) and to have fun seeing how many things can be referenced with pi. Consider two approaches: • Convert naturally circular things into radians, like the hours on the clock. Instead of it being 3 o'clock, now it's 1/2*pi o'clock. Or, instead of it being 3 o'clock, convert the inclination of the sun into radians and describe that as the time. • Simply use 3.14 as a unit of measure. Instead of being 31 years old, you are 9pi years old (approaching your 10th pi birthday). With this same approach, you can find out your next pi birthday (don't forget to celebrate it when it comes!). Strength Through Music This game was sent to me and I am posting it here because I have made two therapeutic exercises out of it. Upon completion of the game, I realised how depressing my music is (and this is on a 4GB ipod). Many people who are depressed listen to depressing music (see my post for one reason this is therapeutic). Most people will not change the music they listen to. But even the most depressed person has some happy-type music in their collection. So the first exercise is to create a playlist with at least as many happy songs in it as there are sad songs in your current playlist and then merge these two. This might mean removing some negative songs from your current playlist, but them’s the rules. If you don’t have enough positive songs to balance the negative ones, again, remove some of the negative songs. This doesn’t mean you can’t listen to your favourite depressing songs, just that you shouldn’t listen to all of them all the time. The songs on your playlist can be rotated, as long as the positive-negative count is at least equal, if not greater on the positive side.
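The balancing rule above can be sketched in a few lines of code. This is my own toy helper (the song titles in the example are simply taken from the lists later in this post): keep at most as many sad songs as you have happy ones, then merge.

```python
# Sketch of the playlist-balancing rule described above (my own helper):
# the merged playlist may never contain more sad songs than happy ones.
def balance(sad_songs, happy_songs):
    """Drop excess sad songs, then merge the two lists."""
    return sad_songs[:len(happy_songs)] + happy_songs

mixed = balance(
    ["Satin in a Coffin", "Ghost", "Almost Over"],
    ["Dashboard", "Hay Loft"],
)
print(mixed)  # one sad song dropped; the counts are now equal
```

You could of course rotate which sad songs survive the cut from week to week; the rule only constrains the counts, not the choices.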
And you can’t use the excuse that you have no happy songs, because I’m including some suggestions at the bottom. The second exercise is more directly related to the game. What you do is get the lyrics to the sad songs on your list (I only used the first ten songs because my stolen internet signal disappeared on me) and write a positive passage/poem using one line from each song. Another interesting exercise I just thought of would be to use the lyrics of one song and write something positive using the words (not whole lines) in the song. Here is the game (with my answers): 4. Have Fun! Same Ghost Every Night (Wolf Parade) Sister (Sufjan Stevens) Sea Legs (Immaculate Machine) Brand New Colony (Postal Service) One by One All Day (The Shins) Almost Over (Elliott Smith) Innocent Bones (Iron and Wine) Two-Headed Boy (Neutral Milk Hotel) The Gate (Belle and Sebastian) WHAT IS 2 + 2? One Chance (Modest Mouse) - I can’t believe how wonderfully this one worked out. Burn Your Life Down (Tegan and Sara) Ghost (Neutral Milk Hotel) Roboxulla (The Jealous Girlfriends) Satin in a Coffin (Modest Mouse) Roseblood (Mazzy Star) Sunday Smile (Beirut) O My Heart (Mother Mother) Two Places (Immaculate Machine) Sing (Dresden Dolls) Wrong Choice (Lovely Feathers) Sun Will Set (Zoe Keating) Strength Through Music (Amanda Palmer) Some happy songs: You, Me, & the Bourgeoisie (The Submarines), 50’s Parking (Tapes ‘n Tapes), Jason’s Basement (The Gossip), Dashboard (Modest Mouse), Army (Immaculate Machine), Hay Loft (Mother Mother), Wraith Pinned to the Mist and Other Games (Of Montreal) This is my re-write of the first ten songs: With an iron will to walk the walk - My own breath, my own breath through the path, Reminding me to know that I'm glad, There is no reason to grieve. If every moment of our lives We have one chance, Everything will change And you've got everything to gain And even the last of the blue-eyed babies know: It's a chance I'll take oh yeah.
If anyone does this, I would be interested to see the results.
LOG#116. Basic Neutrinology(I). This new post ignites a new thread. Let me begin… m_\nu\leq 50eV 1) From KamLAND (2005), we get 2) From MINOS (2006), we get May the Neutrinos be with you! LOG#050. Why riemannium? This special 50th log-entry is dedicated to 2 special people and scientists who inspired (and guided) me in the hard task of starting and writing this blog. These two people are: 1st. John C. Baez, a mathematical physicist, author of the old but always fresh This Week's Finds in Mathematical Physics, and now involved in the Azimuth blog. You can visit him here and here. I was a mere undergraduate in the early years of the internet in my country when I began to read his TWF. If you have never done it, I urge you to do it. Read him. He is a wonderful teacher and an excellent lecturer. John is now worried about global warming and related stuff, but he keeps his mathematical interests and pedagogical gifts untouched. I miss, in his new blog, some of the topics he used to discuss so often before, but his insights about virtually everything he is involved in are really impressive. He also manages to share his enthusiastic vision of Mathematics and Science, from pure mathematics to physics. He is a great blogger and scientist! 2nd. Professor Francis Villatoro. I am really grateful to him. He works to popularize Science in Spain with his excellent blog (written in Spanish). He is a very active person in the world of Spanish Science (and its popularization). In his blog, he also tries to explain to the general public the latest news on HEP and other topics related to other branches of Physics, Mathematics, or general Science. It is not an easy task! Some months ago, after some time reading and following his blog (as I still do now, like with Baez’s stuff), I realized that I could not remain a passive and simple reader or spectator on the web, so I wrote to him and asked him some questions about his experience with blogging, and for advice.
His comments and remarks were incredibly useful for me, especially during my first logs. I have followed several blogs over the last years (like those by Baez or Villatoro), and I had no idea about what kind of style/scheme I should adopt here. I had only some fuzzy ideas about what to do and what to write and, of course, I had no idea if I could explain stuff in a simple way while keeping the physical intuition and the mathematical background I wanted to include. His early criticism was very helpful, so this post is a tribute to him as well. After all, he suggested the topic of this post to me! I encourage you to read him and his blog (as long as you know Spanish or can use a good translator). Finally, let me express my deepest gratitude to John and Francis, two great and extraordinary people and professionals in their respective fields, who inspired me (and still do) in spirit and insight during my early and difficult steps of writing this blog. I am just convinced that Science is made of little, ordinary, small contributions like mine, and not only of the great contributions that people like John and Francis make to the whole world. I wish them to continue making their contributions for many, many years to come. Now, let me answer the question Francis asked me to explain here in further detail. My special post/log-entry number 50… It will be devoted to telling you why this blog is called The Spectrum of Riemannium, and what is behind the greatest unsolved problem in Number Theory, Mathematics and likely Physics/Physmatics as well… Enjoy it! The Riemann zeta function is a device/object/function related to prime numbers.
In general, it is a function of a complex variable s=\sigma+i\tau defined by the next equation: \boxed{\displaystyle{\zeta (s)=\sum_{n=1}^{\infty}n^{-s}=\sum_{n=1}^{\infty}\dfrac{1}{n^s}=\prod_{p=2}^{\infty}\dfrac{1}{1-p^{-s}}=\prod_{p,\; prime}\dfrac{1}{1-p^{-s}}}} \boxed{\displaystyle{\zeta (s)=\dfrac{1}{1-2^{-s}}\dfrac{1}{1-3^{-s}}\ldots\dfrac{1}{1-137^{-s}}\ldots}} Generally speaking, the Riemann zeta function extended by analytical continuation to the whole complex plane is “more” than the classical zeta function that Euler studied long before the work of Riemann in the XIX century. The Riemann zeta function at positive integer values gives series very well known to (and admired by) mathematicians. \zeta (1)=\infty due to the divergence of the harmonic series. Zeta values at even positive numbers are related to the Bernoulli numbers, while an analytic expression for the zeta values at odd positive numbers is still lacking. The Riemann zeta function over the whole complex plane satisfies the following functional equation: \boxed{\pi^{-\frac{s}{2}}\Gamma \left(\dfrac{s}{2}\right)\zeta (s)=\pi^{-\frac{(1-s)}{2}}\Gamma \left(\dfrac{1-s}{2}\right)\zeta (1-s)} Equivalently, it can also be written in a very simple way: \boxed{\xi (s)=\xi (1-s)} where we have defined \xi (s)=\pi^{-\frac{s}{2}}\Gamma \left(\dfrac{s}{2}\right)\zeta (s) Riemann zeta values are an example of beautiful Mathematics. From \displaystyle{\zeta (s)=\sum_{n=1}^{\infty}n^{-s}}, we have: 1) \zeta (0)=1+1+\ldots=-\dfrac{1}{2}. 2) \zeta (1)=1+\dfrac{1}{2}+\dfrac{1}{3}+\ldots =\infty. The harmonic series is divergent. 3) \zeta (2)=1+\dfrac{1}{2^2}+\dfrac{1}{3^2}+\ldots =\dfrac{\pi^2}{6}\approx 1.645. The famous Euler result. 4) \zeta (3)=1+\dfrac{1}{2^3}+\dfrac{1}{3^3}+\ldots \approx 1.202. An odd zeta value, called Apéry’s constant, for which no closed-form expression is known (Apéry did prove, however, that it is irrational). 5) \zeta (4)=\dfrac{\pi^4}{90}\approx 1.0823.
6) \zeta (-2n)=-\dfrac{\pi^{-n}}{2\Gamma (-n+1)}=0,\;\;\forall n=1,2,\ldots ,\infty. Trivial zeroes of zeta. 7) \zeta (2n)=\dfrac{(-1)^{n+1}(2\pi)^{2n}B_{2n}}{2(2n)!}\;\;\forall n=1,2,\ldots ,\infty, where B_{2n} are the Bernoulli numbers. The first few Bernoulli numbers are: B_0=1, B_1=-\dfrac{1}{2}, B_2=\dfrac{1}{6}, B_3=0, B_4=-\dfrac{1}{30}, B_5=0, B_6=\dfrac{1}{42} B_7=0, B_8=-\dfrac{1}{30}, B_9=0, B_{10}=\dfrac{5}{66}, B_{11}=0, B_{12}=-\dfrac{691}{2730}, B_{13}=0 8) We note that B_{2n+1}=0,\;\; \forall n\geq 1. 9) \zeta (-2n+1)=-\dfrac{B_{2n}}{2n}, \;\; \forall n=1,2,\ldots ,\infty. For instance, \zeta (-1)=1+2+3+\ldots=-\dfrac{1}{12}, \zeta (-3)=\dfrac{1}{120}, and \zeta (-5)=-\dfrac{1}{252}. Indeed, \zeta (-1) arises in string theory when trying to renormalize the vacuum energy of an infinite number of harmonic oscillators. The result in the bosonic string is \dfrac{2}{2-D}. In order to match with the Riemann zeta function regularization of the above series, the bosonic string is asked to live in an ambient spacetime of D=26 dimensions. We also have that \displaystyle{\sum_{n\in\mathbb{Z}}\vert n\vert^3=2\zeta (-3)=\dfrac{1}{60}} 10) \zeta (\infty)=1. The Riemann zeta value at infinity is equal to the unit. 11) The derivative of the zeta function is \displaystyle{\zeta '(s)=-\sum_{n=1}^{\infty}\dfrac{\log n}{n^s}}.
Particularly important values of this derivative are: \displaystyle{\zeta '(0)=-\sum_{n=1}^\infty \log n=-\log \prod_{n=1}^\infty n=\zeta (0)\log (2\pi)=-\dfrac{1}{2}\log (2\pi)=-\log \sqrt{2\pi}=\log \dfrac{1}{\sqrt{2\pi}}} or \zeta '(0)=\log \sqrt{\dfrac{1}{2\pi}} This allows us to define the factorial of infinity as \displaystyle{\infty !=\prod_{n=1}^{\infty}n=1\cdot 2\cdots \infty=e^{-\zeta '(0)}=\sqrt{2\pi}} and the renormalized infinite-dimensional determinant of a certain operator A as: \det _\zeta (A)=a_1\cdot a_2\cdots=\exp \left(-\zeta_A '(0)\right), with \displaystyle{\zeta _A (s)=\sum_{n=1}^\infty \dfrac{1}{a_n^s}} 12) \zeta (1+\varepsilon )=\dfrac{1}{\varepsilon}+\gamma_E +\mathcal{O} (\varepsilon ). This is a result used by theoretical physicists in dimensional renormalization/regularization. \gamma_E\approx 0.577 is the so-called Euler-Mascheroni constant. The alternating zeta function, called the Dirichlet eta function, provides interesting values as well. The Dirichlet eta function is defined and related to the Riemann zeta function as follows: \boxed{\displaystyle{\eta (s)=\sum_{n=1}^\infty \dfrac{(-1)^{n+1}}{n^s}=\left(1-2^{1-s}\right)\zeta (s)}} This can be thought of as “bosons made of fermions” or “fermions made of bosons” somehow.
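A quick numerical sanity check of these identities can be done with truncated sums. The sketch below (with an arbitrary truncation length) compares \zeta (2) and \zeta (4) with their closed forms and tests the eta-zeta relation at s=3:

```python
# Sanity check of some zeta/eta values above via plain truncated sums.
import math

def zeta(s: float, terms: int = 200_000) -> float:
    """Truncated Dirichlet series for zeta(s), valid for Re(s) > 1."""
    return sum(n ** -s for n in range(1, terms + 1))

def eta(s: float, terms: int = 200_000) -> float:
    """Truncated alternating series for the Dirichlet eta function."""
    return sum((-1) ** (n + 1) * n ** -s for n in range(1, terms + 1))

print(zeta(2), math.pi ** 2 / 6)             # both close to 1.6449...
print(zeta(4), math.pi ** 4 / 90)            # both close to 1.0823...
print(eta(3), (1 - 2 ** (1 - 3)) * zeta(3))  # eta(3) = (3/4) zeta(3)
```

With 2\cdot 10^5 terms the direct sum already reproduces \pi^2/6 to about five decimal places; the alternating eta series converges much faster.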
Special values of the Dirichlet eta function are given by: \eta (0)=-\zeta (0)=\dfrac{1}{2} \eta (1)=\log 2 \eta (2)=\dfrac{1}{2}\zeta (2)=\dfrac{\pi^2}{12} \eta (3)=\dfrac{3}{4}\zeta (3)\approx \dfrac{3}{4}(1.202) \eta (4)=\dfrac{7}{8}\zeta (4)=\dfrac{7}{8}\left(\dfrac{\pi^4}{90}\right) Remark(I): \zeta(2) is important in the physics realm, since the spectrum of the hydrogen atom has the following aspect, and the Balmer formula is, as every physicist knows, \Delta E(n,m)=K\left(\dfrac{1}{n^2}-\dfrac{1}{m^2}\right) Remark (II): The fact that \zeta (2) is finite implies that the energy level separation of the hydrogen atom between Bohr levels tends to zero AND that the sum of ALL the possible energy levels in the hydrogen atom is finite, since \zeta (2) is finite. Remark(III): What about an “atom”/system with spectrum E(n)=\kappa n^{-s}? If s=2, we do know that is the case of the Kepler problem. Moreover, it is easy to observe that s=-1 corresponds to the harmonic oscillator, i.e., E(n)=\hbar \omega n. We also know that s=-2 is the infinite potential well. So the question is, what about a n^{-3} spectrum and so on? In summary, does the following spectrum with energy separation/splitting \boxed{\Delta E(n,m;s)=\mathbb{K}\left(\dfrac{1}{n^{s}}-\dfrac{1}{m^{s}}\right)} exist in Nature for some physical system beyond the infinite potential well, the harmonic oscillator or the hydrogen atom, where s=-2, s=-1 and s=2 respectively? It is amazing how the Riemann zeta function gets involved as a common origin of such different systems and spectra as the Kepler problem, the harmonic oscillator and the infinite potential well! The Riemann Hypothesis (RH) is the greatest unsolved problem in pure Mathematics and, likely, in Physics too. It is the statement that the non-trivial zeroes of the Riemann zeta function, i.e., the zeroes beyond the trivial ones at s=-2n,\;\forall n=1,2,\ldots,\infty, all have real part equal to 1/2.
In other words, the equation or feynmanity has only the next solutions: \boxed{\mbox{RH:}\;\;\zeta (s)=0\leftrightarrow \begin{cases} s_n=-2n,\;\forall n=1,\ldots,\infty\;\;\mbox{Trivial zeroes}\\ s_n=\dfrac{1}{2}\pm i\lambda_n, \;\;\forall n=1,\ldots,\infty \;\;\mbox{Non-trivial zeroes}\end{cases}} I generally prefer the following projective-like version of the RH (PRH): \boxed{\mbox{PRH:}\;\;\zeta (s)=0\leftrightarrow \begin{cases} s_n=-2n,\;\forall n=1,\ldots,\infty\;\;\mbox{Trivial zeroes}\\ s_n=\dfrac{1\pm i\overline{\lambda}_n}{2}, \;\;\forall n=1,\ldots,\infty \;\;\mbox{Non-trivial zeroes}\end{cases}} The Riemann zeta function can be sketched on the whole complex plane, in order to obtain a radiography of the RH and what it means. Mathematicians have studied the critical strip with ingenious tools and frameworks. The now terminated ZetaGrid project verified that billions of zeroes lie ON the critical line. No counterexample has been found of a non-trivial zeta zero outside the critical line (and there are some arguments that make it very unlikely). The RH says that primes “have music/order/pattern” in their interior, but nobody has managed to prove the RH. The next picture shows you what the RH “says” graphically: If you want to know how the Riemann zeroes sound, M. Watkins has made a nice audio file so that you can hear their music. You can learn how to make “music” from Riemann zeroes here http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/munafo-zetasound.htm And you can listen to their sound here Riemann zeroes are connected with prime numbers through a complicated formula called “the explicit formula”.
The next equation holds \forall x\geq 2 integer, where the sum runs over the non-trivial Riemann zeroes in the complex (upper) half-plane with \tau>0: \boxed{\displaystyle{\pi (x)+\sum_{n=2}^\infty \dfrac{\pi \left( x^{1/n}\right)}{n}=\text{Li} (x)-\sum_{\lambda =\sigma+i\tau }\left(\text{Li}(x^\lambda)+\text{Li}\left( x^{1-\lambda}\right)\right)+\int_x^\infty\dfrac{du}{u(u^2-1)\ln u}-\ln 2}} and where \pi (x) is the celebrated Gauss prime number counting function, i.e., \pi (x) counts the prime numbers less than or equal to x. This explicit formula was proved by Hadamard. The explicit formula follows from both product representations of \zeta (s), the Euler product on one side and the Hadamard product on the other side. The function \text{Li} (x), sometimes written as \text{li} (x), is the logarithmic integral \displaystyle{\text{Li} (x) =\text{li} (x)= \int_2^x\dfrac{du}{\ln u}} The explicit formula comes in some cool variants too. For instance, we can write \pi (x)=\pi_0 (x)+\pi_1 (x)=\pi_{\mbox{smooth}}+\pi_{\mbox{osc-chaotic}} \displaystyle{\pi_0 (x)=\sum_{n=1}^\infty\dfrac{\mu (n)}{n}\left[\mbox{Li}(x^{1/n})-\sum_{k=1}^\infty\mbox{Li}(x^{-2k/n})\right]} \displaystyle{\pi_1 (x)=-2\mbox{Re}\sum_{n=1}^\infty\dfrac{\mu (n)}{n}\sum_{\alpha=1}^\infty\mbox{Li}(x^{(\sigma_\alpha+i\tau_\alpha)/n})} For large values of x, we have the asymptotics \pi_0 (x)\approx \mbox{Li} (x) \displaystyle{\pi_1 (x)\approx -\dfrac{2}{\ln x}\sum_{\alpha=1}^\infty\dfrac{x^{\sigma_\alpha}}{\sigma_\alpha^2+\tau_\alpha^2}\left(\sigma_\alpha\cos (\tau_\alpha \ln x)+\tau_\alpha \sin (\tau_\alpha \ln x)\right)} Remark: Please, don’t confuse the logarithmic integral with the polylogarithm function \text{Li}_s (x). Gauss also conjectured that \pi (x)\sim \text{Li} (x) Date: January 3, 1982. Andrew Odlyzko wrote a letter to George Pólya about the physical ground/basis of the Riemann Hypothesis and the conjecture associated with Pólya himself and David Hilbert.
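Gauss's conjecture \pi (x)\sim \text{Li} (x) is easy to probe numerically. The following sketch (a simple sieve plus a midpoint-rule quadrature; the cutoff 10^5 and the step count are arbitrary choices) is illustrative only:

```python
# Compare the prime-counting function pi(x) with the logarithmic integral Li(x).
import math

def prime_pi(x: int) -> int:
    """Count primes <= x with a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray((x - p * p) // p + 1)
    return sum(sieve)

def Li(x: float, steps: int = 100_000) -> float:
    """Li(x) = integral from 2 to x of du/ln(u), midpoint rule."""
    h = (x - 2.0) / steps
    return h * sum(1.0 / math.log(2.0 + (k + 0.5) * h) for k in range(steps))

print(prime_pi(100_000), Li(100_000))  # 9592 vs roughly 9629
```

The relative deviation at x=10^5 is already below one percent, and by the prime number theorem (equivalent to the non-vanishing of \zeta (s) on the line Re(s)=1) it tends to zero as x grows.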
Pólya answered and told Odlyzko that while he was in Göttingen around 1912 to 1914 he was asked by Edmund Landau for a physical reason that the Riemann Hypothesis should be true, and suggested that this would be the case if the imaginary parts, say T, of the non-trivial zeros of the Riemann zeta function corresponded to eigenvalues of an unbounded and unknown self-adjoint operator \hat{T}. That statement was never published formally, but it was remembered after all, and it was transmitted from one generation to another. At the time of Pólya’s conversation with Landau, there was little basis for such speculation. However, Selberg, in the early 1950s, proved a duality between the length spectrum of a Riemann surface and the eigenvalues of its Laplacian. This so-called Selberg trace formula bore a striking resemblance to the explicit formula of certain L-functions, which gave credibility to the speculation of Hilbert and Pólya. Dialogue (circa 1970). “(…) Dyson: So tell me, Montgomery, what have you been up to? Montgomery: Well, lately I’ve been looking into the distribution of the zeros of the Riemann zeta function. Dyson: Yes? And? Montgomery: It seems the two-point correlations go as…. (…) Dyson: Extraordinary! Do you realize that’s the pair-correlation function for the eigenvalues of a random Hermitian matrix? It’s also a model of the energy levels in a heavy nucleus, say U-238. (…)” A further step was taken in the 1970s by the mathematician Hugh Montgomery. He investigated and found that the statistical distribution of the zeros on the critical line has a certain property, now called Montgomery’s pair correlation conjecture. The Riemann zeros tend not to cluster too closely together, but to repel. During a visit to the Institute for Advanced Study (IAS) in 1972, he showed this result to Freeman Dyson, one of the founders of the theory of random matrices.
Dyson realized that the statistical distribution found by Montgomery appeared to be the same as the pair correlation distribution for the eigenvalues of a random and “very big/large” N×N Hermitian matrix. These distributions are of importance in physics and mathematics. Why? It is simple. The eigenvalues of a Hamiltonian, for example the energy levels of an atomic nucleus, satisfy such statistics. Subsequent work has strongly borne out the connection between the distribution of the zeros of the Riemann zeta function and the eigenvalues of a random Hermitian matrix drawn from the theory of the so-called Gaussian unitary ensemble (GUE), and both are now believed to obey the same statistics. Thus the conjecture of Pólya and Hilbert now has a more solid fundamental link to QM, though it has not yet led to a proof of the Riemann hypothesis. The pair-correlation function of the zeros is given by the function: R_2(x)=1-\left(\dfrac{\sin \pi x}{\pi x}\right)^2 In a later development that has given substantive force to this approach to the Riemann hypothesis through functional analysis and operator theory, the mathematician Alain Connes has formulated a “trace formula”, using his non-commutative geometry framework, that is actually equivalent to a certain generalized Riemann hypothesis. This fact has therefore strengthened the analogy with the Selberg trace formula to the point where it gives precise statements. However, the mysterious operator believed to provide the Riemann zeta zeroes remains hidden. Even worse, we don’t even know on which space the Riemann operator is acting. However, some attempts to guess the Riemann operator have been made in a semiclassical physical environment. Michael Berry and Jon Keating have speculated that the Hamiltonian/Riemann operator H is actually some kind of quantization of the classical Hamiltonian XP, where P is the canonical momentum associated with the position operator X. If the Berry-Keating conjecture is true, a natural candidate operator can be written down explicitly.
The simplest Hermitian operator corresponding to XP is H = \dfrac1{2} (xp+px) = - i \left( x \dfrac{\mathrm{d}}{\mathrm{d} x} + \dfrac{1}{2} \right) At present, this proposal is still quite imprecise, as it is not clear on which space this operator should act in order to get the correct dynamics, nor how to regularize it in order to get the expected logarithmic corrections. Berry and Germán Sierra, the latter in collaboration with P. K. Townsend, have conjectured that, since this operator is invariant under dilatations, perhaps the boundary condition f(nx)=f(x) for integer n may help to get the correct asymptotic results valid for big n. That is, in the large n limit we should obtain s_n=\dfrac{1}{2} + i \dfrac{ 2\pi n}{\log n} Indeed, the Berry-Keating conjecture opened another striking attack on the RH, through a topic that was popular in the 1980s and 1990s: the weird subject of “quantum chaos”. Quantum chaos is the subject devoted to the study of quantum systems corresponding to classically chaotic systems. The Berry-Keating conjecture shed further light on the Riemann dynamics, sketching some of the properties of the dynamical system behind the Riemann Hypothesis. In summary, the dynamics of the Riemann operator should provide: 1st. The quantum Hamiltonian operator behind the Riemann zeroes, in addition to its classical counterpart, the classical Hamiltonian H, has a dynamics containing the scaling symmetry. As a consequence, the trajectories are the same at every energy scale. 2nd. The classical system corresponding to the Riemann dynamics is chaotic and unstable. 3rd. The dynamics lacks time-reversal symmetry. 4th. The dynamics is quasi one-dimensional. A full dictionary translating the whole correspondence between the chaotic system corresponding to the Riemann zeta function and its main features is presented in the next table: In 2001, the following paper emerged: http://arxiv.org/abs/nlin/0101014. The Riemannium arxiv paper was published later (here: Reg.
Chaot. Dyn. 6 (2001) 205-210). After that, Brian Hayes wrote a really beautiful, wonderful and short paper titled The Spectrum of Riemannium in 2003 (American Scientist, Volume 91, Number 4, July–August 2003, pages 296–300). I remember reading the manuscript and being totally surprised. I was shocked for several weeks. I decided that I would try to understand that stuff better and better, and, maybe, make some contribution to it. The Spectrum of Riemannium was an amazing name, an incredible concept. So, I have been studying related stuff during all these years. And I have my own suspicions about what the riemannium and the zeta function are, but this is not a good place to explain all of them! The riemannium is the mysterious physical system behind the RH. Its spectrum, the spectrum of riemannium, is given by the RH and its generalizations. Moreover, the following sketch from Hayes’ paper is also very illustrative: What do you think? Isn’t it suggestive? Isn’t it amazing? The Riemann zeta function also arises in the renormalization of the Standard Model and the regularization of determinants with “infinite size” (i.e., determinants of differential operators and/or pseudodifferential operators). For instance, the \infty-dimensional regularized determinant is defined through the Riemann zeta function as follows: \displaystyle{\det _\zeta \mathcal{P}=e^{-\zeta_{\mathcal{P}}^{'}(0)}} The dimensional renormalization/regularization of the SM makes use of the Riemann zeta function as well. It is ubiquitous in that approach but, as far as I know, nobody has asked why that issue is important, something I have wondered about for a long time. The Riemann zeta function is also used in the theory of Quantum Statistics. Quantum Statistics are important in Cosmology and Condensed Matter, so it is really striking that Riemann zeta values are related to phenomena like Bose-Einstein condensation or the Cosmic Microwave Background and also the yet to be found Cosmic Neutrino Background!
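Before moving on to quantum statistics, the GUE level repulsion behind the Montgomery-Dyson story above can be illustrated with a tiny Monte Carlo sketch. The 2x2 case is used only because its eigenvalue spacing has a closed form; the sample size and the 0.25 threshold are arbitrary choices:

```python
# Monte Carlo sketch of level repulsion for 2x2 GUE matrices.
# For H = [[a, b+ic], [b-ic, d]] the eigenvalue spacing is
# sqrt((a-d)^2 + 4b^2 + 4c^2), so no linear-algebra library is needed.
import math
import random

random.seed(1)

def gue2_spacing() -> float:
    a = random.gauss(0.0, 1.0)              # real diagonal entries
    d = random.gauss(0.0, 1.0)
    b = random.gauss(0.0, math.sqrt(0.5))   # off-diagonal real part
    c = random.gauss(0.0, math.sqrt(0.5))   # off-diagonal imaginary part
    return math.sqrt((a - d) ** 2 + 4.0 * (b * b + c * c))

spacings = [gue2_spacing() for _ in range(50_000)]
mean = sum(spacings) / len(spacings)
s_norm = [x / mean for x in spacings]

# Level repulsion: p(s) ~ s^2 near s = 0, so small spacings are rare.
# For uncorrelated (Poisson) levels, P(s < 0.25) would be about 0.22.
frac_small = sum(1 for s in s_norm if s < 0.25) / len(s_norm)
print(mean, frac_small)
```

For the normalized spacings one finds only about 2% below 0.25, an order of magnitude less than the Poisson expectation: the eigenvalues, like the Riemann zeroes, repel each other.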
Let me begin with the easiest quantum (indeed classical) statistics, the Maxwell-Boltzmann (MB) statistics. In 3 spatial dimensions (3d) the MB distribution reads (we will work with units in which \hbar =1): f(p)_{MB}=\dfrac{1}{(2\pi)^3}e^{\frac{\mu -E}{k_BT}} Usually, there are 3 thermodynamical quantities that physicists wish to compute with statistical distributions: 1) the number density of particles n=N/V, 2) the energy density \varepsilon=U/V and 3) the pressure P. In the case of a MB distribution, we have the following definitions: \displaystyle{n=\dfrac{1}{(2\pi)^3}\int d^3p e^{\frac{\mu -E}{k_BT}}} \displaystyle{\varepsilon =\dfrac{1}{(2\pi)^3}\int d^3p Ee^{\frac{\mu -E}{k_BT}}} \displaystyle{P =\dfrac{1}{(2\pi)^3}\int d^3p \dfrac{1}{3}\dfrac{\vert\mathbf{p}\vert^2}{E}e^{\frac{\mu -E}{k_BT}}} We can introduce the dimensionless variables z=\dfrac{mc^2}{k_BT} and \tau =\dfrac{E}{k_BT}=\dfrac{\sqrt{p^2c^2+m^2c^4}}{k_BT}. In this way, \vert p\vert=\dfrac{k_BT}{c}\sqrt{\tau^2-z^2} c^2\vert\mathbf{p}\vert d\vert \mathbf{p}\vert=k_B^2T^2\tau d\tau With these definitions, the particle density becomes \displaystyle{n=\dfrac{4\pi k_B^3T^3}{(2\pi)^3}e^{\frac{\mu}{k_BT}}\int_z^\infty d\tau (\tau^2-z^2)^{1/2}\tau e^{-\tau}} This integral can be calculated in closed form with the aid of modified Bessel functions of the second kind: K_n (z)=\dfrac{2^nn!}{(2n)!z^n}\int_z^\infty d\tau (\tau^2-z^2)^{n-1/2}e^{-\tau} or equivalently K_n (z)=\dfrac{2^{n-1}(n-1)!}{(2n-2)!z^n}\int_z^\infty d\tau (\tau^2-z^2)^{n-3/2}\tau e^{-\tau} K_{n+1} (z)=\dfrac{2nK_n (z)}{z}+K_{n-1} (z) \displaystyle{K_2 (z)=\dfrac{1}{z^2}\int_z^\infty (\tau^2-z^2)^{1/2}\tau e^{-\tau}d\tau} And thus, we have the next results (setting c=1 for simplicity): \mbox{Particle number density}\equiv n=\dfrac{N}{V}=\dfrac{k_B^3T^3}{2\pi^2}z^2K_2 (z)e^{\frac{\mu}{k_BT}}=\dfrac{k_B^3T^3}{2\pi^2}\left(\dfrac{m}{k_BT}\right)^2K_2\left(\dfrac{m}{k_BT}\right)e^{\frac{\mu}{k_BT}} \mbox{Energy
density}\equiv\varepsilon=\dfrac{k_B^4T^4}{2\pi^2}\left[ 3\left(\dfrac{m}{k_BT}\right)^2K_2\left(\dfrac{m}{k_BT}\right)+\left(\dfrac{m}{k_BT}\right)^3K_1\left(\dfrac{m}{k_BT}\right)\right]e^{\frac{\mu}{k_BT}} Even the entropy density is easy to compute: \mbox{Entropy density}\equiv s=\dfrac{m^3}{2\pi^2}e^{\frac{\mu}{k_BT}}\left[ K_1\left(\dfrac{m}{k_BT}\right)+\dfrac{4k_BT-\mu}{m}K_2\left(\dfrac{m}{k_BT}\right)\right] These results can be simplified in some limiting cases. For instance, consider the massless limit z=m/k_BT\rightarrow 0. Moreover, we also know that \displaystyle{\lim_{z\rightarrow 0}z^nK_n (z)=2^{n-1}(n-1)!}. In such a case, we obtain: n\approx \dfrac{k_B^3T^3}{\pi^2}e^{\frac{\mu}{k_BT}} \varepsilon \approx \dfrac{3k_B^4T^4}{\pi^2}e^{\frac{\mu}{k_BT}} P\approx \dfrac{k_B^4T^4}{\pi^2}e^{\frac{\mu}{k_BT}} We note that \varepsilon=3P in this massless limit. Remark (I): In the massless limit, and whenever there is no degeneracy, \varepsilon =3P holds. Remark (II): If there is a quantum degeneracy in the energy levels, i.e., if g\neq 1, we must include an extra factor of g_j=2j+1 for massive particles of spin j. For massless photons with helicity, there is a g=2 degeneracy. Remark (III): In the D-dimensional (D=d+1) Bose gas with dispersion relationship \varepsilon_p=cp^{s}, it can be shown that the pressure is related to the energy density in the following way \mbox{Pressure}\equiv P=\dfrac{s}{d}\dfrac{U}{V}=\dfrac{s}{d}\varepsilon Remark (IV): Let us define p^s (n) as the number of ways an integer number can be expressed as a sum of s-th powers of integers.
For instance, p^1 (5)=7 because 5 can be written as 5, 4+1, 3+2, 3+1+1, 2+2+1, 2+1+1+1 or 1+1+1+1+1 (seven partitions, counting 5 itself). p^2 (5)=2 because 5=2^2+1^2=1^2+1^2+1^2+1^2+1^2 If E_n=n^s with n\geq 1 and s>0, define x=e^{-\beta}; the (FD) partition function is \displaystyle{Z=\prod_{k}\left( 1+e^{\frac{\mu-E_k}{k_BT}}\right)} We will see later that \displaystyle{\sum_{N}x^N=\begin{cases}1+x,\;\; FD\;\;(N=0,1) \\ \dfrac{1}{1-x},\;\; BE\;\;(N=0,1,\ldots)\end{cases}} With \mu =0, the bosonic Z is nothing but the generating function of the partitions p^s (n): \displaystyle{Z(x=e^{-\beta})=\prod_{n=1}^\infty \dfrac{1}{1-x^{n^s}}=\sum_{n=1}^\infty p^s (n) x^n\approx \int_1^\infty dn p^s (n) e^{-\beta n}} The Hardy-Ramanujan asymptotic formula reads (for the case s=1 only): p(n) \approx \dfrac{1}{4\sqrt{3}\,n}e^{\pi\sqrt{2n/3}} Remark (V): There are some useful integrals in quantum statistics. They are the so-called Bose-Einstein/Fermi-Dirac integrals \displaystyle{\int_0^\infty dx \dfrac{x^{n-1}}{e^x\mp 1}=\begin{cases}\Gamma (n) \zeta (n), \;\; BE\\ \Gamma (n)\eta (n)=\Gamma (n) (1-2^{1-n})\zeta (n),\;\; FD\end{cases}} The BE-FD quantum distributions in 3d are defined as follows: where the minus sign corresponds to BE and the plus sign to FD. We will firstly study the BE distribution in 3d. We have: \displaystyle{n=\dfrac{1}{(2\pi)^3}\int d^3p \left(e^{\frac{E-\mu}{k_BT}}-1\right)^{-1}=\dfrac{1}{(2\pi)^3}\int d^3p \sum_{n=1}^{\infty}(+1)^{n+1}e^{\frac{n\mu-nE}{k_BT}}} Introducing a scaled temperature T'=T/n, we get \displaystyle{n=\sum_{n=1}^{\infty}\left[\dfrac{1}{(2\pi)^3}\int d^3p e^{\frac{\mu-E}{k_BT'}}\right]=\sum_{n=1}^{\infty}\dfrac{k_B^3T^3}{2\pi^2}\dfrac{1}{n^3}\left(\dfrac{nm}{k_BT}\right)^2K_2\left(\dfrac{nm}{k_BT}\right)e^{\frac{n\mu}{k_BT}}} Again, we can study a particularly simple case: the massless limit m\rightarrow 0 with \mu\rightarrow 0.
In this case, we get: \displaystyle{n=\dfrac{k_B^3T^3}{\pi^2}\sum_{n=1}^\infty \dfrac{1}{n^3}=\dfrac{k_B^3T^3}{\pi^2}\zeta (3)\approx 1.202\dfrac{k_B^3T^3}{\pi^2}} \displaystyle{\varepsilon=\sum_{n=1}^\infty\dfrac{3(k_BT)^4}{\pi^2}\dfrac{1}{n^4}=\dfrac{3(k_BT)^4\zeta (4)}{\pi^2}=\dfrac{\pi^2}{30}(k_BT)^4} \displaystyle{P=\sum_{n=1}^\infty\dfrac{(k_BT)^4}{\pi^2}\dfrac{1}{n^4}=\dfrac{(k_BT)^4\zeta (4)}{\pi^2}=\dfrac{\pi^2(k_BT)^4}{90}} The FD distribution in 3d can be studied in a similar way. Following the same approach as for the BE distribution, we deduce that: \displaystyle{n=\sum_{n=1}^\infty (-1)^{n+1}\dfrac{(k_BT)^3}{2\pi^2n^3}\left(\dfrac{nm}{k_BT}\right)^2K_2\left(\dfrac{nm}{k_BT}\right)e^{\frac{\mu n}{k_BT}}} \displaystyle{\varepsilon= \sum_{n=1}^\infty (-1)^{n+1}\dfrac{(k_BT)^4}{2\pi^2}\left[3\left(\dfrac{nm}{k_BT}\right)^2K_2\left(\dfrac{nm}{k_BT}\right)+\left(\dfrac{nm}{k_BT}\right)^3K_1\left(\dfrac{nm}{k_BT}\right)\right]e^{\frac{\mu n}{k_BT}}} and again the massless limit m=0 and \mu\rightarrow 0 provides \displaystyle{n\approx \dfrac{(k_BT)^3}{\pi^2}\sum_{n=1}^\infty (-1)^{n+1}\dfrac{1}{n^3}=\dfrac{(k_BT)^3}{\pi^2}\eta (3)=\dfrac{(k_BT)^3}{\pi^2}\left(\dfrac{3}{4}\right)\zeta (3)} \displaystyle{\varepsilon\approx \dfrac{3(k_BT)^4}{\pi^2}\sum_{n=1}^\infty (-1)^{n+1}\dfrac{1}{n^4}=\dfrac{3(k_BT)^4}{\pi^2}\eta (4)=\dfrac{3(k_BT)^4}{\pi^2}\left(\dfrac{7}{8}\right)\zeta (4)=\dfrac{\pi^2(k_BT)^4}{30}\left(\dfrac{7}{8}\right)} \displaystyle{P\approx \dfrac{(k_BT)^4}{\pi^2}\sum_{n=1}^\infty (-1)^{n+1}\dfrac{1}{n^4}=\left(\dfrac{7}{8}\right)\dfrac{\pi^2(k_BT)^4}{90}} Remark (I): For photons \gamma with degeneracy g=2 we obtain n_\gamma =\dfrac{2\zeta (3) (k_BT)^3}{\pi^2} \varepsilon_\gamma= 3P_\gamma =\dfrac{\pi^2 (k_BT)^4}{15} s_\gamma =P'(T)=\dfrac{4}{3}\left(\dfrac{\pi^2}{15}\right)(k_BT)^3=\dfrac{2\pi^4}{45\zeta (3)}n Remark (II): In Cosmology, Astrophysics and also in High Energy Physics, the following units are used 1eV=1.602\cdot 10^{-19}J \hbar=1=6.58\cdot 10^{-22}MeVs=7.64\cdot 10^{-12}Ks \hbar
c=1=0.19733GeV\cdot fm=0.2290 K\cdot cm 1 K=0.1532\cdot 10^{-36}g\cdot c^2 The Cosmic Microwave Background is the relic photon radiation of the Big Bang, and thus it has a temperature due to photons in the microwave band of the electromagnetic spectrum. Its value is: T_\gamma \approx 2.725K Indeed, it also implies that the relic photon density is about n_\gamma =410\dfrac{1}{cm^3} It is also speculated that there has to be a Cosmic Neutrino Background relic from the Big Bang. From theoretical Cosmology, it is related to the photon CMB temperature in the following way: T_\nu =\left(\dfrac{4}{11}\right)^{1/3}2.7K or equivalently T_\nu\approx 1.9K This temperature implies a relic neutrino density (per species, i.e., with g_\nu=1) about The cosmological entropy density due to these particles is s_0=\dfrac{S_0}{V}=\dfrac{4\pi^2}{45}\left[1+\dfrac{2\cdot 3}{2}\left(\dfrac{7}{8}\right)\left(\dfrac{4}{11}\right)\right]T_{0\gamma}^3=2810\dfrac{1}{cm^3}\left( \dfrac{T_{0\gamma}}{2.7K}\right)^3 and then we get s_0\approx 7.03n_{0\gamma} Remark (III): In Cosmology, for fermions in 3d (for bosons (BE), \varepsilon=3P still holds, but we must drop the factors \left( 7/8\right), \left( 3/4\right), \left( 7/6\right) in the next numerical values) we can compute n=\begin{cases}\left(\dfrac{g}{2}\right)\left(\dfrac{3}{4}\right)\dfrac{2\zeta (3)}{\pi^2}(k_BT)^3\\ \left(\dfrac{g}{2}\right)\left(\dfrac{3}{4}\right)31.700\left(\dfrac{k_BT}{GeV}\right)^3\dfrac{1}{fm^3}\\ \left(\dfrac{g}{2}\right)\left(\dfrac{3}{4}\right)20.288\left(\dfrac{T}{K}\right)^3\dfrac{1}{cm^3}\end{cases} \varepsilon=3P=\begin{cases}\left(\dfrac{g}{2}\right)\left(\dfrac{7}{8}\right)\left(\dfrac{\pi^2}{15}\right)(k_BT)^4\\ \left(\dfrac{g}{2}\right)\left(\dfrac{7}{8}\right)(85.633)\left(\dfrac{k_BT}{GeV}\right)\dfrac{GeV}{fm^3}\\ \left(\dfrac{g}{2}\right)\left(\dfrac{7}{8}\right)\left(0.841\cdot 10^{-36}\right)\left(\dfrac{T}{K}\right)^4\dfrac{g}{cm^3}\end{cases}
s=\dfrac{S}{V}=\left(\dfrac{g}{2}\right)\left(\dfrac{7}{8}\right)\left(\dfrac{4\pi^2}{45}\right)(k_BT)^3=\dfrac{7}{6}\left[\dfrac{2\pi^4}{45\zeta (3)}\right] n Remark (IV): An example of the computation of a degeneracy factor is the quark-gluon plasma degeneracy g_{QGP}. Firstly, we compute the gluon and quark degeneracies g_g=(\mbox{color})(\mbox{spin})=8\cdot 2=16 g_q=(p\overline{p})(\mbox{spin})(\mbox{color})(\mbox{flavor})=2\cdot 2\cdot 3\cdot N_f=12N_f Then, the QG plasma degeneracy factor is g_{QGP}=g_g+\dfrac{7}{8}g_q=16+\dfrac{7}{8}12N_f=16+\dfrac{21}{2}N_f \leftrightarrow \boxed{g_{QGP}=16+\dfrac{21}{2}N_f} In general, for charged leptons and nucleons g=2, g=1 for neutrinos (per species, of course), and g=2 for gluons and photons. Remember that massive particles with spin j will have g_j=2j+1. Remark (V): For the Planck distribution, we also get the known result for the thermal distribution of the blackbody radiation \displaystyle{I(T)=\int_0^\infty f(\nu ,T)d\nu=\dfrac{8\pi h}{c^3}\int_0^\infty \dfrac{\nu^3d\nu}{e^{\frac{h\nu}{k_BT}}-1}=\dfrac{8\pi^5k_B^4T^4}{15c^3h^3}} Remark (VI): Sometimes the following nomenclature is used: i) Extremely degenerate gas if \mu>>k_BT ii) Non-degenerate gas if \mu <<-k_BT iii) Extremely relativistic gas (or ultra-relativistic gas) if p>> mc iv) Non-relativistic gas if p<<mc Let us define the following shift operator \hat{T}: where \sigma\in \mathbb{R}. Moreover, there is a certain isomorphism between the shift operator space and the space of functions through the map \hat{T}\leftrightarrow x^\sigma. We define the generalized logarithm as the image of \hat{T} under the previous map. That is: \displaystyle{\mbox{Log}_G(x)\equiv \dfrac{1}{\sigma}\sum_{n=l}^{m}k_n x^{\sigma n}} where l,m\in \mathbb{Z}, with l<m, m-l=r and x>0. Furthermore, the next constraints are also imposed on every generalized logarithm: 1st. \displaystyle{\sum_{n=l}^m k_n=0}. 2nd. \displaystyle{\sum_{n=l}^m nk_n=c}, k_m\neq 0, and k_l\neq 0. 3rd.
\displaystyle{\sum_{n=l}^m\vert n\vert^l k_n=K_l}, \forall l=2,3,\ldots ,m-l and where K_l \in \mathbb{R}. With these definitions we also have that A) \mbox{Log}_G(x)=\ln (x) B) \mbox{Log}_G(1)=0 Examples of generalized logarithms are: 1) The Tsallis logarithm. 2) The Kaniadakis logarithm. 3) The Abe logarithm. \mbox{Log}_A(x)=\dfrac{x^{\sigma -1}-x^{\sigma^{-1}-1}}{\sigma-\sigma^{-1}} 4) The biparametric logarithm, with a=\sigma-1 and b=\sigma^{-1}-1 in the case of the Abe logarithm. Group entropies are defined through the use of generalized logarithms. Define some discrete probability distribution \left[ p_i\right]_{i=1,\ldots,W} with normalization \displaystyle{\sum_{i=1}^Wp_i=1}. Then, the group entropy is the following functional sum: \boxed{\displaystyle{S_G=k_B\sum_{i=1}^{W}p_i \mbox{Log}_G \left(\dfrac{1}{p_i}\right)}} where we have used the previous definition of the generalized logarithm, and the Boltzmann constant k_B is a real number. It is called a group entropy due to the fact that S_G is connected to some universal formal group. This formal group will determine some correlations for the class of physical systems under study and its invariant properties. In fact, the Tsallis logarithm itself is related to the Riemann zeta function through a beautiful equation! Under the Tsallis group exponential, the isomorphism x\leftrightarrow e^t is defined to be e_G^t=\dfrac{e^{(1-q)t}-1}{1-q}, and thus we easily get: \displaystyle{\dfrac{1}{\Gamma (s)}\int_0^\infty\dfrac{1}{\dfrac{e^{(1-q)t}-1}{1-q}}t^{s-1}dt=\dfrac{\zeta (s)}{(1-q)^{s-1}}} \forall s such that Re (s)>1 and q<1. The primon gas/free Riemann gas is a statistical mechanics toy model illustrating in a simple way some correspondences between number theory and concepts in statistical physics, quantum mechanics, quantum field theory and dynamical systems. The primon gas IS a quantum field theory (QFT) of a set of non-interacting particles, called the “primons”.
It is also named a gas, or a free model, because the particles are non-interacting: there is no potential. The idea of the primon gas was independently discovered by Donald Spector (D. Spector, Supersymmetry and the Möbius Inversion Function, Communications in Mathematical Physics 127 (1990) pp. 239-252) and Bernard Julia (Bernard L. Julia, Statistical theory of numbers, in Number Theory and Physics, eds. J. M. Luck, P. Moussa, and M. Waldschmidt, Springer Proceedings in Physics, Vol. 47, Springer-Verlag, Berlin, 1990, pp. 276-293). There have been later works by Bakas and Bowick (I. Bakas and M.J. Bowick, Curiosities of Arithmetic Gases, J. Math. Phys. 32 (1991) p. 1881) and Spector (D. Spector, Duality, Partial Supersymmetry, and Arithmetic Number Theory, J. Math. Phys. 39 (1998) pp. 1919-1927), which explored the connection of such systems to string theory. This model is based on some simple hypotheses:

1st. Consider a simple quantum Hamiltonian, H, having eigenstates \vert p\rangle labelled by the prime numbers “p”.

2nd. The eigenenergies or spectrum are given by E_p, with energies proportional to \log p. Mathematically speaking,

H\vert p\rangle = E_p \vert p\rangle with E_p=E_0 \log p

Please, note the natural emergence of a “free” scale of energy E_0. What is this scale of energy? We do not know!

3rd. The second quantization/second-quantized version of this Hamiltonian converts states into particles, the “primons”. Multi-particle states are defined in terms of the numbers k_p of primons in the single-particle states p:

|N\rangle = |k_2, k_3, k_5, k_7, k_{11}, \ldots, k_{137},\ldots, k_p \ldots\rangle

This corresponds to the factorization of N into primes:

N = 2^{k_2} \cdot 3^{k_3} \cdot 5^{k_5} \cdot 7^{k_7} \cdot 11^{k_{11}} \cdots 137^{k_{137}}\cdots p^{k_p} \cdots

The labelling by the integer “N” is unique, since every number has a unique factorization into primes.
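As a quick numerical illustration (my own sketch, not from the original papers; E_0 is set to 1 and the helper names are mine), the occupation numbers k_p of a primon state |N> are just the prime exponents of N, and the total energy \sum_p k_p \log p collapses to \log N by unique factorization:

```python
import math

def prime_factorization(N):
    """Return {p: k_p}, the primon occupation numbers of the state |N>."""
    factors, p = {}, 2
    while p * p <= N:
        while N % p == 0:
            factors[p] = factors.get(p, 0) + 1
            N //= p
        p += 1
    if N > 1:
        factors[N] = factors.get(N, 0) + 1
    return factors

def primon_energy(N, E0=1.0):
    """E(N) = E0 * sum_p k_p log p, the energy of the multi-particle state |N>."""
    return E0 * sum(k * math.log(p) for p, k in prime_factorization(N).items())

# |360> = |k_2=3, k_3=2, k_5=1>, and E(360) = log 360 (with E0 = 1).
assert prime_factorization(360) == {2: 3, 3: 2, 5: 1}
for N in (2, 12, 360, 1001):
    assert abs(primon_energy(N) - math.log(N)) < 1e-9
```

The Boltzmann sum over these energies, \sum_N e^{-\beta E(N)}=\sum_N N^{-\beta E_0}, is then literally the zeta series of the bosonic primon gas.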
The energy of such a multi-particle state is clearly

\displaystyle{E(N) = \sum_p k_p E_p = E_0 \cdot \sum_p k_p \log p = E_0 \log N}

4th. The statistical mechanics partition function Z IS, for the (bosonic) primon gas, the Riemann zeta function!

\displaystyle{Z_B(T) \equiv\sum_{N=1}^\infty \exp \left(-\dfrac{E(N)}{k_B T}\right) = \sum_{N=1}^\infty \exp \left(-\dfrac{E_0 \log N}{k_B T}\right) = \sum_{N=1}^\infty \dfrac{1}{N^s} = \zeta (s)}

with s=E_0/k_BT=\beta E_0, and where k_B is the Boltzmann constant and T is the absolute temperature. The divergence of the zeta function at the value s=1 (corresponding to the harmonic sum) is due to the divergence of the partition function at a certain temperature, usually called the Hagedorn temperature. The Hagedorn temperature is defined by the condition s=1, i.e.,

k_BT_H=E_0

This temperature represents a limit beyond which the system of (bosonic) primons cannot be heated up. To understand why, we can calculate the energy

E=-\dfrac{\partial}{\partial \beta}\ln Z_B=-\dfrac{1}{\zeta (\beta E_0)}\dfrac{\partial \zeta (\beta E_0)}{\partial \beta}\approx \dfrac{E_0}{s-1}

which diverges as s\rightarrow 1^{+}: near T_H, pumping even an arbitrarily large amount of energy into the gas barely raises its temperature. A similar treatment can be built up for fermions rather than bosons, but here the Pauli exclusion principle has to be taken into account, i.e., two primons cannot occupy the same single-particle state. Therefore, each occupation number k_p can only be 0 or 1. As a consequence, the many-body states are labeled not by the natural numbers, but by the square-free numbers. These numbers are sieved from the natural numbers by the Möbius function. The calculation is a bit more involved, but the partition function for a non-interacting fermionic primon gas reduces to the relatively simple form

Z_F(T)=\dfrac{\zeta (s)}{\zeta (2s)}

The canonical ensemble is of course not the only ensemble used in statistical physics. Julia extended the Riemann gas approach to the grand canonical ensemble by introducing a chemical potential \mu (Julia, B.
L., 1994, Physica A 203(3-4), 425), and thus he replaced the primes p with new primes pe^{-\mu}. This generalisation of the Riemann gas is called the Beurling gas, after the Swedish mathematician Beurling, who had generalised the notion of prime numbers. Examining a boson primon gas with fugacity -1, one finds that its partition function becomes

\overline{Z}_B=\dfrac{\zeta (2s)}{\zeta (s)}

Remarkable interpretation: pick a system formed by two sub-systems not interacting with each other; the overall partition function is simply the product of the individual partition functions of the subsystems. The previous equation for the free fermionic Riemann gas exhibits exactly this structure, and so there are two decoupled systems: firstly, a fermionic “ghost” Riemann gas at zero chemical potential and, secondly, a boson Riemann gas with energy levels given by E(N)=2E_0\ln p_N. Julia also calculated the appropriate Hagedorn temperatures and analysed how the partition functions of two different number-theoretical gases, the Riemann gas and the “log-gas”, behave around the Hagedorn temperature. Although the divergence of the partition function hints at the breakdown of the canonical ensemble, Julia also claims that the continuation across or around this critical temperature can help to understand certain phase transitions in string theory or in the study of quark confinement. The Riemann gas, as a mathematically tractable model, has been followed with much attention because its asymptotic density of states grows exponentially, \rho (E)\sim e^E, just as in string theory. Moreover, using arithmetic functions, it is not extremely hard to define a transition between bosons and fermions by introducing an extra parameter \kappa, which defines an imaginary particle, the non-interacting parafermion of order \kappa. This order parameter counts how many parafermions can occupy the same state, i.e.,
the occupation number of any state falls into the interval \left[0,\kappa-1\right]; therefore \kappa=2 corresponds to normal fermions, while \kappa\rightarrow\infty recovers the usual bosons. Furthermore, the partition function of a free, non-interacting \kappa-parafermion gas can be defined to be (Bakas, I., and M. J. Bowick, 1991, J. Math. Phys. 32(7), 1881):

Z_\kappa=\dfrac{\zeta (s)}{\zeta (\kappa s)}

Indeed, Bakas et al. proved, using the Dirichlet convolution \star, how one can introduce free mixing of parafermions with different orders which do not interact with each other:

\displaystyle{f\star g=\sum_{d\vert n}f(d)g\left(\dfrac{n}{d}\right)}

where the symbol d\vert n means that d is a divisor of n. This operation preserves the multiplicative property of the classically defined partition functions, i.e., Z_{\kappa_1\star \kappa_2}=Z_{\kappa_1}\star Z_{\kappa_2}. It is even more intriguing that interactions can be incorporated into the mixing by modifying the Dirichlet convolution with a kernel function or twisting factor:

\displaystyle{f\odot g=\sum_{d\vert n}f(d)g\left( \dfrac{n}{d}\right) K(n,d)}

Using the unitary convolution, Bakas establishes a pedagogically illuminating case, the mixing of two identical boson Riemann gases. He shows that

Z_\infty\star Z_\infty=\dfrac{\zeta ^2(s)}{\zeta(2s)}=\dfrac{\zeta (s)}{\zeta(2s)}\zeta (s)=Z_2Z_\infty=Z_FZ_B

This result has an amazing meaning: two identical boson Riemann gases interacting with each other through the unitary twisting are equivalent to a mixture of a fermion Riemann gas and a boson Riemann gas which do not interact with each other. Therefore, one of the original boson components suffers a transmutation into a fermion gas!

Remark (I): the Möbius function, which is the \star-inverse of the constant function \mathbf{1}(n)=1 (free mixing), reappears in supersymmetric quantum field theories as a possible representation of the (-1)^F operator, where F is the fermion number operator!
In this context, the fact that \mu (n)=0 for non-square-free numbers is the manifestation of the Pauli exclusion principle itself! In any QFT with fermions, (-1)^F is a unitary, hermitian, involutive operator, where F is the fermion number operator, equal to the sum of the baryon number plus the lepton number, i.e., F=B+L, for all particles in the Standard Model and in some (most) SUSY QFTs. The action of this operator is to multiply bosonic states by 1 and fermionic states by -1. It is always a global internal symmetry of any QFT with fermions and corresponds to a rotation by an angle 2\pi. It splits the Hilbert space into two superselection sectors. Bosonic operators commute with (-1)^F, whereas fermionic operators anticommute with it. This operator is, therefore, most useful in supersymmetric field theories.

Remark (II): potential attacks on the Riemann Hypothesis may lead to advances in physics and/or mathematics, i.e., progress in Physmatics!

Remark (III): the energy of the ground state is taken to be zero, and the energy spectrum of the excited states is E(n)=E_0\ln (p_n), where p_n=2,3,5,\ldots runs over the prime numbers. Let N and E denote now the number of particles in the ground state and the total energy of the system, respectively. The fundamental theorem of arithmetic allows only one excited-state configuration for a given energy

E=E_0\ln (n)

where n is an integer. It immediately means that this gas preserves its quantum nature at any temperature, since only one quantum state is permitted to be occupied. The number fluctuation of any state (even the ground state) is therefore zero. In contrast, the change in the number of particles in the ground state \delta n_0 predicted by the canonical ensemble is a smooth non-vanishing function of the temperature, while the grand-canonical ensemble still exhibits a divergence.
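The Möbius/Pauli correspondence above is easy to check in a few lines of Python (a standard number-theory sketch of mine, not tied to any QFT package): \mu(N) vanishes exactly on the Pauli-forbidden (non-square-free) states, it acts like (-1)^F on the allowed ones, and the square-free sum reproduces the fermionic partition function Z_F=\zeta(s)/\zeta(2s):

```python
import math

def mobius(N):
    """Moebius mu(N): 0 if N has a squared prime factor,
    otherwise (-1)**(number of distinct prime factors)."""
    if N == 1:
        return 1
    count, p = 0, 2
    while p * p <= N:
        if N % p == 0:
            N //= p
            if N % p == 0:       # squared prime factor: "Pauli-forbidden" state
                return 0
            count += 1
        p += 1
    if N > 1:
        count += 1               # one leftover prime factor
    return (-1) ** count         # acts like (-1)^F on allowed states

s, terms = 2.0, 20000

# sum_N mu(N)/N^s = 1/zeta(s); at s = 2, zeta(2) = pi^2/6.
assert abs(sum(mobius(N) / N**s for N in range(1, terms)) - 6.0 / math.pi**2) < 1e-3

# Fermionic primon gas: Z_F = sum over square-free N of N^-s = zeta(s)/zeta(2s).
Z_F = sum(N**-s for N in range(1, terms) if mobius(N) != 0)
zeta2, zeta4 = math.pi**2 / 6.0, math.pi**4 / 90.0
assert abs(Z_F - zeta2 / zeta4) < 1e-3
```

At s=2 the closed forms \zeta(2)=\pi^2/6 and \zeta(4)=\pi^4/90 make both identities checkable against a truncated sum.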
This discrepancy between the microcanonical (combinatorial) ensemble and the other two ensembles remains even in the thermodynamic limit. One could argue that the Riemann gas is fictitious and its spectrum unphysical. However, we physicists think otherwise: since the spectrum E_N=E_0\ln (N) does not increase with N more rapidly than N^2, the existence of a quantum mechanical potential supporting this spectrum is possible (e.g., via the inverse scattering transform or supplementary tools). And of course the question is: what kind of system has such a spectrum? Some tentative ideas for the potential, based on elementary Quantum Mechanics, will be given in the next section.

Instead of considering the free Riemann gas, we could ask Quantum Mechanics whether there is some potential providing the logarithmic spectrum of the previous section. Indeed, there exists such a potential. Let us factorize any natural number in terms of its prime “atoms”:

N=p_1^{n_1}p_2^{n_2}\cdots p_m^{n_m}

Take the logarithm

\log N=\log \left(p_1^{n_1}p_2^{n_2}\cdots p_m^{n_m}\right)=n_1\log p_1+n_2\log p_2+\ldots+n_m\log p_m

\displaystyle{\log N=\sum_{i=1}^{m}n_i\log p_i}

where the p_i are prime numbers (note that if we include “1” as a prime number, it gives a zero contribution to the sum). Now, suppose a logarithmic oscillator spectrum, i.e., \varepsilon_i=\log p_i with p_i=(1),2,3,5,7,11,13,\ldots,137,\ldots and i=0,1,2,3,4,\ldots In order to have a “Riemann gas”/riemannium, we impose a spectrum labelled in the following fashion:

\varepsilon_s =\log (2s+1) \;\forall s=0,1,2,3,\ldots

Equivalently, we could also define the spectrum of the interacting riemannium gas as

\varepsilon_s=\log (s) \;\forall s=1,2,3,\ldots

In addition to this, suppose the following quantum postulates:

1st. Logarithmic potential:

V(x)=V_0\ln\dfrac{\vert x\vert}{L}

with positive constants V_0, L>0. From the physical viewpoint, the positive constant V_0 means a repulsive interaction (force).
2nd. Bohr-Sommerfeld quantization rule:

a) \displaystyle{I=\dfrac{1}{2\pi}\oint pdx=\hbar \left(s+\dfrac{1}{2}\right)}\; \forall s=0,1,\ldots

or, equivalently,

b) \displaystyle{I=\dfrac{1}{2\pi}\oint pdx=\hbar s}\; \forall s=1,2,\ldots

3rd. Turning point condition:

x_s=L\exp \left(\dfrac{\varepsilon_s}{V_0}\right)

In the case of 2a), we deduce that

\displaystyle{\dfrac{\hbar \pi}{2}\left(s+\dfrac{1}{2}\right)=\int_0^{x_s}dx\sqrt{2m\left(\varepsilon_s-V_0\ln \dfrac{x}{L}\right)}}

\displaystyle{\dfrac{\hbar \pi}{2}\left(s+\dfrac{1}{2}\right)=\sqrt{2mV_0}\int_0^{x_s}dx\sqrt{-\ln \left(\dfrac{x}{x_s}\right)}=\sqrt{2mV_0}\, x_s\Gamma \left(\dfrac{3}{2}\right)}

and then

x_s=\sqrt{\dfrac{\pi}{2mV_0}}\hbar \left( s+\dfrac{1}{2}\right)

Then, using the turning point condition in this equation, we finally obtain

\boxed{\dfrac{\varepsilon_s}{V_0}=\ln (2s+1)+\ln \left(\dfrac{\hbar}{2L}\sqrt{\dfrac{\pi}{2mV_0}}\right)} \forall s=0,1,\ldots

In the case of 2b), we would obtain

\boxed{\dfrac{\varepsilon_s}{V_0}=\ln (s)+\ln \left(\dfrac{\hbar}{L}\sqrt{\dfrac{\pi}{2mV_0}}\right)} \forall s=1,2,\ldots

In summary, the logarithmic potential provides a model for the interacting Riemann gas!

Massive elementary particles (with mass m) can be understood as composite particles made of confined constituents moving with some energy pc inside a sphere of radius R. We note that we do not define further properties of the constituent particles (e.g., whether they are rotating strings, point particles, extended objects like branes, or some other exotic structure moving in circular orbits or any other pattern as trajectory inside the composite particle). Let us make the hypothesis that there is some force F needed to counteract the centrifugal force F_c=\dfrac{\kappa c^2}{R}, with \kappa\equiv p/c, so that the centrifugal force is equal to pc/R; the balancing force F is then F=pc/R.
Then, assuming the two forces are equal in magnitude, we get

F=\dfrac{A_1}{R}

where A_1 is some constant, and that equation holds regardless of the origin of the interaction. The potential energy U necessary to confine a constituent particle will be, in that case,

\displaystyle{U=\int \dfrac{A_1}{R}dR=A_1\int \dfrac{dR}{R}=A_1\ln \dfrac{R}{R_\star}}

with R_\star some integration constant to be determined later. The mass assigned to the composite system, as measured by an external observer, is

m=\dfrac{\hbar}{cR}

The potential energy is also postulated to be proportional to m/R, which provides

U=\dfrac{A_2 m}{R}

where A_2 is another constant. In fact, A_1, A_2 are parameters that do not depend, a priori, on the radius R, but on the constituent-particle properties and coupling constants, respectively. Indeed, for instance, we could fix the ratio A_2/A_1 to the constant c^2/G_N, where G_N is the gravitational constant. However, such a constraint is not required from first principles or from any clear physical reason.
From the following equations:

m=\dfrac{\hbar}{cR} and U=\dfrac{A_2 m}{R}

we get

\boxed{U=\dfrac{A_2 \hbar}{cR^2}}

Quantum Mechanics implies that the angular momentum should be quantized, so we can make the following generalization:

U=\dfrac{A_2 \hbar}{cR^2}\rightarrow U_n=\dfrac{A_2 \hbar}{cR_n^2}=\dfrac{A_2 (n+1)\hbar}{cR_0^2} \;\forall n=0,1,2,\ldots

so

R_n^2=\dfrac{R_0^2}{n+1}\leftrightarrow R_n=\dfrac{R_0}{\sqrt{n+1}}

Using the previous integral and this last result, we obtain

\ln \left(\dfrac{R_\star}{R_0}\right)=-(n+1)\dfrac{R_\star^2}{R_0^2}

This is due to the fact that

U_n=A_2\dfrac{\hbar}{cR_n^2}=\dfrac{A_2\hbar (n+1)}{cR_0^2} and U=A_1\ln \dfrac{R}{R_\star}

Combining these equations, we deduce the value of R_\star as a function of the parameters A_1,A_2:

\boxed{R_\star=\sqrt{\dfrac{A_2\hbar}{A_1 c}}}

The ratio R_\star/R_0 can be calculated from the above equations as well, since

\ln \left(\dfrac{R_\star}{R_0}\right)=-(n+1)\dfrac{R_\star^2}{R_0^2}

for the case n=0 implies that \ln \left(\dfrac{R_\star}{R_0}\right)=-\dfrac{R_\star^2}{R_0^2}, and after exponentiation, it yields

\dfrac{R_\star}{R_0}=e^{-R_\star^2/R_0^2}

Introducing the variable x=\dfrac{R_\star}{R_0}, we have to solve the equation

x=e^{-x^2}

The numerical solution gives \phi=\dfrac{1}{x}=1.53158\ldots, from which the relationship between R_\star and R_0 can be easily obtained. Indeed, we can make more deductions from this result.
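The transcendental equation x=e^{-x^2} has no closed-form solution, but a few lines of bisection (a sketch; tolerance choices are mine) confirm the quoted value \phi=1/x\approx 1.53158 and the equivalent relation \ln \phi=1/\phi^2:

```python
import math

def solve_x():
    """Bisection for the root of f(x) = x - exp(-x^2) on [0, 1]."""
    lo, hi = 0.0, 1.0            # f(0) = -1 < 0 and f(1) = 1 - 1/e > 0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid - math.exp(-mid * mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = solve_x()
phi = 1.0 / x

assert abs(x - math.exp(-x * x)) < 1e-12          # it really solves x = e^{-x^2}
assert abs(phi - 1.53158) < 1e-4                  # the value quoted in the text
assert abs(math.log(phi) - 1.0 / phi**2) < 1e-12  # equivalent form: ln(phi) = 1/phi^2
```

The last assertion follows exactly: taking logarithms of x=e^{-x^2} gives \ln x=-x^2, i.e., \ln\phi=1/\phi^2.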
From \ln \phi=1/\phi^2, we get

R_n=R_\star e^{(n+1)\ln\phi}

If we take R_\star=\alpha R_0, with R_0=\hbar/(m_0c), then

\alpha=m_0\sqrt{\dfrac{A_2 c}{A_1\hbar}}

so

R_n=R_0e^{K\varphi_n}

with K=\dfrac{1}{2\pi}\ln \phi and

\varphi_n=2\pi (n+1)+\varphi_s

\varphi_s=2\pi \left(\dfrac{\ln \alpha}{\ln \phi}\right)

Equivalently, the masses would be dynamically generated from the above equations, since m_n=\dfrac{\hbar}{R_nc} and m_0=\dfrac{\hbar}{R_0c}, so we deduce a particle spectrum given by a logarithmic spiral, through the equation

m_n=m_0e^{-K\varphi_n}

Remark: The shift K\rightarrow -K implies that the spiral would begin with m_0 as the lowest mass and not the biggest mass, turning the spiral from the inside to the outside region and vice versa. In summary, the logarithmic oscillator is also related to some kind of confined particles, and it provides a toy model of confinement!

Is the link between classical statistical mechanics and the Riemann zeta function unique, or is it something more general? C. Tsallis explained long ago the connection between non-extensive Tsallis entropies and the Riemann zeta function, giving supplementary arguments to support the idea of a physical link between Physics, Statistical Mechanics and the Riemann hypothesis. His idea is the following.

A) Consider the harmonic oscillator with spectrum E_n=\hbar\omega n, \forall n=0,1,2,\ldots, where the E_n are the H.O. eigenenergies.
B) Consider the Tsallis partition function

\displaystyle{Z_q (\beta )=\sum_{n=0}^{\infty}e_q^{-\beta E_n}=\sum_{n=0}^{\infty}e_q^{-\beta\hbar\omega n}}

where q>1 and the deformed q-exponential is defined as

e_q^z\equiv \left[1+(1-q)z\right]_+^{\frac{1}{1-q}}

with

\left[\alpha\right]_+=\begin{cases}\alpha, & \alpha>0\\ 0, & \mbox{otherwise}\end{cases}

and the inverse of the deformed exponential is the q-logarithm

\ln_q z=\dfrac{z^{1-q}-1}{1-q}

It implies that

\boxed{\displaystyle{Z_q=\sum_{n=0}^{\infty}\dfrac{1}{\left[1+(q-1)\beta\hbar\omega n\right]^{\frac{1}{q-1}}}=\dfrac{1}{\left[(q-1)\beta\hbar \omega\right]^{\frac{1}{q-1}}}\sum_{n=0}^{\infty}\dfrac{1}{\left[\left(\dfrac{1}{(q-1)\beta\hbar\omega}\right)+n\right]^{\frac{1}{q-1}}}}}

Now, defining the Hurwitz zeta function as

\displaystyle{\zeta (s,Q)=\sum_{n=0}^{\infty}\dfrac{1}{\left(Q+n\right)^{s}}=\dfrac{1}{Q^s}+\sum_{n=1}^{\infty}\dfrac{1}{\left(Q+n\right)^{s}}}

the last equation can be rewritten in a simple and elegant way:

\boxed{\displaystyle{Z_q=\dfrac{1}{\left[(q-1)\beta\hbar\omega\right]^{\frac{1}{q-1}}}\zeta \left(\dfrac{1}{q-1},\dfrac{1}{(q-1)\beta\hbar\omega}\right)}}

This system can be called the Tsallis gas, or the Tsallisium. It is a q-deformed (non-extensive) version of the free Riemann gas, and it is related to the harmonic oscillator! The issue, of course, is the problematic limit q\rightarrow 1. In the limit Q\rightarrow 1, we recover the Riemann zeta function from the Hurwitz zeta function:

\displaystyle{\zeta (s,1)\equiv \zeta (s)=\sum_{n=1}^{\infty}n^{-s}=\sum_{n=1}^{\infty}\dfrac{1}{n^s}=\prod_{p}\dfrac{1}{1-p^{-s}}}

where the last product runs over the prime numbers. The above equation, the partition function of the Tsallis gas/Tsallisium, connects directly the Riemann zeta function with Physics and non-extensive Statistical Mechanics.
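The boxed identity can be verified numerically. For the illustrative choice q=3/2 and \beta\hbar\omega=1 (my parameters, chosen so that the exponent 1/(q-1)=2 and the Hurwitz argument Q=2 give the closed form \zeta(2,2)=\pi^2/6-1), a self-contained sketch is:

```python
import math

# Illustrative parameters: q = 3/2, beta*hbar*omega = 1.
q, bho = 1.5, 1.0
exponent = 1.0 / (q - 1.0)                    # = 2
prefactor = ((q - 1.0) * bho) ** (-exponent)  # = 4; note Q = 1/((q-1)*bho) = 2

# Direct Tsallis sum: Z_q = sum_n [1 + (q-1) beta hbar omega n]^(-1/(q-1)).
Z_direct = sum((1.0 + (q - 1.0) * bho * n) ** (-exponent) for n in range(2_000_000))

# Hurwitz side: zeta(2, 2) = zeta(2) - 1 = pi^2/6 - 1 in closed form.
Z_hurwitz = prefactor * (math.pi**2 / 6.0 - 1.0)

assert abs(Z_direct - Z_hurwitz) < 1e-4
```

The truncation error of the direct sum falls off like 1/N, so two million terms are ample for the chosen tolerance.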
Indeed, C. Tsallis himself dedicated a nice slide with this theme to M. Berry.

Remark (I): The link between the Riemann zeta function and the free/interacting Riemann gas goes beyond classical statistical mechanics; it also appears in non-extensive statistical mechanics!

Remark (II): In general, the Riemann hypothesis is entangled with the theory of harmonic oscillators in non-extensive statistical mechanics!

For readers not familiar with Tsallis generalized entropies, I would like to show you the main definitions of this generalization of the classical statistical entropy (Boltzmann-Gibbs-Shannon), in a nutshell! I will have to discuss this kind of statistical mechanics further in the future, but today I will only anticipate some bits of it. Tsallis entropy (and its Statistical Mechanics/Thermodynamics) is based on the following entropy functionals:

1st. Discrete case.

\boxed{\displaystyle{S_q=k_B\dfrac{1-\displaystyle{\sum_{i=1}^W p_i^q}}{q-1}=-k_B\sum_{i=1}^Wp_i^q\ln_q p_i=k_B\sum_{i=1}^Wp_i\ln_q \left(\dfrac{1}{p_i}\right)}}

plus the normalization condition

\boxed{\displaystyle{\sum_{i=1}^Wp_i=1}}

2nd. Continuous case.

\boxed{\displaystyle{S_q=-k_B\int dX\left[p(X)\right]^q\ln_q p(X)=k_B\int dX p(X)\ln_q\dfrac{1}{p(X)}}}

plus the normalization condition

\boxed{\displaystyle{\int dX p(X)=1}}

3rd. Quantum case (Tsallis matrix density).

\boxed{\displaystyle{S_q=-k_BTr\rho^q\ln _q\rho\equiv k_BTr\rho \ln_q\dfrac{1}{\rho}}}

plus the normalization condition

\boxed{Tr\rho=1}

In all three cases above, we have defined the q-logarithm as \ln_q z\equiv\dfrac{z^{1-q}-1}{1-q}, with \ln_1 z\equiv \ln z, and the three Tsallis entropies satisfy the non-additive property (for statistically independent subsystems A and B):

\boxed{\dfrac{S_q(A+B)}{k_B}=\dfrac{S_q (A)}{k_B}+\dfrac{S_q (B)}{k_B}+(1-q)\dfrac{S_q (A)}{k_B}\dfrac{S_q (B)}{k_B}}

Theoretical physicists suspect that the Physics of spacetime at the Planck scale, or beyond, will change or will become meaningless.
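Before moving on, the discrete Tsallis entropy and its pseudo-additivity above can be checked directly (a sketch with k_B=1 and two made-up independent distributions of mine):

```python
import math

def S_q(p, q):
    """Discrete Tsallis entropy with k_B = 1: S_q = (1 - sum_i p_i^q)/(q - 1)."""
    return (1.0 - sum(pi**q for pi in p)) / (q - 1.0)

q = 1.7
A = [0.2, 0.3, 0.5]
B = [0.6, 0.4]
AB = [a * b for a in A for b in B]   # joint distribution of independent subsystems

assert abs(sum(AB) - 1.0) < 1e-12    # the joint distribution is still normalized

# Pseudo-additivity: S_q(A+B) = S_q(A) + S_q(B) + (1-q) S_q(A) S_q(B).
lhs = S_q(AB, q)
rhs = S_q(A, q) + S_q(B, q) + (1.0 - q) * S_q(A, q) * S_q(B, q)
assert abs(lhs - rhs) < 1e-12

# As q -> 1, the Boltzmann-Gibbs-Shannon entropy is recovered.
shannon = -sum(a * math.log(a) for a in A)
assert abs(S_q(A, 1.000001) - shannon) < 1e-4
```

The pseudo-additivity check works for any pair of independent distributions, since \sum_{ij}(a_ib_j)^q factorizes as (\sum_i a_i^q)(\sum_j b_j^q).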
There, the spacetime notion we are familiar with loses its meaning. Even more, we could find those changes in the fundamental structure of the Polyverse to occur at higher length scales. Really, we don’t know yet where spacetime “emerges” as an effective theory of something deeper, but such an emergence is a natural guess from our current, limited knowledge of fundamental physics. Indeed, it is thought that the experimental device making measurements and the experimenter cannot be distinguished at the Planck scale. At the Planck scale, we do not know at this moment how the framework of cosmology and the Hilbert-space tools of Quantum Mechanics could be obtained within some unified formalism. It is one of the challenges of Quantum Gravity. Many scientists think that the geometry and topology of sub-Planckian lengths should bear no relation to our current geometry or topology. We say and believe that geometry, topology, fields and the main features of macroscopic bodies “emerge” from the ultra-Planckian and “subquantum” realm. It is analogous to the colours of the rainbow emerging from atoms, or to how Thermodynamics emerges from Statistical Mechanics. There are many proposed frameworks to go beyond the usual notions of space and time, but the p-adic analysis approach is a quite remarkable candidate, with several achievements in its favor. Motivations for p-adic and adelic approaches as the ultimate substructure of the microscopic world arise from:

1) Divergences of QFT are believed to be absent with such number structures. Renormalization may turn out to be unnecessary.

2) Since no prime has a special status in p-adic analysis, it might be more natural and instructive to work with adeles instead of a pure p-adic approach.

3) There are two paths for a p-adic/adelic QM/QFT theory. The first path considers particles in a p-adic potential well, and the goal is to find solutions with smoothly varying complex-valued wavefunctions.
There, the solutions retain a certain familiarity from ordinary life and ordinary QM. The second path allows particles in p-adic potential wells, and the goal is to find p-adic valued wavefunctions. In this case, the physical interpretation is harder. Yet the math often exhibits surprising features and properties, and some people are trying to explore those novel and striking aspects. Ordinary real (and complex) numbers are familiar to everyone. Ostrowski’s theorem states that there are essentially only two possible completions of the rational numbers (the “fractions” you know very well). The two options depend on the metric we consider:

1) The real numbers. One completes the rationals by adding the limit of all Cauchy sequences to the set. Cauchy sequences are sequences of numbers whose elements become arbitrarily close to each other as the sequence progresses. Mathematically speaking, given any small positive distance, all but a finite number of elements of the sequence are less than that given distance from each other. Real numbers satisfy the triangle inequality \vert x+y\vert \leq \vert x\vert +\vert y\vert.

2) The p-adic numbers. The completions are different because of the two different ways of measuring distance. p-adic numbers satisfy a stronger version of the triangle inequality, called ultrametricity. For any p-adic numbers it reads

\vert x+y\vert _p\leq \mbox{max}\{\vert x\vert_p ,\vert y \vert_p\}

Spaces where the above enhanced triangle inequality/ultrametricity holds are called ultrametric spaces. In summary, there exist two different types of algebraic number systems. There is no other possible norm beyond the real (absolute) norm or the p-adic norms. It is the power of Mathematics in action. Then, a question follows immediately: how can we unify such different notions of norm, distance and type of numbers? After all, they behave in very different ways.
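The p-adic norm of a rational number and the ultrametric inequality can be checked by brute force (a sketch; `padic_norm` is my own helper, with the usual conventions |0|_p=0 and |x|_p=p^{-v_p(x)}):

```python
from fractions import Fraction

def padic_norm(x, p):
    """|x|_p = p**(-v_p(x)) for a nonzero rational x, with |0|_p = 0."""
    x = Fraction(x)
    if x == 0:
        return 0.0
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:          # count powers of p in the numerator
        num //= p
        v += 1
    while den % p == 0:          # ...and subtract those in the denominator
        den //= p
        v -= 1
    return float(p) ** (-v)

p = 3
# Ultrametricity: |x + y|_p <= max(|x|_p, |y|_p) for every pair tested.
for x in range(-50, 51):
    for y in range(-50, 51):
        assert padic_norm(x + y, p) <= max(padic_norm(x, p), padic_norm(y, p)) + 1e-12

# 9 = 3^2 is 3-adically small, while 1/3 is 3-adically large.
assert padic_norm(9, p) == 1.0 / 9.0
assert padic_norm(Fraction(1, 3), p) == 3.0
```

Note how the “size” of a number is inverted with respect to ordinary intuition: high powers of p are tiny in the p-adic metric.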
Trying to answer this question is how the concept of adele emerges. The ring of adeles is a framework where we consider all those different structures on an equal footing, in the same mathematical language. In fact, it is analogous to the way in which we unify space and time in relativistic theories! Adele numbers are arrays consisting of both real (complex) and p-adic numbers:

\mathbb{A}=\left( x_\infty, x_2,x_3,x_5,\ldots,x_p,\ldots\right)

where x_\infty is a real number and the x_p are p-adic numbers living in the p-adic field \mathbb{Q}_p. Indeed, the infinity symbol is a consequence of the fact that the real numbers can be thought of as coming from “the prime at infinity”. Moreover, it is required that all but finitely many of the p-adic numbers x_p lie in the p-adic integers \mathbb{Z}_p. The adele ring is therefore a restricted direct (cartesian) product. The idele group is defined as the invertible elements of the adele ring:

\mathbb{I}=\mathbb{A}^\star =\{ x\in \mathbb{A}\;\;\mbox{such that}\;\; x_\infty \in \mathbb{R}^{\star} \;\; \mbox{and} \;\; \vert x_p\vert _p=1\; \mbox{for all but finitely many primes p}\}

We can define calculus over the adele ring in a very similar way to the real or complex case. For instance, we can define trigonometric functions, e^X, logarithms \log (x), and special functions like the Riemann zeta function. We can also perform integral transforms like the Mellin or the Fourier transform over this ring. Moreover, this ring has many interesting properties. For example, quadratic polynomials obey the Hasse local-global principle: a rational number is the solution of a quadratic polynomial equation if and only if the equation has a solution in \mathbb{R} and in \mathbb{Q}_p for all primes p. Furthermore, the real and p-adic norms are related to each other by the remarkable adelic product formula/identity:

\displaystyle{\vert x\vert_\infty \prod_p\vert x\vert_p=1}

where x is a nonzero rational number.
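Because a nonzero rational has only finitely many prime divisors, the adelic product formula can be verified exactly with fractions (a sketch; the helper names are mine):

```python
from fractions import Fraction

def prime_factors(n):
    """Set of primes dividing |n|."""
    n, p, out = abs(n), 2, set()
    while p * p <= n:
        while n % p == 0:
            out.add(p)
            n //= p
        p += 1
    if n > 1:
        out.add(n)
    return out

def adelic_product(x):
    """|x|_inf * prod_p |x|_p as an exact Fraction (all other factors equal 1)."""
    x = Fraction(x)
    result = abs(x)              # the archimedean norm |x|_inf
    for p in prime_factors(x.numerator) | prime_factors(x.denominator):
        v, num, den = 0, x.numerator, x.denominator
        while num % p == 0:      # v = v_p(numerator) - v_p(denominator)
            num //= p
            v += 1
        while den % p == 0:
            den //= p
            v -= 1
        result *= Fraction(p) ** (-v)   # |x|_p = p^(-v_p(x))
    return result

# |x|_inf * prod_p |x|_p = 1 for every nonzero rational x.
for x in (Fraction(360, 7), Fraction(-22, 2025), Fraction(1, 64)):
    assert adelic_product(x) == 1
```

For instance, for x=-22/2025 the nontrivial local norms are |x|_2=1/2, |x|_{11}=1/11, |x|_3=81 and |x|_5=25, which exactly cancel |x|_\infty=22/2025.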
Beyond complex QM, where we can study the particle in a box or in a ring array of atoms, p-adic QM can be used to handle fractal potential wells as well. Indeed, the analogue of the Schrödinger equation can be solved, and it has been useful, for instance, in the design of microchips and self-similar structures. It has been conjectured by Wu and Sprung, and by Hutchinson and van Zyl (see http://arXiv.org/abs/nlin/0304038v1), that the potential constructed from the non-trivial Riemann zeroes and from prime number sequences has fractal properties. They have suggested that D=1.5 for the Riemann zeroes and D=1.8 for the prime numbers. Therefore, p-adic numbers are an excellent tool for constructing fractal potential wells. On the other hand, following Feynman, we know that path integrals for quantum particles/entities manifest fractal properties. Indeed, we can use path integrals even in the absence of a p-adic Schrödinger equation. Thus, defining the adelic version of Feynman’s path integral is a necessary and fundamental step for a general quantum theory beyond the common textbook version. However, we need to be very precise with certain details. In particular, we have to be careful with the definition of derivatives and differentials in order to do proper calculations. Indeed, we can do it, since both the adelic and the idelic rings have a well-defined translation-invariant Haar measure

Dx=dx_\infty dx_2dx_3\cdots dx_p\cdots

and

Dx^\star=dx_\infty^\star dx_2^\star dx_3^\star\cdots dx_p^\star\cdots

These measures provide a way to compute Feynman path integrals over adelic/idelic spaces.
It turns out that Gaussian integrals satisfy a generalization of the adelic product formula introduced before, namely:

\displaystyle{\int_{\mathbb{R}}\chi_\infty (ax_\infty^2+bx_\infty)dx_\infty \prod_p \int_{\mathbb{Q}_p}\chi_p (ax_p^2+bx_p)dx_p=1}

where \chi is an additive character from the adeles to the complex numbers \mathbb{C} given by the map

\displaystyle{\chi (x)=\chi_\infty (x_\infty)\prod_p \chi_p (x_p)= e^{-2\pi ix_\infty}\prod_p e^{2\pi i\{x_p\}_p}}

and \{x_p\}_p is the fractional part of x_p in the ordinary p-adic expansion of x_p. This can be thought of as a strong generalization of the homomorphism \mathbb{Z}/n\mathbb{Z}\rightarrow \mathbb{C}^\times, k\mapsto e^{2\pi ik/n}. Then, the adelic path integral, with input parameters in the adele ring \mathbb{A} and generating complex-valued wavefunctions, follows:

\displaystyle{K_{\mathbb{A}} (x'',t'';x',t') =\prod_\alpha \int_{(x' _\alpha ,t' _\alpha)}^{(x'' _\alpha ,t'' _\alpha)}\chi_\alpha \left(-\dfrac{1}{h}\int_{t' _\alpha}^{t''_\alpha}L(\dot{q} _\alpha ,q_\alpha ,t_\alpha )dt_\alpha \right) Dq_\alpha}

The eigenvalue problem over the adele ring is given by

U(t) \psi_\alpha (x)=\chi (E_\alpha (t))\psi_\alpha (x)

where U is the time-evolution operator, \psi_\alpha are adelic eigenfunctions, and E_\alpha is the adelic energy. Here the notation has been simplified by using the subscript \alpha, which runs over all primes, including the prime at infinity. Note the additive character \chi, which allows these to be complex-valued integrals. The path integral can be generalized to p-adic time as well, i.e., to paths with fractal behaviour! How is this p-adic/adelic stuff connected to the riemannium and the Riemann zeta function? It can be shown that the ground state of the adelic quantum harmonic oscillator is

\displaystyle{\vert 0\rangle =\Psi_0 (x)=2^{1/4}e^{-\pi x_\infty^2}\prod_p \Omega (\vert x_p\vert_p)}

where \Omega \left(\vert x_p \vert _p\right) is 1 if \vert x_p\vert_p\leq 1 (i.e., if x_p is a p-adic integer) and 0 otherwise.
This result is strikingly similar to the ordinary complex-valued ground state. Applying the adelic Mellin transform, we can deduce that

\Phi (\alpha)=\sqrt{2}\,\Gamma \left(\dfrac{\alpha}{2}\right)\pi^{-\alpha/2}\zeta (\alpha)

where \Gamma, \zeta are, respectively, the gamma function and the Riemann zeta function. Due to the Tate formula, we get that

\Phi (\alpha)=\Phi (1-\alpha)

and from this the functional equation of the Riemann zeta function naturally emerges. In conclusion: it is fascinating that such a simple physical system as the (adelic) harmonic oscillator is related to such a significant mathematical object as the Riemann zeta function.

The Veneziano amplitude is also related to the Riemann zeta function and string theory. A nice application of the previous adelic formalism involves the adelic product formula in a different way. In string theory, one computes crossing-symmetric Veneziano amplitudes A(a,b) describing the scattering of four tachyons in the 26d open bosonic string. Indeed, the Veneziano amplitude can be written in terms of the Riemann zeta function in this way:

A_\infty (a,b)=g_\infty^2 \dfrac{\zeta (1-a)}{\zeta (a)}\dfrac{\zeta (1-b)}{\zeta (b)}\dfrac{\zeta (1-c)}{\zeta (c)}

These amplitudes are not easy to calculate. However, in 1987, an amazingly simple adelic product formula for this tachyonic scattering was found:

\displaystyle{A_\infty (a,b)\prod_p A_p (a,b)=1}

Using this formula, we can compute the four-point amplitudes/interaction vertices at tree level exactly, as the inverse of the much simpler p-adic amplitudes. This discovery generated quite a bit of activity in string theory, although it is not very popular as far as I know. Moreover, the whole landscape of the p-adic/adelic framework is not as easy for the closed bosonic string as for the open bosonic string (note that in a p-adic world there is no “closure”, but “clopen” sets instead of naive closed intervals).
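Going back to the Tate formula for a moment: \Phi(\alpha)=\Phi(1-\alpha) can be sanity-checked with exact special values, pairing \alpha=2 with 1-\alpha=-1, where \zeta(2)=\pi^2/6 and the analytic continuation gives \zeta(-1)=-1/12 (a sketch; the zeta values are hard-coded because the math module has no zeta, while math.gamma handles \Gamma(-1/2)=-2\sqrt{\pi} directly):

```python
import math

def completed_zeta(alpha, zeta_alpha):
    """Phi(alpha) = sqrt(2) * Gamma(alpha/2) * pi^(-alpha/2) * zeta(alpha).
    zeta(alpha) must be supplied (its analytic continuation for alpha < 1)."""
    return math.sqrt(2.0) * math.gamma(alpha / 2.0) * math.pi ** (-alpha / 2.0) * zeta_alpha

# Known exact values: zeta(2) = pi^2/6 and zeta(-1) = -1/12 (analytic continuation).
phi_2 = completed_zeta(2.0, math.pi**2 / 6.0)
phi_m1 = completed_zeta(-1.0, -1.0 / 12.0)

# Tate's formula: Phi(alpha) = Phi(1 - alpha); here both sides equal sqrt(2)*pi/6.
assert abs(phi_2 - phi_m1) < 1e-12
assert abs(phi_2 - math.sqrt(2.0) * math.pi / 6.0) < 1e-12
```

The pole of \Gamma(\alpha/2) at \alpha=0 and the pole of \zeta(\alpha) at \alpha=1 are excluded, as usual for the completed zeta function.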
It has also been a source of controversy what the role of the p-adic/adelic structure is at the level of the string worldsheet. However, there is some research along these lines at the current time. Another nice topic is the vacuum energy and its physical manifestations. There are some very interesting physical effects involving the vacuum energy in both classical and quantum physics. The most important are the Casimir effect (vacuum attraction/repulsion between “plates”), the Schwinger effect (particle creation in strong fields), the Unruh effect (thermal effects seen by a uniformly accelerated observer/frame), the Hawking effect (particle creation by black holes, due to black hole thermodynamics in the corresponding gravitational/accelerated environment), and the cosmological constant effect (the vacuum energy expanding the Universe at an increasing rate on large scales; does it itself gravitate?). The Riemann zeta function and its generalizations appear in all of these effects. It is not a mere coincidence. It is telling us something deeper that we cannot yet understand. As an example of why the zeta function matters in, e.g., the Casimir effect, let me say that the zeta function regularizes the following general sum:

\boxed{\displaystyle{\sum_{n\in \mathbb{Z}}\vert n\vert^d =2\zeta (-d)}}

Remark: I do know that I should likely have said “the cosmological constant problem”. But as it should be solved in the future, we can see the cosmological constant we observe (very much smaller than our current QFT calculations predict) as “an effect” or “anomaly” to be explained. We know that the cosmological constant drives the current positive acceleration of the Universe, but it is really tiny. What makes it so small? We don’t know for sure. Remark (II): What are p-adic strings/branes? I. Aref'eva, I. Volovich and B. Dragovich, among other physicists from Russia and Eastern Europe, have worked on non-local field theories and cosmologies using the Riemann zeta function as a model.
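The boxed regularization can be made concrete for integer exponents via \zeta(-n)=-B_{n+1}/(n+1), where B_k are the Bernoulli numbers; the d=1 case, \zeta(-1)=-1/12, is the value behind the standard one-dimensional Casimir computation. A small sketch of mine (hard-coding the first few Bernoulli numbers):

```python
from fractions import Fraction

# Bernoulli numbers B_2, B_4, B_6 (B_k = 0 for odd k > 1), hard-coded.
BERNOULLI = {2: Fraction(1, 6), 3: Fraction(0), 4: Fraction(-1, 30),
             5: Fraction(0), 6: Fraction(1, 42)}

def zeta_neg(d):
    """zeta(-d) for a positive integer d, via zeta(-n) = -B_{n+1}/(n+1)."""
    return -BERNOULLI[d + 1] / (d + 1)

# The boxed regularized sum: sum over integers of |n|^d "=" 2*zeta(-d)
print(2 * zeta_neg(1))  # -1/6, built from zeta(-1) = -1/12 (the Casimir value)
print(zeta_neg(3))      # 1/120
```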
It is a relatively unknown approach, but it is remarkable and very interesting. I will have to tell you about these works, but not here, not today. I went too far, far away in this log. I apologize… I have explained why I chose The Spectrum of Riemannium as my blog name, and I used the (partial) answer to show you some of the multiple connections and links of the Riemann zeta function (and its generalizations) with Mathematics and Physics. I am sure that solving the Riemann Hypothesis will require answering the question of what vibrating system lies behind the spectral properties of the Riemann zeroes. It is important for Physmatics! I would say more: it is of capital importance to theoretical physics as well. Let me review the main links between the Riemann zeta function and its zeroes and Physmatics:

1) Riemann zeta values appear in atomic Physics and Statistical Physics.

2) The Riemannium has spectral properties similar to those of Random Matrix Theory.

3) The Hilbert-Polya conjecture states that there is some mysterious hamiltonian providing the zeroes. The Berry-Keating conjecture states that the “quantum” hamiltonian corresponding to the Riemann hypothesis is the dual of a (semi)classical hamiltonian generating classically chaotic dynamics.

4) The logarithmic potential provides a realization of a certain kind of spectrum asymptotically similar to that of the free Riemann gas. It is also related to the issue of confinement of “fundamental” constituents inside “elementary” particles.

5) The primon gas is the Riemann gas associated to the prime numbers in a (Quantum) Statistical Mechanics approach. There are bosonic, fermionic and parafermionic/parabosonic versions of the free Riemann gas, and some other generalizations using the Beurling gas and other tools from number theory.

6) The non-extensive Statistical Mechanics studied by C.
Tsallis (and other people) provides a link between the harmonic oscillator and the Riemann hypothesis as well. The Tsallisium is the physical system obtained when we study the harmonic oscillator with a non-extensive Tsallis approach.

7) An adelic approach to QM and the harmonic oscillator produces the functional equation of the Riemann zeta function via the Tate formula. The link with p-adic numbers and p-adic zeta functions reveals certain fractal patterns in the Riemann zeroes, the prime numbers and the theory behind them. The periodicity or quasiperiodicity also relates them to some kind of (quasi)crystal, and maybe it could be used to explain some behaviour of the prime numbers, such as the one behind Goldbach's conjecture.

8) A link between entropy, information theory and the Riemann zeta function is made through the notion of group entropy. Connections between tachyons, p-adic numbers and string theory arise naturally from the Veneziano amplitude.

9) The Riemann zeta function is also used in the regularization/definition of infinite determinants arising in the theory of differential operators and similar maps. Even the generalization of this framework is important in number theory, through generalizations of the Riemann zeta function and other arithmetical functions similar to it. The Riemann zeta function is, thus, one of the simplest examples of an arithmetical function.

10) There are further links of the Riemann zeta function to “vacuum effects” like the Schwinger effect (pair creation in strong fields) or the Casimir effect (repulsive/attractive forces between close objects with “nothing” between them). The Riemann zeta function is also related to SUSY somehow, either through the Dirichlet eta function appearing in Fermi-Dirac statistics or directly through the explicit relationship between the Möbius function and the (-1)^F operator appearing in supersymmetric field theories.
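As a toy illustration of the Möbius/(-1)^F analogy just mentioned (my own sketch, not from the post): in the primon-gas picture, \mu(n) vanishes on "Pauli-forbidden" states (n not squarefree) and otherwise equals (-1) raised to the number of prime "particles" in n, and its Dirichlet series resums to 1/\zeta(s):

```python
import math

def mobius(n):
    """Moebius mu(n): 0 if a squared prime divides n, else (-1)^(number of
    prime factors) -- the arithmetic counterpart of the (-1)^F grading."""
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0     # "two identical fermions": Pauli-forbidden state
            mu = -mu
        p += 1
    return -mu if n > 1 else mu

# The mu-weighted ("supertraced") partition sum inverts the Riemann gas one:
# sum_n mu(n)/n^s = 1/zeta(s); at s = 2 this gives 6/pi^2.
s2 = sum(mobius(n) / n**2 for n in range(1, 20001))
print(s2, 6 / math.pi**2)
```

The truncated sum agrees with 1/\zeta(2)=6/\pi^2 to within the tail of the series.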
In summary, the Riemann zeta function is ubiquitous, and it appears, alone or with its generalizations, in very different fields: number theory, quantum physics, (semi)classical physics/dynamics, (quantum) chaos theory, information theory, QFT, string theory, statistical physics, fractals, quasicrystals, operator theory, renormalization and many other places. Is it an accident, or is it telling us something more important? I think it is telling us something. Zeta functions are fundamental objects for the future of Physmatics, and the solution of the Riemann Hypothesis would perhaps provide a guide into the ultimate quest of both Physics and Mathematics (Physmatics), likely providing a complete and consistent description of the whole Polyverse. The main questions yet to be answered are:

A) What is the Riemann zeta function? What are the riemannium/tsallisium, and what kind of physical system do they really represent? What is the physical system behind the Riemann non-trivial zeroes? What do the zeroes mean for the generalizations of the Riemann zeta function in the form of L-functions?

B) What is the Riemann-Hilbert-Polya operator? On what space does it act?

C) Are the Riemann zeta function and its generalizations everywhere, as they seem to be, inside the deepest structures of the microscopic/macroscopic entities of the Polyverse?

I suppose you will now understand better why I decided to name my blog The Spectrum of Riemannium… And there are many other reasons I will not write here, since I could reveal my current research. However, stay tuned! Physmatics is out there and everywhere, like fractals and zeta functions, and it is full of wonderful mathematical structures and simple principles!

LOG#044. Hydrodynamics and SR(I).

LOG#040. Relativity: Examples(IV).

Example 1. Compton effect. Let “a” denote a photon of frequency \nu.
Then it hits an electron “b” at rest, changing its frequency into \nu'; we denote by “c” this new photon, and the electron “d” then moves after the collision in a certain direction with respect to the line of observation. We define that direction by the angle \theta. We use momenergy conservation:

P^\mu_a+P^\mu_b=P^\mu_c+P^\mu_d

We multiply this equation by P_{\mu c} to deduce that

P^\mu_a P_{\mu c}+P^\mu_{b}P_{\mu c}=P^\mu_c P_{\mu c}+P^\mu_d P_{\mu c}

Using that the photon momenergy squared is zero, we obtain:

P^\mu_a P_{\mu c}+P^\mu_bP_{\mu c}=P^\mu_dP_{\mu c}

P^\mu _a=\left(\dfrac{h\nu}{c},\dfrac{h\nu}{c},0,0\right)

Remembering the definitions \dfrac{c}{\lambda}=\nu and \dfrac{c}{\lambda'}=\nu' and inserting the values of the momenta into the respective equations, we get

\boxed{\Delta \lambda\equiv \lambda'-\lambda=\dfrac{h}{mc}\left(1-\cos\theta\right)}

\boxed{\dfrac{\omega'}{\omega}=\left[1+\dfrac{\hbar \omega}{mc^2}\left(1-\cos\theta\right)\right]^{-1}}

One generally defines the so-called electron Compton wavelength as:

\lambda_C=\dfrac{h}{mc}\approx 2.43\cdot 10^{-12}\;m

Remark: There are some current discussions and speculative ideas trying to use the Compton effect as a tool to define the kilogram in an invariant and precise way. Example 2. Inverse Compton effect. Imagine an electron moving “to the left” that hits a photon, changing the photon's frequency; the electron then changes its direction of motion, its initial velocity being -u_b and \theta the angle with respect to the direction of motion.
The momenergy of the electron reads

P^\mu_b=\left(\gamma_b mc,-\gamma_b m u_b,0,0\right)

Using the same conservation of momenergy as above,

\dfrac{2EE'}{c^2}+\gamma_b mE'-\gamma_b m\dfrac{u_b}{c}E'=\gamma_b m E+\gamma_b \dfrac{mu_b E}{c}

Supposing that u_b\approx c, so that

1-u_b/c\approx \dfrac{1}{2}\left(1+\dfrac{u_b}{c}\right)\left(1-\dfrac{u_b}{c}\right)=\dfrac{1}{2}\left(1-\dfrac{u_b^2}{c^2}\right)=\dfrac{1}{2}\dfrac{1}{\gamma_b^2}

we get

\dfrac{2EE'}{c^2}+\dfrac{mE'}{2\gamma_b}=2\gamma_b mE

\dfrac{E'}{E}=\dfrac{2\gamma_b m}{\dfrac{2E}{c^2}+\dfrac{m}{2\gamma_b}}=\dfrac{4\gamma_b^2}{1+\dfrac{4\gamma_b E}{mc^2}}

This inverse Compton effect is of great importance in Astronomy. Photons of the cosmic microwave background radiation (CMB), with a very low energy of the order of E\approx 10^{-3}eV, are struck by very energetic electrons (rest energy mc^2=511 keV). For typical values of \gamma_b \gg 10^8, the second term in the denominator dominates, giving E'\approx \gamma_b\times 511keV. Therefore, the inverse Compton effect can increase the energy of a photon in a spectacular way. If we do not put u_b\approx c, we would get from the equation:

\dfrac{2EE'}{c^2}+\gamma_b mE'-\gamma_b m\dfrac{u_b}{c}E'=\gamma_b m E+\gamma_b m\dfrac{u_b E}{c}

\gamma_b m E'\left(1-\dfrac{u_b}{c}+\dfrac{2E}{\gamma_b mc^2}\right)=\gamma_b m E\left(1+\dfrac{u_b}{c}\right)

\boxed{\dfrac{E'}{E}=\dfrac{1+\dfrac{u_b}{c}}{1-\dfrac{u_b}{c}+\dfrac{2E}{\gamma_b mc^2}}}

If we suppose that the incident electron arrives at a certain angle \alpha_i and is scattered at an angle \alpha_f.
Then we would obtain the general inverse Compton formula:

\boxed{\dfrac{E'_f}{E'_i}=\dfrac{1-\beta_i\cos\alpha_i}{1-\beta_i\cos\alpha_f+\dfrac{E'_i}{\gamma_i mc^2}\left(1-\cos\theta\right)}}

In the case of \alpha_f \approx 1/\gamma\ll 1, i.e., \cos\alpha_f\approx 1, we have

\dfrac{E'}{E}\approx \dfrac{1-\beta_i\cos\alpha_i}{1-\beta_i}\approx \left(1-\beta_i\cos\alpha_i\right)2\gamma_i^2

In conclusion, there is an energy transfer proportional to \gamma_i^2. There are some interesting “maximal boosts”, depending on the initial frequency. For instance, if \gamma_i\approx 10^3, the boost E_f\approx \gamma_i^2 E_i provides: a) In the radio branch: 1GHz=10^9Hz is boosted up to about 10^{15}Hz, corresponding to a wavelength of about 300nm (in the UV band). b) In the optical branch: 4\times 10^{14}Hz is boosted up to about 4\times 10^{20}Hz\approx 1.6MeV, corresponding to photons in the gamma-ray band of the electromagnetic spectrum. Example 3. Bremsstrahlung. An electron (a) with rest mass m_a arrives from the left with velocity u_a and hits a nucleus (b) at rest with mass m_b. After the collision, the cluster “c” moves with speed u_c, and a photon (d) is emitted to the left. That photon is the “radiation” due to the recoil of the nucleus. The momenergy equations now yield:

\boxed{E=\dfrac{(\gamma_a-1)m_am_bc^2}{\gamma_a m_a(1+\beta_a)+m_b}}

In clusters of galaxies, typical temperatures of T\sim 10^7-10^8K provide kinetic energies of protons and electrons of about 1.3-13keV. The relativistic kinetic energy is E_k=(\gamma_a-1)m_ac^2, and for electrons this yields \gamma_a\sim 1.0025-1.025. If \gamma_am_a(1+\beta_a)\ll m_b, then we have E\approx (\gamma_a-1)m_ac^2=(\gamma_a-1)\times 511keV. Then the electron kinetic energy is almost completely turned into radiation (bremsstrahlung). In particular, this bremsstrahlung is X-ray radiation with E\sim 1.3-13keV.
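The inverse Compton energy transfer E'/E = 4\gamma^2/(1+4\gamma E/mc^2) derived in Example 2 can be evaluated directly: for small 4\gamma E/mc^2 it reduces to the Thomson-regime boost 4\gamma^2 E, while for very large \gamma it saturates at E'\approx \gamma mc^2. A quick numerical check of mine (illustrative numbers only):

```python
MEC2 = 511e3  # electron rest energy in eV

def inverse_compton(E_eV, gamma):
    """E' = 4 gamma^2 E / (1 + 4 gamma E / (m c^2)), from Example 2."""
    return 4 * gamma**2 * E_eV / (1 + 4 * gamma * E_eV / MEC2)

E_cmb = 1e-3  # a CMB photon, in eV
print(inverse_compton(E_cmb, 10))    # Thomson regime: ~ 4*gamma^2*E = 0.4 eV
print(inverse_compton(E_cmb, 1e10))  # saturated regime: ~ gamma * 511 keV
```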
Perttu Luukko & Esa Räsänen
Postdoctoral Researcher (PL) and Professor of Physics (ER)
Department of Physics, Tampere University of Technology

We are two physicists on a quest to find truth and beauty in the world of quantum chaos. In these artworks we demonstrate the interplay between symmetry and complexity in quantum states for a single electron trapped in a nanoscale chaotic system. The quantum states have been computed with a highly efficient solver for the single-particle Schrödinger equation: P. J. J. Luukko and E. Räsänen, Comput. Phys. Comm. 184, 769 (2013).

Quantum Carpet
60 x 90 cm printed on canvas
A small variation to one of the simplest quantum model systems creates chaos and intricate detail. The artwork displays 24 high-energy eigenfunctions of a charged particle in a two-dimensional square box with a twist: a constant magnetic field perpendicular to the plane.

The Rise and Fall of the Pentagram
60 x 90 cm printed on canvas
The question of chaos is central to the correspondence between quantum and classical dynamics. The artwork depicts a recently discovered mechanism of quantum scarring, where perturbing an otherwise symmetric system reveals the paths of classical periodic orbits in the quantum eigenfunctions.
1483 quotes by 518 authors in 126 categories

❝My philosophy like colour TV is all there in black and white❞ Monty Python

Quotes, Aphorisms, Laws, and Thoughts

Eight quotes by Albert Einstein

Albert Einstein (14 March 1879 - 18 April 1955) was a theoretical physicist who is widely regarded as one of the most influential scientists of all time, and the 'greatest physicist ever', according to a 1999 poll of leading physicists. His many contributions to physics include the special and general theories of relativity, the founding of relativistic cosmology, the first post-Newtonian expansion, explaining the perihelion advance of Mercury, the prediction of the deflection of light by gravity and gravitational lensing, the first fluctuation-dissipation theorem which explained the Brownian movement of molecules, the photon theory and wave-particle duality, the quantum theory of atomic motion in solids, the zero-point energy concept, the semiclassical version of the Schrödinger equation, and the quantum theory of a monatomic gas which predicted Bose-Einstein condensation.

❝All of science is nothing more than the refinement of everyday thinking.❞
❝God does not play dice with the cosmos.❞
❝It is not known with what weapon World War III will be fought, but World War IV will be fought with sticks and stones.❞
❝Life is like riding a bicycle: to keep your balance you must keep moving.❞
❝Make everything as simple as possible, but not simpler.❞
❝Many things which go under my name are badly translated from the German or are invented by other people.❞
❝People like us, who believe in physics, know that the distinction between past, present, and future is only a stubbornly persistent illusion.❞
❝Technological progress is like an axe in the hands of a pathological criminal.❞
Chemistry LibreTexts

3: The Schrödinger Equation

The discussion in this chapter constructs the ideas that lead to the postulates of quantum mechanics, which are given at the end of the chapter. The overall picture is that quantum mechanical systems such as atoms and molecules are described by mathematical functions that are solutions of a differential equation called the Schrödinger equation. In this chapter we want to make the Schrödinger equation and other postulates of Quantum Mechanics seem plausible. We follow a train of thought that could resemble Schrödinger's original thinking. The discussion is not a derivation; it is a plausibility argument. In the end we accept and use the Schrödinger equation and associated concepts because they explain the properties of microscopic objects like electrons and atoms and molecules.
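As a concrete example of the kind of problem this chapter builds toward (this sketch is mine, not part of the text): the time-independent Schrödinger equation for a particle in a box, discretized by finite differences, reproduces the exact levels E_n = n²π²/2 (in units ħ = m = 1, box [0, 1]):

```python
import numpy as np

# -(1/2) psi'' = E psi on [0, 1], psi(0) = psi(1) = 0, in units hbar = m = 1.
N = 500                      # interior grid points
h = 1.0 / (N + 1)            # grid spacing
diag = np.full(N, 1.0 / h**2)         # from -(1/2)(psi[i+1]-2psi[i]+psi[i-1])/h^2
off = np.full(N - 1, -0.5 / h**2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
E = np.linalg.eigvalsh(H)

print(E[:3])                          # approaches n^2 * pi^2 / 2 for n = 1, 2, 3
print([n**2 * np.pi**2 / 2 for n in (1, 2, 3)])
```

The computed low-lying eigenvalues match the analytic spectrum to a few parts in 10^5, which is the discretization error of the three-point stencil.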
Massive astrophysical objects governed by subatomic equation March 5, 2018, California Institute of Technology An artist's impression of research presented in Batygin (2018), MNRAS 475, 4. Propagation of waves through an astrophysical disk can be understood using Schrödinger's equation -- a cornerstone of quantum mechanics. Credit: James Tuttle Keane, California Institute of Technology Quantum mechanics is the branch of physics governing the sometimes-strange behavior of the tiny particles that make up our universe. Equations describing the quantum world are generally confined to the subatomic realm—the mathematics relevant at very small scales is not relevant at larger scales, and vice versa. However, a surprising new discovery from a Caltech researcher suggests that the Schrödinger Equation—the fundamental equation of quantum mechanics—is remarkably useful in describing the long-term evolution of certain astronomical structures. The work, done by Konstantin Batygin, a Caltech assistant professor of planetary science and Van Nuys Page Scholar, is described in a paper appearing in the March 5 issue of Monthly Notices of the Royal Astronomical Society. Massive astronomical objects are frequently encircled by groups of smaller objects that revolve around them, like the planets around the sun. For example, supermassive black holes are orbited by swarms of stars, which are themselves orbited by enormous amounts of rock, ice, and other space debris. Due to gravitational forces, these huge volumes of material form into flat, round disks. These disks, made up of countless individual particles orbiting en masse, can range from the size of the solar system to many light-years across. Astrophysical disks of material generally do not retain simple circular shapes throughout their lifetimes. Instead, over millions of years, these disks slowly evolve to exhibit large-scale distortions, bending and warping like ripples on a pond. 
Exactly how these warps emerge and propagate has long puzzled astronomers, and even computer simulations have not offered a definitive answer, as the process is both complex and prohibitively expensive to model directly. While teaching a Caltech course on planetary physics, Batygin (the theorist behind the proposed existence of Planet Nine) turned to an approximation scheme called perturbation theory to formulate a simple mathematical representation of disk evolution. This approximation, often used by astronomers, is based upon equations developed by the 18th-century mathematicians Joseph-Louis Lagrange and Pierre-Simon Laplace. Within the framework of these equations, the individual particles and pebbles on each particular orbital trajectory are mathematically smeared together. In this way, a disk can be modeled as a series of concentric wires that slowly exchange orbital angular momentum among one another. As an analogy, in our own solar system one can imagine breaking each planet into pieces and spreading those pieces around the orbit the planet takes around the sun, such that the sun is encircled by a collection of massive rings that interact gravitationally. The vibrations of these rings mirror the actual planetary orbital evolution that unfolds over millions of years, making the approximation quite accurate. Using this approximation to model disk evolution, however, yielded unexpected results. "When we do this with all the material in a disk, we can get more and more meticulous, representing the disk as an ever-larger number of ever-thinner wires," Batygin says. "Eventually, you can approximate the number of wires in the disk to be infinite, which allows you to mathematically blur them together into a continuum. When I did this, astonishingly, the Schrödinger Equation emerged in my calculations." The Schrödinger Equation is the foundation of quantum mechanics: It describes the non-intuitive behavior of systems at atomic and subatomic scales.
One of these non-intuitive behaviors is that subatomic particles actually behave more like waves than like discrete particles—a phenomenon called wave-particle duality. Batygin's work suggests that large-scale warps in astrophysical disks behave similarly to particles, and the propagation of warps within the disk material can be described by the same mathematics used to describe the behavior of a single quantum particle if it were bouncing back and forth between the inner and outer edges of the disk. The Schrödinger Equation is well studied, and finding that such a quintessential equation is able to describe the long-term evolution of astrophysical disks should be useful for scientists who model such large-scale phenomena. Additionally, adds Batygin, it is intriguing that two seemingly unrelated branches of physics—those that represent the largest and the smallest of scales in nature—can be governed by similar mathematics. "This discovery is surprising because the Schrödinger Equation is an unlikely formula to arise when looking at distances on the order of light-years," says Batygin. "The equations that are relevant to subatomic physics are generally not relevant to massive, astronomical phenomena. Thus, I was fascinated to find a situation in which an equation that is typically used only for very small systems also works in describing very large systems." "Fundamentally, the Schrödinger Equation governs the evolution of wave-like disturbances," says Batygin. "In a sense, the waves that represent the warps and lopsidedness of astrophysical disks are not too different from the waves on a vibrating string, which are themselves not too different from the motion of a quantum particle in a box. In retrospect, it seems like an obvious connection, but it's exciting to begin to uncover the mathematical backbone behind this reciprocity."
More information: Konstantin Batygin, Schrödinger evolution of self-gravitating discs, Monthly Notices of the Royal Astronomical Society (2018). DOI: 10.1093/mnras/sty162
Comments:

Mar 05, 2018: This is the type of connection between the quantum world and the cosmos that I have been trying to point out for some time. Many of the systems that we see being played out in huge, grandly slow time and immense space are actually the same sort of thing that happens on the sub-atomic scale on an instantaneous timeframe.

Mar 05, 2018: I love the comparison of quantum mechanics and astrophysics! In simpler terms, since we cannot see small things at the micro-scale which move very fast and cannot be observed (Heisenberg principle), we try to predict their behavior by using statistical inferences. Then we try to make sense of our universe with this understanding, which we see almost uniformly but frozen/displaced in time. I think these two will always have issues from a physics perspective, and coming up with a unified theory will be a long way ahead. Just imagine trying to predict a planet's location by hitting it with a large object such as a moon from across the universe, and trying to use that information to predict an electron's behavior?

Mar 05, 2018: I wouldn't say it's that surprising: the model involves replacing planets (point-like objects) with a continuum. By doing so, one is basically delocalising the planets, allowing them to interact like waves, which is exactly the kind of system one would expect to find quantum effects in.
Mar 05, 2018: I wonder how many of the people commenting could even write down Schrodinger's equation, or have ever solved it for a hydrogen atom?

Mar 05, 2018: This makes sense when you think about it; e.g., say we want to determine the motion of all humans on a planet: until you measure the system, any human could conceivably be anywhere on the surface (analogous to the electron cloud on the surface of an atom).

Whydening Gyre, Mar 05, 2018: Interestingly enuff - I find that analogy plausible...

Mar 05, 2018: In a Chemistry class (a long time ago) we once had a visiting prof. who said that whenever he got stuck on a problem he would take long walks in public gardens and woods. He said that sometimes, though not always, he found that Nature had already solved the problem, albeit in a different circumstance. I am not suggesting this is the same as in the article, but it isn't surprising that the macro might offer insights to the micro. Didn't something similar happen with the Gamma/Beta functions that ushered in superstring theory?

Mar 06, 2018: Tried many times to solve it and failed. From observations, the overwhelming preponderance have no comprehension of a DE. Some time ago I heard this postulated with regards to the orbits of stars and black holes around a supermassive black hole. If one visualizes elliptical orbits and considers the velocities relative to distance from the supermassive black hole, it would seem reasonable to see the distribution look like a hydrogen electron.

Mar 07, 2018: How about the correspondence between Schrodinger's equation and the wave equation used in hydrodynamics, and with the Mathieu equation? The distinction is still in the complex solutions, which have a special meaning in a quantum context that doesn't apply in a macroscopic setting.

Mar 13, 2018: Space/time must interact with mass/energy in the form of a wave.
The Schrodinger equation describes this phenomenon, which is the "secret" of wave-particle duality. In the double-slit experiment, space/time itself creates the complex interference pattern that is the hallmark of the experiment. Individually released photons/electrons/molecules are guided by constructive interference onto the wave crests, much like a duck is naturally drawn up onto wave crests of water. When one attempts to measure what is passing through one of the slits, the cohesive nature of the wave of space/time passing through that slit is disrupted, and the complex interference pattern disappears. Following that logic, waves of space/time must also interact with mass/energy at the largest of cosmological scales, such as in this Caltech research -- and at even larger scales. This is an indication that the range of space/time wavelengths in the universe may be infinite.

Mar 27, 2018: Consider that IF (BIG if) galaxies represent electrons on a mega-cosmic scale, then as we see them we have a UMHB at the center, and energy/mass swirling around it. It is able to gain energy, as clouds of gas or globular clusters, and when the central black hole has gained enough mass in its torus (think total energy in an electron orbital) it spits out a pair of 'jets' in opposite directions, just like expelling photons from an electron changing energy levels within its 'orbital'. But at one spin per billion years, we see our own 'electron-galaxy' as a semi-frozen entity, where energy takes the form of matter so as to be further condensed, and the deepest density is at the BH levels. How many seconds away from the Big Bang is the Greater Cosmos? Since its timescale is so different, it should be super close to the BB. Are we part of the Quark-Gluon Liquid Plasma at the microsecond level, with BHs tossing photon pairs that get stopped since the Universe has not re-ionized?
Friday, 8 November 2013

Mathematics: Backward Magics or Forward Reason

A primitive function magically pulled out of a hat as an area under a function graph.

There are two approaches to mathematics:

1. Symbolic mathematics: magics: objects pulled out of hats.
2. Constructive mathematics: reason: objects constructed in stepwise computation.

Let me give two examples:

The Fundamental Theorem of Calculus

The presentation of the Fundamental Theorem of Calculus in standard text books of Calculus is the following: Consider the integral

• $u(t) =\int_0^t f(s)\, ds$ for $t > 0$,

defined as the area under the curve determined by the function $s\rightarrow f(s)$ for $s\in [0,t]$. Compute the derivative $\dot u=\frac{du}{dt}$ of the function $t\rightarrow u(t)$ with respect to $t$, to find that, assuming some suitable continuity property of $s\rightarrow f(s)$:

• $\dot u (t) = \lim_{\Delta t\rightarrow 0}\frac{u(t+\Delta t) - u(t)}{\Delta t}= \lim_{\Delta t\rightarrow 0}\frac{1}{\Delta t}\int_t^{t+\Delta t} f(s)\, ds = f(t)$ for $t >0$.

In short, the key argument is to show that the integral $u(t)$, defined as an area, satisfies a differential equation

• $\dot u(t) = f(t)$ for $t > 0$

or solves an initial value problem

• $\dot u(t) = f(t)$ for $t > 0$ with $u(0)=0$.        (*)

We thus start with a given function, the integral $u(t)$, which is shown to be the solution of a certain initial value problem. The process leads from solution to equation satisfied by the solution. The equation appears as magics without reason, since the reason is put into the specification of the solution or integral $u(t)$, with appeal to a concept of area which has to be defined, and not into the equation.
But this is backwards: The more reasonable forward procedure is to start with the initial value problem (*), expressing that the rate of change $\dot u$ of $u$ is equal to $f$, as a balance equation expressing some basic physics, and then proceed to the integral $u(t)$ as the solution to the balance equation constructed by time stepping. This is the approach followed in BodyandSoul. We sum up as follows:
• To proceed from solution to equation is backwards magical.
• To proceed from equation to solution by forward time stepping is reasonable and not magical.
There are many specific examples of this form, including trigonometric and exponential functions and, more generally, the elementary functions, all better constructed by time stepping basic differential equations than magically being picked out of hats. For example, the trigonometric functions $\sin(t)$ and $\cos(t)$ are better defined as solutions to $\ddot u + u =0$, which can be constructed by time stepping, than geometrically, as in standard calculus, as ratios of the lengths of sides of a right-angled triangle, which is not computationally constructive.

Quantum Mechanics

The same situation is met in quantum mechanics: The backward magical process is to start from a wave function solution and discover an equation satisfied by the solution, a magical Schrödinger equation without physical basis which is a mystery to all physicists. The more natural procedure is to start from the Schrödinger equation, which can be formulated as a rational balance equation of smoothed particle dynamics, and then construct the solution (the wave function) by forward time stepping.

Concluding Remark: In the discussion of the mathematics program at Chalmers, the standard textbook by Adams represents backward magics, while BodyandSoul represents forward reason. Pick what you think is best. But after all, who cares?
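The forward time-stepping construction can be made concrete in a few lines of code. The following sketch (the function name and step count are illustrative choices, not from the post) constructs $u(t)$ directly from the balance equation $\dot u = f$ with $u(0)=0$ by forward Euler time stepping, never invoking the notion of area:

```python
import math

def time_step_integral(f, T, n=100000):
    """Construct u(T) solving u'(t) = f(t), u(0) = 0 by forward Euler."""
    dt = T / n
    u, t = 0.0, 0.0
    for _ in range(n):
        u += f(t) * dt   # u(t + dt) ≈ u(t) + f(t) * dt
        t += dt
    return u

# Example: with f(t) = cos(t), the construction recovers sin(T)
# to within the O(dt) discretization error.
u = time_step_integral(math.cos, 1.0)
assert abs(u - math.sin(1.0)) < 1e-3
```

The same loop, applied to the system $\dot u = v$, $\dot v = -u$, constructs $\sin(t)$ and $\cos(t)$ without any appeal to triangles.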
Quantum Ethics: A Spinozist Interpretation of Quantum Field Theory
Sébastien Fauvel

History of this book

What would Quantum Field Theory look like if we stopped for a while developing it further as if it were the draft of a yet-to-be-discovered Theory of Everything, and just started to reformulate the Standard Model as a mathematically and conceptually coherent physical theory? And what would such a theory tell us about the world and about ourselves, which remains hidden in the ill-defined formulations we've grown up with through the last decades? As I started back in 2010 to reflect on these questions, I didn't yet have a clear vision of what this work would lead me to. I just had the feeling that these very basic questions hadn't interested anyone any more for far too long, and that we should actually have the means by now, with our understanding of Renormalization, of writing down a well-defined Quantum Field Theory reasonably accounting for all known experimental data (excepting General Relativity phenomena) – which means essentially that it has to be compatible with the Standard Model at known energy scales. I was quite confident that I could find a physically sound regularization of the Standard Model, which I simply wouldn't consider as an approximation, but take as the exact theory itself, the Standard Model being an ill-defined idealization of it. The models used in computer simulations of lattice Quantum Chromodynamics, for instance, would show me the way. Of course, I knew that I wouldn't be able to derive the theory from the usual first principles any more, but given that all the attempts of axiomatic Quantum Field Theory to construct well-defined interacting fields upon these first principles had failed miserably, I thought that maybe they could be misleading in the end.
Anyhow, I had never been very fond of the heuristic construction of Quantum Field Theory based on Gauge and Poincaré invariance. Developing the whole mathematical apparatus of Representation Theory simply to derive the expression of spin 1 and spin 1/2 spinors as irreducible unitary representations of the Poincaré group had always seemed far too expensive to me, and Gauge transformations mixing particle fields far too artificial to make up a fundamental symmetry of Nature. So I felt free to redefine the Hilbert space of the quantum states without paying much attention to these first principles and focused instead on the mathematical well-definedness of the theory, and in particular of the Schrödinger equation. The most evident way of ensuring a well-defined solution at all times is to make the Hilbert space finite dimensional, which has two major physical implications. The most important one is that the physical space itself, too, has to be finite, i.e. to consist of a finite number of points. The simplest way to take this constraint into account is to define space as a finite lattice, as in computer simulations of Quantum Chromodynamics, and to adapt the expression of the Hamiltonian operator of the Standard Model, developed in the momentum basis, by simply using a discrete Fourier transform on the lattice. This formulation of the theory, considered as a fundamental theory and not as a numerical approximation, has evident ontological and cosmological implications. It is interesting to see, for instance, how modern physics thus addresses the atomist polemic of ancient Greece, i.e. the question whether matter can, in principle, be indefinitely separated into smaller pieces, or whether there are smallest building blocks of matter. The answer of this theory would be not only that elementary particles are the smallest, point-like building blocks of matter, but that space itself is constituted of smallest, point-like building blocks, and even of a finite number of them!
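As a rough illustration of the finite-lattice idea, here is a toy sketch showing how a discrete Fourier transform relates the site basis and the momentum basis on a finite lattice. The one-dimensional lattice, the lattice size, and the nearest-neighbour hopping spectrum are all my own illustrative assumptions, not the Standard Model Hamiltonian of the book:

```python
import numpy as np

# Toy 1D periodic lattice with N sites: one-particle states are vectors
# in C^N, and the momentum basis is reached by a discrete Fourier transform.
N = 8
psi = np.zeros(N, dtype=complex)
psi[3] = 1.0                                  # amplitude localized on one site

psi_momentum = np.fft.fft(psi) / np.sqrt(N)   # unitary DFT to momentum basis
psi_back = np.fft.ifft(psi_momentum * np.sqrt(N))
assert np.allclose(psi_back, psi)             # the transform is invertible

# A free Hamiltonian is diagonal in the momentum basis; for example, a
# nearest-neighbour hopping term has the spectrum 2(1 - cos k):
k = 2 * np.pi * np.fft.fftfreq(N)             # allowed lattice momenta
H_diag = 2 * (1 - np.cos(k))
```

Because the lattice is finite, the Hilbert space stays finite dimensional and the time evolution generated by any such Hamiltonian is well defined at all times.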
Incidentally, the void between point-like particles imagined by Greek atomists like Democritus acquires a very different quality, too. There is still a notion of void as the unrealized potentiality of the presence of matter, represented by an unoccupied lattice site, but this site, although it is empty of matter, is still something familiar, identifiable, something we could put a name on. Psychologically, the void thus loses much of the threatening quality of the indiscernible. The empty space that we might tend to imagine between the lattice sites isn't actually part of the material world; it is purely virtual and has no physical relevance. From a cosmological perspective, the finiteness of space is also a very interesting aspect. It addresses the old question whether there is something like a frontier of the universe or whether the universe is infinite, and it offers a very original answer. According to this theory, the universe is both finite and boundless; it actually has a toroidal structure, which is not of topological nature, but reveals itself at the level of the field dynamics: Wave packets will transit smoothly from one side of the finite lattice to the opposite one without experiencing any discontinuity. So the light we emit, for instance, could come back to us from the opposite direction after having traveled through the whole universe. Yes, if the universe were smaller, maybe you could see the Earth looking at the stars... and the position of the closest images of the Earth in the night sky would give you the direction of the lattice axes, by the way. The second physical implication of the finite dimension of the Hilbert space is the existence of a maximum occupation number for boson fields. I wondered if there were any good theoretical reason to assume an unbounded number of bosons per field mode, and I actually didn't find any.
Of course, the commutation relations usually considered as essential properties of the creation operators would break down when the maximum number of particles is reached, but these relations, relics of a heuristic construction of Quantum Field Theory based on the harmonic oscillator model of Quantum Mechanics, are not really necessary to define creation operators. In fact, it is quite straightforward to define a basis of the Hilbert space on a finite lattice: you just have to take as basis vectors field configurations, defined as functions giving the number of particles of each kind at each lattice site. And it isn't more complicated to define creation operators as adding one particle of a given kind at a given lattice site, as long as a given maximum occupation number hasn't been reached. The normalization factors implied by the commutation relations can then be moved to the spinors, where they actually belong. The situation is quite similar for fermions: If you don't construct the Hilbert space heuristically as a Fock space over the one-particle Hilbert space of Quantum Mechanics, the sign factors implied by the anticommutation of the creation operators can be moved to the spinors too. So in the end, there isn't any qualitative distinction to be made between bosons and fermions; the same creation operators can be used in both cases, differing only in their maximum occupation numbers. In fact, if you don't construct the Hilbert space as a Fock space, but define it directly (or use a Fock space modulo particle label permutations), there is no Spin-Statistics Theorem classifying particles into bosons and fermions according to their spin any more. This famous theorem relates the integer or half-integer character of the spin to the possible sign change happening to the quantum state when the labels of two particles of the same type are exchanged.
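The direct definition of creation operators on occupation-number configurations can be sketched in a few lines. This is a minimal toy model of my own (the function name and the tuple encoding of a field configuration are illustrative assumptions); bosons and fermions differ only in the value of the maximum occupation number:

```python
def create(config, site, n_max):
    """Apply a creation operator at `site` to an occupation-number
    configuration (one count per lattice site), returning the new
    configuration, or None when the maximum occupation number n_max
    has already been reached (the state is annihilated)."""
    if config[site] >= n_max:
        return None
    new = list(config)
    new[site] += 1
    return tuple(new)

# Fermion-like field (n_max = 1) on a 3-site lattice:
vacuum = (0, 0, 0)
one = create(vacuum, 1, n_max=1)          # -> (0, 1, 0)
assert create(one, 1, n_max=1) is None    # Pauli-like exclusion
```

Normalization and sign factors, as the text argues, live in the spinors rather than in this combinatorial action.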
But the notion of exchanging the labels of two particles doesn't actually have any physical meaning; it only makes sense in the Fock space formalism and is a mere mathematical artifact. I think it is important to realize that the Spin-Statistics Theorem, traditionally considered as one of the greatest insights provided by Special Relativity into Quantum Field Theory, actually doesn't have any profound physical meaning, and doesn't establish, as is often stated, a connexion between the geometry of space-time and the collective behavior of particles. It only expresses a property of the "unphysical" Fock space formalism, and becomes meaningless as soon as you consider the "physical" quantum states modulo particle label permutations. So the categories of 'bosons' and 'fermions' are not implied by Special Relativity, as far as their collective behavior is concerned; only the form of the spinors is. Determining experimentally the maximum occupation number for each boson field is still an open question: For "heavy" bosons like the Z boson, for instance, I don't think that a lower bound much greater than one can already be established with current experimental data... Once I had constructed this well-defined framework for Quantum Field Theory and made a first proof of concept by integrating Quantum Electrodynamics, I left the paper draft I had written by that time rest for a while, took care of my new-born son and started reading a book from the Philosophy library of my wife that had been intriguing me for a while: a French translation of Spinoza's Ethics. The reading would accompany me through the whole summer of 2011 and make a lasting impression on me.
The subtle way Spinoza integrates subjective experience into the physical world reminded me of von Neumann's hypothesis that mind could somehow cause the collapse of the quantum state of a system upon measurement, and I realized that, within the well-defined framework I had constructed, we had the possibility for the first time to give a formally very precise definition of what von Neumann had meant. This would provide a precise answer to the measurement problem, and probably the first one that isn't only psychologically motivated, but also constrained by formal consistency. So I started to figure out how to relate subjective experience to the state of the material world in quantum physical terms, and re-read Spinoza with this question in mind. Following von Neumann's interpretation, I should relate a mental state to a Hilbert subspace in such a way that the Hilbert space be a direct sum of the subspaces corresponding to each possible mental state. Making the assumption that we are dealing with different states of a single subjective experience in this decomposition leads directly to the paradox of Wigner's friend: when several bodies (brains?) are present at once – and it is the case most of the time, isn't it? – which one ought to determine the mental state and trigger the collapse? Escaping this issue requires describing the mental state in its totality, i.e. specifying the number of subjects having each possible subjective experience at a time, so that a mental state is, basically, described with the same formalism as a field configuration over subjective experiences. And exactly as is the case for particles in particle fields, subjects are indistinguishable at a fundamental level. There is nothing like "my" mind or "your" mind, each one having its own personal history that could, in principle, be tracked back from birth to death.
Pretty much as single particles don't have any individual trajectory in Quantum Physics, single subjects don't have any individual history either. As Spinoza would say, we are all thinking together in God; we participate in a single mental reality and don't have any individual existence below this ontological level. This will probably sound crazy to most readers, and it is probably one of the reasons why Spinoza was excommunicated for heresy in his time. But it is actually an utterly self-consistent point of view, and the only one consistent with Quantum Field Theory so far. I cannot but warmly advise you to take a closer look at The Ethics; re-reading Spinoza and seeing how a 17th-century heresy meets Quantum Physics is really a very exciting experience. The pantheist thesis of Spinoza fits incredibly well into the world view sustained by Quantum Field Theory; neither your body, enmeshed by quantum entanglement with other ones, nor your mind, indistinguishable from other ones, has any individual existence: Nothing exists but God, aka Nature. This is basically the idea of this book, and given that no other interpretation of Quantum Physics integrates so deeply into the formalism of Quantum Field Theory, this made me think that this book was worth writing, and I guess it will be a joy for many philosophers of science to see that the latest achievements in fundamental physics are leading us back, eventually, from a materialistic to a pantheist philosophy. As soon as I had developed this Spinozist model of the mental world (which builds up, together with the material world of quantum fields, the physical world as a whole), I got confronted with the old question of the status of time in Quantum Physics. The controversies on this subject have been summarized very concisely by Wolfgang Pauli in his statement that there cannot be any time observable in Quantum Physics.
In the Copenhagen interpretation, indeed, time isn't a property of the quantum system under observation; it isn't measured quantum physically, but classically, and correlated with quantum measurement results. When you measure the fluorescence lifetime of ruby, for instance, you only measure the presence of emitted photons in a quantum physical way, which implies the collapse of the system's quantum state, but you measure the time at which the photodetector gets activated by simultaneously reading a clock in a classical way. That is a very strange feature of the quantum/classical dichotomy of the Copenhagen interpretation, and it leaves one very basic question completely open: There is no way to predict quantum physically when the quantum measurement process and the collapse of the quantum state will take place, or even to find out the time distribution of the measurement process in a statistical way. The Copenhagen interpretation only defines the statistical distribution of the possible measurement results assuming a measurement is being performed at a given time, but doesn't tell anything about the conditions under which a quantum measurement will actually happen – basically because measuring is considered as an act taking place in the classical world, which escapes quantum physical description. The reason why this uncertainty about the time at which a quantum measurement happens has no consequences on our ability to derive statistical results from the theory was already clear in the 1930s: As von Neumann pointed out, it wouldn't make any statistical difference whether the collapse of the quantum state happened upon an interaction of the quantum system with a measurement apparatus, or upon an interaction of the quantum system including the measurement apparatus with the observer, or at any stage in between.
And even if the observer didn't read the output of the measurement apparatus, the interaction of the quantum system with it, like any process introducing a strong correlation of its state with the environment, would yield quantum decoherence effects which are practically impossible to tell apart from the effects of a hypothetical collapse, as far as the statistical measurement results are concerned. So we have practically no means of finding out at which stage the collapse takes place, and addressing this question remains a purely theoretical issue of no practical interest. Nevertheless, it has to be addressed by any theory going beyond the Copenhagen interpretation and trying to describe collapse as a physical process independent of the free will of the observer, which is subsumed in the classical world view. There are lots of so-called spontaneous collapse theories, developed originally by John Bell and followed by many others, which generally describe collapse as a dynamical process, yielding in the end the same states as an abrupt orthogonal projection would. But these models are purely materialistic and don't address the question of describing subjective experience in physical terms. The suggestion of von Neumann that mind could cause the collapse of the quantum state, which would get projected to quantum states of the brain corresponding to a definite subjective experience, seemed much more promising to me, as I was looking forward to sketching a more comprehensive world view in physics. So I stuck to the rather conservative hypothesis of an abrupt collapse of the quantum state via a random orthogonal projection onto one of the Hilbert subspaces corresponding to a given mental state, and I had to define precisely when this process would happen.
In doing so, you are totally free as a theoretical physicist, because, as I said before, collapse and quantum decoherence have practically the same signature in statistical measurement results, so that we can never be sure of having observed a collapse or not. I rejected the hypothesis of a continuous collapse, because continuous stochastic processes are only idealizations, so I supposed instead that collapses happen at discrete times. This implies that our mental state evolves discontinuously, although we usually don't notice it. From a phenomenological point of view, this isn't very surprising: Our impression of continuity is based on short-time memory and intentionality, not on the permanence and continuity of our subjective experience itself. Even if we had a single, isolated mental experience, it would have the same quality and provide the same sensation of time as a continuous one – for as the poet says, eternity lies in every moment... The continuity of time only applies at the material level, while the mental world only picks out single "snapshots" of the state of the material world, so to say. Determining when these mental experiences take place cannot be achieved by investigating their subjective content alone; only the elusive effects of the simultaneous collapse of the quantum state could indicate this. So for the sake of simplicity, I just assumed a periodic collapse with a given elementary period, in order to have a well-defined model, even if we don't yet have any experimental clues in this respect. Of course, the collapse of the quantum state is not a local process in the sense of Relativity Theory, but Einstein-Podolsky-Rosen experiments have already shown very clearly that this non-locality is really part of Nature. And after all, who would expect mental phenomena to be local? They are not bound to their material substrate; they don't live in the frame of space, but in another dimension of the physical world, so to say.
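This alternation of unitary evolution and abrupt projective collapse at discrete times can be sketched numerically. The following toy model is my own illustration, not from the book: the two-dimensional Hilbert space, the rotation unitary, and the diagonal projectors onto the "mental" subspaces are all illustrative assumptions. One cycle applies a unitary step, then a random orthogonal projection chosen with Born-rule probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(psi, U, projectors):
    """One cycle of the model: an elementary unitary evolution, then a
    random orthogonal projection onto one of the given subspaces, chosen
    with Born-rule probabilities; returns the new state and the index
    of the selected subspace."""
    psi = U @ psi
    probs = np.array([np.linalg.norm(P @ psi) ** 2 for P in projectors])
    k = rng.choice(len(projectors), p=probs / probs.sum())
    psi = projectors[k] @ psi
    return psi / np.linalg.norm(psi), k

# Two-dimensional toy Hilbert space split into two 1-d "mental" subspaces.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
P0 = np.diag([1.0, 0.0])
P1 = np.diag([0.0, 1.0])

psi = np.array([1.0, 0.0])
psi, k = step(psi, U, [P0, P1])
assert np.isclose(np.linalg.norm(psi), 1.0)
```

Iterating `step` produces exactly the discontinuous sequence of "snapshots" described in the text, with the statistics of a quantum trajectory.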
In the end, the model I'm proposing can be roughly described in very simple terms: A mental state is being experienced while the quantum state is undergoing an elementary unitary evolution, then a new mental state is randomly moved to as the quantum state gets projected to the corresponding subspace, and so on. By now, this almost sounds trivial to me, so I guess I'm finally understanding Quantum Physics, at least in this form. This alone would be a revolution in this field of science. But I'm not interested in pretending to have discovered deep truths about "the inmost force which binds the world", to speak with Goethe; I just wanted to show that it is possible, and actually quite easy, to give Quantum Field Theory a form and an interpretation which make it a formally and conceptually closed theory, capable of giving a well-defined answer to any question we can ask it – even if we may eventually find out that it wasn't the right one. This interpretation challenges all existing ones insofar as it is the first time that this degree of conceptual precision and formal well-definedness has been reached, and I hope this will be motivation enough for others to work out alternative interpretations and achieve the same level of quality – so that we can finally know what Quantum Theory is actually about...

About the Author
Sébastien Fauvel, born 1983, graduated from the École Normale Supérieure of Paris in Physics and Comparative Literature. He has been working as a Consultant, Software and Web Developer in Lyon, Freiburg and Basel.
Many benefits of Pulsed Electro-Magnetic Field ("PEMF") therapy have been demonstrated through more than 2,000 university-level, double-blind medical studies done in many countries with many different PEMF therapy devices. Some of the positive effects of PEMF therapy were well established by the mid-1900s. The first commercially produced low-power PEMF devices entered the market in the early 1900s. These were used for studies and experimentation in healing and cellular wellness. They were sold both to consumers and as medical devices to doctors. The first commercially produced high-power PEMF devices entered the market around 1975. They focused on the health of bones, muscles, nerves, tendons, ligaments and cartilage, on reducing pain, and on cellular and tissue regeneration. Medical PEMF therapy has been accepted in many countries around the world. The US FDA accepted the use of PEMF devices in the healing of non-union bone fractures in 1979, urinary incontinence and muscle stimulation in 1998, and depression and anxiety in 2006. Israel has accepted the use of PEMF devices for migraine headaches. Canada has accepted PEMF devices for many uses. The European Union has many acceptances for the use of PEMF therapy in many areas, including healing and recovery from trauma, degeneration and the treatment of the pain associated with these conditions.

Differences in PEMF Therapy Devices
• Power Level
• Continuous or Pulsed Waveform
• Shape of Waveform
The continuous-waveform PEMF devices can produce a square, a sawtooth, a sine or a custom waveform. The pulsed-output PEMF devices usually produce a biphasic short-duration pulse.
• Control of Frequency
Many low-power PEMF devices have preset frequencies to choose from according to the various manufacturers' individual theories. Most high-power PEMF devices have a user-variable control of the frequency.
• Duration of Treatment

Primary Benefits of PEMF Therapy
• Reduced pain
• Reduced inflammation
• Increased range of motion
• Faster functional recovery
• Reduced muscle loss after surgery
• Increased tensile strength in ligaments
• Faster healing of skin wounds
• Enhanced capillary formation
• Accelerated nerve regeneration
• Reduced tissue necrosis

PEMF Therapy and Nitric Oxide Production
Many cells in the body produce nitric oxide; however, its production by the vascular endothelium is particularly important in the regulation of blood flow. Abnormal production of nitric oxide, as occurs in different disease states, can adversely affect blood flow and other vascular functions. Nitric oxide is one of the few gaseous signaling molecules known and is additionally exceptional because it is a radical gas. It is a key vertebrate biological messenger, playing a role in many biological processes. The March/April 2009 Aesthetic Surgery Journal published a study, "Evidence-Based Use of Pulsed Electromagnetic Field Therapy in Clinical Plastic Surgery", that summarizes the evolution in the understanding of the physiological effects of PEMF therapy on cells and tissues. Studies emerged suggesting that PEMF could modulate the production of growth factors and began to focus on enzyme systems with well-characterized calcium (Ca2+) dependence. By the mid-1990s, researchers were investigating the effects of electrical and PEMF signaling on intracellular Ca2+, specifically the binding of Ca2+ to calmodulin (CaM), using the knowledge that CaM-dependent cascades were involved in tissue repair. The most recent studies of the PEMF transduction pathway have concentrated upon the Ca/CaM-dependent nitric oxide cascades and the growth factor cascades involved in tissue healing. It is within this system that the effectiveness of PEMF is now understood to function. PEMFs modulate the calcium-binding kinetics to calmodulin.
Calcium/calmodulin (Ca/CaM) then activates nitric oxide synthase (NOS) in several different isoforms. When injury occurs, large amounts of nitric oxide are produced by the long-lived inducible nitric oxide synthase (iNOS). In this cascade, tissue levels of nitric oxide persist, and the prolonged presence of this free radical is proinflammatory, which accounts for the leaky blood vessels associated with pain and swelling. In contrast, the endothelial and neuronal nitric oxide synthase isoforms (eNOS and nNOS, respectively) produce nitric oxide in short bursts that can immediately relax blood and lymph vessels. These short bursts of nitric oxide also lead to the production of cyclic guanosine monophosphate (cGMP), which in turn drives growth factor production. Interestingly, iNOS is not dependent on CaM, while the constitutive or cNOS (eNOS or nNOS) cascade is dependent on the binding of Ca/CaM. Therapies that could accelerate Ca/CaM binding, therefore, should impact all phases of tissue repair, from initial pain and swelling to blood vessel growth, tissue regeneration and remodeling. This mechanism has been proposed as a working model for PEMF therapeutics. Nitric oxide, known as the 'endothelium-derived relaxing factor', or 'EDRF', is biosynthesized endogenously from L-arginine, oxygen and NADPH by various nitric oxide synthase (NOS) enzymes. Dr. Richard E. Klabunde explains the synthesis of nitric oxide from the amino acid L-arginine by the enzymatic action of nitric oxide synthase (NOS). There are two endothelial forms of NOS: constitutive NOS (cNOS; type III) and inducible NOS (iNOS; type II).
In addition to endothelial NOS, there is a neural NOS (nNOS; type I) that serves as a transmitter in the brain and in different nerves of the peripheral nervous system, such as the non-adrenergic, non-cholinergic (NANC) autonomic nerves that innervate penile erectile tissues and other specialized tissues in the body to produce vasodilation. The endothelium (inner lining) of blood vessels uses nitric oxide to signal the surrounding smooth muscle to relax, thus resulting in vasodilation and increasing blood flow. Under normal conditions, nitric oxide is continually being produced by cNOS in the blood vessels. The activity of cNOS is Ca/CaM-dependent and produces vascular relaxation when the endothelium is intact. The other isoform, iNOS, is not calcium dependent. Under normal conditions, the activity of iNOS is very low. The activity of iNOS is stimulated during inflammation by bacterial endotoxins or cytokines such as tumor necrosis factor (TNF) and interleukins. During inflammation, the amount of nitric oxide produced by iNOS may be a thousand-fold greater than that produced by cNOS.

Intracellular Mechanisms
When nitric oxide forms, it is highly reactive, with a lifetime of only a few seconds – due in part to the high affinity of superoxide anion for nitric oxide – yet it diffuses freely across membranes. Superoxide and its products can have vasoactive activities in addition to their tissue-damaging effects; superoxide anion has another property that makes it very important in cardiovascular pathology and pathophysiology. Superoxide anion, with its unpaired electron, very rapidly binds to nitric oxide, which also has an unpaired electron. Because nitric oxide is a very important vasodilator substance, the reaction between superoxide and nitric oxide effectively scavenges nitric oxide, thereby reducing its bioavailability.
This leads to vasoconstriction, increased platelet-endothelial cell adhesion, platelet aggregation and thrombus formation, increased leukocyte-endothelial cell adhesion, and morphologic changes in blood vessels, such as cell proliferation. Nitric oxide also avidly binds to hemoglobin (in red blood cells) and to the enzyme guanylyl cyclase, which is found in vascular smooth muscle cells and most other cells of the body. When nitric oxide is formed by the vascular endothelium, it rapidly diffuses into the blood, where it binds to hemoglobin and is subsequently broken down. It also diffuses into the vascular smooth muscle cells adjacent to the endothelium, where it binds to and activates guanylyl cyclase. This enzyme catalyzes the conversion of GTP to cGMP, which serves as a second messenger for many important cellular functions, particularly for signaling smooth muscle relaxation. Because of the central role of cGMP in nitric-oxide-mediated vasodilation, drugs (e.g., Viagra®) that inhibit the breakdown of cGMP (cGMP-dependent phosphodiesterase inhibitors) are used to enhance nitric-oxide-mediated vasodilation, particularly in penile erectile tissue in the treatment of erectile dysfunction. Increased cGMP also has an important anti-platelet, anti-aggregatory effect. (Cardiovascular Physiology Concepts by Richard E. Klabunde, PhD, published in 2005, updated in 2008.) In the Discussion section of a study entitled "Pulsed Electro-Magnetic Fields Affect Local Factor Production and Connexin 43 Protein Expression in MLO-Y4 Osteocyte-like Cells and ROS17/2.8 Osteoblast-like Cells", Lohman C.H. et al. state: "This study shows that PEMF affects gap junction formation, local production of nitric oxide, TGF-b1 and PGE2.
Osteocytes potentially regulate bone remodeling through signaling molecules like nitric oxide and PGE2, but also through the local release of TGF-b1." The above studies demonstrate that PEMF therapy affects many transduction pathways, and in particular the Ca/CaM-dependent nitric oxide cascades. The CaM-dependent cascades are involved in tissue repair. By modulating the calcium-binding kinetics to calmodulin (intracellular Ca2+/CaM), the endothelial and neuronal nitric oxide synthase isoforms (eNOS and nNOS, respectively) produce nitric oxide in short bursts that can immediately relax blood and lymph vessels. As a highly reactive gaseous molecule, nitric oxide makes an ideal transient paracrine (between adjacent cells) and autocrine (within a single cell) signaling molecule that has direct and indirect vascular action, including the following:
• Direct vasodilation (flow dependent and receptor mediated)
• Indirect vasodilation by inhibiting vasoconstrictor influences
• Anti-proliferative effect – inhibits smooth muscle hyperplasia
By increasing the production of nitric oxide when its production is impaired or its bioavailability is reduced, PEMF therapy can successfully help improve conditions and diseases, including those associated with vasoconstriction (e.g., coronary vasospasm, elevated systemic vascular resistance, hypertension), thrombosis due to platelet aggregation and adhesion to vascular endothelium, inflammation due to upregulation of leukocyte and endothelial adhesion molecules, vascular hypertrophy and stenosis, and consequently hypertension, obesity, dyslipidemias (particularly hypercholesterolemia and hypertriglyceridemia), diabetes (both type I and II), heart failure, atherosclerosis, tissue repair and aging.
A recent study on postoperative recovery led to the conclusion that PEMF therapy significantly reduced postoperative pain and narcotic use in the immediate postoperative period by means of a PEMF effect on nitric oxide signaling, which could impact the speed and quality of wound repair (Rohde et al., June 2009, Plastic & Reconstructive Surgery, Columbia, NY). Nitric oxide is one of the few gaseous signaling molecules and a key vertebrate biological messenger that plays a role in a variety of biological processes. Recent studies have uncovered how PEMF therapy stimulates and rebalances many of these processes. The mechanisms by which nitric oxide has been demonstrated to affect the biology of living cells are numerous and include:
• oxidation of iron-containing proteins such as ribonucleotide reductase and aconitase
• activation of soluble guanylate cyclase, a single-transmembrane protein
• ADP (adenosine diphosphate) ribosylation of proteins, a protein modification process involved in cell signaling and the control of many cell processes, including DNA repair
• nitrosylation of protein sulfhydryl groups, another protein modification process
• activation of iron regulatory factor
Having a lifetime of a few seconds, nitric oxide is highly reactive and diffuses freely across cell membranes. These attributes make nitric oxide an ideal transient paracrine (between adjacent cells) and autocrine (within a single cell) signaling molecule. PEMF therapy is proven to effectively stimulate paracrine and autocrine communication. Nitric oxide is also generated by phagocytes (monocytes, macrophages, and neutrophils) and, as such, is part of the human immune response. Nitric oxide has been demonstrated to activate NF-κB in peripheral blood mononuclear cells, an important protein complex that controls the transcription of DNA and a transcription factor in iNOS gene expression in response to inflammation.
[Diagram: NF-κB mechanism of action] Nitric oxide plays a key role in regulating the immune response to infection and is implicated in processes of synaptic plasticity and memory (see diagram above). The endothelium (inner lining) of blood vessels uses nitric oxide to signal the surrounding smooth muscle to relax, resulting in vasodilatation and increased blood flow. As blood flow increases, so does the oxygen intake. PEMF therapy has been proven to effectively increase blood flow and provide muscle relaxation, with consequently better oxygenation of the muscle tissue.
The Dynamics of Pain and PEMF Therapy
For most individuals, aside from the multiple benefits of the therapy, one of the most relevant effects of PEMF therapy is the improvement of painful conditions regardless of their origin. Pain mechanisms are complex and have peripheral and central nervous system aspects. [Diagram: Overview of signal transduction pathways] A signal transduction pathway begins when a signal binds to a receptor, and ends with a change in cell behavior. Transmembrane receptors span the cell membrane, with half of the receptor outside the cell and the other half inside the cell. The signal, such as a chemical signal, binds to the outer half of the receptor, which changes its shape and conveys another signal inside the cell. Sometimes there is a long cascade of signals, one after the other. Eventually, the signal creates a change in the cell, either in the DNA of the nucleus or in the cytoplasm outside the nucleus. In the chronic pain state, pain signal generation can actually occur in the central nervous system without peripheral noxious stimulation. In pain management, modulation of pain signal transmission is a far better choice than neural destruction, and this can be achieved with PEMF. Scientific evidence shows that acute persistent pain eventually sensitizes wide dynamic range neurons in the dorsal horn of the spinal cord (the wind-up phenomenon), constituting the basis of developing chronic pain syndromes (Kristensen, 1992).
Persistent and excessive pain serves no good or necessary biological function. It is actually harmful to our well-being. Therefore, pain needs to be treated as early and as completely as possible, not left alone (Adams et al., 1997). The primary symptom in most patients with disorders affecting the soft tissue is pain. In many patients, daily activities are limited as inflammation causes pain and, with it, a restriction of the range of movements. Causes of soft tissue pain can be described as musculo-skeletal, neurologic, vascular, and referred visceral-somatic or articular (Cailliet, 1991). Early reports of applying electrical current to treat pain date back to before 1800 (Ersek, 1981). PEMF therapy has successfully been used for the control of pain associated with rotator cuff tendinitis, multiple sclerosis, carpal tunnel syndrome, and peri-arthritis (Battisti et al., 1998; Lecaire et al., 1991). An improvement was observed in 93% of patients suffering from carpal tunnel pain and in 83% of cases of rotator cuff tendinitis. PEMF therapy was also used for treatment of migraine, chronic pelvic pain, neck pain, and whiplash injuries (Rosch et al., 2004). In a March 2003 publication on Pain Management with PEMF Treatment, Dr. William Pawluk explains: “Magnetic fields affect pain perception in many different ways. These actions are both direct and indirect. Direct effects of magnetic fields are on: neuron firing, calcium ion movement, membrane potentials, endorphin levels, nitric oxide, dopamine levels, acupuncture actions and nerve regeneration. Indirect benefits of magnetic fields on physiologic function are on: circulation, muscle, edema, tissue oxygen, inflammation, healing, prostaglandins, cellular metabolism and cell energy levels… Short-term effects are thought due to a decrease in cortisol and noradrenaline, and an increase in serotonin, endorphins and enkephalins.
Longer-term effects may be due to CNS and/or peripheral nervous system biochemical and neuronal effects in which correction of pain messages occurs, and the pain is not just masked as in the case of medication”.
PEMF Therapy Reduces Pain
Many studies have demonstrated the positive effects of PEMF therapy on patients with pain, both in comparison with traditional treatment and against placebo groups receiving no treatment. Some studies focused on rapid, short-term relief while others demonstrate the long-term effects. The effectiveness of PEMF therapy has been demonstrated in a wide variety of painful conditions. In a study entitled “Double-blind, placebo-controlled study on the treatment of migraine with PEMF”, Sherman et al. (Orthopedic Surgery Service, Madigan Army Medical Center, Tacoma, WA, USA) evaluated 42 subjects who met the International Headache Society’s criteria. During the first month of follow-up with exposure to PEMF, 73% of those receiving actual exposure reported decreased headaches, with 45% reporting a substantial decrease and 14% an excellent decrease. Ten of the 22 subjects who had received actual exposure received two additional weeks of actual exposure after their initial month. All showed decreased headache activity, with 50% showing a substantial decrease and 38% an excellent decrease. Sherman concluded that exposure to PEMF for at least 3 weeks is an effective, short-term intervention for migraine. Jorgensen et al. (1994, International Pain Research Institute, Los Angeles, CA, USA) studied the effects of PEMF on tissue trauma and concluded: “Unusually effective and long-lasting relief of pelvic pain of gynecological origin has been obtained consistently by short exposures of affected areas to the application of a magnetic induction device. Treatments are short, fast-acting, economical, and in many instances have obviated surgery”.
Patients with typical cases such as dysmenorrhoea, endometriosis, ruptured ovarian cyst, acute lower urinary tract infection, post-operative haematoma, and persistent dyspareunia who had not received analgesic medication were treated with pulsed magnetic field treatment and evaluated. The results showed that 90% of the patients experienced marked, even dramatic relief, while 10% reported less than complete pain relief. Hedén P. and Pilla A.A. (2008, Department of Plastic Surgery, Stockholm, Sweden) studied the effects of pulsed electro-magnetic fields on postoperative pain in breast augmentation patients. The authors note: “Postoperative pain may be experienced after breast augmentation surgery despite advances in surgical techniques, which minimize trauma. The use of pharmacological analgesics and narcotics may have undesirable side effects that can add to patient morbidity”. This study was undertaken to determine if PEMF could provide pain control after breast augmentation. Postoperative pain data were obtained and showed that pain in the treated patient group had decreased nearly three times faster than in the control group. Patient use of postoperative pain medication correspondingly also decreased nearly three times faster in the active versus the sham groups. Hedén and Pilla concluded: “Pulsed electro-magnetic field therapy, adjunctive to standard of care, can provide pain control with a noninvasive modality and reduce morbidity due to pain medication after breast augmentation surgery”. The Clinical Rheumatology Journal, volume 26-1, January 2007 (Springer London) reported on the effectiveness of PEMF therapy in lateral epicondylitis, by Kaan Uzunca, Murat Birtane and Nurettin Taştekin (Trakya University Medical Faculty Physical Medicine and Rehabilitation Department, Edirne, Turkey): “We aimed to investigate the efficacy of PEMF in lateral epicondylitis comparing the modality with sham PEMF and local steroid injection”.
Patients with lateral epicondylitis were randomly and equally distributed into three groups. One group received PEMF, another sham PEMF, and the third group a corticosteroid + anesthetic agent injection. Pain levels during rest, activity, nighttime, resisted wrist dorsiflexion, and forearm supination were assessed with a visual analog scale (VAS). Pain threshold on the elbow was determined with an algometer. All patients were evaluated before treatment, at the third week and at the third month. Pain levels were significantly lower in the group treated with the local steroid at the third week, but the group treated with PEMF had lower pain during rest, activity and nighttime than the group receiving steroids at the third month. Lau (School of Medicine, Loma Linda University, USA) reported on the application of PEMF therapy to the problems of diabetic retinopathy. Patients were treated over a 6-week period; 76% of the patients had a reduction in the level of numbness and tingling. All patients had a reduction of pain, with 66% reporting that they were totally pain-free. Sanseverino et al. (1999, Università di Bologna, Italy) studied the therapeutic effects of PEMF on joint diseases, in chronic and acute conditions, in more than 3,000 patients over a period of 11 years. Follow-up was pursued as constantly as possible. Pain control, recovery of joint mobility and maintenance of the improved conditions represented the parameters for judging the results as good or poor. The chi-square test was applied in order to evaluate the probability that the results were not due to chance. A general average of 78.8% good results and 21.2% poor results was obtained. The high percentage of good results obtained and the absolute absence of both negative results and undesired side-effects led to the conclusion that PEMF treatment is an excellent physical therapy in cases of joint diseases. A hypothesis is advanced that external magnetic fields influence transmembrane ionic activity.
In a 2008 randomized clinical trial to determine if a physics-based combination of simultaneous static and time-varying dynamic magnetic field stimulation to the wrist can reduce subjective neuropathic pain and influence objective electrophysiologic parameters of patients with carpal tunnel syndrome, Weintraub et al. report: “PEMF exposure in refractory carpal tunnel syndrome provides statistically significant short- and long-term pain reduction and mild improvement in objective neuronal functions”. In a 2009 evidence-based analysis on the use of PEMF therapy in clinical plastic surgery, Strauch et al. (Einstein College of Medicine, Bronx, NY, USA) explain: “Our objective was to review the major scientific breakthroughs and current understanding of the mechanism of action of PEMF therapy… The results show that PEMF therapy has been used successfully in the management of postsurgical pain and edema, the treatment of chronic wounds, and in facilitating vasodilatation and angiogenesis… with no known side effects for the adjunctive, noninvasive, nonpharmacologic management of postoperative pain and edema… Given the recent rapid advances in development of PEMF devices, what has been of most significance to the plastic surgeon is the laboratory and clinical confirmation of decreased pain and swelling following injury or surgery”. Because of the interaction between biological systems and natural magnetic fields, PEMFs can affect pain perception in many different ways.
PEMF Therapy Blocks Pain
PEMF therapy has been shown to be effective at reducing pain both in the short term and in the long term. The ways by which PEMF therapy relieves pain include pain blocking, decreased inflammation, increased cellular flexibility, increased blood and fluid circulation, and increased tissue oxygenation. The transmembrane potential (“TMP”) is the voltage difference (or electrical potential difference) between the interior and exterior of a cell.
An electrochemical gradient results from a spatial variation of both an electrical potential and a chemical concentration across a membrane. Both components are often due to ion gradients, particularly proton gradients, and the result is a type of potential energy available for cellular metabolism. This can be calculated as a thermodynamic measure, an electrochemical potential that combines the concepts of energy stored in the form of chemical potential, which accounts for an ion’s concentration gradient across a cellular membrane, and electrostatics, which accounts for an ion’s tendency to move relative to the TMP. Differences in concentration of ions on opposite sides of a cellular membrane produce the TMP. The largest contributions usually come from sodium (Na+) and chloride (Cl–) ions, which have high concentrations in the extracellular region, and potassium (K+) ions, which along with large protein anions have high concentrations in the intracellular region. Opening or closing of ion channels for ion transport (Na+, Ca2+, K+, Cl–) in and out of cells at one point in the membrane produces a local change in the TMP, which causes an electric current to flow rapidly to other points in the membrane. In electrically excitable cells such as neurons, the TMP is used for transmitting signals from one part of a cell to another. In non-excitable cells, and in excitable cells in their baseline states, the TMP is held at a relatively stable value, called the resting potential. For neurons, typical values of the resting potential range from -70 to -80 mV (millivolts); that is, the interior of a cell has a negative baseline voltage. Each axon has its characteristic resting potential voltage, and in each case the inside is negative relative to the outside.
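The single-ion equilibrium potentials behind the resting TMP described above can be computed with the standard Nernst equation. A minimal Python sketch follows; the ion concentrations are typical mammalian textbook values chosen for illustration, not figures from this text:

```python
import math

def nernst_mv(z, conc_out, conc_in, temp_c=37.0):
    """Nernst equilibrium potential in millivolts.

    z        -- ion valence (+1 for Na+ or K+, -1 for Cl-)
    conc_out -- extracellular concentration (mM)
    conc_in  -- intracellular concentration (mM)
    """
    R = 8.314      # gas constant, J/(mol*K)
    F = 96485.0    # Faraday constant, C/mol
    T = temp_c + 273.15
    return (R * T) / (z * F) * math.log(conc_out / conc_in) * 1000.0

# Illustrative mammalian neuron concentrations (mM)
e_k = nernst_mv(+1, 5.0, 140.0)    # K+: high inside, equilibrium near -89 mV
e_na = nernst_mv(+1, 145.0, 12.0)  # Na+: high outside, equilibrium near +67 mV

print(round(e_k), round(e_na))
```

Because resting membranes are far more permeable to K+ than to Na+, the resting TMP sits close to the K+ equilibrium value, consistent with the -70 to -80 mV range quoted above.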
Opening and closing of ion channels can induce a departure from the resting potential, called a depolarization if the interior voltage rises (say from -70 mV to -65 mV), or a hyperpolarization if the interior voltage becomes more negative (for example, changing from -70 mV to -80 mV). In excitable cells, a sufficiently large depolarization can evoke a short-lasting all-or-nothing event called an action potential, in which the TMP very rapidly undergoes a large change, often reversing its sign. Special types of voltage-dependent ion channels that generate action potentials but remain closed at the resting TMP can be induced to open by a small depolarization. In a lecture on Pain Reduction, Dr. D. Laycock, Ph.D. Med. Eng., MBES, MIPEM, B.Ed., inspired by the works of Adams et al. (1997), explains how PEMF therapy affects pain transmission at the level of the neurons. “It is necessary to understand the mechanism of pain transmission to understand how pain blocking can take place with PEMF therapy. Pain is transmitted along the nerve cells by an electric signal. This signal encounters synaptic gaps at intervals. The pain signals are transmitted along nerve cells to pre-synaptic terminals. At these terminals, channels in the cell alter due to a movement of ions. The TMP changes, causing the release of a chemical transmitter from a synaptic vesicle contained within the membrane. The pain signal is chemically transferred across the synaptic gap to chemical receptors on the post-synaptic nerve cell. This all happens in about 1/2000th of a second, as the synaptic gap is only 20 to 50 nm (nanometers) wide. As the pain signal, in chemical form, approaches the post-synaptic cell, the membrane changes and the signal is transferred. During quiescent times, cells possess a small charge of about –70 mV between the inner and outer membranes. When a pain signal arrives, it temporarily depolarizes the nociceptive cell and raises the cell TMP to +30 mV.
This increase is sufficient to open channels in the cell membrane allowing the exchange of sodium (Na+) and potassium (K+) ions. When an action potential begins, the channels that allow crossing of the Na+ ions open up. When the Na+ channels open, depolarization occurs: Na+ rushes in because of both the greater concentration of Na+ on the outside and the more positive voltage on the outside of the axon. The flow of positively charged ions into the axon leads the axon to become positively charged relative to the outside. With each positively charged Na+ ion that enters the axon, another positive charge is inside and one fewer positive charge is outside the axon. Thus the inside grows increasingly more positive and the concentration of Na+ inside the axon relative to outside grows greater. This initial phase of the action potential is called the depolarization phase. As the depolarization phase progresses, the status of the two physical forces that have been discussed changes. At the end of the depolarization phase, the voltage of the inside of the axon relative to the outside is positive and the relative concentration of Na+ ions inside the axon is greater than at the beginning of the action potential. When the inside of the axon becomes sufficiently positive, about +30 mV as an average value, the Na+ channels close. This closing of the Na+ channels greatly limits the ability of Na+ ions to enter the axon. In addition to the Na+ channels closing, the potassium (K+) channels open. Now K+ ions are free to cross the channels and leave the axon, due both to the greater concentration of K+ on the inside and the reversed voltage levels. The action potential is therefore not the movement of voltage or of individual ions down the axon, but a wave of ion channels opening and closing that propagates along it. This movement of the ion channels explains why the action potential is transmitted slowly relative to the normal flow of electricity.
The normal flow of electricity is the flow of electrons in an electrical field; the electrons travel at the speed of light, while the movement of these ion channels opening and closing is considerably slower. These are mechanical movements that cannot occur at the speed of light. The exchange of the sodium (Na+) and potassium (K+) ions then triggers exocytosis of neurotransmitters via synaptic vesicles. These neurotransmitters diffuse into the synaptic gap. Once this process has occurred, the cell repolarizes back to its previous level of –70 mV. Research by Warnke established that the application of PEMF therapy has an effect on the quiescent potential of the neuronal synaptic membrane (Warnke, 1983; Warnke et al., 1997). It is suggested that the effect is to lower the potential to a hyperpolarized level of –90 mV. “When a pain signal is received, the TMP has to be raised again in order to fire an action potential via neurotransmitters, but it only raises the cell TMP to approximately +10 mV. This potential is well below the threshold of +30 mV necessary to release the relevant neurotransmitters into the synaptic cleft, and the pain signal is effectively blocked”. By causing a hyperpolarized state at the neuronal membrane, PEMF therapy effectively blocks pain, as it prevents the threshold necessary to transmit the pain signal from being reached. In the same way, PEMF therapy effectively increases the TMP of damaged cells, thus allowing them to recover their functions, heal and improve their metabolism. The Encyclopedia of Nursing and Allied Health defines the use of “electrotherapy” for pain relief as effective to manage both acute and chronic pain. In the “Gate Model” of pain, the neural fibers that carry the signal for pain and those that carry the signal for proprioception (body and limb position) are mediated through the same central junction, so stimulation of the proprioceptive fibers can crowd out and diminish the transmission of the pain signal.
PEMF Therapy Reduces Inflammation
Inflammation is a response of tissue to an impact injury or trauma. It can also result from surgery.
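The pain-blocking arithmetic attributed to Laycock above can be checked with a few lines of Python. The numbers (a +30 mV firing threshold, a pain stimulus that shifts the membrane by about 100 mV, a normal resting level of -70 mV, and a PEMF-hyperpolarized level of -90 mV) come from the passage; the model itself is a deliberately simplified illustration, not a physiological simulation:

```python
FIRING_THRESHOLD_MV = 30.0   # potential needed to release neurotransmitters
PAIN_STIMULUS_MV = 100.0     # depolarizing shift from an incoming pain signal
                             # (-70 mV rest raised to +30 mV, per the text)

def fires(resting_mv):
    """Return True if a pain stimulus drives the membrane past threshold."""
    peak = resting_mv + PAIN_STIMULUS_MV
    return peak >= FIRING_THRESHOLD_MV

print(fires(-70.0))  # normal rest: -70 + 100 = +30, threshold reached -> True
print(fires(-90.0))  # hyperpolarized: -90 + 100 = +10, below threshold -> False
```

The same stimulus that just reaches threshold from a normal resting level falls 20 mV short from the hyperpolarized level, which is the claimed blocking mechanism.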
Tissue cells are inherently like tiny electrically charged machines. When a cell is traumatized, the cell’s electrical charge is diminished; this causes normal cell functions and operations to shut down. Cells that are scarred or fibrotic with adhesions have a TMP charge of approximately -15 mV, and degenerative or immune-compromised cells average -30 mV; both are abnormally low TMPs. With this diminished TMP, the body releases chemical signals that cause inflammation, swelling and bruising, resulting in pain and inhibiting the cell communication pathways necessary for healing to begin. Numerous clinical studies have demonstrated that PEMF therapy has been successful in reducing inflammation. PEMF therapy treats the cellular source of swelling by recharging the cells with a mild electromagnetic current. This stops the release of pain and inflammatory mediators, reduces inflammatory fluids and allows an increase in blood flow, and therefore increased oxygen intake, to help the cells heal faster with less swelling, pain and bruising. The effect of wound healing electromagnetic fields on inflammatory cytokine gene expression in rats was studied by Jasti et al. in 2001, who state: “Inflammation is characterized by massive infiltration of T lymphocytes, neutrophils and macrophages into the damaged tissue. These inflammatory cells produce a variety of cytokines, which are the cellular regulators of inflammation”. In a study entitled “Low Frequency PEMF—a viable alternative therapy for arthritis”, published in 2009, Ganesan et al. (Department of Biotechnology, Chennai, India) declare: “PEMF for arthritis cure has conclusively shown that PEMF not only alleviates the pain in the arthritis condition but it also affords chondroprotection, exerts anti-inflammatory action and helps in bone remodeling, and this could be developed as a viable alternative for arthritis therapy”.
Damaged cells are also energy deficient; they have low oxygen levels, high sodium levels, and a faltered electrochemical gradient. By inducing a mild electrical current into damaged cells, PEMF therapy slows or stops the release of pain and inflammatory mediators, increases blood flow, and re-establishes normal cell interaction. As PEMF stimulates and restores the electrochemical gradient, the cell starts pumping sodium out, potassium enters the cell, the swelling resolves, oxygen starts flowing back in, and pain improves. Due to the density of the cell tissue, this change requires stronger pulsed magnetic fields to restore the healthy TMP to its optimal -70 mV. Several factors influence tissue inflammation, and the processes by which PEMF therapy operates to reduce inflammation include complex mechanical, chemical, electrical and magnetic processes along with increased circulation, oxygenation and cellular activity. With reduced inflammation, pain decreases and faster tissue healing occurs. The Elsevier journal Biomedicine & Pharmacotherapy (2005) publication “Effects of pulsed electromagnetic fields on articular hyaline cartilage: review of experimental and clinical studies” by M. Fini, G. Giavaresi, A. Carpi, A. Nicolini, S. Setti and R. Giardino (Experimental Surgery Department, Research Institute Codivilla-Putti-Rizzoli, Orthopedic Institute, Bologna, Italy; Department of Reproduction and Aging and Department of Internal Medicine, University of Pisa, Pisa, Italy; Igea SRL, Carpi, Modena, Italy) states: “Newer concepts on osteoarthritis (OA) pathogenesis are related to the role of inflammation, which is now well accepted as a feature in OA. Synovitis is common in advanced OA, involving infiltration of activated B cells and T lymphocytes, and the expression of pro-inflammatory cytokines and chemokines is observed in the joints of OA patients and animals.
With regards to this, IL-1b, TNFa, IL-6, IL-18, IL-17 and leukemia inhibitory factor (LIF) appear to be more relevant to the disease. These catabolic cytokines lead to the destruction of joint tissue by stimulating cartilage PG resorption, MMP synthesis and nitric oxide production. The purine base adenosine has been shown to limit inflammation through receptor (i.e. A2a)-mediated regulation and by suppressing pro-inflammatory cytokine synthesis (TNFa, IL-8, IL-2, IL-6). Adenosine has been reported to reduce inflammation and swelling in several in vivo models of inflammation and also in adjuvant-induced and septic arthritis in animals. So, a therapy combining an anabolic effect on chondrocytes, a catabolic cytokine blockage, a stimulatory effect on anabolic cytokine production, and an ability to counteract the inflammatory process would be extremely useful for OA treatment. In vitro studies showed that chondrocyte proliferation and matrix synthesis are significantly enhanced by PEMF stimulation, when investigating also the conditions affecting the PEMF action. Apart from the importance of the physical properties of the fields used (intensity, frequency, impulse amplitude, etc.) and the exposure time, the availability of growth factors, environmental constraints and the maintenance of the native cell–matrix interactions seem to be fundamental in driving the PEMF-induced stimulation. In particular, the interaction between cell membrane receptors and mitogens seems to be one of the molecular events affected by PEMFs. These data are in agreement with the results of in vivo studies with a decalcified bone matrix-induced endochondral ossification model, showing that the stimulation of TGF-b1 may be a mechanism through which PEMFs affect complex tissue behavior and through which the effects of PEMFs may be amplified.
In addition, PEMFs are reported to up-regulate mRNA levels for, and protein synthesis of, growth factors, resulting in the synthesis of ECM proteins and acceleration of tissue repair. As far as inflammation is concerned, IL-1b is present in high amounts in OA cartilage and is considered to be one of the main catabolic factors involved in the cartilage matrix degradation associated with OA. As previously mentioned, PEMFs in vitro were able to efficiently counterbalance the cartilage degradation induced by the catabolic cytokine”. As cited above, many studies lead to the conclusion that PEMF therapy is effective and reduces inflammation.
PEMF Therapy Increases Blood and Lymphatic Circulation
The arterial and venous blood vessels are intimately associated with the lymphatic system. As the blood and lymphatic vessels bring oxygen and nutrients to the cells and remove their waste products, they nourish and detoxify the cells, tissues and body. In June 2004, The FASEB Journal states: “PEMF therapy has been shown to be clinically beneficial in repairing bones and other tissues, but the mechanism of action is unclear. The results of a study done at the New York University Medical Center (Institute of Reconstructive Plastic Surgery, NY, NY, USA) demonstrate that electro-magnetic fields increased angiogenesis, the growth of new blood vessels, in vitro and in vivo through the endothelial release of FGF-2, fibroblast growth factor-2. The delivery of PEMF therapy in low doses identical to that currently in clinical use significantly increased endothelial cell proliferation and tubulization, which are both important processes for vessel formation. The ability of PEMF to increase cell proliferation was unique to endothelial cells, which seemed to be the primary target of PEMF stimulation, releasing a protein in a paracrine fashion (or signaling to adjacent cells and other types of cells) to induce changes in neighboring cells and tissues.
Since direct stimulation did not produce significant changes in osteoblast proliferation, the ability of PEMF therapy to enhance the healing of complicated fractures is likely the result of increased vascularity rather than a direct effect on osteogenesis, as previously believed. The coordinated release of FGF-2 suggests that PEMF therapy may facilitate healing by augmenting the interaction between osteogenesis and blood vessel growth. As such, PEMF therapy may offer distinct advantages as a non-invasive and targeted modality that is able to release several growth factors to achieve therapeutic angiogenesis. The fibroblast and endothelial cells are made to go embryonic due to drastic changes in ionic concentrations in the cells’ cytoplasm and therefore the cells’ nuclei. These ionic concentrations react with the cell DNA, opening up some gene sets and closing down others. It is apparently the rapid onset of a strong pulsed electric field, generated by the pulsed magnetic field, which causes some cell ion gate types to open and be force-fed ions by the same electric field”. As demonstrated in the following study, entitled “Impulse magnetic-field therapy for erectile dysfunction: a double-blind, placebo-controlled study”, increased microcirculation leads to improvements in macro-circulation. The study by Pelka et al. (Universitat der Bundeswehr Munchen, Munich, Germany) assessed the efficacy of three weeks of PEMF therapy for erectile dysfunction. In the active-treatment group, all efficacy endpoints were significantly improved at study end, with 80% reporting increases in intensity and duration of erection, frequency of genital warmth, and general well-being. In contrast, only 30% of the placebo group noted some improvement in their sexual activity; 70% had no change. No side effects were reported. PEMF therapy has proven efficacious at increasing the flow of ions and nutrients into the cells and at stimulating blood and interstitial fluid circulation.
With increased lymphatic drainage and blood flow, cells receive more oxygen and nutrients, and eliminate toxins faster. Cells are therefore able to function better and tissues repair themselves more efficiently. Through the same processes, vital organs such as the liver, kidneys and colon are able to rid themselves of impurities, thus detoxifying the body and allowing better organ functionality.
PEMF Therapy Increases Cellular Membrane Permeability
As early as 1940, it was suggested that magnetic fields affect the TMP and the flow of ions in and out of the cells, and might therefore influence cellular membrane permeability. It has since been established that magnetic fields can influence ATP (adenosine triphosphate) production; increase the supply of oxygen and nutrients via the vascular and lymphatic systems; improve the removal of waste via the lymphatic system; and help re-balance the distribution of ions across the cell membrane. Healthy cells in tissue have a voltage difference between the inner and outer membrane, referred to as the membrane resting potential, that ranges from -70 to -80 mV. This causes a steady flow of ions through their voltage-dependent ion channels. In a damaged cell, the potential is raised and an increased sodium inflow occurs. As a result, interstitial fluid is attracted to the inner cellular space, resulting in swelling and edema. The application of PEMF to damaged cells accelerates the re-establishment of normal potentials (Sanseverino, 1999), increasing the rate of healing and reducing swelling. In biology, depolarization is a change in a cell’s TMP, making it more positive or less negative. In neurons and some other cells, a large enough depolarization may result in an action potential. Hyperpolarization is the opposite of depolarization, and inhibits the rise of an action potential. If a cell has a resting potential of -70 mV and the membrane potential rises to -50 mV, then the cell has been depolarized.
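The resting potential discussed above emerges from the combined K+, Na+ and Cl– gradients weighted by membrane permeability, which the standard Goldman–Hodgkin–Katz equation captures. The sketch below uses common textbook permeability ratios and concentrations, not figures from this text:

```python
import math

def ghk_mv(p_k, p_na, p_cl, k_o, k_i, na_o, na_i, cl_o, cl_i, temp_c=37.0):
    """Goldman-Hodgkin-Katz resting potential in millivolts.

    p_* are relative membrane permeabilities; concentrations are in mM.
    The Cl- terms are inverted (intracellular in the numerator) because
    chloride carries a negative charge.
    """
    R, F = 8.314, 96485.0
    T = temp_c + 273.15
    num = p_k * k_o + p_na * na_o + p_cl * cl_i
    den = p_k * k_i + p_na * na_i + p_cl * cl_o
    return (R * T / F) * math.log(num / den) * 1000.0

# Illustrative resting permeability ratios (K+ : Na+ : Cl- = 1 : 0.04 : 0.45)
v_rest = ghk_mv(1.0, 0.04, 0.45,
                k_o=5.0, k_i=140.0,
                na_o=145.0, na_i=12.0,
                cl_o=110.0, cl_i=10.0)
print(round(v_rest))  # about -67 mV, near the quoted -70 mV resting range
```

Raising the relative Na+ permeability in this formula drives the result toward less negative values, which mirrors the damaged-cell sodium inflow and raised potential described in the passage.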
Depolarization is often caused by influx of cations, e.g. Na+ through Na+ channels, or Ca2+ through Ca2+ channels. On the other hand, efflux of K+ through K+ channels inhibits depolarization, as does influx of Cl– (an anion) through Cl– channels. If a cell has K+ or Cl– currents at rest, then inhibition of those currents will also result in a depolarization. As the magnetic field fluctuates, it induces an electron flow, or a current, in one direction through the living tissue. As electrons always flow from a negative (cathode) to a positive (anode) potential, when the magnetic field vanishes, the direction of the electron flow is reversed. Such induced, polarized currents therefore stimulate the exchange of ions across the cell membrane. As the electro-magnetic field pulses temporarily hyperpolarize and depolarize the membrane, the ion channels open and close, allowing a more efficient ion exchange, as with the sodium-potassium (Na+/K+) pump, thus increasing cellular oxygenation and nutrition as sodium export stimulates several secondary active transporters. PEMF Therapy Increases Cellular Metabolism In a study on Chronic Fatigue Syndrome and electro-medicine, Thomas Valone, Ph.D., showed that damaged or diseased cells present an abnormally low TMP, about 80% lower than that of healthy cells. This signifies a greatly reduced metabolism and, in particular, impairment of the electrogenic Na+/K+ pump activity associated with reduced ATP (adenosine triphosphate) production. The Na+/K+ pump within the membrane moves 3 Na+ ions out of the cell for every 2 K+ ions pumped in, a ratio required for proper metabolism. The sodium-potassium pump uses energy derived from ATP to exchange sodium for potassium ions across the membrane. An impaired Na+/K+ pump results in edema (cellular water accumulation) and a tendency toward fermentation, a condition known to be favorable toward cancerous activity. French researcher Louis C. 
Kervran demonstrated that sodium plus oxygen plus energy (e.g. magnetic) transmutes into potassium as follows: ₁₁Na²³ + ₈O¹⁶ + energy = ₁₉K³⁹. This nuclear process is accomplished with low heat, at a low rate of thermal decomposition, and is described as the most important and commonly occurring phenomenon of nuclear fusion in biology. As a result, utilization of oxygen in the cells increases and the body increases production of its own energy supplier (ATP). The organism becomes more stable and efficient; toxins and waste products are more rapidly broken down. The body’s natural regulatory mechanisms are reinforced and healing processes accelerated. Free radical proliferation is linked to pathological changes that cause cellular malfunction or mutation (i.e. cancer) as well as protein degradation. Free radicals also play a large role in causing damage to all cells of the body, but particularly those of the immune system. According to studies, free radicals also “deplete cellular energy” by interfering with mitochondrial function and contribute to a shortened lifespan. Cellular energy generation in the mitochondria is both a key source and a key target of oxidative stress in the cells. Seeking an electron to complete themselves, free radicals cause chain reactions as electrons are ripped from other molecules, each creating another free radical. Antioxidants such as vitamin A, vitamin E, selenium and coenzyme Q10 supply free electrons and are usually prescribed to provide limited relief in counteracting free radical ravages. However, the electronic antioxidants produced by PEMF therapy can also satisfy and terminate free radicals by abundantly supplying the key ingredient usually found only in encapsulated antioxidant supplements… the electron (Thomas Valone, Ph.D., on Bioelectromagnetics, 2003). 
On the biophysical level, as PEMF therapy increases the circulation of electrons across the cell membrane, a parallel phenomenon seems to occur: the acceleration of ATP synthesis and of other aspects of cellular biochemical anabolism. As electrons are drawn to the inner membrane, they increase the ionic charge inside the cell and, thus, the TMP. In 1976, Nobel Prize winner Dr. Albert Szent-Györgyi established that structured proteins behave like diodes or rectifiers. A diode passes electricity in only one direction. He proposed that cell membranes can rectify an induced voltage, and this rectifying property of cell membranes can cause changes in the ion concentration of the inner and outer surfaces of the cell membrane in such a way as to increase the TMP and effectively stimulate the activity of the Na+/K+ pump. Cell health is directly affected by the health of the Na+/K+ pump, which is directly proportional to the TMP. Based on these biophysical principles, an endogenous high-voltage EMF potential of sufficient strength will theoretically stimulate the TMP, normal cell metabolism, the sodium pump, ATP production and healing. Electro-medicine appears to connect to and recharge the storage battery of the TMP. Dr. Albert Szent-Györgyi summarizes: “TMP is proportional to the activity of this pump and thus to the rate of healing.” Furthermore, “increases in the TMP have also been found to increase the uptake of amino acids.” This is important, as increasing the supply of nutrients is also an effective aid to cell repair. This is particularly true in trauma where circulation has been impaired by crushed or severed blood vessels, or by the inflammation and swelling that compresses capillaries, blocking the flow to both the injured and uninjured cells. 
PEMF Therapy Increases Energy Storage and Cellular Activity At the sub-atomic level, as the pulsed fields expand and collapse through a tissue, the protein molecules, such as the cytochromes in the cells’ mitochondria, gain electrons and, in doing so, store energy. Even though the instantaneous peak magnetic amplitudes are very high, the average magnetic amplitudes generated by PEMF therapy remain low: the average total energy transmitted to the tissues is not enough to create heat within the cells, nor to make the cells’ atoms vibrate enough to cause a thermal increase, nor to make an electron jump to a higher orbit and emit heat as it returns to its orbit of origin. There is only sufficient average energy for the electron spin to be increased; thus, energy gets stored in the cells’ mitochondria by converting ADP (adenosine diphosphate) to ATP molecules more rapidly through the addition of the phosphate radical to the ADP. The ATP molecules store and transport the energy that is then used in the many chemical processes within the cell that participate in all the metabolic functions of living cells. This phenomenon is referred to as the electron transport chain. (Diagrams of the ADP and ATP structures and of the electron transport chain omitted.) Understanding the effects of PEMF therapy at the atomic level requires a basic understanding of quantum mechanics. Solving the Schrödinger equation for a molecule, and determining the probability amplitude of its electrons over the infinite number of possible trajectories, yields the vibrational states of the molecule. The equation describes how the quantum state, or wave function, of a molecule or physical system changes in time. A diatomic molecule, which involves only one vibrational degree of freedom (the stretching of the bond between its two atoms), provides a simple description (Atkins et al., 2002). 
Quantum mechanical considerations show that, during the electronic excitation of a particular molecule at the same orbital state, the energy of an excited triplet state (T1) is lower than that of its corresponding singlet state (S1). In biomolecules, the non-radiative crossing from the state S2 to S1 is generally the dominant mechanism. This crossing between two electronic states of the same spin multiplicity is called internal conversion (“IC”) (Atkins et al., 2002). The IC process is then followed by a rapid vibrational relaxation in which the excess vibrational energy is dissipated into heat, the molecule now ending up at the lowest, zero-point vibrational level of the S1 electronic state. From here, it can return to the ground electronic state S0 by emitting a photon (radiatively). The time-varying magnetic fields associated with PEMF therapy apparently affect electronic states via intersystem crossing (“ISC”), which is an excitation from state Si to Ti, where Ti is the corresponding triplet state (2 electrons are unpaired). The ISC type of crossing is heavily affected by the spin-orbit coupling, which relaxes the spin property by mixing with an orbital character (Szent-Györgyi A, 1976; Atkins et al., 2002). The ISC type of crossing leads to phosphorescence rather than fluorescence, with radically different heat properties. Heavy metals, molecular oxygen (having a triplet ground state), paramagnetic molecules such as hemoglobin, and heavy atoms such as iodine increase the intersystem crossing rate (Prasad, 2003). In shifting positions around an atomic nucleus, an electron generates energy and emits a magnetic resonance of specific frequency. Thus, the magnetic resonance field frequency of the various body tissues and organs is a product of the individual atomic, molecular and cellular frequencies specific to the molecules that constitute the particular tissue or organ. 
PEMF therapy therefore confuses the specific inherent magnetic resonance and temporarily modifies it in each atom, molecule, cell, and thus, tissue and organ. From the perspective of biophysics, physiological markers represent a level of “order or disorder” in the magnetic resonance of a normal atom that correlates to internal and external factors. The Pulsed Electro-Magnetic Fields generated by PEMF therapy devices provide sufficient energy to affect the magnetic resonance of the atom as the electron is energized. When a disruption in the magnetic resonance occurs, the magnetic resonance of the electrons at the atomic level also exhibits a change, a phase shift that disturbs and breaks the once orderly pathways of communication that is usually transmitted from atom to molecule, molecule to cell, cell to tissue, and tissue to organ.  In doing so, the phase shift influences the physical and chemical characteristics of the physiological markers. PEMF therapy has proven beneficial in many ways for the various energetic body functions. All of the many types of living cells that make up the tissues and organs of the body are tiny electrochemical units. They are powered by a “battery” that is continually recharged by the cells’ metabolic chemistry in a closed loop of biological energy. PEMF Therapy Increases Cellular Membrane Flexibility and Elasticity A study entitled “Modulation of collagen production in cultured fibroblasts by a low-frequency pulsed magnetic field” by Murray et al. (Biochim Biophys Acta) shows that the total protein synthesis was increased in confluent cells treated with a pulsed magnetic field for the last 24 h of culture as well as in cells treated for a total of 6 days. However, in 6 day-treated cultures, collagen accumulation was specifically enhanced as compared to total protein, whereas after short-term exposure, collagen production was increased only to the same extent as total protein. 
These results indicate that a pulsed magnetic field can specifically increase collagen production, the major differentiated function of fibroblasts, possibly by altering cyclic-AMP metabolism. PEMF therapy successfully increases membrane flexibility by increasing the synthesis of collagen, a crucial protein that supports membrane elasticity, within the fibroblasts. In doing so, PEMF therapy increases tissue and muscle flexibility and thereby increases range of motion. PEMF Therapy Stimulates Cellular Communication and Replication DNA synthesis is linked to pulsed, low-intensity magnetic fields (Liboff et al., 1984; Rosch et al., 2004). Proteins are conductors of electricity. When exposed to strong fields, proteins are subject to electrophoresis. The Ribonucleic Acid (“RNA”) messengers that are synthesized from a Deoxyribonucleic Acid (“DNA”) template during transcription mediate the transfer of genetic information from the cell nucleus to ribosomes in the cytoplasm and serve as a template for protein synthesis. Since RNA mechanically influences the DNA and encoded proteins influence RNA, the flow of information to and from genes may be linked to changing magnetic fields (Einstein, 1977; Goodman et al., 1983). Since magnetic fields interact with changing electrical charges and recent studies (Dandliker et al., 1997) show that DNA conducts electrons along the stacked bases within the DNA double helix, electro-magnetic fields may initiate transcription of the precursor mRNA by accelerating electrons moving within the DNA helix (McLean et al., 2003). PEMF Therapy Increases Cellular Genesis (Cellular Growth and Repair) In December 2004, the Swiss Medical Tribune stated that PEMF therapy provided: “improvement of blood circulation, relief from pain, improvement of bone healing and the stimulation of nerve cells. 
Not only is PEMF therapy effective in disease conditions: it is an excellent means of preventing stress, assisting regeneration and recovery after sports exertion… Through metabolic activation and blood circulation more nutrients and oxygen are available to muscle cells, less damage is experienced, and efficiency is improved.” • PEMF and the spine In a long-term study entitled “Spine fusion for discogenic low back pain: outcome in patients treated with or without pulsed electromagnetic field stimulation”, Marks RA. (Richardson Orthopaedic Surgery, TX, USA) randomly selected 61 patients who underwent lumbar fusion surgeries for discogenic low back pain between 1987 and 1994 and had failed to respond to preoperative conservative treatments. Average follow-up time was 15.6 months postoperatively. Fusion succeeded in 97.6% of the 42 patients who received PEMF stimulation, versus only 52.6% of the 19 patients who did not receive electrical stimulation of any kind. In a similar study by Richard A. Silver, M.D. (Tucson Orthopaedic & Fracture Surgery Associates, Ltd., Tucson, AZ, USA) of 85 patients who had undergone posterior lumbar interbody fusion (PLIF) surgery and had risk factors associated with a poor prognosis for healing (including smoking, prior back surgery, multiple spinal levels fused, diabetes mellitus, and obesity), roentgenographic examination and clinical evidence indicated that all but two patients achieved successful fusion. Of the 83 patients with successful spinal fusion, 29 (34.9%) were assessed as “excellent,” 45 (54.2%) as “good,” 3 (3.6%) as “fair”, and 6 (7.2%) as “poor”. Adjunctive treatment with PEMF appeared effective in promoting spinal fusion following PLIF procedures across all patient subgroups. 
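A quick way to see that a split like 97.6% versus 52.6% is unlikely to arise by chance is Fisher's exact test on the underlying 2×2 table. A sketch using only the Python standard library; the counts (41 of 42 fused with PEMF, 10 of 19 without) are reconstructed from the percentages above, and the function name is mine:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    P(success count in group 1 >= a) under the hypergeometric null."""
    n1 = a + b            # size of group 1
    k_total = a + c       # total successes
    n = a + b + c + d     # total subjects
    p = 0.0
    for k in range(a, min(n1, k_total) + 1):
        if 0 <= n1 - k <= n - k_total:
            p += comb(k_total, k) * comb(n - k_total, n1 - k) / comb(n, n1)
    return p

# Reconstructed counts: 41/42 fused with PEMF, 10/19 fused without.
p = fisher_one_sided(41, 1, 10, 9)
print(f"one-sided Fisher exact p = {p:.2e}")
```

The one-sided p-value comes out far below 0.01, consistent with the study's claim of a significant effect, though this back-of-the-envelope check is no substitute for the original analysis.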
• PEMF, cartilage and bones In a study entitled “Modification of biological behavior of cells by Pulsing Electro-magnetic fields”, 20 subjects aged between 57 and 75 years with decreased bone mineral density, as defined by a bone densitometer, were treated with PEMF therapy during a period of 12 weeks by Ben Philipson, Curatronic Ltd. (University of Hawaii School of Medicine, HI, USA). After a period of 6 weeks, the bone density in those patients rose by an average of 5.6%. Properly applied pulsed electromagnetic fields, if scaled for whole-body use, have clear clinical benefits in the treatment of bone diseases and related pain, often caused by micro-fractures in vertebrae. In addition, joint pain caused by worn-out cartilage layers can be treated successfully through electromagnetic stimulation. PEMF application promotes bone union by electric current induction, which changes the permeability of the cell membrane, allowing more ions across; affects the activity of intracellular cyclic adenosine monophosphate (cAMP) and cyclic guanosine monophosphate (cGMP); and accelerates osteoblast differentiation by activation of p38 phosphorylation. PEMF stimulation also increases the partial oxygen pressure and calcium transport. Repair and growth of cartilage is thus stimulated, preventing grinding of the bones. • PEMF and tendons The Department of Rheumatology at Addenbrooke's Hospital carried out investigations into the use of PEMF therapy for the treatment of persistent rotator cuff tendonitis. PEMF treatment was applied to patients who had symptoms refractory to steroid injection and other conventional treatments. At the end of the trial, 65% of these were symptom-free, with 18% of the remainder greatly improved. 
In a study entitled “Pulsed Magnetic Field Therapy Increases Tensile Strength in a Rat Achilles’ Tendon Repair Model” published in 2006 (Department of Plastic and Reconstructive Surgery, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, NY; Department of Biomedical Engineering, Columbia University, New York, NY; and Department of Orthopaedics, Mount Sinai School of Medicine, New York, NY), Berish Strauch, M.D., et al. conclude: “The use of electromagnetic fields in tissue healing is still a relatively recent application and much research remains to be performed. Areas that need greater explanation include the interplay between wound healing, contributing growth factors, and angiogenesis. PEMFs hold promise as a safe, easily administered and noninvasive modality to accelerate and improve the body’s healing mechanisms”. • PEMF and intestines An experimental study was designed to investigate the effect of PEMF therapy on intestinal healing and to compare small and large intestinal anastomoses, or connections between the loops of the intestines, by Nayci (Department of Pediatric Surgery, Mersin University Medical Faculty, Turkey). The study demonstrated that PEMF stimulation provided a significant gain in anastomotic healing in both small and large intestine, and a significant increase in both biochemical and mechanical parameters. • PEMF and the brain A four-week double-blind, placebo-controlled study conducted by the Universität der Bundeswehr (Munich, Germany) assessed the efficacy of PEMF therapy for insomnia. One hundred one patients were randomly assigned to either active treatment (n = 50) or placebo (n = 51) and allocated to one of three diagnostic groups: sleep latency; interrupted sleep; or nightmares. The results showed 70% (n = 34) of the patients given active PEMF treatment experienced substantial or even complete relief of their complaints; 24% (n = 12) reported clear improvement; 6% (n = 3) noted a slight improvement. 
Only one placebo patient (2%) had very clear relief; 49% (n = 23) reported slight or clear improvement; and 49% (n = 23) saw no change in their symptoms. No adverse effects of treatment were reported. Stunning results were obtained in a study entitled “Protection against focal cerebral ischemia following exposure to a pulsed electro-magnetic field”, in which Grant G. (1994, Department of Neurosurgery, Stanford University, CA, USA) stated: “There is evidence that electro-magnetic stimulation may accelerate the healing of tissue damage following ischemia. Exposure to pulsed electro-magnetic field attenuated cortical ischemia edema on MRI at the most anterior coronal level by 65%. On histological examination, PEMF exposure reduced ischemic neuronal damage in this same cortical area by 69% and by 43% in the striatum. Preliminary data suggest that exposure to a PEMF of short duration may have implications for the treatment of acute stroke”. • PEMF and multiple sclerosis At the Biologic Effects of Light 1998 Symposium, Richards et al. explain the effects of a pulsing magnetic field on brain electrical activity in multiple sclerosis: “Multiple sclerosis (MS) is a disease of the central nervous system. Clinical symptoms include central fatigue, impaired bladder control, muscle weakness, sensory deficits, impaired cognition, and others. The cause of MS is unknown, but from histologic, immunologic, and radiologic studies, we know that there are demyelinated brain lesions (visible on MRI) that contain immune cells such as macrophages and T-cells (visible on microscopic analysis of brain sections). Recently, a histologic study has also shown that widespread axonal damage occurs in MS along with demyelination. What is the possible connection between MS and bio-electromagnetic fields? 
We recently published a review entitled “Bio-electromagnetic applications for multiple sclerosis,” which examined several scientific studies that demonstrated the effects of electromagnetic fields on nerve regeneration, brain electrical activity (electro-encephalography), neurochemistry, and immune system components. All of these effects are important for disease pathology and clinical symptoms in MS”. They referred to a study that evaluated electro-encephalograms (EEG) in response to photic stimulation with flashing lights before and after PEMF exposure. The evidence showed a significant increase in alpha EEG magnitude that was greater in the active group compared to the placebo group, demonstrating increased activity. Richards et al. (Department of Radiology, University of Washington, WA, USA) confirm the above conclusion in a double-blind study to measure the clinical and sub-clinical effects of an alternative medicine electromagnetic device on disease activity in multiple sclerosis. The MS patients were exposed to a magnetic pulsing device that was either active (PEMF) or inactive (placebo) for two months. Each MS patient received a set of tests to evaluate MS disease status before and after wearing the device. The tests included a clinical rating (Kurtzke, EDSS), patient-reported performance scales (PS), and quantitative electro-encephalography (QEEG) during a language task. Although there was no significant change between pre-treatment and post-treatment in the EDSS scale, there was a significant improvement in the PS combined rating for bladder control, cognitive function, fatigue level, mobility, spasticity, and vision. There was also a significant change between pre-treatment and post-treatment in alpha EEG magnitude during the language task. Richards et al. stated: “we have demonstrated a statistically significant effect of the magnetic pulsing device on patient performance scales and on alpha EEG magnitude during a language task”. 
In “Treatment with AC PEMFs normalizes the latency of the visual evoked response in a multiple sclerosis patient with optic atrophy”, Sandyk (1998, Department of Neuroscience at the Institute for Biomedical Engineering and Rehabilitation Services of Touro College, Dix Hills, NY, USA) explains: “Visual evoked response (VER) studies have been utilized as supportive information for the diagnosis of MS and may be useful in objectively monitoring the effects of various therapeutic modalities. Delayed latency of the VER, which reflects slowed impulse transmission in the optic pathways, is the most characteristic abnormality associated with the disease. Brief transcranial applications of AC PEMFs in the picotesla flux density are efficacious in the symptomatic treatment of MS and may also reestablish impulse transmission in the optic pathways… The rapid improvement in vision coupled with the normalization of the VER latency despite the presence of optic atrophy, which reflects chronic demyelination of the optic nerve, cannot be explained on the basis of partial or full reformation of myelin. It is proposed that in MS synaptic neurotransmitter deficiency is associated with the visual impairment and delayed VER latency following optic neuritis, and that the recovery of the VER latency by treatment with PEMFs is related to enhancement of synaptic neurotransmitter functions in the retina and central optic pathways. Recovery of the VER latency in MS patients may have important implications with respect to the treatment of visual impairment and prevention of visual loss. Specifically, repeated applications of PEMFs may maintain impulse transmission in the optic nerve and thus potentially sustain its viability”. Sandyk R. summarizes recent clinical work on the therapeutic effects of AC PEMF in MS: “Multiple sclerosis is the third most common cause of severe disability in patients between the ages of 15 and 50 years. The cause of the disease and its pathogenesis remain unknown. 
The last 20 years have seen only meager advances in the development of effective treatments for the disease. No specific treatment modality can cure the disease or alter its long-term course and eventual outcome. Moreover, there are no agents or treatments that will restore premorbid neuronal function. A host of biological phenomena associated with the disease, involving interactions among genetic, environmental, immunologic, and hormonal factors, cannot be explained on the basis of demyelination alone and therefore require refocusing attention on alternative explanations, one of which implicates the pineal gland as pivotal. The pineal gland functions as a magneto-receptor organ. This biological property of the gland provided the impetus for the development of a novel and highly effective therapeutic modality, which involves transcranial applications of alternating current (AC) PEMFs of picotesla flux density” (1997). As demonstrated by the many studies cited herein, it is clear that PEMF treatment stimulates many aspects of cellular metabolism and activity by increasing the TMP, the flow of ions across the cell membrane, growth factors, tissue repair and healing. PEMF therapy increases blood circulation in and around damaged tissue, and effectively helps damaged cells heal by bringing more oxygen into the cells. Effects observed when the TMP is increased include enhanced cellular energy (ATP) production, increased oxygen uptake, changes in the entry of calcium, movement of sodium out of the cell, movement of potassium into the cell, changes in enzyme and biochemical activity, and changes in cellular pH. These effects stimulate large numbers of lymphatic vessels to pump and drain lymph fluid which, in turn, supports immune health. 
This effect involves a chain of processes in the human body, which leads to the improvement of health without side effects, including:
• Increased production of nitric oxide
• Improved micro-circulation
• Increased supply of oxygen, ions and nutrients to cells
• Increased partial oxygen pressure
• Increased ATP production by excitation of electrons
• Stimulation of RNA and DNA production
• Accelerated protein bio-synthesis by electron and energy transfer
• Anti-oxidation regulation with increased circulation of available electrons
• Enhanced cellular and tissue elasticity with increased collagen production
• Increased cellular genesis promoting bone, cartilage, tendon and soft tissue growth
• Stimulation of cellular repair mechanisms
• Enhanced macro circulation: by mechanically de-clumping blood cells, alternately dilating and constricting vessels, and through angiogenesis, the growth of new blood vessels
• Increased absorption of nutrients and pharmaceuticals
• Accelerated detoxification of cells and organs
• Decreased swelling, inflammation and pain
• Boosting of the immune system, the body’s defenses, by improving the rolling and adhesion behavior of white blood cells
• Supporting the body’s internal self-regulating mechanisms by activating cellular and molecular processes.
Beyond its complex mechanisms, PEMF therapy offers many health benefits. PEMFs help the natural body healing processes by delivering a non-invasive form of repetitive electrical stimulation that requires no direct contact with the skin surface. Magnetic fields have been shown to affect biologic processes and be effective in a wide range of medical conditions. PEMF therapy has proven beneficial in stimulating cellular metabolism, blood and fluids circulation, tissue regeneration and immune system response. Through these processes, cells are able to function better and tissues repair themselves more efficiently. 
Through the same processes, vital organs such as the liver, kidneys and colon are able to rid themselves of impurities, thus detoxifying the body and allowing better organ functionality. PEMF treatment is effective at increasing bone formation and bone density; healing fractures and osteotomies; aiding recovery from wounds and trauma; improving graft and post-surgical behavior; aiding recovery from myocardial and brain ischemia (heart attack and stroke); and treating tendonitis, osteoarthritis, and impaired neural function or spasticity from central nervous system diseases such as multiple sclerosis and spinal cord damage. PEMF stimulation offers a safer and more comfortable alternative to prior treatments for urinary incontinence. PEMF therapy improves sports performance and simply helps to maintain good health. It stimulates muscles, connective tissues, intestines, tendons and cartilage, the brain and peripheral nerve sites. In doing so, PEMF therapy promotes healing and a return to higher activity levels. Functions that were lost begin to recover. Extensive research has been carried out to determine the mechanisms by which this occurs but, for the physiotherapist or other medical professional presented with a wide range of clinical problems, PEMF therapy is an invaluable aid to the clinic. PEMF therapy leaves you feeling relaxed, energized, renewed and with a sense of well-being. Thank you to Wikipedia English for public access to its formidable scientific data resources.
This is something which I suspect is written up in introductory books on mathematical physics, if only I knew where to look. Suppose I have some parameters $t_1$, ..., $t_k$ ranging over a neighborhood in $\mathbb{R}^k$. I also have $k$ matrix-valued functions of the $t$'s: $H_1(t_1, \ldots, t_k)$, ... $H_k(t_1, \ldots, t_k)$. These obey both $$[H_i, H_j]=0 \quad (\ast)$$ and $$[\partial_i+H_i, \partial_j+H_j] =0 \quad (\dagger).$$ For those who don't like the language of connections, we can expand $(\dagger)$ as $\partial H_i/\partial t_j - \partial H_j/\partial t_i + [H_i, H_j]=0$ or, in the presence of $(\ast)$, as $$\frac{\partial H_i}{\partial t_j} = \frac{\partial H_j}{\partial t_i}.$$ Equation $(\ast)$ tells us that, assuming the $H_i$ are individually diagonalizable, we can find $u(t)$ a simultaneous eigenvector for all the $H_i$: $$H_i(t) u(t) = \lambda_i(t) u(t). \quad (\ast\ast)$$ Equation $(\dagger)$ tells us that the vector-valued PDE $$\frac{\partial v}{\partial t_i} + H_i v=0 \quad (\dagger \dagger)$$ will have a unique solution $v(t_1, t_2, \ldots, t_k)$ for any initial value. I'm pretty sure there is supposed to be a relation between the solutions to $(\ast \ast)$ and $(\dagger \dagger)$. What is the right statement, and what is the keyword to read about this situation? Motivation: I'm trying to work through the papers of Varchenko, Scherbak and others on the KZ equation. I think it would really clear my head to just see this scenario described abstractly without all the details of which operators they are thinking about. $\def\mg{\mathfrak{gl}_n}$ Edit to spell out the relation. Let $V_1$, $V_2$, ..., $V_n$ be representations of $\mg$. So $U(\mg)^{\otimes n}$ acts on $V_1 \otimes V_2 \otimes \cdots \otimes V_n$. Let $\Omega \in U(\mg) \otimes U(\mg)$ be the Casimir. (Note: The element I learned to call the Casimir was a central element $c$ in $U(g)$. 
In terms of that element, $\Omega = \Delta(c) - c \otimes 1 - 1 \otimes c$.) Let $\Omega_{ij}$ be $\Omega$ acting in positions $i$ and $j$. For generic parameters $z_1$, ..., $z_n$, define $H_i = \sum_{j \neq i} \Omega_{ij}/(z_i-z_j)$. Then, as I understand it, the KZ equation is $(\partial_i + H_i) v(z_1, \ldots, z_n)=0$, where $v$ is a function valued in $V_1 \otimes V_2 \otimes \cdots \otimes V_n$. The $H_i$'s obey both $(\ast)$ and $(\dagger)$ (a nice exercise). And people seem to be very interested in solving both $(\ast \ast)$ "diagonalizing the action of the Gaudin subalgebra" and $(\dagger \dagger)$ "solving the KZ equation". So I was hoping to understand how they relate, and why.
Usually one considers the limit of "connections" as "h" goes to zero (geometric optics or short-wave asymptotics). So sections of the flat connection are constructed as series whose first term is made of eigenvectors. Technically, you put a constant "k" in front of d/dt, and as k goes to zero you can forget about d/dt. This is "cortical level" in the KZ story. – Alexander Chervov Apr 25 '12 at 5:06
"Critical level" – Alexander Chervov Apr 25 '12 at 5:07
Thanks! But I'm pretty sure that the interesting stuff in, for example, 1102.5368, 0910.4690 or 1004.3253 is all happening without sending $\hbar$ to $0$. – David Speyer Apr 25 '12 at 5:22
You are welcome. I pretty much agree that there is much interesting stuff around KZ and the Gaudin model, but still, when I was working on this I did not see a way to construct solutions of KZ from the Gaudin Hamiltonians that was somewhat "nice"/"explicit", except one very strange case which we discuss at page 15, section "4.1.1 Application to the Knizhnik-Zamolodchikov equation". – Alexander Chervov Apr 25 '12 at 6:20
I looked at the papers you mention; still, I did not see an explicit relation between what you ask and what is discussed there. Maybe I was not looking carefully enough. 
– Alexander Chervov Apr 25 '12 at 6:21 Hi David, I think there is indeed a relation, which I learned precisely from papers of Varchenko among others. All of this is rather classical and can be found e.g. in the Etingof-Frenkel-Kirillov book "Lectures on Representation Theory and Knizhnik-Zamolodchikov Equations". The fact that the $H_i$ satisfy this stronger condition is equivalent to saying that for any parameter $\kappa$ the operators $\kappa \partial_i+H_i$ also satisfy ($\dagger$). Hence you can take an asymptotic expansion of solutions as $\kappa \rightarrow 0$ on some neighbourhood $D$ of some $z_0$, of the form $$e^{S(z)/\kappa} (f_0(z)+O(\kappa))$$ where $S$ is a scalar-valued function. Then you can show that, assuming the $H_i(z)$ are simultaneously diagonalizable, $f_0$ is a common eigenvector of them, with eigenvalues $\partial_i S$. Conversely, given a common eigenvector at some $z_0$ you can construct an asymptotic solution. So the usual trick, widely used in the study of the KZ equation, is to also take some asymptotic limit w.r.t. the variables $z_i$ in such a way that eigenvectors are "easy" to find. The standard example in the KZ case is the asymptotic zone $$|z_i-z_1| \ll |z_j -z_1|\quad \text{if}\quad i < j $$ for which, up to some change of variable, the equation can be written $$\kappa \partial_i f= \left ( \Omega_i/u_i +reg\right)f\quad i=1\dots n-1$$ where $\Omega_i=\sum_{k < i} \Omega_{k,i+1}$ and $reg$ is regular at $u=0$. Then given some common eigenvector $v$ of the $\Omega_i$ with eigenvalues $\mu_i$ there exists a unique solution of the form $$(\prod u_i^{\mu_i/\kappa})(v+r(u))$$ where $r(u)$ is regular at $u=0$ and $r(0)=0$.
I'm not very familiar with D-modules (and by the way I would be happy if someone expands on this), but you can rephrase it as follows: viewing $\kappa$ as a formal variable leads to a filtration on the algebra of differential operators on $V$ (the vector space acted on by the $H_i$), which in turn is nothing but the usual filtration by the degree of differential operators. Taking the associated graded turns the equation into $(\xi_i + H_i)f = 0$, where $\xi_i$ is the symbol of $\kappa\partial_i$, whose solutions are clearly common eigenvectors of the $H_i$. So I'm rather confident that you can say that the spectrum of the $H_i$ over all common eigenvectors is the characteristic variety of the D-module of solutions of the differential equation you started with.
I can't comment on the case of several operators $H_i$, but for a single operator, the eigenvector equation $(**)$ $$ H(τ) \psi_τ = λ(τ) \psi_τ $$ and the time-dependent Schrödinger equation $(\dagger\dagger)$ $$ (i\frac{\partial}{\partial t} - H(t)) ψ(t) = 0 $$ are related by the adiabatic theorem. Not sure if that's what you are looking for, but I would be very surprised if your setting didn't have a similar intuition. Essentially, the idea of the adiabatic theorem is the following: the eigenvector equation describes, for each parameter $\tau$, an instantaneous eigenvector $\psi_{\tau}$. This gives a solution $\psi_{\tau}(t) = e^{-itλ(τ)} ψ_τ$ to the "instantaneous" Schrödinger equation $$ (i\frac{\partial}{\partial t} - H(τ)) ψ_τ(t) = 0 $$ where the Hamiltonian $H$ is considered at a fixed time $τ$. Now, if the Hamiltonian $H(t)$ varies very "slowly" in time, then it is reasonable to expect that the full Schrödinger equation will essentially follow the solutions to the "instantaneous" Schrödinger equation(s). First it evolves like a solution of the instantaneous equation with $H(0)$, then for $H(\Delta t)$ a small time step after, and so on.
This can be made precise by rescaling time to $τ=t/T$ and obtaining an asymptotic expansion $$ ψ(t) = e^{-i∫λ(τ)dt} ψ_τ + \mathcal O(1/T) $$ in the limit $T\to ∞$ and in the $L^2$ sense. More details can be found wherever you can find details about the adiabatic theorem.
Without further assumptions, it does not seem that much can be said. Consider the case $k=1$. You are asking for a connection between the eigenvalue problem for $H(t)$ and the equation $dv/dt+Hv=0$. But time-dependent linear ODE systems cannot in general be related to the eigenvalue problem. On the other hand, the condition $\partial H_i/\partial t_j=\partial H_j/\partial t_i$ implies that $H_i=\partial K/\partial t_i$ for some $K$. Let us now strengthen your assumptions and assume that the $H_i$ commute not only with each other, but also with $K$. Then solutions of ($\dagger\dagger$) can be written as $\exp(-K(t))w$ for fixed $w$, and solutions of ($**$) can be written as $u=\partial v/\partial t_i$, where $v$ is an eigenfunction of $K$.
To expand on Greg's answer regarding the adiabatic theorem: you are looking for situations where the adiabatic evolution is exact. This is the case for a Hamiltonian of the form $H = i\left[\frac{\partial P}{\partial t},P\right]$ where $P$ is a projector onto your chosen instantaneous eigenstate. This comes from T. Kato, J. Phys. Soc. Jpn. 5, 435 (1950).
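Greg's adiabatic statement is easy to check numerically: evolve $i\,d\psi/dt = H(t/T)\,\psi$ for a slowly varying $2\times 2$ Hamiltonian and compare the result with the instantaneous ground state. A small numpy sketch (my own toy Hamiltonian, not taken from any of the answers above):

```python
import numpy as np

def H(tau):
    # 2x2 Hamiltonian interpolating between sigma_z-like endpoints,
    # with an off-diagonal term keeping the spectral gap open.
    return np.array([[1.0 - 2.0 * tau, 0.5],
                     [0.5, 2.0 * tau - 1.0]])

T = 200.0                 # total evolution time; "slow" means T large
n_steps = 20000
dt = T / n_steps

# Start in the instantaneous ground state at tau = 0.
_, V0 = np.linalg.eigh(H(0.0))
psi = V0[:, 0].astype(complex)

for k in range(n_steps):
    tau = (k + 0.5) * dt / T          # midpoint rule in rescaled time
    A = np.eye(2) + 0.5j * dt * H(tau)
    B = np.eye(2) - 0.5j * dt * H(tau)
    psi = np.linalg.solve(A, B @ psi)  # Crank-Nicolson: unitary step

# Overlap with the instantaneous ground state at tau = 1.
_, V1 = np.linalg.eigh(H(1.0))
overlap = abs(V1[:, 0].conj() @ psi)
print(overlap)
```

For $T = 200$ the final overlap is very close to 1, and it degrades as $T$ is reduced, consistent with the $\mathcal O(1/T)$ estimate quoted above.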
The Time Dependent Schrödinger Equation and the Nature of its Solution San José State University Thayer Watkins Silicon Valley & Tornado Alley The time dependent Schrödinger equation for a system is ih(∂ψ/∂t) = H^ψ where H^ is the operator derived from the Hamiltonian function for the system. The symbol h here denotes Planck's constant divided by 2π, t is time and i denotes the square root of negative one. The variable ψ is called the wave function and its nature is in dispute. The Correspondence Principle Niels Bohr nearly a century ago observed that classical analysis for many areas of physics had been empirically verified. Therefore the appropriate extension of any quantum mechanical analysis to the realm of classical analysis should agree with the classical analysis. In atomic physics the extension is in terms of scale and/or the level of energy. In radiation physics the extension is the limit as Planck's constant h goes to zero. In statistical mechanics the limit as the number of molecules increases without bound should agree with thermodynamics. Thus in order for a quantum mechanical analysis to be valid it must obey the Correspondence Principle. For an example of an analysis involving the time dependent Schrödinger equation consider a particle traveling freely in 3D space. The solution to the time dependent Schrödinger equation has the particle spread uniformly over an infinite plane perpendicular to its direction of motion. Not only is this an unacceptable probability distribution, but also it does not satisfy the Correspondence Principle. Nothing in the solution gives an asymptotic approach to the concentration of the particle in a limited volume of space, as in the classical analysis. A Particle in a Central Potential Field Let V(r) be the potential energy of a particle as a function of its distance r from the center of the potential field.
The Hamiltonian function for this system is H = p²/(2m) + V(r) where m is the mass of the particle and p is its momentum. In polar coordinates (r, θ) p² = m²(dr/dt)² + m²r²(dθ/dt)² According to the rules formulated by Schrödinger p² is replaced in the Hamiltonian function by −h²∇² to obtain the Hamiltonian operator for the system. The time-dependent Schrödinger equation for the system is ih(∂ψ/∂t) = −(h²/(2m))∇²ψ + V(r)ψ At this point in quantum analysis it is customary to apply the separation of variables technique. This technique is not innocuous and could preclude finding the physically relevant solutions to the equation. It is worth applying if for no reason other than to gain some insights into the physical system. Furthermore this technique is valid for some simple, symmetric cases. The separation of variables technique assumes that ψ(r, θ, t) = S(r, θ)T(t). Thus the Schrödinger equation becomes ihS(r, θ)T'(t) = −(h²/(2m))(∇²S(r, θ))T(t) + V(r)S(r, θ)T(t) and, upon division by ST ihT'(t)/T = −(h²/(2m))(∇²S(r, θ))/S + V(r) The LHS of the above equation is a function only of t, the RHS a function only of r and θ. Therefore the common value of the LHS and RHS must be a constant, say E. This means ihT'(t)/T = E and thus T'(t)/T = −i(E/h), so T(t) = T(0)exp(−i(E/h)t) This is an oscillatory solution whose magnitude is constant. It is also true that −(h²/(2m))(∇²S(r, θ))/S + V(r) = E or, equivalently −(h²/(2m))∇²S(r, θ) + V(r)S = ES This is the time-independent Schrödinger equation. It is known from previous studies that the solution to this equation implies a probability density function that is inversely proportional to velocity and thus proportional to the time spent in a state.
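The separation-of-variables result can be verified numerically: if S is an eigenfunction of the time-independent equation with energy E, then evolving it with the full propagator reproduces exactly the factor T(t) = T(0)exp(−i(E/h)t), and the probability density does not change in time. A short sketch with a finite-difference particle-in-a-box Hamiltonian (illustrative only, in units where ℏ = m = 1; not part of the original page):

```python
import numpy as np

# Finite-difference Hamiltonian for a particle in a 1D box:
# H = -(1/2) d^2/dx^2 with psi = 0 at the walls.
N, L = 200, 1.0
dx = L / (N + 1)
H = (np.diag(np.full(N, 1.0 / dx**2))
     - np.diag(np.full(N - 1, 0.5 / dx**2), 1)
     - np.diag(np.full(N - 1, 0.5 / dx**2), -1))

E, V = np.linalg.eigh(H)
psi0 = V[:, 0].astype(complex)     # ground state, energy E[0]

# Evolve psi0 with the FULL propagator exp(-iHt), then compare with
# the separated form S(x)T(t) = psi0 * exp(-i E t).
t = 3.7
psi_full = (V * np.exp(-1j * E * t)) @ (V.T @ psi0)
psi_sep = psi0 * np.exp(-1j * E[0] * t)

print(np.allclose(psi_full, psi_sep))            # same state
print(np.allclose(np.abs(psi_full)**2,
                  np.abs(psi0)**2))              # static |psi|^2
```

The second check is the "oscillatory solution whose magnitude is constant" statement above: for an energy eigenstate only the phase evolves.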
The above procedure could just as well have been carried out for a system with generalized coordinates {q1, …, qn}. The time-dependent Schrödinger equation is then ih(∂ψ/∂t) = H^ψ If ψ is assumed to be of the form Ψ(q1, …, qn)T(t) then ihΨT'(t) = T(t)(H^Ψ) and thus ihT'(t)/T = (H^Ψ)/Ψ Again the LHS is a function only of t and the RHS only of the generalized coordinates. The common value of the two sides must be a constant, say E. Thus, as before, T(t) = T(0)exp(−i(E/h)t), and Ψ satisfies the time-independent Schrödinger equation H^Ψ = EΨ The Special Case of ∂ψ/∂t Being Equal to Zero This corresponds to a particle with no motion and thus the probability density function of its location is a Dirac delta function. The valid solutions to the time dependent Schrödinger equation correspond to probability density functions which are the proportions of the time spent in the allowable states of the system.
Friday, April 28, 2006 Leaving Cert messing up our schools: the failing standard of Honours Maths Education in Ireland is becoming a Machiavellian activity. In a piece in today’s Irish Independent about the falling standards in Honours Maths this line jumped out at me: "There is also a decline in the capacity of candidates to engage with problems that are not of a well-rehearsed type." Now I will admit I am a bit of a Maths nerd, and to me the time-independent Schrödinger equation is quite beautiful. However this rant is not merely about how vitally important maths is, but about how the education system, and in particular grind schools, are messing up the system. The Leaving Cert is not a test of skill; it is a test of memory. Also the points system is not a measure of intelligence but of popularity. This is a concept that not many people get. Indeed Physics is one of the lowest points courses in the CAO system. But the reason is not that it is easy but that it is perceived to be very hard. Because the Leaving Cert is a system of recurring questions and patterns, people predict the papers and prepare for the system. This has been championed by the grind schools preparing model answers that the students regurgitate on the day. And who can blame the kids for doing this? They are trying to get the best possible outcome, and by learning rather than understanding they will achieve the best results. But this should not be the job of the schools: the aim of the schools is to educate children, not merely teach them. You can teach the method of how to do a sum, but you have to educate them as to why the method works. Due to the high pressure on students to produce high points they don't care to understand. The schools are relenting to the pressure and teaching the kids, not educating them. This is leading to the above quote. The exam papers nowadays ask questions that do not challenge anyone. Students learn off the question and plant it down on the paper without thinking.
While this might seem harmless to some, it is disastrous to the country. If people coming out of the schools do not understand what they have learnt, then this country's prospects are pretty dim. My solution: change the format of the papers to remove their predictability. The papers are far too formulaic and leave themselves open to prediction and prepared answers. Change the format so kids have to understand the material, not learn it off. It will be better for them in the long run. Kevin Breathnach said... Changing the format, away from the formulaic style in existence, would be brilliant; not only because students would understand rather than regurgitate, but because it would, I think, relieve students of unnecessary pressure and many hours of often pointless study. Which is exactly what I'd like most. However, I suppose that by forcing students to learn a lot of things off, the CAO and the respective colleges get a fair idea as to how seriously a prospective student will take their studies come September. Kevin Breathnach said... However, I don't think there is anything inherently wrong with grinds schools. At least, not much in this sense. I attended a few week-long grinds over the Easter period, and while I received a few past paper solutions, the best thing I took from it was an understanding of photosynthesis and biochemistry - which, with my normal teacher, I couldn't get my head around. Of course, that is not to say that others gained an understanding, rather than a few solutions! winds said... And yet, despite that, standards in higher level maths are falling? Even with the syllabus having been "streamlined" or "simplified" on at least one occasion in the past 10 or 15 years? The overwhelming impression I have been getting regarding education in this country is not that it is not challenging (that's an excuse) or that it leads to a rote mentality (that's an excuse as well) but that it is being perceived as a consumer item.
That's why you have sixteen-year-olds believing that they should drive what is being taught to them, rather than learning what is given to them. It's also seen as a currency - with which you buy your way into university. It's not the education system itself which is doing this though - it is those going through the system. Personally - I'm going to come across all old here - I think the Leaving Cert has been dumbed down a little bit. I'm not in tune with the idea of projects, because there are serious issues in the UK regarding who is actually doing the project and how much input responsible adults are having into continuous assessment projects there. Possibly one way to get people to refocus on what an education system is all about is to limit the number of third level places - there seem to be a thousand different colleges in the country now. The issues with education in this country are not, to my mind, limited to the Leaving Certificate - but to the mentality that you don't do what interests you, you do what will earn you loads of money. This is where the problem lies with physics, medicine and law. This attitude is killing young minds, I think. But then, possibly, so too are games consoles. I think I'll go back to pre-aging and wittering on about the youth of today. Kevin Breathnach said... From what I can gather, it certainly has been dumbed down. The one case I know of is English. Ten or twenty years ago, apparently, poetry or drama questions would zone in very specifically on one aspect of a poet. Today, most questions on studied material will be vague - like, "Give a speech to 5th year students on the poetry of Thomas Hardy" or some such. Clearly, it encourages students to learn off one generic essay, which - with little effort - they can twist to suit the question. My English teacher spins this adaptation, saying that it rewards not only the bright students, but the hard working students. Take what you will from that.
Concerning languages, I'm not sure what level of fluency was needed in times gone by. Nowadays though, I could learn off three different letters and five 90-word essays and still be fairly certain of a high B in the written exam. Equally, most people treat the oral exam as little more than a recital. I'll sign off here, for I must ponder the merits of limiting third-level places - which currently stand at 40,000 per year. Simon said... Limiting the places will do little for the education system, nor will it do anything for the country. I know people who came into college with high points and barely passed the course. I also know people who came in with few points and came out with top marks. Limiting places is going to leave the high-points, low-degree person in and the low-points, high-degree person out. That is simply not going to work. I think the points system should be weighted, i.e. if you get an A in physics and an A in English and apply for physics, you should get 200 points for Physics and 50 for English. The people with high points often get high points by picking the easier subjects like ag science and geography. That needs to be changed. By the way, hope the study is going well Kevin. winds said... Okay, if there are just 60,000 people taking the Leaving Certificate and there are 40,000 third level places...I'd like to know what the 40,000 consists of. Does it include apprenticeships, for example? If it doesn't, I have got to say that that seems to be excessive. I agree about the weighting - I don't think it's a bad idea per se. I'd also like to see interviews required for medicine, nursing and law. I think it's already required for teaching (certainly at postgrad level I think anyway). But that's just me. Simon said... I would not agree with interviews. The greatest thing about our points system is its anonymity. Interviews open the situation up to too much "pull". Ireland is a small place; it would never be fair. I have no problem with the number going in.
As long as the exam standard is kept the same year on year, it makes no difference. If there are 60,000 people in one year who are deserving of a degree, they should get their degrees. Anonymous said... The weighting is a great idea alright. Splitting the LC into 2 parts seems sensible as well, and that's going to happen very soon. Also, I think there are actually going to be interviews for medicine (and lower points requirements) brought in soon. winds said... I'm just not sure at the moment that the country's interests are served by people with high points doing medicine who are not really interested in it, likewise law. At least if you interview, you stand some chance of identifying the ones who are actually interested in the subject, rather than just the potential financial returns. The problem with the points system is not so much that it's anonymous any more. It is, however, weighted in such a way that the amount of money you fling at your secondary education - for example via grinds schools - may place certain elements of society at an advantage. That isn't, strictly speaking, fair either. Simon said... In fairness winds there is little you can do about that. Would you suggest giving extra points for being poor? Changing the papers away from a formulaic pattern to a truly challenging paper might help. Frank said... How you feel about mathematical equations is similar to how I feel about nicely drafted statutes - beautiful constructions. anthony c said... I think that the article, while eloquent and insightful, seems to play down student motivation to learn. For example it's safe to say that the problems listed exist, but it's equally safe to say that there is a fair contingent of students out there that genuinely want to learn! What are their perspectives? I don't agree with weighting certain subjects like the sciences and mathematics. I see this as a disincentive to learning by reinforcing the need to take subjects for the points gained.
An argument well documented in this forum. Overall, from my own experience with transition year students, the key issue with learning anything at school is the ease of getting a job with that subject. Law, Forensics, Medicine and IT are very popular right now because they're perceived as 'safe' jobs, and they probably are. Perhaps if there was more transparency in a subject about possible employment, students might adopt the subject more readily. It's just a theory.
I'm having trouble understanding what exactly "information" is in the context of the holographic principle suggested by string theory. Can it be equated to a matrix of ones and zeros? Does this information have its own laws of physics, or are all laws of physics in our universe a result of this information - as in, does information supersede everything? closed as off-topic by Rob Jeffries, TildalWave, Mitch Goshorn, HDE 226868, Stan Liou Dec 31 '14 at 0:55 I would say it is everything you need to know if you wanted to reconstruct the universe exactly as it is. –  harogaston Jul 9 '14 at 3:48 This question appears to be off-topic because it is possibly about Physics and should be migrated to Physics SE –  Rob Jeffries Dec 26 '14 at 11:13 @RobJeffries I don't know what the scope of this site is (the help center makes no distinction between "physics" and "astrophysics" and I've never really understood the difference between the two), nor do I care if this question is migrated, but the holographic principle has applications in astrophysics and cosmology in terms of understanding both black hole physics (arguably where it was first formulated) and large-scale cosmology (where it's still in development). It's of great interest in astrophysics and cosmology; in fact, I just finished reading a paper on AdS/CFT by two astrophysicists. –  Logan Maingi Dec 27 '14 at 0:07 @LoganMaingi Fair comment, but there would almost certainly be more interest (and more interest in your thoughtfully written answer) on Physics SE. –  Rob Jeffries Dec 27 '14 at 0:17 @RobJeffries The question is more than 60 days old. At this point it can not be migrated even by moderators (I forgot this rule until just now). If it's completely off-topic here feel free to close it, but even if it's only marginally on-topic it's stuck here and can't be moved to Physics SE.
–  Logan Maingi Dec 27 '14 at 10:07 1 Answer The answer here is deceptively simple, and has very little to do with gravity directly. You only really need to know a bit of quantum mechanics, and the answer comes almost for free. In any quantum mechanical theory, we have a Hilbert space $\mathcal H$. (Actually, we really want a rigged Hilbert space, but this distinction isn't particularly relevant here.) This can be as simple as the space of a single q-bit, or it can be as complicated as you need it to be (e.g. the Fock spaces which arise in quantum field theory). The Hilbert space describes all the possible configurations of any system described by this theory; an individual vector is a specific configuration. We also have a linear operator $H$ on $\mathcal H$, called the Hamiltonian, which describes the dynamics of such a system via the Schrödinger equation. The "information" is just knowing what state $| \psi \rangle \in \mathcal H$ your system is in. This state vector tells you the result of every possible measurement, and thus contains all the information of your system. If you really wanted to, it's possible to express any such state uniquely as a sequence of zeros and ones, but that's a very classical way of thinking, and we're dealing with quantum information, which means that the fundamental objects aren't zeros and ones, but state vectors. So, when we talk about holography, what we're really saying is that we can determine the state $| \psi \rangle$ that our system (e.g. the universe) is in simply from knowing the results of experiments we perform on the boundary of the spacetime (e.g. infinitely far away). The holographic principle alone doesn't say how this reconstruction works, only that it is possible. Since this is a bit broad, it may be helpful to see how it works in the only really well-understood example, the AdS/CFT correspondence.
In this case, we have a theory of quantum gravity in $d+1$ dimensions with Hilbert space $\mathcal H$ and Hamiltonian $H$, constrained to have spatially-asymptotically AdS$_{d+1}$ metric (AdS is just a special, maximally symmetric solution to the vacuum Einstein field equations which has nice properties that make this work). It turns out that, at least morally speaking (there is ongoing work to understand the full extent of this), when we collect all the observables in this theory and shuffle them around in well-defined ways, we can construct out of them vectors in a different Hilbert space $\mathcal H'$. That is, we have a linear map $T: \mathcal H \rightarrow \mathcal H'$. This map is 1-to-1, meaning that we don't lose information. Strictly speaking, it doesn't need to be onto, but this distinction isn't crucial at first pass, so you can think of $T$ as a 1-to-1 correspondence (i.e. a linear isomorphism) if you like. In addition, the Hamiltonian $H$ can be mapped to $H'$, which describes compatible dynamics to $H$ on $\mathcal H'$ (in the sense that one can first evolve in the original theory and then map to the new one, or first map and then evolve, and the results will be the same). When we look at the pair $\mathcal H', H'$, we recognize this not as a $d+1$-dimensional theory of quantum gravity, but as a $d$-dimensional theory without gravity, but with extra symmetries that turn it into a so-called "conformal field theory" (which come from the extra symmetries of the AdS spacetime). In some sense, this new theory can be thought of as living on the boundary of AdS. So when we say that all the information is contained on the boundary, we mean that given a state $| \psi' \rangle \in \mathcal H'$ which describes the boundary theory observables, we can reconstruct the full state in the quantum gravity theory simply by $T^{-1}(|\psi'\rangle)$. 
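The compatibility condition above ("one can first evolve in the original theory and then map, or first map and then evolve") is just the intertwining property $T e^{-iHt} = e^{-iH't} T$, which is easy to check numerically for a toy pair $H$, $H' = T H T^{-1}$. A numpy sketch (purely linear-algebraic; nothing holographic about it, and the matrices are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# A Hermitian Hamiltonian H on the "bulk" Hilbert space.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = A + A.conj().T

# An invertible linear map T (a stand-in for the bulk-to-boundary
# dictionary) and the transported Hamiltonian H' = T H T^{-1}.
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Hp = T @ H @ np.linalg.inv(T)

def evolve(Ham, t, v):
    # Apply exp(-i Ham t) to v via eigendecomposition
    # (valid for any diagonalizable Ham).
    w, V = np.linalg.eig(Ham)
    return V @ (np.exp(-1j * w * t) * np.linalg.solve(V, v))

psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
t = 0.8

evolve_then_map = T @ evolve(H, t, psi)   # evolve in the bulk, then map
map_then_evolve = evolve(Hp, t, T @ psi)  # map, then evolve on the boundary

print(np.allclose(evolve_then_map, map_then_evolve))
```

The two orders agree because $e^{-iH't} = T e^{-iHt} T^{-1}$ whenever $H' = T H T^{-1}$, which is the linear-algebra skeleton of the "compatible dynamics" statement.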
What I've given here is just the 2-minute summary; AdS/CFT is still an active area of research and everything I said above is only approximately and/or morally true. You might wonder why we need to rely on quantum mechanics. It turns out that (at least in the AdS/CFT correspondence) we can't do it classically. Highly quantum mechanical behavior in one theory is recovered by the classical limit of the other, and vice-versa. This is both a blessing and a curse, but at any rate there's no real classical equivalent. This is suspected to be true in any nontrivial example of the holographic principle, so we're pretty stuck describing things in terms of quantum information.
Schrödinger Approach and Density Gradient Model for Quantum Effects Modeling A. Ferron1, B. Cottle2, G. Curatola3, G. Fiori3, E. Guichard1 1 Silvaco Data Systems, 55 rue Blaise Pascal, 38330 Montbonnot Saint-Martin, France 2 Silvaco International, 4701 Patrick Henry Dr., Santa Clara, CA 95054, USA 3 University of Pisa, Via Diotisalvi 2, I-56122, Pisa, Italy We describe here two approaches to model the quantum effects that can no longer be neglected in present and future devices. These models are the Schrödinger-Poisson and Density-Gradient methods, fully integrated in the device simulator ATLAS. Simulations based on these methods are compared to each other on electron concentration profiles and C-V curves in a MOS capacitor. Advanced silicon technology tends towards ever thinner gate oxides and shorter gates, resulting in significant quantum effects. The most relevant effect is the confinement of the carriers. For instance, in a Metal-Oxide-Semiconductor capacitor C-V characteristic, the threshold voltage is shifted and the apparent oxide thickness is increased compared to the C-V characteristic expected with a semi-classical approach. To model this confinement accurately in a device simulator based on a drift-diffusion approach, two methods are treated in this paper. The first one, and the most accurate, is to include the Schrödinger equation in a self-consistent computation with the Poisson equation. Unfortunately this solution, due to its non-locality, has a significant numerical cost and cannot be efficiently coupled with the continuity equations giving the current flow in practical applications. All the same, this method is used in 1D as a reference: the C-V characteristic and the carrier density profiles are useful to validate simpler methods. Different simpler methods compatible with the drift-diffusion approach have been developed [1, 2]. In this paper we describe a density gradient model which introduces a quantum potential correction in the continuity equations.
In the following, we present first the Schrödinger-Poisson model, then the density gradient model, and a comparison of the two. Schrödinger-Poisson Model (S-P) The confinement effect appears in very thin oxide devices, where the potential barrier at the SiO2/Si interface is larger and deeper than in a thick oxide device. This quantum confinement is well described by solving the single particle Schrödinger equation. Solved self-consistently with the Poisson equation, it provides the eigenvalues and eigenvectors along the three directions of the k-space. Considering ml, mt1 and mt2 the electron longitudinal effective mass and the electron transverse effective masses respectively, the electron density is a sum over the subbands of the form n(x) = (4πkBT/h²) Σi [ √(mt1·mt2) ψli²(x) ln(1 + exp((EF − Eli)/kBT)) + 2√(ml·mt1) ψti²(x) ln(1 + exp((EF − Eti)/kBT)) ] where x is the position along a vertical slice (normal to the gate oxide), ψli, Eli (resp. ψti, Eti) are the i-th longitudinal (resp. transverse) eigenvector and eigenvalue, kB is the Boltzmann constant, T is the temperature, h is the Planck constant and EF is the Fermi level. For the holes, a similar expression is obtained with the light and heavy hole effective masses. For a 2D device, the S-P equation is solved along a set of 1D parallel slices under the gate. At the ends of each slice an infinite potential is set as a boundary condition. As this assumption is unphysical at the SiO2/Si interface, the S-P model has been designed to include the gate oxide in the solver so that the eigenvectors, and thus the carriers, can penetrate into the oxide. In the silicon oxide, effective masses for electrons and holes have been defined, with values 0.3 and 1.0, respectively. A full description of this S-P model is presented in [3] along with the works presented in [4, 5]. To illustrate this model, we define a MOS capacitor with a 1e18 cm-3 p-type doped substrate and a 3 nm gate oxide thickness. In inversion mode (Vgate=1.0 V), Figure 1 shows the 5 first longitudinal and transverse eigenvectors (ml=0.98, mt1=mt2=0.19 have been set).
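The confinement mechanism behind these results can be sketched outside of any device simulator with a few lines of finite-difference code: solving the 1D effective-mass Schrödinger equation in a triangular well with a hard wall at x = 0 already pushes the ground-subband charge peak away from the interface. The field value and geometry below are illustrative only, not the device of this paper:

```python
import numpy as np

# 1D effective-mass Schrodinger equation in a triangular well
# V(x) = q*F*x, with an infinite barrier at the interface (x = 0).
hbar, q, m0 = 1.054e-34, 1.602e-19, 9.109e-31
m = 0.98 * m0            # longitudinal effective mass ml
F = 1e8                  # assumed surface field, V/m (1 MV/cm)

N, L = 500, 20e-9
dx = L / (N + 1)
x = np.linspace(dx, L - dx, N)

kin = hbar**2 / (2.0 * m * dx**2)
H = (np.diag(2.0 * kin + q * F * x)
     - np.diag(np.full(N - 1, kin), 1)
     - np.diag(np.full(N - 1, kin), -1))

E, psi = np.linalg.eigh(H)

# The eigenfunction vanishes at x = 0, so |psi|^2 peaks inside the
# silicon rather than at the interface: the confinement effect.
rho = psi[:, 0] ** 2
print("E0 =", E[0] / q, "eV; peak at", x[np.argmax(rho)] * 1e9, "nm")
```

For this field the ground-subband energy should land close to the analytic Airy-function value 2.338·(ℏ²/2m)^(1/3)(qF)^(2/3), with the charge peak roughly a nanometre inside the silicon.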
The corresponding electron concentration is depicted in Figure 2 and compared with a semi-classical profile. It shows that the peak in the quantum simulation is no longer at the interface (x=0 coordinate), as it is in the semi-classical simulation. The quantum confinement is correctly modeled. Figure 1a. 5 first longitudinal wave functions. Figure 1b. 5 first transverse wave functions. Figure 2. Semi-classical (dotted line) and quantum (solid line) electron concentration in log scale. Density Gradient Model (DG) The density gradient method is an approach compatible with the drift-diffusion treatment used in device simulators. Different methods have been proposed [6-8]; we present here one of these models. It applies a quantum potential correction Λ in the density current expression, which (if Boltzmann statistics is assumed) takes the form Jn = qµn·n·∇φn with n = nie·exp(q(ψ + Λ − φn)/kBT) and Λ = (γℏ²/(6qm))·∇²√n/√n where µn is the electron mobility, ψ is the electrostatic potential, φn is the electron quasi-Fermi potential, nie is the intrinsic carrier concentration, m is the electron effective mass, and γ is a fit factor. The γ factor has been introduced to adjust the quantum correction, which has been obtained after a few simplifications. Discussions about its introduction can be found in [7-9]. In this way it accounts for the fact that only one mass is used in the DG model whereas three are used in the S-P model. It could also be adjusted depending on the temperature of operation and the device (bulk, SOI, double gate). Concerning the boundary conditions, they are the same as in a semi-classical scheme. The only additional boundary condition is that at contacts the quantum correction is zero. This model is compared to the S-P model in Figure 3. The same device as described in section 2 has been used; the γ factor has been set to 3.6 (its default value, as indicated in [8]) and to 3.4, which better fits the S-P electron profile. The electron concentration is displayed on a linear scale and the x=0 coordinate corresponds to the interface. Figure 4 is a zoom around the peak, and it shows a difference between S-P and DG with γ=3.4 of less than 1% at the peak.
It confirms that the DG model is suitable to capture quantum effects. Figure 3. S-P (solid line) and DG (dashed and dotted lines) electron profiles. Figure 4. Electron profiles, zoom of Figure 3 around the peak; S-P in solid line, DG with γ=3.4 in dashed line and DG with γ=3.6 in dotted line. Then, for each approach (semi-classical, Schrödinger-Poisson and density gradient), we display in Figure 5 the C-V characteristics. The device used is the same as described in section 2 and γ=3.4 has been set for the DG model. Figure 5. C-V curves, semi-classical in dashed line, S-P in dotted line and DG in solid line. We clearly note the shift of the threshold voltage near 0.5 volt and the quantum reduction of the capacitance in inversion mode (Vg > 0.5 V). The difference observed between the S-P approach and the DG model in strong accumulation is explained by the fact that the charge is treated in a fully quantum scheme in the S-P solver whereas a part of the charge should be treated semi-classically. However, this small error is not really important: the more strongly doped the substrate, the less the carriers are confined [9]; moreover, an actual MOSFET operates in inversion mode, and Figure 5 shows the very good agreement between the DG model and the S-P approach in this case. We have presented the different approaches to model quantum confinement in MOSFETs implemented in the commercial device simulator ATLAS. The Schrödinger-Poisson model is suitable for any kind of 1D or 2D device (with planar or non-planar gate oxide) in which quantum effects are important and with bias conditions not too far from equilibrium (for instance, a small bias on the drain can be applied). This solver has been developed in collaboration with the University of Pisa, and has shown excellent agreement with their in-house code.
Then a density gradient model has been described and its results, based on carriers' profiles and C-V curves, have proven its capability to correctly model the quantum confinement with an adjustment of the γ factor. 1. W. Hänsch et al., “Carrier transport near the Si/SiO2 interface of a MOSFET”, Solid-State Electron., vol. 32, p. 839, 1989. 2. M. J. van Dort et al., “A simple model for quantization effects in heavily-doped silicon MOSFET's at inversion conditions”, Solid-State Electron., vol. 37, p. 411, 1994. 3. Simulation Standard, Volume 12, Number 11, November 2002, http://www.silvaco.com 4. S. Gennai, G. Iannaccone, “Detailed calculation of the vertical electric field in thin oxide MOSFETs”, Electronics Letters, vol. 35, p. 1881, 1999. 5. G. Iannaccone, F. Crupi, B. Neri, S. Lombardo, “Suppressed shot noise in trap-assisted tunneling of metal-oxide capacitors”, Appl. Phys. Lett., vol. 77, pp. 2876-2878, 2000. 6. M. G. Ancona, H. F. Tiersten, “Macroscopic physics of the silicon inversion layer”, Physical Review B, vol. 35, no. 15, pp. 7959-7965, 1987. 7. M. G. Ancona, “Density-gradient theory analysis of electron distributions in heterostructures”, Superlattices and Microstructures, vol. 7, no. 2, 1990. 8. A. Wettstein et al., “Quantum Device-Simulation with the Density-Gradient Model on Unstructured Grids”, IEEE Transactions on Electron Devices, vol. 48, no. 2, February 2001. 9. G. Chindalore et al., “An experimental study of the effect of quantization on the effective electrical oxide thickness in MOS electron and hole accumulation layers in heavily doped Si”, IEEE Transactions on Electron Devices, vol. 47, no. 3, March 2000.
Thinking like a theorist: Are complex numbers real? UPDATE: My answer to this problem. Which, at the end of the day, isn't really an answer at all. As part of our explorations of "why this math" for aspects of physics, I pose an obvious and seemingly simple question: which type of numbers is required to describe the world around us? We use different types of numbers in mathematics. For example, we have the integers, the rational numbers, the real numbers, and the complex numbers. Furthermore, there is a distinct hierarchy among number types - the hierarchy for the four types above is: \(integers \subset rational~numbers \subset real~numbers \subset complex~numbers\). There are additional number types such as the hypercomplex numbers (which are fascinating if you've never run across them). Physics, however, should only need a certain type of number to describe the universe, and which type should be dictated by nature. Which number type is required for our current physical theories, and why? A couple of comments for this question: • Arguing that one number type is simply more convenient for describing phenomena is not a valid argument, as we are looking for what is required. • On its surface this seems like a very straightforward question, but I warn everyone that it is actually rather subtle. • There are different answers to this question, depending on what you believe about measurement and/or the fundamental structure of our world. Hence I hope everyone will provide a number of interesting viewpoints. Finally, as always - only if enough interest and discussion is shown by the members of our community will I post my own answer. I and your peers on Brilliant want to hear everyone's thoughts! Note by David Mattingly · 3 years, 6 months ago You asked us what number type is required for our current physical theories.
Well, I can name certain theories where complex numbers are a necessity. Quantum Mechanics is such an example. It seems that you can't even approach Quantum Mechanics without introducing complex numbers. In QM the probability of something happening is the square of the magnitude (absolute value) of the “probability amplitude”. This probability amplitude can be complex valued. For example, if the probability amplitude is \(\frac{i}{\sqrt2}\), then the probability is \(|{\frac{i}{\sqrt2}}|^2=\frac{1}{2}\). In fact, the Schrödinger equation, which is arguably the backbone of Quantum Mechanics, has an '\(i\)' right at the beginning. You can't escape it! The equation for a single particle looks something like this: \(i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{r},\,t)=-\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{r},\,t)+V(\mathbf{r})\Psi(\mathbf{r},\,t)\) [It took me a really long time to render this in LaTeX!]. I don't know much about Quantum Mechanics and most of what I wrote here is copied and pasted from things I found on the internet. So, I will stop talking about Quantum Mechanics now. A classmate of mine once asked me, "Why are we studying complex numbers anyway? They don't exist after all! So why make a fuss about things that don't even exist?" In reply, I asked him another question: "What is the physical significance of multiplying something by \(-1\)? Multiplication by \(-1\) rotates something by \(180\) degrees. If something is heading north at a velocity of \(1 ms^{-1}\), then it is heading south at a velocity of \(-1ms^{-1}\). What if we could find a number that would rotate something by \(90\) degrees? Assume that we have such a number. What would happen if we multiplied something by this number twice? It would rotate that thing by \(90+90=180\) degrees. This is remarkable! Because that's the very thing multiplying by \(-1\) does.
In other words, \(\text{something} \times \text{new number} \times \text{new number} = \text{something} \times (-1)\), or in other words, \(\text{new number}^2=-1\). And this is how the imaginary unit \(i\) is defined. Just because you can't count \(i\) chickens doesn't mean complex numbers are any less real. By this definition, even negative numbers don't exist (you can't actually count \(-5\) chickens)." My answer was able to convince my classmate. Another point: we know that the distance between two points in Euclidean space is \(\sqrt{\bigtriangleup x^2+\bigtriangleup y^2+\bigtriangleup z^2}\) [this is just the Pythagorean theorem]. When relativity came along, we realized that space and time were very closely related. And the distance [this is also known as the space-time interval] between two points (events) in space-time is: \(\sqrt{\bigtriangleup x^2+\bigtriangleup y^2+\bigtriangleup z^2 -c^2\bigtriangleup t^2}\). What does that have to do with complex numbers? Watch again: the formula is \(\sqrt{\bigtriangleup x^2+\bigtriangleup y^2+\bigtriangleup z^2 +(ic\bigtriangleup t)^2}\). So \(i\) also creeps up here! So physical theories need numbers and sometimes those numbers happen to be complex. But as you have said, there are different answers to this question and a lot of people will come up with different viewpoints. In my opinion, in order to understand and describe everything [I'm putting a lot of stress on this] in this universe, complex numbers are necessary. Look at the size of this comment! I think I'll stop now. EDIT: there have been a couple of comments recently that say that complex numbers are not actually necessary for describing the universe. They are absolutely right! But this raises another question: do we even need numbers to describe the universe around us? Any physical theory is a model of the universe. Numbers are a tool to describe a theory. They are not a property of the theory itself.
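The two claims above — that multiplying by \(i\) is a quarter-turn and that probabilities are squared magnitudes of amplitudes — are easy to check numerically; a small sketch:

```python
import cmath

# Multiplying by i rotates a complex number by 90 degrees:
# doing it twice is the same as multiplying by -1.
z = 1 + 0j                   # "heading north"
assert 1j * (1j * z) == -z   # two quarter-turns make a half-turn

print(cmath.phase(1j * z))   # pi/2: one multiplication advances the angle 90 degrees

# A probability amplitude of i/sqrt(2) gives probability |i/sqrt(2)|^2 = 1/2.
amp = 1j / cmath.sqrt(2)
print(abs(amp) ** 2)         # 0.5, up to floating-point rounding
```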
Numbers have properties and we use those properties as a tool to try to describe a theory that in turn describes the universe to some extent. We can have theories that don't use numbers at all! There are certain things that we observe in the universe and we try to capture those things with numbers and end up using those numbers in theories. Complex numbers exist in nature in the same way other numbers exist. They have certain properties that we can experience, perceive and observe. I tried to demonstrate that with the example of rotation. I tried to illustrate current physical theories that use the properties of complex numbers to describe natural phenomena without getting too philosophical about it. The universe doesn't care if we use numbers to understand it. Numbers are merely a tool for us. I understand that this has gotten a little bit more philosophical than I would have wanted. So, I'm stopping here. Mursalin Habib · 3 years, 6 months ago Log in to reply @Mursalin Habib Although I agree with Mursalin and have nothing to add about QM, I do object to his observation that \(i\) also creeps up in special relativity when considering the interval ("distance") between two points in spacetime. Unfortunately, properly explaining why this interpretation is not suitable requires quite a lengthy explanation. First off, vectors with complex components are used in QM, for instance. The norm squared of such a vector is (considered to be) equal to the inner product with its conjugate rather than itself, so even with complex components you still get something strictly positive. The four-vector as used in special (and general) relativity is not a proper vector in the mathematical sense, or at least not in the normal four-dimensional vector space.
The scalar product of two four-vectors is defined as \( \mathbf{A} \cdot \mathbf{B} = -A_0 B_0 + A_1 B_1 + A_2 B_2 + A_3 B_3 = \sum_{\mu,\nu=0}^{3} \eta_{\mu\nu} A^\mu B^\nu \), where \( \eta \) is the Minkowski (or flat-space) metric tensor with \( \eta_{00} = -1, \eta_{11} = \eta_{22} = \eta_{33} = 1 \) and all other components zero. It should be noted that sometimes the scalar product (and hence \( \eta \)) is defined with the exact opposite sign; there is no ironclad convention for this. Also, typically the summation sign is omitted; summation is implicit whenever an index is repeated (that is ironclad). At this point I cannot resist demonstrating what makes four-vectors and their special scalar product so useful. The fundamental principle (or postulate, if you like) behind (special) relativity is that it does not matter what (inertial) frame of reference you use to describe something. This is nicely reflected in this scalar product: the scalar product of two four-vectors is invariant with respect to a change of reference frame. That goes for the norm of the spacetime four-vector \( (ct, x, y, z) \), \( \eta_{\mu\nu} x^\mu x^\nu = -c^2 t^2 + x^2 + y^2 + z^2 \), which gives you the spacetime interval. But it also works for other four-vectors such as the energy-momentum four-vector \( (E/c, p_x, p_y, p_z) \), for which we have \( \eta_{\mu\nu} p^\mu p^\nu = -E^2/c^2 + p_x^2 + p_y^2 + p_z^2 = -m^2 c^2 \), where \( m \) is the invariant rest mass. You might recognize the above formula for the special case \( \vec{p} = 0 \), but this is the more general form. Going back to the definition of the scalar product, it may seem that using \( \eta \) is just overly complicated; why introduce a 16-component tensor for a sum of four products? And why not use imaginary components to deal with the minus sign?
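The invariance claim can be checked numerically; a small sketch using the \((-,+,+,+)\) signature defined above, units where \(c=1\), and an arbitrary boost velocity and four-vector:

```python
import numpy as np

# Sketch: the Minkowski scalar product eta_{mu nu} A^mu B^nu is unchanged by
# a Lorentz boost.  Signature (-,+,+,+), units with c = 1; the boost velocity
# and the four-vector components are arbitrary illustrations.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def boost_x(v):
    """Lorentz boost along the x-axis with velocity v (|v| < 1)."""
    g = 1.0 / np.sqrt(1.0 - v * v)   # Lorentz factor
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * v
    return L

def dot(a, b):
    """eta_{mu nu} a^mu b^nu."""
    return a @ eta @ b

a = np.array([2.0, 1.0, 0.5, -0.3])   # some four-vector (ct, x, y, z)
L = boost_x(0.6)
print(np.isclose(dot(a, a), dot(L @ a, L @ a)))   # True: the interval is invariant
```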
I suppose the most convincing answer (which is what I have been working towards) is that, when you go to general relativity, the scalar product generalizes to \( \mathbf{A} \cdot \mathbf{B} = g_{\mu\nu} A^\mu B^\nu \), where \( g \) is still real-valued but may now have nonzero off-diagonal components. Finally! Sorry for the long post. Thomas Beuman · 3 years, 6 months ago Log in to reply @Mursalin Habib I agree with you. It's like an algebra that one uses to describe a system; the same system can be described by different algebras, but that doesn't mean the algebraic model exists physically. Just as we have different tools like matrix systems, complex numbers and their properties are used to describe systems; it has nothing to do with physical existence. Greendragons X · 3 years, 6 months ago Log in to reply Hey all. Sorry I haven't been able to monitor this discussion as closely as I'd like. We got hit by lightning over the weekend and it took out our internet connection. However, it's a great discussion. I'm not going to reply to everyone's comments as I'll post my overall take on this question on Wednesday (after I consult Mariam B's tarot cards). The comments on QM are right in a certain way of looking at it, and the comments about deriving complex numbers from simpler number types are also right. I'll try to distill these viewpoints into my own response. There is a physics question besides QM involved too, that no one has incorporated yet. I'll toss this question into the ring as well: is there a fundamental limit to how much information one can squeeze into a certain region of space and time? If yes, then our universe, which is finite in extent, could be built out of a large but finite number of these small regions. How would this change the required number system? Note, this goes a bit beyond the limitation "current physical theories", which was a deliberate choice. David Mattingly Staff · 3 years, 6 months ago Log in to reply I like this question!
Some top answers have focused on the fact that complex numbers are required because basic equations describing quantum mechanics rely on complex numbers. Others have countered that only natural numbers are required because from them we can construct a system that mimics the behavior of, e.g., complex numbers using only the naturals. As others have pointed out, the natural-numbers purists are kind of cheating - if you construct a system that behaves like complex numbers, it doesn't matter that you have avoided using the symbol 'i'; you are still using complex numbers. Why not go even further and claim that we don't need numbers at all, just the concept and axioms of a set (from which we can reconstruct all math)... On the other hand, I don't think that those on the complex numbers side of the debate go far enough. Let's examine the meaning of the phrase "a number system". A number system is a collection of items (which we will call numbers) and relations and operations (different ways to compare or combine these numbers). Numbers are useful because we can talk about them in the abstract, replace an actual number with an unknown variable, and we know how the operations will be evaluated when we assign any specific value to the variable. The problem with this abstraction is that sometimes you can write down unsolvable questions. For example, if your number system is the natural numbers, you can write down 4+x=3, but now there is no number that makes the equation true unless we introduce the integers. Similarly, if we want to solve all polynomial equations, we must expand our number set to include the rationals, reals, and complex numbers. The thing is, as soon as we have introduced 0, 1, the operations of addition and multiplication, and, crucially, the idea that we want to be able to 'undo' those operations (that is, to be able to 'do algebra'), we have magically already created all the complex numbers. What about vectors?
Surely in a multi-dimensional world we need vectors to describe things. Vectors appear to be just a list of numbers, but they have different operations. For example, Maxwell's laws require the use of the cross product. We need matrices to deal with vectors. And while a matrix is just an array of numbers, and the matrix operations can be described as a sequence of steps using basic number operations, the overall algebra of matrices has a different structure to that of 'numbers'. So I believe that we have to include matrices on our list of required number systems for the same reason that we have to include complex numbers. It is a false cop-out to claim that because we can describe a more complex system using the symbols of a simpler system, we don't really need the complex system at all. Are there other number systems needed? Surely. I don't pretend to know any details about cutting-edge particle physics theory, but I know that the ways that these particles interact can be quite complicated, and cannot be modeled within the algebra of basic numbers. On the very cutting edge of what might not even be real physics after all, string theorists make predictions about reality using even more exotic algebraic structures. At the root, numbers are an abstraction that allows us to make predictions about the universe. For any complicated way that objects in the universe can interact, we need a number system that incorporates and models that kind of interaction. As our knowledge of physics is incomplete, I'm sure that there is no final answer to the types of number systems that we need to describe the world around us.
Colin Hinde · 3 years, 6 months ago Log in to reply I find that many of the people here that argue for the coarser left side of David M.'s inclusion chain above argue something like this: "You don't need complex numbers because you can construct those from real numbers by algebraic closure", or like this: "You don't need real numbers because you can construct those from the rationals using Dedekind cuts or Cauchy sequences". If that is the case, then you're still using those numbers; you've just created them anew. Just call a spade a spade and admit it. Also, I think a few interesting options are left out: the algebraic numbers, the Gaussian integers and \(\mathbb Q(i)\). For the current quantum theory, I believe we need a continuum of values (or, at least, something dense in the continuum) that multiply without changing modulus. Thus we need values from all over the complex unit circle. Also, all rational numbers should be present, since ratios are a thing. So, at least all complex numbers with rational polar coordinates (with argument a rational multiple of \(\pi\)), extended to a field (again, since ratios are a thing, I think we need to have a field). That would be \(\mathbb Q(i)\) extended with all possible values of \(\sin (q\,\pi),\; q\in \mathbb Q\). Some of these might be transcendental. I can't tell this late at night. Do we really need \(\pi\) or \(e\)? How about all the other transcendental numbers? I don't know. I am not certain enough to give an answer, but I have told you what I believe to be a minimum of numbers needed. Arthur Mårtensson · 3 years, 6 months ago Log in to reply I was quite impressed by Mursalin's answer to the question. I know some basics about quantum mechanics, so I will try to give the reasoning here: the Schrödinger equation gives the probability of finding an electron in a given region of space. The probability is the squared magnitude of the wavefunction, and the wavefunction itself is a complex quantity.
In fact, even Newtonian mechanics can be recovered from the Schrödinger equation by assigning appropriate values and parameters to the equation. So this clearly shows that this equation is a much more basic way of describing the universe than Newton's laws themselves. So the answer comes here: everything in this universe is described by its wavefunction, and this wavefunction is a complex quantity. Hence complex numbers are the more basic and essential number type required for describing the universe around us. I don't know anything about hypercomplex numbers, but as per my knowledge complex numbers may satisfy the requirements. Siddharth Kumar · 3 years, 6 months ago Log in to reply I'd like to make another pitch for an answer that seems mostly neglected on this thread, which is the answer of INTEGERS! One of the first issues with this answer is obviously the irrational numbers. However, I think that there is an easy way around this. Consider that no measurement is exact, that any measurement has a finite number of significant digits. Then, any measurement we could ever take has a finite number of decimal places, and can then be expressed as a fraction of two integers. In other words, it should never be necessary to use every infinite digit of pi to describe the circumference of a circle, or any other physical quantity. Only in pure math would you require these irrational numbers. I am not far enough along in physics to be able to speak confidently about the issues with these claims in quantum mechanics, nor the role or necessity of imaginary numbers in quantum mechanics, so I will leave that alone. But perhaps there is a similar argument to be made there. Mark Brown · 3 years, 6 months ago Log in to reply @Mark Brown I like it! Any applied science needs only enough digits to have the calculations turn out accurately enough. Bob Krueger · 3 years, 6 months ago Log in to reply Mursalin gave an answer which I think everybody is liking.
But honestly speaking, I don't think that it is right. If you ask what type of numbers is required to describe the universe around us, then my answer would be natural numbers! Yes, it looks very stupid, but let me explain. First of all, let me make clear that it is not true that quantum mechanics is impossible to formulate without complex numbers. This is a very famous misconception. Let us ask: if nobody had thought about complex numbers before the advent of quantum mechanics, wouldn't scientists still have solved the puzzles of blackbody radiation or the photoelectric effect? Actually, quantum mechanics can be constructed without using complex numbers; we just need to write equations for 2 variables instead of 1, with proper constraints on them. A complex number contains two such quantities (magnitude and phase), and so if we use complex numbers then we need to write only a single equation. Thus, mathematically, quantum mechanics or LCR circuit equations become easier if we use complex numbers. Now what about 0, negative integers, rational numbers and irrational numbers? It is well known that all these quantities can be constructed mathematically. I think everybody knows about the constructions of 0 and the negative integers: they are constructed as solutions to certain equations. Last of all, the irrational numbers are constructed using the well-known procedures of Dedekind cuts or nested intervals (you may read the Wikipedia articles about these). Now the next question: suppose we don't construct any of these and only use natural numbers. Then is it possible to write, say, the theory of general relativity or quantum mechanics? And the answer is YES! The only problem would be that it would be too complicated, since each variable would need a lot of description! Maybe Kronecker had realized this long back when he said "God made the natural numbers; all else is the work of man." People who don't like my answer may also want to read about "Preintuitionism" on Wikipedia.
Snehal Shekatkar · 3 years, 6 months ago Log in to reply @Snehal Shekatkar Though members of this thread have successfully acknowledged that the entities that represent rationals and irrationals are completely built from natural numbers (e.g. equivalence classes of ordered pairs for fractions and sequences for irrationals), one significant component of the construction of real numbers has been ignored: the operations on these new structures, addition and multiplication. At each stage of construction, one must redefine addition and multiplication for ordered pairs, sequences, etc. For example, if the ordered pairs (a,b) and (c,d) are fractions, we must define their sum by the rule (a,b) + (c,d) = (ad + bc, bd). So even if we only use ordered pairs and sequences of ordered pairs of natural numbers, we have to provide new axioms that define the structure of the real numbers. So even employing this construction, one is still using a system isomorphic to the real numbers, equipped with the entire structure of the continuum. Thus if physics uses any idea of the continuum, it uses real numbers, no matter how they are disguised. Louis Esser · 3 years, 6 months ago Log in to reply @Snehal Shekatkar I disagree on one point: can all irrational numbers be constructed from natural numbers? How would you construct \(\pi\) or \(e\)? Aahitagni Mukherjee · 3 years, 6 months ago Log in to reply @Aahitagni Mukherjee Well, one possible way is this: \(\pi=4\left(1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\frac{1}{9}- \cdots\right)\) and \(e=1+\frac{1}{1!}+\frac{1}{2!}+\frac{1}{3!}+ \cdots\) Mursalin Habib · 3 years, 6 months ago Log in to reply @Mursalin Habib How do you know that the L.H.S. is equal to the R.H.S. here? You first need to construct an object which we call an irrational number, and only then can you think about representing it using an infinite series. This is not the correct way. You can either use nested intervals or Dedekind cuts.
Snehal Shekatkar · 3 years, 6 months ago Log in to reply @Snehal Shekatkar Mursalin's way is obviously not a feasible construction; it would require an infinite number of terms to be calculated. Moreover, there are irrational numbers which can't be represented by such explicit series. If I've understood the method of Dedekind cuts correctly, it requires you to construct a set of rational numbers where each element is less than the number to be constructed. My point is, in both of the above methods, we are talking about infinite sequences, and thus they are not applicable in actual physics calculations, no matter how complicated a calculation or analysis we are ready to undertake. But this has gotten really interesting. I read that there are many irrational numbers which can't be explicitly represented (by some root, ratio, or series). Do these numbers have any significance in physics? Aahitagni Mukherjee · 3 years, 6 months ago Log in to reply @Snehal Shekatkar The first one comes from the obvious fact that \(\tan\frac{\pi}{4}=1\) and the Taylor expansion of \(\tan^{-1}x\). The second one is a direct implementation of the Taylor expansion of \(e^x\). I'm trying to express them just from their definitions and without using any idea of where they might be on the number line. But I think we're deviating away from the topic a little bit.... Mursalin Habib · 3 years, 6 months ago Log in to reply @Mursalin Habib Dear Mursalin, I think you are not getting my point. Maybe this is because you haven't studied analysis yet. Read first what I said and then you will understand. Snehal Shekatkar · 3 years, 6 months ago Log in to reply @Aahitagni Mukherjee Read about Dedekind cuts on Wikipedia or in any book on analysis. Snehal Shekatkar · 3 years, 6 months ago Log in to reply @Snehal Shekatkar What you've said is absolutely right! But that wasn't what I was going for. See the EDIT part of my comment.
Mursalin Habib · 3 years, 6 months ago Log in to reply I think that complex numbers are necessary, because they make a complete system: every polynomial equation can be solved within them. This is important to mathematics (I don't know about QM and that stuff). And for all those people who say "everything can be constructed by integers", that doesn't matter. For example, some people said that fractions are simply integers over integers. Yes, but they aren't INTEGERS, they're RATIONAL NUMBERS. If you need to construct something to use it, face it, you need it. Now real numbers. I'm sure we all love calculating out infinite fraction sequences (not really), but for all purposes, it's better to use the irrational numbers. Complex numbers are similarly useful, even if they can't describe actual amounts. If you only count amounts, then sure, natural numbers are fine. But the universe is obviously more complex than that. And as shown dozens of times, physical equations use complex numbers. So I think that complex numbers are necessary, if not in a too obvious way, to our physical system. Bob Bob · 3 years, 6 months ago Log in to reply Interesting, I find this kind of discussion fascinating, and would really like to see what David's own answer is... One method of reasoning is that all rational numbers can be expressed using integers - simply use fractions... Real numbers also tend to be expressible using integers - Taylor expansions etc. - although I do not know if it would be possible to express all real numbers in this way... maybe not... but the ones necessary for current physical theories, quite possibly. Complex numbers such as 'i' can just be expressed as 'root(-1)', using integers and common notation.
The possible flaw in this method is that although you can express these using integers, overall what you have expressed isn't actually an integer - if you replace i with 'root(-1)', surely you're still using complex numbers - I guess it just depends on how you like to think about things. If you ignore this flaw and continue using this reductionist method, you can actually dispose of "numbers" altogether, and instead use symbols and notation - sort of like 'typographical number theory'; it is probably possible to express current physical theories using pure logic. It would be horrific - but I think it's possible. On the topic of going beyond current physical theories: if we assume that the universe is finite, and that in a certain region there can only be a certain finite amount of information, I think that this information can be expressed/approximated in lots of different ways, to varying degrees of accuracy - different constructs are required for different approximations. I think if there were an ultimate fundamental "theory of the universe", it is likely to be simple in nature, in that it shouldn't require as a necessity certain man-made constructs (which I believe numbers essentially are), but would have complex implications - think fractals and chaos theory. However, I also feel that such a theory would be impossible... because surely any such system must be either incomplete or inconsistent, according to Gödel. Whether this is applicable to 'the universe', or what implications it would have if it were, is beyond me. Whichever which-way you look at it, the "fundamental structure of our world" seems to be so far beyond human understanding that all we can do is approximate using crude models based on artificial constructs which seem to agree with our observations, which are also crude. Ben Blayney · 3 years, 6 months ago Log in to reply @Ben Blayney Fascinating view. Bob Krueger · 3 years, 6 months ago Log in to reply I have another question.... Are quaternions real?
Taehyung Kim · 3 years, 6 months ago Log in to reply All number systems are essentially mathematical constructs to describe space, e.g. 1-D space can be described by a real number, 2-D space by a complex number or an ordered pair of two real numbers, 3-D space by vectors, and 4-D by quaternions, etc. Sunny Prajapati · 3 years, 6 months ago Log in to reply Really, all we ever need is \(1\), \(-1\), addition, and multiplication. Because with these few tools and a little algebra, you can construct all other numbers. However, I did realize that transcendental numbers aren't the easiest to construct with this system, so we should also include helpful numbers like \(\pi\), \(e\), or anything else like that. (Here is a more detailed explanation of this construction system and transcendental numbers) Whereas real and complex numbers are convenient for most physical applications, they are not necessary. Furthermore, if you consider \(1\) and \(-1\) as "real," then undoubtedly, all these other numbers are real. Bob Krueger · 3 years, 6 months ago Log in to reply I think it should be complex numbers. Though I have not seen them (purely complex numbers) being used so often, I know one field where they are used. My teacher showed me how to use them in solving parallel LCR circuits. I don't know whether I am going as per the requirements of the discussion. Nishant Sharma · 3 years, 6 months ago Log in to reply @Nishant Sharma Aah, so this is a good example of ease of description. Is it necessary to have complex numbers for LCR circuits, or is it merely a convenient description because it makes the math easier? David Mattingly Staff · 3 years, 6 months ago Log in to reply @David Mattingly From what I can tell, in classical wave analysis, complex numbers are introduced as a way to simplify representations of wave equations, using Euler's formula.
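Euler's formula, and the complex-impedance shortcut it enables for a series LCR circuit, can be illustrated numerically; the component values, drive amplitude and frequency below are arbitrary:

```python
import cmath

# Euler's formula: e^{i theta} = cos(theta) + i sin(theta).
theta = 0.7
assert abs(cmath.exp(1j * theta)
           - (cmath.cos(theta) + 1j * cmath.sin(theta))) < 1e-12

# With a drive V0 e^{i w t}, the steady-state current follows from one
# complex division instead of solving the second-order ODE directly.
R, L, C = 100.0, 0.5, 1e-6            # ohms, henries, farads (arbitrary)
w = 2.0 * cmath.pi * 50.0             # 50 Hz angular frequency
Z = R + 1j * (w * L - 1.0 / (w * C))  # series impedance
V0 = 10.0                             # volts
I = V0 / Z                            # complex current phasor
print(abs(I), cmath.phase(I))         # amplitude and phase shift of the current
```

Taking the real part of \(I e^{iwt}\) at the end recovers the physically measured current, which is the sense in which the imaginary parts "disappear" from the final answer.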
In the example of LCR circuits, one sets up the differential equation describing the system; an oversimplified way of seeing how complex numbers relate to this is to show that the solution of the differential equation may take the form \(e^{i(\omega t-c)}\), where \(\omega\) is angular frequency and \(c\) is phase shift. This is convenient, as it represents the wave equations involving sine and cosine, so we know intuitively that we're on the right track. When you plug it all back into the original differential equation and solve with initial conditions and such, we find that, indeed, all the imaginary terms from the \(e^{ix}\) term disappear by the time we arrive at the final solution. Thus, it becomes apparent that the complex numbers were only introduced to help solve the equation, and are devoid of physical meaning. Quantum mechanics is a different story, however... John X · 3 years, 6 months ago Log in to reply @David Mattingly Well, I think most things can be simplified using one or more approaches. It depends on the effectiveness (I mean accuracy) of the technique being employed. Nishant Sharma · 3 years, 6 months ago Log in to reply @David Mattingly It makes the math easier, I guess. I remember that while learning LCR circuits, I would end up with a second-order differential equation, and solving them isn't included in high school mathematics. Instead, we were introduced to phasors and used vectors(?) to solve the problems. Pranav Arora · 3 years, 6 months ago Log in to reply I like the book by Paul J. Nahin, "An Imaginary Tale: The Story of i" (you can see it on Amazon) :) Virgilius Teodorescu · 3 years, 5 months ago Log in to reply First of all, focusing on what numbers are?
Guys, they are nothing except something used as a magnitude to describe the physical quantities of the universe. And according to model-dependent realism, they occur in our brain. Numbers are only in our brain and are created from logic. I don't know the numbers which will describe the universe, but I do know the last and first numbers of this number system - they are 0 and infinity... and guys, they are the same. Nabasindhu Das · 3 years, 6 months ago Log in to reply Seems like this answer has not come up: real numbers. Complex numbers can actually be represented as ordered pairs, with the operations defined to work just like the ones on complex numbers. I do not have much time to build this up, but I believe it is possible. In my opinion, complex numbers are just a tool for convenience. Other things can replace them. It is obvious that rational numbers are needed, otherwise how are you going to evaluate \(P=\frac{F}{A}\) (just an example)? So we are left with why we need irrational numbers. Again I think this is obvious, as many constants are irrational and we need them. \(\pi\) is significant enough. Well, I think this is not the answer, just my opinion, so I hope someone out there can tell me my misconceptions, mistakes, gaps or whatsoever. Yong See Foo · 3 years, 6 months ago Log in to reply Complex numbers are imaginary. You can't visualize a complex number. Take for example: you can count \(10\) chickens, but you cannot count \(-10\) or \(i\) chickens. Imagining a complex number is not possible unless it is graphed on the complex plane. Even then, it is just a series of dots. A complex number is not something anyone can really imagine. We can have a cube of volume \(10\), but what about a cube of volume \(i\)? However, in complex numbers of the form \(a+bi\), we can imagine the real part, \(a\), but what is \(bi\) to the human eye? It is just some imaginary space that cannot be defined unless you use math.
A classmate at my school once asked me, "Why do we need complex or imaginary numbers?" I gave him a simple answer in two parts. First, I asked him to calculate \(\sqrt{-1}\); of course, he calculated it as \(i\), and I told him that we use it for things like that. But then, I asked him to imagine a space of volume \(i\), and he could not do it. He then realized that our imaginary numbers are a whole different realm of units. They are uncountable. You won't sit in school and have a teacher say, "Okay kids, let's count the number of chickens: \(i, 2i, 3i, 4i\dots\)" Rather, the teacher would say that you cannot count imaginary numbers. Quantum Mechanics is another example. You won't see a QM equation without imaginary numbers. Who would ever believe that we could square a number and get a negative result? But what do we really need in order to count? To be able to define everything in our universe. Natural numbers are not enough. Real numbers are not enough. Complex numbers cannot be visualized. What is our world coming to? Fiyi Adebekun · 3 years, 6 months ago Log in to reply I think complex numbers should be used, though we could use only the reals if the tasks where complex numbers were used are associated with pairs of real numbers. Ahmed Taha · 3 years, 6 months ago Log in to reply I think COMPLEX NUMBERS are required to describe the whole world around us. The reason being: complex numbers exist in conjugate pairs. In the same way, in physics, forces exist in conjugate pairs (a fundamental principle of classical mechanics: Newton's 3rd law). We can't move an inch without forces. So forces are there in every fundamental structure of the world and exist in pairs (analogous to complex numbers). Secondly, in physics we use vectors everywhere, which are analogous to complex numbers. Thirdly, we can't compare complex numbers. We can't say whether \(3 + i\) or \(4 - i\) is greater, can we? In the same way, in physics we can't compare different measuring units.
For example, we can't compare kilograms and pascals, etc. (somewhat analogous). (I think my 3rd reason makes no sense, but I am just trying to prove it.) Fourthly, maths is the mother of all sciences. It is required in every walk of life and physics. In physics we use mathematical equations constantly. There is always a tendency to get a purely complex number as a root of an equation. And there are infinitely many pure complex numbers and infinitely many real numbers (or integers, etc.). So the probability of getting a pure complex number is equal to the probability of getting a real number. So we can say complex numbers are used in almost every equation of maths (as integers ⊂ rational numbers ⊂ real numbers ⊂ complex numbers). Combining the above four conditions, I can say that COMPLEX NUMBERS are required to describe the whole world around us. Vinay Pandey · 3 years, 6 months ago Log in to reply I think that the whole process of classifying number types has progressed from simple systems such as the integers to intricate ones like the complex numbers or hypercomplex numbers (quaternions, tessarines, coquaternions, etc.). Now one can question what the need for newer (or more complex) number systems would be, and the answer lies in the fact that these new number systems provide the appropriate mathematical formalism to realize ideas and theories. Why are they appropriate? Because if any other number system were used to formalize a theory, it would be insufficient to describe the nuances of that theory. This brings us to the question of why advanced mathematics like Lie algebras, group theory, etc. is needed to do advanced physics. One would not get far if one continued to abandon complex numbers in doing Quantum Mechanics (as someone rightly pointed out).
Coming back to number systems, the process by which new number systems were developed tells us an important fact: as we made progress with the advent of complex numbers and hypercomplex numbers, our understanding of the universe improved. Does this not hint at the possibility of an all-encompassing number system which would qualify as the singular type of system needed for all physical theories? Definitely this number system can't be the integers, real numbers, or even complex numbers, because these prove to be parts of a bigger, better system for theories. Taking a crude example, we can always use complex numbers instead of real numbers or integers, but does it serve any purpose to use complex numbers in simple arithmetic? The point here is that these are all subsets of what is required. Striving for a better, bigger, "all-encompassing" and universal number system is what would be required for our current physical theories. And complex numbers are in a sense a subset of the hypercomplex numbers. So "currently" my answer would be hypercomplex numbers. And yeah, I could be totally wrong, but this is what I feel should be. Hope I have conveyed my idea properly. Rahul Mehra · 3 years, 6 months ago Log in to reply I think complex numbers must be the numbers used for describing the universe around us. Siddharth Kumar · 3 years, 6 months ago Log in to reply @Siddharth Kumar Can you give a reason? David Mattingly Staff · 3 years, 6 months ago Log in to reply @David Mattingly Yes sir, in real-life situations like free fall under all the effects of nature (drag, viscosity, etc.), the energy dissipated by the falling object is in the form of complex-number magnitudes. Chitres Guria · 3 years, 6 months ago Log in to reply I think all we need is real numbers. Other things are all structures constructed from real numbers. A complex number is only an ordered pair of real numbers.
An n-tuple of real numbers constitutes an n-dimensional vector, a matrix of real numbers is an n cross n tensor, and so on. If necessary, we can also add another dimension and get a 3-dimensional analogue of a tensor, which will have n cross n cross n elements. So I think faith in 'reality' should be restored :) Aahitagni Mukherjee · 3 years, 6 months ago Log in to reply hmmm Christian Baldo · 3 years, 6 months ago Log in to reply [Deleted for irrelevance- Peter] Mariam Baurice · 3 years, 6 months ago Log in to reply @Mariam Baurice Internet preachers are everywhere! Unbelievable! Mursalin Habib · 3 years, 6 months ago Log in to reply @Mariam Baurice What does this have to do with the original question?? LOL Snehal Shekatkar · 3 years, 6 months ago Log in to reply @Mariam Baurice !!!!??? Heli Trivedi · 3 years, 6 months ago Log in to reply @Mariam Baurice We are interested in your 'situation'... please tell us about it Aahitagni Mukherjee · 3 years, 6 months ago Log in to reply
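Several commenters above claim that a complex number is "only an ordered pair of real numbers" with suitable operations. As a minimal sketch of that claim (the function names here are our own, purely illustrative), the pair arithmetic can be written out with no imaginary unit anywhere:

```python
# Complex numbers as ordered pairs of reals: multiplication is defined so that
# (0, 1) * (0, 1) = (-1, 0), i.e. "i squared is -1", without any imaginary unit.
def cadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def cmul(a, b):
    return (a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0])

i = (0.0, 1.0)
assert cmul(i, i) == (-1.0, 0.0)         # i * i = -1
assert cmul((3, 1), (4, -1)) == (13, 1)  # (3 + i)(4 - i) = 13 + i
assert cadd((3, 1), (4, -1)) == (7, 0)   # (3 + i) + (4 - i) = 7
```

Nothing here is more than pairs of reals and two rules, which is the sense in which the commenters say complex numbers are a convenience rather than a necessity.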
Consider a free particle with a Gaussian wavefunction, $$\psi(x)~=~\left(\frac{a}{\pi}\right)^{1/4}e^{-\frac12a x^2},$$ find $\psi(x,t)$. The wavefunction is already normalized, so the next thing to find is the coefficient expansion function ($\theta(k)$), where: $$\theta(k)=\int_{-\infty}^{\infty} \psi(x)e^{-ikx} \,dx.$$ But this equation seems to be impossible to solve without the error function (as Maple 16 tells me). Is there any trick to solve this? I am a bit confused, why are you trying to find $\psi (k)$? Or as you write it, $\theta (k)$? – DJBunk Sep 20 '12 at 23:06 Can you put what you ran through Maple 16? – Magpie Apr 8 '13 at 1:38 Your question seems rather confused. • First you ask for the time evolution of the wavefunction. For this you will need to use the Schrödinger equation $i \partial \psi/\partial t= \hat H \psi $ and thus will need to know the Hamiltonian ($\hat H$). • Second you seem to want to work out the Fourier transform of the wavefunction. This will not give you the wavefunction as a function of time but will give you the wavefunction in momentum space. The integral you want to calculate is the Fourier transform of a Gaussian, which is itself a Gaussian: $$\int_{-\infty}^{\infty} e^{-ax^2/2}e^{-i k x} \, dx \\ = \int_{-\infty}^{\infty} e^{-ax^2/2}\left(\cos{kx} - i \sin{kx} \right) \, dx .$$ The second term in the above integral is odd, so it will give zero. The first term is a known integral and gives $$=\sqrt{\frac{2\pi}{a}} e^{-k^2/2 a} , $$ a Gaussian as promised, with width inversely proportional to that of the original. I am pretty certain Maple should also be able to calculate the integral for you as it is written in my first line (Mathematica can), so I imagine you are just not entering it correctly. Edit: Apologies for the first comment above.
I had not seen that you had written this was for a free particle, so indeed you know the Hamiltonian: the potential is $V(x,t)=0$, and so from Schrödinger's equation we know the time evolution of the energy eigenstates is $\psi(x,t)=e^{-i \omega t}\psi(x)$. For the free particle we have $\omega=k^2/2m$, and so you know the time evolution of the Fourier transform. So taking the Fourier transform given above, applying the time evolution, and transforming back to position space we have $$\psi(x,t)=\int_{-\infty}^{\infty} e^{-k^2/2 a}e^{-i\omega t}e^{ikx} \, dk \\ =\int_{-\infty}^{\infty} e^{-\frac{k^2}{2 a}(1+iat/m)}e^{ikx}\, dk \\ \sim e^{-\frac12 \frac{x^2}{1/a+it/m}}$$ as @Ron pointed out in his comment. This shows how the wavepacket spreads out with time. The Fourier transform evolves by simple phases, and a reverse Fourier transform gives the time evolution, which is a spreading Gaussian, so that $a$ gets replaced everywhere by ${1\over {(1/a)+it}}$ – Ron Maimon Sep 21 '12 at 6:48 Oh yeah, hadn't seen the part saying this was for a free particle (doh!). Have added an edit to the answer to complete it. Thanks for pointing that out. – Mistake Ink Sep 21 '12 at 13:27
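The transform pair used in this answer, $\int_{-\infty}^{\infty} e^{-ax^2/2}\cos(kx)\,dx=\sqrt{2\pi/a}\,e^{-k^2/2a}$, is easy to sanity-check numerically. A rough sketch with crude midpoint quadrature (the values of a and k below are arbitrary test choices, not from the question):

```python
import math

a, k = 1.3, 0.7  # arbitrary test values

def ft_gaussian(a, k, L=12.0, n=20000):
    # Midpoint-rule quadrature; the Gaussian has decayed to nothing by |x| = L.
    h = 2.0 * L / n
    total = 0.0
    for i in range(n):
        x = -L + (i + 0.5) * h
        # The odd sin(kx) part integrates to zero, so only cos(kx) is kept.
        total += math.exp(-a * x * x / 2.0) * math.cos(k * x)
    return total * h

exact = math.sqrt(2.0 * math.pi / a) * math.exp(-k * k / (2.0 * a))
assert abs(ft_gaussian(a, k) - exact) < 1e-6
```

The same check with k = 0 recovers the plain Gaussian integral $\sqrt{2\pi/a}$, confirming the normalization constant.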
Lambert W function From Wikipedia, the free encyclopedia (Redirected from Lambert's W function) The graph of W(x) for W > −4 and x < 6. The upper branch with W ≥ −1 is the function W0 (principal branch); the lower branch with W ≤ −1 is the function W−1. In mathematics, the Lambert W function, also called the omega function or product logarithm, is a set of functions, namely the branches of the inverse relation of the function f(z) = ze^z where e^z is the exponential function and z is any complex number. In other words z = f^{-1}(ze^{z}) = W(ze^{z}) By substituting z' = ze^z in the above equation, we get the defining equation for the W function (and for the W relation in general): z' = W(z')e^{W(z')} for any complex number z'. Since the function f(\cdot) is not injective, the relation W is multivalued (except at 0). If we restrict attention to real-valued W, the complex variable z is then replaced by the real variable x, and the relation is defined only for x ≥ −1/e, and is double-valued on (−1/e, 0). The additional constraint W ≥ −1 defines a single-valued function W0(x). We have W0(0) = 0 and W0(−1/e) = −1. Meanwhile, the lower branch has W ≤ −1 and is denoted W−1(x). It decreases from W−1(−1/e) = −1 to W−1(0) = −∞. The Lambert W relation cannot be expressed in terms of elementary functions.[1] It is useful in combinatorics, for instance in the enumeration of trees. It can be used to solve various equations involving exponentials (e.g. the maxima of the Planck, Bose–Einstein, and Fermi–Dirac distributions) and also occurs in the solution of delay differential equations, such as y'(t) = a y(t − 1). In biochemistry, and in particular enzyme kinetics, a closed-form solution for the time-course kinetics analysis of Michaelis–Menten kinetics is described in terms of the Lambert W function. Main branch of the Lambert W function in the complex plane. Note the branch cut along the negative real axis, ending at −1/e.
In this picture, the hue of a point z is determined by the argument of W(z) and the brightness by the absolute value of W(z). The two main branches W_0 and W_{-1}. The Lambert W function is named after Johann Heinrich Lambert. The main branch W0 is denoted by Wp in the Digital Library of Mathematical Functions and the branch W−1 is denoted by Wm there. The notation convention chosen here (with W0 and W−1) follows the canonical reference on the Lambert W function by Corless, Gonnet, Hare, Jeffrey and Knuth.[2] Lambert first considered the related Lambert's Transcendental Equation in 1758,[3] which led to a paper by Leonhard Euler in 1783[4] that discussed the special case of we^w. The Lambert W function was "re-discovered" every decade or so in specialized applications.[citation needed] In 1993, when it was reported that the Lambert W function provides an exact solution to the quantum-mechanical double-well Dirac delta function model for equal charges—a fundamental problem in physics—Corless and developers of the Maple computer algebra system made a library search, and found that this function was ubiquitous in nature.[2][5] By implicit differentiation, one can show that all branches of W satisfy the differential equation z(1+W)\frac{{\rm d}W}{{\rm d}z}=W\quad\text{for }z\neq -1/e. (W is not differentiable for z = −1/e.) As a consequence, we get the following formula for the derivative of W: \frac{{\rm d}W}{{\rm d}z}=\frac{W(z)}{z(1 + W(z))}\quad\text{for }z\not\in\{0,-1/e\}. Using the identity e^{W(z)}=z/W(z), we get the following equivalent formula, which holds for all z\not=-1/e: \frac{{\rm d}W}{{\rm d}z}=\frac{1}{z+e^{W(z)}}. The function W(x), and many expressions involving W(x), can be integrated using the substitution w = W(x), i.e. x = we^w: \int W(x)\,{\rm d}x = x W(x)-x+e^{W(x)}+C = x \left( W(x) - 1 + \frac{1}{W(x)} \right) + C. (The last equation is more common in the literature but does not hold at x=0.)
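The antiderivative formula just given can be spot-checked numerically: the central difference of \(F(x)=xW(x)-x+e^{W(x)}\) should recover \(W(x)\) itself. The small bisection solver below is our own illustrative helper, not part of the article:

```python
import math

def W0(y):
    # Principal branch by bisection: w*exp(w) is increasing for w >= -1.
    lo, hi = -1.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if mid * math.exp(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def F(x):
    # Antiderivative from the text: x*W(x) - x + exp(W(x))
    w = W0(x)
    return x * w - x + math.exp(w)

# Central difference of F should equal W at each test point.
for x in [0.5, 1.0, 2.0, 5.0]:
    h = 1e-5
    deriv = (F(x + h) - F(x - h)) / (2.0 * h)
    assert abs(deriv - W0(x)) < 1e-6
```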
One consequence of the antiderivative formula above (using the fact that W(e)=1) is the identity: \int_{0}^{e} W(x)\,{\rm d}x = e-1 Asymptotic expansions[edit] The Taylor series of W_0 around 0 can be found using the Lagrange inversion theorem and is given by W_0 (x) = \sum_{n=1}^\infty \frac{(-n)^{n-1}}{n!}\ x^n = x - x^2 + \frac{3}{2}x^3 - \frac{8}{3}x^4 + \frac{125}{24}x^5 - \cdots The radius of convergence is 1/e, as may be seen by the ratio test. The function defined by this series can be extended to a holomorphic function defined on all complex numbers with a branch cut along the interval (−∞, −1/e]; this holomorphic function defines the principal branch of the Lambert W function. For large values of x, W0 is asymptotic to W_{0} (x) = L_1 - L_2 + \frac{L_2}{L_1} + \frac{L_2 (-2 + L_2)}{2 L_1^2} + \frac{ L_2 (6 - 9 L_2 + 2 L_2^2) }{6 L_1^3} + \frac{L_2 (-12+36L_2 - 22 L_2^2 + 3 L_2^3)}{12 L_1^4} + \cdots W_{0} (x) = L_1-L_2+\sum_{\ell=0}^{\infty}\sum_{m=1}^{\infty}\frac{(-1)^{\ell}\left [\begin{matrix} \ell+m \\ \ell + 1 \end{matrix}\right ]}{m!} L_1^{-\ell-m} L_2^{m} where L_1=\ln x, L_2=\ln\ln x and \left [\begin{matrix} \ell+m \\ \ell + 1 \end{matrix}\right ] is a non-negative Stirling number of the first kind.[6] Keeping only the first two terms of the expansion, W_0(x)=\ln x-\ln\ln x+o(1). The other real branch, W_{-1}, defined on the interval [−1/e, 0), has an approximation of the same form as x approaches zero, in this case with L_1=\ln(-x) and L_2=\ln(-\ln(-x)). In [7] it is shown that the following bound holds for x\ge e: \ln x-\ln\ln x+\frac{1}{2}\frac{\ln\ln x}{\ln x} \le W_0(x)\le\ln x-\ln\ln x+\frac{e}{e-1}\frac{\ln\ln x}{\ln x}. In [8] it was proven that the branch W_{-1} can be bounded as follows: -1-\sqrt{2u}-u < W_{-1}(-e^{-u-1}) < -1-\sqrt{2u}-\frac{2}{3}u for u > 0.
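The Maclaurin series for \(W_0\) given at the start of this section can be sanity-checked by verifying the defining relation \(we^w=x\). A small sketch (the truncation at 50 terms is an arbitrary choice; the series only converges for \(|x|<1/e\)):

```python
import math

def W0_series(x, terms=50):
    # Principal-branch Maclaurin series from the article; valid for |x| < 1/e.
    return sum((-n) ** (n - 1) * x ** n / math.factorial(n)
               for n in range(1, terms + 1))

x = 0.2
w = W0_series(x)
assert abs(w * math.exp(w) - x) < 1e-10  # defining relation: w * e^w = x
```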
Integer and complex powers[edit] Integer powers of W_0 also admit simple Taylor (or Laurent) series expansions at 0 W_0(x)^2 = \sum_{n=2}^\infty \frac{-2(-n)^{n-3}}{(n-2)!}\ x^n = x^2-2x^3+4x^4-\frac{25}{3}x^5+18x^6- \cdots More generally, for r\in\Z, the Lagrange inversion formula gives W_0(x)^r = \sum_{n=r}^\infty \frac{-r(-n)^{n-r-1}}{(n-r)!}\ x^n, which is, in general, a Laurent series of order r. Equivalently, the latter can be written in the form of a Taylor expansion of powers of W_0(x)/x \left(\frac{W_0(x)}{x}\right)^r =\exp(-r W_0(x)) = \sum_{n=0}^\infty \frac{r(n+r)^{n-1}}{n!}\ (-x)^n, which holds for any r\in\C and |x|<e^{-1}. A few identities follow from the definition: W(x \cdot e^{x}) = x \text{ for } x \geq 0 \text{ and } x=-1 W_0(x \cdot e^{x}) = x \text{ for } x \geq -1 W_{-1}(x \cdot e^{x}) = x \text{ for } x \leq -1 Note that, since f(x) = x⋅e^x is not injective, it is not always true that W(f(x)) = x. For fixed x < 0 and x ≠ −1, the equation x⋅e^x = y⋅e^y has two solutions in y, one of which is of course y = x. Then, for i = 0 and x < −1, as well as for i = −1 and x ∈ (−1, 0), Wi(x⋅e^x) is the other solution of the equation x⋅e^x = y⋅e^y. W(x) \cdot e^{W(x)} = x e^{W(x)} = \frac{x}{W(x)} e^{-W(x)} = \frac{W(x)}{x} e^{n \cdot W(x)} = \left(\frac{x}{W(x)}\right)^{n}[9] \ln W(x) = \ln(x) - W(x)\text{ for }x>0[10] W(x) = \ln\left(\frac{x}{W(x)}\right)\text{ for }x\geq-1/e W\left( \frac{nx^n}{W(x)^{n-1}} \right)=n \cdot W(x)\text{ for }n>0\text{, }x>0 (which can be extended to other n and x if the right branch is chosen) From inverting f(ln(x)): W(x \cdot \ln x) = \ln x\text{ for }x>0 W(x \cdot \ln x) = W(x) + \ln W(x)\text{ for }x>0 With Euler's iterated exponential h(x): h(x) = e^{-W(-\ln(x))} = \frac{W(-\ln(x))}{-\ln(x)}\text{ for }x\neq 1 Special values[edit] For any non-zero algebraic number x, W(x) is a transcendental number.
Indeed, if W(x) is zero then x must be zero as well, and if W(x) is non-zero and algebraic, then by the Lindemann–Weierstrass theorem, e^{W(x)} must be transcendental, implying that x=W(x)e^{W(x)} must also be transcendental. W\left(-\frac{\pi}{2}\right) = \frac{\pi}{2}{\rm{i}} W\left(-\frac{\ln a}{a}\right)= -\ln a \quad \left(\frac{1}{e}\le a\le e\right) W\left(-\frac{1}{e}\right) = -1 W\left(0\right) = 0\, W\left(1\right) = \Omega=\frac{1}{\displaystyle \int_{-\infty}^{+\infty}\frac{\,dt}{(e^t-t)^2+\pi^2}}-1\approx 0.56714329\dots\, (the Omega constant) W\left(1\right) = e^{-W(1)} = \ln\left(\frac{1}{W(1)}\right) = -\ln W(1) W\left(e\right) = 1\, W\left(-1\right) \approx -0.31813-1.33723{\rm{i}} \, W'\left(0\right) = 1\, Other formulas[edit] There are several useful integration formulas involving the W function. Some of these include the following: \int_{0}^{\pi} W\bigl( 2\cot^2(x) \bigr)\sec^2(x)\;\mathrm dx = 4\sqrt{\pi} \int_{0}^{\infty} \frac{W(x)}{x\sqrt{x}}\mathrm dx = 2\sqrt{2\pi} \int_{0}^{\infty} W\left(\frac{1}{x^2}\right)\;\mathrm dx = \sqrt{2\pi} The second identity can be derived by making the substitution x=ue^{u} (i.e. u=W(x)), which gives \int_{0}^{\infty} \frac{W(x)}{x\sqrt{x}}\mathrm dx =\int_{0}^{\infty} \frac{u}{ue^{u}\sqrt{ue^{u}}}(u+1)e^{u}\mathrm du =\int_{0}^{\infty} \frac{u+1}{\sqrt{ue^{u}}}\mathrm du =\int_{0}^{\infty} \frac{u+1}{\sqrt{u}}\frac{1}{\sqrt{e^{u}}}\mathrm du =\int_{0}^{\infty} u^{\frac{1}{2}}e^{-\frac{u}{2}}\mathrm du+\int_{0}^{\infty} u^{-\frac{1}{2}}e^{-\frac{u}{2}}\mathrm du =2\int_{0}^{\infty} (2w)^{\frac{1}{2}}e^{-w}\mathrm dw+2\int_{0}^{\infty} (2w)^{-\frac{1}{2}}e^{-w}\mathrm dw \quad (u =2w) =2\sqrt{2}\int_{0}^{\infty} w^{\frac{1}{2}}e^{-w}\mathrm dw+\sqrt{2}\int_{0}^{\infty} w^{-\frac{1}{2}}e^{-w}\mathrm dw =2\sqrt{2} \cdot \Gamma \left (\tfrac{3}{2} \right )+\sqrt{2} \cdot \Gamma \left (\tfrac{1}{2} \right ) =2\sqrt{2} \left (\tfrac{1}{2}\sqrt{\pi} \right )+\sqrt{2}\,\sqrt{\pi} = 2\sqrt{2\pi}. The third identity may be derived
from the second by making the substitution u=\frac{1}{x^{2}}, and the first can be derived from the third by the substitution z=\tan(x)/\sqrt{2}. Except for z along the branch cut (-\infty,-1/e] (where the integral does not converge), the principal branch of the Lambert W function can be computed by the following integral: W(z)=\frac{z}{2\pi}\int\limits_{-\pi}^{\pi}\frac{(1-\nu\cot\nu)^2+\nu^2}{z+\nu\csc\nu e^{-\nu\cot\nu}}d\nu=\frac{z}{\pi}\int\limits_0^{\pi}\frac{(1-\nu\cot\nu)^2+\nu^2}{z+\nu\csc\nu e^{-\nu\cot\nu}}d\nu[11] where the two integral expressions are equivalent due to the symmetry of the integrand. Many equations involving exponentials can be solved using the W function. The general strategy is to move all instances of the unknown to one side of the equation and make it look like Y = Xe^X, at which point the W function provides the value of the variable in X. In other words: Y = X e ^ X \; \Longleftrightarrow \; X = W(Y) Example 1[edit] 2^t = 5t \implies 1 = \frac{5 t}{2^t} \implies 1 = 5 t \, e^{-t \ln 2} \implies \frac{1}{5} = t \, e^{-t \ln 2} \implies \frac{- \ln 2}{5} = ( - t \ln 2 ) \, e^{( -t \ln 2 )} \implies W \left ( \frac{- \ln 2}{5} \right ) = -t \ln 2 \implies t = -\frac{W \left ( \frac{- \ln 2}{5} \right )}{\ln 2} More generally, the equation ~p^{a x + b} = c x + d where p > 0 \text{ and } c,a \neq 0 can be transformed via the substitution -t = a x + \frac{a d}{c} into t \, p^t = R = -\frac{a}{c} p^{b-\frac{a d}{c}} giving t = \frac{W(R\ln p)}{\ln p} which yields the final solution x = -\frac{W(-\frac{a\ln p}{c}\,p^{b-\frac{a d}{c}})}{a\ln p} - \frac{d}{c} Example 2[edit] x^x = z \Rightarrow x\ln x = \ln z\, \Rightarrow e^{\ln x} \cdot \ln x = \ln z\, \Rightarrow \ln x = W(\ln z)\, \Rightarrow x=e^{W(\ln z)}\, , or, equivalently, x=\frac{\ln z}{W(\ln z)}, since \ln z = W(\ln z) e^{W(\ln z)}\, by definition. Example 3[edit] Whenever the complex infinite exponential tetration z^{z^{z^{\cdot^{\cdot^{\cdot}}}}} \!
converges, the Lambert W function provides the limit value as c=\frac{W(-\ln(z))}{-\ln(z)}, where ln(z) denotes the principal branch of the complex log function. This can be shown by observing that if the limit c exists, then z^{c}=c, so \Rightarrow z^{-1}=c^{-\frac{1}{c}} \Rightarrow \frac{1}{z}=\left(\frac{1}{c}\right)^{\left(\frac{1}{c}\right)} \Rightarrow -\ln(z)=\left(\frac{1}{c}\right)\ln\left(\frac{1}{c}\right) \Rightarrow -\ln(z)=e^{\ln\left(\frac{1}{c}\right)}\ln\left(\frac{1}{c}\right) \Rightarrow \ln\left(\frac{1}{c}\right)=W(-\ln(z)) \Rightarrow \frac{1}{c}=e^{W(-\ln(z))} \Rightarrow \frac{1}{c}=\frac{-\ln(z)}{W(-\ln(z))} \Rightarrow c=\frac{W(-\ln(z))}{-\ln(z)} which is the result that was to be found. Example 4[edit] Solutions of x \log_b \left(x\right) = a have the form[5] x=e^{W(a\ln b)}. Example 5[edit] The solution for the current in a series diode/resistor circuit can also be written in terms of the Lambert W. See diode modeling. Example 6[edit] The delay differential equation \dot{y}(t) = ay(t-1) has characteristic equation \lambda=a e^{-\lambda}, leading to \lambda=W_k(a) and y(t)=e^{W_k(a)t}, where k is the branch index. If a \ge e^{-1}, only W_0(a) need be considered. Example 7[edit] The Lambert W function was shown in 2013 to be the optimal solution for the required magnetic field of a Zeeman slower.[12] Example 8[edit] Granular and debris flow fronts and deposits, and the fronts of viscous fluids in natural events and in laboratory experiments, can be described by using the Lambert–Euler omega function as follows: H(x)= 1 + W[(H(0) -1) \exp((H(0)-1)-\frac{x}{L})], where H(x) is the debris flow height, x is the channel downstream position, and L is the unified model parameter consisting of several physical and geometrical parameters of the flow, flow height and the hydraulic pressure gradient.
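The tetration limit in Example 3 can be checked numerically for a concrete value. For z = √2 (inside the convergence region), the tower z^{z^{…}} should converge to the fixed point c satisfying z^c = c, which here is c = 2, matching W(−ln z)/(−ln z):

```python
import math

z = math.sqrt(2)  # a value for which the infinite tower converges
c = 1.0
for _ in range(1000):
    c = z ** c  # add one more level to the exponential tower

assert abs(z ** c - c) < 1e-12  # fixed point: z^c = c
assert abs(c - 2.0) < 1e-9      # W(-ln z)/(-ln z) = 2 for z = sqrt(2)
```

The iteration converges because the fixed-point map has derivative c · ln z ≈ 0.693, with magnitude below 1.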
Example 9[edit] The Lambert W function was employed in the field of neuroimaging for linking cerebral blood flow and oxygen consumption changes within a brain voxel to the corresponding Blood Oxygenation Level Dependent (BOLD) signal.[13] Example 10[edit] The Lambert W function was employed in the field of chemical engineering for modelling the porous electrode film thickness in a glassy carbon based supercapacitor for electrochemical energy storage. The Lambert W function turned out to be the exact solution for a gas phase thermal activation process where growth of a carbon film and combustion of the same film compete with each other.[14][15] Example 11[edit] The Lambert W function was employed in the field of epitaxial film growth for the determination of the critical dislocation onset film thickness. This is the calculated thickness of an epitaxial film where, due to thermodynamic principles, the film will develop crystallographic dislocations in order to minimise the elastic energy stored in the film. Prior to the application of Lambert W to this problem, the critical thickness had to be determined by solving an implicit equation. Lambert W turns it into an explicit equation for analytical handling with ease.[16] Example 12[edit] The Lambert W function has been employed in the field of fluid flow in porous media to model the tilt of an interface separating two gravitationally segregated fluids in a homogeneous tilted porous bed of constant dip and thickness, where the heavier fluid, injected at the bottom end, displaces the lighter fluid that is produced at the same rate from the top end.
The principal branch of the solution corresponds to stable displacements, while the −1 branch applies if the displacement is unstable, with the heavier fluid running underneath the lighter fluid.[17] Example 13[edit] The equation (linked with the generating functions of Bernoulli numbers and the Todd genus): Y = \frac{X}{1-e^X} can be solved by means of the two real branches W_0 and W_{-1}: X(Y) = W_{-1}( Y e^Y) - W_0( Y e^Y) = Y - W_0( Y e^Y) \text{ for } Y < -1. X(Y) = W_0( Y e^Y) - W_{-1}( Y e^Y) = Y - W_{-1}(Y e^Y) \text{ for } -1 < Y < 0. This application shows that the branch difference of the W function can be employed in order to solve other transcendental equations. See: D. J. Jeffrey and J. E. Jankowski, "Branch differences and Lambert W". Example 14[edit] The centroid of a set of histograms defined with respect to the symmetrized Kullback–Leibler divergence (also called the Jeffreys divergence) has a closed form using the Lambert function. See: F. Nielsen, "Jeffreys Centroids: A Closed-Form Expression for Positive Histograms and a Guaranteed Tight Approximation for Frequency Histograms". Example 15[edit] The Lambert W function appears in a quantum-mechanical potential (see The Lambert-W step-potential) which affords the fifth – next to those of the harmonic oscillator plus centrifugal, the Coulomb plus inverse square, the Morse, and the inverse square root potential – exact solution to the stationary one-dimensional Schrödinger equation in terms of the confluent hypergeometric functions. The potential is given as V = \frac{V_0}{1+W (e^{-x/\sigma})}. A peculiarity of the solution is that each of the two fundamental solutions that compose the general solution of the Schrödinger equation is given by a combination of two confluent hypergeometric functions of an argument proportional to z = W (e^{-x/\sigma}). See: A.M.
Ishkhanyan, "The Lambert W-barrier - an exactly solvable confluent hypergeometric potential". The standard Lambert W function expresses exact solutions to transcendental algebraic equations (in x) of the form: e^{-c x} = a_o (x-r) ~~\quad\qquad\qquad\qquad\qquad(1) where a0, c and r are real constants. The solution is x = r + \frac{1}{c} W\!\left( \frac{c\,e^{-c r}}{a_o } \right)\, . Generalizations of the Lambert W function[18][19][20] include: e^{-c x} = a_o (x-r_1 ) (x-r_2 ) ~~\qquad\qquad(2) where r1 and r2 are real distinct constants, the roots of the quadratic polynomial. Here, the solution is a function with the single argument x, but terms like ri and ao are parameters of that function. In this respect, the generalization resembles the hypergeometric function and the Meijer G-function, but it belongs to a different class of functions. When r1 = r2, both sides of (2) can be factored and reduced to (1), and thus the solution reduces to that of the standard W function. Eq. (2) expresses the equation governing the dilaton field, from which is derived the metric of the R=T or lineal two-body gravity problem in 1+1 dimensions (one spatial dimension and one time dimension) for the case of unequal (rest) masses, as well as the eigenenergies of the quantum-mechanical double-well Dirac delta function model for unequal charges in one dimension. • Analytical solutions of the eigenenergies of a special case of the quantum mechanical three-body problem, namely the (three-dimensional) hydrogen molecule-ion.[22] Here the right-hand side of (1) (or (2)) is now a ratio of infinite order polynomials in x: e^{-c x} = a_o \frac{\displaystyle \prod_{i=1}^{\infty} (x-r_i )}{\displaystyle \prod_{i=1}^{\infty} (x-s_i)} \qquad \qquad\qquad(3) where ri and si are distinct real constants and x is a function of the eigenenergy and the internuclear distance R. Eq. (3) with its specialized cases expressed in (1) and (2) is related to a large class of delay differential equations.
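Equation (1) and its closed-form solution can be verified numerically with a tiny bisection solver for the principal branch. The solver and the constants a0 = 2, c = 1, r = 0.5 below are illustrative choices of ours, not from the text:

```python
import math

def W0(y):
    # Principal branch by bisection: w*exp(w) is increasing for w >= -1.
    lo, hi = -1.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if mid * math.exp(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Sanity check against the Omega constant listed under Special values:
assert abs(W0(1.0) - 0.56714329) < 1e-7

# Solve e^{-c x} = a0 (x - r) via x = r + W(c e^{-c r} / a0) / c.
a0, c, r = 2.0, 1.0, 0.5
x = r + W0(c * math.exp(-c * r) / a0) / c
assert abs(math.exp(-c * x) - a0 * (x - r)) < 1e-9
```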
Applications of the Lambert "W" function in fundamental physical problems are not exhausted even for the standard case expressed in (1), as seen recently in the area of atomic, molecular, and optical physics.[23] Numerical evaluation[edit] The W function may be approximated using Newton's method, with successive approximations to w=W(z) (so z=we^w) being w_{j+1}=w_j-\frac{w_j e^{w_j}-z}{e^{w_j}+w_j e^{w_j}}. The W function may also be approximated using Halley's method, w_{j+1}=w_j-\frac{w_j e^{w_j}-z}{e^{w_j}(w_j+1)-\frac{(w_j+2)(w_je^{w_j}-z)}{2w_j+2}}, given in Corless et al. to compute W. The LambertW function is implemented as LambertW in Maple, lambertw in GP (and glambertW in PARI), lambertw in MATLAB,[24] also lambertw in Octave with the 'specfun' package, as lambert_w in Maxima,[25] as ProductLog (with a silent alias LambertW) in Mathematica,[26] as lambertw in Python scipy's special function package[27] and as the gsl_sf_lambert_W0 and gsl_sf_lambert_Wm1 functions in the special functions section of the GNU Scientific Library (GSL). References[edit] 1. ^ Chow, Timothy Y. (1999), "What is a closed-form number?", American Mathematical Monthly 106 (5): 440–448, doi:10.2307/2589148, MR 1699262. 2. ^ a b Corless, R. M.; Gonnet, G. H.; Hare, D. E. G.; Jeffrey, D. J.; Knuth, D. E. (1996). "On the Lambert W function" (PostScript). Advances in Computational Mathematics 5: 329–359. doi:10.1007/BF02124750. 3. ^ Lambert JH, "Observationes variae in mathesin puram", Acta Helveticae physico-mathematico-anatomico-botanico-medica, Band III, 128–168, 1758 (facsimile) 4. ^ Euler, L. "De serie Lambertina Plurimisque eius insignibus proprietatibus." Acta Acad. Scient. Petropol. 2, 29–51, 1783. Reprinted in Euler, L. Opera Omnia, Series Prima, Vol. 6: Commentationes Algebraicae. Leipzig, Germany: Teubner, pp. 350–369, 1921. (facsimile) 5. ^ a b Corless, R. M.; Gonnet, G. H.; Hare, D. E. G.; Jeffrey, D. J. (1993). "Lambert's W function in Maple".
The Maple Technical Newsletter (MapleTech) 9: 12–22. CiteSeerX:  6. ^ Hoorfar, Abdolhossein; Hassani, Mehdi. "Approximation of the Lambert W function and the hyperpower function". 7. ^ 8. ^ Chatzigeorgiou, I. (2013). "Bounds on the Lambert function and their Application to the Outage Analysis of User Cooperation" (PDF). IEEE Communications Letters 17 (8): 1505–1508. doi:10.1109/LCOMM.2013.070113.130972. 9. ^ 10. ^ 11. ^ "The Lambert W Function". Ontario Research Centre. 12. ^ B. Ohayon; G. Ron (2013). "New approaches in designing a Zeeman Slower". Journal of Instrumentation 8 (02): P02016. doi:10.1088/1748-0221/8/02/P02016. 13. ^ Sotero, Roberto C.; Iturria-Medina, Yasser (2011). "From Blood oxygenation level dependent (BOLD) signals to brain temperature maps". Bull Math Biol 73 (11): 2731–47. doi:10.1007/s11538-011-9645-5. PMID 21409512. 14. ^ Braun, Artur; Wokaun, Alexander; Hermanns, Heinz-Guenter (2003). "Analytical Solution to a Growth Problem with Two Moving Boundaries". Appl Math Model 27 (1): 47–52. doi:10.1016/S0307-904X(02)00085-9. 15. ^ Braun, Artur; Baertsch, Martin; Schnyder, Bernhard; Koetz, Ruediger (2000). "A Model for the film growth in samples with two moving boundaries - An Application and Extension of the Unreacted-Core Model". Chem Eng Sci 55 (22): 5273–5282. doi:10.1016/S0009-2509(00)00143-3. 16. ^ Braun, Artur; Briggs, Keith M.; Boeni, Peter (2003). "Analytical solution to Matthews' and Blakeslee's critical dislocation formation thickness of epitaxially grown thin films". J Cryst Growth 241 (1/2): 231–234. Bibcode:2002JCrGr.241..231B. doi:10.1016/S0022-0248(02)00941-7. 17. ^ Colla, Pietro (2014). "A New Analytical Method for the Motion of a Two-Phase Interface in a Tilted Porous Medium". Proceedings, Thirty-Eighth Workshop on Geothermal Reservoir Engineering, Stanford University. SGP-TR-202. ([1]) 18. ^ Scott, T. C.; Mann, R. B.; Martinez II, Roberto E. (2006).
"General Relativity and Quantum Mechanics: Towards a Generalization of the Lambert W Function". AAECC (Applicable Algebra in Engineering, Communication and Computing) 17 (1): 41–47. arXiv:math-ph/0607011. doi:10.1007/s00200-006-0196-1.  19. ^ Scott, T. C.; Fee, G.; Grotendorst, J. (2013). "Asymptotic series of Generalized Lambert W Function". SIGSAM (ACM Special Interest Group in Symbolic and Algebraic Manipulation) 47 (185): 75–83. doi:10.1145/2576802.2576804.  20. ^ Scott, T. C.; Fee, G.; Grotendorst, J.; Zhang, W.Z. (2014). "Numerics of the Generalized Lambert W Function". SIGSAM 48 (188): 42–56. doi:10.1145/2644288.2644298.  21. ^ Farrugia, P. S.; Mann, R. B.; Scott, T. C. (2007). "N-body Gravity and the Schrödinger Equation". Class. Quantum Grav. 24 (18): 4647–4659. arXiv:gr-qc/0611144. doi:10.1088/0264-9381/24/18/006.  22. ^ Scott, T. C.; Aubert-Frécon, M.; Grotendorst, J. (2006). "New Approach for the Electronic Energies of the Hydrogen Molecular Ion". Chem. Phys. 324 (2–3): 323–338. arXiv:physics/0607081. doi:10.1016/j.chemphys.2005.10.031.  23. ^ Scott, T. C.; Lüchow, A.; Bressanini, D.; Morgan, J. D. III (2007). "The Nodal Surfaces of Helium Atom Eigenfunctions". Phys. Rev. A 75 (6): 060101. doi:10.1103/PhysRevA.75.060101.  24. ^ lambertw - MATLAB 25. ^ Maxima, a Computer Algebra System 26. ^ ProductLog at WolframAlpha 27. ^ [2] External links[edit]
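The Newton iteration quoted in the "Numerical evaluation" section above can be sketched in a few lines; this is a minimal illustration only (for real work use `scipy.special.lambertw` or the GSL routines listed above), and the starting guess is my own choice:

```python
import math

def lambert_w0_newton(z, tol=1e-12, max_iter=50):
    """Principal branch W_0 for z > 0 via Newton's method on f(w) = w e^w - z."""
    w = math.log1p(z)          # crude starting guess, adequate for z > 0
    for _ in range(max_iter):
        ew = math.exp(w)
        step = (w * ew - z) / (ew + w * ew)   # f / f'
        w -= step
        if abs(step) < tol:
            break
    return w

# W(e) = 1 exactly, and W(1) is the omega constant 0.5671432904...
assert abs(lambert_w0_newton(math.e) - 1.0) < 1e-10
assert abs(lambert_w0_newton(1.0) - 0.5671432904097838) < 1e-10
```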
In my blog post Why riemannium?, I introduced the following idea. The infinite potential well in quantum mechanics, the harmonic oscillator and the Kepler (hydrogen-like) problem have energy spectra, respectively, equal to 1) $$ E\sim n^2$$ 2) $$ E\sim n$$ 3) $$ E\sim \dfrac{1}{n^2}$$ Do you know quantum systems with general spectra/eigenvalues given by $$ E(n;s)\sim n^{-s}$$ and energy splitting $$ \Delta E(n,m;s)\sim \left( \dfrac{1}{n^s}-\dfrac{1}{m^s}\right)$$ for all $s\neq -2,-1,2$? Here we will not address the full quantum mechanical problem, but only discuss the semi-classical limit $n \gg 1$, i.e. only the highly excited part of the energy spectrum far away from the ground state energy. If we are in one dimension with a power law potential $$\Phi(x)~\sim~|x|^{p}, \qquad p>-2, $$ for $|x|$ sufficiently large, then we can use the semi-classical method of this Phys.SE answer to estimate the classically accessible length as $$ \ell(V)~\sim~V^{\frac{1}{p}}, $$ where $V$ is the available potential energy. The number of states $N(E)$ below energy-level $E$ then goes as $$ N(E)~\sim~E^{\frac{1}{p}+\frac{1}{2}}, $$ and therefore the semi-classical discrete energies also obey a power law $$ E_n ~\sim~n^{\frac{2p}{p+2}} \quad\text{for}\quad n ~\gg~ 1. $$ The values $p=-1$, $p=2$, and $p=\infty$ correspond to the (radial) hydrogen atom, the harmonic oscillator, and the infinite potential well, respectively. • For completeness: If the power $p<-2$, then there will only be a finite number of bound states, so our semi-classical analysis does not work in that case. – Qmechanic Apr 13 '13 at 18:15 • That's cool! And I am wondering what kind of fully quantum (not semiclassical) "field/system/potential" could produce them too. Your answer is a great help, Qmechanic. I am also very interested in physical and real/factual systems with that asymptotical spectrum... It is quite interesting for many reasons for my current "thoughts"...
– riemannium Apr 13 '13 at 21:49 • And one more question: what if your "p" is complex? Can the 1D Schrödinger equation be solved for that potential, $(T+V)\Psi=E\Psi$? There $T$ is the usual (free) kinetic energy operator in QM and $V\sim \vert \Phi\vert^p$ with $p\in \mathbb{C}$ – riemannium Apr 13 '13 at 21:54 • In standard QM, the Hamiltonian $H$ should be a Hermitian operator, and hence the potential $\Phi$ (and therefore $p$) should be real. You might try to look into PT-symmetric QM. – Qmechanic Apr 13 '13 at 22:03 • p=1 is used as a model of quarkonium. See arxiv.org/abs/hep-ph/0608103 – user4552 Apr 14 '13 at 0:27
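As a quick numeric sanity check (mine, not part of the thread), the exponent $2p/(p+2)$ reproduces the three textbook cases quoted at the top of the question:

```python
# Semi-classical level-scaling exponent: E_n ~ n^(2p/(p+2)) for Phi(x) ~ |x|^p.
def level_exponent(p):
    return 2.0 * p / (p + 2.0)

assert level_exponent(-1) == -2.0                # hydrogen-like: E_n ~ 1/n^2
assert level_exponent(2) == 1.0                  # harmonic oscillator: E_n ~ n
assert abs(level_exponent(1e9) - 2.0) < 1e-8     # p -> infinity: infinite well, E_n ~ n^2
```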
Electronic band structure Energy-wavevector relationship In a solid, the free electron approximation is no longer valid, so to describe the behaviour of the electrons we need to use another approach. The Schrödinger equation fully describes the behaviour of electrons when an appropriate Hamiltonian is used. A Hamiltonian that fully describes the system requires the inclusion of the interaction between electrons and ions, and is given by: H = \sum_{i} \frac{\vec{p}_i ^2}{2 m_e} + \sum_{l} \frac{\vec{p}_l ^2}{2 M_l} + \sum_{i,l} V ( \vec{r}_i - \vec{R}_l) + \sum_{l,m} U ( \vec{R}_l - \vec{R}_m) + \sum_{i,j} \frac{e^2}{4 \pi \epsilon_0} \frac{1}{| \vec{r}_i - \vec{r}_j|} where m_e is the electron mass, M_l is the ion mass, \vec{r}_i are the electron positions and \vec{R}_l are the ion positions. Directly solving this Hamiltonian is very complex, so approximations are usually employed to simplify the task. Important approximations are the Born-Oppenheimer approximation[1], the one-electron approximation[1] and the mean-field approximation[1]. The Born-Oppenheimer approximation states that the electrons react instantly to the motion of the ions, but the reaction of the ions is much slower, allowing us to ignore the motion of the ions. The one-electron approximation assumes all electron-electron interactions are averaged, and the mean-field approximation states that all the electrons are in identical surroundings with regards to the ions and their equilibrium positions. These approximations allow us to use a simplified Hamiltonian: H = \frac{\vec{p}^2}{2 m_e} + V_0 (\vec{r}) V_0(\vec{r}) represents the periodic potential in the lattice, utilising the translational symmetry that is present.
Parabolic approximation By considering the electron energy dispersion only around the \Gamma-point (centre of the Brillouin zone) we arrive at an equation for the parabolic approximation at the extrema of the conduction and valence bands (which are in general located at the \Gamma-point for semiconductors). This can be shown to be: E_{\mathrm{e,h}}(\vec{k}) = E_{\mathrm{e,h}} (0) \pm \frac{\hbar ^2 \vec{k}^2}{2 m ^{*}} For large gap semiconductors this equation tends to hold well, but it proves to be ineffective for narrow gap semiconductors such as InN, because the interaction between conduction and valence bands cannot be ignored. In this case Kane's k.p perturbation theory[2] is used. k.p perturbation theory k.p perturbation theory uses the fact that the cell periodic functions for the electrons for any \vec{k} form a complete set. The wave function used for this can be written as: \psi = u_{n\vec{k}} (\vec{r}) \exp(i \vec{k} \cdot\vec{r}) = \left[ \sum c_m U_{m\vec{k_0}} (r) \right] \exp (i \vec{k} \cdot \vec{r}) Using \psi in the Schrödinger equation, together with the fact that the wave function for \vec{k} = \vec{k_0} in the nth band is \psi =\exp(i \vec{k_0}\cdot \vec{r}) U_{m \vec{k_0}} (\vec{r}), produces[3]: \left[ - \frac{\hbar^2}{2 m_0} \nabla^2 + \frac{\hbar}{m_0} \vec{k_0} \cdot \vec{p} + \frac{\hbar^2 k_0 ^2}{2 m_0} + V(\vec{r}) \right] U_{m \vec{k_0}} (r) = E_m (\vec{k_0}) U_{m \vec{k_0}} (\vec{r}) Effective mass The electron movement in a lattice is different from the movement in free space. In a crystal the electron will pass through energy bands full of electrons and some with very few. Also we need to consider that the electron is subject to different forces, internal and external. The internal forces are due to the different particles within the crystal.
Since it is difficult to take into account all the internal forces, instead of writing F_{total}=F_{ext}+F_{int}=ma we define an effective mass that takes into account the particle mass as well as the effects of the internal forces (F=m^*a). In order to describe the particle in the crystal we need to consider a wave packet and the uncertainty principle, which states we can measure the position and the momentum of the wave packet simultaneously with a precision of \Delta x\Delta p\approx\hbar. The group velocity of the wave packet is given by v=\frac{d\omega}{dk}=\frac{1}{\hbar}\frac{d\epsilon}{dk}. For a particle acted on by a force \vec{F}, for example if we apply an electric field, we will have d\epsilon=(\frac{d\epsilon}{dk})dk=vd(\hbar k), with d(\hbar k) the change in crystal momentum. From Newton's law we have F/m^{*}=dv/dt, and using the above expressions we get \frac{F}{m^*}=\frac{1}{\hbar}\left(\frac{d^{2}\epsilon}{dk^2}\right)\frac{dk}{dt} \Leftrightarrow \hbar \frac{dk}{dt}\frac{1}{m^*}=\frac{1}{\hbar}\left(\frac{d^{2}\epsilon}{dk^2}\right)\frac{dk}{dt} This gives an expression for the effective mass (1D): m^*=\frac{\hbar^2}{d^{2}\epsilon/dk^2} As we can see from the above expression, the sign of the effective mass depends on the curvature of the band. The band gap is the energy difference between the lowest point of the conduction band (conduction band edge) and the highest point of the valence band (valence band edge). A semiconductor can have a direct band gap or an indirect band gap. A direct band gap is characterised by having the band edges aligned in k, so that an electron can transit from the valence band to the conduction band, with the emission of a photon, without changing the momentum considerably. On the other hand, in the indirect band gap the band edges are not aligned, so the electron doesn't transit directly to the conduction band. In this process both a photon and a phonon are involved.
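The effective-mass expression above can be illustrated with a simple 1D tight-binding band ε(k) = −2t·cos(ka) (an illustrative choice, not from these notes), whose curvature at k = 0 gives m* = ħ²/(2ta²):

```python
import numpy as np

hbar = 1.054571817e-34   # J s
t = 1.602e-19            # hopping energy, 1 eV in J (illustrative value)
a = 5.0e-10              # lattice constant in m (illustrative value)

def band(k):
    # Simple 1D tight-binding dispersion epsilon(k) = -2 t cos(k a)
    return -2.0 * t * np.cos(k * a)

# m* = hbar^2 / (d^2 eps / dk^2); curvature estimated by finite differences at k = 0
dk = 1e6
curvature = (band(dk) - 2.0 * band(0.0) + band(-dk)) / dk**2
m_eff = hbar**2 / curvature

# Analytic result for this band at the band bottom: m* = hbar^2 / (2 t a^2)
assert np.isclose(m_eff, hbar**2 / (2.0 * t * a**2))
```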
Si bandstructure Energetic bands The energetic levels in atoms and molecules can be discrete or can split into a near-continuum of levels called a band. The energy bands can be classified as empty, filled, mixed or forbidden bands. The energetic levels are occupied by the electrons (distributed according to the Pauli exclusion principle), starting with the lowest energy level. The electrons that contribute to the electrical conduction occupy the higher energy bands. The valence band corresponds to the highest energy band that contains electrons. The valence band can be fully or partially occupied. The allowed (empty) states in the valence band contribute to the electric current. The conduction band is the lowest energetic band with unoccupied states. In materials the conducting bands of empty, filled or allowed states can be separated by forbidden bands, also called band gaps. The width of the band gap (measured in energy units) determines the type of material: insulator, semiconductor, metal. If, at temperatures around room temperature, the electrons in a pure semiconductor gain sufficient energy to overcome the energy of the band gap Eg, the semiconductor is called an intrinsic semiconductor. Energy Band Gaps in Materials Description of the electronic bands in solids[8]. Density of states The density of states provides numerical information on the availability of states at each energy level. A high value of the density of states represents a high number of energetic states ready to be occupied. If there are no available states for occupation at an energetic level, the value of the density of states will be zero. Density of states Schematic diagram illustrating the representation of the electronic density of states depending on dimensionality[7]. The scanning tunneling microscope is an advanced type of computerized microscope that can probe the density of states of nanostructures.
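For the free-electron (3D) case the density of states has the standard textbook closed form g(E) = (1/2π²)(2m/ħ²)^{3/2}√E; the √E scaling mentioned in the dimensionality diagram is easy to verify numerically (this sketch is an illustration added here, not part of the notes):

```python
import numpy as np

def dos_3d(E, m=9.109e-31, hbar=1.055e-34):
    """Free-electron 3D density of states, g(E) proportional to sqrt(E) (E in joules)."""
    return (1.0 / (2.0 * np.pi**2)) * (2.0 * m / hbar**2) ** 1.5 * np.sqrt(E)

E = 1.602e-19  # 1 eV in joules
# sqrt(E) scaling: quadrupling the energy doubles the density of states
assert np.isclose(dos_3d(4 * E) / dos_3d(E), 2.0)
```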
With a scanning tunneling microscope both empty and filled states can be probed in a single measurement. The electric current is measured as a function of the bias voltage and the computer program will plot a curve describing the conducting character of the nanostructure. The first derivative, dI/dV, is a first approximation of the local density of states. The local density of states is a measure of the amount of filled or empty states present at a specific value of energy. scanning tunneling spectroscopy Example of scanning tunneling spectroscopy data[4]. Holes are fictitious particles[5] considered positively charged with charge |e|, having positive effective mass m_h and energies E=+\hbar^2 k^2/2m_h. These fictitious particles "occupy" the empty states in the energetic bands. A hole is actually a missing electron in an energy band. The concept of holes as positively charged particles has been introduced in order to simplify the calculations for the electronic transitions in an almost fully occupied valence band. Pure silicon, for example, is a poor conductor. In order to improve the conductivity properties, the concentration of charge carriers must be increased. This is possible by introducing impurities into the silicon crystal. This method is called doping and gives rise to an additional number of electrons or holes in the semiconductor. The doped semiconductor is called extrinsic. Depending on the impurity type, there are acceptor dopants (Gr. III elements) and donor dopants (Gr. V elements). This classification leads to p-type and n-type semiconductors, respectively. Due to the doping process, the Fermi level, normally at the center of the band gap, will be shifted according to the type of semiconductor. In the p-type materials, there is a generation of an extra number of holes. The energetic levels of the acceptor impurities will sit right on top of the valence band. The Fermi level, in this case, will be shifted towards the valence band.
For the n-type materials, the number of electrons is increased and the donor impurity levels are situated right under the conduction band, which causes the Fermi level to be shifted towards the conduction band. Illustration of intrinsic and extrinsic types of silicon semiconductor[6]. Acceptor-donor levels Position of the donor level (D.L.) occupied with electrons, and acceptor level (A.L.) of holes, with respect to the conduction and valence band[7]. 1. B. Ridley, Quantum Processes In Semiconductors, Oxford Science Publications, 1993. 2. E. O. Kane, J. Phys. Chem. Solids 1, 249 (1957). 3. B. Nag, Electron Transport In Compound Semiconductors, Springer-Verlag, 1980. 5. Hook, J.R. and Hall, H.E. Solid State Physics. 2nd ed. Chichester: John Wiley & Sons, 1991. 6. (with slight editing for better layout) • John Singleton (2001). Band Theory and Electronic Properties of Solids. Oxford Master Series in Condensed Matter Physics. ISBN 0-19-850644-9 • K. Seeger (1991), Semiconductor Physics: An Introduction. Springer-Verlag See also
Blog Archives It was my distinct pleasure to participate in the organization of the latest edition of the Mexican Meeting on Theoretical Physical Chemistry, RMFQT, which took place last week here in Toluca with the help of the School of Chemistry of the Universidad Autónoma del Estado de México. This year the national committee created a Lifetime Achievement Award for Dr. Annik Vivier, Dr. Carlos Bunge, and Dr. José Luis Gázquez. This recognition from our community is awarded to these fine scientists not only for their contributions to theoretical chemistry but also for their pioneering work in the field in Mexico. The three of them were invited to talk about any topic of their choosing; in particular, Dr. Vivier stirred the imagination of younger students by showing her pictures of the times when she used to hang out with Slater, Roothaan, Löwdin, etc.; it is always nice to put faces onto equations. Continuing with a recent tradition, we also had the pleasure to host three invited plenary lectures by great scientists and good friends of our community: Prof. William Tiznado (Chile), Prof. Samuel B. Trickey (USA), and Prof. Julia Contreras (France), who shared their progress on their recent work. As I've abundantly pointed out in the past, the RMFQT is a joyous occasion for the Mexican theoretical community to get together with old friends and discuss the very exciting research being done in our country and by our colleagues abroad. I'd like to add a big shoutout to Dr. Jacinto Sandoval-Lira for his valuable help with the organization of our event. All you wanted to know about Hybrid Orbitals… … but were afraid to ask How I learned to stop worrying and not caring that much about hybridization.
The math behind orbital hybridization is fairly simple, as I'll try to show below, but first let me give my praise once again to the formidable Linus Pauling, whose creation of this model built a bridge between quantum mechanics and chemistry; I often say Pauling was the first Quantum Chemist (Gilbert N. Lewis fans, please settle down). Hybrid orbitals are therefore a way to create a basis that better suits the geometry formed by the bonds around a given atom, and not the result of a process in which atomic orbitals transform themselves for better steric fitting; or, like I've said before, the C atom in CH4 is sp3 hybridized because CH4 is tetrahedral and not the other way around. Jack Simons put it better in his book: Taken from "Quantum Mechanics in Chemistry" by Jack Simons The atomic orbitals we all know and love are the set of solutions to the Schrödinger equation for the Hydrogen atom, and more generally they are solutions to the hydrogen-like atoms for which the value of Z in the potential term of the Hamiltonian changes according to each element's atomic number. Since the Hamiltonian, and any other quantum mechanical operator for that matter, is a Hermitian operator, any linear combination of degenerate wave functions that are solutions to it will also be an acceptable solution. Therefore, since the 2s and 2p valence orbitals of Carbon do not point towards the vertices of a tetrahedron, they don't offer a suitable basis for explaining the geometry of methane; even more so, these atomic orbitals are not degenerate and there is no reason to assume all C-H bonds in methane aren't equal. However we can come up with a linear combination of them that might, and that at the same time will be a solution to the Schrödinger equation of the hydrogen-like atom.
Ok, so we need four degenerate orbitals which we'll name ζi and formulate them as linear combinations of the C atom valence orbitals:

ζ1 = a1(2s) + b1(2px) + c1(2py) + d1(2pz)
ζ2 = a2(2s) + b2(2px) + c2(2py) + d2(2pz)
ζ3 = a3(2s) + b3(2px) + c3(2py) + d3(2pz)
ζ4 = a4(2s) + b4(2px) + c4(2py) + d4(2pz)

To comply with equivalency let's set a1 = a2 = a3 = a4 and normalize them:

a1² + a2² + a3² + a4² = 1  ∴  ai = 1/√4

Let's take ζ1 to be directed along the z axis, so b1 = c1 = 0:

ζ1 = 1/√4(2s) + d1(2pz)

Since ζ1 must be normalized, the sum of the squares of the coefficients is equal to 1:

1/4 + d1² = 1;  d1 = √3/2

Therefore the first hybrid orbital looks like:

ζ1 = 1/√4(2s) + √3/2(2pz)

We now set the second hybrid orbital on the xz plane, therefore c2 = 0:

ζ2 = 1/√4(2s) + b2(2px) + d2(2pz)

Since these hybrid orbitals must comply with all the conditions of atomic orbitals they should also be orthonormal:

⟨ζ1|ζ2⟩ = δ12 = 0
1/4 + d2(√3/2) = 0
d2 = −1/(2√3)

Our second hybrid orbital is almost complete; we are only missing the value of b2:

ζ2 = 1/√4(2s) + b2(2px) − 1/(2√3)(2pz)

Again we make use of the normalization condition:

1/4 + b2² + 1/12 = 1;  b2 = √2/√3

Finally, our second hybrid orbital takes the following form:

ζ2 = 1/√4(2s) + √2/√3(2px) − 1/√12(2pz)

The procedure to obtain the remaining two hybrid orbitals is the same, but I'd like to stop here and analyze the relative direction ζ1 and ζ2 take from each other. To that end, we take the angular part of the hydrogen-like atomic orbitals involved in the linear combinations we just found.
Let us remember the canonical form of atomic orbitals and explicitly show the spherical harmonic functions to which the 2s, 2px, and 2pz atomic orbitals correspond:

ψ2s = (1/4π)½ R(r)
ψ2px = (3/4π)½ sinθcosφ R(r)
ψ2pz = (3/4π)½ cosθ R(r)

We substitute these in ζ2 and factorize R(r) and 1/√(4π):

ζ2 = (R(r)/√(4π))[1/√4 + √2 sinθcosφ − (√3/√12)cosθ]

We differentiate ζ2 with respect to θ (on the xz plane, where φ = 0) and set it to zero; the extremum of ζ2 with respect to the z axis gives the angle between the first two hybrid orbitals ζ1 and ζ2 (remember that ζ1 is projected entirely over the z axis):

dζ2/dθ = (R(r)/√(4π))[√2 cosθ + (√3/√12)sinθ] = 0
sinθ/cosθ = tanθ = −√8
θ = −70.53°, but since θ is measured from the z axis towards the xy plane this result is equivalent to the supplementary angle 180.0° − 70.53° = 109.47°, which is exactly the angle between the C-H bonds in methane we all know! And we didn't need to invoke the unpairing of electrons in full orbitals, the promotion of any electron into empty orbitals, nor the 'reorganization' of said orbitals into new ones. Orbital hybridization is nothing but a mathematical tool to find a set of orbitals which comply with the experimental observation, and that is the important thing here! To summarize, you can take any number of orbitals and build any linear combination you want, in order to comply with the observed geometry. Furthermore, no matter what hybridization scheme you follow, you still take the entire orbital; you cannot take half of it because they are basis functions. That is why you should never believe that any atom exhibits something like an sp2.5 hybridization just because its bond angles lie between 109° and 120°. Take a vector v = xi + yj + zk; even if you specify it to be v = 1/2i, that means x = 1/2, not that you took half of the unit vector i, and it doesn't mean you took nothing of j and k but rather that y = z = 0. This was a very lengthy post so please let me know if you read it all the way through by commenting, liking, or sharing.
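(A small appendix I'll add here: the whole derivation can be double-checked numerically by representing each hybrid as its coefficient vector in the (2s, 2px, 2py, 2pz) basis and computing the angle between the p-vector parts. This is just a sanity check consistent with the coefficients above, not part of the original derivation.)

```python
import numpy as np

# Coefficients of zeta_1 and zeta_2 in the (2s, 2px, 2py, 2pz) basis
z1 = np.array([1/np.sqrt(4), 0.0, 0.0, np.sqrt(3)/2])
z2 = np.array([1/np.sqrt(4), np.sqrt(2)/np.sqrt(3), 0.0, -1/np.sqrt(12)])

# Both hybrids are normalized and mutually orthogonal
assert np.isclose(z1 @ z1, 1.0) and np.isclose(z2 @ z2, 1.0)
assert np.isclose(z1 @ z2, 0.0)

# The pointing direction of each hybrid is set by its p-orbital part
p1, p2 = z1[1:], z2[1:]
angle = np.degrees(np.arccos(p1 @ p2 / (np.linalg.norm(p1) * np.linalg.norm(p2))))
assert np.isclose(angle, 109.47, atol=0.01)   # the tetrahedral angle
```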
Thanks for reading. No, seriously, why can't orbitals be observed? The concept of the electronic orbital has become such a useful and engraved tool in understanding chemical structure and reactivity that it has almost become one of those things whose original meaning has been lost and replaced by a utilitarian concept, one which is not bad in itself but that may lead to some wrong conclusions when certain fundamental facts are overlooked. Last week I wrote -what I thought was- a humorous post on this topic, because a couple of weeks ago a viewpoint was published in JPC-A by Pham and Gordon on the possibility of observing molecular orbitals through microscopy methods, which elicited a 'seriously? again?' reaction from me, since I distinctly remember the Nature article by Zuo from the year 2000, when I had just entered graduate school, titled "Direct observation of d-orbital holes." We discussed this paper in class and the discussion it prompted was very interesting at various levels: for starters, the allegedly observed d-orbital was strikingly similar to a dz2, which we had learned in class (thanks, prof. Carlos Amador!) is actually a linear combination of d(z2-x2) and d(z2-y2) orbitals, a mathematical -let's say- trick to conform to spectroscopic observations. Pham and Gordon are pretty clear in their first paragraph: "The wave function amplitude Ψ*Ψ is interpreted as the probability density. All observable atomic or molecular properties are determined by the probability and a corresponding quantum mechanical operator, not by the wave function itself.
Wave functions, even exact wave functions, are not observables." There is even another problem, about which I wrote a post a long time ago: orbitals are non-unique. This means that I could get a set of orbitals by solving the Schrödinger equation for any given molecule and then perform a unitary transformation on them (such as renormalizing them, re-orthonormalizing them to get a localized version, or even hybridizing them) and the electronic density derived from them would be the same! In quantum mechanical terms this means that the probability density associated with the wave function inner product, Ψ*Ψ, is not changed under unitary transformations; why then would a specific version be "observed" under a microscope? As Pham and Gordon state more eloquently, it has to do with the Density of States (DOS) rather than with the orbitals. Furthermore, an orbital, or more precisely a spinorbital, is conveniently (in math terms) separated into a radial, an angular and a spin component R(r)Ylm(θ,φ)σ(α,β), with the angular part given by the spherical harmonic functions Ylm(θ,φ), which in turn -when plotted in spherical coordinates- create the famous lobes we all chemists know and love. Zuo's observation claim was based on the resemblance of the observed density to the angular part of an atomic orbital. Another thing: orbitals have phases, and no experimental observation claims to have resolved those. Now, I may be entering a dangerous comparison but, can you observe a 2? If you say you just did, well, that "2" is just a symbol used to represent a quantity: two, the cardinality of a set containing two elements. You might as well depict such a quantity as "II" or "⋅⋅" but you still cannot observe "a two". (If any mathematician is reading this, please, be gentle.) I know a number and a function are different; sorry if I'm just rambling here and overextending a metaphor.
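The non-uniqueness argument is easy to demonstrate with a toy example (mine, not from the post): rotate a pair of occupied orbitals by any unitary matrix and the total density comes out unchanged.

```python
import numpy as np

# Two orthonormal "orbitals" sampled on a grid (toy discretized wavefunctions)
grid = np.linspace(-5.0, 5.0, 201)
phi1 = np.exp(-grid**2 / 2)
phi2 = grid * np.exp(-grid**2 / 2)
phi1 /= np.linalg.norm(phi1)
phi2 /= np.linalg.norm(phi2)
orbitals = np.column_stack([phi1, phi2])       # columns are the orbitals

# Mix the occupied orbitals with an arbitrary unitary (here a real rotation)
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rotated = orbitals @ U

# The total density sum_i |phi_i|^2 is invariant under the transformation
density = (orbitals**2).sum(axis=1)
density_rot = (rotated**2).sum(axis=1)
assert np.allclose(density, density_rot)
```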
Claiming to have observed an orbital through direct experimental methods is to neglect the Born interpretation of the wave function, Heisenberg's uncertainty principle and even Schrödinger's cat! (I know, I know, Schrödinger came up with this gedankenexperiment in order to refute the Copenhagen interpretation of quantum mechanics, but it seems like after all the cat is still not out of the box!) So, the take-home message from the viewpoint in JPC is that molecular properties are defined by the expected values of a given wave function for a specific quantum mechanical operator of the property under investigation, and not by the wave function itself. Wave functions are not observables, and although some imaging techniques seem to accomplish a formidable task, the physical impossibility hints at a misinterpretation of the facts. I think I'll write more about this in a future post, but for now my take-home message is to keep in mind that orbitals are wave functions and therefore are no more observable (as in imaging) than a partition function is in statistical mechanics. Dealing with Spin Contamination Most organic chemistry deals with closed shell calculations, but every once in a while you want to calculate carbenes, free radicals or radical transition states coming from a homolytic bond break, which means your structure is now open shell. Closed shell systems are characterized by having doubly occupied molecular orbitals, that is to say the calculation is 'restricted': two electrons with opposite spin occupy the same orbital. In open shell systems, unrestricted calculations have a complete set of orbitals for the electrons with alpha spin and another set for those with beta spin. Spin contamination arises from the fact that wavefunctions obtained from unrestricted calculations are no longer eigenfunctions of the total spin operator <S^2>. In other words, one obtains an artificial mixture of spin states; up until now we're dealing only with single reference methods.
With each step of the SCF procedure the value of <S^2> is calculated and compared to s(s+1), where s is half the number of unpaired electrons (0.75 for a doublet radical, 2.0 for a triplet, and so on); if a large deviation between these two numbers is found, the calculation stops. Gaussian includes an annihilation step during SCF to reduce the amount of spin contamination, but it's not 100% reliable. Spin contaminated wavefunctions aren't reliable and lead to errors in geometries, energies and population analyses. One solution to overcome spin contamination is using Restricted Open Shell calculations (ROHF, ROMP2, etc.), for which singly occupied orbitals are used for the unpaired electrons and doubly occupied ones for the rest. These calculations are far more expensive than the unrestricted ones and energies for the unpaired electrons (the interesting ones) are unreliable; especially, spin polarization is lost, since dynamical correlation is hardly accounted for. The IOP(5/14=2) in Gaussian uses the annihilated wavefunction for the population analysis if acceptable, but since Mulliken's method is not reliable either I don't advise it anyway. The case of DFT is different since ρ(α) and ρ(β) can be separated (similarly to the case of unrestricted ab initio calculations), but the fact that both densities are built of Kohn-Sham orbitals and not true canonical orbitals compensates the contamination somehow. That is not to say that it never shows up in DFT calculations, but it is usually less severe; of course, for hybrid functionals, the more HF exchange is included the more important spin contamination may become. So, in short, for spin contaminated wavefunctions you want to change from restricted to unrestricted, and if that doesn't work then move to Restricted Open Shell; if using DFT you can use the same scheme and also try changing from hybrid to pure functionals at the cost of CPU time.
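The target value ⟨S²⟩ = s(s+1) quoted above follows directly from the number of unpaired electrons; a two-line helper (a hypothetical name, just for illustration) makes the reference values explicit:

```python
def s_squared_exact(n_unpaired):
    """Exact <S^2> = s(s+1), with s = n_unpaired / 2 (in units of hbar^2)."""
    s = n_unpaired / 2.0
    return s * (s + 1.0)

assert s_squared_exact(1) == 0.75   # doublet radical
assert s_squared_exact(2) == 2.00   # triplet
assert s_squared_exact(3) == 3.75   # quartet
```

A computed ⟨S²⟩ far above these values signals significant contamination from higher spin states.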
There is a last option, which is using spin projection methods, but I’ll discuss that in a following post.

Rank your QM knowledge according to Pauli’s Exclusion Principle

QM Evolutionary tree!

LOL, just feeling a little humorous this morning!

New paper in JPC-A

XIth Mexican Meeting on Theoretical Physical Chemistry

For over a decade these meetings have gathered theoretical chemists every year to share and comment on their current work, and to give students the opportunity to interact with experienced researchers, some of whom in turn were students of Prof. Robert Parr, Prof. Richard Bader, or Prof. Per-Olov Löwdin. This year the Mexican Meeting on Theoretical Physical Chemistry took place last weekend in Toluca, where CCIQS is located. You can find links to this and previous meetings here. We participated with a poster, presented below (in Spanish, sorry), about our current research on the development of calixarenes and thia-calixarenes as drug carriers. In this particular case, we presented our study of the drug imatinib (branded as Gleevec by Novartis), a powerful tyrosine kinase inhibitor widely employed in the treatment of leukaemia. The International Journal of Quantum Chemistry is dedicating an issue to this meeting. As always, the meeting posed a great opportunity to reconnect with old friends, teachers, and colleagues, as well as to make new acquaintances; my favourite session is still the beer session after all the seminars! Kudos to María Eugenia “Maru” Sandoval-Salinas for this poster and the positive response it generated.
The physicist Eric J. Heller’s Transport XIII (2003), inspired by electron flow experiments conducted at Harvard. According to Heller, the image ‘shows two kinds of chaos: a random quantum wave on the surface of a sphere, and chaotic classical electron paths in a semiconductor launched over a range of angles from a particular point. Even though one is quantum mechanical and the other classical, they are related: the chaotic classical paths cause random quantum waves to appear when the classical system is solved quantum mechanically.’ — Eric J. Heller

Then in the 1920s, according to theories of Louis de Broglie and Erwin Schrödinger, it appeared that electrons, which had always been recognized as particles, under some circumstances behaved as waves. In order to account for the energies of the stable states of atoms, physicists had to give up the notion that electrons in atoms are little Newtonian planets in orbit around the atomic nucleus. Electrons in atoms are better described as waves, fitting around the nucleus like sound waves fitting into an organ pipe.1 The world’s categories had become all muddled. Worse yet, the electron waves are not waves of electronic matter, in the way that ocean waves are waves of water. Rather, as Max Born came to realize, the electron waves are waves of probability. That is, when a free electron collides with an atom, we cannot in principle say in what direction it will bounce off. The electron wave, after encountering the atom, spreads out in all directions, like an ocean wave after striking a reef. As Born recognized, this does not mean that the electron itself spreads out. Instead, the undivided electron goes in some one direction, but not a precisely predictable direction. It is more likely to go in a direction where the wave is more intense, but any direction is possible.
Probability was not unfamiliar to the physicists of the 1920s, but it had generally been thought to reflect an imperfect knowledge of whatever was under study, not an indeterminism in the underlying physical laws. Newton’s theories of motion and gravitation had set the standard of deterministic laws. When we have reasonably precise knowledge of the location and velocity of each body in the solar system at a given moment, Newton’s laws tell us with good accuracy where they will all be for a long time in the future. Probability enters Newtonian physics only when our knowledge is imperfect, as for example when we do not have precise knowledge of how a pair of dice is thrown. But with the new quantum mechanics, the moment-to-moment determinism of the laws of physics themselves seemed to be lost. All very strange. In a 1926 letter to Born, Einstein complained that the theory “hardly brings us closer to the secret of the Old One,” and that he was at all events convinced that “He does not play dice.” As late as 1964, in his Messenger lectures at Cornell, Richard Feynman lamented, “I think I can safely say that no one understands quantum mechanics.”3 With quantum mechanics, the break with the past was so sharp that all earlier physical theories became known as “classical.” The weirdness of quantum mechanics did not matter for most purposes. Physicists learned how to use it to do increasingly precise calculations of the energy levels of atoms, and of the probabilities that particles will scatter in one direction or another when they collide. Lawrence Krauss has labeled the quantum mechanical calculation of one effect in the spectrum of hydrogen “the best, most accurate prediction in all of science.”4 Beyond atomic physics, early applications of quantum mechanics listed by the physicist Gino Segrè included the binding of atoms in molecules, the radioactive decay of atomic nuclei, electrical conduction, magnetism, and electromagnetic radiation.5 Later applications spanned theories of semiconductivity and superconductivity, white dwarf stars and neutron stars, nuclear forces, and elementary particles.
Even the most adventurous modern speculations, such as string theory, are based on the principles of quantum mechanics. Many physicists came to think that the reaction of Einstein and Feynman and others to the unfamiliar aspects of quantum mechanics had been overblown. This used to be my view. After all, Newton’s theories too had been unpalatable to many of his contemporaries. Newton had introduced what his critics saw as an occult force, gravity, which was unrelated to any sort of tangible pushing and pulling, and which could not be explained on the basis of philosophy or pure mathematics. Also, his theories had renounced a chief aim of Ptolemy and Kepler, to calculate the sizes of planetary orbits from first principles. But in the end the opposition to Newtonianism faded away. Newton and his followers succeeded in accounting not only for the motions of planets and falling apples, but also for the movements of comets and moons and the shape of the earth and the change in direction of its axis of rotation. By the end of the eighteenth century this success had established Newton’s theories of motion and gravitation as correct, or at least as a marvelously accurate approximation. Evidently it is a mistake to demand too strictly that new physical theories should fit some preconceived philosophical standard. In quantum mechanics the state of a system is not described by giving the position and velocity of every particle and the values and rates of change of various fields, as in classical physics. Instead, the state of any system at any moment is described by a wave function, essentially a list of numbers, one number for every possible configuration of the system.6 If the system is a single particle, then there is a number for every possible position in space that the particle may occupy. 
This is something like the description of a sound wave in classical physics, except that for a sound wave a number for each position in space gives the pressure of the air at that point, while for a particle in quantum mechanics the wave function’s number for a given position reflects the probability that the particle is at that position. What is so terrible about that? Certainly, it was a tragic mistake for Einstein and Schrödinger to step away from using quantum mechanics, isolating themselves in their later lives from the exciting progress made by others. Even so, I’m not as sure as I once was about the future of quantum mechanics. It is a bad sign that those physicists today who are most comfortable with quantum mechanics do not agree with one another about what it all means. The dispute arises chiefly regarding the nature of measurement in quantum mechanics. This issue can be illustrated by considering a simple example, measurement of the spin of an electron. (A particle’s spin in any direction is a measure of the amount of rotation of matter around a line pointing in that direction.) All theories agree, and experiment confirms, that when one measures the amount of spin of an electron in any arbitrarily chosen direction there are only two possible results. One possible result will be equal to a positive number, a universal constant of nature. (This is the constant that Max Planck originally introduced in his 1900 theory of heat radiation, denoted h, divided by 4π.) The other possible result is its opposite, the negative of the first. These positive or negative values of the spin correspond to an electron that is spinning either clockwise or counter-clockwise in the chosen direction. But it is only when a measurement is made that these are the sole two possibilities. An electron spin that has not been measured is like a musical chord, formed from a superposition of two notes that correspond to positive or negative spins, each note with its own amplitude. 
Just as a chord creates a sound distinct from each of its constituent notes, the state of an electron spin that has not yet been measured is a superposition of the two possible states of definite spin, the superposition differing qualitatively from either state. In this musical analogy, the act of measuring the spin somehow shifts all the intensity of the chord to one of the notes, which we then hear on its own. This can be put in terms of the wave function. If we disregard everything about an electron but its spin, there is not much that is wavelike about its wave function. It is just a pair of numbers, one number for each sign of the spin in some chosen direction, analogous to the amplitudes of each of the two notes in a chord.7 The wave function of an electron whose spin has not been measured generally has nonzero values for spins of both signs. There is a rule of quantum mechanics, known as the Born rule, that tells us how to use the wave function to calculate the probabilities of getting various possible results in experiments. For example, the Born rule tells us that the probabilities of finding either a positive or a negative result when the spin in some chosen direction is measured are proportional to the squares of the numbers in the wave function for those two states of the spin.8 One response to this puzzle was given in the 1920s by Niels Bohr, in what came to be called the Copenhagen interpretation of quantum mechanics. According to Bohr, in a measurement the state of a system such as a spin collapses to one result or another in a way that cannot itself be described by quantum mechanics, and is truly unpredictable. This answer is now widely felt to be unacceptable. There seems no way to locate the boundary between the realms in which, according to Bohr, quantum mechanics does or does not apply. 
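The spin wave function and the Born rule described above can be made concrete in a few lines of code. This is my own sketch, not from the essay: the state is just two complex amplitudes, and the measurement probabilities are the squared magnitudes, normalized.

```python
import math

# Born rule for a single spin: the "wave function" is a pair of amplitudes
# (spin up, spin down along the chosen direction); probabilities are the
# squared magnitudes, normalized.

def born_probabilities(amp_up: complex, amp_down: complex):
    norm = abs(amp_up) ** 2 + abs(amp_down) ** 2
    return abs(amp_up) ** 2 / norm, abs(amp_down) ** 2 / norm

# An equal-weight superposition (the "chord" of the musical analogy)
# gives 50/50 outcomes:
p_up, p_down = born_probabilities(1 / math.sqrt(2), 1 / math.sqrt(2))
print(p_up, p_down)  # ~0.5 each
```

Unequal amplitudes simply shift the weights: `born_probabilities(math.sqrt(3)/2, 1/2)` gives roughly 3/4 and 1/4.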
As it happens, I was a graduate student at Bohr’s institute in Copenhagen, but he was very great and I was very young, and I never had a chance to ask him about this. The instrumentalist approach is a descendant of the Copenhagen interpretation, but instead of imagining a boundary beyond which reality is not described by quantum mechanics, it rejects quantum mechanics altogether as a description of reality. There is still a wave function, but it is not real like a particle or a field. Instead it is merely an instrument that provides predictions of the probabilities of various outcomes when measurements are made. It seems to me that the trouble with this approach is not only that it gives up on an ancient aim of science: to say what is really going on out there. It is a surrender of a particularly unfortunate kind. In the instrumentalist approach, we have to assume, as fundamental laws of nature, the rules (such as the Born rule I mentioned earlier) for using the wave function to calculate the probabilities of various results when humans make measurements. Thus humans are brought into the laws of nature at the most fundamental level. According to Eugene Wigner, a pioneer of quantum mechanics, “it was not possible to formulate the laws of quantum mechanics in a fully consistent way without reference to the consciousness.”11 Thus the instrumentalist approach turns its back on a vision that became possible after Darwin, of a world governed by impersonal physical laws that control human behavior along with everything else. It is not that we object to thinking about humans. Rather, we want to understand the relation of humans to nature, not just assuming the character of this relation by incorporating it in what we suppose are nature’s fundamental laws, but rather by deduction from laws that make no explicit reference to humans. We may in the end have to give up this goal, but I think not yet. 
Some physicists who adopt an instrumentalist approach argue that the probabilities we infer from the wave function are objective probabilities, independent of whether humans are making a measurement. I don’t find this tenable. In quantum mechanics these probabilities do not exist until people choose what to measure, such as the spin in one or another direction. Unlike the case of classical physics, a choice must be made, because in quantum mechanics not everything can be simultaneously measured. As Werner Heisenberg realized, a particle cannot have, at the same time, both a definite position and a definite velocity. The measuring of one precludes the measuring of the other. Likewise, if we know the wave function that describes the spin of an electron we can calculate the probability that the electron would have a positive spin in the north direction if that were measured, or the probability that the electron would have a positive spin in the east direction if that were measured, but we cannot ask about the probability of the spins being found positive in both directions because there is no state in which an electron has a definite spin in two different directions. These problems are partly avoided in the realist—as opposed to the instrumentalist—approach to quantum mechanics. Here one takes the wave function and its deterministic evolution seriously as a description of reality. But this raises other problems.

Erwin Schrödinger; drawing by David Levine

The realist approach has a very strange implication, first worked out in the 1957 Princeton Ph.D. thesis of the late Hugh Everett.
When a physicist measures the spin of an electron, say in the north direction, the wave function of the electron and the measuring apparatus and the physicist are supposed, in the realist approach, to evolve deterministically, as dictated by the Schrödinger equation; but in consequence of their interaction during the measurement, the wave function becomes a superposition of two terms, in one of which the electron spin is positive and everyone in the world who looks into it thinks it is positive, and in the other the spin is negative and everyone thinks it is negative. Since in each term of the wave function everyone shares a belief that the spin has one definite sign, the existence of the superposition is undetectable. In effect the history of the world has split into two streams, uncorrelated with each other. This is strange enough, but the fission of history would not only occur when someone measures a spin. In the realist approach the history of the world is endlessly splitting; it does so every time a macroscopic body becomes tied in with a choice of quantum states. This inconceivably huge variety of histories has provided material for science fiction,12 and it offers a rationale for a multiverse, in which the particular cosmic history in which we find ourselves is constrained by the requirement that it must be one of the histories in which conditions are sufficiently benign to allow conscious beings to exist. But the vista of all these parallel histories is deeply unsettling, and like many other physicists I would prefer a single history. There is another thing that is unsatisfactory about the realist approach, beyond our parochial preferences. In this approach the wave function of the multiverse evolves deterministically. 
We can still talk of probabilities as the fractions of the time that various possible results are found when measurements are performed many times in any one history; but the rules that govern what probabilities are observed would have to follow from the deterministic evolution of the whole multiverse. If this were not the case, to predict probabilities we would need to make some additional assumption about what happens when humans make measurements, and we would be back with the shortcomings of the instrumentalist approach. Several attempts following the realist approach have come close to deducing rules like the Born rule that we know work well experimentally, but I think without final success. The realist approach to quantum mechanics had already run into a different sort of trouble long before Everett wrote about multiple histories. It was emphasized in a 1935 paper by Einstein with his coworkers Boris Podolsky and Nathan Rosen, and arises in connection with the phenomenon of “entanglement.”13 We naturally tend to think that reality can be described locally. I can say what is happening in my laboratory, and you can say what is happening in yours, but we don’t have to talk about both at the same time. But in quantum mechanics it is possible for a system to be in an entangled state that involves correlations between parts of the system that are arbitrarily far apart, like the two ends of a very long rigid stick. For instance, suppose we have a pair of electrons whose total spin in any direction is zero. In such a state, the wave function (ignoring everything but spin) is a sum of two terms: in one term, electron A has positive spin and electron B has negative spin in, say, the north direction, while in the other term in the wave function the positive and negative signs are reversed. The electron spins are said to be entangled. If nothing is done to interfere with these spins, this entangled state will persist even if the electrons fly apart to a great distance. 
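The two-term entangled spin state described above can be written out explicitly. The following is an illustrative sketch of mine, not from the essay: the state is stored as amplitudes over joint outcomes, and the Born rule gives the correlated probabilities.

```python
import math

# The spin part of the two-electron state with total spin zero (the singlet),
# as amplitudes over joint outcomes (spin of A, spin of B) along north:
amp = {(+1, -1): 1 / math.sqrt(2), (-1, +1): -1 / math.sqrt(2)}

# Born rule: joint probabilities are squared amplitudes.
probs = {outcome: a * a for outcome, a in amp.items()}
print(probs)  # each anticorrelated outcome has probability ~1/2

# Outcomes (+1, +1) and (-1, -1) carry zero amplitude, so the spins are
# perfectly anticorrelated and the total spin along the direction is zero:
total = sum(p * (a + b) for (a, b), p in probs.items())
print(total)
```

However far apart A and B are, this single table describes both of them at once, which is exactly the nonlocality that troubled Einstein, Podolsky, and Rosen.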
However far apart they are, we can only talk about the wave function of the two electrons, not of each separately. Entanglement contributed to Einstein’s distrust of quantum mechanics as much or more than the appearance of probabilities. Strange as it is, the entanglement entailed by quantum mechanics is actually observed experimentally. But how can something so nonlocal represent reality? On the other hand, the problems of understanding measurement in the present form of quantum mechanics may be warning us that the theory needs modification. Quantum mechanics works so well for atoms that any new theory would have to be nearly indistinguishable from quantum mechanics when applied to such small things. But a new theory might be designed so that the superpositions of states of large things like physicists and their apparatus even in isolation suffer an actual rapid spontaneous collapse, in which probabilities evolve to give the results expected in quantum mechanics. The many histories of Everett would naturally collapse to a single history. The goal in inventing a new theory is to make this happen not by giving measurement any special status in the laws of physics, but as part of what in the post-quantum theory would be the ordinary processes of physics. One difficulty in developing such a new theory is that we get no direction from experiment—all data so far agree with ordinary quantum mechanics. We do get some help, however, from some general principles, which turn out to provide surprisingly strict constraints on any new theory. Obviously, probabilities must all be positive numbers, and add up to 100 percent. There is another requirement, satisfied in ordinary quantum mechanics, that in entangled states the evolution of probabilities during measurements cannot be used to send instantaneous signals, which would violate the theory of relativity. Special relativity requires that no signal can travel faster than the speed of light. 
When these requirements are put together, it turns out that the most general evolution of probabilities satisfies an equation of a class known as Lindblad equations.14 The class of Lindblad equations contains the Schrödinger equation of ordinary quantum mechanics as a special case, but in general these equations involve a variety of new quantities that represent a departure from quantum mechanics. These are quantities whose details of course we now don’t know. Though it has been scarcely noticed outside the theoretical community, there already is a line of interesting papers, going back to an influential 1986 article by Gian Carlo Ghirardi, Alberto Rimini, and Tullio Weber at Trieste, that use the Lindblad equations to generalize quantum mechanics in various ways. Lately I have been thinking about a possible experimental search for signs of departure from ordinary quantum mechanics in atomic clocks. At the heart of any atomic clock is a device invented by the late Norman Ramsey for tuning the frequency of microwave or visible radiation to the known natural frequency at which the wave function of an atom oscillates when it is in a superposition of two states of different energy. This natural frequency equals the difference in the energies of the two atomic states used in the clock, divided by Planck’s constant. It is the same under all external conditions, and therefore serves as a fixed reference for frequency, in the way that a platinum-iridium cylinder at Sèvres serves as a fixed reference for mass. Tuning the frequency of an electromagnetic wave to this reference frequency works a little like tuning the frequency of a metronome to match another metronome. If you start the two metronomes together and the beats still match after a thousand beats, you know that their frequencies are equal at least to about one part in a thousand. 
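For reference, the Lindblad equations mentioned above have a standard general form in the literature (often called the GKSL form; the notation here is the conventional one, not taken from this essay): the density matrix \(\rho\) evolves as

```latex
\frac{d\rho}{dt} \;=\; -\frac{i}{\hbar}\,[H, \rho]
  \;+\; \sum_k \gamma_k \left( L_k\, \rho\, L_k^\dagger
  \;-\; \tfrac{1}{2}\left\{ L_k^\dagger L_k,\; \rho \right\} \right)
```

When all the coefficients \(\gamma_k\) vanish, only the commutator term survives and the equation reduces to ordinary quantum mechanics (the Schrödinger equation for the density matrix), which is the sense in which the Schrödinger equation is a special case; the new quantities representing a departure from quantum mechanics are the operators \(L_k\) and rates \(\gamma_k\), whose details we do not know.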
Quantum mechanical calculations show that in some atomic clocks the tuning should be precise to one part in a hundred million billion, and this precision is indeed realized. But if the corrections to quantum mechanics represented by the new terms in the Lindblad equations (expressed as energies) were as large as one part in a hundred million billion of the energy difference of the atomic states used in the clock, this precision would have been quite lost. The new terms must therefore be even smaller than this. How significant is this limit? Unfortunately, these ideas about modifications of quantum mechanics are not only speculative but also vague, and we have no idea how big we should expect the corrections to quantum mechanics to be. Regarding not only this issue, but more generally the future of quantum mechanics, I have to echo Viola in Twelfth Night: “O time, thou must untangle this, not I.”
Chemistry LibreTexts

4.3: The Particle-in-a-Box Model

The particle-in-a-box model is used to approximate the Hamiltonian operator for the \(\pi\) electrons because the full Hamiltonian is quite complex. The full Hamiltonian operator for each electron consists of the kinetic energy term \(\dfrac {-\hbar ^2}{2m} \dfrac {d^2}{dx^2}\) and the sum of the Coulomb potential energy terms \(\dfrac {q_1q_2}{4\pi \epsilon _0 r_{12}}\) for the interaction of each electron with all the other electrons and with the nuclei (\(q\) is the charge on each particle and \(r\) is the distance between them). Considering these interactions, the Hamiltonian for electron \(i\) is given below. \[ \hat {H} _i = \dfrac {- \hbar ^2}{2m} \dfrac {d^2}{dx^2} + \underset{\text{sum over electrons}}{ \sum _{j } \dfrac {e^2}{4 \pi \epsilon _0 r_{i, j}}} - \underset{\text{sum over nuclei}}{ \sum _{n} \dfrac {e^2 Z_n}{4 \pi \epsilon _0 r_{i,n}} } \label {4-1}\] The Schrödinger equation obtained with this Hamiltonian cannot be solved analytically because of the electron-electron interaction terms. Some approximations for the potential energy must be made. We want a model for the dye molecules that has a particularly simple potential energy function because we want to be able to solve the corresponding Schrödinger equation easily. The particle-in-a-box model has the necessary simple form. It also permits us to get directly at understanding the most interesting feature of these molecules, their absorption spectra. Figure \(\PageIndex{1}\): A diagram of the particle-in-a-box potential energy superimposed on a somewhat more realistic potential. The bond length is given by β, the overshoot by δ, and the length of the box by L = bβ + 2δ, where b is the number of bonds. As mentioned in the previous section, we assume that the π-electron motion is restricted to one dimension, left and right along the chain.
The average potential energy due to the interaction with the other electrons and with the nuclei is taken to be a constant except at the ends of the molecule. At the ends, the potential energy increases abruptly to a large value; this increase in the potential energy keeps the electrons bound within the conjugated part of the molecule. Figure \(\PageIndex{1}\) shows the classical particle-in-a-box potential function and the more realistic potential energy function. We have defined the constant potential energy for the electrons within the molecule as the zero of energy. One end of the molecule is set at \(x = 0\), the other at \(x = L\), and the potential energy goes to infinity at these points. For one electron located within the box, i.e. between \(x = 0\) and \(x = L\), the Hamiltonian is \[\hat {H} = \dfrac {-\hbar ^2}{2m} \dfrac {d^2}{dx^2}\] because \(V =0\), and the (time-independent) Schrödinger equation that needs to be solved is then \[\dfrac {- \hbar ^2}{2m} \dfrac {d^2}{dx^2} \psi (x) = E \psi (x) \label {4-2}\] We need to solve this differential equation to find the wavefunction and the energy. In general, differential equations have multiple solutions (solutions that are families of functions), so by solving this equation we will find all the wavefunctions and all the energies for the particle-in-a-box. There are many ways of solving differential equations, and you will see some of them illustrated here and in subsequent chapters. One way is to recognize functions that might satisfy the equation. This equation says that differentiating the function twice produces the function times a constant. What kinds of functions have you seen that regenerate the function after differentiating twice? Exponential functions and sine and cosine functions come to mind.
Example \(\PageIndex{1}\) Use \(\sin(kx)\), \(\cos(kx)\), and \(e^{ikx}\) for the possible wavefunctions in Equation \(\ref{4-2}\) and differentiate twice to demonstrate that each of these functions satisfies the Schrödinger equation for the particle-in-a-box. Example \(\PageIndex{1}\) leads you to the following three equations. \[\dfrac {\hbar ^2 k^2}{2m} \sin (kx) = E \sin (kx) \label {4-3}\] \[\dfrac {\hbar ^2 k^2}{2m} \cos (kx) = E \cos (kx) \label {4-4}\] \[\dfrac {\hbar ^2 k^2}{2m} e^{ikx} = E e^{ikx} \label {4-5}\] For the equalities expressed by these equations to hold, \(E\) must be given by \[E = \dfrac {\hbar ^2 k^2}{2m} \label {4-6}\] Kinetic energy is the momentum squared divided by twice the mass \(p^2/2m\), so we conclude from Equation \(\ref{4-6}\) that \(ħ^2k^2 = p^2\). Solutions to differential equations that describe the real world also must satisfy conditions imposed by the physical situation. These conditions are called boundary conditions. For the particle-in-a-box, the particle is restricted to the region of space occupied by the conjugated portion of the molecule, between \(x = 0\) and \(x = L\). If we make the large potential energy at the ends of the molecule infinite, then the wavefunctions must be zero at \(x = 0\) and \(x = L\) because the probability of finding a particle with an infinite energy should be zero. Otherwise, the world would not have an energy resource problem. This boundary condition therefore requires that \(ψ(0) = ψ(L) = 0\). Example \(\PageIndex{2}\) Which of the functions \(\sin(kx)\), \(\cos(kx)\), or \(e^{ikx}\) is 0 when \(x = 0\)? As you discovered in Example \(\PageIndex{2}\) for these three functions, only \(\sin(kx) = 0\) when \(x = 0\). Consequently only \(\sin(kx)\) is a physically acceptable solution to the Schrödinger equation. The boundary condition described above also requires us to set \(ψ(L) = 0\). \[ψ(L) = \sin(kL) = 0 \label {4-7}\] The sine function will be zero if \(kL = n\pi\) with \(n = 1,2,3, \cdots\).
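A quick numerical sanity check of Equations 4-3 and 4-4 can be done with finite differences. This is my own sketch, in units where \(ħ = m = 1\) (an illustrative choice, not part of the text), verifying that \(\sin(kx)\) and \(\cos(kx)\) satisfy \(-(ħ^2/2m)\,ψ'' = Eψ\) with \(E = ħ^2k^2/2m\).

```python
import math

def second_derivative(f, x, h=1e-4):
    # Central finite-difference approximation to f''(x).
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

hbar = m = 1.0   # natural units (illustrative choice)
k = 2.0
E = hbar**2 * k**2 / (2 * m)   # Equation 4-6

x0 = 0.3  # any interior point works
for psi in (lambda x: math.sin(k * x), lambda x: math.cos(k * x)):
    lhs = -(hbar**2 / (2 * m)) * second_derivative(psi, x0)
    print(abs(lhs - E * psi(x0)) < 1e-6)  # True for both functions
```

The same check passes for \(e^{ikx}\) if complex arithmetic (`cmath`) is used instead.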
In other words, \[ k = \dfrac {n \pi}{L} \label {4-8}\] with \(n = 1, 2, 3 \cdots\) Note that \(n = 0\) is not acceptable here because this makes the wave vector zero \(k = 0\), so \(\sin(kx) = 0\), and thus \(ψ(x)\) is zero everywhere. If the wavefunction were zero everywhere, the probability of finding the electron would be zero everywhere. This clearly is not acceptable because it means there is no electron. Example \(\PageIndex{3}\) Show that \(\sin(kx) = 0\) at \(x = L\) if \(k = n\pi/L\) and \(n\) is an integer.

Negative Quantum Numbers

It appears that a negative integer also would work for \(n\) because \[\sin \left ( \dfrac {-n \pi}{L} x \right ) = - \sin \left ( \dfrac {n \pi}{L} x \right ) \label {4-9}\] which also satisfies the boundary condition at \(x = L\). The reason negative integers are not used is a bit subtle. Changing \(n\) to \(–n\) just changes the sign (also called the phase) of the wavefunction from + to -, and does not produce a function describing a new state of the particle. Note that the probability density for the particle is the absolute square of the function, and the energies are the same for \(n\) and \(–n\). Also, since the wave vector \(k\) is associated with the momentum (\(p = ħk\)), \(n > 0\) means \(k > 0\), corresponding to momentum in the positive direction, and \(n < 0\) means \(k < 0\), corresponding to momentum in the negative direction. By using Euler’s formula one can show that the sine function incorporates both \(k\) and \(–k\) since \[ \sin (kx) = \dfrac {1}{2i} ( e^{ikx} - e^{-ikx} ) \label {4-10}\] so changing \(n\) to \(–n\) and \(k\) to \(–k\) does not produce a function describing a new state, because both momentum states already are included in the sine function.
The set of wavefunctions that satisfies both boundary conditions is given by \[ \psi _n (x) = N \sin \left ( \dfrac {n \pi}{L} x \right ) \text {with } n = 1, 2, 3, \cdots \label {4-11}\] The normalization constant N is introduced and evaluated to satisfy the normalization requirement. \[ \int \limits _0^L \psi ^* (x) \psi (x) dx = 1 \label {4-12}\] \[N^2 \int \limits _0^L \sin ^2 \left ( \dfrac {n \pi x}{L} \right ) dx = 1 \label {4-13}\] \[N = \sqrt{ \dfrac{1}{\int \limits _0^L \sin ^2 \dfrac {n\pi x}{L} dx} } \label {4-14}\] \[ N = \sqrt{ \dfrac {2}{L}} \label {4-15}\] Finally we write the wavefunction: \[ \psi _n (x) = \sqrt{ \dfrac {2}{L} } \sin \left ( \dfrac {n \pi}{L} x \right ) \label {4-16}\] Example \(\PageIndex{4}\) Evaluate the integral in Equation \(\ref{4-13}\) and show that \(N = (2/L)^{1/2}\). By finding the solutions to the Schrödinger equation and imposing boundary conditions, we have found a whole set of wavefunctions and corresponding energies for the particle-in-a box. The wavefunctions and energies depend upon the number n, which is called a quantum number. In fact there are an infinite number of wavefunctions and energy levels, corresponding to the infinite number of values for \(n\) (\(n = 1 \rightarrow \infty\)). The wavefunctions are given by Equation \(\ref{4-16}\) and the energies by Equation \(\ref{4-6}\). If we substitute the expression for k from Equation \(\ref{4-8}\) into Equation \(\ref{4-6}\), we obtain the equation for the energies \(E_n\) \[ E_n = \dfrac {n^2 \pi ^2 \hbar ^2}{2mL^2} = n^2 \left (\dfrac {h^2}{8mL^2} \right ) \label {4-17}\] Example \(\PageIndex{4}\) Substitute the wavefunction, Equation \(\ref{4-16}\), into Equation \(\ref{4-2}\) and differentiate twice to obtain the expression for the energy given by Equation \(\ref{4-17}\). From Equation \(\ref{4-17}\) we see that the energy is quantized in units of \(\dfrac {h^2}{8mL^2}\); i.e. only certain values for the energy of the particle are possible.
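The integral in Equation 4-13 can also be checked numerically. The following is a minimal sketch (my own, using the midpoint rule with \(L = 1\) and an arbitrary quantum number as illustrative choices) confirming that the integral equals \(L/2\), so that \(N = \sqrt{2/L}\) as in Equation 4-15.

```python
import math

L = 1.0       # box length (arbitrary units, illustrative)
n = 3         # any positive integer quantum number
steps = 100_000

# Midpoint-rule approximation of the integral of sin^2(n*pi*x/L) over [0, L]:
dx = L / steps
integral = sum(math.sin(n * math.pi * (i + 0.5) * dx / L) ** 2 * dx
               for i in range(steps))
print(integral)          # ~L/2, so N = sqrt(2/L)
print(math.sqrt(2 / L))  # the normalization constant of Equation 4-15
```

Changing `n` to any other positive integer leaves the integral at \(L/2\), which is why \(N\) does not depend on the quantum number.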
This quantization, the dependence of the energy on integer values for \(n\), results from the boundary conditions requiring that the wavefunction be zero at certain points. We will see in other chapters that quantization generally is produced by boundary conditions and the presence of Planck's constant in the equations. The lowest-energy state of a system is called the ground state. Note that the ground-state (\(n = 1\)) energy of the particle-in-a-box is not zero. This energy is called the zero-point energy.

Example \(\PageIndex{5}\)

Here is a neat way to deduce or remember the expression for the particle-in-a-box energies. The momentum of a particle has been shown to be equal to \(\hbar k\). Show that this momentum, with \(k\) constrained to be equal to \(n\pi/L\), combined with the classical expression for the kinetic energy in terms of the momentum, \(p^2/2m\), produces Equation \(\ref{4-17}\). Determine the units for \(\dfrac {h^2}{8mL^2}\) from the units for \(h\), \(m\), and \(L\).

Example \(\PageIndex{6}\)

Why must the wavefunction for the particle-in-a-box be normalized? Show that \(\psi(x)\) in Equation \(\ref{4-16}\) is normalized.

Example \(\PageIndex{6}\)

Use a spreadsheet program, Mathcad, or other suitable software to construct an accurate energy-level diagram and to plot the wavefunctions and probability densities for a particle-in-a-box with \(n = 1\) to \(6\). You can make your graphs universal, i.e. applicable to any particle in any box, by using the quantity \(h^2/8mL^2\) as your unit of energy and \(L\) as your unit of length. To make these universal graphs, plot \(n^2\) on the y-axis of the energy-level diagram, and plot \(x/L\) from \(0\) to \(1\) on the x-axis of your wavefunction and probability density graphs.

Example \(\PageIndex{7}\)

How does the energy of the electron depend on the size of the box and the quantum number \(n\)?
What is the significance of these variations with respect to the spectra of cyanine dye molecules with different numbers of carbon atoms and pi electrons? Plot \(E(n^2)\), \(E(L^2)\), and \(E(n)\) on the same figure and comment on the shape of each curve.

The quantum number serves as an index to specify the energy and wavefunction or state. Note that \(E_n\) for the particle-in-a-box varies as \(n^2\) and as \(1/L^2\), which means that as \(n\) increases the energies of the states get further apart, and as \(L\) increases the energies get closer together. How the energy varies with increasing quantum number depends on the nature of the particular system being studied; be sure to take note of the relationship for each case that is discussed in subsequent chapters.
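For readers who prefer a numerical check over the spreadsheet exercise, the quantization rule \(E_n = n^2\,(h^2/8mL^2)\) can be confirmed by applying the kinetic-energy operator to the wavefunctions with finite differences; a minimal sketch in units \(\hbar = m = L = 1\), where \(h^2/8mL^2 = \pi^2/2\):

```python
import numpy as np

# units hbar = m = L = 1, so the energy unit h^2/(8 m L^2) equals pi^2/2
unit = np.pi ** 2 / 2.0

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

for n in (1, 2, 3):
    psi = np.sqrt(2.0) * np.sin(n * np.pi * x)
    # central finite difference for psi'' at the interior grid points
    d2 = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx ** 2
    # local energy (-1/2) psi'' / psi, evaluated away from the nodes
    mask = np.abs(psi[1:-1]) > 0.1
    E_local = -0.5 * d2[mask] / psi[1:-1][mask]
    # E_n should equal n^2 in units of h^2/(8 m L^2)
    assert np.allclose(E_local / unit, n ** 2, rtol=1e-4)
```

The same grid of \(\psi_n\) values can be reused directly for the universal wavefunction and probability-density plots requested above.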
Topic outline

• General

About this Course

Chemical reactions underpin the production of pretty much everything in our modern world. But what is the driving force behind reactions? Why do some reactions occur over geological time scales whilst others are so fast that we need femtosecond-pulsed lasers to study them? Ultimately, what is going on at the atomic level? Discover the answers to such fundamental questions and more on this course in introductory physical chemistry.

The course covers the key concepts of three of the principal topics in first-year undergraduate physical chemistry: thermodynamics, kinetics and quantum mechanics. These three topics cover whether or not reactions occur, how fast they go, and what is actually going on at the sub-atomic scale.

• Blended Learning Space for BNU
Restricted: not available unless you belong to a group in the College of Chemistry, Beijing Normal University (北京师范大学化学学院).

• Thermodynamics I
This module explores thermodynamic definitions, the zeroth law of thermodynamics and temperature, the first law of thermodynamics and enthalpy, reversible expansion, and heat capacity.
Page: 1, Video files: 7, Quiz: 1, Files: 2

• Thermodynamics II
This module explores the second law of thermodynamics and entropy, the second law of thermodynamics and spontaneity, the second law of thermodynamics and equilibrium, the third law of thermodynamics and absolute entropy, and Hess' Law.
Page: 1, Video files: 8, Quizzes: 2

• Virtual Lab 1: Thermodynamics
This lab allows you to further explore thermodynamics.
Page: 1, Video files: 5, Quiz: 1, File: 1

• Chemical Kinetics I
Page: 1, Video files: 11, Quiz: 1

• Chemical Kinetics II
This module explores complex reactions, the steady-state approximation, and catalysis.
Page: 1, Video files: 10, Quizzes: 2

• Virtual Lab 2: Kinetics
This lab allows you to further explore kinetics.
Page: 1, Video files: 4, Quiz: 1

• Quantum Chemistry I
This module explores Planck's quantum of energy, the particle nature of light, the wave nature of matter, Heisenberg's uncertainty principle, the Schrödinger equation, the free particle and the particle in a box, Born's interpretation of the wavefunction, and normalisation of the wavefunction.
Page: 1, Video files: 12, Quiz: 1

• Weekly Quizzes
Quizzes: 4
This is inspired by an amazingly successful question on Operations Research Stack Exchange: What are the great unsolved problems in operations research?

Wikipedia has some huge lists of:

• unsolved problems in physics
• unsolved problems in chemistry

But neither of them even mentions the fact that the universal functional in DFT is unknown!

Some great problems (not in either of the above lists, as far as I know!) are discussed in these answers:

• Finding a multi-electron relativistic and quantum mechanical method:
  • the Schrödinger equation is non-relativistic,
  • the Klein-Gordon equation is relativistic but only works for spinless particles,
  • the Dirac equation is a one-electron equation and only approximates QM to first order in $\alpha$,
  • the Dirac-Coulomb-Breit equation involves interacting electrons but is not invariant with respect to Lorentz transformations (it is no longer properly relativistic), and like the Dirac equation it is not properly quantum mechanical either, since it is derived from first-order perturbation theory in the fine-structure constant $\alpha$!
  • $\therefore$ there is no multi-electron, relativistic, quantum mechanical equation analogous to the above four for a single electron.
• High-temperature superconductivity: for low temperatures we have BCS theory, but for high-temperature superconductors we cannot even predict $T_c$ (the critical temperature).
• How do we get multi-reference coupled cluster working as well as CCSD(T) works for single-reference systems?
• Can we come up with a black-box multi-reference method, the way CCSD(T) is black-box for single-reference systems?
• Is there a robust way to automatically select active spaces?
• How best to reach the CBS limit for post-SCF methods? How do we solve the cusp problem?
• How do we go beyond Gaussian orbitals, and remain efficient?
• Can a quantum computer demonstrate beating a classical computer in the modeling of matter?

Can you explain any of these, or perhaps discuss the most recent progress, in up to 3 paragraphs?
What are some other unsolved problems in the computational / theoretical study of matter, and can you explain them in up to 3 paragraphs?

High-temperature superconductivity

[Figure: a high-$T_c$ superconductor levitating above a magnet]

Superconductivity is a fascinating macroscopic quantum phenomenon in which, as a material is cooled below a critical temperature, its electrical resistance abruptly vanishes. A superconductor can also expel magnetic flux, which allows levitation effects as shown in the picture above. The conventional form of superconductivity was first discovered in mercury in 1911 by Heike Kamerlingh Onnes, but it took until 1957 for the microscopic Bardeen-Cooper-Schrieffer (BCS) theory to explain its origin. In short, electrons form bound states called Cooper pairs, due to an effective attractive interaction mediated by phonons.

However, there is a less conventional, less understood cousin known as high-temperature superconductivity, or high-$T_c$ superconductivity. It is mentioned both on Wikipedia's unsolved problems in physics page and on the unsolved problems in chemistry page, but it equally applies to the study of matter. Since the 1986 discovery by Bednorz and Müller of superconductivity in a copper oxide, with a transition temperature of $35$ K (high for superconductors!), there has been an immense amount of experimental, computational and theoretical activity in the field. The goals are manifold, including finding a room-temperature superconductor and understanding the mechanism. Often these systems are very complex, formed from multi-layered crystals, and involve some degree of doping and electron-electron interactions, making their modeling a complex task indeed. Promising computational avenues include accurate simulations of model Hamiltonians (e.g. Hubbard Hamiltonians) in an effort to find the mechanism, and the ongoing development of suitable ab initio methods to model these systems.
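To make the model-Hamiltonian route a little more concrete, here is a toy illustration (a hypothetical minimal example, not a cuprate calculation): exact diagonalization of a two-site Hubbard model at half filling in the $S_z = 0$ sector. The basis ordering and hopping signs below are a conventional choice; the spectrum does not depend on them.

```python
import numpy as np

t, U = 1.0, 4.0  # hopping amplitude and on-site repulsion (arbitrary units)

# S_z = 0, half-filling basis: |up,dn>, |dn,up>, |updn,0>, |0,updn>
# (the first two states are singly occupied sites, the last two are doublons)
H = np.array([
    [0.0, 0.0, -t,  -t ],
    [0.0, 0.0, -t,  -t ],
    [-t,  -t,   U,  0.0],
    [-t,  -t,  0.0,  U ],
])

E = np.linalg.eigvalsh(H)  # eigenvalues in ascending order

# the dimer is exactly solvable: E0 = (U - sqrt(U^2 + 16 t^2)) / 2
exact = 0.5 * (U - np.sqrt(U ** 2 + 16.0 * t ** 2))
assert abs(E[0] - exact) < 1e-12
print(f"ground-state energy of the Hubbard dimer: {E[0]:.6f}")
```

Real cuprate-motivated studies, of course, treat vastly larger clusters with methods such as quantum Monte Carlo or DMRG; the dimer only shows the kind of object being diagonalized.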
At this point, I personally think that such approaches represent the most likely path to understanding these materials, barring some breakthrough. However, that doesn't mean progress has stopped elsewhere. For example, additional clues keep coming in from experiments establishing new classes of superconducting materials and surprising transport properties.

• $\begingroup$ @NikeDattani Thanks. Regarding your edit, I think manifold is the correct word, not manyfold. At least according to the dictionary, only one of them means diverse. $\endgroup$ – Anyon Jul 20 '20 at 22:21
• $\begingroup$ You're right, I'm sorry about that! I got confused by "manifold" because I'm too used to seeing that word in the context of Riemannian manifolds in general relativity, or "manifold of electronic states" in quantum chemistry. How about multifold? $\endgroup$ – Nike Dattani Jul 20 '20 at 22:25
• $\begingroup$ @NikeDattani On balance, I prefer manifold. But it does not matter much. If I saw the word in isolation I would probably first think of a topological space... It is a versatile word. $\endgroup$ – Anyon Jul 21 '20 at 3:08

Relativistic correlation methods are another interesting topic: usually one employs the no-pair approximation, which doesn't correlate the negative-energy states. However, there's really no reason why the negative-energy states shouldn't experience correlation effects as well...

I think there's been pretty good effort recently on the automatic selection of active spaces with the DMRG method; see J. Comput. Chem. 40, 2216 (2019). Somewhat similar approaches have also been used in earlier works; e.g. J. Chem. Phys. 140, 241103 (2014) ran large-active-space calculations to figure out a smaller active space in which the production-level calculations were run.

As to the beyond-Gaussian-orbitals question, numerical atomic orbitals (NAOs) are pretty good for this when combined with density-fitting approaches; e.g. here's a RI-CCSD(T) study with NAOs: J. Chem.
Theory Comput. 15, 4721 (2019).
Topological Matter in Artificial Gauge Fields

Sonic Landau levels and synthetic gauge fields in mechanical metamaterials
Abbaszadeh, Hamed

Mechanical strain can lead to a synthetic gauge field that controls the dynamics of electrons in graphene sheets as well as light in photonic crystals. Here, we show how to engineer an analogous synthetic gauge field for lattice vibrations. Our approach relies on one of two strategies: shearing a honeycomb lattice of masses and springs or patterning its local material stiffness. As a result, vibrational spectra with discrete Landau levels are generated. Upon tuning the strength of the gauge field, we can control the density of states and transverse spatial confinement of sound in the metamaterial. We also show how this gauge field can be used to design waveguides in which sound propagates with robustness against disorder, as a consequence of the change in topological polarization that occurs along a domain wall. By introducing dissipation, we can selectively enhance the domain-wall-bound topological sound mode, a feature that may potentially be exploited for the design of sound amplification by stimulated emission of radiation (SASERs, the mechanical analogs of lasers).

Probing quantum turbulence in He II by quantum evaporation measurements
Amelio, Ivan

In superfluid 4He, due to strong interactions, the density profile of a vortex line, as computed with quantum Monte Carlo, deviates from what is predicted by Gross-Pitaevskii (GP) mean-field theory. We find that the basic features of this density modulation are recovered in wave packets of a single rotonic excitation. This suggests correcting the current GP-based view of a vortex reconnection event as a source of phonon waves by including the emission of rotons. Quantum evaporation experiments at low temperature should be able to detect these non-thermal rotons.
Topology and dynamics in driven hexagonal lattices
Asteria, Luca

Ultracold atoms are a versatile system for studying the fascinating phenomena of gauge fields and topological band structures. By Floquet driving of optical lattices, the topology of the Bloch bands can be engineered. In this poster, we present experimental schemes for momentum-resolved Bloch-state tomography, which allow mapping out the Berry curvature and obtaining the Chern number. Furthermore, we discuss the dynamics of the wave function after a quench into the Floquet system. We observe the appearance of dynamical vortices, which trace out a closed contour whose topology can be directly mapped to the Chern number. Our measurements provide a new perspective on topology and dynamics and a unique starting point for studying interacting topological phases.

Spin-orbit coupling in a Bose-Einstein condensate: Triple well in momentum space
Cabedo Bru, Josep

Spin-orbit (SO) coupling, which links a particle's spin to its motion, plays a crucial role in the electronic properties of many condensed-matter systems, and it is at the basis of phenomena such as the spin Hall effect and topological insulators. The high level of control of ultracold atoms makes them ideal candidates for engineering spin-orbit coupling in neutral systems [1]. Here we show that by dressing three atomic internal states of a Bose-Einstein condensate (BEC) with two pairs of lasers in a double Raman configuration, the three atomic spin states of the BEC become coupled and a triple well in the 2D lowest band of the single-atom dispersion relation is obtained. The distance between the centres, the heights of the barriers, and the energy bias of the triple well in momentum space can be engineered by an appropriate manipulation of the laser intensities and detunings, while tunneling in momentum space is induced by the external trapping potential.
Interaction-dependent quantum phase transitions of the BEC ground state in such a triple-well potential in momentum space are predicted.

[1] Y. Zhang, M. E. Mossman, Th. Busch, P. Engels, and C. Zhang, Frontiers of Physics 11, 118103 (2016).

Tailoring the Fermi velocity in 2D Dirac Materials
Díaz Fernández, Álvaro

Motivation: Previous works aiming to modify the Fermi velocity in Dirac materials require cumbersome setups [1-3]. It is thus desirable to find new ways to tune this fundamental parameter. Our proposal is to embed different Dirac materials in a uniform electric field, something readily achievable in experiments.
Systems: Topological crystalline insulator/semiconductor interface, armchair graphene nanoribbons and carbon nanotubes.
Main result: The Fermi velocity is significantly reduced with increasing transverse electric field in Dirac materials. This result has been tested via continuum (Dirac equation), tight-binding and ab initio approaches [4-5].

[1] G. Li et al., Nat. Phys. 6, 109 (2010). [2] C. Hwang et al., Sci. Rep. 2, 590 (2012). [3] D. C. Elias et al., Nat. Phys. 7, 701 (2011). [4] A. D. F. et al., Scientific Reports 7, 8058 (2017). [5] A. D. F. et al., Physica E 93, 230 (2017).

A new machine for a dysprosium experiment
Du, Li

We introduce the scientific goals, the engineering progress and the new technology being developed at the new dysprosium lab being built at MIT.

Exact Edge and Bulk States of Topological Models and their Robustness Against an Impurity
Duncan, Callum

When considering topological states we usually use the bulk-edge correspondence to look for their existence. In this work we will not use the bulk-edge correspondence; instead, we will construct both the edge and bulk eigenfunctions analytically. It is known that the bulk states of certain topological models can be constructed via Bloch's theorem. We will discuss a general approach to constructing the bulk states of a finite system in one dimension.
Then, by extending Bloch's theorem, we construct the exact edge-state eigenfunctions. We fully prescribe a method of obtaining the form and properties of the edge and bulk eigenfunctions for a given one-dimensional periodic model. We extend the method to two dimensions by considering the dimensionally separable Hofstadter model. We also show that this method can be utilized to consider the robustness of a model to a static impurity localized at the edge or, in the two-dimensional case, a line defect across the edge. We observe that the presence of a single edge impurity can have a drastic effect on the edge state of a system. On increasing the impurity strength, for certain models, the topological edge state can be replaced (or joined) by a trivial bound state of the impurity, with an energy of the order of the impurity strength.

Enhanced chiral anomaly in the Floquet Schwinger model
Ebihara, Shu

Controlling quantum states by temporally periodic driving is actively studied with the use of Floquet theory. In such driven systems we can realize exotic properties which cannot be exhibited in undriven equilibrium states. In this study we analyze the Schwinger model, (1+1)-dimensional quantum electrodynamics (QED), under a temporally periodic electric field. Since the Schwinger model is a relativistic theory with fermions, we can expect a chiral anomaly. We show that the periodic external field plays the role of shifting the energy dispersions oppositely for right- and left-handed fermions, which is nothing but the spectral-flow nature of the chiral anomaly, and that this leads to a temporally oscillating chiral condensate.

Synthetic dimensions and chiral currents with spin-orbit-coupled two-electron ultracold fermions
Franchi, Lorenzo

We report on two different approaches to the quantum simulation of Hall-like systems subjected to an artificial gauge field.
Adopting an innovative scheme, we engineer a hybrid two-dimensional lattice characterized by a "real" dimension, provided by a 1D optical lattice, and a "synthetic" dimension encoded in the internal degrees of freedom of 173Yb. In the first experiment [1] the synthetic dimension is mapped out by performing a Raman coupling between the hyperfine states (F = 5/2) of the ground state of 173Yb. In this kind of experimental setup we observed chiral edge states, as well as their "skipping" trajectories. In the second major experiment [2] we demonstrate a new method to synthesize spin-orbit interaction which exploits the ultranarrow clock transition between the 1S0 and the long-lived 3P0 state in degenerate 173Yb atoms. For the first time we characterize the dependence of the amplitude of the chiral current on the magnetic flux, providing direct evidence of the inversion of the chiral-current sign when the magnetic flux increases above π. In the second experiment the presence of spin-orbit coupling has been detected by means of clock-transition spectroscopy, as proposed in [3], also taking advantage of a 642-km-long optical fiber link infrastructure connecting LENS to the Italian National Metrology Institute (INRiM).

Realizing and detecting a topological insulator in the AIII symmetry class
García Velasco, Carlos

Topological insulators in the AIII symmetry class lack experimental realization. Moreover, fractionalization in one-dimensional topological insulators has not yet been directly observed. Our work might open possibilities for both challenges. We propose a one-dimensional model realizing the AIII symmetry class which can be realized in current experiments with ultracold atomic gases. We further report on a distinctive property of topological edge modes in the AIII class: in contrast to those in the well-studied BDI class, they have non-zero momentum. Exploiting this feature, we propose a path for the detection of fractionalization.
A fermion added to an AIII system splits into two halves localized at opposite momenta, which can be detected by imaging the momentum distribution.

Real and imaginary parts of the conductivity of strongly interacting bosons in optical lattices
Grygiel, Barbara

Optical lattices filled with ultra-cold atomic gases can be thought of as a counterpart of solid-state systems, where the optical lattice plays the role of the ionic potential, while the ultra-cold atoms act as the charge carriers. Recent developments in experimental techniques have allowed the investigation of correlation functions and transport phenomena in such systems. We study the Bose-Hubbard model in the quantum rotor approach, which allows us to take into account spatial dependencies, such as dimensionality, lattice geometry, and the influence of gauge potentials. We calculate the conductivity of bosons in a two-dimensional lattice in a synthetic magnetic field. In such a scenario, two types of conductivity can be distinguished: intra- and inter-band. The interband contribution, usually omitted in analyses of multiband systems, appears to play a crucial role in the transport properties, as its values are a few orders of magnitude greater than the intraband one.

Topological Phases in Ultracold Fermionic Ladders
Haller, Andreas

Inspired by the recent experimental advances in the study of ultracold atoms trapped in optical lattices, we consider models of fermions hopping in ladder geometries and subject to artificial magnetic fluxes, such as [1, 2, 3]. By applying the concept of resonances in chiral currents [2], we find a parameter (the momentum component of the current in Fourier space) distinguishing between trivial and quantum Hall (QH) phases in non-interacting cases. We aim for evidence of fractional QH phases: in the case of nearest-neighbor Hubbard interactions, we identify a gap in the spin sector of the corresponding Luttinger liquid, leading to a resonant state at fractional filling factor ν = 1/2.
We support our analytic results with matrix product state (MPS) simulations [3].

References: [1] L. Mazza, M. Burrello et al., New J. Phys. 17, 105001 (2015). [2] E. Cornfeld and E. Sela, Phys. Rev. B 92, 115446 (2015). [3] A. Haller, M. Rizzi and M. Burrello, arXiv:1707.05715 (2017).

Characterizing interacting topological states of matter via charge pumps and single-particle topological invariants
Hayward, Andrew

Charge pumps in 1D systems can be used to probe the topology of 2D systems by associating a cyclic Hamiltonian parameter with an artificial quasi-momentum. We use this mapping to investigate topological phase transitions in the presence of interactions.

Transport in optical lattices with flux
Hudomal, Ana

Recent cold-atom experiments have realized artificial gauge fields in periodically modulated optical lattices [1,2]. We study the dynamics of atomic clouds in these systems by performing numerical simulations using the full time-dependent Hamiltonian and comparing these results to the semiclassical approximation. Under a constant external force, atoms in optical lattices with flux exhibit an anomalous velocity in the transverse direction. We investigate in detail how this transverse drift is related to the Berry curvature and the Chern number, taking into account realistic experimental conditions.

[1] G. Jotzu et al., Nature 515, 237 (2014). [2] M. Aidelsburger et al., Nature Phys. 11, 162 (2015).

Time-periodic driving of spinor condensates in a hexagonal optical lattice
Ilin, Alexander

Local topological invariant of the Interacting Hofstadter Interface
Irsigler, Bernhard

Analogue black hole in coupled pseudo-spin-1/2 Bosons
Kaur, Inderpreet

Quantum fluids such as ultracold condensates of bosonic atoms have long been suggested as important candidates for sonic black holes. The existence of such analogue black holes, their event horizons, and the related Hawking radiation was recently confirmed experimentally.
In this work we report a study of such a sonic black hole in pseudo-spin-1/2 bosons, the related modification of the sonic horizon, as well as the analogue space-time metric.

Exactly Solvable Topological Edge, Surface, Corner and Hinge States from Destructive Interference
Kunst, Flore

The main feature of topological phases is the presence of robust boundary states, which appear, for example, in the form of chiral edge states in Chern insulators and open Fermi arcs on the surfaces of Weyl semimetals. Recently, new higher-order topological phases were proposed in the form of corner and hinge states. Even though noninteracting topological systems can be straightforwardly described by fully periodic systems, the understanding of the corresponding boundary states has almost exclusively relied on numerical studies. We devised a generic recipe for constructing D-dimensional lattice models whose d-dimensional boundary states, located on edges, surfaces, corners, hinges and so forth, can be obtained exactly. The solvability of these states is rooted in the underlying lattice structure and does not as such depend on fine-tuning, which allows us to track their evolution throughout various phases and across phase transitions. On my poster, I present the generic method with which to find these exact solutions and provide explicit examples of chiral edge states, Fermi arcs, corner states and topologically protected hinge states. This is based on Phys. Rev. B 96, 085443 (2017) and arXiv:1712.07911.

Observation of the Higgs mode in a strongly interacting fermionic superfluid
Link, Martin

Higgs and Goldstone modes are possible collective modes of an order parameter upon spontaneously breaking a continuous symmetry. Whereas the low-energy Goldstone (phase) mode is always stable, additional symmetries are required to prevent the Higgs (amplitude) mode from rapidly decaying into low-energy excitations.
In high-energy physics, where the Higgs boson has been found after a decades-long search, the stability is ensured by Lorentz invariance. In the realm of condensed-matter physics, particle-hole symmetry can play this role, and a Higgs mode has been observed in weakly interacting superconductors. However, whether the Higgs mode is also stable for strongly correlated superconductors, in which particle-hole symmetry is not precisely fulfilled, or whether this mode becomes overdamped, has been the subject of numerous discussions. Experimental evidence is still lacking, in particular owing to the difficulty of exciting the Higgs mode directly. Here, we observe the Higgs mode in a strongly interacting superfluid Fermi gas. By inducing a periodic modulation of the amplitude of the superconducting order parameter $\Delta$, we observe an excitation resonance at frequency $2\Delta/h$. For strong coupling, the peak width broadens and eventually the mode disappears when the Cooper pairs turn into tightly bound dimers, signalling the eventual instability of the Higgs mode.

Exploring exotic orders by simple degrees of freedom coupled to a gauge theory
Liu, Ke

In condensed-matter physics, gauge theories are often considered an "emergent" phenomenon. They appear as an effective description of the collective behavior of some interacting systems at low energy. However, we could also reverse this methodology: taking gauge theories and some other degrees of freedom as initial inputs may give rise to exotic orders that correspond to the collective behavior of some underlying model. That is, instead of the gauge theory, the order is what emerges. In this presentation, I will attempt to give examples of this scenario by considering ordinary $O(n)$ rotors coupled to, typically discrete, gauge theories. This produces various "emergent" orders.
I will discuss the meaning of these orders from the perspective of statistical physics, in the hope that they can become realistic given the rapid development of engineered artificial gauge fields.

Cesium solitons
Mežnaršič, Tadej

When a non-interacting Bose-Einstein condensate is confined to a quasi-one-dimensional channel, it will spread due to dispersion, as dictated by the Schrödinger equation. The spreading rate can be affected by changing the interaction between the atoms via a Feshbach resonance. If the interaction is set to just the right value, the attraction between atoms exactly compensates the dispersion. In this case the BEC doesn't spread, and we get a bright matter-wave soliton. The maximum number of atoms in a soliton is limited by the frequency of the channel and the interaction between atoms. By setting the inter-atom interaction to different attractive values, we are able to create soliton trains with different numbers of solitons from elongated BECs.

Fractional quantum Hall physics in lattice systems
Nielsen, Anne Ersbak Bang

The fractional quantum Hall effect, which can be realized in certain two-dimensional systems at low temperature and high magnetic field, leads to many interesting properties, such as the possibility of having anyonic quasiparticles that are neither bosons nor fermions. There is currently much interest in investigating the possibilities for having fractional quantum Hall physics in lattice systems, both because it may lead to new ways to realize the effect, and because the lattice gives rise to new features and opportunities. Here, we propose a quite general approach based on conformal field theory to obtain lattice fractional quantum Hall models. The models have analytical ground states, and we use Monte Carlo simulations to compute, e.g., topological entanglement entropies and the shape and statistics of anyons.
We also discuss how one can interpolate between lattice and continuum fractional quantum Hall models, and propose a scheme to implement a related model with ultracold atoms in optical lattices.

High-frequency analysis of periodically driven quantum systems with slowly varying amplitude
Novičenko, Viktor

We consider a quantum system periodically driven with a strength which varies slowly on the scale of the driving period. The analysis is based on a general formulation of Floquet theory relying on the extended Hilbert space. It is shown that the dynamics of the system can be described in terms of a slowly varying effective Floquet Hamiltonian that captures the long-term evolution, as well as rapidly oscillating micromotion operators. We obtain a systematic high-frequency expansion of all these operators. Generalizing previous studies, the expanded effective Hamiltonian is now time-dependent and contains extra terms appearing due to changes in the periodic driving. The same applies to the micromotion operators, which exhibit a slow temporal dependence in addition to the rapid oscillations. As an illustration, we consider a quantum-mechanical spin in an oscillating magnetic field with a slowly changing direction. The effective evolution of the spin is then associated with non-Abelian geometric phases reflecting the geometry of the extended Floquet space. The developed formalism is general and also applies to other periodically driven systems, such as shaken optical lattices with a time-dependent shaking strength, a situation relevant to cold-atom experiments.

Versatile detection scheme for topological Bloch-state defects
Nuske, Marlon

The dynamics in solid-state systems is governed not only by the band structure but also by topological defects of the eigenstates. A paradigmatic example is the Dirac points in graphene.
For this system, with its two-atom basis, the linear dispersion relation at the Dirac points is accompanied by a vortex of the azimuthal phase of the eigenstates. In a time-of-flight (ToF) expansion the eigenstates interfere, and the resulting signal contains information about the azimuthal phase. We present a versatile detection scheme that uses off-resonant lattice modulation to extract the azimuthal phase from the ToF signal. This detection scheme is applicable to a variety of two-band systems and can be extended to general multi-band systems.

Competing quantum phases in the disordered Bose-Hubbard model
Pal, Sukla

The effect of disorder on the zero-temperature phase diagram of the two-dimensional Bose-Hubbard model has been studied in the presence of an artificial gauge field. Employing single-site Gutzwiller mean-field theory, we incorporate the effect of disorder, which reveals a Bose glass phase that impedes the direct transition from the Mott insulator to the superfluid phase. Incorporating nearest-neighbour interactions, density-wave states first start to appear at a nearest-neighbour strength of V_N = 0.02. Applying disorder in this regime shows the coexistence of Bose glass and disordered solid phases, depending on the nature and distribution of the disorder. Furthermore, we report the effect of a synthetic magnetic field on the Bose glass phase.

Edge states in bosonic honeycomb lattices
Pantaleon Peralta, Pierre Anthony

We investigate the properties of magnon edge states in a ferromagnetic honeycomb lattice with zig-zag, bearded and armchair boundaries. In contrast with fermionic graphene, we find novel edge states due to the missing bonds along the boundary sites. After introducing an external on-site potential at the outermost sites, we find that the energy spectra of the edge states are tunable. Additionally, when a non-trivial gap is induced, we find that some of the edge states are topologically protected and also tunable.
Our results may explain the origin of the novel edge states recently observed in photonic lattices. Optical Hall conductivity of the Haldane-Bose-Hubbard model Patucha, Konrad We study ultra-cold bosonic atoms in an optical lattice with gauge potentials. In order to describe these systems, we use the Bose-Hubbard model in the quantum rotor approximation. This allows us to include the influence of spatial correlations, which is necessary for a correct description of lattices with a non-zero Chern number, such as the Haldane model. We calculate the optical Hall conductivity and present its dependence on the temperature and model parameters. We identify two main transport channels and the excitations related to them. The results show that the spectral properties of the Berry curvature influence the transverse transport. Transition in traps of different shapes in a system of a few ultra-cold fermions Pęcak, Daniel The ground-state properties of a few spin-1/2 fermions with different masses and interacting via short-range contact forces are studied within an exact diagonalization approach. It is shown that, depending on the shape of the external confinement, different scenarios of the spatial separation between components, manifested by specific shapes of the density profiles, can be obtained in the strong interaction limit. We find that the ground state of the system undergoes a specific transition between orderings when the confinement is changed adiabatically from a uniform box to a harmonic oscillator shape. We study the properties of this transition in the framework of the finite-size scaling method adapted to few-body systems. Pelegrí, Gerard Recent theoretical and experimental studies have shown that it is possible to simulate artificial magnetic fields with ultracold atoms in optical lattices [1].
In particular, the possibility to implement chiral, topologically protected edge states analogous to those found in the context of quantum Hall physics has been demonstrated both for fermionic and bosonic atoms [2,3]. In this work, we propose an alternative strategy to implement robust edge-like states (ELS) with an ultracold atom carrying orbital angular momentum (OAM) in a diamond-chain optical lattice. The existence of these states is due to quantum interference effects, and they can be intuitively constructed as combinations of three-site spatial dark states (SDS). These states are very robust against different types of defects [4] and form a zero-energy flat band. For states with one unit of OAM, the l=1 case, the tunneling amplitudes depend both on the spatial localization and the winding number of the local states, and they may become complex depending on the relative position of the sites [5]. The ELS implemented in this manifold can display global chirality. In addition, the angular momentum degree of freedom opens a gap in the band structure that is not present in the absence of OAM, resembling the effect of a net flux through the plaquettes [6]. Finally, in the limit of unit filling and strong interactions we study the mapping of the system onto a spin-1/2 model with two-body nearest-neighbour interactions [7]. References [1] M. Aidelsburger, S. Nascimbene, N. Goldman, arXiv:1710.00851. [2] M. Mancini, G. Pagano, G. Cappellini, L. Livi, M. Rider, J. Catani, C. Sias, P. Zoller, M. Inguscio, M. Dalmonte, and L. Fallani, Science 349, 1510-1513 (2015). [3] B. K. Stuhl, H.I. Lu, L.M. Aycock, D. Genkina, and I.B. Spielman, Science 349, 1514-1518 (2015). [4] G. Pelegrí, J. Polo, A. Turpin, M. Lewenstein, J. Mompart, and V. Ahufinger, Phys. Rev. A 95, 013614 (2017). [5] J. Polo, J. Mompart, and V. Ahufinger, Phys. Rev. A 93, 033613 (2016). [6] A. A. Lopes and R. G. Dias, Phys. Rev. B 84, 085124 (2011). [7] G. Pelegrí et al., in preparation.
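The zero-energy flat band of a diamond chain can be checked with a few lines of numerics. The sketch below is an illustrative assumption on our part: it uses a plain spinless tight-binding model with a single real hopping t (no OAM, no complex tunnelings), so it only demonstrates the interference-induced flat band, not the full physics of the abstract; the function name and parameters are invented for illustration.

```python
import numpy as np

def diamond_chain_bands(t=1.0, nk=201):
    """Bloch bands of a plain (no-flux, no-OAM) diamond chain.

    Unit cell: hub site A coupled to satellite sites B and C; B and C
    couple onward to the next cell's hub.  Simplified spinless sketch.
    """
    ks = np.linspace(-np.pi, np.pi, nk)
    bands = np.empty((nk, 3))
    for i, k in enumerate(ks):
        f = t * (1.0 + np.exp(1j * k))      # A -> B/C coupling (intra + inter cell)
        h = np.array([[0, f, f],
                      [np.conj(f), 0, 0],
                      [np.conj(f), 0, 0]])
        bands[i] = np.linalg.eigvalsh(h)    # sorted real eigenvalues
    return ks, bands

ks, bands = diamond_chain_bands()
# The middle band stays pinned at zero energy for every k:
print(bands[:, 1].max() - bands[:, 1].min())
```

Destructive interference between the two paths through B and C pins the middle band at zero energy for all quasimomenta, which is the single-particle origin of the dark-state localization discussed above.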
A Versatile Strontium Quantum Gas Machine with a Microscope Piatchenkov, Sergei Strontium opens new perspectives for Hamiltonian engineering because it is an alkaline-earth element with narrow intercombination lines, metastable excited electronic states, and ten collisionally stable SU(N)-symmetric nuclear spin states. We have built a new versatile Sr machine with quantum gas microscope capability. After precooling on a broad blue transition, we collect 10^7 atoms at 2 µK in a narrow-line red MOT, load them into a 1064 nm dipole trap, and evaporatively cool them to obtain either a BEC or a degenerate Fermi gas of ~10^5 atoms. We have now also observed for the first time the doubly forbidden 1S0-3P2 transition in 87Sr by direct laser excitation, which opens up possibilities for quantum computation and gauge field engineering. Non-monotonic response and Klein-Gordon physics in gapless-to-gapped quantum quenches of one-dimensional free fermionic systems Porta, Sergio The properties of prototypical examples of one-dimensional free fermionic systems undergoing a sudden quantum quench between a gapless state characterized by a linear crossing of the energy bands and a gapped state are analyzed. By means of a Generalized Gibbs Ensemble analysis, we observe an anomalous non-monotonic response of steady-state correlation functions as a function of the strength of the mechanism opening the gap. In order to interpret this result, we calculate the full dynamical evolution of these correlation functions. We show that the latter is governed by a Klein-Gordon equation with a mass related to the gap-opening mechanism and an additional source term, which depends on the gap as well. The competition between the two terms explains the presence of the non-monotonic behavior. We conclude by arguing for the stability of the phenomenon in the cases of non-sudden quenches and higher dimensionality.
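The Klein-Gordon-type oscillation described in this abstract can be illustrated for a single momentum mode. The following sketch is not the authors' calculation: it assumes a minimal two-band Dirac Hamiltonian H0 = k*sigma_z quenched to H = k*sigma_z + m*sigma_x (symbols and parameter values chosen for illustration) and checks that post-quench observables oscillate at the gap frequency 2*sqrt(k^2 + m^2), i.e., with period pi/E.

```python
import numpy as np

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def quench_sx(k, m, times):
    """<sigma_x>(t) for one momentum mode after a gap-opening quench.

    Pre-quench: H0 = k*sz (gapless at k=0); we start in its ground state.
    Post-quench: H = k*sz + m*sx.  Single-mode illustrative sketch of the
    oscillations at frequency 2*sqrt(k**2 + m**2).
    """
    psi0 = np.array([0.0, 1.0])            # ground state of k*sz for k > 0
    h = k * sz + m * sx
    evals, evecs = np.linalg.eigh(h)
    c = evecs.conj().T @ psi0              # expand in post-quench eigenbasis
    out = []
    for t in times:
        psi = evecs @ (np.exp(-1j * evals * t) * c)
        out.append(np.real(psi.conj() @ sx @ psi))
    return np.array(out)

k, m = 0.7, 0.5
E = np.hypot(k, m)                         # post-quench single-particle energy
ts = np.linspace(0.0, 4 * np.pi / E, 400)
vals = quench_sx(k, m, ts)
```

Summing such modes over the Brillouin zone, with the mass term acting as the Klein-Gordon mass, is what produces the real-space dynamics analyzed in the abstract.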
Charge fractionalization in small fractional-Hall samples Račiūnas, Mantas The discovery of the fractional quantum Hall effect (FQHE) in the 2D electron gas gave rise to immense interest in topological phases of matter. One of the most intriguing features of the FQH state is its fractionally charged excitations, which embody anyonic statistics. Nowadays, experiments in optical lattices allow a much more controllable study of many-body systems, thereby giving access to regimes that are impossible to realise in semiconductor-based experiments. Historically, the FQHE comes from condensed matter systems, which are characterized by a very large number of particles; as a consequence, theoretical studies have focused on infinite or periodic Hamiltonians. However, a few questions remain unanswered: can FQHE states be realised in minuscule lattices, containing only several sites in diameter, and what additional effects would open boundaries produce? These questions are interesting not only from the fundamental point of view, but are crucial for the design of an experiment in optical lattices. Using numerical diagonalization of the interacting Harper-Hofstadter Hamiltonian, we were able to observe the localisation of fractional charge excitations in a square lattice using two different techniques. Driven-dissipative phase transitions of open quantum systems from a Floquet-Liouville perspective Reimer, Viktor The study of driven-dissipative open quantum systems prompted the emergence of a plethora of interesting new physics inaccessible to their equilibrium counterparts [S. Diehl et al., Nat. Phys. 4, 878 (2008)]. Combining Floquet's theorem with the general Liouvillian approach to open quantum systems [M. Grifoni and P. Hänggi, Phys. Rep. 304, 229 (1998)] provides powerful tools to investigate such systems beyond the adiabatic limit.
Here, we present a general method to calculate the quasistationary state of a driven-dissipative system coupled to a transmission line (and more generally, to a reservoir) with arbitrary coherent driving strength and modulation frequency of the system parameters. Applying this method, we extend our previous results based on the Floquet scattering theory [M. Pletyukhov et al., Phys. Rev. A 95, 043814 (2017)] for a two-level system with time-dependent parameters, which show the breakdown of the adiabaticity condition even for a slow time modulation. Secondly, we apply our method to a driven Lambda-system exhibiting electromagnetically induced transparency (EIT) and observe how the time modulation modifies the latter phenomenon. Our focus, however, lies on the third application - the single-mode Kerr nonlinearity model - where driving is considered across the point of the dissipative phase transition [A. Le Boite et al., Phys. Rev. A 95, 023829 (2017)]. The poster discusses the behaviour of observables in the quasistationary regime, going beyond the range of driving parameters studied previously. Attractive fermions in a 2D optical lattice with spin-orbit coupling: Charge order, superfluidity, and topological signatures Rosenberg, Peter Exotic states of matter, including high-Tc superconductors and topological phases, have long been a focus of condensed matter physics. With the recent advent of artificial spin-orbit coupling in ultracold gases, and the remarkable experimental control and enhanced interactions provided by optical lattices, a broad range of novel strongly correlated systems are quickly becoming experimentally accessible. One system of particular interest, given its potential impact on spintronics and quantum computation, is the attractive Fermi gas with spin-orbit coupling in a 2D optical lattice.
Here we examine the combined effects of Rashba spin-orbit coupling and interaction in this system, with particular focus on the unique pairing, charge, and spin properties of the ground state, which is computed using the numerically exact auxiliary-field quantum Monte Carlo technique. We also study the behavior of edge currents, which are a potential precursor of various topological phenomena, such as Majorana fermions. In addition to illuminating the behavior of this exotic charge ordered superfluid state, our results serve as high-accuracy benchmarks for the coming generation of precision experiments with ultra-cold gases. Finally, we provide an outlook on future directions, including the addition of a Zeeman field to induce a spin polarization, in order to investigate finite-momentum pairing states and topological superconductivity. Design and characterization of a quantum heat pump in a driven quantum gas Roy, Arko We propose a novel scheme for a quantum heat pump powered by rapid time-periodic driving. We focus our investigation on a system consisting of two coupled driven quantum dots in contact with fermionic reservoirs at different temperatures. Such a configuration can be realized in a quantum-gas microscope. Theoretically we characterize the device by describing the coupling to the reservoirs using the Floquet-Born-Markov approximation. A time-dependent variational analysis of lattice gauge theories Sala, Pablo Fermionic Gaussian states are completely characterized by their two-point correlation functions. These are collected in the so-called covariance matrix, which then becomes the main object in their description. We derive a time-dependent variational description of (1+1)-dimensional gauge theories using the framework of lattice gauge theories as well as fermionic Gaussian states. We compare our results to previously obtained results via matrix product states for ground-state properties and real-time dynamics. 
Specifically, we investigate the phase transition between the string and string-breaking phases, among other properties, in the massive Schwinger model and in non-Abelian generalizations. Manipulating spin correlations in a periodically driven many-body system Sandholzer, Kilian Periodic driving can be used to coherently control the properties of a many-body state and to realize new phases which are not accessible in static systems. In this context, cold fermions in optical lattices provide a highly tunable platform to investigate driven many-body systems and additionally offer the prospect of quantitative comparisons to theoretical predictions. We implement a driven Fermi-Hubbard model by periodically modulating a 3D hexagonal lattice. In the regime where the drive frequency is much higher than all other relevant energy scales, we verify that the interacting system can be described by a renormalized tunneling. Furthermore, we achieve independent control over the single-particle tunneling and the magnetic exchange energy by driving near-resonantly with the interaction. As a consequence, we are able to show that anti-ferromagnetic correlations in a fermionic many-body system can be enhanced or even switched to ferromagnetic correlations. The implementation of more complex modulation schemes opens the possibility to combine the physics of artificial gauge fields and strongly correlated systems. High-temperature nonequilibrium Bose condensation induced by a hot needle Schnell, Alexander We investigate theoretically a one-dimensional ideal Bose gas that is driven into a steady state far from equilibrium via the coupling to two heat baths: a global bath of temperature $T$ and a ``hot needle'', a bath of temperature $T_h\gg T$ with localized coupling to the system. Remarkably, this system features a crossover to finite-size Bose condensation at temperatures $T$ that are orders of magnitude larger than the equilibrium condensation temperature.
This counterintuitive effect is explained by a suppression of long-wavelength excitations resulting from the competition between both baths. Moreover, for sufficiently large needle temperatures ground-state condensation is superseded by condensation into an excited state, which is favored by its weaker coupling to the hot needle. Our results suggest a general strategy for the preparation of quantum degenerate nonequilibrium steady states with unconventional properties at large temperatures. Interacting Topological Insulators in 1D Superlattices Stenzel, Leo Without interactions, 1D charge pumps can be mapped onto 2D topological systems. The 1D superlattice then corresponds to the transversal kinetic energy. 1D charge pumps are readily realized experimentally. We add 1D repulsive interactions of fermions and find topologically non-trivial Mott insulators and band insulators. The latter exhibit a topological phase transition which can be understood with an effective 1D model for strong superlattices and interactions. Topological properties and many-body phases of synthetic Hofstadter strips Tirrito, Emanuele Creating local topological excitations in quantum gas microscopes Ünal, F. Nur The idea of inserting a local magnetic flux, representing the field of a thin solenoid, plays an important role in various condensed matter models, especially in the understanding of topological systems. One example is the creation and manipulation of quasiparticle or hole excitations in these systems, which are essential for fault-tolerant quantum information processing. Implementing such local fluxes in cold atom experiments offers great potential. Here, we propose an experimental scheme to realize a local flux in a cold atom setting which takes advantage of the recent developments in synthetic gauge fields and quantum gas microscopes.
To demonstrate the feasibility of our method, we consider quantum-Hall-type lattice systems and study the dynamical creation of topological excitations. We analyze the adiabatic charge pumping by tuning the strength of the local flux. Realization of periodically driven quantum systems in photonic lattices Upreti, Lavi Kumar A variety of topological properties have been found in static systems, differing according to the dimension and the symmetries of the system. Here, we explore them for periodically driven systems. It has been shown that a system that is trivial in the static case can be made topological by the application of periodic driving. Moreover, there can also be phases in which the topological invariant of the bands vanishes even though the system is topological, known as anomalous phases of a Floquet topological insulator. We then aim to realize such systems in a photonic setting, more precisely in waveguide arrays, and we calculate phase diagrams using the bulk-boundary correspondence. Probing topological excitations via engineering of an optical solenoid Wang, Botao The realization of artificial gauge fields in optical lattice systems paves a route to the experimental investigation of various topological quantum effects. Here we propose a realistic scheme to locally control artificial gauge fields and to directly probe topological transport effects in a Hofstadter optical lattice. In that case the system can be effectively described by a modified Hofstadter Hamiltonian with an additional flux in an individual plaquette. By treating this additional flux as a pump parameter, a different paradigm for quantum charge pumping can be created. Since varying the gauge field in time gives rise to synthetic electric fields, which in turn affect the particle distribution, gauge-dependent dynamics arise here. Moreover, topological edge currents in a two-dimensional optical lattice can also be generated.
Since all these effects are manifested in the spatial density distribution, with recent advances in microscopic manipulation in optical lattices, a direct detection of such topological properties could be achieved in the near future. Topological order in finite-temperature and driven dissipative systems Wawer, Lukas Majorana Box Engineering: Quantum Spin Liquids and Sachdev-Ye-Kitaev Model Yang, Fan Optical Ladder Lattices With Tunable Flux Žlabys, Giedrius Ultracold atoms in optical lattices provide clean and tunable systems to realize many-body quantum physics. They can be used to simulate a variety of effects ranging from superconductivity and superfluidity to novel phases of matter. Particles trapped in an optical lattice are neutral, so the Lorentz force does not affect them. A workaround resolving this issue is the introduction of an artificial gauge field that generates magnetic flux. It can be created by using laser-assisted tunneling and periodic driving schemes. This also allows one to realize a stronger magnetic flux per lattice plaquette than is typically available in solid state experiments. In this work, we propose a driving scheme for a quasi-one-dimensional ladder lattice that induces a tunable artificial magnetic flux through the lattice plaquettes. By manipulating the shaking phase for each individual site, this flux can be made inhomogeneous in space. This allows us to explore the dynamics and control capabilities of an atomic wave-packet propagating in such a lattice.
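As a minimal illustration of how a flux per plaquette reshapes the bands of a two-leg ladder, one can diagonalize the standard single-particle flux-ladder Hamiltonian (a textbook model, not the specific driving scheme proposed above; the symmetric gauge and the rung coupling K are assumptions for illustration). The lower band has a single minimum for strong rungs (Meissner-like regime) and two minima for weak rungs (vortex-like regime):

```python
import numpy as np

def lower_band(phi, K, J=1.0, nk=2000):
    """Lower Bloch band of a two-leg flux ladder (single-particle sketch).

    In a symmetric gauge the two legs carry quasimomentum shifted by
    +/- phi/2 (flux phi per plaquette); K is the rung tunneling.
    """
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    e = (-2 * J * np.cos(k) * np.cos(phi / 2)
         - np.sqrt((2 * J * np.sin(k) * np.sin(phi / 2)) ** 2 + K ** 2))
    return k, e

def count_minima(phi, K):
    """Number of local minima of the lower band over the Brillouin zone."""
    _, e = lower_band(phi, K)
    return int(np.sum((e < np.roll(e, 1)) & (e < np.roll(e, -1))))

# Strong rung coupling: one minimum at k = 0; weak rung coupling: two
# shifted minima, the single-particle precursor of the vortex regime.
print(count_minima(np.pi / 2, 2.0), count_minima(np.pi / 2, 0.5))
```

For flux phi the two regimes are separated at K_c = 2J sin(phi/2) tan(phi/2), which follows from the curvature of the lower band at k = 0.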
EEQT, Quantum Jumps and Quantum Fractals Paper "Quantum Jumps, EEQT and the Five Platonic Fractals": PDF or HTML or link to Los Alamos preprint server (but the pdf version there has lower quality images due to size limits on their server) OpenSource Java Files Quantum Fractals Java Applet EEQT Lab Java Applet Image gallery Bibliography of EEQT Note: These two applets crash on certain browsers. Internet Explorer and Mozilla seem to be OK. The applets crash with Opera (for reasons that are not understood) and with older Netscape. Therefore the best thing is to download the .jar files from SourceForge, unpack them, download the Java SDK appropriate to your operating system, install it, and then run the .jar files with "java -jar qf.jar" or "java -jar wave.jar" EEQT or Event-Enhanced-Quantum-Theory I. EEQT stands for "Event Enhanced Quantum Theory" - the term introduced by Ph. Blanchard and A. Jadczyk to describe the piecewise deterministic algorithm replacing the Schrödinger equation for continuously monitored quantum systems (and we suspect all quantum systems fall under this category). 1. Isn't it so that EEQT is a step backward toward classical mechanics, which we all know is inadequate? EEQT is based on a simple thesis: not all is "quantum" and there are things in this universe that are NOT described by a quantum wave function. An example, going to an extreme: one such case is the wave function itself. Physicists talk about first and second quantization. Sometimes, with considerable embarrassment, a third quantization is considered. But that is usually the end of it.
Even the most orthodox quantum physicist controls at some point his "quantize everything" urge - otherwise he would have to "quantize his quantizations" ad infinitum, never being able to communicate his results to his colleagues. The part of our reality that is not and must not be "quantized" deserves a separate name. In EEQT we are using the term "classical." This term, as we use it, must be understood in a special, more-general-than-usually-assumed way. "Classical" is not the same as "mechanical." Neither is it the same as "mechanically deterministic." When we say "classical" - it means "outside of the restricted mathematical formalism of Hilbert spaces, linear operators and linear evolutions." It also means: the value of the "Planck constant" does not govern classical parameters. Instead, in a future theory, the value of the Planck constant will be explained in terms of a "non-quantum" paradigm. 12. The name "Event Enhanced Quantum Theory" is misleading. As we have stated: "EEQT is the minimal extension of orthodox quantum theory that allows for events." It DOES enhance quantum theory by adding new terms to the Liouville equation. When the coupling constant is small, events are rare and EEQT reduces to orthodox quantum theory. Thus it IS an enhancement. (...) II. Most of the essential papers dealing with various aspects of EEQT are available online. III. EEQT allows us to simulate "Nature's Real Working". Of course EEQT is an incomplete theory, yet it tries to simulate real-world events with an underlying quantum substructure. The algorithm of EEQT is non-local, which suggests that Nature itself, at its deeper level, is non-local too. IV. Normally, students learning quantum mechanics are taught that it is impossible to simultaneously measure the position and momentum of a quantum particle. They learn how to derive Heisenberg's uncertainty relations, and they are told that these mathematical relations have such-and-such interpretation.
Some are told that the interpretation itself is disputable. In EEQT all of the probabilistic interpretation of quantum theory, including Born's interpretation of the wave function, is derived from the dynamics. EEQT allows us to simulate and predict the behavior of a quantum system when several, as one normally calls them, incommensurable observables are being measured. The fact is that in such a situation the dynamics is chaotic, and no joint probability distribution exists. That explains why ordinary quantum mechanics rightly noticed the problems with defining such a distribution. For visualization purposes physicists, especially those dealing with quantum chaos, often use Wigner's distribution (which is not positive definite) or Husimi's distribution (which does not reproduce marginal distributions). Quantum Jumps According to EEQT, quantum jumps are not directly observable. What we see are the accompanying "events". This part is somewhat tricky, and I will try to explain the trickiness here, in a few paragraphs, but without any hope that there will be even one person who will understand what I mean. Well, perhaps mathematicians will, but are they going to read this page? I doubt it. Physicists certainly will think that it is too weird. And they have better things to do than following someone's weird ideas - as every physicist with guts has weird ideas of his/her own! But I would feel guilty if I did not give it a try. So here it is. Physicists do consider quantum jumps. In particular those who deal with theoretical quantum optics and/or quantum computing and information. But these quantum jumps are not being taken as "real". If for no other reason, then because there are infinitely many jump processes that can be associated with a given Liouville equation, and there is no good reason to choose one rather than another.
Thus discontinuous quantum jumps in theoretical quantum optics are considered mainly as a convenient numerical method for solving the continuous Liouville equation. It is not so in EEQT. But EEQT splits the world into a quantum and a classical part, and quantum physicists deny that the classical part exists. They think all is quantum - the same way Ptolemaic physicists thought that all is perfectly round. Can we propose a clever idea that will show that not all is quantum? Indeed, according to quantum physics the only thing that exists is the quantum wave function. Now, let us ask this: is the wave function itself a classical or a quantum object? That is, we ask, is the location of the wave function in the Hilbert space governed by classical or by quantum laws? Most quantum physicists would pretend they do not understand the question. Some will understand, and will answer: "sure, there is an uncertainty in the state vector, but that is an altogether different story." They will point me to Braginsky or Vaidman or some other, more recent, paper - but they will not answer my question: is the quantum wave function a classical or a quantum object? Is it an object at all? And if it is an object, then what kind of animal is it, and where does it fit? Philosophers perhaps will point me to Eccles and Popper, but this is not an answer either. What is my answer to my own question? I do not know the answer, but I can speculate. So, here it is: we are talking about models. Models of "Reality". Perhaps nothing but models "exists", but that is not our problem now. If all is about models, then we can think of a model in which the wave function is both classical and quantum. In which the Wave Function "observes" itself - as John Archibald Wheeler has imagined: "The universe viewed as a self-excited circuit. Starting small (thin U at upper right), it grows (loop of U) to observer participancy - which in turn imparts 'tangible reality' (cf. the delayed-choice experiment of Fig.
22.9) to even the earliest days of the universe" "If the views that we are exploring here are correct, one principle, observer-participancy, suffices to build everything. The picture of the participatory universe will flounder, and have to be rejected, if it cannot account for the building of the law; and space-time as part of the law; and out of law substance. It has no other than a higgledy-piggledy way to build law: out of statistics of billions upon billions of observer participancy each of which by itself partakes of utter randomness." (J.A. Wheeler, "Beyond the Black Hole", in "Some Strangeness in the Proportion", Ed. Harry Woolf, Addison-Wesley, London 1980) To observe itself, "It" must split into two "personalities", a quantum one and a classical one. So, here comes the model: consider a pair of wave functions, the function trying to determine its own shape. One element of the pair is considered to be "quantum" - as it determines probabilities and quantum jumps, while the second element of the pair is interpreted as a classical one - its shape is the classical variable. They dance together and they jump together. More details can be found in "Topics in Quantum Dynamics". And here we come to the mathematical description of quantum jumps in EEQT. Of course the simplest situation is when we separate jumps from the continuous evolution. To analyze this particular situation let us think of the simplest possible "toy model". Physicists like toy models, as they usually provide us with explicit solutions whose properties we can study in order to try to understand more complex, real-world situations, where the problems get so complicated that there is no hope even for an approximate solution. Physicists usually replace real-world problems with other problems, built out of their toy models, which are still simple enough to be solvable, even if only approximately, and yet mirror some essential features of the "true problems."
So, what would be the simplest toy model to play with, that teaches us something about quantum jumps? The quantum system, to be nontrivial, must live in a Hilbert space of at least two complex dimensions. The classical system must have at least two states. Such a toy model was indeed studied in connection with the Quantum Zeno effect, where it was demonstrated that a flip-flop detector strongly coupled (that is, "under intensive observation" - a watched pot never boils...) to a two-state quantum system effectively stops the continuous quantum evolution. This model is not interesting, though, if we want to study pure quantum jumps. Here we need a more complicated model, and that is how the "tetrahedral model" was developed. It was found that it leads to chaotic dynamics and to fractals of a new type: fractals drawn by a quantum brush on the quantum canvas - a complex projective space. And that is how we come to quantum fractals. Quantum Fractals The details and the bibliography are given in "Quantum Jumps, EEQT and the Five Platonic Fractals." Here let us describe the algorithm and the Java applet. (The applet is a part of an OpenSource project, so additions and enhancements will probably follow its release.) The canvas is the surface of the unit sphere. In coordinates, its points are represented by vectors n = (n1, n2, n3) of unit length, thus (n1)^2 + (n2)^2 + (n3)^2 = 1. There are five Platonic solids: tetrahedron (N=4), octahedron (N=6), cube (N=8), icosahedron (N=12), dodecahedron (N=20), where N is the number of vertices. They have equal faces, bounded by equilateral polygons. It was Euclid who proved that only five such solids can exist in a three-dimensional world. In his Mysterium Cosmographicum (1595), Johannes Kepler attempted to account for the orbits of the six then-known planets by radii of concentric spheres circumscribing or inscribing the solids. Last modified on: June 27, 2005.
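The chaos-game flavour of this construction can be imitated in a few lines: a qubit state is repeatedly hit by one of several "jump" maps attached to the tetrahedron vertices, chosen with state-dependent probabilities, and the resulting Bloch vectors trace a pattern on the sphere. The snippet below is only an illustrative sketch: the operators G_i = I + alpha*(sigma . n_i) and the value of alpha are assumptions for illustration and need not coincide with the exact jump operators used in the paper and the applet.

```python
import numpy as np

# Pauli matrices
sig = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]])

# Tetrahedron vertices on the unit sphere (the N = 4 Platonic case)
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

def quantum_fractal(alpha=0.5, steps=5000, seed=0):
    """Iterate random jump maps G_i = I + alpha * sigma.n_i on a qubit.

    Jump i is chosen with probability proportional to ||G_i psi||^2,
    then the state is renormalized (a nonlinear map).  Returns the
    visited Bloch vectors, which accumulate on a fractal-like set.
    """
    rng = np.random.default_rng(seed)
    G = [np.eye(2) + alpha * np.einsum('i,ijk->jk', n, sig) for n in verts]
    psi = np.array([1.0, 0.0], dtype=complex)
    points = []
    for _ in range(steps):
        w = np.array([np.linalg.norm(g @ psi) ** 2 for g in G])
        i = rng.choice(4, p=w / w.sum())
        psi = G[i] @ psi
        psi /= np.linalg.norm(psi)
        bloch = np.real([psi.conj() @ s @ psi for s in sig])
        points.append(bloch)
    return np.array(points)

pts = quantum_fractal()
```

Plotting `pts` on the sphere reproduces the qualitative behaviour described above; the other four Platonic fractals follow by replacing `verts` with the vertices of the corresponding solid.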
Skip to main content Macroscopic Quantum Resonators (MAQRO): 2015 update Do the laws of quantum physics still hold for macroscopic objects - this is at the heart of Schrödinger’s cat paradox - or do gravitation or yet unknown effects set a limit for massive particles? What is the fundamental relation between quantum physics and gravity? Ground-based experiments addressing these questions may soon face limitations due to limited free-fall times and the quality of vacuum and microgravity. The proposed mission Macroscopic Quantum Resonators (MAQRO) may overcome these limitations and allow addressing such fundamental questions. MAQRO harnesses recent developments in quantum optomechanics, high-mass matter-wave interferometry as well as state-of-the-art space technology to push macroscopic quantum experiments towards their ultimate performance limits and to open new horizons for applying quantum technology in space. The main scientific goal is to probe the vastly unexplored ‘quantum-classical’ transition for increasingly massive objects, testing the predictions of quantum theory for objects in a size and mass regime unachievable in ground-based experiments. The hardware will largely be based on available space technology. Here, we present the MAQRO proposal submitted in response to the 4th Cosmic Vision call for a medium-sized mission (M4) in 2014 of the European Space Agency (ESA) with a possible launch in 2025, and we review the progress with respect to the original MAQRO proposal for the 3rd Cosmic Vision call for a medium-sized mission (M3) in 2010. In particular, the updated proposal overcomes several critical issues of the original proposal by relying on established experimental techniques from high-mass matter-wave interferometry and by introducing novel ideas for particle loading and manipulation. Moreover, the mission design was improved to better fulfill the stringent environmental requirements for macroscopic quantum experiments. 
MAQRO is a proposal for a medium-sized space mission to use the unique environment of deep space in combination with novel technological developments of space and quantum technology to test the foundations of quantum physics. The central idea is to perform matter-wave interferometry with massive objects (nanospheres of various materials, e.g., glass) with masses up to 10^11 atomic mass units (amu). Novel techniques from quantum optomechanics with optically trapped particles are proposed to be used to prepare test particles for these matter-wave interference experiments. The proposal was first submitted in response to the 2010 ‘M3’ call of ESA for a medium-sized space mission in ESA’s Cosmic Vision program. The original proposal was later published in Ref. [1]. Since this original proposal, significant progress has been made in terms of technology development and in refining the details of the scientific instrument (also see Section 9). A detailed technological study was performed under contract with ESA [2], and several studies were performed with respect to the thermal design of the instrument [3, 4]. In a series of experiments, various groups demonstrated feed-back cooling [5, 6] and side-band cooling of optically trapped particles [7-9]. A study on loading mechanisms of nano- and microparticles for quantum experiments in space was performed under contract with ESA [10], and experiments reported progress on loading, manipulating and keeping particles in optical traps even at high vacuum [11-13]. Optomechanical cooling close to the quantum ground state has been demonstrated for a variety of architectures [14-16] and seems to be within reach for optically trapped particles [9]. A collaboration of the University of Vienna, the University of Bremen and Airbus Defence & Space successfully implemented a high-finesse, adhesively bonded optical cavity using space-proof glue and ultra-low-expansion (ULE) material [17]. 
The same technology is currently in use to implement a high-finesse test cavity with the same specifications as needed for MAQRO. Based on recent theoretical studies [18], the design of MAQRO was adapted for preparing macroscopic superpositions with state-of-the-art non-linear optics and laser technology [19], also benefiting from recent advances in the single-mode transmission of deep-ultraviolet (UV) light [20]. In this way, a central drawback of the initial MAQRO proposal (the need for low-power, extremely short-wavelength light) could be resolved. Moreover, LISA Pathfinder (LPF) was successfully launched in December 2015 - a technology demonstrator for the Laser Interferometer Space Antenna (LISA) mission, which served as a model for the proposed spacecraft, launcher and orbit of MAQRO. By now, the MAQRO consortium, founded in 2013, consists of 32 groups from 9 countries around the world, demonstrating the growing support within the scientific community. Here, we present an update of the MAQRO proposal submitted in 2015 in response to a new Cosmic Vision call of ESA for a medium-sized mission. This update takes into account the novel developments highlighted above and proposes additional improvements to the mission design and the scientific instrument of MAQRO. A central goal is to address and overcome potentially critical issues regarding the readiness of core technologies for MAQRO and to provide realistic concepts for further technology development. Our work presents a new benchmark and a review of relevant work towards a ground-breaking mission that will act as a technology pathfinder for novel, macroscopic quantum technology and quantum optomechanics in space. This paper presents an updated version of the MAQRO mission proposal and reviews the progress made in defining that proposal and in demonstrating key technologies since the original submission. 
We will begin in Section 3 by giving the central motivation for MAQRO and the reasons for performing these experiments in space (the ‘case for space’). In Section 4, we will outline the relation of MAQRO to past and future space missions. Section 6 defines the requirements that have to be met in order to achieve the scientific goals of the mission. These form the basis for deriving the technical requirements that have to be fulfilled by the scientific instrument of MAQRO, which will be described in Section 7. In Section 8, we will describe the outline of the mission itself, such as orbit requirements and mission phases, and we will summarize the progress and changes with respect to the original mission proposal in Section 9. Finally, Section 10 presents conclusions and outlook. In the following subsections, we will present the central motivation for MAQRO and the reasons why the experiments to be performed by MAQRO have to be carried out in space. What are the fundamental physical laws of the universe? The laws of quantum physics challenge our understanding of the nature of physical reality and of space-time, suggesting the necessity of radical revisions of their underlying concepts. Experimental tests of quantum phenomena, such as quantum superpositions involving massive macroscopic objects, provide novel insights into those fundamental questions. MAQRO allows entering a new parameter regime of macroscopic quantum physics, addressing some of the most important questions in our current understanding of the basic laws of gravity and of the quantum physics of macroscopic bodies. Fundamental science and technology pathfinder The main scientific objective of MAQRO is to test the predictions of quantum theory in a hitherto inaccessible regime of quantum superpositions of macroscopic objects that contain up to 10^10 atoms. This is achieved by combining techniques from quantum optomechanics, matter-wave interferometry and optical trapping of dielectric particles. 
MAQRO will test quantum physics in a parameter regime orders of magnitude beyond existing ground-based experimental tests - a realm where alternative theoretical models predict noticeable deviations from the laws of quantum physics [21-23]. These models have been suggested to harmonize the paradoxical quantum phenomena both with the classical macroscopic world [24-27] and with notions of Minkowski space-time [28-30]. MAQRO will, therefore, enable a direct investigation of the underlying nature of quantum reality and space-time, and it may pave the way towards testing the ultimate limit of matter-wave interference posed by space-time fluctuations [31, 32]. Recent works showed that MAQRO might even allow testing certain models of dark matter [33, 34]. In contrast to collapse models, even standard quantum theory, in the presence of gravitation, predicts decoherence for spatially extended, massive superpositions [35, 36]. While this is not applicable in a microgravity setting, ground-based tests in this direction may benefit from the technology development necessary for MAQRO. By pushing the limits of state-of-the-art experiments and by harnessing the space environment for achieving the requirements of high-precision quantum experiments, MAQRO may prove a pathfinder for quantum technology in space. For example, quantum optomechanics is already proving a useful tool in high-precision experiments on Earth [37]. MAQRO may open the door for using such technology in future space missions. Why space? In ground-based experiments, the ultimate limitations for observing macroscopic quantum superpositions are vibrations, gravitational field-gradients, and decoherence through interaction with the environment. Such interactions comprise, e.g., collisions with background gas as well as scattering, emission and absorption of blackbody radiation. 
The spacecraft design of MAQRO allows operating the experimental platform in an environment offering a unique combination of microgravity (\({\lesssim}10^{-9}\mbox{ g}\)), low pressure (\({\lesssim}10^{-13}\mbox{ Pa}\)) and low temperature (\({\lesssim}20\mbox{ K}\)). This allows suppressing quantum decoherence sufficiently for the effects of alternative theoretical models to become experimentally accessible, and observing the evolution of macroscopic superpositions over free-fall times of about 100 s. The main reasons for performing MAQRO in space are the required quality of the microgravity environment (\({\lesssim}10^{-9}\mbox{ g}\)), the long free-fall times (100 s), the high number of data points required (up to 10^4 per measurement run), and the combination of low pressure (\({\lesssim}10^{-13}\mbox{ Pa}\)) and low temperature (\({\lesssim}20\mbox{ K}\)) while having full optical access. These conditions cannot be fulfilled in ground-based experiments. MAQRO with respect to other missions For MAQRO, as for any other space mission, it is essential to see it in the context of successful past missions as well as of future missions with which MAQRO may share common requirements. With respect to earlier missions, MAQRO can benefit from technological heritage, which could significantly reduce mission costs. In the case of future missions, if the parameters of other missions are compatible with the requirements of MAQRO, it could be possible to combine the MAQRO scientific instrument with other instruments on a combined mission. This would significantly cut costs in terms of launch and mission operation. Technological heritage for MAQRO MAQRO benefits from recent developments in space technology. 
In particular, MAQRO relies on technological heritage from LPF [38], the scientific instrument of LPF, called the LISA Technology Package (LTP) [39], and on technologies from other missions like Gaia [40], the Gravity field and steady-state Ocean Circulation Explorer (GOCE) [41, 42], Microscope [43, 44], the Gravity Recovery and Climate Experiment (GRACE) follow-on mission [45, 46] and the James Webb Space Telescope (JWST) [47]. The spacecraft, launcher, ground segment and orbit (Sun-Earth Lagrange Point 1 (L1)/Sun-Earth Lagrange Point 2 (L2)) are identical to LPF. The most apparent modifications with respect to the LPF design are an external, passively cooled optical instrument thermally shielded from the spacecraft, and the use of two capacitive inertial sensors based on ONERA technology. In addition, the propulsion system will be mounted differently to achieve the required low pressure at the external subsystem, and to achieve low thruster noise in one spatial direction. The additional optical instruments and the external platform will reach the Technology Readiness Level ‘technology validated in relevant environment’ (TRL 5) at the start of the B/C/D phases. For all other elements, we assume the TRLs to range from ‘technology demonstrated in relevant environment’ (TRL 6) to ‘actual system proven in operational environment’ (TRL 9) because of heritage from LPF and other missions. Alternative mission scenarios Implicit strengths of MAQRO are its relatively low weight and power consumption, such that MAQRO’s scientific instrument can, in principle, be combined on the same spacecraft with other missions that have similar requirements in precision and orbit. An example could be sun-observation instruments benefiting from an L1 orbit. Another example could be a combination with the ASTROD I mission or similar mission concepts fulfilling the orbit requirements of MAQRO. 
Scientific objectives Do the laws of quantum physics remain applicable without modification even up to the macroscopic level? This question lies at the heart of Schrödinger’s famous gedankenexperiment (thought experiment) of a dead-and-alive cat [48]. Matter-wave experiments have confirmed the predictions of quantum physics from the microscopic level of electrons [49, 50], atoms and small molecules [51] up to massive molecules of up to 10^4 amu [52]. Still, experiments are orders of magnitude away from where alternative theories predict deviations from quantum physics [23, 53]. Using ever more massive test particles on Earth may soon face fundamental limitations because of the limited free-fall times as well as the limited quality of microgravity environments achievable on Earth. Currently, it is assumed that this limit will be reached for interferometric experiments with particles in the mass range between 10^6 amu and 10^8 amu [18]. These limitations may be overcome by harnessing space as an experimental environment for high-mass matter-wave interferometry [1]. At the same time, quantum optomechanics provides novel tools for quantum-state preparation and high-sensitivity measurements [54]. The mission proposal MAQRO combines these aspects in order to test the foundations of quantum physics in a parameter regime many orders of magnitude beyond current ground-based experiments, in particular for particle masses in the range between 10^8 amu and 10^11 amu. This way, MAQRO will not only significantly extend the parameter range over which quantum physics can be tested. It will also allow for decisive tests of a number of alternative theories, denoted ‘collapse models’, which predict notable deviations from the predictions of quantum theory within the parameter regime tested. An important feature of MAQRO is that the parameter range covered has some overlap with experiments that should be achievable on the ground even before a possible launch of MAQRO. 
This allows cross-checking the performance of MAQRO and provides a fail-safe in case the predictions of quantum physics should fail already for masses between 10^6 amu and 10^8 amu. In this case, MAQRO would not allow for observing matter-wave interference due to the presence of strong, non-quantum decoherence. For this reason, the MAQRO instrument is designed to allow three modes of operation for testing quantum physics over a wide parameter range - even in the presence of strong decoherence: • Non-interferometric tests of collapse models. The stochastic momentum transfer in collapse models can lead to heating of the center-of-mass motion of trapped nanospheres [55, 56]. This can, in principle, be observed by comparing the measured noise spectra with theoretical predictions [57]. • Deviations from quantum physics in wave-packet expansion. As in the frequency-based non-interferometric approach above, this method is based on the stochastic momentum transfer due to collapse mechanisms. In particular, the momentum transfer leads to a random walk resulting in an increased rate of expansion of wave packets [55, 57, 58]. • High-mass matter-wave interferometry. This central experiment of MAQRO is based on the original M3 proposal [1]. It has been adapted to harness the successful technique of Talbot-Lau interferometry, which currently holds the mass record for matter-wave interferometry [52]. The goal is to observe matter-wave interference with particles of varying size and mass, comparing the interference visibility with the predictions of quantum theory and with those of alternative theoretical models. In particular, the non-interferometric tests and observing wave-packet expansion will allow for performing tests in the presence of comparatively strong decoherence mechanisms. 
If these two tests show agreement with the predictions of quantum physics, MAQRO’s scientific instrument can then be used for performing matter-wave interferometry to test for smaller deviations from quantum physics. Non-interferometric tests of quantum physics The vast majority of the proposals for tests of collapse models put forward so far are based on interferometric approaches in which massive systems are prepared in large spatial quantum superposition states. In order for such tests to be effective, the superposition has to be sufficiently stable in time to allow for the performance of the necessary measurements. Needless to say, these are extremely demanding requirements from a practical viewpoint. Matter-wave interferometry and cavity quantum optomechanics are generally considered as potentially winning technological platforms in this context, and considerable efforts have been made towards the development of suitable experimental configurations using levitated spheres or gas-phase molecular or metallic-cluster beams. Alternatively, one might adopt a radically different approach and think of non-interferometric strategies to achieve the goal of a successful test. MAQRO offers the opportunity for exploring one such possibility by addressing the influences that collapse models (or, in general, any non-linear effect on quantum systems) have on the spectrum of light interacting with a radiation-pressure-driven mechanical oscillator in a cavity-optomechanics setting. The overarching goal of this part of MAQRO is to affirm and consolidate novel approaches to revealing deviations from standard quantum mechanics in ways that are experimentally viable and open up unforeseen perspectives in the quest at the center of the MAQRO endeavors. A benchmark in this sense will be provided by the assessment of the continuous spontaneous localization (CSL) model through a non-interferometric approach. 
In particular, we will take advantage of the fact that the inclusion of the CSL mechanism in the dynamics of a harmonic oscillator results in an extra line-broadening effect that can be made visible from its density noise spectrum. By bypassing the necessity of preparing, manipulating, and sustaining the quantum superposition state of a massive object, the proposed scheme would be helpful in bringing the goal of observing collapse-induced effects closer to the current experimental capabilities. The equation of motion of the optomechanical system (regardless of its embodiment) in the presence of the CSL mechanism can be cast in the form given in Equation (1): $$ \frac{\partial}{\partial t} \hat{\mathcal{O}} = \frac{\mathrm {i}}{\hbar } [ \hat{H}, \hat{ \mathcal{O}} ] + \frac{\mathrm {i}}{\hbar} [ \hat{V}_{t}, \hat{\mathcal{O}} ] + \hat{\mathcal{N}}, $$ where \(\hat{\mathcal{O}}\) is an operator of the system, Ĥ is the Hamiltonian of the mechanical oscillator coupled to the cavity light field, \(\hat{\mathcal{N}}\) embodies all the relevant sources of quantum noise affecting the system, and \(\hat{V}_{t}\) is a stochastic linear potential (linked directly to the position of the harmonic oscillator) that accounts for the effective action of the CSL mechanism [56]. It can be shown that such a potential is zero-mean and delta-correlated, and thus embodies a source of white noise that adds to the other relevant noise mechanisms affecting the optomechanical system, namely the damping of the optical cavity and the Brownian motion (occurring at temperature T) of the mechanical oscillator. 
A lengthy calculation based on the study, in the frequency domain, of the fluctuation operators of both the optical and mechanical system leads to the following expression for the density noise spectrum of the mechanical system’s position fluctuation: $$ S(\omega) = \frac{2 \alpha^{2}_{s} \hbar^{2} \kappa\chi^{2} (\Delta ^{2} + \kappa^{2} + \omega^{2} ) + \hbar m \omega[ (\Delta^{2} + \kappa^{2} - \omega^{2} )^{2} + 4 \kappa^{2} \omega^{2} ] [\gamma _{m} \mathrm{coth}(\beta\omega) + \mathcal{Y} ]}{\vert 2 \alpha ^{2}_{s} \Delta\,\hbar\chi^{2} + m (\omega^{2} - \omega^{2}_{m} - \mathrm{i} \gamma_{m} \omega) [\Delta^{2} + (\kappa+\mathrm{i} \omega )^{2} ] \vert ^{2}}, $$ where \(\alpha_{s}\) is the steady-state amplitude of the cavity field, κ is the cavity damping rate, χ is the optomechanical coupling rate, Δ is the detuning between the cavity field and an external pump, m is the mass of the mechanical oscillator, \(\gamma_{m}\) is the mechanical damping rate, \(\omega_{m}\) is the mechanical frequency, and β is the inverse temperature of the system. Finally, we have introduced: $$ \mathcal{Y} = \lambda\sqrt{\frac{\hbar}{m \omega_{m}}}, $$ where λ is the CSL coefficient. In our numerical simulations of the observability of the effects, we have used the value of this parameter obtained by assuming Adler’s estimate of the CSL mechanism’s strength. Quite evidently, the CSL mechanism enters the expression of the density noise spectrum as an extra thermal-like line-broadening contribution. While being formally rather appealing, this elegant result also suggests the strategy to implement in order to observe the collapse model itself, and identifies the challenges that have to be faced, namely a cold enough mechanical system that lets the \(\mathcal{Y}\)-dependent term dominate over the temperature-determined one. 
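As an illustration of how the CSL term enters, the sketch below evaluates the density noise spectrum of Eq. (2) in dimensionless units (ħ = 1) for illustrative, non-mission parameter values and compares Y = 0 with a small nonzero Y. Since Y only adds to the positive thermal factor in the numerator, the spectrum can only broaden, never shrink.

```python
# Sketch: the density noise spectrum S(omega) quoted in the text, with the CSL
# contribution Y entering as extra thermal-like broadening. All parameter
# values below are ASSUMED dimensionless toy numbers, not MAQRO values.
import math

def noise_spectrum(w, alpha_s, kappa, chi, Delta, m, gamma_m, w_m, beta, Y, hbar=1.0):
    """S(omega) of the mechanical position fluctuations (hbar = 1 units)."""
    coth = 1.0 / math.tanh(beta * w)  # coth(beta*omega) for omega > 0
    num = (2.0 * alpha_s**2 * hbar**2 * kappa * chi**2 * (Delta**2 + kappa**2 + w**2)
           + hbar * m * w * ((Delta**2 + kappa**2 - w**2)**2 + 4.0 * kappa**2 * w**2)
             * (gamma_m * coth + Y))
    den = abs(2.0 * alpha_s**2 * Delta * hbar * chi**2
              + m * (w**2 - w_m**2 - 1j * gamma_m * w)
                * (Delta**2 + (kappa + 1j * w)**2))**2
    return num / den

# Illustrative dimensionless parameters (assumed, NOT from the proposal).
pars = dict(alpha_s=1.0, kappa=0.5, chi=0.1, Delta=-1.0,
            m=1.0, gamma_m=0.01, w_m=1.0, beta=0.1)

ws = [0.2 + 0.02 * k for k in range(100)]
s0 = [noise_spectrum(w, Y=0.0, **pars) for w in ws]   # no CSL
s1 = [noise_spectrum(w, Y=0.05, **pars) for w in ws]  # with a small CSL term

# The CSL term acts as extra thermal-like broadening: S is never reduced.
assert all(b >= a > 0.0 for a, b in zip(s0, s1))
```

Plotting `s0` and `s1` around the mechanical resonance reproduces qualitatively the broadening shown in Figure 1, with the quantitative comparison requiring the parameters of Ref. [56].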
Our numerical estimate shows that, indeed, it is possible to pinpoint the effects of the CSL contribution in a parameter regime currently available in optomechanical labs. Figure 1 shows a typical result achieved by using the parameters stated in Ref. [56]. Figure 1 Broadening of noise power spectra. Comparison between the density noise spectra of the mechanical position fluctuation operators with (solid red line) and without (dashed black line) the influence of the CSL mechanism, obtained using Adler’s estimate of the CSL coupling strength and a mechanical oscillator of 15 ng. The inset shows an analogous study for \(m=150\mbox{ ng}\) (figure from Ref. [56]). At present, this non-interferometric approach has not been investigated in sufficient detail in the context of MAQRO. While this does not impede the main science goals of MAQRO, we nevertheless plan to investigate this non-interferometric method more closely during the study phase of MAQRO. It may offer the attractive possibility of supplementing the results of the other two experiments (Sections 5.2 and 5.3). Deviations from quantum physics in wave-packet expansion Most forms of decoherence can be described as resulting from the interaction of a quantum system with its environment [59]. Examples are elastic and inelastic scattering as well as emission of massive particles or radiation [60]. All of these interactions result in a change of momentum, eventually leading to dephasing and decoherence of quantum states. In a paper by Collett and Pearle [55], it was shown that the decoherence mechanisms assumed in collapse models also lead to momentum transfer. That means that, even in the absence of standard decoherence mechanisms, collapse models may result in a random walk due to stochastic momentum transfer. This random walk can, in principle, be observed by comparing the expansion rate of a quantum wave packet with the predictions of quantum theory as well as with the predictions of alternative models. 
Apart from the original suggestion for such an experiment [55], there have also been more recent suggestions to observe this effect using free-falling or optically trapped dielectric particles [57, 58]. Even if there is no decoherence, the width of a quantum wave packet will expand over time according to the Schrödinger equation. The square of the width of the wave packet \(w_{s}(t)^{2}\) evolves according to the following relation: $$ w_{s}(t)^{2} = \bigl\langle \hat{x}^{2}(t) \bigr\rangle _{s} = \bigl\langle \hat{x}^{2}(0) \bigr\rangle + \frac{t^{2}}{m^{2}} \bigl\langle \hat{p}^{2}(0) \bigr\rangle . $$ Here, the subscript ‘s’ denotes evolution according to Schrödinger’s equation, m is the mass of the particle, the angular brackets denote the expectation value for a given quantum state, \(\hat{x}\) denotes the position operator, and \(\hat{p}\) denotes the momentum operator. Equation (3) relates the width of the wave packet at time t to the initial width of the wave packet and the initial width of the momentum distribution. In the presence of decoherence, the width of the wave packet increases more quickly: $$ w(t)^{2} = \bigl\langle \hat{x}^{2}(t) \bigr\rangle = w_{s}(t)^{2} + \frac{2 \Lambda\hbar^{2}}{3 m^{2}} t^{3}. $$ Here, Λ is a parameter governing the strength of decoherence mechanisms. The width of the wave packet is not an observable - it has to be inferred from the statistical distribution of many measurements [61]. If we assume that we perform N measurements of the particle position and that the result of the jth measurement is \(x_{j}\), then, for large N, the width of the wave packet can be approximated as: $$ w = \frac{1}{\sqrt{N-1}} \sqrt{\sum^{N}_{j=1} x^{2}_{j}}. $$ Given that the error of each position measurement is \(\Delta x_{j} = \sigma\), the error of our estimate of the width of the wave packet will be: $$ \Delta w = \frac{\sigma}{\sqrt{N-1}} \approx\frac{\sigma}{\sqrt {N}}, $$ where the approximation holds for large N. 
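The bookkeeping of Eqs. (3)-(5) is easy to check numerically. The sketch below evaluates the free and decohered widths and the statistical width estimator; the particle mass, initial widths and decoherence parameter are assumed round numbers for illustration, not mission values.

```python
# Illustrative sketch (not the MAQRO analysis pipeline): wave-packet width with
# and without an extra decoherence term, Eqs. (3)-(4), and the statistical
# width estimator of Eq. (5). All numbers are ASSUMED round values.
import math
import random

HBAR = 1.054571817e-34  # J s

def width_sq_schroedinger(t, x2_0, p2_0, m):
    """<x^2(t)> for free Schroedinger evolution, Eq. (3)."""
    return x2_0 + (t**2 / m**2) * p2_0

def width_sq_decohered(t, x2_0, p2_0, m, Lam):
    """Eq. (4): an extra t^3 term proportional to the decoherence parameter."""
    return width_sq_schroedinger(t, x2_0, p2_0, m) + 2.0 * Lam * HBAR**2 * t**3 / (3.0 * m**2)

def width_estimate(xs):
    """Eq. (5): estimate w from N position measurements (zero-mean assumed)."""
    n = len(xs)
    return math.sqrt(sum(x * x for x in xs) / (n - 1))

# Toy numbers: a 1e8 amu sphere and illustrative initial widths.
amu = 1.660539e-27
m = 1e8 * amu
x2_0, p2_0 = (1e-10)**2, (1e-22)**2  # <x^2(0)> in m^2, <p^2(0)> in (kg m/s)^2

ws = math.sqrt(width_sq_schroedinger(100.0, x2_0, p2_0, m))
w = math.sqrt(width_sq_decohered(100.0, x2_0, p2_0, m, Lam=1e14))
assert w >= ws  # decoherence can only broaden the wave packet

# Monte-Carlo check of the estimator: sample positions from a Gaussian of width w.
random.seed(1)
xs = [random.gauss(0.0, w) for _ in range(20000)]
est = width_estimate(xs)
assert abs(est / w - 1.0) < 0.05  # estimator recovers the width to a few percent
```

The cubic-in-time decoherence term is what makes long free-fall times so valuable: at fixed measurement accuracy, doubling t makes a given Λ eight times easier to see.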
The mode of operation of this experiment is to determine the wave-packet size as a function of time t, and to compare these measurements with the predictions of quantum physics using Equation (4). In this way, we can experimentally determine the decoherence parameter Λ and compare it with the predictions of quantum physics. The more Λ deviates from the value predicted by quantum physics, the easier it will be to discern by measuring the wave-packet expansion. For simplicity, let us assume that we have a well isolated quantum system, i.e., quantum physics predicts \(\Lambda=0\) or at least a value much smaller than the deviation we want to measure. The minimum Λ we can distinguish experimentally from the case of no decoherence is: $$ \Lambda> \Lambda_{\mathrm{min}} = 3 m^{2} \frac{\sigma w_{s}(t)}{\sqrt{N-1} \hbar^{2} t^{3}}. $$ We can relate this minimum decoherence parameter to a decoherence rate \(\Gamma= r^{2}_{c} \Lambda\) by introducing a representative length scale \(r_{c}=100\mbox{ nm}\). This is a typical length scale for the experiments in MAQRO and also the length scale chosen in the collapse model of Ghirardi, Rimini and Weber [24]: $$ \Gamma_{\mathrm{min}} = \Lambda_{\mathrm{min}} r^{2}_{c} = 3 m^{2} \frac {\sigma w_{s}(t) r^{2}_{c}}{\sqrt{N-1} \hbar^{2} t^{3}}. $$ In Figure 2, we compare the predictions of several collapse models with this minimum discernible decoherence rate \(\Gamma _{\mathrm{min}}\). The figure shows that, by investigating wave-packet expansion, MAQRO can, in principle, perform decisive tests of the CSL model even with the originally suggested parameters [27, 55], and MAQRO could test the quantum gravity model of Ellis and others [62, 63]. However, the plot also illustrates that wave-packet expansion will allow testing neither the model of Károlyházy nor that of Diósi-Penrose. Figure 2 Comparison of \(\pmb{\Lambda_{\mathrm{min}}}\) with the predictions of several collapse models. 
We compare the minimum decoherence parameter needed to be experimentally discernible (black, solid line) with the decoherence parameters predicted for the CSL model with \({\lambda= 2.2\times10^{-17}\mbox{ Hz}}\) (black, dashed), the quantum-gravity model of Ellis et al. (blue, long dashed), the model of Diósi & Penrose (red, dot-dashed), and the model of Károlyházy (green, dotted). Where models predict a higher decoherence rate than \(\Gamma_{\mathrm{min}}\), one can, in principle, distinguish them from the predictions of quantum physics. In order to estimate the values plotted in Figure 2, we assumed that we let the wave packet expand for a maximum of 100 s, and that we collect at most \(N=24\times10^{3}\) data points to experimentally estimate the decoherence parameter. The number of data points was chosen in order to limit the integration time to at most four weeks. Moreover, we assumed our test particle to initially be in a thermal state of a harmonic oscillator - with a mechanical frequency \(\omega=10^{5}\mbox{ rad/s}\) and an average occupation number of 0.3 - and that we can determine the particle position with an accuracy of 100 nm. Because the mechanical frequency of an optically trapped particle depends only on the mass density and the material’s dielectric constant, it is roughly constant for the particles chosen for MAQRO. The occupation number, however, is assumed to be inversely proportional to the mass of the test particle because it depends on the achievable optomechanical coupling. Because testing quantum physics using wave-packet expansion was first introduced for the CSL model [55], and because the CSL model represents a rather general, heuristic approach to collapse models, we will now discuss the prerequisites for testing the CSL model in the context of MAQRO. 
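These assumptions can be plugged directly into Eq. (8). The sketch below evaluates the minimum discernible decoherence rate using the stated numbers (t = 100 s, N = 24×10^3, σ = 100 nm, r_c = 100 nm, ω = 10^5 rad/s), taking w_s(t) from the thermal-state widths of a harmonic oscillator; the reference mass at which the occupation number equals 0.3 is an assumption introduced here to encode the inverse mass scaling mentioned in the text.

```python
# Sketch of Eq. (8): minimum discernible decoherence rate Gamma_min for an
# initially thermal trapped particle. M_REF (where n_bar = 0.3) is an ASSUMED
# reference mass; the other numbers follow the assumptions stated in the text.
import math

HBAR = 1.054571817e-34  # J s
AMU = 1.66053907e-27    # kg

T_EXP = 100.0       # free expansion time, s
N_PTS = 24_000      # number of data points
SIGMA = 100e-9      # position-measurement accuracy, m
R_C = 100e-9        # representative length scale r_c, m
OMEGA = 1e5         # trap frequency, rad/s
M_REF = 1e9 * AMU   # reference mass at which n_bar = 0.3 (assumed)

def ws(t, m, n_bar):
    """Free Schroedinger expansion of an initially thermal oscillator state."""
    x2 = (2.0 * n_bar + 1.0) * HBAR / (2.0 * m * OMEGA)      # <x^2(0)>
    p2 = (2.0 * n_bar + 1.0) * HBAR * m * OMEGA / 2.0        # <p^2(0)>
    return math.sqrt(x2 + (t / m)**2 * p2)

def gamma_min(m):
    """Minimum discernible decoherence rate, Eq. (8), in Hz."""
    n_bar = 0.3 * M_REF / m  # occupation assumed inversely proportional to mass
    return (3.0 * m**2 * SIGMA * ws(T_EXP, m, n_bar) * R_C**2
            / (math.sqrt(N_PTS - 1) * HBAR**2 * T_EXP**3))

masses = [10**k * AMU for k in (8, 9, 10, 11)]
rates = [gamma_min(m) for m in masses]
assert all(r > 0 for r in rates)
assert rates == sorted(rates)  # Gamma_min grows with mass in this scheme
```

The monotonic growth of Γ_min with mass reflects the m² prefactor in Eq. (8): heavier particles expand more slowly, so only comparatively strong decoherence is discernible this way, which is why the interferometric mode is needed for the smallest deviations.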
The CSL model depends on two parameters, a and λ, where \(a=100\mbox{ nm}\) defines the typical length scale at which the CSL model predicts a transition from quantum to classical behavior. For λ, which predicts the rate of decohering events on the microscopic level, a wide variety of values have been suggested, ranging from \(2.2\times 10^{-17}\mbox{ Hz}\) [27, 55] to \(10^{-8}\mbox{ Hz}\) [64]. The smaller one assumes the value of λ, the smaller the deviation from quantum physics. Using Equation (8), we can now estimate the smallest value of λ that MAQRO would allow detecting. In particular, we get: $$ \lambda_{\mathrm{min}} = 4 a^{2} \biggl( \frac{m_{p}}{m} \biggr)^{2} f \biggl( \frac{r}{a} \biggr)^{-1} \Lambda_{\mathrm{min}} > m^{2}_{p} f \biggl( \frac{r}{a} \biggr)^{-1} \frac {12 a^{2} \sigma w_{s}(t)}{\sqrt{N-1} \hbar^{2} t^{3}}, $$ where \(m_{p}\) is the proton mass, and [55]: $$ f(x) = \frac{6}{x^{4}} \biggl[ 1-\frac{2}{x^{2}}+ \biggl( 1+ \frac {2}{x^{2}} \biggr) e^{-x^{2}} \biggr]. $$ In Figure 3, we plot \(\lambda_{\mathrm{min}}\) as a function of the particle mass for two different nanosphere materials. The plots show that MAQRO should allow testing the CSL model for localization rates λ even lower than the originally assumed parameters in Refs. [27, 55]. Comparing this result with the plot in Figure 2 shows that MAQRO will also allow testing the quantum gravity model of Ellis et al. Figure 3 Minimum CSL parameter \(\pmb{\lambda_{\mathrm {min}}}\). The two lines show the prediction of the minimum CSL parameter \(\lambda_{\mathrm{min}}\) discernible from the case of no decoherence - for the cases of a test particle of fused silica (solid, black) and of Hafnia (\(\mathrm{HfO_{2}}\), blue, dashed). Decoherence in high-mass matter-wave interferometry Using matter-wave interferometry with high-mass test particles is the most sensitive tool of MAQRO for testing quantum physics. 
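To make Eq. (9) concrete, the sketch below evaluates the form factor f(x) of Eq. (10) and the mass scaling of λ_min for fused-silica spheres (density ≈ 2200 kg/m³). Λ_min is taken as a fixed illustrative value rather than recomputed from Eq. (7), so the absolute numbers are only indicative of the scaling.

```python
# Sketch of Eqs. (9)-(10): CSL form factor f(x) and the minimum detectable CSL
# rate lambda_min. LAMBDA_MIN_CAP is an ASSUMED illustrative value for the
# minimum discernible Lambda, not a computed mission sensitivity.
import math

AMU = 1.66053907e-27   # kg
M_P = 1.67262192e-27   # proton mass, kg
A_CSL = 100e-9         # CSL length scale a, m
RHO = 2200.0           # fused-silica mass density, kg/m^3
LAMBDA_MIN_CAP = 1e14  # illustrative Lambda_min, m^-2 s^-1 (assumed)

def f(x):
    """CSL form factor f(x) from Eq. (10); f -> 1 for x << 1."""
    return (6.0 / x**4) * (1.0 - 2.0 / x**2 + (1.0 + 2.0 / x**2) * math.exp(-x**2))

def lambda_min(m):
    """Smallest detectable CSL rate for a sphere of mass m, per Eq. (9)."""
    r = (3.0 * m / (4.0 * math.pi * RHO))**(1.0 / 3.0)  # sphere radius from mass
    return 4.0 * A_CSL**2 * (M_P / m)**2 / f(r / A_CSL) * LAMBDA_MIN_CAP

# f(x) interpolates between ~1 (r << a) and ~6*(a/r)^4 (r >> a), decreasing.
assert abs(f(0.05) - 1.0) < 5e-3
assert f(10.0) < f(1.0) < f(0.1)

lam9 = lambda_min(1e9 * AMU)
lam11 = lambda_min(1e11 * AMU)
assert lam11 < lam9  # heavier spheres probe smaller CSL rates
```

The (m_p/m)² prefactor dominates the scaling, which is why pushing to 10^11 amu extends the reach toward the smallest suggested values of λ, as shown in Figure 3.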
While the other techniques described earlier allow testing deviations from quantum physics for values of the decoherence parameter larger than \(\Lambda_{\mathrm{min}} \approx10^{14}\mbox{ m}^{-2}\mbox{ s}^{-1}\), high-mass matter-wave interferometry will allow MAQRO to test for even smaller deviations. In the original MAQRO proposal for the M3 call [1], the suggested approach was to use far-field interferometry based on preparing a double-slit-like quantum superposition in which a massive particle is in a superposition of being in two clearly separate positions. Since this original proposal, we have adapted MAQRO to use near-field interferometry instead. In particular, the novel approach is based on well-established techniques that have been used in a series of high-mass matter-wave experiments [65], originally using Talbot-Lau interferometry [66]. Typically, near-field matter-wave interferometry is performed using three gratings. The first grating is used to provide a coherent source of particles. The second grating is the centerpiece of the interferometer, where the high-mass quantum superposition is prepared. Finally, a third, absorptive grating is used to determine the presence of a periodic interference pattern. Over the last two decades, this approach has been adapted for numerous experiments, steadily improving its applicability to ever higher test-particle masses and sizes. For example, one can replace one or more of the gratings with standing-wave optical gratings instead of nano-fabricated material gratings. For instance, if the first and third gratings are absorptive, the second grating can be a pure phase grating (see, e.g., Ref. [52]). In the most recent and, so far, most powerful adaptation of this technique, all three gratings are replaced by optical gratings, implementing an optical time-domain ionizing matter-wave interferometer (OTIMA) [67]. 
An alternative approach using only one, pure-phase grating has been proposed recently [18]. Here, we adapt it for use with MAQRO. In particular, instead of using a grating as a coherent source, the source consists of an intra-cavity optical trap used to initially position and to 3D-cool the center-of-mass motion of an individual, trapped particle - that is, the motion of the particle is cooled in all spatial directions (see Figure 4(left)). After this preparation step, the particle is released from the trap, and the corresponding wave-function expands for a time \(t_{1}\). Then a second optical beam, perpendicular to the first one, is switched on. This beam forms a standing wave - either inside another cavity or upon simple reflection from a mirror (see Figure 4(right)). The optimal choice for this beam’s wavelength \(\lambda_{g}\) will be discussed in Section 6.1. This second beam acts as a pure phase grating with grating period \(d=\lambda_{g}/2\). After applying this grating, the state evolves freely for a time \(t_{2}\), and then all optical fields are switched on in order to measure the position of the particle. The complete process is repeated N times, and the histogram of the measured particle positions can be used to reconstruct the interference pattern. Figure 4 Schematic of the novel near-field interferometry approach for MAQRO. The approach uses a cavity and a standing-wave grating. First a particle is trapped and its center-of-mass motion 3D-cooled using modes in the first cavity (left). The red dot indicates the particle position. Then the particle is released, and the wave-function expands for some time \(t_{1}\). After that time, the optical phase grating is applied for a short time (right). The expanded red region illustrates the expanded wave-function. We will assume a maximum overall time \(T=t_{1}+t_{2}\approx100\mbox{ s}\).
This is necessary in order to keep the total integration time for observing an interference pattern within a reasonable time-frame given the limited life time of a space mission. Moreover, longer integration times would be incompatible with the quality of the microgravity environment achievable in MAQRO. We will assume that the initially prepared state is Gaussian. If we concentrate only on the dimension along which the phase grating is applied, the corresponding characteristic function is [18]: $$ \chi_{0}(s,q) = \exp\biggl( -\frac{\sigma^{2}_{x} q^{2} + \sigma ^{2}_{p} s^{2}}{2 \hbar^{2}} \biggr). $$ Here, \(\sigma_{x}\) and \(\sigma_{p}\) are the position and momentum uncertainties of the initial state, respectively. Then the interference pattern close to the original position of the particle can be written as (also see Ref. [18]): $$\begin{aligned} P(x) = &\frac{m}{\sqrt{2 \pi} \sigma_{p} T} \sum^{\infty }_{n=-\infty} \exp( \mathrm {i}n k_{g} x ) J_{2 n} \bigl[ \phi_{0} \sin ( \pi n \kappa) \bigr] \\ &{}\times\exp\biggl[ -\frac{1}{2} \biggl( n k_{g} \sigma_{x} \frac{\beta}{\alpha} \biggr)^{2} \biggr]\exp\biggl[ -\frac{\Lambda T ( n \kappa d )^{2}}{3} \biggr]. \end{aligned}$$ To enable this compact notation, we have introduced several definitions. Central to this approach is the Talbot time \(t_{T}=(md^{2})/h\), where h is Planck’s constant, m is the particle mass, and d is the grating period. The Talbot time defines the time scale of the interference. In particular, close to multiples of the Talbot time, the wave-function after applying the phase grating will again have a periodic distribution similar to that of the grating itself, but with the grating period magnified by a factor \(\mu=T/t_{1}\). This is the Talbot effect. In addition, we introduced \(k_{g}=2 \pi/\mu d\), \(\alpha =t_{1}/t_{T}\), \(\beta=t_{2}/t_{T}\) and \(\kappa=\alpha\beta/(\alpha+\beta)\).
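As a quick numerical check of the scales involved (all values taken from the text), the Talbot time for a particle of \(10^{9}\) amu and a grating period of 100 nm comes out at roughly 25 s, so the assumed total time \(T\approx100\mbox{ s}\) corresponds to about four Talbot times. A sketch:

```python
# Physical constants
h = 6.62607015e-34      # Planck's constant, J s
amu = 1.66053907e-27    # atomic mass unit, kg

m = 1e9 * amu           # test-particle mass (10^9 amu, from the text)
d = 100e-9              # grating period (100 nm, from the text)

t_T = m * d**2 / h      # Talbot time t_T = m d^2 / h
print(t_T)              # ~25 s
```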
\(\phi_{0}\) denotes the phase applied to the quantum state at the antinodes of the phase grating [18], and \(J_{n}(x)\) is a Bessel function of the first kind. It is important to note that an interference-like pattern can also be observed for purely classical particles. This is due to a moiré shadowing effect [66], and the resulting classical ‘interference pattern’ can also be described using the expression for \(P(x)\) but replacing \(\sin(\pi n \kappa)\) with \(\pi n \kappa\) [68]. In Figure 5, we plot the corresponding visibilities for the quantum and the classical case in the absence of decoherence. The plot shows a marked difference between the quantum and the classical predictions - in visibility and in the dependence on \(\phi_{0}\) [18]. Figure 5 Classical vs. quantum interference visibility. Here, we plot the expected quantum (solid black) vs. the corresponding classical interference visibility (blue, dashed) as a function of \(\phi _{0}\) for a test particle of mass \(m=10^{9}\) amu, \(T=100\mbox{ s}\), and \(\lambda_{g}=200\mbox{ nm}\) for the wavelength of the beam for the standing-wave grating. In the presence of decoherence, the interference visibility drops as plotted in Figure 6. The plot was calculated for a mass \(m=10^{9}\) amu, \(T=100\mbox{ s}\) and \(d=100\mbox{ nm}\). For lighter masses, we may, in principle, even choose shorter times \(T<100\mbox{ s}\). However, the phase \(\phi_{0}\) experienced by our particles for a given energy \(E_{G}\) of the optical grating (optical power integrated over the time the grating is turned on) decreases with decreasing particle size: $$ \phi_{0} = \frac{2 \operatorname{Re}(\alpha) E_{G}}{\hbar c \epsilon_{0} a_{G}}, $$ where \(\epsilon_{0}\) is the vacuum permittivity, c is the speed of light, \(a_{G}\) is the waist of the UV mode, and α is the polarizability of the particle. α is proportional to the particle’s mass. 
Every decrease in mass therefore has to be compensated by a higher intensity of UV light in order to achieve the same phase shift. For the smallest particles used in MAQRO, it is even preferable to use infrared (IR) light instead (see Section 6.1). Figure 6 Visibility reduction due to decoherence. The quantum interference visibility decreases as a function of the strength of decoherence, parametrized by the parameter Λ. According to Figure 5, the difference between quantum and classical visibility is very pronounced for \(\phi_{0} \approx 4.2\). For this choice of phase, we plot the expected quantum and classical interference patterns in Figure 7. As expected, the quantum interference shows significantly higher visibility. The plots also demonstrate the marked difference in the shapes of the quantum and classical predictions (see also Ref. [18]). Figure 7 Interference patterns. Expected quantum (black, solid) and classical (blue, dashed) interference patterns. Scientific requirements Here, we outline the requirements for realizing the scientific objectives of MAQRO. The requirements for observing high-mass matter-wave interference are significantly more stringent than those for the other scientific objectives (non-interferometric tests of quantum physics, testing quantum physics by observing wave-packet expansion). For this reason, we focus on the requirements for demonstrating high-mass matter-wave interferometry - the requirements for the other scientific objectives will then automatically be fulfilled as well. An overview of the scientific requirements can be found in Table 1. Table 1 Overview of the scientific requirements of MAQRO Phase grating This requirement only applies for high-mass matter-wave interferometry. As discussed in Section 5.3, a pure phase grating with a grating period \(d=\lambda_{G}/2\) can be realized by an optical standing wave with wavelength \(\lambda_{G}\).
Here, we describe the scientific requirements for implementing this pure-phase grating. In matter-wave interferometry based on the Talbot effect, the time scale for the free evolution before applying the phase grating (\(t_{1}\)) and the time between this event and the final position measurement (\(t_{2}\)) are determined by the Talbot time \(t_{T}=(m d^{2})/h\) (m: particle mass; d: grating period; h: Planck’s constant). To see reasonable interference visibility, we must have: $$ \kappa= \frac{t_{1} t_{2}}{T t_{T}} \le\frac{T}{4 t_{T}}, $$ where \(T=t_{1} + t_{2}\) is the overall measurement time per data point. As mentioned in Section 5.3, to get reasonable particle statistics and in order to get realistic requirements on the microgravity quality (see Section 6.6), we have to require \(T\le 100\mbox{ s}\). Because the Talbot time is proportional to the particle mass, this requirement results in an increasingly stringent upper bound on κ for high test masses. On the other hand, κ should be on the order of 1 in order to see a noticeable difference between the quantum prediction of an interference pattern and the classically expected moiré ‘shadow patterns’. In combination with Equation (13), this yields a limit on the particle mass: $$ m\lesssim m_{\mathrm{crit}} \equiv\frac{h T}{4 d^{2}}. $$ Figure 8 shows this (rough) mass limit as a function of the grating period chosen. We see that for performing experiments in the mass regime around \(10^{9}\) amu, the grating period should be \(d\le100\mbox{ nm}\). We choose a grating period of 100 nm for MAQRO, corresponding to a grating-laser wavelength of about 200 nm - the shortest wavelength that will be achievable in space in the foreseeable future. Figure 8 Critical mass over grating period. We plot the approximate upper mass limit for seeing ‘useful’ interference as a function of the grating period d. While this is not a strict limit, the interference pattern observed will come ever closer to the classically expected one for increasing mass.
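The critical mass can be checked directly from the expression above; with the values from the text (\(T=100\mbox{ s}\), \(d=100\mbox{ nm}\)) it indeed lands at about \(10^{9}\) amu:

```python
h = 6.62607015e-34      # Planck's constant, J s
amu = 1.66053907e-27    # atomic mass unit, kg

T = 100.0               # overall measurement time per data point, s
d = 100e-9              # grating period, m

m_crit = h * T / (4 * d**2)     # critical mass m_crit = h T / (4 d^2)
print(m_crit / amu)             # ~1e9 amu
```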
Figure 9 shows that high-visibility interference is still possible for \(m=10^{10}\) amu, and the dependence on \(\phi_{0}\) allows a clear distinction between classical and quantum interference patterns. Figure 10 compares the classical and quantum predictions. Figure 9 Interference visibility for high masses. Comparison of quantum visibility (solid, black) and classical visibility (dashed, blue) for \(m=10^{10}\) amu. Figure 10 Expected high-mass interference patterns. The black, solid line is the quantum prediction for test particles with \(m=10^{10}\) amu. The blue, dashed line is the classical prediction. The two patterns are qualitatively different. We also mentioned earlier that the power we need to apply for the phase grating becomes higher for smaller particles. For a grating duration of 1 μs and a fused-silica particle of mass \(m=10^{8}\) amu, the optical power would need to be 5 mW. For a mass of \(m=10^{9}\) amu, the required power would still be 0.5 mW. For \(m=10^{8}\) amu, we can instead use a phase grating with \(\lambda_{G}=1{,}064\mbox{ nm}\). For that wavelength, the necessary power of 6 mW is easy to supply - in particular, if we use a low-finesse cavity for enhancing the power applied. Test particles To fulfill MAQRO’s scientific goal of testing the predictions of quantum physics and to compare them with the predictions of competing models over a wide parameter space, MAQRO needs to operate with test particles of various sizes and materials. In particular, MAQRO requires particles with different mass densities to test the dependence of the measurement results on particle mass. Collapse models typically depend more strongly on particle mass than quantum physics, which facilitates their experimental distinction. Known decoherence mechanisms like the scattering, emission and absorption of blackbody radiation depend strongly on the particle size.
Performing experiments with particles of different radii will enable tests of such decoherence mechanisms in a new size range while, at the same time, allowing tests of alternative theoretical models. Because MAQRO relies on optically trapping particles, the particles must be dielectric and highly transparent. The particles should also be uncharged. Otherwise, there could be additional, strong decohering mechanisms, and the particles might get lost due to electrostatic interaction with the potentially charged optical bench. The particles do not necessarily need to be spherically symmetric. If they are not, the rotational degrees of freedom need to be cooled in addition to the translational degrees of freedom [69, 70]. MAQRO uses scientific heritage from LPF with respect to \(1{,}064\mbox{ nm}\) optics and a \(1{,}064\mbox{ nm}\) laser system. For that reason, the test particles need to be transparent at this wavelength. Possible choices for highly transparent materials at this wavelength are various types of fused silica, hafnia (HfO2) and diamond. The mass density of these materials ranges from \(\rho =2{,}200\mbox{ kg m}^{-3}\) (fused silica) to \(\rho=9{,}700\mbox{ kg m}^{-3}\) (hafnia). The scientific goal of MAQRO is to perform tests in the mass range from \(10^{8}\) amu to \(10^{10}\) amu. Using fused silica with \(\rho=2{,}200\mbox{ kg m}^{-3}\), we can cover this mass range with a nanosphere size range of 30 nm to 120 nm. Using other materials, MAQRO can perform tests for even higher particle masses. The size of the test particles will be comparable to the grating period. In order to get large enough phase shifts, the particle sizes will, therefore, have to be chosen to fulfill Mie-resonance conditions. If this is taken into account, then the relatively large size of the particles will not be a concern. This is discussed in detail in the thesis of S. Nimmrichter [71].
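The quoted size range can be checked against the mass range via the mass of a homogeneous sphere, \(m = (4/3)\pi r^{3}\rho\). A sketch (interpreting the 30-120 nm figures as radii, which is the reading under which the numbers come out consistent with \(10^{8}\)-\(10^{10}\) amu):

```python
import math

amu = 1.66053907e-27    # atomic mass unit, kg

def sphere_mass_amu(radius, density):
    """Mass of a homogeneous nanosphere in amu (radius in m, density in kg/m^3)."""
    return (4.0 / 3.0) * math.pi * radius**3 * density / amu

rho_silica = 2200.0     # fused silica, kg/m^3 (from the text)
print(sphere_mass_amu(30e-9, rho_silica))    # ~1.5e8 amu
print(sphere_mass_amu(120e-9, rho_silica))   # ~1e10 amu
```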
Particle loading The loading mechanism for loading single, dielectric particles into the optical cavity used for state preparation is a central element of MAQRO. For each measurement, it is required to deliver, on demand, a single particle to the optical trap. In order not to significantly prolong the time for a measurement run, the time for particle loading needs to be short compared to the measurement time \(T=100\mbox{ s}\). The particles delivered have to be neutral and should have an internal temperature \(T_{i}\le25\mbox{ K}\) as described in Section 6.5. State preparation A prerequisite of MAQRO is that the motion of the trapped particle can be cooled close to the quantum ground state. This is not strictly necessary for the high-mass interferometry scheme as proposed in Ref. [18]. For MAQRO, however, it is imperative that the particle remains confined to a defined region around the original trapping position while the wave function expands. On the one hand, this is necessary in order for the particle to stay within the UV beam used for the phase grating. On the other hand, particles lost from the experimental region might get stuck to optical elements on the optical bench. Such a contamination of the optical elements would eventually lead to a reduction in the performance of MAQRO. For these reasons, it is paramount that the motion of the trapped particle is cooled close to the ground state of motion along the cavity axis. Along the axes perpendicular to the cavity axis, the mechanical frequency is much lower, but the occupation along these axes should correspond, in terms of energy, to the occupation along the cavity axis. In order for the particle to stay within a radius of 1 mm (the waist of the UV beam), we require an occupation number of 10 along the cavity axis and of \(10^{4}\) perpendicular to it.
Minimizing decoherence effects As we have stated earlier, in order to be able to see high-mass matter-wave interference in MAQRO, we have to ensure that decohering effects are small enough. In particular, the decoherence parameter Λ has to fulfill \(\Lambda\le10^{13}\mbox{ m}^{-2} \mbox{ s}^{-1}\) (see Figure 6). From this, and assuming fused-silica test particles, one can conclude that the internal temperature of our particles has to fulfill \(T_{i}\le45\mbox{ K}\), and the environment temperature has to fulfill the same requirement, \(T_{e}\le45\mbox{ K}\). Given these requirements, decoherence due to scattering, emission and absorption of blackbody radiation will be small enough to observe high-mass matter-wave interference. However, in order to test for deviations from quantum physics like those predicted by collapse models, the usual decoherence mechanisms should be at most comparable in strength to the decoherence mechanisms we want to test for. Figure 6 shows that MAQRO could, in principle, detect any decoherence mechanism with a parameter \(\Lambda\ge\Lambda_{\mathrm{min}} =10^{10}\mbox{ m}^{-2}\mbox{ s}^{-1}\) because it would lead to a noticeable reduction in interference visibility. In order to achieve such a low level of decoherence, the requirements on the internal temperature of the test particles and on the environment temperature are accordingly more stringent. The particle temperature will always be larger than the environment temperature. By limiting the environment temperature to \({\le}20\mbox{ K}\), and the particle temperature to \({\lesssim}25\mbox{ K}\), we can limit the respective decoherence to \(\Lambda\lesssim10^{11}\mbox{ m}^{-2} \mbox{ s}^{-1}\). If these more stringent requirements are not fulfilled but still allow for seeing interference in principle, one will have to carefully account for known decoherence mechanisms and check for any additional reduction of interference visibility. In Figure 11 we plot Λ for various decoherence models.
MAQRO allows testing alternative theoretical models if they predict a \(\Lambda>\Lambda_{\mathrm{min}}\). The plots show that MAQRO can test the CSL model and the quantum gravity (QG) model already for masses starting from \(m=10^{8}\) amu. In order to also test the Diósi-Penrose (DP) model and the Károlyházy model (K model), the particle mass has to be on the order of \(m=10^{10}\) amu. Figure 11 Decoherence parameter for various collapse models. We plot the decoherence parameter Λ as a function of mass. CSL model: solid, black; QG model: dashed, black; DP model: dot-dashed, green; K model: dotted, red. An additional decoherence effect may be collisions between the test particles and various atoms or molecules - i.e., in imperfect vacuum conditions. The de-Broglie wavelength [72] of such particles will always be significantly shorter than the size of our test particles and the size of the quantum states investigated. For that reason, even a single collision, or at most a few collisions, of a test particle with such gas particles will decohere the quantum state. The frequency of such collisions can roughly be estimated as: $$ \nu_{c} = \pi r^{2} v_{g} \rho, $$ where r is the test-particle radius, \(v_{g}\) is the gas-particle velocity, and ρ is the number density of the gas. This scattering cross section assumes that every gas particle geometrically hitting the test particle will effectively decohere the quantum state. Let us further assume that \(T=100\mbox{ s}\), and that the gas-particle velocity is 700 m/s for \(T_{e}=20\mbox{ K}\) (this is a worst-case scenario: hydrogen atoms in equilibrium - the lower the particle mass, the higher the equilibrium velocity). If we want to have less than one collision during a measurement run, this limits the gas density to \(\rho \le500\mbox{ cm }^{-3}\). For thermal equilibrium at \(T_{e}=20\mbox{ K}\), this corresponds to a pressure of \(p\le10^{-13}\mbox{ Pa}\).
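The density and pressure limits follow directly from requiring fewer than one geometric collision per run. A sketch, assuming a test-particle radius of \(r\approx120\mbox{ nm}\) (the largest particles; the radius used for the quoted numbers is not stated explicitly at this point in the text):

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K

r = 120e-9              # assumed test-particle radius, m
v_g = 700.0             # gas-particle velocity, m/s (worst case, from the text)
T_run = 100.0           # duration of one measurement run, s
T_e = 20.0              # environment temperature, K

# Require fewer than one geometric collision per run: nu_c * T_run <= 1
n_max = 1.0 / (math.pi * r**2 * v_g * T_run)    # max number density, m^-3
p_max = n_max * k_B * T_e                       # ideal-gas pressure, Pa
print(n_max * 1e-6)     # ~3e2 cm^-3
print(p_max)            # ~1e-13 Pa
```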
For faster particles (e.g., direct exposure to solar wind), this limit is accordingly more stringent as illustrated in Figure 12 but, at the same time, the particle density is expected to drop for higher particle energies. These requirements may be relaxed upon more detailed investigation of the scattering cross sections of the particles present at the MAQRO orbit. Moreover, the particle density is expected to be reduced due to the wake-shield effect of the spacecraft and the thermal shield. Figure 12 Maximum particle density for high-velocity particles. Conservative estimate of the maximum particle density allowed as a function of particle velocity. Microgravity environment During the time the test particle is in free fall, it is subject only to gravitational forces. Due to field gradients, the spacecraft and the test particle will experience slightly different gravitational fields. In addition, the spacecraft itself is the source of a gravitational field. If we assume a spacecraft mass of 250 kg, a particle mass of \(m=10^{10}\) amu, and an effective distance of 1 m between the two masses, gravitational attraction towards the spacecraft will displace the test particle by \({\sim}80\mbox{ }\upmu \mbox{m}\) over a time of 100 s. While this is significantly less than the wave-packet expansion during that time, it has to be taken into account very accurately. Gravitational fields parallel to the measurement axes defined by the two cavities illustrated in Figure 4 have to be known even better. Especially in the direction in which we want to observe high-mass matter-wave interference, the position of the particle has to be known much better than the grating period of 100 nm. If we are to compensate for the gravitational field of the spacecraft itself or if we want to compensate solar radiation pressure acting on the spacecraft, we have to use micro thrusters. 
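The ~80 μm displacement quoted above can be reproduced by treating the spacecraft as a point mass; note that the result is independent of the test-particle mass. A minimal check:

```python
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2

M = 250.0               # spacecraft mass, kg (from the text)
r = 1.0                 # effective spacecraft-particle distance, m
t = 100.0               # free-fall time, s

a = G * M / r**2        # ~1.7e-8 m/s^2 towards the spacecraft
x = 0.5 * a * t**2      # displacement under constant acceleration
print(x * 1e6)          # ~80 micrometers
```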
However, such thrusters inevitably have force-noise, which effectively leads to a random walk of the spacecraft. If this random walk is known, then changes of the position of the spacecraft relative to the test particle can be taken into account in the measurement results. If the random walk is not known, then it may blur the interference pattern similar to decoherence. In particular, if we assume white thruster force noise \(\mathrm{FN}_{0} \mbox{ N/}\sqrt{\mbox{Hz}}\), then the effect of thruster noise on the interference pattern can be described via an effective ‘decoherence’ parameter: $$ \Lambda_{\mathrm{th}} = \frac{2 \mathrm{FN}^{2}_{0} m^{2}}{\hbar^{2} M^{2}}, $$ where M is the mass of the spacecraft, and m is the mass of the test particle. As an example, for \(M=250\mbox{ kg}\), \(m=10^{10}\) amu, and for a thruster force noise of \(\mathrm{FN}_{0}=100\mbox{ nN/}\sqrt {\mbox{Hz}}\) as in LPF, we get \(\Lambda_{\mathrm{th}}=8\times10^{15} \mbox{ m}^{-2}\mbox{ s}^{-1}\). This shows that thruster noise is a critical issue. As mentioned earlier, this is not a problem if the random walk of the spacecraft is known to high enough precision. The precision necessary is not the same in all three spatial directions. Parallel to the UV cavity (and the interference pattern), the effective ‘decoherence’ due to the random walk has to fulfill \(\Lambda\le\Lambda_{\mathrm{min}}\). In terms of accuracy for acceleration measurements along this axis this corresponds to \({\le}1 \mbox{ (pm s}^{-2}\mbox{)/}\sqrt{\mbox{Hz}}\). Parallel to the IR cavity, the requirement is defined by the position accuracy of 100 nm we need for accurately measuring wave-function expansion (see Section 5.2). This results in \({\le}100\mbox{ (pm s}^{-2}\mbox{)/}\sqrt{\mbox{Hz}}\) accuracy for acceleration measurements. 
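The example value \(\Lambda_{\mathrm{th}}=8\times10^{15}\mbox{ m}^{-2}\mbox{ s}^{-1}\) quoted above can be verified by plugging the stated LPF-like parameters into the expression for \(\Lambda_{\mathrm{th}}\):

```python
hbar = 1.054571817e-34  # reduced Planck constant, J s
amu = 1.66053907e-27    # atomic mass unit, kg

FN0 = 100e-9            # white thruster force noise, N/sqrt(Hz) (LPF value)
M = 250.0               # spacecraft mass, kg
m = 1e10 * amu          # test-particle mass, kg

Lambda_th = 2 * FN0**2 * m**2 / (hbar**2 * M**2)
print(Lambda_th)        # ~8e15 m^-2 s^-1, as quoted in the text
```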
Perpendicular to the IR and the UV cavity, the requirement is more relaxed because the position only has to be known much better than the waist of the IR cavity mode (\({\sim}60\mbox{ }\upmu \mbox{m}\)). This results in \({\le} 5\mbox{ (nm s}^{-2}\mbox{)/}\sqrt{\mbox{Hz}}\) for acceleration measurements. Position detection The period of the interference patterns to be observed will be only slightly larger than the grating period of 100 nm. For that reason, in order to resolve these patterns, we need to detect the position of the test particles with accuracy much better than 100 nm along the direction of the UV cavity. Along the IR cavity, the position accuracy only needs to be 100 nm in order to achieve high enough accuracy for measuring wave-packet expansion (see Section 5.2). In the direction perpendicular to the UV and the IR cavities, the accuracy has to be much better than the IR cavity waist (\({\sim}60\mbox{ }\upmu \mbox{m}\)) to enable taking into account the IR wave-front curvature. Proposed scientific instrument To fulfill the stringent requirements on the environment temperature and the particle density of the residual gas, MAQRO is divided into two subsystems. The ‘outer subsystem’ (see Section 7.1) is placed outside the spacecraft and isolated from the spacecraft via thermal shields. The inner subsystem (see Section 7.2) contains most optical and electronic equipment. Optical fibers and an electric harness provide the interface between the two. In Table 2, we provide an overview of the technical requirements of MAQRO. Table 2 Overview of the technical requirements of MAQRO Outer subsystem The outer subsystem can be divided into several assemblies - they are listed in Table 3, along with links to detailed descriptions and an assessment of the TRL at the time of the M4 mission proposal. 
Table 3 Overview of the assemblies comprising the outer subsystem Thermal-shield structure This outer subsystem contains as few sources of dissipation as possible to achieve optimal passive cooling by radiating directly to deep space. The design also allows direct venting into space, to achieve an extremely high vacuum level. This concept was originally developed for the M3 mission proposal of MAQRO [73], based on related approaches in JWST, Gaia [40] and the Darwin mission proposal [74]. The design was refined in an ESA-funded study [2] and in increasingly detailed thermal simulations [3, 4]. Figure 13 shows the shield geometry. Figure 13 CAD drawing of heat-shield geometry. The structure is attached to the spacecraft facing away from the sun. Three glass-fiber reinforced plastic (GFRP) struts hold three consecutive shields isolating the optical bench from the spacecraft surface (image source: Ref. [4]). As we stated in Section 6.5, to see matter-wave interference in MAQRO, the environment temperature has to fulfill \(T_{e}\le45\mbox{ K}\). In order to use the interferometer to test for small deviations from the predictions of quantum physics, the environment temperature has to be even lower: \(T_{e}\le20\mbox{ K}\). In a thermal study, finite-element simulation was used to demonstrate that these conditions can be fulfilled using the thermal-shield concept of MAQRO [3]. In particular, it was shown that all elements on the optical bench could passively be cooled to \(T_{e}\sim 27\mbox{ K}\), and that the immediate volume around the trapped test particle (the ‘test volume’) could reach an even lower temperature \(T_{e}\sim 16\mbox{ K}\). This thermal study confirmed that the shield geometry was near optimal. In particular, more than three consecutive shields would not bring a significant advantage, while reducing the number of shields to two would lead to a significant increase in the temperature achievable. 
These results could be further improved in a more detailed thermal analysis. In particular, this was achieved by using reflective instead of refractive optics [4] - yielding a temperature of \(T_{e}\sim25\mbox{ K}\) for the optical bench and \(T_{e}\sim12\mbox{ K}\) for the test volume. The design of the heat shield is based on broad technological heritage and the use of space-proof materials. That is, all structural components of the heat shield are space-proof. For this reason, we assess at least TRL 5 for this assembly. Complementary metal-oxide semiconductor (CMOS) camera Optical detection of the position of the test particles plays a central role in MAQRO. To this end, several techniques are combined. One of these techniques is to detect scattered light. For this purpose, we can use technological heritage for a CMOS camera from the JWST [75, 76]. In particular, this technology has been designed to allow a separation of the CMOS detector chip from the preprocessing chip [75]. This way, the detector chip with low dissipation can be placed on the optical bench while the preprocessing chip (higher dissipation) can be placed further away from the sensitive experimental region. This is illustrated in Figure 13. For this technology, we estimate TRL 6 or higher. Optical-bench assembly In Figure 4, we assumed that we would potentially use two orthogonal cavities, which we denote here as the IR (high-finesse) cavity and the low-finesse IR + UV cavity. The latter was assumed to potentially be a dual-wavelength cavity for \(1{,}064\mbox{ nm}\) and for \({\sim}200\mbox{ nm}\). However, a more detailed analysis shows that we will not be able to use a \({\sim}200\mbox{ nm}\) cavity for reasons of thermal stability. This is discussed in more detail in Section 7.1.5. Nevertheless, we will denote the cavity as the IR + UV cavity to distinguish it from the high-finesse IR cavity.
Based on these considerations, Figure 14 shows the optical assembly on top of the optical bench. As we sketched earlier in Figure 4, the main elements are two orthogonally oriented cavities: a high-finesse cavity for \(1{,}064\mbox{ nm}\) light formed by the mirrors R1 and R2. For increased stability and for easier alignment, these mirrors are mounted on blocks of ULE material with a center hole (‘spacers’ S1 and S2). A second cavity (low-finesse, dual-wavelength for \({\sim}200\mbox{ nm}\) and \(1{,}064\mbox{ nm}\)) is formed by the dual-wavelength mirrors DWR1 and DWR2. Figure 14 Top view of the optical bench. The optical bench measures \(20\times20\mbox{ cm}^{2}\). UVR: UV mirrors; R: IR mirrors; DR: dichroic mirrors; DWR: dual-wavelength mirrors; UVC: UV couplers; IRC: IR couplers; WP: quarter-wave plates; F: lenses; FT: base-plate feed-through; S: spacers holding cavity mirrors. The mirrors R1 and R2 form a high-finesse IR cavity containing several modes (violet beam path originating at IRC1). DWR1 and DWR2 form a low-finesse cavity for IR light. The IR beam is indicated in light red, originating from IRC3 and coupled in again at IRC4. The UV beam originates at UVC1. The red-shaded, broad path indicates scattered-light imaging. Four IR fiber couplers IRC1 to IRC4 supply the optical bench with IR light and/or couple it back in again for further use. For directing the UV beam, we use ultra-violet mirrors (UVRs) and dichroic mirrors (DRs). In some instances, dual-wavelength mirrors (DWRs) are used to simultaneously reflect UV and IR light. UVR3 and R4 are parabolic mirrors. Mirrors R4-R8 optically image light scattered by nanoparticles onto the CMOS detector chip, and mirror R3 is used to fold the light coupled from and to IRC2. The light is focused on the detector by the concave mirror R6. Using reflective optics is preferred over refractive optics for thermal considerations (see Section 7.1.1). IR and UV light are combined using the dichroic mirror DR1.
DWR1 is highly transparent and DWR2 is highly reflective for 200 nm light. At the same time, both DWR1 and DWR2 should be reflective enough to form a low-finesse IR cavity. At the exit of the cavity, the IR light is coupled back into IRC4. The UV light expands freely from the ultra-violet fiber coupler UVC1 and is collimated by UVR3 into a beam with a 1 mm waist. UV light reflected at DWR2 is coupled back into UVC1 again. The region denoted as FT (feed-through) is a hole through the optical-bench base plate. It allows test particles to be passed from the loading mechanism below the optical bench (see Section 7.1.6) to the trapping region within the IR cavity. There exists direct technological heritage for all parts of the optical-bench assembly except for the high-finesse IR cavity and for the IR + UV cavity. For this reason, we assess the TRL of the optical-bench assembly (without the cavities) to be TRL 6 or higher. However, since the optical bench will be operated at cryogenic temperatures instead of at room temperature as in LPF, this will need to be carefully considered and tested. Our assessment of the technological readiness of the cavity assemblies is described in Sections 7.1.4 and 7.1.5. High-finesse IR cavity assembly As described in Section 6.4, the preparation of quantum states in MAQRO requires cooling the center-of-mass motion of optically trapped test particles close to the quantum ground state. To this end, MAQRO will apply a combination of intra-cavity side-band cooling and feed-back cooling [6, 7, 77, 78]. This requires good optomechanical coupling as well as a high-finesse cavity. The MAQRO cavity has a length of 97 mm. We chose this value so that the cavity is as long as possible given the size of the optical bench. This way, we minimize the solid angle covered by the ‘hot’ cavity mirrors as seen from the test particle. The reasoning behind this is to optimize the thermal environment for passive cooling.
Because of the large length of the cavity, it has to be asymmetric in order to achieve high enough optomechanical coupling. The precise value of 97 mm results from choosing standard radii of curvature of 30 mm and 75 mm for the cavity mirrors. Given this cavity geometry, we require a minimum finesse of \(3\times10^{4}\) to achieve cooling close to the quantum ground state, high enough intra-cavity power, and a longitudinal mechanical frequency on the order of \(\omega_{m,L} = 10^{5}\mbox{ rad/s}\). In a recent project (MAQROsteps, Project Nr. 840089) funded by the Austrian Research Promotion Agency (FFG), R. Kaltenbaek and his team implemented an adhesively bonded high-finesse IR cavity for optomechanical experiments in ultra-high vacuum. They used space-proof gluing technology and ULE material to implement a stably bonded cavity with a finesse of \(\mathcal{F} = 10^{5}\). Their efforts effectively increased the technological readiness level of this technology to somewhere between ‘technology validated in lab’ (TRL 4) and TRL 5 (relevant environment with respect to vacuum level but not with respect to environment temperature, no radiation and vibration tests). However, the cavity implemented had a length of only 13 mm. By mid-2015, they will use a similar approach to demonstrate an adhesively bonded cavity with the same geometry as needed for MAQRO. Low-finesse \(\textit{IR}+\textit{UV}\) cavity Originally, we intended to use a dual-wavelength cavity for \(1{,}064\mbox{ nm}\) and \({\sim}200\mbox{ nm}\) to benefit from intra-cavity power enhancement for the 200 nm light and to achieve good position readout using IR light. However, practical limitations prevent the use of a UV cavity, and the finesse of the cavity for the IR wavelength has an upper limit. The reason is that the phase grating has to be applied during a very short time (\({\sim}1\mbox{ }\upmu \mbox{s}\)) after a long time of free expansion \(t_{1}\).
During this time of free expansion, the IR and UV lasers cannot be locked to the cavity. The IR and UV beams must therefore be turned on again for a short time without first re-locking the lasers to the cavity. If we assume that the cavity length L changes by δL, and if we assume that we were on resonance before that change and are still on resonance afterwards, then we get a lower limit on the cavity linewidth κ: $$ \kappa= \frac{\pi c}{2 L \mathcal{F}} > \frac{\delta L}{L} \nu= \frac {\delta L}{L} \frac{c}{\lambda}. $$ Since the optical bench will consist of ULE material (SiC or Zerodur), the relative length change can be assumed to be about \(\delta L / L\sim10^{-6}\) if the temperature is kept stable to 1 K. In that case, we get an upper limit of 30 for the finesse of the IR + UV cavity for \(1{,}064\mbox{ nm}\) and 6 for \({\sim }200\mbox{ nm}\) light. For this reason, we will not use a cavity for the \({\sim}200\mbox{ nm}\) light, and we will use only a low-finesse cavity for the \(1{,}064\mbox{ nm}\) light. The current TRL is experimental proof of concept (TRL 3). Loading mechanism The main part of the loading mechanism is located in the inner subsystem (see Section 7.2.6). While that inner part is responsible for dispensing particles from a particle source and characterizing them, the central tasks of the outer part of the loading mechanism are to guide the particles from the inside of the spacecraft to the optical bench, to discharge the particles, and to propel them into the optical trap. In order to transport the test particles from the spacecraft to the optical bench, we will use a hybrid combination of optical trapping and guiding as well as linear Paul trapping. To this end, we use several hollow-core photonic-crystal fibers (HCPCFs) with a core diameter of \({\sim}10\mbox{ }\upmu \mbox{m}\). As far as possible, each of these fibers should run independently along one of the struts of the thermal-shield structure.
This way, if one fiber were damaged for some reason, the other fibers would be less likely to be affected as well. The HCPCFs guiding the test particles will also contain buffer gas to sympathetically cool the particles. The external loading mechanism is contained in a closed chamber that is internally divided into sub-chambers (see Figure 15). Each of these sub-chambers will be vented to space in order to prevent buffer gas from reaching the experimental platform (see Figure 16). Figure 15 Side view of the loading mechanism. The image illustrates the three sub-divisions of the loading-mechanism chamber. The hollow-core photonic-crystal fiber (HCPCF) is mounted on a fiber coupler (HCFC) close to the four rod-like electrodes of a linear Paul trap. At this position, the test particles are handed over from the guiding fiber to the Paul trap. This is also where the buffer gas will leave the chamber via the HCPCF. The particles are then guided close to a UV coupler where UV light is used to discharge them. Finally, they will enter an IR beam propelling the particles to the top of the optical bench. Figure 16 Bottom view of the optical bench. The image illustrates where venting ducts could be placed to minimize the amount of buffer gas potentially leaking to the experimental region. The figure also shows the position of the external acceleration sensor and fibers from the top of the optical bench. The amount of gas leaking along the HCPCF outside the spacecraft is small: for example, a \(10^{3}\mbox{ cm}^{3}\) chamber filled with buffer gas at room pressure and vented only through a single HCPCF would lose a negligible amount of pressure over the lifetime of the mission. Nevertheless, we have to ensure that the buffer gas does not contaminate the vacuum in the experimental region.
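The claim that pressure loss through a single fiber is negligible can be supported by a free-molecular-flow estimate. The sketch below assumes helium buffer gas at 300 K and a 1 m fiber length, values not specified in the text; only the ~10 μm core diameter and the \(10^{3}\mbox{ cm}^{3}\) chamber volume come from above.

```python
import math

# Order-of-magnitude check that gas loss through a single HCPCF is negligible.
# Assumed numbers: helium at 300 K, 1 m fiber length (not stated in the text).
k_B = 1.380649e-23      # J/K, Boltzmann constant
m_He = 6.646e-27        # kg, mass of a helium atom
T = 300.0               # K, assumed gas temperature
d = 10e-6               # m, HCPCF core diameter (from the text)
L_fiber = 1.0           # m, assumed fiber length
V = 1e-3                # m^3, chamber volume (10^3 cm^3, from the text)

v_mean = math.sqrt(8 * k_B * T / (math.pi * m_He))  # mean thermal speed
C = math.pi * d**3 * v_mean / (12 * L_fiber)        # Knudsen long-tube conductance, m^3/s
tau_years = (V / C) / (365.25 * 24 * 3600)          # 1/e pressure-decay time
print(f"conductance {C:.2e} m^3/s, time constant {tau_years:.0f} years")
```

Under these assumptions the chamber's 1/e pressure-decay time is on the order of a century, consistent with the statement that the loss over the 24-month mission is negligible.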
During the early part of the development phase of MAQRO, we will perform finite-element simulations of the behavior of the buffer gas inside an HCPCF along the length of the fiber and as it exits the fiber at the end. Important questions will be (1) whether sympathetic cooling via the buffer gas allows achieving low enough test-particle temperatures, (2) how much pressure the buffer gas will exert on the transported test particles as it exits the fiber end, (3) how badly the buffer gas will contaminate the ultra-high vacuum (UHV) environment of the optical bench, and (4) the ideal configuration of venting ducts. Later in the development period, we plan to investigate these questions experimentally in a representative test environment. Figures 15 and 16 illustrate the general idea of the loading mechanism based on two candidate technologies to be investigated during the development phase. Moreover, Figure 15 shows the position of a UV coupler close to the end of the guiding linear Paul trap. The 200 nm light used for the phase grating will also be used in the loading mechanism to discharge the test particles. Finally, an important part of the loading mechanism is a collimated IR beam (1 mm waist) used to propel the particles to the trapping region on top of the optical bench. The same beam will be used at the end of each measurement to dispose of the test particle. We estimate the technological readiness to be TRL 3. A central prerequisite of MAQRO is to prevent random relative motion between the test particle and the spacecraft (see Section 6.6). This results in stringent requirements on the accuracy for measuring accelerations of the spacecraft. While there will be an accelerometer at the center-of-mass of the spacecraft (Section 7.2.7), this will not provide direct information about the relative local acceleration between test particle and optical bench.
Using a model of the spacecraft to infer that information inevitably reduces the accuracy of the information gained. To achieve the required accuracy of \({\le}1\mbox{ (pm/s}^{2}\mbox{)/}\sqrt{\mbox{Hz}}\), MAQRO features a second acceleration sensor close to the test particle (see Figure 17 as well as Figures 15 and 16). Figure 17 Sensor core of the external accelerometer. The figure shows test mass and electrode housing. Size of sensor core: \({{\le} 10\times10 \times10\mbox{ cm}^{3}}\), mass: \({\le}2\mbox{ kg}\). Image credit: ONERA. The ONERA sensor to be used will employ a cubic test mass. Based on ONERA’s past experience, the sensor sensitivity in the cryogenic environment close to the optical bench should fulfill the requirements of MAQRO. The control unit and a power-conversion unit will be placed inside the spacecraft at a distance of \({\le}2\mbox{ m}\) from the sensor core. Tests on separating the core from the control unit and placing the core in a cryogenic environment were already performed [79]. We estimate TRL 5. Inner subsystem The inner subsystem can be divided into several assemblies - they are listed in Table 4, along with links to detailed descriptions and an assessment of the TRL at the time of the M4 mission proposal. Table 4 Overview of the assemblies comprising the inner subsystem IR laser system For the IR laser system, MAQRO relies on technological heritage from LPF and LISA [80]. We should essentially be able to use the very same laser technology. In particular, this is a highly stable continuous-wave (CW) \(1{,}064\mbox{ nm}\) non-planar ring oscillator (NPRO) laser. For MAQRO, we will also need such a laser and keep it locked to the high-finesse cavity on the optical bench. Using an electro-optic modulator (EOM), we will lock a side-band of this laser to the IR + UV cavity. The laser needs to be tunable over at least one full free spectral range (FSR) of the high-finesse IR cavity (1.5 GHz).
Due to the LPF heritage, we estimate at least TRL 6. UV source For the phase grating, we need a CW \({\sim}200\mbox{ nm}\) source that can be switched on for pulse durations of \({\le}1\mbox{ }\upmu \mbox{s}\) with a peak power of \({\le}0.5\mbox{ mW}\). While this is not available off the shelf, the necessary amount of delta-development to adapt existing technology for that purpose should be feasible within the development phase of MAQRO. In particular, there are two possible approaches: (1) frequency-double a \({\sim}400\mbox{ nm}\) fundamental beam, e.g., using novel developments in cavity-assisted second-harmonic generation using whispering-gallery-mode β-Barium-Borate resonators [19]. In recent years, single-frequency \({\sim}400\mbox{ nm}\) laser diodes in Littrow configuration [81] have become readily available commercially. A fall-back option for space applications to produce the \({\sim}400\mbox{ nm}\) pump light is sum-frequency generation using \(1{,}064\mbox{ nm}\) light in combination with a 670 nm InGaAsP laser diode. The other option (2) to generate the 200 nm light is to frequency-quintuple \(1{,}064\mbox{ nm}\) light, adapting the scheme presented in Ref. [82]. All elements of this scheme have been demonstrated in the lab. We estimate the current readiness to be TRL 3. IR-mode generation In order to optically trap our test particle in the high-finesse IR cavity and to cool its center-of-mass motion in all spatial directions, we intend to use several IR modes. In particular, we use two fundamental and two higher-order transverse electromagnetic modes (TEMs): two TEM00 modes will be used to trap the particle and to cool its motion along the direction of the high-finesse IR cavity [7]. To also cool the motion of the particle along the two dimensions perpendicular to the cavity mode, we will use higher-order TEM01 and TEM10 modes [78].
The two TEM00 modes are separated in frequency by one FSR of the high-finesse cavity (\(\mathrm{FSR} = c/(2 L)\approx1.5\mbox{ GHz}\)), where \(L=97\mbox{ mm}\) is the cavity length. The TEM01 and TEM10 modes are close to each other in frequency and about 1.2 GHz from the fundamental TEM00 mode. One can generate the required optical frequency shift of \({\sim}1.2\mbox{ GHz}\) from the fundamental modes by using EOMs for GHz phase modulation. The modulation side-bands can then be separated from the fundamental mode using a temperature-stabilized fiber Bragg grating. To generate the required resonance frequencies for the TEM10 and TEM01 modes, one can then use an acousto-optic modulator (AOM) for a frequency shift in the MHz range. AOM and EOM technology is readily available for space applications as technological heritage from LPF. Spatially, the TEM01 and TEM10 modes can be filtered from the generated light fields by the optical cavity directly. We will also investigate the more efficient conversion of the light fields to these mode shapes by means of holograms. In order to combine and later separate again the various laser modes, the two TEM00 modes will be prepared in orthogonal polarization. The higher-order spatial modes will be combined (and separated) based on spatial-mode filtering techniques. All these techniques are currently being used in the lab. We estimate the current readiness to be TRL 3. IR-mode locking The IR laser can be locked to the high-finesse IR cavity by using standard Pound-Drever-Hall (PDH) locking techniques [83, 84]. Since the other optical modes for intra-cavity cooling in the high-finesse IR cavity are derived via EOMs and AOMs from the fundamental laser mode (see Section 7.2.3), they also follow any changes of the cavity resonance frequency. In addition, these higher-order modes can in turn be locked to the cavity via PDH locking. We will also use an EOM to generate a mode to be locked to the IR + UV cavity.
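For concreteness, the frequency plan described above can be computed from the cavity length. A minimal sketch; the MHz-scale AOM offset is a placeholder value, and only the 97 mm cavity length and the ~1.2 GHz separation come from the text.

```python
# Frequency offsets of the four IR cavity modes relative to the first TEM00
# mode, using L = 97 mm from the text. The exact MHz-scale AOM shift for
# the TEM10 mode is a placeholder, not a mission parameter.
c = 299_792_458.0            # speed of light, m/s
L = 97e-3                    # high-finesse cavity length, m

fsr = c / (2 * L)            # free spectral range, ~1.5 GHz
modes = {
    "TEM00 #1": 0.0,
    "TEM00 #2": fsr,         # one FSR above the first fundamental mode
    "TEM01":    1.2e9,       # ~1.2 GHz from the fundamental (per the text)
    "TEM10":    1.2e9 + 5e6, # placeholder MHz-scale AOM offset
}
for name, offset in modes.items():
    print(f"{name}: {offset/1e9:.3f} GHz")
```

With these numbers the FSR evaluates to about 1.545 GHz, matching the ~1.5 GHz quoted in the text.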
To this end, each of the modes to be locked to the cavities will separately be frequency modulated in the MHz range to allow for the generation of distinct PDH error signals from the light reflected from the cavity. PDH locking is a standard technique. Its technological readiness is at least TRL 3. Data-acquisition subsystem This general term encompasses a host of sensors and devices providing information about the performance of the instrument and delivering the measurement results. All these devices are readily available in a laboratory environment and, in particular, given the technological heritage from LTP, we estimate the technological readiness to be TRL 6. Loading mechanism Significant progress has been achieved over the last years towards loading nanoparticles into optical traps at UHV. Common methods include spraying microdroplets of a liquid solution containing particles (e.g., Refs. [6, 7]) at comparatively high ambient pressure and then using vacuum pumps to reduce the ambient pressure once particles are optically trapped. In this approach, it is paramount to actively cool the motion of the trapped particles in order to achieve low ambient pressure [6, 11]. Other possible approaches include using hybrid optical and Paul traps in combination with particles launched from a ceramic piezoelectric speaker [8], or the transfer of trapped particles at high pressure (as discussed above) to HCPCFs. The idea is that it may be possible to guide the particles inside an HCPCF from a high- to a low-pressure environment [85-87]. In the initial phase of payload development, we will investigate those methods and alternative methods for directly loading optical traps in UHV. The latter methods rely on using ultrasonic vibrations of a carrier substrate to desorb nanoparticles from the surface.
In particular, we will investigate the use of GHz surface-acoustic waves on piezoelectric materials and the use of MHz bulk vibrations in thin-rod piezoelectric materials. After these initial studies, the most promising of the technologies or a combination thereof will be chosen and adapted for use in MAQRO. Common to most approaches will be the initial optical trapping of particles inside the spacecraft in a buffer-gas environment (see Figure 18). While the most natural choice for the buffer gas is helium because it remains in the gas phase even at the low temperatures at the optical bench, this choice will need to be investigated more closely in the initial development phase of the loading mechanism. Gas will only be supplied to the chambers after commissioning. Figure 18 Transfer of particles through a hollow-core photonic-crystal fiber (HCPCF). After loading and characterization of a nanoparticle (indicated by a green dot) inside a buffer-gas chamber, the particle is optically guided into and along an HCPCF outside the spacecraft. The initial optical trap will be used to characterize the trapped particles in order to quantify the size and mass of each particle as well as the charge it carries. Then the particle will be guided outside the spacecraft via a combination of hollow-core fibers and linear Paul trapping. The Paul trapping is necessary to guide and additionally constrain the particle trajectories without having to use too high optical powers. Strong beam intensities would prevent sympathetic cooling of the particles in the presence of the buffer gas. Using amplitude and frequency modulation of the guiding beam, we can shuttle and radially cool the particle (paper in preparation). The linear Paul trap can be realized via four rod-like electrodes encompassing the HCPCF. As we described in Section 6.5, the test particles are required to have a low internal temperature.
Inside the MAQRO spacecraft, we do not have the means to cool the particles to such low internal temperatures. Instead, our approach is to sympathetically cool the particles using the buffer gas. While the buffer gas itself will be approximately at room temperature inside the spacecraft, the gas will quickly cool as it passes along the HCPCF outside the spacecraft. As the HCPCF approaches the optical bench, we expect the temperature of the buffer gas to eventually assume the environment temperature in that region (\({\le}25\mbox{ K}\)). In order to avoid mixing up different particle materials, there should be at least one buffer-gas loading chamber per particle material to be used. In a given chamber, there may, however, be nanoparticles of various sizes. Before the particles are guided to the experiment, their size will be determined by observing the light scattered from the particles and by measuring the mechanical frequency of the trapped particles in the hybrid \(\mbox{optical} + \mbox{Paul}\) trap. We estimate the technological readiness of the loading mechanism to be TRL 3. The payload of MAQRO contains a highly sensitive accelerometer positioned at the center of mass of the spacecraft. While the main task of the outer accelerometer (see Section 7.1.7) is to monitor accelerations of the optical bench, the task of the inner accelerometer is to provide the necessary data to precisely control the micro-propulsion system of MAQRO for the drag-free and attitude control system (DFACS). Figure 19 shows the accelerometer inside the MAQRO spacecraft, based on Ref. [88]. The sensitivity of this sensor is \(2\mbox{ (pm/s}^{2}\mbox{)/}\sqrt{\mbox{Hz}}\) at 0.01 Hz on a \(4\times10^{-6}\mbox{ m/s}^{2}\) range. Both accelerometers have an acquisition mode and a science mode. In acquisition mode, higher accelerations are allowed before the spacecraft enters science operation. Given the rich heritage of ONERA accelerometers [46], we estimate at least TRL 5.
Figure 19 Internal accelerometer sensor unit. The internal sensor combines the sensor core and the control unit. Size: \(20\times 20\times20\mbox{ cm}^{3}\), mass: 8 kg. Image credit: ONERA. Critical issues Since the original proposal for the M3 opportunity [1], MAQRO has made significant progress towards maturing the payload technologies and concepts as well as towards addressing critical issues. In the following, we provide a list of critical issues and how they will be addressed in MAQRO. Nanoparticle temperature As we described in Section 6.5, the internal temperature of the test particle must not be much higher than the environment temperature. Since the test particles in MAQRO are optically trapped for state preparation and on other occasions, any realistic particle with non-zero absorption will heat up. In the original proposal of MAQRO, we suggested solving this issue by finding better materials. While this can definitely help reduce the problem, it is unlikely to fully solve it any time soon, and any solution would be very material-specific, while MAQRO should perform tests with a variety of nanosphere materials. For this reason, we chose a different approach for M4. While we still propose using low-absorption materials, we plan to overcome this critical issue by a combination of several techniques: (1) using each particle only once and keeping it optically trapped only for a short time, (2) using charged particles and a combination of optical trapping and Paul trapping, or only Paul trapping, whenever possible, and (3) using buffer gas in HCPCFs to sympathetically cool the test particles during transport. Using this combination, we are confident that it will be possible to address and solve this issue. Preparation of macroscopic superpositions In the original proposal, we suggested using extreme UV with very low power but with a wavelength of only 30 nm to prepare the macroscopic superpositions needed to observe double-slit-type interference [1, 2].
Even at the low powers needed, this technology will not be available in space within the foreseeable future. In addition, this approach led to free-fall times well beyond 100 s, which poses another host of problems - e.g., requirements on the thrusters and limitations on the particle statistics achievable during the mission lifetime. For M4, we fully revised this approach to use well-established technology for observing matter-wave interference with massive particles. This approach not only eliminates the need for extreme UV light; it also brings the benefit of much shorter free-fall times and higher interference visibilities. See Section 5.3. Loading mechanism MAQRO puts exceedingly strict requirements on the mechanism for loading nanoparticles into the optical trap. While there are several candidate technologies in existence, none of them is directly adaptable to MAQRO. For this reason, the costs for payload development include the costs for an intensive development phase. During that phase, four candidate technologies will be closely investigated and experimentally tested. At the end, the best combination of these technologies will be implemented for MAQRO. Given recent developments related to the candidate technologies, we are confident that a solution similar to the one described in this proposal can be implemented in time for MAQRO. Operations and measurement technique Here, we will provide an overview of the science operations of MAQRO. The measurement sequence can roughly be divided into three distinct sequences: (1) the loading sequence, (2) a moving sequence for transporting a particle to the target region (the crossing point of the high-finesse IR cavity and the low-finesse IR + UV cavity), and (3) the actual measurement sequence. Figure 20 provides a flow chart of the overall measurement procedure. Figure 20 Flow chart of the measurement procedure of MAQRO. Details are described in the main text.
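The three sequences and their retry logic can be sketched as a simple control loop. This is purely illustrative; the function names are placeholders for the operations described in the text, not flight software.

```python
# Illustrative control flow for one measurement cycle: load a particle,
# move it to the cavity crossing point, then measure. Loading and moving
# are retried on failure, mirroring the flow chart described above.
def run_measurement_cycle(load, move, measure, max_attempts=10):
    for _ in range(max_attempts):
        if not load():        # loading sequence (may fail; then repeat)
            continue
        if not move():        # moving sequence: position at crossing point
            continue
        return measure()      # measurement sequence: grating + readout
    raise RuntimeError("no successful cycle within the attempt budget")
```

For example, with stub functions that succeed immediately, `run_measurement_cycle(lambda: True, lambda: True, lambda: "interference data")` returns `"interference data"`.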
Loading sequence As described in Section 7.2.6, particles will initially be prepared in chambers flooded with buffer gas. In those buffer-gas chambers, a particle will be trapped and characterized. This can be done in advance, before a particle is needed for the experiment. Once this is accomplished, the particle is transferred to a loading region below the optical bench outside the spacecraft using optical transport in an HCPCF assisted by linear Paul trapping. This is described in Section 7.1.6. Before the particle is loaded into the intra-cavity optical trap, it needs to be discharged. The transport to the optical trap operates in free space via radiation pressure. Monitoring via the CMOS camera on the optical bench allows verifying the successful completion of the loading sequence. If it was not successful, the whole procedure is repeated until it succeeds. If it was successful, we turn on the side-band cooling along the cavity axis as well as the feed-back cooling for the transverse directions. Moving sequence Loading the particle into the optical trap this way does not guarantee that the particle will be at the correct position along the cavity mode of the high-finesse IR cavity. The correct position is defined via the crossing point of that cavity with the IR + UV cavity. The necessary accuracy of the positioning is determined by the requirement that it should be much better than the mode waist of the IR + UV cavity, i.e., \({\ll}1\mbox{ mm}\). By monitoring the scattered light with the CMOS imaging system, we can keep track of the particle position with μm resolution. To move the particle along the cavity axis, we simply turn off the cooling in this direction. The particle motion will heat up due to laser noise and light scattering and, if needed, due to purposeful heating by frequency modulation at twice the longitudinal trap frequency. This heating will lead to the particle moving out of the trap along the cavity axis.
By observing the CMOS signal, we can determine whether the particle moves in the correct direction. If not, we turn on longitudinal cooling again and restart the moving sequence. If the particle is moving in the correct direction, we only have to wait until it is at the correct position, and then switch on the longitudinal cooling once more. Measurement sequence As soon as the particle is at the correct position, we can use three-dimensional intra-cavity cooling to cool the center-of-mass motion of the particle close to the quantum ground state along the cavity axis and to low occupation numbers in the transverse directions [78]. Also see the scientific requirements in Table 1. When the cooling sequence is completed, all optical fields are switched off, and the wave packet will expand for a time \(t_{1}\), which is chosen depending on the nanoparticle and the phase \(\phi_{0}\) that will be applied. The next step is to turn on the UV beam for a time \({\sim}1\mbox{ }\upmu \mbox{s}\) to apply the pure phase grating. After applying the phase grating, the particle will again propagate freely for a time \(t_{2} = T-t_{1}\). Finally, the IR field in the IR + UV cavity is switched on in order to measure the position of the test particle via cavity readout. After completing the measurement, the particle is no longer needed, and an IR beam orthogonal to the optical bench is applied to propel the particle away from the spacecraft and into space to prevent contamination of the scientific instrument with stray nanoparticles. Proposed mission configuration and profile From the scientific requirements (Section 6, Table 1) it is apparent that MAQRO requires extremely high vacuum conditions, cryogenic temperatures (realized via passive cooling) and very stringent microgravity requirements. A mission to L1/L2 allows fulfilling these requirements. 
Orbit requirements Following LPF’s example, the MAQRO spacecraft is injected into a halo orbit around the Sun-Earth Lagrange point L1 (L2 would be a feasible alternative), following the initial injection into an elliptical Earth orbit and eight apogee-raising orbits (see Figure 21). For an orbit around L2, similar considerations are applicable. This configuration corresponds to the Vega mission scenario for an L1/L2 orbit as it was realized in LPF. In particular, this means that a Vega rocket is used for launching the spacecraft plus a propulsion module. The Vega launches the spacecraft attached to the propulsion module to a low Earth orbit (LEO). In LEO, the spacecraft will orbit Earth for some time for initial system checks and for preparing the transfer to an L1/L2 orbit. This transfer is performed using the propulsion module dedicated to this purpose. Upon reaching L1/L2, the propulsion module is separated from the science spacecraft. The latter is then injected into a Lissajous orbit around the L1/L2 Lagrangian point of the Sun-Earth system. Figure 21 Sketch of the transfer to a halo orbit around L1. Alternative orbits For the original M3 proposal of MAQRO [1], we investigated the possibility of using a highly elliptic orbit (HEO). More recent investigations [4] showed that an HEO is not a feasible alternative. Apart from possible issues with repeatedly crossing the Van Allen belts, a main issue for MAQRO would be thermal considerations. Figure 22 shows results reported in Ref. [4] for the heat-shield temperature over time in the course of orbital evolution. These results show that there would only be a short time window during which the optical bench reaches a temperature compatible with the requirements of MAQRO. The acquisition of a full interferogram would therefore take about 10 to 30 orbits, increasing the necessary mission lifetime to more than 10 years. Figure 22 Temperature of the heat-shield structure in HEO configuration.
Close to perigee, the thermal-shield structure heats up and then needs more than two weeks to cool down again to fulfill the requirements of MAQRO. Figure from Ref. [4]. A feasible alternative to an L1/L2 orbit would be the orbit suggested for the ASTROD I mission proposal, although it would have to be investigated in more detail how the necessary pointing of the optical telescope of ASTROD I would influence the performance of MAQRO. Mission lifetime The total mission time will be 24 months. In the original MAQRO proposal [1], we stated a lifetime of 6 months based on the heritage from LPF. Upon considering the scientific requirements in more detail, however, we found that 6 months would restrict the amount of data to be taken too much to achieve the scientific goals. This limitation can be overcome by extending the mission lifetime to 24 months. This duration would allow achieving the scientific goals by observing interferograms for several test-particle radii and materials. For example, in two years, we could collect several \(10^{4}\) data points per interferogram for representative sets of five particle radii and three different materials. A possible extension of the mission lifetime would allow for a higher number of tests to be performed. Multiple burns (9 for Vega and 15 for Rockot) raise the apogee to \(1.3\times10^{6}\mbox{ km}\) over 15 days. The following transfer to L1 (\(1.5\times10^{6}\mbox{ km}\) from Earth) takes 30 days. After science-payload commissioning (including an optional bake-out), the heat-shield structure and the optical bench will need to passively cool for about 25 days to reach operating temperature (see Figure 23). After that, the science operation is scheduled to last for 21 months, yielding an overall lifetime of 24 months.
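As a back-of-the-envelope check on this data-taking claim: the sketch below assumes 3×10⁴ points per interferogram (for "several 10⁴") and continuous science operation; both are our assumptions, not stated mission parameters.

```python
# Rough consistency check: can 21 months of science operations deliver
# several 10^4 data points for each of 5 radii x 3 materials?
SECONDS_PER_MONTH = 30.44 * 86400          # average month length in seconds

points_per_interferogram = 3e4             # "several 10^4" (assumed value)
n_configurations = 5 * 3                   # five radii, three materials
total_points = points_per_interferogram * n_configurations

science_time_s = 21 * SECONDS_PER_MONTH
time_per_point_s = science_time_s / total_points
print(f"{total_points:.0f} points, ~{time_per_point_s:.0f} s per data point")
```

This leaves roughly two minutes per data point, with margin for free-fall times plus the loading and moving sequences.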
The ultimate upper limit on the mission lifetime will be determined by the amount of fuel available for the cold-gas micro-thrusters as well as by the amount of buffer gas and test particles available for performing experiments. Figure 23 Cool-down of parts of the thermal-shield structure over time. Starting from room temperature, the optical bench reaches steady state after about 25 days. Due to its small volume and low heat capacity, the test-volume temperature drops more rapidly. Figure from Ref. [4]. System requirements and spacecraft key issues Payload mass budget The MAQRO mass budget is closely based on LPF. As the spacecraft platform of MAQRO is identical to the one of LPF, we shall focus on the MAQRO payload and compare it to LTP, the LPF payload. It is apparent that by omitting the heavy inertial sensor from LPF, the payload mass of MAQRO is dramatically reduced; however, several modifications have to be taken into account. Table 5 compares the total mass of the MAQRO spacecraft to LPF, and Table 6 shows a detailed list of the mass budgets for the MAQRO payload. Table 5 Total mass budget of MAQRO compared to LPF Table 6 Detailed mass budget of the MAQRO payload For the spacecraft, we will assume the same mass as for LPF, including 51 kg for the cold-gas micro-propulsion system. We add 21 kg of additional fuel to the spacecraft mass budget to account for the longer lifetime of MAQRO compared to LPF. Given these estimates, the overall mass of MAQRO including generous margins is nearly identical to the mass of LPF. However, it may be possible to reduce the mass budget of MAQRO by removing or simplifying the disturbance-reduction system (DRS) of LPF in the case of MAQRO. Power budget In Table 7, the power budget of the MAQRO payload is compared to LTP to demonstrate that the power requirements are essentially the same. The power requirements for other parts of the science spacecraft are not listed as they are assumed to be identical.
We therefore conclude that the Pathfinder solar array of \({\sim}680\mbox{ W}\) is sufficient for the needs of MAQRO. Table 7 Power budget for MAQRO Note that a bake-out mechanism for the outermost heat shield (+optical bench) can optionally be included for MAQRO. The heater requires \({\sim}242\mbox{ W}\) for bake-out at 300 K. Before commissioning, LTP and MAQRO only require 30 W. Nevertheless, this high power may render it unfeasible to perform a bake-out unless the necessary power can temporarily be allocated from the science spacecraft. This option will be investigated more closely in the future. The comparison of the total power budget in Table 8 shows that the maximum power consumption of MAQRO in science mode is identical to the power consumption of the spacecraft in transfer orbit. The slight increase in power consumption of MAQRO in science mode with respect to LPF in science mode should be covered by the 680 W supplied by the solar array. If necessary, some of the equipment can be turned off during the long free-fall times. Table 8 Overview of the total power budget for LTP and MAQRO Link budget Communication for MAQRO will be on X-band using low-gain hemispherical and medium-gain horn antennas, just as in Pathfinder. A communication bandwidth of 60 kbps fulfills the down-link bandwidth requirements for MAQRO. Therefore, \({\sim}6\mbox{ W}\) of transmitted radio-frequency (RF) power is sufficient to establish the required down-link rate for on-station nominal operation. As in Pathfinder, it is suggested to use the 35 m antenna of the ground station Cebreros in Spain. Table 9 provides an overview of the link budget.
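The down-link sizing can be verified with a one-line data-volume comparison, using the rates quoted in this proposal (20 kbit/s of continuous science and attitude-control data, a 60 kbps down-link, and an 8-hour daily ground contact); the same numbers also fix the on-board buffer size:

```python
# Data-volume check for the X-band down-link and the on-board mass memory,
# using the rates quoted in the text.
science_rate = 20e3    # bit/s, continuous science + attitude-control data
downlink_rate = 60e3   # bit/s, X-band down-link rate
window = 8 * 3600      # s, daily communication window (Cebreros)

generated = science_rate * 24 * 3600   # bits produced per day
transferable = downlink_rate * window  # bits down-linked per day
print(f"generated    ≈ {generated / 1e9:.3f} Gbit/day")
print(f"transferable ≈ {transferable / 1e9:.3f} Gbit/day")

# Solid-state mass memory needed to buffer three days of science data:
buffer_bytes = science_rate * 3 * 24 * 3600 / 8
print(f"SSMM ≈ {buffer_bytes / 1e6:.0f} MB")
```

Both daily volumes come out at 1.728 Gbit, i.e., the 8-hour window at 60 kbps exactly matches 24 hours of science data at 20 kbit/s, and the three-day buffer of about 648 MB reproduces the \({\sim}650\mbox{ Mbytes}\) SSMM capacity quoted below.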
Table 9 Overview of the communication link budget for LTP and MAQRO for telemetry (TM) and telecommand (TC) Spacecraft thermal design Part of the thermal design consists of standard thermal-control tasks: keeping the overall spacecraft (S/C) and its external and internal units and equipment within the allowable temperature ranges by a proper thermal balance between isolating and radiating outer surfaces, supported by active control elements such as heaters. This will be based on LPF heritage. In addition, for the MAQRO mission, the thermal design has to focus on a proper thermal-interface (I/F) design from the warm S/C to the extremely cold external subsystem, the heat-shield structure. Because the heat-shield structure is supported by an already very stable S/C and because it is coupled well to the extremely stable 3 K environment of deep space, it can be kept at an extremely stable temperature. To achieve good thermal stability for the equipment inside the S/C, similarly to the LPF S/C, the MAQRO S/C internal dissipation fluctuations have to be minimized, and the S/C interior has to be isolated from the solar array because the array inherently introduces solar fluctuations into the S/C. In order to achieve the required \({\le}25\mbox{ K}\) for the optical bench, the (warm) mechanical I/F should be designed as cold as possible, e.g., 270 K, and the S/C surfaces facing towards the external payload should be covered by a high-efficiency multi-layer insulation (20 layers) whose outermost layer has a high emissivity >0.8. This measure will serve for radiative pre-cooling of the outer thermal shield of the payload. Attitude and orbit control Star trackers and a solar sensor are used to determine the attitude. The cold-gas thrusters are exclusively used for attitude control after the propulsion module has been ejected, i.e., there are no reaction wheels.
The attitude and orbit control system (AOCS) for the science module is used whenever no science activity is carried out. It is referred to as the micro-propulsion attitude control system (MPACS) on LPF and can be used in a similar way for MAQRO. Likewise, a similar (or possibly even simplified) version of the DFACS can be adapted from the LPF concept. Redundancy considerations Because the spacecraft is identical to LPF, we profit from the redundancy scheme of LPF. For example, the thrusters are operated in hot redundancy, and the IR laser diodes also feature redundancy. For MAQRO, we will include multiple buffer-gas tanks, each with at least one HCPCF leading to the outer loading mechanism (see Sections 7.1.6 and 7.2.6), and we will use redundant UV diodes based on the idea of redundant pump diodes in LTP. For the two cavities on the optical bench, the roles of input and output can be exchanged as a form of cold redundancy. In addition, we will always guide two UV hollow-core fibers in parallel to the optical bench. For the purposes of the UV grating and of the discharging mechanism, the small displacement between the two fibers is negligible. Vacuum requirements These have been discussed in detail in the M3 mission proposal of MAQRO and in the corresponding published version [1]. There, we showed that the vacuum requirements on the optical bench can be fulfilled on the external platform. The conclusion was that the low temperature of the optical bench will essentially freeze out outgassing processes. Given the optimized design of the thermal shield and optical bench [3, 4], the temperature is even somewhat lower than assumed in Ref. [1]. Outgassing from other, hotter parts of the spacecraft will not affect the experimental region because no part of the spacecraft is in the direct field of view of the optical bench.
Particles outgassing from hotter regions of the spacecraft will have high enough velocities to overcome the gravitational attraction of the spacecraft. This leaves us with three effects that may affect the collision rate of the test particle with residual gas or other particles: • Interplanetary particle density. Around L1/L2, this should readily be compatible with the requirements of MAQRO, i.e., the particle density should be \({\le} 500\mbox{ cm}^{-3}\) [89]. • Solar wind. At 1 astronomical unit (AU), i.e., the Earth-Sun distance, we expect a particle density of \({\sim}10\mbox{ cm}^{-3}\) with velocities \({\le}500\mbox{ km/s}\). Because the spacecraft will partially shield the solar wind, the particle density will be even less. If we assume \({\sim}1\mbox{ cm}^{-3}\), the conservative limit given in Figure 12 shows that this is within the MAQRO requirements. • Leakage of buffer gas to the experimental region. Using venting ducts as shown in Figure 16, it should be possible to keep the amount of buffer gas reaching the experimental region within requirements. This will have to be investigated in more detail in the future. Heat-shield structure A detailed discussion of the thermal considerations for the thermal-shield structure is given in Section 7.1.1 and in Refs. [3, 4]. Here, we will focus on estimating the mass of the structure. In order to conservatively estimate the mass of the shield structure, we will assume that the shields extend even a bit further than detailed in Section 7.1.1. In particular, we assume that they extend far enough to shield the payload even from solar radiation incident at an angle of 45 degrees. The shields’ diameters will still be smaller than the spacecraft diameter. In Ref. [4], we even investigated the case where the shields extend beyond the spacecraft and are exposed to direct radiation from the sun. Given appropriate coating, even this extreme case should be possible.
Under this assumption, and assuming that the apexes of the conical shields are located 10 cm, 15 cm and 20 cm from the spacecraft with opening angles of 7.5, 15 and 22.5 degrees, the areas of the three shields are \(0.9\mbox{ m}^{2}\), \(0.6\mbox{ m}^{2}\) and \(0.4\mbox{ m}^{2}\). For an estimate of the mass, let us assume the specific density of Aluminum and a thickness of 1 mm for the shields. Then the total mass of the three shields is \(m_{\mathrm {layers}} \approx5\mbox{ kg}\). The hollow struts are made from carbon-fiber reinforced plastic (CFRP) of very low thermal conductivity and expansion, as well as good mechanical stability. They are 40 cm long, 2 cm in diameter and have a wall thickness of 1.6 mm, giving a combined weight of less than 1 kg: \(m_{\mathrm{struts}} \approx0.6\mbox{ kg}\). The struts are fitted to the bushings inserted into the base-plate of the optical bench. Each of the three inserts weighs approximately 0.2 kg, giving a total of \(m_{\mathrm{inserts}} \approx0.6\mbox{ kg}\). Assuming a slightly higher weight \(m_{\mathrm{mount}} \approx1\mbox{ kg}\) for mounting the struts to the spacecraft, we get an overall mass of \(m_{\mathrm{shield}} \approx 7\mbox{ kg}\) for the thermal-shield structure (without the optical bench and harness). Protective cover & shield bake-out During transfer to L1 and before ejection of the propulsion module, the thermal shield is covered by an additional protective cover. The weight of the cover is estimated to be 5 kg, based on an aluminum cylinder with 1 m diameter, 0.16 mm wall thickness and a height of 0.5 m. Vacuum quality and outgassing are key aspects of MAQRO. From our analysis in Ref. [1], we found that outgassing is practically completely frozen out at temperatures as low as \({\sim }30\mbox{ K}\). Nevertheless, mainly as a means of risk mitigation for as yet unaccounted effects, it would be very useful to consider a bake-out of the thermal shield and the exterior optical bench before commissioning.
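These estimates can be checked with a short script. The aluminum density is the standard value; the CFRP density is an assumed typical value of \({\sim}1600\mbox{ kg/m}^{3}\), since it is not stated in the text. The script also evaluates the Stefan-Boltzmann law for the radiating area of the outer shield (\({\sim}0.43\mbox{ m}^{2}\)) plus optical bench (\({\sim}0.1\mbox{ m}^{2}\)) quoted in this proposal, to estimate the heating power such a bake-out would require:

```python
import math

# Back-of-envelope checks for the shield structure, using dimensions quoted
# in the text. The CFRP density is an assumed typical value (not stated).
RHO_AL = 2700.0    # kg/m^3, aluminum
RHO_CFRP = 1600.0  # kg/m^3, assumed typical CFRP density

# Mass of the three 1 mm aluminum shield layers:
m_layers = sum([0.9, 0.6, 0.4]) * 1e-3 * RHO_AL
print(f"m_layers ≈ {m_layers:.1f} kg")      # ≈ 5.1 kg, consistent with ~5 kg

# Mass of one hollow CFRP strut (40 cm long, 2 cm diameter, 1.6 mm wall):
r_out, wall, length = 0.01, 1.6e-3, 0.4
cross_section = math.pi * (r_out**2 - (r_out - wall)**2)
m_strut = cross_section * length * RHO_CFRP
print(f"m_strut ≈ {1e3 * m_strut:.0f} g")   # ≈ 59 g per strut

# Radiative power for a bake-out of the outer shield (~0.43 m^2) plus
# optical bench (~0.1 m^2), emissivity ≈ 1 (Stefan-Boltzmann law):
SIGMA = 5.670e-8  # W m^-2 K^-4, Stefan-Boltzmann constant
for T in (300.0, 400.0):
    print(f"P({T:.0f} K) ≈ {SIGMA * 0.53 * T**4:.0f} W")
```

The radiated powers of about 243 W at 300 K and 769 W at 400 K agree with the \({\sim}242\mbox{ W}\) and \({\sim}759\mbox{ W}\) quoted in this proposal to within the unit-emissivity assumption.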
For that purpose, heaters could be attached to the outermost shield and the optical bench. Considering that the outer shield area is approximately \({\sim}0.43\mbox{ m}^{2}\) and that the effective area of the optical bench is \({\sim}0.1\mbox{ m}^{2}\), we obtain a total radiative surface of \({\sim}0.53\mbox{ m}^{2}\) with an emissivity close to 1. This requires a heating power of \(P\approx242\mbox{ W}\) if we bake out at 300 K and a heating power of \(P\sim759\mbox{ W}\) if we bake out at 400 K. The latter is not possible given the solar array of MAQRO. Even baking out at 300 K requires a substantial amount of power. A solution may be to use smaller shields as originally proposed in the M3 proposal or to temporarily allocate power from the science spacecraft during the bake-out procedure. Communication, mass-data storage & ground segment A communication window of 8 hours per day, as in the Pathfinder mission, is sufficient to transfer science data to ground. Data are received by the 35 m Cebreros antenna and transferred to the European space operations center (ESOC) for further processing. Considering a maximum rate of 20 kbit/s of science and attitude-control data during experimental runs, the data recorded during 24 hours of science runs can be transferred to ground at 60 kbit/s in the 8-hour communication window each day. The on-board computer architecture should provide the means to continuously store science data for a period of up to three days in a solid-state mass memory (SSMM), which implies a minimum capacity of \({\sim}650\mbox{ Mbytes}\); this is easily achieved with any modern mass memory (capacities up to 2 Terabit). A brief overview of the main mission requirements is given in Table 10. Table 10 Main mission requirements Science operations & archiving Data for MAQRO are received by the 35 m Cebreros antenna in Spain and then routed to the ESOC in Darmstadt.
The mission operations center (MOC) there ensures that the spacecraft meets its mission objectives, and it operates and maintains the necessary ground-segment infrastructure. Because of the L1/L2 orbit, there will only be 8 h of ground-station contact per day at a down-link rate of 60 kbps. The payload is commanded via Payload Operation Requests (POR) stored in the mission time line. Real-time commanding only occurs during commissioning and contingency events. The Science & Technology Operations Center (STOC), located in Madrid, is responsible for the planning of the payload operations, data analysis, and the mission archive. Scientific advisers and investigators will collaborate with the core STOC team. Volume requirements for data archiving and distribution are rather low for MAQRO. The total data received over 2 years is estimated to be well below 1 TB, including diagnostic and house-keeping data. Mission phases The spacecraft is injected into a low orbit by the launch vehicle. Separation from the upper stage may occur in sunlight or eclipse. Following separation, the Chemical Propulsion Subsystem is initialized following a sequence controlled by the on-board software (OBSW). During this period, which may partly be in eclipse, there is no attitude control, and the spacecraft tumbles uncontrolled, with power mainly or solely from the battery (a 600 Wh battery fulfills the needs of MAQRO). Once sensors and actuators are available, a transition to Sun Acquisition mode is autonomously performed. After the initial injection into an elliptical earth orbit, the propulsion module is used to transfer the spacecraft to L1 via 8 apogee-raising orbits. Shortly before reaching the final on-station orbit around L1, the propulsion module (PRM) and the protective cover of the thermal-shield structure are separated from the Science Module (SCM). After separation, the spacecraft is spin-stabilized and sun-pointing.
The nominal attitude profile is maintained using the micro-propulsion subsystems. Based on technological heritage from LPF, MICROSCOPE and Gaia, MAQRO will use cold-gas thrusters providing up to 100 μN variable thrust for the full mission lifetime. Passive cooling & calibration Directly after commissioning, when the protective cover over the thermal-shield structure is removed, the structure will start to passively cool via radiation to deep space. This cooling period takes about 25 days (see Figure 23). This period can simultaneously be used for testing and calibration. In particular, we can perform tests of the following components and procedures: • IR and UV laser systems • Locking the cavities • CMOS system • Internal and external accelerometers • Loading and characterizing nanoparticles in the buffer-gas chambers • Using the accelerometers to measure possible acceleration due to gas leakage • Test runs switching combinations of thrusters on and off, and their influence on spacecraft attitude • Transferring particles to the optical bench • Discharging particles • Loading particles into the optical trap • Measuring the particle position • Releasing and recapturing a particle • Applying the UV phase grating to a particle • Disposing of nanoparticles • Monitoring the development of the heat-shield temperature over time, determining cooling rates, and comparing with simulations Once enough time has passed to achieve the operating temperature, and after the initial tests are completed, MAQRO can start science operation. Science operation The first experiments to run on MAQRO will observe wave-packet expansion to determine the level of decoherence present in our system (see Section 5.2). We will perform tests for at least 3 different particle materials of different mass density. For each particle type, we will perform the experiment with at least 5 different radii. All these tests, including possible repetitions, should be completed within the first 10 months after commissioning.
If these first experiments demonstrate that everything works, and that the decoherence present is small enough, we can switch to the second and most important stage of MAQRO: observing high-mass matter-wave interference (see Section 5.3). If it becomes clear earlier that the prerequisites for performing these experiments are fulfilled, this second sequence of tests can be started earlier in the mission. The main goals of the mission should be achieved within the first 20 months after commissioning, leaving some time to perform additional experiments or to repeat experiments to increase statistical significance. If the MAQRO instrument is still operating after the nominal mission lifetime and an extension of the lifetime is granted, additional experiments may be performed to increase the scientific output of the mission; for example, performing experiments on wave-packet expansion or high-mass interferometry repeatedly using the same test particle and inferring the influence of particle heating and thermal radiation on the measurement results. Moreover, parameters can be varied in finer steps, effects like micro-thruster noise on the measurement results can be investigated, and it would be possible to precisely determine the prepared quantum state by performing time-of-flight quantum-state tomography [90]. Spacecraft disposal In general, the halo orbits around L1/L2 are unstable, and there is no direct need for spacecraft disposal. In order to enable a safe disposal of the spacecraft after the end of science operations and to shorten the drift time, we can either use part of the mission lifetime and the corresponding fuel, or we can add extra fuel for a disposal after the nominal scheduled lifetime. This will have to be investigated in more detail in the definition phase of the mission.
Changes compared to the M3 proposal of MAQRO As we described in the introduction (Section 1), significant progress has been made since the original MAQRO proposal [1]. The proposal has gained increasing support from the international scientific community and from industrial partners. Moreover, since the first proposal, we performed in-depth studies to clearly define the scientific and corresponding technical requirements [2], and we identified the critical issues as well as possible ways to address them. This progress helped us to better define the time line for technology development and the related costs. The design of the mission itself has been improved continuously: • detailed thermal studies of the thermal-shield concept were performed to ensure that the strict technical requirements in terms of environment temperature and vacuum conditions are met [3, 4]. • in collaboration with ONERA, we devised a concept for integrating high-sensitivity inertial sensors to gain sufficient control of the spacecraft attitude and to monitor non-inertial movements of the spacecraft. This is essential for resolving the position of the test particle with sufficient accuracy. • we extended the mission lifetime to allow for longer science operation and higher statistical significance of the data gathered, and we adapted the spacecraft to fulfill the mass-budget requirements of the intended launcher despite the increased amount of fuel necessary. Moreover, we improved the scientific instrument: • we adapted the scientific instrument to harness established techniques from near-field high-mass matter-wave interferometry. • in contrast to the earlier proposal, the present one does not rely on using potentially unfeasible UV wavelengths. • we described additional modes of operation for the scientific instrument (see Section 5). These will make it possible to significantly extend the parameter range accessible to the instrument.
• in contrast to the earlier proposal, we suggest a more realistic mechanism for loading test particles into the optical trap based on novel developments in laboratory experiments. • in contrast to the earlier proposal, we propose a more realistic means for achieving 3D cooling of the center-of-mass motion of the test particles. • we modified the operation of the instrument from continuously using the same particle to using a different particle for each measurement run. While this increases the demands on the loading mechanism, it lowers the risk of decoherence due to heating the test particle. • we adopted a realistic scenario for the CMOS imaging system based on technological heritage from the JWST. In addition to these mission-specific improvements, rapid technological progress over the last years has helped increase the TRLs of several key technologies for MAQRO: • optomechanical cooling close to the ground state was demonstrated for various architectures [14-16]. • optomechanical cooling of optically trapped particles was demonstrated [5-9]. • many groups investigated different methods for loading test particles into optical traps and for achieving optical trapping even in UHV [11-13]. • adhesively bonded optical cavities using space-proof glue and ULE material have been implemented [17]. Demonstrations of space-proof cavity designs for MAQRO in relevant environments are planned in the near future. Conclusions & outlook We have presented an updated version of the proposal for a medium-sized space mission, MAQRO, originally proposed in 2010. This proposal was submitted in response to the ESA ‘M4’ call for a mission opportunity for a medium-size space mission. The main scientific objective of the mission is testing quantum theory using high-mass matter-wave interferometry in combination with novel techniques from quantum optomechanics.
The update includes several significant changes with respect to the original proposal in order to address novel developments as well as critical issues in the original mission proposal. In particular, we presented an update of the thermal-shield design that allows performing high-mass matter-wave interference on a separate platform outside the spacecraft in order to fulfill the strict temperature and vacuum requirements of MAQRO. We introduced a novel type of matter-wave interferometer adapted for a microgravity setting as well as novel schemes for loading test particles into the central optical trap to meet the stringent requirements of MAQRO. This novel approach promises to overcome principal limitations of ground-based experiments and to resolve technical limitations of the earlier proposal by harnessing state-of-the-art space technology, well-established techniques of matter-wave interferometry, and recent developments in quantum optomechanics using optically trapped dielectric particles. MAQRO will offer the unique opportunity to investigate a yet untested parameter regime, allowing us to probe for a quantum-to-classical transition and for possible novel effects at the interface between quantum and gravitational physics. Moreover, the high sensitivity of the MAQRO instrument might even allow testing a specific type of low-energy dark-matter models [33, 34]. The present proposal highlights the rapid progress in recent years in achieving quantum control over macroscopic optomechanical systems and in harnessing space as an intriguing new environment for tests of the foundations of physics. MAQRO may prove a pathfinder for quantum technology in space, opening the door to a range of future applications in high-sensitivity measurements using techniques from quantum optomechanics and matter-wave interferometry. 1. Kaltenbaek R, Hechenblaikner G, Kiesel N, Romero-Isart O, Schwab KC, Johann U, Aspelmeyer M. Macroscopic quantum resonators (MAQRO). Exp Astron. 2012;34(2):123-64.
doi:10.1007/s10686-012-9292-3. 2. Kaltenbaek R, Hechenblaikner G, Kiesel N, Blaser F, Gröblacher S, Hofer S, Vanner MR, Wieczorek W, Schwab KC, Johann U, Aspelmeyer M. Macroscopic quantum experiments in space using massive mechanical resonators. Technical report. Study under contract with ESA, Po P5401000400; 2012. 3. Hechenblaikner G, Hufgard F, Burkhardt J, Kiesel N, Johann U, Aspelmeyer M, Kaltenbaek R. How cold can you get in space? Quantum physics at cryogenic temperatures in space. New J Phys. 2014;16(1):013058. doi:10.1088/1367-2630/16/1/013058. 4. Pilan Zanoni A, Kaltenbaek R, Burkhardt J, Johann U, Hechenblaikner G. Performance of a radiatively cooled system for quantum optomechanical experiments in space. arXiv:1508.01032 (2015). 5. Li T, Kheifets S, Raizen MG. Millikelvin cooling of an optically trapped microsphere in vacuum. Nat Phys. 2011;7(7):527-30. doi:10.1038/nphys1952. 6. Gieseler J, Deutsch B, Quidant R, Novotny L. Subkelvin parametric feedback cooling of a laser-trapped nanoparticle. Phys Rev Lett. 2012;109(10):103603. doi:10.1103/PhysRevLett.109.103603. 7. Kiesel N, Blaser F, Delić U, Grass D, Kaltenbaek R, Aspelmeyer M. Cavity cooling of an optically levitated submicron particle. Proc Natl Acad Sci USA. 2013;110(35):14180-5. doi:10.1073/pnas.1309167110. 8. Millen J, Fonseca PZG, Mavrogordatos T, Monteiro TS, Barker PF. Cavity cooling a single charged levitated nanosphere. Phys Rev Lett. 2015;114:123602. doi:10.1103/PhysRevLett.114.123602. 9. Fonseca PZG, Aranas EB, Millen J, Monteiro TS, Barker PF. Nonlinear dynamics and millikelvin cavity-cooling of levitated nanoparticles. arXiv:1511.08482 (2015). 10. Schmid P, Sezer U, Horak J, Aspelmeyer M, Arndt M, Kaltenbaek R. Trapped nanoparticles for space experiments. Technical report. Study conducted under contract with the European Space Agency, AO/1-6889/11/NL/CBi; 2014. 11. Gieseler J, Novotny L, Quidant R. Thermal nonlinearities in a nanomechanical oscillator.
Nat Phys. 2013;9(12):806-10. doi:10.1038/nphys2798. 12. Millen J, Deesuwan T, Barker P, Anders J. Nanoscale temperature measurements using non-equilibrium Brownian dynamics of a levitated nanosphere. Nat Nanotechnol. 2014;9(6):425-9. doi:10.1038/nnano.2014.82. 13. Mestres P, Berthelot J, Spasenović M, Gieseler J, Novotny L, Quidant R. Cooling and manipulation of a levitated nanoparticle with an optical fiber trap. Appl Phys Lett. 2015;107:151102. doi:10.1063/1.4933180. 14. O’Connell AD, Hofheinz M, Ansmann M, Bialczak RC, Lenander M, Lucero E, Neeley M, Sank D, Wang H, Weides M, Wenner J, Martinis JM, Cleland AN. Quantum ground state and single-phonon control of a mechanical resonator. Nature. 2010;464:697-703. doi:10.1038/nature08967. 15. Teufel JD, Donner T, Li D, Harlow JW, Allman MS, Cicak K, Sirois AJ, Whittaker JD, Lehnert KW, Simmonds RW. Sideband cooling of micromechanical motion to the quantum ground state. Nature. 2011;475(7356):359-63. doi:10.1038/nature10261. 16. Chan J, Mayer Alegre TP, Safavi-Naeini AH, Hill JT, Krause A, Gröblacher S, Aspelmeyer M, Painter O. Laser cooling of a nanomechanical oscillator into its quantum ground state. Nature. 2011;478(7367):89-92. doi:10.1038/nature10461. 17. Kaltenbaek R, Hechenblaikner G, Schuldt T, Pilan-Zanoni A, Kiesel N, Burkhardt J, Aspelmeyer M, Braxmaier C, Johann U. Design and build of a glued cavity with good optical access for experiments in quantum optomechanics. In preparation; 2016. 18. Bateman J, Nimmrichter S, Hornberger K, Ulbricht H. Near-field interferometry of a free-falling nanoparticle from a point-like source. Nat Commun. 2014;5:4788. doi:10.1038/ncomms5788. 19. Lin G, Fürst JU, Strekalov DV, Yu N. Wide-range cyclic phase matching and second harmonic generation in whispering gallery resonators. Appl Phys Lett. 2013;103(18):181107. doi:10.1063/1.4827538. 20. Gebert F, Frosz MH, Weiss T, Wan Y, Ermolov A, Joly NY, Schmidt PO, Russell PSJ.
Damage-free single-mode transmission of deep-UV light in hollow-core PCF. Opt Express. 2014;22(13):15388-96. doi:10.1364/OE.22.015388. 21. Bassi A, Ghirardi G. Dynamical reduction models. Phys Rep. 2003;379(5-6):257-426. doi:10.1016/S0370-1573(03)00103-0. 22. Bassi A, Ippoliti E, Adler S. Towards quantum superpositions of a mirror: an exact open systems analysis. Phys Rev Lett. 2005;94(3):030401. doi:10.1103/PhysRevLett.94.030401. 23. Bassi A, Lochan K, Satin S, Singh T, Ulbricht H. Models of wave-function collapse, underlying theories, and experimental tests. Rev Mod Phys. 2013;85(2):471-527. doi:10.1103/RevModPhys.85.471. 24. Ghirardi GC, Rimini A, Weber T. Unified dynamics for microscopic and macroscopic systems. Phys Rev D. 1986;34(2):470-91. doi:10.1103/PhysRevD.34.470. 25. Gisin N. Stochastic quantum dynamics and relativity. Helv Phys Acta. 1989;62:363-71. 26. Pearle P. Combining stochastic dynamical state-vector reduction with spontaneous localization. Phys Rev A. 1989;39(5):2277-89. doi:10.1103/PhysRevA.39.2277. 27. Ghirardi GC, Pearle P, Rimini A. Markov processes in Hilbert space and continuous spontaneous localization of systems of identical particles. Phys Rev A. 1990;42(1):78-89. doi:10.1103/PhysRevA.42.78. 28. Diósi L. Gravitation and quantum-mechanical localization of macro-objects. Phys Lett A. 1984;105(4-5):199-202. doi:10.1016/0375-9601(84)90397-9. 29. Penrose R. On gravity’s role in quantum state reduction. Gen Relativ Gravit. 1996;28:581-600. doi:10.1007/BF02105068. 30. Diósi L. Notes on certain Newton gravity mechanisms of wavefunction localization and decoherence. J Phys A, Math Theor. 2007;40(12):2989-95. doi:10.1088/1751-8113/40/12/S07. 31. Jaekel M, Reynaud S. Gravitational quantum limit for length measurements. Phys Lett A. 1994;185(2):143-8. doi:10.1016/0375-9601(94)90838-9. 32. Lamine B, Hervé R, Lambrecht A, Reynaud S. Ultimate decoherence border for matter-wave interferometry.
Phys Rev Lett. 2006;96(5):050405. doi:10.1103/PhysRevLett.96.050405. 33. Riedel CJ. Direct detection of classically undetectable dark matter through quantum decoherence. Phys Rev D. 2013;88(11):116005. doi:10.1103/PhysRevD.88.116005. 34. Bateman J, McHardy I, Merle A, Morris TR, Ulbricht H. On the existence of low-mass dark matter and its direct detection. Sci Rep. 2015;5:8058. doi:10.1038/srep08058. 35. Zych M, Costa F, Pikovski I, Brukner Č. Quantum interferometric visibility as a witness of general relativistic proper time. Nat Commun. 2011;2:505. doi:10.1038/ncomms1498. 36. Pikovski I, Zych M, Costa F, Brukner Č. Universal decoherence due to gravitational time dilation. Nat Phys. 2015;11(8):668-72. doi:10.1038/nphys3366. 37. Abbott B, Abbott R, Adhikari R, Ajith P, Allen B, Allen G, et al., LIGO Scientific Collaboration. Observation of a kilogram-scale oscillator near its quantum ground state. New J Phys. 2009;11(7):073032. doi:10.1088/1367-2630/11/7/073032. 38. Armano M, Benedetti M, Bogenstahl J, Bortoluzzi D, Bosetti P, Brandt N, et al. LISA Pathfinder: the experiment and the route to LISA. Class Quantum Gravity. 2010;26(9):094001. doi:10.1088/0264-9381/26/9/094001. 39. Anza S, Armano M, Balaguer E, Benedetti M, Boatella C, Bosetti P, et al. The LTP experiment on the LISA Pathfinder mission. Class Quantum Gravity. 2005;22(10):S125-S138. doi:10.1088/0264-9381/22/10/001. 40. Lindegren L, Babusiaux C, Bailer-Jones C, Bastian U, Brown AGA, Cropper M, Høg E, Jordi C, Katz D, van Leeuwen F, Luri X, Mignard F, de Bruijne JHJ, Prusti T. The Gaia mission: science, organization and present status. Proc Int Astron Union. 2008;3(S248):217-23. doi:10.1017/S1743921308019133. 41. Drinkwater MR, Haagmans R, Muzi D, Popescu A, Floberghagen R, Kern M, Fehringer M. The GOCE gravity mission: ESA’s first core Earth explorer. In: Proceedings of 3rd international GOCE user workshop, ESA SP-627. 6-8 Nov., 2006, Frascati, Italy; 2007. p. 1-8.
42. Marque J-P, Christophe B, Foulon B. In-orbit data of the accelerometers of the ESA GOCE mission. In: 61st international astronautical congress. vol. 6. Prague, CZ; 2010. p. 10-131. 43. Touboul P, Rodrigues M. The MICROSCOPE space mission. Class Quantum Gravity. 2001;18(13):2487. doi:10.1088/0264-9381/18/13/311. 44. Liorzou F, Boulanger D, Rodrigues M, Touboul P, Selig H. Free fall tests of the accelerometers of the MICROSCOPE mission. Adv Space Res. 2014;54(6):1119-28. doi:10.1016/j.asr.2014.05.009. 45. Sheard BS, Heinzel G, Danzmann K, Shaddock DA, Klipstein WM, Folkner WM. Intersatellite laser ranging instrument for the GRACE follow-on mission. J Geod. 2012;86(12):1083-95. doi:10.1007/s00190-012-0566-3. 46. Christophe B, Boulanger D, Foulon B, Huynh P-A, Lebat V, Liorzou F, Perrot E. A new generation of ultra-sensitive electrostatic accelerometers for GRACE follow-on and towards the next generation gravity missions. Acta Astronaut. 2015;117:1-7. doi:10.1016/j.actaastro.2015.06.021. 47. Lightsey PA. James Webb Space Telescope: large deployable cryogenic telescope in space. Opt Eng. 2012;51(1):011003. doi:10.1117/1.OE.51.1.011003. 48. Schrödinger E. Die gegenwärtige Situation in der Quantenmechanik. Naturwissenschaften. 1935;23:807. 49. Davisson C, Germer LH. The scattering of electrons by a single crystal of nickel. Nature. 1927;119:558-60. 50. Thomson GP. The diffraction of cathode rays by thin films of platinum. Nature. 1927;120:802. 51. Estermann I, Stern O. Beugung von Molekularstrahlen. Z Phys. 1930;61:95-125. 52. Eibenberger S, Gerlich S, Arndt M, Mayor M, Tüxen J. Matter-wave interference of particles selected from a molecular library with masses exceeding 10,000 amu. PCCP, Phys Chem Chem Phys. 2013;15(35):14696-700. doi:10.1039/c3cp51500a. 53. Adler SL, Bassi A. Is quantum theory exact? Science. 2009;325(5938):275-6. doi:10.1126/science.1176858. 54. Aspelmeyer M, Kippenberg TJ, Marquardt F.
Cavity optomechanics. Rev Mod Phys. 2014;86(4):1391-452. doi:10.1103/RevModPhys.86.1391. 55. 55. Collett B, Pearle P. Wavefunction collapse and random walk. Found Phys. 2003;33(10):1495-541. doi:10.1023/A:1026048530567. 56. 56. Bahrami M, Paternostro M, Bassi A, Ulbricht H. Proposal for a noninterferometric test of collapse models in optomechanical systems. Phys Rev Lett. 2014;112(21):210404. doi:10.1103/PhysRevLett.112.210404. 57. 57. Kaltenbaek R, Aspelmeyer M. Optomechanical Schrödinger cats - a case for space. In: Reiter WL, Yngvason J, editors. Erwin Schrödinger - 50 years after. Vienna: Eur. Math. Soc.; 2013. p. 123-32. doi:10.4171/121-1/6. 58. 58. Bera S, Motwani B, Singh TP, Ulbricht H. A proposal for the experimental detection of CSL induced random walk. Sci Rep. 2015;5:7664. doi:10.1038/srep07664. 59. 59. Zurek WH. Decoherence and the transition from quantum to classical. Phys Today. 1991;44(10):36. doi:10.1063/1.881293. 60. 60. Schlosshauer MA. Decoherence and the quantum-to-classical transition. Berlin: Springer; 2007. doi:10.1007/978-3-540-35775-9. 61. 61. D’Ariano G, Yuen H. Impossibility of measuring the wave function of a single quantum system. Phys Rev Lett. 1996;76(16):2832-5. doi:10.1103/PhysRevLett.76.2832. 62. 62. Ellis J. Search for violations of quantum mechanics. Nucl Phys B. 1984;241(2):381-405. doi:10.1016/0550-3213(84)90053-1. 63. 63. Ellis J, Mohanty S, Nanopoulos DV. Quantum gravity and the collapse of the wavefunction. Phys Lett B. 1989;221(2):113-9. doi:10.1016/0370-2693(89)91482-2. 64. 64. Adler SL. Lower and upper bounds on CSL parameters from latent image formation and IGM heating. J Phys A, Math Theor. 2007;40(12):2935-57. doi:10.1088/1751-8113/40/12/S03. 65. 65. Hornberger K, Gerlich S, Haslinger P, Nimmrichter S, Arndt M. Colloquium: quantum interference of clusters and molecules. Rev Mod Phys. 2012;84(1):157-73. doi:10.1103/RevModPhys.84.157. 66. 66. Brezger B, Arndt M, Zeilinger A. 
Concepts for near-field interferometers with large molecules. J Opt B, Quantum Semiclass Opt. 2003;5(2):S82-S89. doi:10.1088/1464-4266/5/2/362. 67. 67. Haslinger P, Dörre N, Geyer P, Rodewald J, Nimmrichter S, Arndt M. A universal matter-wave interferometer with optical ionization gratings in the time domain. Nat Phys. 2013;9(3):144-8. doi:10.1038/nphys2542. 68. 68. Hornberger K, Gerlich S, Ulbricht H, Hackermüller L, Nimmrichter S, Goldt IV, Boltalina O, Arndt M. Theory and experimental verification of Kapitza-Dirac-Talbot-Lau interferometry. New J Phys. 2009;11(4):043032. doi:10.1088/1367-2630/11/4/043032. 69. 69. Chang DE, Regal CA, Papp SB, Wilson DJ, Ye J, Painter O, Kimble HJ, Zoller P. Cavity opto-mechanics using an optically levitated nanosphere. Proc Natl Acad Sci USA. 2010;107:1005-10. doi:10.1073/pnas.0912969107. 70. 70. Romero-Isart O, Juan ML, Quidant R, Cirac JI. Toward quantum superposition of living organisms. New J Phys. 2010;12:33015. doi:10.1088/1367-2630/12/3/033015. 71. 71. Nimmrichter S. Macroscopic matter wave interferometry. Switzerland: Springer; 2014. doi:10.1007/978-3-319-07097-1. 72. 72. de Broglie L. Waves and quanta. Nature. 1923;112:540. 73. 73. Kaltenbaek R. Testing quantum physics in space using optically trapped nanospheres. Proc SPIE. 2013;8810:88100B. doi:10.1117/12.2027051. 74. 74. Leger A. DARWIN mission proposal to ESA. arXiv:0707.3385 (2007). 75. 75. Loose M, Beletic J, Garnett J, Muradian N. Space qualification and performance results of the SIDECAR ASIC. In: Mather JC, MacEwen HA, de Graauw MWM, editors. SPIE astronomical telescopes and instrumentation I. vol. 6265. Orlando: International Society for Optics and Photonics; 2006. 62652J. doi:10.1117/12.672705. 76. 76. Bai Y, Bajaj J, Beletic JW, Farris MC, Joshi A, Lauxtermann S, Petersen A, Williams G. Teledyne imaging sensors: silicon CMOS imaging technologies for X-ray, UV, visible, and near infrared. In: High energy, optical, and infrared detectors for astronomy III. 
Proceedings of SPIE. vol. 7021; 2008. 702102. doi:10.1117/12.792316. 77. 77. Kubanek A, Koch M, Sames C, Ourjoumtsev A, Pinkse PWH, Murr K, Rempe G. Photon-by-photon feedback control of a single-atom trajectory. Nature. 2009;462(7275):898-901. doi:10.1038/nature08563. 78. 78. Yin Z, Li T, Feng M. Three-dimensional cooling and detection of a nanosphere with a single cavity. Phys Rev A. 2011;83(1):013816. doi:10.1103/PhysRevA.83.013816. 79. 79. Lafargue L, Rodrigues M, Touboul P. Towards low-temperature electrostatic accelerometry. Rev Sci Instrum. 2002;73(1):196. doi:10.1063/1.1416103. 80. 80. Tröbs M, Weßels P, Fallnich C, Bode M, Freitag I, Skorupka S, Heinzel G, Danzmann K. Laser development for LISA. Class Quantum Gravity. 2006;23(8):S151-S158. doi:10.1088/0264-9381/23/8/S20. 81. 81. Hildebrandt L, Knispel R, Stry S, Sacher JR, Schael F. Antireflection-coated blue GaN laser diodes in an external cavity and Doppler-free indium absorption spectroscopy. Appl Opt. 2003;42(12):2110. doi:10.1364/AO.42.002110. 82. 82. Vasilyev S, Nevsky A, Ernsting I, Hansen M, Shen J, Schiller S. Compact all-solid-state continuous-wave single-frequency UV source with frequency stabilization for laser cooling of Be+ ions. Appl Phys B. 2011;103(1):27-33. doi:10.1007/s00340-011-4435-1. 83. 83. Pound RV. Electronic frequency stabilization of microwave oscillators. Rev Sci Instrum. 1946;17(11):490. doi:10.1063/1.1770414. 84. 84. Drever RWP, Hall JL, Kowalski FV, Hough J, Ford GM, Munley AJ, Ward H. Laser phase and frequency stabilization using an optical resonator. Appl Phys B. 1983;31(2):97-105. doi:10.1007/BF00702605. 85. 85. Benabid F, Knight J, Russell P. Particle levitation and guidance in hollow-core photonic crystal fiber. Opt Express. 2002;10(21):1195. doi:10.1364/OE.10.001195. 86. 86. Schmidt OA, Euser TG, Russell PSJ. Mode-based microparticle conveyor belt in air-filled hollow-core photonic crystal fiber. Opt Express. 2013;21(24):29383-91. doi:10.1364/OE.21.029383. 87. 87. 
Acknowledgements
AB acknowledges financial support from NANOQUESTFIT, INFN, and the COST Action MP1006. AR is supported by the DLR, Grant No. DLR 50WM1136. LN acknowledges support by ERC-QMES (no. 338763). RK acknowledges support by the FFG (no. 3589434).

Author information
Correspondence to Rainer Kaltenbaek.
Additional information

Acronyms & Abbreviations
amu, atomic mass unit; AOCS, attitude and orbit control system; AOM, acousto-optic modulator; AU, astronomical unit (Earth-Sun distance); CBE, current best estimate; CFRP, carbon-fiber reinforced plastic; CMOS, complementary metal-oxide semiconductor; CSL, continuous spontaneous localization; CW, continuous wave; DFACS, drag-free and attitude control system; DLR, German space agency; DMU, data-management unit; DP, Diósi-Penrose; DR, dichroic mirror; DRS, disturbance-reduction system; DWR, dual-wavelength mirror; EOM, electro-optic modulator; ESA, European Space Agency; ESOC, European space operations center; FEE, front-end electronics; FFG, Austrian Research Promotion Agency; FSR, free spectral range; GOCE, Gravity field and steady-state Ocean Circulation Explorer; GRACE, Gravity Recovery and Climate Experiment; HCPCF, hollow-core photonic-crystal fiber; HEO, highly elliptic orbit; I/F, interface; IR, infrared; JWST, James Webb Space Telescope; K model, Károlyházy model; L1, Sun-Earth Lagrange Point 1; L2, Sun-Earth Lagrange Point 2; LEO, low Earth orbit; LISA, Laser Interferometer Space Antenna; LPF, LISA Pathfinder; LTP, LISA Technology Package; M3, 3rd Cosmic Vision call for a medium-sized mission; M4, 4th Cosmic Vision call for a medium-sized mission; MAQRO, Macroscopic Quantum Resonators; MOC, mission operations center; MPACS, micropropulsion attitude control system; NPRO, non-planar ring oscillator; OBSW, on-board software; OTIMA, optical time-domain ionizing matter-wave interferometer; PDH, Pound-Drever-Hall; PRM, propulsion module; QG, quantum gravity; RF, radio frequency; S/C, spacecraft; SSMM, solid-state mass memory; TC, telecommand; TEM, transverse electromagnetic mode; TM, telemetry; TRL, Technology Readiness Level; TRL 1, basic principles observed; TRL 2, technology concept formulated; TRL 3, experimental proof of concept; TRL 4, technology validated in lab; TRL 5, technology validated in relevant
environment; TRL 6, technology demonstrated in relevant environment; TRL 7, system prototype demonstration in operational environment; TRL 8, system complete and qualified; TRL 9, actual system proven in operational environment; UHV, ultra-high vacuum; ULE, ultra-low-expansion; UV, ultraviolet; UVC, ultraviolet fiber coupler; UVR, ultraviolet mirror.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
RK, GH, KCS, MA, NK, and UJ conceived of the original mission proposal. RK, GH, MA, NK, and UJ conceived of the thermal design of the main scientific instrument. APZ, RK, GH, and UJ improved the thermal design for the M4 mission proposal. JB, HU, and RK developed the idea of adapting the MAQRO instrument to near-field interferometry. PFB provided critical input with respect to particle charging. NK and RK conceived of the mechanism for particle cooling, loading, transport and discharging. CB, NG, RK and TS contributed to the definition of work packages for technology development. NG and RK defined the system-management work packages. RK, GH, MC, and UJ defined the mission orbital parameters and estimated the industrial costs. GH, MC, RK, and UJ estimated the vacuum level to be expected. MC, RK, and UJ defined the launch and separation sequence. AP, JG, KD, LN, LR, MA, and RK organized scientific meetings for discussions on the mission design and goals. All authors contributed to central discussions resulting in the presented mission layout and design, and all authors read and approved the final manuscript. In the MAQRO Consortium, names after the first author are sorted alphabetically.

Cite this article
Kaltenbaek, R., Aspelmeyer, M., Barker, P.F. et al. Macroscopic Quantum Resonators (MAQRO): 2015 update. EPJ Quantum Technol. 3, 5 (2016).
Keywords: space; quantum physics; quantum optomechanics; matter waves; optical trapping
4.2.53. VIBROT

The program VIBROT is used to compute a vibration-rotation spectrum for a diatomic molecule, using as input a potential computed over a grid. The grid should be dense around equilibrium (recommended spacing 0.05 au) and should extend to large distances (say, 50 au) if dissociation energies are computed. The potential is fitted to an analytical form using cubic splines. The ro-vibrational Schrödinger equation is then solved numerically (using Numerov's method) for one vibrational state at a time and for a number of rotational quantum numbers as specified in the input. The corresponding wave functions are stored on the file VIBWVS for later use. The ro-vibrational energies are analyzed in terms of spectroscopic constants. Weakly bound potentials can be scaled for better numerical precision.

The program can also be fed with property functions, such as a dipole moment curve. Matrix elements over the ro-vib wave functions for the property in question are then computed. These results can be used to compute IR intensities and vibrational averages of different properties.

VIBROT can also be used to compute transition properties between different electronic states. The program is then run twice to produce two files of wave functions. These files are used as input in a third run, which computes transition matrices for the input properties. The main use is to compute transition moments, oscillator strengths, and lifetimes for ro-vib levels of electronically excited states. The asymptotic energy difference between the two electronic states must be provided using the ASYMptotic keyword.

Dependencies
VIBROT is free-standing and does not depend on any other program.

Files

Input files
The calculation of vibrational wave functions and spectroscopic constants uses no input files (except for the standard input). The calculation of transition properties uses VIBWVS files from two preceding VIBROT runs, redefined as VIBWVS1 and VIBWVS2.
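Numerov's method, mentioned above as the integrator for the ro-vibrational Schrödinger equation, can be sketched in a few lines. The following Python toy is not the VIBROT implementation: the harmonic-oscillator potential, grid, and bracketing interval are invented purely for illustration, but the recurrence is the standard Numerov scheme, and combining it with bisection on the boundary value ("shooting") locates a bound-state energy:

```python
import numpy as np

def numerov_wavefunction(E, x, V):
    """Integrate u'' = 2(V - E) u outward with the Numerov recurrence.

    Atomic units (hbar = m = 1), matching the 'au' grids VIBROT reads.
    Starts from u[0] = 0, u[1] = h (arbitrary normalization).
    """
    h = x[1] - x[0]
    f = 1.0 + (h**2 / 12.0) * 2.0 * (E - V)  # Numerov auxiliary factor
    u = np.zeros_like(x)
    u[1] = h
    for i in range(1, len(x) - 1):
        u[i + 1] = ((12.0 - 10.0 * f[i]) * u[i] - f[i - 1] * u[i - 1]) / f[i + 1]
    return u

def shoot(E, x, V):
    """Boundary value at the right edge; it changes sign at an eigenvalue."""
    return numerov_wavefunction(E, x, V)[-1]

# Toy test case: harmonic oscillator V = x^2/2, exact ground state E = 0.5 au.
x = np.linspace(-6.0, 6.0, 1001)
V = 0.5 * x**2
lo, hi = 0.3, 0.7                    # bracket containing exactly one eigenvalue
for _ in range(60):                  # bisection on the shooting function
    mid = 0.5 * (lo + hi)
    if shoot(lo, x, V) * shoot(mid, x, V) <= 0.0:
        hi = mid
    else:
        lo = mid
E0 = 0.5 * (lo + hi)                 # converges to ~0.5
```

VIBROT solves the same kind of one-dimensional boundary-value problem, one vibrational state at a time, on the user-supplied spline-fitted potential instead of this toy oscillator.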
Output files
VIBROT generates the file VIBWVS with vibrational wave functions for each \(v\) and \(J\) quantum number, when run in the wave-function mode. If requested, VIBROT can also produce files VIBPLT with the fitted potential and property functions for later plotting.

Input
This section describes the input to the VIBROT program in the Molcas program system. The program name is &VIBROT

Keywords
The first keyword to VIBROT indicates the type of calculation to be performed. Two possibilities exist:

ROVIbrational spectrum
VIBROT will perform a vib-rot analysis and compute spectroscopic constants.

TRANsition moments
VIBROT will compute transition moment integrals using results from two previous calculations of the vib-rot wave functions. In this case the keyword Observable should be included, and it will be interpreted as the transition dipole moment.

Note that only one of the above keywords can be used in a single calculation. If none is given, the program will only process the input section. After this first keyword follows a set of keywords used to specify the run. Most of them are optional. The compulsory keywords are:

Atoms
Gives the masses of the two atoms. Write the mass number (an integer) and the chemical symbol Xx, in this order, for each of the two atoms in free format. If the mass number is zero for any atom, the mass of the most abundant isotope will be used. All isotope masses are stored in the program. You may introduce your own masses by giving a negative integer value to the mass number (one of them or both). The masses (in unified atomic mass units, or Da) are then read on the next one or two entries. The isotopes of hydrogen can be given as H, D, or T.

Potential
Gives the potential as an arbitrary number of lines. Each line contains a bond distance (in au) and an energy value (in au). A plot file of the potential is generated if the keyword Plot is added after the last energy input.
One more entry should then follow with three numbers specifying the start and end values for the internuclear distance and the spacing between adjacent plot points. This input must only be given together with the keyword RoVibrational spectrum.

In addition you may want to specify some of the following optional input:

Title
One single title line.

Grid
The next entry gives the number of grid points used in the numerical solution of the radial Schrödinger equation. The default value is 199. The maximum value that can be used is 4999.

Range
The next entry contains two distances, Rmin and Rmax (in au), specifying the range in which the vibrational wave functions will be computed. The default values are 1.0 and 5.0 au. Note that these values most often have to be given as input, since they vary considerably from one case to another. If the range specified is too small, the program will give a message informing the user that the vibrational wave function is large outside the integration range.

Vibrations
The next entry specifies the number of vibrational quanta for which the wave functions and energies are computed. The default value is 3.

Rotations
The next entry specifies the range of rotational quantum numbers. The default values are 0 to 5. If the orbital angular momentum quantum number (\(m_\ell\)) is non-zero, the lower value will be adjusted to \(m_\ell\) if the start value given in input is smaller than \(m_\ell\).

Orbital
The next entry specifies the value of the orbital angular momentum (0, 1, 2, etc.). The default value is zero.

A further keyword is used to scale the potential such that the binding energy is 0.1 au. This leads to better precision in the numerical procedure and is strongly advised for weakly bound potentials. Another keyword requests that only the wave-function analysis be carried out, but not the calculation of spectroscopic constants.

Observable
This keyword indicates the start of input for radial functions of observables other than the energy, for example the dipole moment function. The next line gives a title for this observable.
An arbitrary number of input lines follows. Each line contains a distance and the corresponding value for the observable. As for the potential, this input can also end with the keyword Plot, to indicate that a file of the function for later plotting is to be constructed; the next line then contains the minimum and maximum R-values and the spacing between adjacent points. When this input is given with the top keyword RoVibrational spectrum, the program will compute matrix elements for vibrational wave functions of the current electronic state. Transition moment integrals are instead obtained when the top keyword is Transition moments. In the latter case the calculation becomes rather meaningless if this input is not provided: the program will then only compute the overlap integrals between the vibrational wave functions of the two states. The keyword Observable can be repeated up to ten times in a single run. All observables should be given in atomic units.

A further entry gives the temperature (in K) at which the vibrational averaging of observables will be computed. The default is 300 K.

Another entry gives the starting value for the energy step used in the bracketing of the eigenvalues. The default value is 0.004 au (88 \(\text{cm}^{-1}\)). This value must be smaller than the zero-point vibrational energy of the molecule.

Asymptotic
The next entry specifies the asymptotic energy difference between the two potential curves in a calculation of transition matrix elements. The default value is zero atomic units.

AllRotational
By default, when the Transition moments keyword is given, only the transitions between the lowest rotational level in each vibrational state are computed. The keyword AllRotational specifies that the transitions between all rotational levels are to be included. Note that this may result in a very large output file.

A final keyword requests the vibrational wave functions to be printed in the output file.
Input example

RoVibrational spectrum
Title = Vib-Rot spectrum for FeNi
Atoms = 0 Fe 0 Ni
1.0 -0.516768
1.1 -0.554562
Plot = 1.0 10.0 0.1
Grid = 150
Range = 1.0 10.0
Vibrations = 10
Rotations = 2 10
Orbital = 2
Observable
Dipole Moment
1.0 0.102354
1.1 0.112898
Plot = 1.0 10.0 0.1

Comments: The vibration-rotation spectrum for \(\ce{FeNi}\) will be computed using the potential curve given in the input. The 10 lowest vibrational levels will be obtained, and for each level the rotational states in the range \(J\) = 2 to 10. The vib-rot matrix elements of the dipole function will also be computed. A plot file of the potential and the dipole function will be generated. The masses of the most abundant isotopes of \(\ce{Fe}\) and \(\ce{Ni}\) will be selected.
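For the transition-moment mode described earlier, a run might look like the following sketch. This is purely illustrative: the numerical values are invented, and the exact keyword spellings for this mode should be checked against the Molcas manual rather than taken from this sketch.

```
&VIBROT
Transition moments
Atoms = 0 Fe 0 Ni
Asymptotic = 0.085
Observable
Transition dipole moment
1.0 0.045120
1.1 0.047881
AllRotational
```

Such a run reads the files VIBWVS1 and VIBWVS2 produced by two preceding RoVibrational runs and computes transition moments, oscillator strengths, and lifetimes between the two electronic states.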
Theory of neuromorphic computing by waves: machine learning by rogue waves, dispersive shocks, and solitons

We study artificial neural networks with nonlinear waves as a computing reservoir. We discuss universality and the conditions to learn a dataset in terms of output channels and nonlinearity. A feed-forward three-layer model, with an encoding input layer, a wave layer, and a decoding readout, behaves as a conventional neural network in approximating mathematical functions, real-world datasets, and universal Boolean gates. The rank of the transmission matrix has a fundamental role in assessing the learning abilities of the wave. For a given set of training points, a threshold nonlinearity for universal interpolation exists. When considering the nonlinear Schrödinger equation, the use of highly nonlinear regimes implies that solitons, rogue waves, and shock waves have a leading role in training and computing. Our results may enable the realization of novel machine-learning devices using diverse physical systems, such as nonlinear optics, hydrodynamics, polaritonics, and Bose-Einstein condensates. The application of these concepts to photonics opens the way to a large class of accelerators and new computational paradigms. In complex wave systems, such as multimode fibers, integrated optical circuits, random and topological devices, and metasurfaces, nonlinear waves can be employed to perform computation and solve complex combinatorial optimization problems.

Couplings between time and orbital angular momentum in propagation-invariant ultrafast vortices

In any form of wave propagation, strong spatiotemporal coupling appears when non-elementary, three-dimensional wave packets are composed by superimposing pure plane waves, or are spontaneously generated by light-matter interaction and nonlinear processes.
Ultrashort pulses with orbital angular momentum (OAM), or ultrashort vortices, furnish a critical paradigm in which the analysis of spatiotemporal coupling, in the form of temporal-OAM coupling, can be carried out accurately with analytical tools. By generalizing and unifying previously reported results, we show that universal and spatially heterogeneous space-time correlations occur in propagation-invariant temporal pulses carrying OAM. In regions of high intensity, the pulse duration has a lower bound fixed by the topological charge of the vortex, such that the duration must increase with the topological charge. In regions of low intensity in the vicinity of the vortex, a large blue-shift of the carrier oscillations and an increase in their number are predicted for strongly twisted beams. We think these very general findings highlight the existence of a structural coupling between space and time, which is relevant at low photon numbers in quantum optics and in highly nonlinear processes such as high-harmonic generation with twisted beams. These results also have applications in multi-level classical and quantum communications (free-space or satellite), spectroscopy, and high-harmonic generation. Miguel A. Porras and C. Conti in arXiv:1911.1222

Controlling rogue waves and soliton gases

Topological control of extreme waves

From optics to hydrodynamics, shock and rogue waves are widespread. Although they appear as distinct phenomena, transitions between extreme waves are allowed. However, these have never been experimentally observed, because control strategies are still missing. We introduce the new concept of topological control, based on the one-to-one correspondence between the number of oscillating phases of a wave packet and the genus of the toroidal surfaces associated with nonlinear Schrödinger equation solutions through Riemann theta functions.
We demonstrate the concept experimentally by reporting observations of supervised transitions between waves with different genera. Considering the box problem in a focusing photorefractive medium, we tailor the time-dependent nonlinearity and dispersion to explore each region in the state diagram of the nonlinear wave propagation. Our result is the first realization of topological control of nonlinear waves. This new technique casts light on shock and rogue wave generation and can be extended to other nonlinear phenomena. Nature Communications volume 10, Article number: 5090 (2019)
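The abstracts above all revolve around solutions of the focusing nonlinear Schrödinger equation, $i\psi_z + \tfrac{1}{2}\psi_{tt} + |\psi|^2\psi = 0$. As a hedged illustration of how such dynamics are usually explored numerically, here is a generic symmetric split-step Fourier integrator; this is not the authors' code, and the grid, step size, and sech initial condition (the fundamental soliton) are arbitrary choices for the demo:

```python
import numpy as np

def split_step_nlse(psi0, t, dz, steps):
    """Symmetric split-step Fourier integrator for the focusing NLSE
    i psi_z + 0.5 psi_tt + |psi|^2 psi = 0 on a periodic grid."""
    dt = t[1] - t[0]
    omega = 2.0 * np.pi * np.fft.fftfreq(len(t), d=dt)
    lin_half = np.exp(-0.5j * omega**2 * (dz / 2.0))  # half step of dispersion
    psi = psi0.astype(complex)
    for _ in range(steps):
        psi = np.fft.ifft(lin_half * np.fft.fft(psi))  # dz/2 of dispersion
        psi = psi * np.exp(1j * np.abs(psi)**2 * dz)   # dz of nonlinearity
        psi = np.fft.ifft(lin_half * np.fft.fft(psi))  # dz/2 of dispersion
    return psi

t = np.linspace(-20.0, 20.0, 1024, endpoint=False)
psi0 = 1.0 / np.cosh(t)      # fundamental soliton: |psi| is invariant under z
psi = split_step_nlse(psi0, t, dz=0.01, steps=500)    # propagate to z = 5
```

Because both sub-steps are unitary, the scheme conserves the norm $\int |\psi|^2\,dt$ to machine precision, and the soliton's envelope is preserved up to the $O(dz^2)$ splitting error; richer initial data (the "box problem" above) produce the shock and rogue-wave dynamics the papers study.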
When developing algorithms in quantum computing, I've noticed that there are two primary models in which this is done. Some algorithms - such as for the Hamiltonian NAND tree problem (Farhi, Goldstone, Gutmann) - work by designing a Hamiltonian and some initial state, and then letting the system evolve according to the Schrödinger equation for some time $t$ before performing a measurement. Other algorithms - such as Shor's algorithm for factoring - work by designing a sequence of unitary transformations (analogous to gates) and applying these transformations one at a time to some initial state before performing a measurement. My question is, as a novice in quantum computing, what is the relationship between the Hamiltonian model and the unitary transformation model? Some algorithms, like for the NAND tree problem, have since been adapted to work with a sequence of unitary transformations (Childs, Cleve, Jordan, Yonge-Mallo). Can every algorithm in one model be transformed into a corresponding algorithm in the other? For example, given a sequence of unitary transformations to solve a particular problem, is it possible to design a Hamiltonian and solve the problem in that model instead? What about the other direction? If so, what is the relationship between the time for which the system must evolve and the number of unitary transformations (gates) required to solve the problem? I have found several other problems for which this seems to be the case, but no clear-cut argument or proof that would indicate that this is always possible or even true. Perhaps it's because I don't know what this problem is called, so I am unsure what to search for. • 3 $\begingroup$ Every polynomial-time algorithm in one corresponds to a polynomial-time algorithm in the other, but it's not clear the degree of the polynomial will be the same. Hopefully somebody will come up with references.
These results were proved in the early days of quantum computation, and there should be better proofs of these theorems now. $\endgroup$ – Peter Shor Jul 6 '14 at 1:25 • $\begingroup$ Does this relate to what is known as the Heisenberg vs Schrödinger picture of QM, which relates to how the operators are defined? Also, if it isn't covered in Nielsen & Chuang, then that would seem to be a major oversight! The NAND tree paper uses "Hamiltonian oracles", which seem to be introduced by Farhi/Gutmann 1998. Here is a nice survey article on Hamiltonian oracles by Mochon 2007. $\endgroup$ – vzn Jul 6 '14 at 15:58 • $\begingroup$ The book link you provided is actually the textbook we used in my undergraduate course in Quantum Information Processing. The book is really geared towards the unitary approach (within the context of oracles as well), but not so much in the context of Hamiltonians. My undergrad course was taught from a CS perspective and not a physics perspective, which is why I am most familiar with the unitary model. $\endgroup$ – user340082710 Jul 8 '14 at 15:47 • $\begingroup$ The paper you provided as well is a good reference in general, but I don't believe it addresses my question either. Lastly, I've taken a look at the Heisenberg vs Schrödinger picture of QM, and it does look related, but I believe my question is different (though I could be wrong - it was hard to follow the Wikipedia entries). $\endgroup$ – user340082710 Jul 8 '14 at 15:49 • $\begingroup$ I think there are different ways to interpret your question, and instead of answering all interpretations, I'd like to ask you the following: Could you be more precise about the version of the Hamiltonian model you have in mind? What is the measure of complexity in this model? (i.e., what is it that counts how difficult it is to solve a problem in the Hamiltonian model?) How is the input to the problem given? Is it given explicitly, or do you have to query the input via an oracle?
$\endgroup$ – Robin Kothari Jul 10 '14 at 0:27

To show that Hamiltonian evolution can simulate the circuit model, one can use the paper Universal computation by multi-particle quantum walk, which shows that a very specific kind of Hamiltonian evolution (multi-particle quantum walks) is BQP-complete, and thus can simulate the circuit model. Here is a survey paper on simulating quantum evolution on a quantum computer. One can use the techniques in this paper to simulate the Hamiltonian evolution model of quantum computers. To do this, one needs to use "Trotterization", which substantially decreases the efficiency of the simulation (although it only introduces a polynomial blowup in computation time). • $\begingroup$ Thanks! These references look quite good and should be able to give me an idea of how this is done. $\endgroup$ – user340082710 Jul 10 '14 at 20:10
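The "Trotterization" mentioned in the answer can be illustrated numerically. The toy example below is my own sketch (not taken from the cited papers): it approximates continuous-time evolution under $H = A + B$ by an alternating product of short evolutions under $A$ and $B$ alone, a "circuit" of $2n$ elementary gates, and shows the first-order error shrinking as the number of slices $n$ grows:

```python
import numpy as np

def expmi(H, s):
    """e^{-i H s} for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * s)) @ V.conj().T

# Toy Hamiltonian H = A + B built from two non-commuting Hermitian terms.
A = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli sigma_x
B = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli sigma_z

t = 1.0
exact = expmi(A + B, t)                  # continuous-time Schrödinger evolution

def trotter(n):
    """First-order Trotter product (e^{-iA t/n} e^{-iB t/n})^n,
    i.e. a circuit of 2n elementary gates approximating exp(-iHt)."""
    step = expmi(A, t / n) @ expmi(B, t / n)
    return np.linalg.matrix_power(step, n)

# Spectral-norm error of the approximation for increasing slice counts.
err = {n: np.linalg.norm(trotter(n) - exact, 2) for n in (4, 16, 64)}
```

The error decays roughly like $t^2\|[A,B]\|/(2n)$, which is the "polynomial blowup" the answer refers to: simulating time $t$ to fixed accuracy with this first-order scheme needs a gate count polynomial in $t$ (higher-order product formulas do better).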
Wednesday, October 9, 2013

Nobel Prize in Chemistry Awarded for Not Solving Schrödinger's Equation

Picture from the presentation of the 2013 Nobel Prize in Chemistry: Multiscale Models
• the development of multiscale models for complex chemical systems

2 comments:

1. I would call this mathematical (or calculational) heuristics, and it is NOT physical science. It is guessing, and guessing by computer at that. They might as well admit they are using fudge factors, and have done with the "progress in science" propaganda, because it is degeneration of science, not progression.

2. Well, a man can only do what a man can do, and if solving the Schrödinger equation is beyond human capability, then you have to solve some other equation, and that is not necessarily degeneration. It could be just realism, but it could also be fake science.
Student Blog Post: Quantum Physics for teenagers with Chris Ferrie

Student blog post by Matthew Raymond

On Tuesday this week, physicist Chris Ferrie PhD talked to Year 9 about the use of mathematics in problem solving, and how we could build mathematical skills through determination and practice. To make his point, he referenced the time-independent Schrödinger equation, a partial differential equation describing the wave function of a single particle moving in an electric field. He described the equation in polar coordinates:

[image: the time-independent Schrödinger equation in polar coordinates]

The above may seem unintelligible, but consider another equality:

4 x 4 = 16

You've hopefully used this statement hundreds of times in your mathematical education, to the point where you can instantly recall it from memory. What makes the four times tables different from Schrödinger's equation? Practically nothing. I assure you, if you practiced quantum mechanics like you did times tables, you'd be able to recall field theory much faster than common arithmetic.

Dr Ferrie made the point that those who practice a subject create the impression that they have a kind of superior intelligence. For example, if I asked someone who'd never been exposed to any kind of symbolic mathematics what either of the two equalities above represents, I couldn't possibly hope they'd explain it to me, regardless of how much natural intellect they had. The natural implication is obvious: to understand mathematics, and any other subject, all we have to do is practice!

In summary, Dr Ferrie illustrated the importance of practice and how that practice manifests in success. A hugely important message sprinkled with quantum mechanics! What more could we ask for?

[Photo: Dr Chris Ferrie talks Quantum Physics with Matthew]
Discrete & Continuous Dynamical Systems - A
June 2018, Volume 38, Issue 6

Ergodic theorems for nonconventional arrays and an extension of the Szemerédi theorem
Yuri Kifer
2018, 38(6): 2687-2716. doi: 10.3934/dcds.2018113

The paper is primarily concerned with the asymptotic behavior as $N\to\infty$ of averages of nonconventional arrays having the form $N^{-1}\sum_{n=1}^{N}\prod_{j=1}^{\ell} T^{P_j(n,N)} f_j$, where the $f_j$'s are bounded measurable functions, $T$ is an invertible measure-preserving transformation, and the $P_j$'s are polynomials of $n$ and $N$ taking on integer values on integers.
It turns out that when $T$ is weakly mixing and $P_j(n,N) = p_j n + q_j N$ are linear or, more generally, have the form $P_j(n,N) = P_j(n) + Q_j(N)$ for some integer valued polynomials $P_j$ and $Q_j$, then the above averages converge in $L^2$, but for general polynomials $P_j$ of both $n$ and $N$ the $L^2$ convergence can be ensured even in the "conventional" case $\ell = 1$ only when $T$ is strongly mixing, while for $\ell > 1$ strong $2\ell$-mixing should be assumed. Studying also weakly mixing and compact extensions and relying on Furstenberg's structure theorem we derive an extension of Szemerédi's theorem saying that for any subset of integers $\Lambda$ with positive upper density there exists a subset ${\cal N}_\Lambda$ of positive integers having uniformly bounded gaps such that for $N \in {\cal N}_\Lambda$ and at least $\varepsilon N$, $\varepsilon > 0$, of $n$'s all numbers $p_j n + q_j N$, $j = 1, \dots, \ell$, belong to $\Lambda$. We obtain also a version of these results for several commuting transformations which yields a corresponding extension of the multidimensional Szemerédi theorem.
Partially hyperbolic sets with a dynamically minimal lamination
Luiz Felipe Nobili França
2018, 38(6): 2717-2729 doi: 10.3934/dcds.2018114
We study partially hyperbolic sets of $C^1$-diffeomorphisms. For these sets there are defined the strong stable and strong unstable laminations. A lamination is called dynamically minimal when the orbit of each leaf intersects the set densely. We prove that partially hyperbolic sets having a dynamically minimal lamination have empty interior. We also study the Lebesgue measure and the spectral decomposition of these sets. These results can be applied to $C^1$-generic/robustly transitive attractors with one-dimensional center bundle.

Asymptotic properties of various stochastic Cucker-Smale dynamics
Laure Pédèches
2018, 38(6): 2731-2762 doi: 10.3934/dcds.2018115
Starting from the stochastic Cucker-Smale model introduced in [14], we look into its asymptotic behaviours for different kinds of interaction. First, in terms of ergodicity as $t$ goes to infinity, seeking invariant probability measures and using Lyapunov functionals. Second, when the number $N$ of particles becomes large, leading to results about propagation of chaos.

Remarks on the critical coupling strength for the Cucker-Smale model with unit speed
Seung-Yeal Ha, Dongnam Ko and Yinglong Zhang
2018, 38(6): 2763-2793 doi: 10.3934/dcds.2018116
We present a non-trivial lower bound for the critical coupling strength to the Cucker-Smale model with unit speed constraint and short-range communication weight from the viewpoint of mono-cluster (global) flocking.
For a long-range communication weight, the critical coupling strength is zero in the sense that mono-cluster flocking emerges from any initial configuration for any positive coupling strength, whereas for a short-range communication weight, mono-cluster flocking can emerge from an initial configuration only for a sufficiently large coupling strength. Our main interest lies in the condition of non-flocking. We provide a positive lower bound for the critical coupling strength. We also present numerical simulations for the upper and lower bounds for the critical coupling strength depending on initial configurations and compare them with analytical results.

Synchronization of positive solutions for coupled Schrödinger equations
Chuangye Liu and Zhi-Qiang Wang
2018, 38(6): 2795-2808 doi: 10.3934/dcds.2018118
In this paper, we analyze synchronized positive solutions for a coupled nonlinear Schrödinger equation where $2 < p < \frac{n}{n-2}$ if $n \ge 3$, $2 < p < +\infty$ if $n = 1, 2$, and $\mu_1, \mu_2, \beta > 0$ are positive constants. Our goal is twofold. On one hand we study the question under what conditions the ground states are nontrivial synchronized positive solutions, giving precise conditions in terms of the size of the coupling constant. On the other hand, we examine the question whether all positive solutions are synchronized solutions. We have a complete answer for the case $n = 1$ by proving that positivity implies synchronization. The latter result enables us to obtain the exact number of positive solutions even though no uniqueness result holds in this case, and this is quite different from the case $p = 2$ for which uniqueness of positive solutions was known ([19]).
Ruelle's inequality in negative curvature
Felipe Riquelme
2018, 38(6): 2809-2825 doi: 10.3934/dcds.2018119
In this paper we study different notions of entropy for measure-preserving dynamical systems defined on noncompact spaces. We see that some classical results for compact spaces remain partially valid in this setting. We define a new kind of entropy for dynamical systems defined on noncompact Riemannian manifolds, which satisfies properties similar to the classical ones. As an application, we prove Ruelle's inequality and Pesin's entropy formula for the geodesic flow in manifolds with pinched negative sectional curvature.

Introduction to tropical series and wave dynamic on them
Nikita Kalinin and Mikhail Shkolnikov
2018, 38(6): 2827-2849 doi: 10.3934/dcds.2018120
The theory of tropical series that we develop here first appeared in the study of the growth of pluriharmonic functions. Motivated by waves in sandpile models, we introduce a dynamic on the set of tropical series, and it is experimentally observed that this dynamic obeys a power law. This paper thus serves as a compilation of results we need for other articles and also introduces several objects that are interesting in themselves.
Reducibility of three dimensional skew symmetric system with Liouvillean basic frequencies
Dongfeng Zhang, Junxiang Xu and Xindong Xu
2018, 38(6): 2851-2877 doi: 10.3934/dcds.2018123
In this paper we consider the system $\dot{x} = (A(\epsilon) + \epsilon^{m} P(t;\epsilon)) x$, $x \in \mathbb{R}^{3}$, where $\epsilon$ is a small parameter, $A, P$ are $3 \times 3$ skew symmetric matrices, $A$ is a constant matrix with eigenvalues $\pm i\bar{\lambda}(\epsilon)$ and 0, where $\bar{\lambda}(\epsilon) = \lambda + a_{m_{0}}\epsilon^{m_{0}} + O(\epsilon^{m_{0}+1})$ $(m_{0} < m)$, $a_{m_{0}} \neq 0$, and $P$ is a quasi-periodic matrix with basic frequencies $\omega = (1, \alpha)$ with $\alpha$ irrational. First, it is proved that for most sufficiently small parameters this system can be reduced to a rotation system. Furthermore, if the basic frequencies satisfy $0 \le \beta(\alpha) < r$, where $\beta(\alpha)$ measures how Liouvillean $\alpha$ is and $r$ is the initial analytic radius, it is proved that for most sufficiently small parameters this system can be reduced to a constant system by means of a quasi-periodic change of variables.

Incompressible limit for the compressible flow of liquid crystals in $L^p$ type critical Besov spaces
Qunyi Bie, Haibo Cui, Qiru Wang and Zheng-An Yao
2018, 38(6): 2879-2910 doi: 10.3934/dcds.2018124
The present paper is devoted to the compressible nematic liquid crystal flow in the whole space $\mathbb{R}^N$ $(N \ge 2)$.
Here we concentrate on the incompressible limit in the $L^p$ type critical Besov space setting. We first establish the existence of global solutions in the framework of $L^p$ type critical spaces provided that the initial data are close to some equilibrium state. Based on the global existence, we then consider the incompressible limit problem in the ill-prepared data case. We justify the low Mach number convergence to the incompressible flow of liquid crystals in proper function spaces. In addition, accurate convergence rates are obtained.

Stability of transonic jets with strong rarefaction waves for two-dimensional steady compressible Euler system
Min Ding and Hairong Yuan
2018, 38(6): 2911-2943 doi: 10.3934/dcds.2018125
We study supersonic flow past a convex corner which is surrounded by quiescent gas. When the pressure of the upstream supersonic flow is larger than that of the quiescent gas, there appears a strong rarefaction wave to rarefy the supersonic gas. Meanwhile, a transonic characteristic discontinuity appears to separate the supersonic flow behind the rarefaction wave from the static gas. In this paper, we employ a wave front tracking method to establish structural stability of such a flow pattern under non-smooth perturbations of the upcoming supersonic flow. It is an initial-value/free-boundary problem for the two-dimensional steady non-isentropic compressible Euler system. The main ingredients are a careful analysis of wave interactions and the construction of a suitable Glimm functional, to overcome the difficulty that the strong rarefaction wave has large total variation.
Isolated singularities for elliptic equations with Hardy operator and source nonlinearity
Huyuan Chen and Feng Zhou
2018, 38(6): 2945-2964 doi: 10.3934/dcds.2018126
In this paper, we are concerned with the isolated singular solutions of semi-linear elliptic equations involving a Hardy-Leray potential. We classify the isolated singularities and obtain the existence and stability of positive solutions of (1). Our results are based on the study of a nonhomogeneous Hardy problem in a new distributional sense.

Lozi-like maps
Michał Misiurewicz and Sonja Štimac
2018, 38(6): 2965-2985 doi: 10.3934/dcds.2018127
We define a broad class of piecewise smooth plane homeomorphisms which have properties similar to the properties of Lozi maps, including the existence of a hyperbolic attractor. We call those maps Lozi-like. For those maps one can apply our previous results on kneading theory for Lozi maps. We show strong numerical evidence that there exist Lozi-like maps that have kneading sequences different from those of Lozi maps.

Propagation of monostable traveling fronts in discrete periodic media with delay
Shi-Liang Wu and Cheng-Hsiung Hsu
2018, 38(6): 2987-3022 doi: 10.3934/dcds.2018128
This paper is devoted to the study of front propagation for a class of discrete periodic monostable equations with delay and nonlocal interaction. We first establish the existence of rightward and leftward spreading speeds and prove their coincidence with the minimal wave speeds of the pulsating traveling fronts in the right and left directions, respectively. The dependence of the speeds of propagation on the heterogeneity of the medium and the delay term is also investigated. We find that the periodicity of the medium increases the invasion speed, in comparison with a homogeneous medium, while the delay decreases the invasion speed.
Further, we prove the uniqueness of all noncritical pulsating traveling fronts. Finally, we show that all noncritical pulsating traveling fronts are globally exponentially stable, as long as the initial perturbations around them are uniformly bounded in a weighted space.

High energy solutions of the Choquard equation
Daomin Cao and Hang Li
2018, 38(6): 3023-3032 doi: 10.3934/dcds.2018129
In this paper we are concerned with the existence of positive high energy solutions of the Choquard equation. Under certain assumptions, the ground state of the Choquard equation does not exist. However, by a global compactness analysis, we prove that there exists a positive high energy solution.

A singular Cahn-Hilliard-Oono phase-field system with hereditary memory
Monica Conti, Stefania Gatti and Alain Miranville
2018, 38(6): 3033-3054 doi: 10.3934/dcds.2018132
We consider a phase-field system modeling phase transition phenomena, where the Cahn-Hilliard-Oono equation for the order parameter is coupled with the Coleman-Gurtin heat law for the temperature. The former suitably describes both local and nonlocal (long-ranged) interactions in the material undergoing phase separation, while the latter takes into account thermal memory effects. We study the well-posedness and longtime behavior of the corresponding dynamical system in the history space setting, for a class of physically relevant and singular potentials. Besides, we investigate the regularization properties of the solutions and, for sufficiently smooth data, we establish the strict separation property from the pure phases.
Interface stabilization of a parabolic-hyperbolic PDE system with delay in the interaction
Gilbert Peralta and Karl Kunisch
2018, 38(6): 3055-3083 doi: 10.3934/dcds.2018133
A coupled parabolic-hyperbolic system of partial differential equations modeling the interaction of a structure submerged in a fluid is studied. The system being considered incorporates delays in the interaction on the interface between the fluid and the solid. We study the stability properties of the interaction model under suitable assumptions on the competing strengths of the delays and the feedback controls.

Liouville theorems for periodic two-component shallow water systems
Qiaoyi Hu, Zhixin Wu and Yumei Sun
2018, 38(6): 3085-3097 doi: 10.3934/dcds.2018134
We establish Liouville-type theorems for periodic two-component shallow water systems, including a two-component Camassa-Holm equation (2CH) and a two-component Degasperis-Procesi (2DP) equation. More precisely, we prove that the only global, strong, spatially periodic solutions to the equations, vanishing at some point $(t_0, x_0)$, are the identically zero solutions. Also, we derive new local-in-space blow-up criteria for the dispersive 2CH and 2DP.

Exit time asymptotics for small noise stochastic delay differential equations
David Lipshutz
2018, 38(6): 3099-3138 doi: 10.3934/dcds.2018135
Dynamical system models with delayed dynamics and small noise arise in a variety of applications in science and engineering. In many applications, stable equilibrium or periodic behavior is critical to a well functioning system.
Sufficient conditions for the stability of equilibrium points or periodic orbits of certain deterministic dynamical systems with delayed dynamics are known, and it is of interest to understand the sample path behavior of such systems under the addition of small noise. We consider a small noise stochastic delay differential equation (SDDE). We obtain asymptotic estimates, as the noise vanishes, on the time it takes a solution of the stochastic equation to exit a bounded domain that is attracted to a stable equilibrium point or periodic orbit of the corresponding deterministic equation. To obtain these asymptotics, we prove a sample path large deviation principle (LDP) for the SDDE that is uniform over initial conditions in bounded sets. The proof of the uniform sample path LDP uses a variational representation for exponential functionals of strong solutions of the SDDE. We anticipate that the overall approach may be useful in proving uniform sample path LDPs for other infinite-dimensional small noise stochastic equations.

Sign-changing multi-bump solutions for Kirchhoff-type equations in $\mathbb{R}^3$
Yinbin Deng and Wei Shuai
2018, 38(6): 3139-3168 doi: 10.3934/dcds.2018137
We are interested in the existence of sign-changing multi-bump solutions for the following Kirchhoff equation, where $\lambda > 0$ is a parameter and the potential $V(x)$ is a nonnegative continuous function with a potential well $\Omega := \mathrm{int}(V^{-1}(0))$ which possesses $k$ disjoint bounded components $\Omega_1, \Omega_2, \dots, \Omega_k$. Under some conditions imposed on $f(u)$, multiple sign-changing multi-bump solutions are obtained. Moreover, the concentration behavior of these solutions as $\lambda \to +\infty$ is also studied.
Normality and uniqueness of Lagrange multipliers
Karla L. Cortez and Javier F. Rosenblueth
2018, 38(6): 3169-3188 doi: 10.3934/dcds.2018138
In this paper we study, for certain problems in the calculus of variations and optimal control, two different questions related to the uniqueness of multipliers appearing in first order necessary conditions. One deals with conditions under which a given multiplier associated with an extremal of a fixed function is unique, a property which, in nonlinear programming, is known to be equivalent to the strict Mangasarian-Fromovitz constraint qualification. We show that, for isoperimetric problems in the calculus of variations, a similar characterization holds, but not in optimal control, where the corresponding condition is only sufficient for the uniqueness of the multiplier. The other question is related to the set of multipliers associated with all functions for which a solution to the constrained problem is given. We prove that, for both types of problems, this set is a singleton if and only if a strong normality assumption holds.
Modal Interpretations of Quantum Mechanics First published Tue Nov 12, 2002; substantive revision Wed Dec 12, 2012 The original “modal interpretation” of non-relativistic quantum theory was born in the early 1970s, and at that time the phrase referred to a single interpretation. The phrase now encompasses a class of interpretations, and is better taken to refer to a general approach to the interpretation of quantum theory. We shall describe the history of modal interpretations, how the phrase has come to be used in this way, and the general program of (at least some of) those who advocate this approach. 1. The origin of the modal approach In traditional approaches to quantum measurement theory a central role is played by the projection postulate, which asserts that upon measurement of a physical system its state will be projected (“collapses”) onto a state corresponding to the value found in the measurement. However, this postulate leads to many difficulties: What causes this discontinuous change in the physical state of a system? What exactly is a “measurement” as opposed to an ordinary physical interaction? The postulate is especially worrying when applied to entangled compound systems whose components are well-separated in space. For example, in the Einstein-Podolsky-Rosen (EPR) experiment there are strict correlations between two systems that have interacted in the past, in spite of the fact that the correlated quantities are not sharply defined in the individual systems. The projection postulate in this case implies that the collapse resulting from a measurement on one of the systems instantaneously defines a sharp property in the distant other system. A possible way clear of these problems was noticed by van Fraassen (1972, 1974, 1991), who proposed to eliminate the projection postulate from the theory. 
Others had made this proposal before, such as Bohm (1952) in his theory (itself preceded by de Broglie's proposals from the 1920s), Everett (1957) in his relative-state interpretation and De Witt (1970) with the many-worlds interpretation. Van Fraassen's proposal was, however, different from these other approaches. It relied, in particular, on a distinction between what he called the “dynamical state” and the “value state” of a system at any instant:
• The value state represents what actually is the case, that is, all the system's physical properties that are sharply defined at the instant in question.
• The dynamical state is just the quantum state of the ordinary textbook approach (a vector or density matrix in Hilbert space). For an isolated system, it always evolves according to the Schrödinger equation (in non-relativistic quantum mechanics): so the dynamical state never collapses during its evolution.
The value state is (typically) different from the dynamical state. The general idea of this original proposal, and of modal interpretations in general, is that physical systems at all times possess a number of well-defined physical properties, i.e., definite values of physical quantities; these properties can be represented by the system's value state. Which physical quantities are sharply defined, and which values they take, may change in time. Empirical adequacy of course requires that the dynamical state generate the correct Born frequencies of observable quantities. An essential feature of this approach is that a system may have a sharp value of an observable even if the dynamical state is not an eigenstate of that same observable. The proposal thus violates the so-called “eigenstate-eigenvalue link”, which says that a system can only have a sharp value of an observable (namely, one of its eigenvalues) if its quantum state is the corresponding eigenstate.
In the value state terminology, the eigenstate-eigenvalue link would say that a system has the value state corresponding to a given eigenvalue of a given observable if and only if its dynamical state is an eigenstate of the observable corresponding to that eigenvalue. This original modal approach accepts the “if” part, but denies the “only if” part. What are the possible “value states” for a given system at a given time? Van Fraassen stipulates the following restriction: propositions about a physical system cannot be jointly true, unless they are represented by commuting observables. In other words, the non-commutativity of observables imposes limits not on our knowledge about the properties of a system, but rather on the possibility of joint existence of properties, independently of our knowledge. Non-commuting quantities, like position and momentum, cannot jointly be well-defined quantities of a physical system. Empirical adequacy requires that, in cases of measurement, the actual value state of the apparatus be one describing a definite measurement result. Therefore, in these cases the dynamical state must generate a probability measure over exactly the set of possible measurement results. However, this original modal approach is more liberal in its assignment of possible value states, and according to many this does not yield a satisfactory account of measurements (see Ruetsche 1996). Van Fraassen's proposal is “modal” because it leads to a modal logic of quantum propositions. Indeed, the dynamical state in general only tells us what is possible. An important point is that one should not consider this modality as arising from an incompleteness of the description, which it is the aim of science to remove. The dynamical state provides us with possible physical properties of the system, and this is all the theory has to do. 
It is easy to see how, along the same lines as van Fraassen's ideas, a program could come into being for providing a more elaborate “realist” interpretation of quantum theory, a program to which we now turn. 2. General features of modal interpretations In the 1980s several authors presented realist interpretations which, in retrospect, can be regarded as elaborations or variations on the just-mentioned modal themes (for an overview and references, see Dieks and Vermaas 1998). In spite of the differences among them, all the modal interpretations agree on the following points: • The interpretation is based on the standard formalism of quantum mechanics, with one exception: the projection postulate is left out. • The interpretation is realist, in the sense that it assumes that quantum systems possess definite properties at all instants of time. • Quantum mechanics is taken to be fundamental: it applies both to microscopic and macroscopic systems. • The dynamical state of the system (pure or mixed) tells us what the possible properties of the system and their corresponding probabilities are. This is achieved by a precise mathematical rule that specifies a probabilistic relationship between the dynamical state and possible value states. • A quantum measurement is an ordinary physical interaction. There is no collapse of the dynamical state: the dynamical state always evolves unitarily according to the Schrödinger equation. The Kochen-Specker theorem (1967) is a barrier to any realist classical-like interpretation of quantum mechanics, since it proves the impossibility of ascribing precise values to all physical quantities (observables) of a quantum system simultaneously, while preserving the functional relations between commuting observables. Therefore, realist non-collapse interpretations are committed to selecting a privileged set of definite-valued observables out of all observables. 
Each modal interpretation thus supplies a “rule of definite-value ascription” or “actualization rule”, which picks out, from the set of all observables of a quantum system, the subset of definite-valued properties. The question is: what should this actualization rule look like? Since the mid-1990's a series of approaches faced this question (Clifton 1995a,b; Dickson 1995a,b; Dieks 1995). Each one of them proposed a group of conditions that the set of definite-valued properties should obey, and characterized this set in terms of the dynamical state |φ⟩ of the system. The common result was that the possible value states of the components of a two-part composite system are given by the states occurring in the Schmidt (bi-orthogonal) decomposition of the dynamical state, or, equivalently, by the projectors occurring in the spectral decomposition of the density matrices representing partial systems (obtained by partial tracing)---see Section 4 for more details. The definite-valued properties have also been characterized somewhat differently (Bub and Clifton 1996; for an improved version, see Bub, Clifton and Goldstein 2000), that is, in terms of the quantum state |φ⟩ plus a “privileged observable” R, which is privileged in the sense that it represents a property that is always definite-valued (see also Dieks 2005, 2007). On this basis, Bub (1992, 1994, 1997) suggests that with hindsight a number of traditional interpretations of quantum theory can be characterized as modal interpretations. Notable among them are the Dirac-von Neumann interpretation, (what Bub takes to be) Bohr's interpretation, and Bohm's theory. Bohm's theory is a modal interpretation in which the privileged observable R is the position observable. 3. Atomic modal interpretation The Hilbert space of the universe Huniv, like any Hilbert space, can be factorized in countless ways. 
If one supposes that each factorization defines a legitimate set of subsystems of the universe, the multiple factorizability implies that there exists a multiplicity of ways of defining the building blocks of nature. If the properties (value states) of all these quantum systems are defined by means of the partial trace with respect to the rest of the universe (see later for more details), it turns out that a contradiction of the Kochen-Specker type arises (Bacciagaluppi 1995). The Atomic Modal Interpretation (AMI, Bacciagaluppi and Dickson 1999) tries to overcome this obstacle by assuming that there is in nature a fixed set of mutually disjoint atomic quantum systems Sj that constitute the building blocks of all the other quantum systems. From the mathematical point of view, this means that the Hilbert space Huniv of the entire universe can only be meaningfully factorized in a single way, which defines a preferred factorization. If each atomic quantum system Sj is represented by its corresponding Hilbert space Hj, then the Hilbert space Huniv of the universe must be written as Huniv = H1 ⊗ H2 ⊗ ... ⊗ Hj ⊗ ... The main appeal of this idea is that it is in consonance with the standard model of particle physics, where the fundamental blocks of nature are the elementary particles, e.g., quarks, electrons, photons, etc., and their interactions. The property ascription to the atomic quantum systems in the AMI further follows the general idea of modal interpretations, that is, the ascription depends via a fixed rule on the dynamical state of the system. The main challenge for the AMI is to justify the assumption that there is a preferred partition of the universe and to provide some idea about what this factorization should look like. AMI also faces a conceptual problem. In this interpretation, a non-atomic quantum system Sσ, defined as composite of atomic quantum systems, does not necessarily have properties that correspond to the outcomes of measurements.
The reason is that the system Sσ might be in the quantum state ρσ with an eigenprojector ∏σ such that Tr(ρσσ) = 1. This implies that if one measured the property represented by ∏σ, one would obtain a positive outcome with probability 1. But it may be the case that the projector ∏σ is not a composite of atomic properties and, therefore, according to the AMI, it is not a property possessed by the composite quantum system Sσ. Two answers to this conceptual difficulty have been proposed. The first allows the existence of dispositional properties in addition to ordinary properties (Clifton 1996). According to the second answer, the projector ∏σ of the composite system Sσ shows that Sσ has a collective dynamical effect onto the measurement device, that is, an effect that cannot be explained by the action of the atomic components (Dieks 1998). In other words, the composite quantum system, when interacting with its environment, can behave as a collective entity, screening off the contribution of the atomic quantum systems. This means that sometimes a non-atomic quantum system Sσ may be taken as if it were an atomic quantum system within the framework of a coarse-grained description. 4. Biorthogonal-decomposition and spectral-decomposition modal interpretations In the biorthogonal-decomposition interpretation (BDMI, sometimes known as “Kochen-Dieks modal interpretation”, Kochen 1985; Dieks 1988, 1989a,b, 1994a,b), the definite-valued observables are picked out by the biorthogonal (Schmidt) decomposition of the pure quantum state of the system: • Biorthogonal Decomposition Theorem: Given a vector |ψ⟩ in a tensor-product Hilbert space H1 ⊗ H2, there exist bases {|ai⟩} and {|pi⟩} for H1 and H2 respectively, such that |ψ⟩ can be written as a linear combination of terms of the form |ai⟩ ⊗ |pi⟩. If the absolute values (moduli) of the coefficients in this linear combination are all unequal, then the bases are unique (see, for example, Schrödinger 1935 for a proof).
In quantum mechanics the theorem means that, given a composite system consisting of two subsystems, its state picks out (in many cases, uniquely) a basis for each of the subsystems. According to the BDMI, those bases generate the definite-valued properties (the value states) of the corresponding subsystems. The BDMI is particularly appropriate to account for quantum measurement. Let us consider an ideal measurement under the standard von Neumann model, according to which a quantum measurement is an interaction between a system S and a measuring apparatus M. Before the interaction, M is prepared in a ready-to-measure state |p0⟩, eigenvector of the pointer observable P of M, and the state of S is a superposition of the eigenstates |ai⟩ of an observable A of S. The interaction introduces a correlation between the eigenstates |ai⟩ of A and the eigenstates |pi⟩ of P:

0⟩ = ∑i ci |ai⟩ ⊗ |p0⟩ → |ψ⟩ = ∑i ci |ai⟩ ⊗ |pi

In this case, according to the BDMI prescription, the preferred context of the measured system S is defined by the set {|ai⟩} and the preferred context of the measuring apparatus M is defined by the set {|pi⟩}. Therefore, the pointer position is a definite-valued property of the apparatus: it acquires one of its possible values (eigenvalues) pi. And analogously in the measured system: the measured observable is a definite-valued property of the measured system, and it acquires one of its possible values (eigenvalues) ai. In spite of the fact that this modal interpretation is characterized by the central role played by biorthogonal decomposition, two different versions can be distinguished. One of them adopts a metaphysics in which all properties are relational and, as a consequence, the fact that the application of the interpretation is restricted to subsystems of a two-component compound system is not a problem (Kochen 1985).
This relation has been called “witnessing”: properties are not possessed by the system absolutely, but only when the system is “witnessed” by another system. Consider the measurement described above: the pointer “witnesses” the value acquired by the measured observable of the measured system. By contrast, according to the other version (Dieks 1988, 1989a,b) the properties ascribed to the system do not have a relational character. This proposal therefore faces consistency questions about the assignments of definite values to observables according to different ways of splitting up the total system into components. Consider, for example, the three-component composite system αβχ. We could apply the biorthogonal decomposition theorem to the two-component system (i) α(βχ), or (ii) β(χα), or (iii) χ(αβ). Suppose that, as a result of this, in case (i) the system α has the definite-valued property P, in case (ii) the system β has the definite-valued property Q, and in case (iii) the system αβ has the definite-valued property R. How do the definite-valued properties of α and β relate to those of αβ? Are the definite-valued properties of system αβ P&Q, or R, or both? This problem was addressed by different authors during the 1990s (see Vermaas 1999; Bacciagaluppi 1996). This work led to the spectral-decomposition modal interpretation (SDMI, sometimes known as “Vermaas-Dieks modal interpretation”, Vermaas and Dieks 1995), a generalization of the BDMI to mixed states.
The SDMI is based on the spectral decomposition of the reduced density operator: the definite-valued properties ∏i of a system and their corresponding probabilities Pri are given by the non-zero diagonal elements of the spectral decomposition of the system's state,

ρ = ∑i αii     Pri = Tr(ρ∏i)

This new proposal matches the old one in cases where the old one applies, and generalizes it by fixing the definite-valued properties in terms of multi-dimensional projectors when the biorthogonal decomposition is degenerate: definite-valued properties need not always be represented by one-dimensional vectors—higher-dimensional subspaces of the Hilbert space can also occur. The SDMI also has a direct application to the measurement situation. Consider quantum measurement as described above, where the reduced states of the measured system S and the measuring apparatus M are

ρrS = Tr(M)|ψ⟩⟨ψ| = ∑i |ci|2 |ai⟩⟨ai| = ∑i |ci|2ia
ρrM = Tr(S)|ψ⟩⟨ψ| = ∑i |ci|2 |pi⟩⟨pi| = ∑i |ci|2ip

According to the SDMI, the preferred context of S is defined by the projectors ∏ia and the preferred context of M is defined by the projectors ∏ip. Therefore, also in the SDMI, the observables A of S and P of M acquire actual definite values, whose probabilities are given by the diagonal elements of the diagonalized reduced states. The SDMI faces the same difficulty as the non-relational version of the BDMI: the fact that a system can be decomposed in a variety of different ways. In particular, the factorization of a given Hilbert space H into two factors, H = H1 ⊗ H2, can be “rotated” to produce different factorizations H′ = H1′ ⊗ H2′. Are we to apply the SDMI to each such factorization? How are the results related, if at all?
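Before turning to that question, the SDMI rule itself is easy to check in the ideal measurement case. In the following numpy sketch (the two-outcome example is invented for illustration), the spectral decomposition of the apparatus' reduced state returns the pointer basis with the Born probabilities |ci|2:

```python
import numpy as np

# Post-measurement state |psi> = sum_i c_i |a_i> (x) |p_i> (ideal case)
c = np.array([0.8, 0.6])   # so |c_1|^2 = 0.64, |c_2|^2 = 0.36
psi = c[0] * np.kron([1, 0], [1, 0]) + c[1] * np.kron([0, 1], [0, 1])
rho = np.outer(psi, psi)

# Reduced state of the apparatus M: trace out the measured system S
rho_M = np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)

# SDMI: definite-valued properties are the spectral projectors of rho_M,
# with probabilities Pr_i = Tr(rho_M P_i), i.e. the eigenvalues
probs, vecs = np.linalg.eigh(rho_M)
print(np.round(np.sort(probs), 2))   # [0.36 0.64], the Born weights |c_i|^2
```

In this ideal case the eigenvectors of rho_M are exactly the pointer states |pi⟩, so the SDMI and the BDMI prescriptions coincide.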
A theorem due to Bacciagaluppi (1995, see also Vermaas 1997) shows, in essence, that if one applies the SDMI to the “subsystems” obtained in every factorization and insists that the definite-valued properties so-obtained are not relational, then one will be led to a mathematical contradiction of the Kochen-Specker variety. In response, one could adopt the view that subsystems have their definite-valued properties “relative to a factorization”; we will come back to this issue below. Healey (1989) was also among the first to make use of the biorthogonal decomposition theorem, developing these ideas in a somewhat different direction. His main concern was the apparent non-locality of quantum mechanics. Healey's intuition about the way a modal interpretation based on the biorthogonal decomposition theorem would be applied to, say, an EPR experiment is to implement the idea that an EPR pair possesses a “holistic” property; this can then explain why the apparatus on one side of the experiment acquires a property that is correlated to the result on the other side. In Healey's proposal, the biorthogonal decomposition theorem is used, but the set of possible properties is subsequently modified in order to fulfill a variety of desiderata. The first is consistency: the aim is to avoid Kochen-Specker-type results. A second is to maintain a plausible theory of the relationship between composite systems and their subsystems. A third is to maintain a plausible account of the relations among definite-valued properties at a given time. A fourth is to maintain a plausible account of the relations among definite-valued properties at different times. The structure of definite-valued properties that emerges from these conditions is extremely complicated. Some progress has been made since Healey's book was published (see for example Reeder and Clifton 1995) but, in general, it remains difficult to see what the set of definite-valued properties is according to his approach. 5. 
Non-ideal measurements

Above we suggested that the BDMI and the SDMI solve the measurement problem in a particularly direct way. This is right in the case of the ideal von Neumann measurement, as explained in the previous section, where the eigenstates |ai⟩ of an observable A of the measured system S are perfectly correlated with the eigenstates |pi⟩ of the pointer P of the measuring apparatus M. However, ideal measurement is a situation that can never be achieved in practice: the interaction between S and M never introduces a completely perfect correlation. Two kinds of non-ideal measurements are usually distinguished in the literature:

• Imperfect measurement (first kind): ∑i ci |ai⟩ ⊗ |p0⟩ → ∑ij dij |ai⟩ ⊗ |pj⟩ (in general, dij ≠ 0 with i ≠ j)
• Disturbing measurement (second kind): ∑i ci |ai⟩ ⊗ |p0⟩ → ∑i ci |aid⟩ ⊗ |pi⟩ (in general, ⟨aid | ajd⟩ ≠ δij)

Note, however, that disturbing measurements can be rewritten as imperfect measurements (and vice versa). Imperfect measurements pose a challenge to the BDMI and the SDMI, since their rules for selecting the definite-valued properties do not pick out the right properties for the apparatus in the imperfect case (see Albert and Loewer 1990, 1991, 1993; also Ruetsche 1995). An example that clearly brings out the difficulties introduced by non-ideal measurements was formulated in the context of Stern-Gerlach experiments (Elby 1993). This argument uses the fact that the wavefunctions in the z-variable typically have infinite “tails” that introduce non-zero cross-terms; therefore, the “tail” of the wavefunction of the “down” beam may produce detection in the upper detector, and vice versa (see Dickson 1994 for a detailed discussion). In fact, if the biorthogonal decomposition is applied to the non-perfectly correlated state ∑ij dij |ai⟩ ⊗ |pj⟩ = ∑i ci′ |ai′⟩ ⊗ |pi′⟩, according to the BDMI the result does not select the pointer P as a definite-valued property, but a different observable P′ with eigenstates |pi′⟩.
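The point can be illustrated numerically: applying the biorthogonal (SVD) decomposition to an imperfectly correlated state selects an apparatus basis that is rotated away from the pointer basis. A small numpy sketch with invented coefficients:

```python
import numpy as np

# Imperfect measurement: |psi> = sum_ij d_ij |a_i> (x) |p_j>, small cross terms
d = np.array([[0.80, 0.05],
              [0.05, 0.59]])
d /= np.linalg.norm(d)   # normalize the state

# Biorthogonal decomposition via SVD: rows of Vh are the apparatus states |p_i'>
U, s, Vh = np.linalg.svd(d)

# |p_1'> is close to the pointer state |p_1> = (1, 0) but not equal to it:
# the BDMI selects a slightly rotated observable P' instead of the pointer P
print(np.abs(Vh[0]))
```

With small off-diagonal dij the rotation is small, matching the remark below that the disagreement may be tolerable for imperfect measurements but not for strongly disturbing ones.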
In this case, in which the definite-valued properties selected by a modal interpretation are different from those expected, the question arises how different they are. In the case of an imperfect measurement, it may be assumed that the coefficients dij with i ≠ j are small; then, the difference might also be small. But in the case of a disturbing measurement, the dij with i ≠ j need not be small and, as a consequence, the disagreement between the modal interpretation assignment and the experimental result might be unacceptable (see a full discussion in Bacciagaluppi and Hemmo 1996). This fact has been considered a “silver bullet” for killing the modal interpretations (Harvey Brown, cited in Bacciagaluppi and Hemmo 1996). There is another important problem related to non-ideal measurements. When the final state of the composite system (measured system plus measuring device) is very nearly degenerate when written in the basis given by the measured observable and the apparatus's pointer (that is, when the probabilities for the various results are nearly equal), the spectral decomposition does not, in general, select definite-valued properties close to those ideally expected. In fact, the observables so selected may be incompatible (non-commuting) with the observables that we expect on the basis of observation (Bacciagaluppi and Hemmo 1994, 1996). In order to face the problems that non-ideal measurements pose to the BDMI and the SDMI, several authors have appealed to the phenomenon of decoherence; this will be discussed below.

6. Properties of composite systems

Let us take a composite system αβ, whose component subsystems α and β are represented by the Hilbert spaces Hα and Hβ, respectively, and consider a property represented by the projector ∏α defined on Hα. It is usual to assume that ∏α represents the same property as that represented by ∏α ⊗ Iβ defined on Hα ⊗ Hβ, where Iβ is the identity operator on Hβ.
This assumption is based on the observational indistinguishability of the magnitudes represented by ∏α and ∏α ⊗ Iβ: if the ∏α-measurement has a certain outcome, then the ∏α ⊗ Iβ-measurement has exactly the same outcome. The question is then: If the rules of the BDMI and the SDMI applied to α assign a value to ∏α, do those rules applied to the composite system αβ assign the same value to ∏α ⊗ Iβ (a condition known as Property Composition), and vice versa (Property Decomposition)? The answer to this question is negative: the BDMI and the SDMI violate Property Composition and Property Decomposition (for a proof, see Vermaas 1998). Of course, if one maintains that the projectors ∏α and ∏α ⊗ Iβ represent the same property, the violation of Property Composition and Property Decomposition is a serious problem for any interpretation. This is the position adopted by Arntzenius (1990), who judges this violation to be bizarre, since it assigns different truth values to propositions like ‘the left-hand side of a table is green’ and ‘the table has a green left-hand side’, which are normally not distinguished; a similar argument is put forward by Clifton (1996, see also Clifton 1995c). However, Vermaas (1998) argues that the observational indistinguishability of the magnitudes represented by ∏α and ∏α ⊗ Iβ does not force one to consider these two projectors as representing the same property: in fact, they are distinguishable from a theoretical viewpoint, since they are defined on different Hilbert spaces. Moreover, he argues that the examples developed by Arntzenius and Clifton sound bizarre precisely in the light of Property Composition and Property Decomposition. But in the quantum realm we must accept that the questions of which properties are possessed by a system and which by its subsystems are different questions: the properties of a composite system αβ don't reveal information about the properties of subsystem α, and vice versa.
Vermaas concludes that the tenet that ∏α and ∏α ⊗ Iβ do represent the same property can be viewed as an addition to quantum mechanics, which can be denied as, for instance, van Fraassen (1991) did.

7. Dynamics of properties

As we have seen, modal interpretations intend to provide, for every instant, a set of definite-valued properties and their probabilities. Some advocates of modal interpretations may be willing to leave the matter, more or less, at that. Others take it to be crucial for any modal interpretation that it also answers questions of the form: Given that the property P of a system has the actual value α at time t0, what is the probability that its property P′ has the actual value β at time t1 > t0? In other words, they want a dynamics of actual properties. There are arguments on both sides. Those who argue for the necessity of such a dynamics maintain that we have to ensure that the trajectories of actual properties really are, at least for macroscopic objects, as we see them to be, i.e., like the records contained in memories. For example, we should require not only that the book at rest on the desk possess a definite location, but also that, if undisturbed, its location relative to the desk does not change in time. Accordingly, one cannot get away with simply specifying the definite properties at each instant of time. We need also to show that this specification is at least compatible with a reasonable dynamics; better still, specify this dynamics explicitly. Those who consider a dynamics of actual properties to be superfluous reply that such a dynamics is more than what an interpretation of quantum mechanics needs to provide. Memory contents for each instant are enough to make empirical adequacy possible.
As pointed out by Ruetsche (2003), in this debate about the need for a dynamics of actual properties it is important whether the modal interpretation is viewed as leading to a hidden-variables theory, in which value states are added as hidden variables to the original formalism in order to obtain a full description of the physical situation, or rather as only equipping the original formalism with a new semantics. In the first approach one would expect a full dynamics of actual properties, in the second this is not so clear. Of course, modal interpretations do admit a trivial dynamics, namely, one in which there is no correlation from one time to the next. In this case, the probability of a transition from the property P having the actual value α at t0, to the property P′ having the actual value β at t1 > t0 is just the single-time probability for P′ having β at t1. However, this dynamics is unlikely to interest those who feel the need for a dynamics at all. Several researchers have contributed to the project of constructing a more interesting form of dynamics for modal interpretations (see Vermaas 1996, 1998). An important account is due to Bacciagaluppi and Dickson (1999, see also Bacciagaluppi 1998). That work shows the most significant challenges that the construction of a dynamics of actual properties must face. The first challenge is posed by the fact that the set of definite-valued properties—let us call it ‘S’—may change over time. One therefore has to define a family of maps, each one being a 1–1 map from S0 at time t0 to a different St at time t, for any time. With such a family of maps, one can effectively define conditional probabilities within a single state space, and then translate them into “transition” probabilities. For this technique to work, St must have the same cardinality at any time. 
However, in general this is not the case: for instance, in the SDMI, the number of different projectors appearing in the spectral decomposition of the density matrix may vary with time. A way out of this is to augment S at each time so that its cardinality matches the highest cardinality that S ever achieves. Of course, one hopes to do so in a way that is not completely ad hoc. For example, in the context of the SDMI, Bacciagaluppi, Donald and Vermaas (1995) show that the “trajectory” through Hilbert space of the spectral components of the reduced state of a physical system will, under reasonable conditions, be continuous, or have only isolated discontinuities, so that the trajectory can be naturally extended to a continuous trajectory (see also Donald 1998). This result suggests a natural family of maps as discussed above: map each spectral component at one time to its unique continuous evolved component at later times. The second challenge to the construction of a dynamics arises from the fact that one wants to define transition probabilities over infinitesimal units of time, and then derive the finite-time transition probabilities from them. Bacciagaluppi and Dickson (1999) argue that, adapting results from the theory of stochastic processes, one can show that the procedure may, more or less, be carried out for modal interpretations of at least some varieties. Finally, one must actually define infinitesimal transition probabilities that will give rise to the proper quantum-mechanical probabilities at each time. Following earlier papers by Bell (1984), Vink (1993) and others, Bacciagaluppi and Dickson (1999) define an infinite class of such infinitesimal transition probabilities, such that all of them generate the correct single-time probabilities, which arguably are all we can really test. 
However, Sudbery (2002) has contended that the form of the transition probabilities would be relevant to the precise form of spontaneous decay or the “Dehmelt quantum jumps”; he independently developed the dynamics of Bacciagaluppi and Dickson and applied it in such a way that it leads to the correct predictions for these experiments. Gambetta and Wiseman (2003, 2004) developed a dynamical modal account in the form of a non-Markovian process with noise, also extending their approach to positive operator-valued measures (POVMs).

8. Perspectival modal interpretation

As we have seen, both the SDMI and the non-relational version of the BDMI have to face the problem of the multiple factorizability of a given Hilbert space: if the definite-valued properties are monadic (i.e., non-relational), both interpretations lead to a Kochen-Specker-type contradiction (Bacciagaluppi 1995). This points in the direction of an interpretation that makes properties relational, in this case relative to a factorization. Extending this idea, a perspectival modal interpretation (PMI, Bene and Dieks 2002) was developed, in which the properties of a physical system have a relational character and are defined with respect to another physical system that serves as a “reference system” (see Bene 1997). This interpretation is similar in spirit to the idea that systems have properties as “witnessed” by the rest of the universe (Kochen 1985). However, the PMI goes further by defining states of a system not only with respect to the universe, but also with respect to arbitrary larger systems. The PMI is closely related to the SDMI since similar rules are used to assign properties to quantum systems. In the PMI, the state of any system S needs the specification of a “reference system” R with respect to which the state is defined: this state of S with respect to R is denoted by ρRS. In the special case in which R coincides with S, the state ρSS is called “the state of S with respect to itself”.
If the system S is contained in a system A, the state ρAS is defined as the density operator that can be derived from ρAA by taking the partial trace over the degrees of freedom in A that do not pertain to S:

ρAS = Tr(A\S) ρAA

With these definitions, the point of departure of the PMI is the quantum state of the whole universe with respect to itself, which is assumed to be a pure state ρUU = |ψ⟩⟨ψ| that evolves unitarily according to the Schrödinger equation. For any system S contained in the universe, its state with respect to itself ρSS is postulated to be one of the projectors of the spectral resolution of

ρUS = Tr(U\S) ρUU = Tr(U\S) |ψ⟩⟨ψ|

In particular, if there is no degeneracy among the eigenvalues of ρUS, these projectors are one-dimensional and ρSS is the one-dimensional projector |ψS⟩⟨ψS|. Within this PMI conceptual framework it can be shown that a system may be localized from the perspective of one observer and, nevertheless, may be delocalized from a different perspective. But it also follows that observers who look at the same macroscopic object, at the same time and under identical circumstances, will see it (practically) at the same spot. The core idea of this interpretation is that all different relational descriptions, given from different perspectives, are equally objective and all correspond to physical reality (which has a relational character itself). We cannot explain the relational states by appealing to a definition in terms of more basic non-relational states. Further analysis shows that in this interpretation EPR-type situations can be understood in a basically local manner. Indeed, the change in the relational state of particle 2 with respect to the 2-particle system can be understood as a consequence of the change in the reference system brought about by the local measurement interaction between particle 1 and the measuring device.
This local measurement is responsible for the creation of a new perspective, and from this new perspective there is a new relational state of particle 2 (see also Dieks 2009). The PMI agrees with Bohr's qualitative argument that any reasonable definition of physical reality in the quantum realm should include the experimental setup. However, the PMI is more general in the sense that the state of a system is defined with respect to any larger physical system, not necessarily an instrument. This removes the threat of subjectivism, since the relational states follow unambiguously from the quantum formalism and the physics of the situation. It is interesting to consider the connections between the PMI and other relational proposals. For instance, Berkovitz and Hemmo (2006) explore the prospects of a relational modal interpretation in the relativistic case (we will come back to this point below). In turn, Rovelli and coworkers propose an explicit ‘relational quantum mechanics’ that emphasizes the possibility of different descriptions of a physical system depending on the perspective (Rovelli 1996; Rovelli and Smerlak 2007; Laudisa and Rovelli 2008; see also van Fraassen 2010). In spite of the points of contact between the PMI and Rovelli's relational interpretation, there are significant differences. In Rovelli's proposal, the concepts of measurement interaction and of definite outcomes of measurements are primary; moreover, the state has to be updated every time a measurement event occurs and, as a consequence, it changes discontinuously with every new event. By contrast, the PMI is a realist interpretation where a measurement is nothing else than a quantum interaction, and where unitary evolution is the main dynamical principle, also when systems interact (see Dieks 2009).

9. Modal-Hamiltonian interpretation

As Bub (1997) points out, in most modal interpretations the preferred context of definite-valued observables depends on the state of the system.
An exception is Bohmian mechanics, in which the preferred context is a priori defined by the position observable; in this case, property composition and property decomposition hold. But this is not the only reasonable possibility for a modal interpretation with a fixed preferred observable. In fact, the modal-Hamiltonian interpretation (MHI, Lombardi and Castagnino 2008; Ardenghi, Castagnino, and Lombardi 2009; Lombardi, Castagnino, and Ardenghi 2010; Ardenghi and Lombardi 2011) endows the Hamiltonian of a system with a determining role, both in the definition of systems and subsystems and in the selection of the preferred context. The MHI is based on the following postulates:

• Systems postulate (SP): A quantum system S is represented by a pair (O, H) such that (i) O is a space of self-adjoint operators on a Hilbert space, representing the observables of the system, (ii) HO is the time-independent Hamiltonian of the system S, and (iii) if ρ0O′ (where O′ is the dual space of O) is the initial state of S, it evolves according to the Schrödinger equation.

Although any quantum system can be decomposed in parts in many ways, according to the MHI a decomposition leads to parts which are also quantum systems only when the components' behaviors are dynamically independent of each other, that is, when there is no interaction among the subsystems:

• Composite systems postulate (CSP): A quantum system represented by S: (O, H), with initial state ρ0O′, is composite when it can be partitioned into two quantum systems S1: (O1, H1) and S2: (O2, H2) such that (i) O = O1O2, and (ii) H = H1I2 + I1H2 (where I1 and I2 are the identity operators in the corresponding tensor product spaces). In this case, we say that S1 and S2 are subsystems of the composite system S = S1S2. If the system is not composite, it is elemental.

With respect to the preferred context, the basic idea of the MHI is that the Hamiltonian of the system defines actualization.
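Before turning to the actualization rule, note that the CSP's no-interaction condition has a direct dynamical consequence that can be checked numerically: when H = H1 ⊗ I2 + I1 ⊗ H2, the two terms commute, so the evolution factorizes into independent subsystem evolutions. A sketch with random Hamiltonians (all names illustrative, not from the text):

```python
import numpy as np

def U_t(H, t):
    """exp(-i H t) for a Hermitian H, via its spectral decomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

rng = np.random.default_rng(1)
def rand_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

H1, H2 = rand_herm(2), rand_herm(3)
I1, I2 = np.eye(2), np.eye(3)

# CSP form: no interaction term between the subsystems
H = np.kron(H1, I2) + np.kron(I1, H2)

# The subsystems then evolve independently: U(t) = U1(t) (x) U2(t)
t = 0.7
assert np.allclose(U_t(H, t), np.kron(U_t(H1, t), U_t(H2, t)))
print("factorized evolution confirmed")
```

Were an interaction term present, the assertion would fail, which is the dynamical-independence criterion behind the CSP.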
Any observable that does not have the symmetries of the Hamiltonian cannot acquire a definite actual value, since this actualization would break the symmetry of the system in an arbitrary way. • Actualization rule (AR): Given an elemental quantum system represented by S: (O, H), the actual-valued observables of S are H and all the observables commuting with H and having, at least, the same symmetries as H. The selection of the preferred context exclusively on the basis of a preferred observable has been criticized by arguing that in the Hilbert space formalism all observables are on an equal footing. However, quantum mechanics is not just Hilbert space mathematics: it is a physical theory that includes a dynamical law in which the Hamiltonian is singled out to play a central role. The justification for selecting the Hamiltonian as the preferred observable ultimately lies in the success of the MHI and its ability to solve interpretive difficulties. With respect to the first point: the scheme has been applied to several well-known physical situations (free particle with spin, harmonic oscillator, free hydrogen atom, Zeeman effect, fine structure, the Born-Oppenheimer approximation), leading to results consistent with empirical evidence (Lombardi and Castagnino 2008, Section 5). With respect to interpretation, the MHI confronts quantum contextuality by selecting a preferred context, and has proved to be able to supply an account of the measurement problem, both in its ideal and its non-ideal versions; moreover, in the non-ideal case it gives a criterion to distinguish between reliable and non-reliable measurements (Lombardi and Castagnino 2008, Section 6). 
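The commutation part of the actualization rule is easy to test numerically (the additional symmetry condition, requiring at least the symmetries of H, is not modeled in this toy sketch; the example Hamiltonian is invented):

```python
import numpy as np

# Toy Hamiltonian with a degenerate level: H = diag(1, 1, 2)
H = np.diag([1.0, 1.0, 2.0])

# An observable acting only inside the degenerate eigenspace commutes with H...
A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 3.]])
# ...while one connecting distinct eigenspaces does not
B = np.array([[0., 0., 1.],
              [0., 2., 0.],
              [1., 0., 0.]])

def commutes(X, Y):
    return np.allclose(X @ Y - Y @ X, 0)

print(commutes(A, H), commutes(B, H))   # True False
```

Under the AR only candidates like A pass the commutation test; B, which would break the conservation of energy-eigenspace structure, is excluded from the preferred context.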
In the MHI property composition and property decomposition hold because the actualization rule only applies to elemental systems: the definite-valued properties of composite systems are selected on the basis of those of the elemental components, following the usual quantum assumption according to which the observable A1 of a subsystem S1 and the observable A = A1I2 of the composite system S = S1S2 represent the same property (Ardenghi and Lombardi 2011). The preferred context of the MHI does not change with time: the definite-valued observables always commute with the Hamiltonian and, therefore, are constants of motion of the system. This means that they remain the same during the whole “life” of the quantum system as a closed system, from its initial “birth”, when it arises as a result of an interaction, up to its final “death”, when it disappears by interacting with another system. As a consequence, there is no need to account for the dynamics of the actual properties as in the BDMI and the SDMI.

10. The interpretation of probability

One of the leading ideas of the modal interpretations is probabilism: quantum mechanics does not correspond in a one-to-one way to actual reality, but rather provides us with a list of possibilities and their probabilities. Therefore, the notions of possibility and probability are central in this interpretive framework. This raises two issues: the formal treatment of probabilities, and the interpretation of probability. Since the set of events corresponding to all projector operators on a given Hilbert space does not have a Boolean structure, the Born probability (which is defined over these projectors) does not satisfy the definition of probability of Kolmogorov (which applies to a Boolean algebra of events). For this reason, some authors define a generalized non-Kolmogorovian probability function over the ortho-algebra of quantum events (Hughes 1989; Cohen 1989).
Modal interpretations do not follow this path: they conceive probabilities as represented by a Kolmogorovian measure on the Boolean algebra representing the definite-valued quantities, generated by mutually commuting projectors. The various modal interpretations differ from each other in their definitions of the preferred context on which the Kolmogorovian probability is defined. As we have seen, the definite-valued properties of a system are usually characterized in terms of the quantum state |φ⟩ and a privileged observable R (Bub and Clifton 1996; Bub, Clifton, and Goldstein 2000; Dieks 2005). Dieks (2007) derives a uniqueness result, namely that given the splitting of a total Hilbert space into two factor spaces, representing the system and its environment, respectively, the Boolean lattice of definite-valued observables is fixed by the state of the system alone. Furthermore, it follows that the Born measure is the only one that is definable from just the product structure of the Hilbert space, the state in the Hilbert space, and the definite-valued observables selected by the state. The MHI defines a context as a complete set of orthogonal projectors {∏i}, such that ∑ii = I and ∏ij = δiji, where I is the identity operator on H. Since each context generates a Boolean structure, the state of the system defines a Kolmogorovian probability function on each individual context (Lombardi and Castagnino 2008). However, only the probabilities defined on the context determined by the eigenprojectors of the Hamiltonian of an elemental closed system correspond to the possible values one of which becomes actual. In modal interpretations the event space on which the (preferred) probability measure is defined is a space of possible events, among which only one becomes actual. The fact that the actual event is not singled out by these interpretations is what makes them fundamentally probabilistic.
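That a single context carries an ordinary Kolmogorovian measure can be verified directly: a complete set of orthogonal projectors plus a state yields non-negative probabilities summing to one. A minimal numpy sketch (the 3-dimensional example is invented for illustration):

```python
import numpy as np

# A context: complete set of orthogonal projectors summing to the identity
P = [np.diag([1., 0., 0.]),
     np.diag([0., 1., 1.])]          # a 1-dimensional and a 2-dimensional projector
assert np.allclose(sum(P), np.eye(3))
assert np.allclose(P[0] @ P[1], 0)   # mutual orthogonality

# Any state defines Pr_i = Tr(rho P_i) on the Boolean algebra they generate
rho = np.diag([0.5, 0.3, 0.2])
pr = [float(np.trace(rho @ p)) for p in P]
print(pr)   # non-negative probabilities summing to 1
```

The non-classical behavior only appears when probabilities from incompatible (non-commuting) contexts are combined, which is exactly what the preferred-context rules of the various modal interpretations avoid.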
This aspect distinguishes modal interpretations from many-worlds interpretations, where the probability measure is defined on a space of events that are all actual. Nevertheless, this does not mean that all modal interpretations agree about the interpretation of probability. In the context of the BDMI, the SDMI and the PMI, it is usually claimed that, given the space of possible events, the state generates an ignorance-interpretable probability measure over this set: quantum probabilities quantify the ignorance of the observer about the actual values acquired by the system's observables (see, e.g., Dieks 1988; Clifton 1995a; Vermaas 1999; Bene and Dieks 2002). In contrast to actualism—the conception that reduces possibility to actuality (see Dieks 2010)—some modal interpretations, in particular the MHI, adopt a possibilist conception, according to which possible events—possibilia—constitute a basic ontological category (see Menzel 2007). The probability measure is in this case seen as a representation of an ontological propensity of a possible quantum event to become actual (Lombardi and Castagnino 2008; see also Suárez 2004). These views do not all exclude each other. If probabilities quantify ignorance about the actual values of the observables, this need not mean that this ignorance can be removed by the addition of further information. If quantum probabilities are ontological propensities, our ignorance about the possible event that becomes actual is a necessary consequence of the indeterministic nature of the system, because there simply is no additional information about a more accurate state of the system.

11. The role of decoherence

According to the environment-induced approach to decoherence (Zurek 1981, 2003; see also Schlosshauer 2007), the measuring apparatus is an open system in continuous interaction with its environment; as a consequence of this interaction, the reduced state of the apparatus and the measured system becomes, almost instantaneously, indistinguishable from a state that would represent an ignorance mixture (“proper mixture”) over unknown values of the apparatus' pointer. The idea that decoherence might play a role in modal interpretations was proposed by several authors early on (Dieks 1989b; Healey 1995). But the phenomenon has acquired central relevance in the modal context in relation to the discussion of non-ideal measurements. As we have seen, in the BDMI and the SDMI, the biorthogonal or the spectral decomposition does not pick out the right properties for the apparatus in non-ideal measurements. Bacciagaluppi and Hemmo (1996) show that, when the apparatus is a finite-dimensional system in interaction with an environment with a huge number of degrees of freedom, decoherence guarantees that the spectral decomposition of the apparatus' reduced state will be very close to the ideally expected result and, as a consequence, the apparatus' pointer is—approximately—selected as an actual definite-valued observable. Alternatively, Bub (1997) proposes that it is not decoherence—with the “tracing out” of the environment and the diagonalization of the reduced state of the apparatus—that is relevant for the definite value of the pointer, but the triorthogonal or n-orthogonal decomposition theorem, since it singles out a unique pointer basis for the apparatus. In either case, interaction with the environment seems to be a great help to the BDMI and the SDMI in handling non-ideal measurements with finite-dimensional apparatuses. However, the case of infinitely many distinct states for the apparatus is perhaps more realistic.
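The finite-dimensional mechanism invoked by Bacciagaluppi and Hemmo can be illustrated with a toy calculation. This is our own sketch, not part of the original text: the pointer-state amplitudes are made up, and the environment is modeled, purely for illustration, by random unit vectors whose overlaps shrink as the number of degrees of freedom grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def reduced_state(c, env):
    """Reduced apparatus state for sum_i c_i |a_i>|e_i> after tracing out
    the environment: rho_ij = c_i c_j* <e_i|e_j> (real vectors here, so the
    order of the overlap is immaterial)."""
    overlap = env.conj() @ env.T   # overlap[i, j] = <e_i|e_j>
    return np.outer(c, c.conj()) * overlap

def random_env_states(dim, n):
    """n random unit vectors in a dim-dimensional environment."""
    v = rng.normal(size=(n, dim))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

c = np.array([np.sqrt(0.3), np.sqrt(0.7)])   # pointer-state amplitudes (made up)
for dim in (4, 100, 10_000):
    rho = reduced_state(c, random_env_states(dim, 2))
    print(dim, abs(rho[0, 1]))
# As the number of environmental degrees of freedom grows, random environment
# states become nearly orthogonal (overlap ~ 1/sqrt(dim)), so the off-diagonal
# terms are suppressed and the reduced state becomes nearly diagonal in the
# pointer basis, with the Born weights 0.3 and 0.7 on the diagonal.
```

The spectral decomposition of such a nearly diagonal reduced state is then close to the ideally expected pointer decomposition, which is the point of the Bacciagaluppi–Hemmo argument for the finite-dimensional case.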
Bacciagaluppi (2000) has analyzed this situation, using a continuous model of the apparatus' interaction with the environment. He concludes that in this case the spectral decomposition of the reduced state of the apparatus does not pick out states that are close enough to the ideally expected state. This result applies more generally to other cases where a macroscopic system (not idealized as finite-dimensional) experiences decoherence due to interaction with its environment (see Donald 1998). As noted above, in the case of the MHI decoherence is not explicitly appealed to in order to account for the definite reading of the apparatus' pointer (neither in ideal nor in non-ideal measurements). However, there is still a relation with the decoherence program. In fact, the measuring apparatus is always a macroscopic system with a huge number of degrees of freedom, and the pointer must be a “collective” and empirically accessible observable; as a consequence, the many degrees of freedom corresponding to the degeneracies of the pointer play the role of a decohering “internal environment” (for details, see Lombardi 2010; Lombardi et al. 2011). The compatibility between the MHI and decoherence becomes clearer when the phenomenon of decoherence is understood from a closed-system perspective (Castagnino, Laura, and Lombardi 2007; Castagnino, Fortin, and Lombardi 2010; Lombardi, Fortin, and Castagnino 2012).

12. Open problems and perspectives

There are a number of open problems and perspectives in the modal program. Here we will consider some of them. Modal interpretations are based on the standard formalism of quantum mechanics (in the Hilbert space version or in the algebraic version).
However, Brown, Suárez and Bacciagaluppi (1998) argue that there is more to quantum reality than what is described by operators and quantum states: they claim that gauges and coordinate systems are important to our description of physical reality as well, while modal interpretations (AM, BDMI and SDMI) have standardly not taken such things into consideration. In a similar vein, it has been argued that the Galilean space-time symmetries endow the formal skeleton of quantum mechanics with the physical flesh and blood that identify the fundamental physical magnitudes and that allow the theory to be applied to concrete physical situations (Lombardi and Castagnino 2008). The set of definite-valued observables of a system should be left invariant by the Galilean transformations: it would be unacceptable that this set changed as a mere result of a change in the perspective from which the system is described. On the basis of this idea, the MHI rule of actualization has been reformulated in an explicitly invariant form, in terms of the Casimir operators of the Galilean group (Ardenghi, Castagnino, and Lombardi 2009; Lombardi, Castagnino, and Ardenghi 2010). Another fundamental question is the relativistic extension of the modal approach. Dickson and Clifton (1998) have shown that a large class of modal interpretations of ordinary quantum mechanics cannot be made Lorentz-invariant in a straightforward way (see also Myrvold 2002). With respect to the extension to algebraic quantum field theory (see Dieks 2002; Kitajima 2004), Clifton (2000) proposed a natural generalization of the non-relativistic modal scheme, but Earman and Ruetsche (2005) showed that it is not yet clear whether it will be able to deal with measurement situations and whether it is empirically adequate. 
The problems revealed by these investigations are due to the non-relativistic nature of the formalism of quantum mechanics that is employed, in particular to the fact that the concept of a state of an extended system at one instant is central. In a local field-theoretic context this becomes different, and this may avoid conflicts with relativity (Earman and Ruetsche 2005). Berkovitz and Hemmo (2005) and Hemmo and Berkovitz (2005) propose a different way out: they argue that perspectivalism can come to the rescue here (see also Berkovitz and Hemmo 2006). In turn, in the context of the MHI, it has been argued that the actualization rule, expressed in terms of the Casimir operators of the Galilean group in non-relativistic quantum mechanics, can be transferred to the relativistic domain by changing the symmetry group accordingly: the definite-valued observables of a system would be those represented by the Casimir operators of the Poincaré group. Since the mass operator and the squared spin operator are the only Casimir operators of the Poincaré group, they would always be definite-valued observables. This conclusion would be in agreement with a usual assumption in quantum field theory: elemental particles always have definite values of mass and spin, and those values are precisely what define the different kinds of elemental particles of the theory (Ardenghi, Castagnino, and Lombardi 2009; Lombardi, Castagnino, and Ardenghi 2010). There are also specifically philosophical issues concerning ontological matters: about the nature of the items referred to by quantum mechanics, that is, about the basic categories of the quantum ontology. As we have seen, in general the properties of quantum systems are considered to be monadic, with the exception of the relational version of the BDMI and the PMI where these properties are relational. 
In any case, it might be asked whether a quantum system has to be conceived as an individual substratum supporting properties or as a mere “bundle” of properties. Lombardi and Castagnino (2008), and da Costa, Lombardi and Lastiri (forthcoming) have suggested that, in the modal context, the bundle view might be appropriate to supply an answer to the problem of indistinguishability (see also French and Krause 2006). These and similar problems, and their proposed solutions, have arisen in the context of detailed technical investigations. This illustrates one of the advantages of the modal approach: it makes use of a precise set of rules that determine the set of definite-valued observables, and this makes it possible to derive rigorous results. It may well be that several of these results, e.g., no-go theorems, can be applied to other interpretations as well (e.g., to the many-worlds interpretation, see Dieks 2007). Whatever the merit of the modal ideas in the end, one can at least say that they have given rise to a serious and fruitful series of investigations into the nature of quantum theory.

Bibliography

• Albert, D. and B. Loewer, 1990, “Wanted dead or alive: two attempts to solve Schrödinger's paradox,” in Proceedings of the PSA 1990, Vol. 1, A. Fine, M. Forbes, and L. Wessels (eds.), East Lansing, Michigan: Philosophy of Science Association, pp. 277–285. • –––, 1991, “Some alleged solutions to the measurement problem,” Synthese, 88: 87–98. • –––, 1993, “Non-ideal measurements,” Foundations of Physics Letters, 6: 297–305. • Ardenghi, J. S., M. Castagnino, and O. Lombardi, 2009, “Quantum mechanics: modal interpretation and Galilean transformations,” Foundations of Physics, 39: 1023–1045. • Ardenghi, J. S. and O. Lombardi, 2011, “The Modal-Hamiltonian Interpretation of quantum mechanics as a kind of ‘atomic’ interpretation,” Physics Research International, 2011: 379604. • Arntzenius, F., 1990, “Kochen's interpretation of quantum mechanics,” in Proceedings of the PSA 1990, Vol.
1, A. Fine, M. Forbes, and L. Wessels (eds.), East Lansing, Michigan: Philosophy of Science Association, pp. 241–249. • Bacciagaluppi, G., 1995, “A Kochen-Specker theorem in the modal interpretation of quantum mechanics,” International Journal of Theoretical Physics, 34: 1205–1216. • –––, 1996, Topics in the Modal Interpretation of Quantum Mechanics. Dissertation, Cambridge University. • –––, 1998, “Bohm-Bell dynamics in the modal interpretation,” in The Modal Interpretation of Quantum Mechanics, D. Dieks, and P. Vermaas (eds.), Dordrecht: Kluwer Academic Publishers, pp. 177–211. • –––, 2000, “Delocalized properties in the modal interpretation of a continuous model of decoherence,” Foundations of Physics, 30: 1431–1444. • Bacciagaluppi, G. and M. Dickson, 1999, “Dynamics for modal interpretations,” Foundations of Physics, 29: 1165–1201. • Bacciagaluppi, G., M. Donald, and P. Vermaas, 1995, “Continuity and discontinuity of definite properties in the modal interpretation,” Helvetica Physica Acta, 68: 679–704. • Bacciagaluppi, G. and M. Hemmo, 1994, “Making sense of approximate decoherence,” in Proceedings of the PSA 1994, Vol. 1, D. Hull, M. Forbes, and R. Burian (eds.), East Lansing, Michigan: Philosophy of Science Association, pp. 345–354. • –––, 1996, “Modal interpretations, decoherence and measurements,” Studies in History and Philosophy of Modern Physics, 27: 239–277. • Ballentine, L., 1998, Quantum Mechanics: A Modern Development, Singapore: World Scientific. • Bell, J. S., 1984, “Beables for quantum field theory,” in Speakable and Unspeakable in Quantum Mechanics (1987), Cambridge: Cambridge University Press, pp. 173–180. • Bene, G., 1997, “Quantum reference systems: A new framework for quantum mechanics,” Physica A, 242: 529–565. • Bene, G. and D. Dieks, 2002, “A perspectival version of the modal interpretation of quantum mechanics and the origin of macroscopic behavior,” Foundations of Physics, 32: 645–671. • Berkovitz, J. and M. 
Hemmo, 2005, “Can modal interpretations of quantum mechanics be reconciled with relativity?,” Philosophy of Science, 72: 789–801. • –––, 2006, “A new modal interpretation in terms of relational properties,” in Physical Theory and its Interpretation: Essays in honor of Jeffrey Bub, W. Demopoulos and I. Pitowsky (eds.), New York: Springer, pp. 1–28. • Bohm, D., 1952, “A suggested interpretation of the quantum theory in terms of ‘hidden’ variables, I and II,” Physical Review, 85: 166–193. • Brown, H., M. Suárez, and G. Bacciagaluppi, 1998, “Are ‘sharp values’ of observables always objective elements of reality?,” in The Modal Interpretation of Quantum Mechanics, D. Dieks and P. Vermaas (eds.), Dordrecht: Kluwer Academic Publishers, pp. 69–101. • Bub, J., 1992, “Quantum mechanics without the projection postulate,” Foundations of Physics, 22: 737–754. • –––, 1994, “On the structure of quantal proposition systems,” Foundations of Physics, 24: 1261–1279. • –––, 1997, Interpreting the Quantum World, Cambridge: Cambridge University Press. • Bub, J. and R. Clifton, 1996, “A uniqueness theorem for interpretations of quantum mechanics,” Studies in History and Philosophy of Modern Physics, 27: 181–219. • Bub, J., R. Clifton, and S. Goldstein, 2000, “Revised proof of the uniqueness theorem for ‘no collapse’ interpretations of quantum mechanics,” Studies in History and Philosophy of Modern Physics, 31: 95–98. • Castagnino, M., S. Fortin, and O. Lombardi, 2010, “Is the decoherence of a system the result of its interaction with the environment?,” Modern Physics Letters A, 25: 1431–1439. • Castagnino, M., R. Laura, and O. Lombardi, 2007, “A general conceptual framework for decoherence in closed and open systems,” Philosophy of Science, 74: 968–980. • Clifton, R., 1995a, “Independently motivating the Kochen-Dieks modal interpretation of quantum mechanics,” The British Journal for the Philosophy of Science, 46: 33–57.
• –––, 1995b, “Making sense of the Kochen-Dieks ‘no-collapse’ interpretation of quantum mechanics independent of the measurement problem,” Annals of the New York Academy of Science, 755: 570–578. • –––, 1995c, “Why modal interpretations of quantum mechanics must abandon classical reasoning about the physical properties,” International Journal of Theoretical Physics, 34: 1302–1312. • –––, 1996, “The properties of modal interpretations of quantum mechanics,” The British Journal for the Philosophy of Science, 47: 371–398. • –––, 2000, “The modal interpretation of algebraic quantum field theory,” Physics Letters A, 271: 167–177. • Cohen, D. W., 1989, An Introduction to Hilbert Space and Quantum Logic, New York: Springer-Verlag. • Da Costa, N., O. Lombardi, and M. Lastiri, forthcoming, “A modal ontology of properties for quantum mechanics,” Synthese, DOI 10.1007/s11229-012-0218-4 [available online]. • De Witt, B. S. M., 1970, “Quantum mechanics and reality,” Physics Today, 23: 30–35. • Dickson, M., 1994, “Wavefunction tails in the modal interpretation,” in D. Hull, M. Forbes, and R. Burian (eds.), Proceedings of the PSA 1994, Vol. 1, East Lansing, Michigan: Philosophy of Science Association, pp. 366–376. • –––, 1995a, “Faux-Boolean algebras, classical probability, and determinism,” Foundations of Physics Letters, 8: 231–242. • –––, 1995b, “Faux-Boolean algebras and classical models,” Foundations of Physics Letters, 8: 401–415. • Dickson, M. and R. Clifton, 1998, “Lorentz-invariance in modal interpretations,” in The Modal Interpretation of Quantum Mechanics, D. Dieks and P. Vermaas (eds.), Dordrecht: Kluwer Academic Publishers, pp. 9–48. • Dieks, D., 1988, “The formalism of quantum theory: an objective description of reality?,” Annalen der Physik, 7: 174–190. • –––, 1989a, “Quantum mechanics without the projection postulate and its realistic interpretation,” Foundations of Physics, 38: 1397–1423. 
• –––, 1989b, “Resolution of the measurement problem through decoherence of the quantum state,” Physics Letters A, 142: 439–446. • –––, 1994a, “Objectification, measurement and classical limit according to the modal interpretation of quantum mechanics,” in P. Busch, P. Lahti, and P. Mittelstaedt (eds.), Proceedings of the Symposium on the Foundations of Modern Physics, Singapore: World Scientific, pp. 160–167. • –––, 1994b, “Modal interpretation of quantum mechanics, measurements, and macroscopic behaviour,” Physical Review A, 49: 2290–2300. • –––, 1995, “Physical motivation of the modal interpretation of quantum mechanics,” Physics Letters A, 197: 367–371. • –––, 1998, “Preferred factorizations and consistent property attribution,” in Quantum Measurement: Beyond Paradox, R. Healey and G. Hellman (eds.), Minneapolis: University of Minnesota Press, pp. 144–160. • –––, 2002, “Events and covariance in the interpretation of quantum field theory,” in Ontological Aspects of Quantum Field Theory, M. Kuhlmann, H. Lyre, and A. Wayne (eds.), Singapore: World Scientific, pp. 215–234. • –––, 2005, “Quantum mechanics: an intelligible description of objective reality?,” Foundations of Physics, 35: 399–415. • –––, 2007, “Probability in modal interpretations of quantum mechanics,” Studies in History and Philosophy of Modern Physics, 38: 292–310. • –––, 2009, “Objectivity in perspective: relationism in the interpretation of quantum mechanics,” Foundations of Physics, 39: 760–775. • –––, 2010, “Quantum mechanics, chance and modality,” Philosophica, 83: 117–137. • Dieks, D. and P. Vermaas (eds.), 1998, The Modal Interpretation of Quantum Mechanics, Dordrecht: Kluwer Academic Publishers. • Donald, M., 1998, “Discontinuity and continuity of definite properties in the modal interpretation,” in The Modal Interpretation of Quantum Mechanics, D. Dieks and P. Vermaas (eds.), Dordrecht: Kluwer Academic Publishers, pp. 213–222. • Earman, J. and L.
Ruetsche, 2005, “Relativistic invariance and modal interpretations,” Philosophy of Science, 72: 557–583. • Elby, A., 1993, “Why ‘modal’ interpretations of quantum mechanics don't solve the measurement problem,” Foundations of Physics Letters, 6: 5–19. • Everett, H., 1957, “Relative state formulation of quantum mechanics,” Reviews of Modern Physics, 29: 454–462. • French, S. and D. Krause, 2006, Identity in Physics: A Historical, Philosophical and Formal Analysis, Oxford: Oxford University Press. • Gambetta, J. and H. M. Wiseman, 2003, “Interpretation of non-Markovian stochastic Schrödinger equations as a hidden-variable theory,” Physical Review A, 68: 062104. • –––, 2004, “Modal dynamics for positive operator measures,” Foundations of Physics, 34: 419–448. • Healey, R., 1995, “Dissipating the quantum measurement problem,” Topoi, 14: 55–65. • Hemmo, M. and J. Berkovitz, 2005, “Modal interpretations of quantum mechanics and relativity: a reconsideration,” Foundations of Physics, 35: 373–397. • Hughes, R. I. G., 1989, The Structure and Interpretation of Quantum Mechanics, Cambridge Mass.: Harvard University Press. • Kitajima, Y., 2004, “A remark on the modal interpretation of algebraic quantum field theory,” Physics Letters A, 331: 181–186. • Kochen, S., 1985, “A new interpretation of quantum mechanics,” in Symposium on the Foundations of Modern Physics 1985, P. Mittelstaedt and P. Lahti (eds.), Singapore: World Scientific, pp. 151–169. • Kochen, S. and E. Specker, 1967, “The problem of hidden variables in quantum mechanics,” Journal of Mathematics and Mechanics, 17: 59–87. • Laudisa, F. and C. Rovelli, 2008, “Relational quantum mechanics,” in The Stanford Encyclopedia of Philosophy, Fall 2008 Edition, Edward N. Zalta (ed.), URL = <>. • Lombardi, O., 2010, “The central role of the Hamiltonian in quantum mechanics: decoherence and interpretation,” Manuscrito, 33: 307–349. • Lombardi, O. and M.
Castagnino, 2008, “A modal-Hamiltonian interpretation of quantum mechanics,” Studies in History and Philosophy of Modern Physics, 39: 380–443. • Lombardi, O., M. Castagnino, and J. S. Ardenghi, 2010, “The modal-Hamiltonian interpretation and the Galilean covariance of quantum mechanics,” Studies in History and Philosophy of Modern Physics, 41: 93–103. • Lombardi, O., S. Fortin, and M. Castagnino, 2012, “The problem of identifying the system and the environment in the phenomenon of decoherence,” in EPSA Philosophy of Science: Amsterdam 2009, H. W. de Regt, S. Hartmann, and S. Okasha (eds.), Dordrecht: Springer, pp. 161–174. • Lombardi, O., S. Fortin, M. Castagnino, and J. S. Ardenghi, 2011, “Compatibility between environment-induced decoherence and the modal-Hamiltonian interpretation of quantum mechanics,” Philosophy of Science, 78: 1024–1036. • Menzel, C., 2007, “Actualism,” in The Stanford Encyclopedia of Philosophy, Spring 2007 Edition, Edward N. Zalta (ed.), URL = <>. • Myrvold, W., 2002, “Modal interpretations and relativity,” Foundations of Physics, 32: 1773–1784. • Reeder, N. and R. Clifton, 1995, “Uniqueness of prime factorizations of linear operators in quantum mechanics,” Physics Letters A, 204: 198–204. • Rovelli, C., 1996, “Relational quantum mechanics,” International Journal of Theoretical Physics, 35: 1637–1678. • Rovelli, C. and M. Smerlak, 2007, “Relational EPR,” Foundations of Physics, 37: 427–445. • Ruetsche, L., 1995, “Measurement error and the Albert-Loewer problem,” Foundations of Physics Letters, 8: 327–344. • –––, 1996, “Van Fraassen on preparation and measurement,” Philosophy of Science, 63: S338-S346. • –––, 2003, “Modal semantics, modal dynamics and the problem of state preparation,” International Studies in the Philosophy of Science, 17: 25–41. • Schlosshauer, M., 2007, Decoherence and the Quantum-to-Classical Transition, Heidelberg-Berlin: Springer. 
• Schrödinger, E., 1935, “Discussion of probability relations between separated systems,” Proceedings of the Cambridge Philosophical Society, 31: 555–563. • Suárez, M., 2004, “Quantum selections, propensities and the problem of measurement,” The British Journal for the Philosophy of Science, 55: 219–255. • Sudbery, A., 2002, “Diese verdammte Quantenspringerei,” Studies in History and Philosophy of Modern Physics, 33: 387–411. • van Fraassen, B. C., 1972, “A formal approach to the philosophy of science,” in Paradigms and Paradoxes: The Philosophical Challenge of the Quantum Domain, R. Colodny (ed.), Pittsburgh: University of Pittsburgh Press, pp. 303–366. • –––, 1974, “The Einstein-Podolsky-Rosen paradox,” Synthese, 29: 291–309. • –––, 1991, Quantum Mechanics: An Empiricist View, Oxford: Clarendon Press. • –––, 2010, “Rovelli's world,” Foundations of Physics, 40: 390–417. • Vermaas, P., 1996, “Unique transition probabilities in the modal interpretation,” Studies in History and Philosophy of Modern Physics, 27: 133–159. • –––, 1997, “A no-go theorem for joint property ascriptions in modal interpretations of quantum mechanics,” Physical Review Letters, 78: 2033–2037. • –––, 1998, “The pros and cons of the Kochen-Dieks and the atomic modal interpretation,” in The Modal Interpretation of Quantum Mechanics, D. Dieks and P. Vermaas (eds.), Dordrecht: Kluwer Academic Publishers, pp. 103–148. • –––, 1999, A Philosopher's Understanding of Quantum Mechanics: Possibilities and Impossibilities of a Modal Interpretation, Cambridge: Cambridge University Press. • Vermaas, P. and D. Dieks, 1995, “The modal interpretation of quantum mechanics and its generalization to density operators,” Foundations of Physics, 25: 145–158. • Vink, J., 1993, “Quantum mechanics in terms of discrete beables,” Physical Review A, 48: 1808–1818. • Zurek, W. H., 1981, “Pointer basis of quantum apparatus: into what mixtures does the wave packet collapse?,” Physical Review D, 24: 1516–1525.
• –––, 2003, “Decoherence, einselection, and the quantum origins of the classical,” Reviews of Modern Physics, 75: 715–776.

Copyright © 2012 by Olimpia Lombardi and Dennis Dieks
This article is about the present-day science. For the work by Aristotle, see “Physics (Aristotle)”. For a history of the science, see “History of physics”. Physics is the science of matter[1] and its motion[2][3], as well as space and time[4][5]—the science that deals with concepts such as force, energy, mass, and charge. As an experimental science, its goal is to understand the natural world.[6][7] For the etymology of the word physics, see physis (φύσις). In one form or another, physics is one of the oldest academic disciplines; through its modern subfield of astronomy, it may be the oldest of all.[8] Sometimes synonymous with philosophy, chemistry and even certain branches of mathematics and biology during the last two millennia, physics emerged as a modern science in the 17th century[9] and these disciplines are now generally distinct, although the boundaries remain difficult to define. Advances in physics often translate to the technological sector, and sometimes influence the other sciences, as well as mathematics and philosophy. For example, advances in the understanding of electromagnetism have led to the widespread use of electrically driven devices (televisions, computers, home appliances etc.); advances in thermodynamics led to the development of motorized transport; and advances in mechanics led to the development of the calculus, quantum chemistry, and the use of instruments like the electron microscope in microbiology. Today, physics is a broad and highly developed subject. Research is often divided into four subfields: condensed matter physics; atomic, molecular, and optical physics; high energy physics; and astronomy and astrophysics. Most physicists also specialize in either theoretical or experimental research, the former dealing with the development of new theories, and the latter dealing with the experimental testing of theories and the discovery of new phenomena.
Despite important discoveries during the last four centuries, there are a number of open questions in physics, and many areas of active research.

Core theories

Although physics encompasses a wide variety of phenomena, all competent physicists are familiar with the basic theories of classical mechanics, electromagnetism, relativity, thermodynamics, and quantum mechanics. Each of these theories has been tested in numerous experiments and proven to be an accurate model of nature within its domain of validity. For example, classical mechanics correctly describes the motion of objects in everyday experience, but it breaks down at the atomic scale, where it is superseded by quantum mechanics, and at speeds approaching the speed of light, where relativistic effects become important. While these theories have long been well-understood, they continue to be areas of active research—for example, a remarkable aspect of classical mechanics known as chaos theory was developed in the 20th century, three centuries after the original formulation of mechanics by Isaac Newton (1642–1727). The basic theories form a foundation for the study and research of more specialized topics. A table of these theories, along with many of the concepts they employ, can be found here.

Classical mechanics

Main article: Classical mechanics
A pulley uses the principle of mechanical advantage so that a small force can lift a heavy weight.
Classical mechanics is a model of the physics of forces acting upon bodies. It is often referred to as "Newtonian mechanics" after Isaac Newton and his laws of motion. Mechanics is subdivided into statics, which models objects at rest, kinematics, which models objects in motion, and dynamics, which models objects subjected to forces. The classical mechanics of continuous and deformable objects is continuum mechanics, which can itself be broken down into solid mechanics and fluid mechanics according to the state of matter being studied.
The latter, the mechanics of liquids and gases, includes hydrostatics, hydrodynamics, pneumatics, aerodynamics, and other fields. Classical mechanics produces very accurate results within the domain of everyday experience. It is superseded by relativistic mechanics for systems moving at large velocities near the speed of light, quantum mechanics for systems at small distance scales, and relativistic quantum field theory for systems with both properties. Nevertheless, classical mechanics is still very useful, because it is much simpler and easier to apply than these other theories, and it has a very large range of approximate validity. Classical mechanics can be used to describe the motion of human-sized objects (such as tops and baseballs), many astronomical objects (such as planets and galaxies), and certain microscopic objects (such as organic molecules). An important concept of mechanics is the identification of conserved energy and momentum, which lead to the Lagrangian and Hamiltonian reformulations of Newton's laws. Theories such as fluid mechanics and the kinetic theory of gases result from applying classical mechanics to macroscopic systems. A relatively recent result of considerations concerning the dynamics of nonlinear systems is chaos theory, the study of systems in which small changes in a variable may have large effects. Newton's law of universal gravitation, formulated within classical mechanics, explained Kepler's laws of planetary motion and helped make classical mechanics an important element of the Scientific Revolution.

Electromagnetism

Main article: Electromagnetism
Electromagnetism describes the interaction of charged particles with electric and magnetic fields. It can be divided into electrostatics, the study of interactions between electric charges at rest, and electrodynamics, the study of interactions between moving charges and radiation. The classical theory of electromagnetism is based on the Lorentz force law and Maxwell's equations.
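The Lorentz force law just mentioned lends itself to a quick numerical sketch. This is an illustration added here, not part of the original article; the field and velocity values are made up.

```python
import numpy as np

# Lorentz force law: F = q (E + v x B).  The electric part is parallel to E;
# the magnetic part is perpendicular to both v and B.
q = 1.602e-19                     # charge of a proton, C
E = np.array([0.0, 0.0, 1.0e3])   # electric field, V/m (made-up value)
B = np.array([0.0, 0.5, 0.0])     # magnetic field, T (made-up value)
v = np.array([1.0e5, 0.0, 0.0])   # particle velocity, m/s (made-up value)

F = q * (E + np.cross(v, B))
print(F)  # force in newtons; here v x B points along +z, reinforcing E
```

With these values both contributions point along the z-axis, so the total force is q(E_z + v_x B_y) in that direction.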
Electrostatics is the study of phenomena associated with charged bodies at rest. As described by Coulomb’s law, such bodies exert forces on each other. Their behavior can be analyzed in terms of the concept of an electric field surrounding any charged body, such that another charged body placed within the field is subject to a force proportional to the magnitude of its own charge and the magnitude of the field at its location. Whether the force is attractive or repulsive depends on the polarity of the charge. Electrostatics has many applications, ranging from the analysis of phenomena such as thunderstorms to the study of the behavior of electron tubes. Electrodynamics is the study of phenomena associated with charged bodies in motion and varying electric and magnetic fields. Since a moving charge produces a magnetic field, electrodynamics is concerned with effects such as magnetism, electromagnetic radiation, and electromagnetic induction, including such practical applications as the electric generator and the electric motor. This area of electrodynamics, known as classical electrodynamics, was first systematically explained by James Clerk Maxwell, and Maxwell’s equations describe the phenomena of this area with great generality. A more recent development is quantum electrodynamics, which incorporates the laws of quantum theory in order to explain the interaction of electromagnetic radiation with matter. Dirac, Heisenberg, and Pauli were pioneers in the formulation of quantum electrodynamics. Relativistic electrodynamics accounts for relativistic corrections to the motions of charged particles when their speeds approach the speed of light. It applies to phenomena involved with particle accelerators and electron tubes carrying high voltages and currents. Electromagnetism encompasses various real-world electromagnetic phenomena. For example, light is an oscillating electromagnetic field that is radiated from accelerating charged particles. 
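Coulomb's law, described above, can be evaluated directly. The following sketch (added here for illustration; the constants are standard values) shows how the sign of the product of charges determines whether the force is attractive or repulsive.

```python
# Coulomb's law: F = k q1 q2 / r^2.  With this sign convention, a positive
# result means repulsion and a negative result means attraction.
K = 8.9875e9   # Coulomb constant, N m^2 C^-2

def coulomb_force(q1, q2, r):
    """Signed Coulomb force magnitude between two point charges."""
    return K * q1 * q2 / r**2

e = 1.602e-19  # elementary charge, C
# Electron and proton separated by the Bohr radius: opposite charges attract.
F = coulomb_force(-e, e, 5.29e-11)
print(F)  # a small negative number (attractive), on the order of 1e-7 N
# Two electrons at the same separation repel with equal magnitude.
print(coulomb_force(-e, -e, 5.29e-11) == -F)
```

The same inverse-square structure underlies the electric-field picture: the force on a test charge is proportional to its own charge and to the field magnitude at its location.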
Aside from gravity, most of the forces in everyday experience are ultimately a result of electromagnetism. The principles of electromagnetism find applications in various allied disciplines such as microwaves, antennas, electric machines, satellite communications, bioelectromagnetics, plasmas, nuclear research, fiber optics, electromagnetic interference and compatibility, electromechanical energy conversion, radar meteorology, and remote sensing. Electromagnetic devices include transformers, electric relays, radio/TV, telephones, electric motors, transmission lines, waveguides, optical fibers, and lasers.

Relativity

High-precision test of general relativity by the Cassini space probe (artist's impression): radio signals sent between the Earth and the probe (green wave) are delayed by the warpage of space and time (blue lines).
Relativity is a generalization of classical mechanics that describes fast-moving or very massive systems. It remains consistent with Maxwell's equations and includes special and general relativity. The theory of special relativity was proposed in 1905 by Albert Einstein in his article "On the Electrodynamics of Moving Bodies". It is based on two postulates: (1) that the mathematical forms of the laws of physics are invariant in all inertial systems; and (2) that the speed of light in a vacuum is constant and independent of the source or observer. Reconciling the two postulates requires a unification of space and time into the frame-dependent concept of spacetime. Special relativity has a variety of surprising consequences that seem to violate common sense, but all have been experimentally verified. It overthrows Newtonian notions of absolute space and time by stating that distance and time depend on the observer, and that time and space are perceived differently, depending on the observer. The theory leads to the assertion of change in mass, dimension, and time with increased velocity.
It also yields the equivalence of matter and energy, as expressed in the mass-energy equivalence formula E = mc², where c is the speed of light in a vacuum. Special relativity and the Galilean relativity of Newtonian mechanics agree when velocities are small compared to the speed of light. Special relativity does not describe gravitation; however, it can handle accelerated motion in the absence of gravitation.[10]

General relativity is the geometrical theory of gravitation published by Albert Einstein in 1915/16.[11][12] It unifies special relativity, Newton's law of universal gravitation, and the insight that gravitation can be described by the curvature of space and time. In general relativity, the curvature of space-time is produced by the energy of matter and radiation. General relativity is distinguished from other metric theories of gravitation by its use of the Einstein field equations to relate space-time content and space-time curvature. Local Lorentz invariance requires that the manifolds described in general relativity be 4-dimensional and Lorentzian rather than Riemannian. In addition, the principle of general covariance requires that the mathematics be expressed using tensor calculus.

The first success of general relativity was in explaining the anomalous perihelion precession of Mercury. Then in 1919, Sir Arthur Eddington announced that observations of stars near the eclipsed Sun confirmed general relativity's prediction that massive objects bend light. Since then, many other observations and experiments have confirmed many of the predictions of general relativity, including gravitational time dilation, the gravitational redshift of light, signal delay, and gravitational radiation. In addition, numerous observations are interpreted as confirming one of general relativity's most mysterious and exotic predictions, the existence of black holes.
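To make the formulas concrete, here is a minimal numerical sketch (plain Python; the function names and sample values are illustrative only) of the rest energy E = mc² and of the Lorentz factor whose behavior separates special relativity from the Galilean limit:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def rest_energy(mass_kg: float) -> float:
    """E = m c^2: rest energy in joules for a given mass."""
    return mass_kg * C**2

def lorentz_factor(v: float) -> float:
    """gamma = 1 / sqrt(1 - v^2/c^2); diverges as v approaches c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# One gram of matter corresponds to roughly 9e13 J of rest energy.
print(f"E for 1 g: {rest_energy(1e-3):.3e} J")
# At everyday speeds gamma is indistinguishable from 1, which is why
# Galilean relativity agrees with special relativity at low velocity.
print(f"gamma at 30 m/s: {lorentz_factor(30.0):.12f}")
print(f"gamma at 0.9c:   {lorentz_factor(0.9 * C):.3f}")
```

The divergence of the factor as v approaches c is the quantitative content of the "change in mass, dimension, and time with increased velocity" mentioned above.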
Thermodynamics and statistical mechanics

Main article: Thermodynamics

Thermodynamics studies the effects of changes in temperature, pressure, and volume on physical systems at the macroscopic scale, and the transfer of energy as heat.[13] Historically, thermodynamics developed out of a need to increase the efficiency of early steam engines.[14] The starting point for most thermodynamic considerations is the laws of thermodynamics, which postulate that energy can be exchanged between physical systems as heat or work.[15] They also postulate the existence of a quantity named entropy, which can be defined for any system.[16]

In thermodynamics, interactions between large ensembles of objects are studied and categorized. Central to this are the concepts of system and surroundings. A system is composed of particles, whose average motions define its properties, which in turn are related to one another through equations of state. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.

Statistical mechanics analyzes macroscopic systems by applying statistical principles to their microscopic constituents. It provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic or bulk properties of materials that can be observed in everyday life. Thermodynamics can be explained as a natural result of statistics and mechanics (classical and quantum) at the microscopic level. In this way, the gas laws can be derived from the assumption that a gas is a collection of individual particles, as hard spheres with mass. Conversely, if the individual particles are also considered to have charge, then the individual accelerations of those particles will cause the emission of light.
It was these considerations which caused Max Planck to formulate his law of blackbody radiation,[17] but only with the assumption that the spectrum of radiation emitted from these particles is not continuous in frequency, but rather quantized.[18]

Quantum mechanics

Main article: Quantum mechanics

Quantum mechanics is the branch of physics treating atomic and subatomic systems and their interaction with radiation in terms of observable quantities. It is based on the observation that all forms of energy are released in discrete units or bundles called "quanta". Remarkably, quantum theory typically permits only probable or statistical calculation of the observed features of subatomic particles, understood in terms of wavefunctions. The Schrödinger equation plays the role in quantum mechanics that Newton's laws and conservation of energy serve in classical mechanics—i.e., it predicts the future behavior of a dynamic system—and is a wave equation in terms of the wavefunction which predicts analytically and precisely the probability of events or outcomes.

The formalism of quantum mechanics was developed during the 1920s. In 1924, Louis de Broglie proposed that not only do light waves sometimes exhibit particle-like properties, as in the photoelectric effect and atomic spectra, but particles may also exhibit wavelike properties. Two different formulations of quantum mechanics were presented following de Broglie’s suggestion. The wave mechanics of Erwin Schrödinger (1926) involves the use of a mathematical entity, the wave function, which is related to the probability of finding a particle at a given point in space. The matrix mechanics of Werner Heisenberg (1925) makes no mention of wave functions or similar concepts but was shown to be mathematically equivalent to Schrödinger’s theory.
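The discreteness of energy mentioned above can be seen in the simplest exactly solvable case. The sketch below (plain Python; the well width is illustrative, and the formula is the standard textbook result for a particle in a one-dimensional infinite square well, not something derived in this article) lists the first few energy levels E_n = n²π²ħ²/(2mL²):

```python
import math

HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s
M_E = 9.109_383_7015e-31   # electron mass, kg

def box_energy(n: int, length_m: float, mass_kg: float = M_E) -> float:
    """E_n = n^2 pi^2 hbar^2 / (2 m L^2): the n-th energy eigenvalue
    of a particle confined to an infinite square well of width L."""
    return (n * math.pi * HBAR) ** 2 / (2.0 * mass_kg * length_m**2)

# The allowed energies form a discrete ladder growing as n^2 --
# a continuum of energies is simply not available to the particle.
L = 1e-9  # a 1 nm wide well
for n in (1, 2, 3):
    print(f"E_{n} = {box_energy(n, L):.3e} J")
```

For an electron in a 1 nm well the ground-state energy comes out near 6e-20 J (a few tenths of an eV), a typical scale for the "quanta" the text refers to.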
A particularly important discovery of the quantum theory is the uncertainty principle, enunciated by Heisenberg in 1927, which places an absolute theoretical limit on the accuracy of certain measurements; as a result, the assumption by earlier scientists that the physical state of a system could be measured exactly and used to predict future states had to be abandoned. Quantum mechanics was combined with the theory of relativity in the formulation of P. A. M. Dirac (1928), which, in addition, predicted the existence of antiparticles. Other developments of the theory include quantum statistics, presented in one form by Einstein and S. N. Bose (the Bose-Einstein statistics) and in another by Dirac and Enrico Fermi (the Fermi-Dirac statistics); quantum electrodynamics, concerned with interactions between charged particles and electromagnetic fields; its generalization, quantum field theory; and quantum electronics. The discovery of quantum mechanics in the early 20th century revolutionized physics, and quantum mechanics is fundamental to most areas of current research.

Theory and experiment

The culture of physics research differs from most sciences in the separation of theory and experiment. Since the twentieth century, most individual physicists have specialized in either theoretical physics or experimental physics. The great Italian physicist Enrico Fermi (1901–1954), who made fundamental contributions to both theory and experimentation in nuclear physics, was a notable exception. In contrast, almost all the successful theorists in biology and chemistry (e.g. the American quantum chemist and biochemist Linus Pauling) have also been experimentalists, although this has begun to change. Theorists seek to develop mathematical models that both agree with existing experiments and successfully predict future results, while experimentalists devise and perform experiments to test theoretical predictions and explore new phenomena.
Although theory and experiment are developed separately, they are strongly dependent upon each other. Progress in physics frequently comes about when experimentalists make a discovery that existing theories cannot explain, or when new theories generate experimentally testable predictions. Theorists working closely with experimentalists frequently employ phenomenology.

Theoretical physics is closely related to mathematics, which provides the language of physical theories, and large areas of mathematics, such as calculus, have been invented specifically to solve problems in physics. Theorists may also rely on numerical analysis and computer simulations, which play an ever richer role in the formulation of physical models. The fields of mathematical and computational physics are active areas of research. Theoretical physics has historically rested on philosophy and metaphysics; electromagnetism was unified this way.[19] Thus physicists may speculate about multidimensional spaces and parallel universes, and from this, hypothesize theories.

Experimental physics informs, and is informed by, engineering and technology. Experimental physicists involved in basic research design and perform experiments with equipment such as particle accelerators and lasers, whereas those involved in applied research often work in industry, developing technologies such as magnetic resonance imaging (MRI) and transistors. Feynman has noted that experimentalists may seek areas which are not well explored by theorists.

Research fields

Contemporary research in physics can be broadly divided into condensed matter physics; atomic, molecular, and optical physics; particle physics; and astrophysics. Since the twentieth century, the individual fields of physics have become increasingly specialized, and today most physicists work in a single field for their entire careers.
"Universalists" such as Albert Einstein (1879–1955) and Lev Landau (1908–1968), who worked in multiple fields of physics, are now very rare.[20]

Condensed matter

Condensed matter physics is by far the largest field of contemporary physics. Much progress has also been made in theoretical condensed matter physics. By one estimate, one third of all American physicists identify themselves as condensed matter physicists. Historically, condensed matter physics grew out of solid-state physics, which is now considered one of its main subfields. The term condensed matter physics was apparently coined by Philip Anderson when he renamed his research group—previously solid-state theory—in 1967. In 1978, the Division of Solid State Physics of the American Physical Society was renamed as the Division of Condensed Matter Physics.[21] Condensed matter physics has a large overlap with chemistry, materials science, nanotechnology and engineering.

Atomic, molecular, and optical

[Figure: A military scientist operates a laser on an optical table.]

Atomic, molecular, and optical physics (AMO) is the study of matter-matter and light-matter interactions on the scale of single atoms or structures containing a few atoms. The three areas are grouped together because of their interrelationships, the similarity of methods used, and the commonality of the energy scales that are relevant. All three areas include both classical and quantum treatments; they can treat their subject from a microscopic view (in contrast to a macroscopic view). Atomic physics studies the electron hull of atoms.
Current research focuses on activities in quantum control, cooling and trapping of atoms and ions, low-temperature collision dynamics, the collective behavior of atoms in weakly interacting gases (Bose-Einstein condensates and dilute Fermi degenerate systems), precision measurements of fundamental constants, and the effects of electron correlation on structure and dynamics. Atomic physics is influenced by the nucleus (see, e.g., hyperfine splitting), but intra-nuclear phenomena such as fission and fusion are considered part of high-energy physics. Molecular physics focuses on multi-atomic structures and their internal and external interactions with matter and light. Optical physics is distinct from optics in that it tends to focus not on the control of classical light fields by macroscopic objects, but on the fundamental properties of optical fields and their interactions with matter in the microscopic realm.

High energy/Particle Physics

Main article: Particle physics

[Figure: Installation of a 1270-ton component of the CMS detector for the Large Hadron Collider, which physicists hope will detect the Higgs boson of the Standard Model.]

The current state of the classification of elementary particles is the Standard Model. It describes the strong, weak, and electromagnetic fundamental forces, using mediating gauge bosons. The species of gauge bosons are the gluons, the W− and W+ and Z bosons, and the photon, respectively. The model also contains 24 fundamental particles (12 particle/anti-particle pairs), which are the constituents of matter. Finally, it predicts the existence of a type of boson known as the Higgs boson, which has yet to be discovered.

Astrophysics

Main articles: Astrophysics and Physical cosmology

Astrophysics developed from the ancient science of astronomy. Astronomers of early civilizations performed methodical observations of the night sky, and astronomical artifacts have been found from much earlier periods.
After centuries of developments by Babylonian and Greek astronomers, western astronomy lay dormant for fourteen centuries until Nicolaus Copernicus modified the Ptolemaic system by placing the sun at the center of the universe. Tycho Brahe's detailed observations led to Kepler's laws of planetary motion, and Galileo's telescope helped the discipline develop into a modern science. Isaac Newton's theory of universal gravitation provided a physical, dynamic basis for Kepler's laws. By the early 19th century, the science of celestial mechanics had reached a highly developed state at the hands of Leonhard Euler, J. L. Lagrange, P. S. Laplace, and others. Powerful new mathematical techniques allowed solution of most of the remaining problems in classical gravitational theory as applied to the solar system. At the end of the 19th century, the discovery of spectral lines in sunlight proved that the chemical elements found in the Sun were also found on Earth. Interest shifted from determining the positions and distances of stars to studying their physical composition (see stellar structure and stellar evolution). Because the application of physics to astronomy became increasingly important throughout the 20th century, the distinction between astronomy and astrophysics has faded. Physical cosmology is the study of the formation and evolution of the universe on its largest scales. Albert Einstein’s theory of relativity plays a central role in all modern cosmological theories. In the early 20th century, Hubble's discovery that the universe was expanding, as shown by the Hubble diagram, prompted rival explanations known as the steady state universe and the Big Bang. The Big Bang was confirmed by the success of Big Bang nucleosynthesis and the discovery of the cosmic microwave background in 1964. The Big Bang model rests on two theoretical pillars: Albert Einstein's general relativity and the cosmological principle.
Cosmologists have recently established a precise model of the evolution of the universe, which includes cosmic inflation, dark energy and dark matter. The discovery by Karl Jansky in 1931 that radio signals were emitted by celestial bodies initiated the science of radio astronomy. Most recently, the frontiers of astronomy have been expanded by space exploration. Perturbations and interference from the earth’s atmosphere make space-based observations necessary for infrared, ultraviolet, gamma-ray, and X-ray astronomy. The Hubble Space Telescope, launched in 1990, has made possible visual observations of a quality far exceeding those of earthbound instruments; earth-bound observatories using telescopes with adaptive optics will now be able to compensate for the turbulence of Earth's atmosphere.

Applied physics

Main article: Applied Physics

Applied physics is a general term for physics which is intended for a particular use. Applied physics is distinguished from pure physics by a subtle combination of factors such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work.[22] It usually differs from engineering in that an applied physicist may not be designing something in particular, but rather is using physics or conducting physics research with the aim of developing new technologies or solving a problem. The approach is similar to that of applied mathematics. Applied physicists can also be interested in the use of physics for scientific research. For instance, people working on accelerator physics might seek to build better particle detectors for research in theoretical physics.

Physics is used heavily in engineering. For example, statics, a subfield of mechanics, is used in the building of bridges or other structures, while acoustics is used to design better concert halls. An understanding of physics is important to the design of realistic flight simulators, video games, and movies.

Notes

1. ^ R. P.
Feynman, R. B. Leighton, M. Sands (1963), The Feynman Lectures on Physics, ISBN 0-201-02116-1 Hard-cover. p.1-1. Feynman begins with the atomic hypothesis, as his most compact statement of all scientific knowledge: "If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generations ..., what statement would contain the most information in the fewest words? I believe it is ... that all things are made up of atoms -- little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another. ..." vol. I p. I-2
2. ^ James Clerk Maxwell (1876), Matter and Motion. Notes and appendices by Joseph Larmor. "Physical science is that department of knowledge which relates to the order of nature, or, in other words, to the regular succession of events". p.1
3. ^ "Give me matter and motion, and I will construct the universe." --Rene Descartes (1596-1650)
4. ^ [1]
5. ^ E.F. Taylor, J.A. Wheeler (2000), Exploring Black Holes: Introduction to General Relativity, ISBN 0-201-38423-X Hard-cover. Back cover: "Spacetime tells matter how to move; mass tells spacetime how to curve."
6. ^ H.D. Young & R.A. Freedman, University Physics with Modern Physics: 11th Edition: International Edition (2004), Addison Wesley. Chapter 1, section 1.1, page 2 has this to say: "Physics is an experimental science. Physicists observe the phenomena of nature and try to find patterns and principles that relate these phenomena. These patterns are called physical theories or, when they are very well established and of broad use, physical laws or principles."
7. ^ Steve Holzner, Physics for Dummies (2006), Wiley. Chapter 1, page 7 says: "Physics is the study of your world and the world and universe around you." See Amazon Online Reader: Physics For Dummies (For Dummies (Math & Science)), last viewed 24 Nov 2006.
8.
^ Evidence exists that the earliest civilizations dating back to beyond 3000 BC, such as the Sumerians, Ancient Egyptians, and the Indus Valley Civilization, all had a predictive knowledge and a very basic understanding of the motions of the Sun, Moon, and stars.
9. ^ Francis Bacon (1620), Novum Organum was critical in the development of scientific method.
10. ^ Taylor, Edwin F. & John Archibald Wheeler (1966), Spacetime Physics, San Francisco: W.H. Freeman and Company, ISBN 0-7167-0336-X. See, for example, The Relativistic Rocket, Problem #58, page 141, and its worked answer.
11. ^ Einstein, Albert (November 25, 1915). "Die Feldgleichungen der Gravitation". Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin: 844-847. Retrieved on 2006-09-12.
12. ^ Einstein, Albert (1916). "The Foundation of the General Theory of Relativity" (PDF). Annalen der Physik. Retrieved on 2006-09-03.
13. ^ Perrot, Pierre (1998). A to Z of Thermodynamics. Oxford University Press. ISBN 0-19-856552-6.
15. ^ Van Ness, H.C. (1969). Understanding Thermodynamics. Dover Publications, Inc. ISBN 0-486-63277-6.
16. ^ Dugdale, J.S. (1998). Entropy and its Physical Meaning. Taylor and Francis. ISBN 0-7484-0569-0.
17. ^ Max Planck (1925), A Survey of Physical Theory derives his law of blackbody radiation in the notes on pp. 115-116, ISBN 0-486-67867-9
18. ^ Feynman Lectures on Physics, vol I p. 41-6, ISBN 0-201-02010-6
19. ^ See, for example, the influence of Kant and Ritter on Oersted.
22. ^ Stanford Applied Physics Department Description
Is there anything in the physics that enforces the wave function to be $C^2$? Are weak solutions to the Schroedinger equation physical? I am reading the beginning chapters of Griffiths and he doesn't mention anything.

Related: physics.stackexchange.com/q/1067/2451 –  Qmechanic Jan 17 '12 at 23:57
Thanks, but I don't think a good answer was given there. –  user19192 Jan 18 '12 at 0:05

3 Answers

The time-independent Schroedinger equation for the position-space wavefunction has the form $$\left(\frac{-\hbar^2}{2m}\nabla^2 +(V-E) \right)\Psi=0$$ where $E$ is the energy of that particular eigenstate, and $V$ in general depends on the position. All physical wavefunctions must be some superposition of states that satisfy this equation. At least in nonrelativistic QM, the wavefunction is not allowed to have infinite energy. If the second derivative of the wavefunction does not exist or is infinite, it implies either that $V$ has some property that "cancels out" the discontinuity (as in the infinite square well), or that the wavefunction is continuous and differentiable everywhere. Generally, $\Psi$ must always be continuous, and any spatial derivative of $\Psi$ must exist unless $V$ is infinite at that point.

Some of this was discussed elsewhere; see « significance of unbounded operators », http://physics.stackexchange.com/a/19569/6432 . It is not true that the wave function has to be continuous, it just has to be measurable (i.e., a limit of step functions almost everywhere). Naturally you might wonder what sense Schroedinger's equation makes if you apply it to a step function... but the answer is easier than worrying about distributional weak solutions.
The point is that you can solve the time-dependent Schroedinger equation with the exponential $$e^{itH},$$ which is a family of unitary operators, and which is better behaved than the $H$ you have to use in Schroedinger's equation. The $H$ you have to use, for example $$-{\partial ^2\over\partial x^2} + \mathrm{other\ stuff}, $$ is unbounded. And non-differentiable functions are not in its domain. But plugging it into the power series for the exponential converges in norm anyway, and so the resulting operator, being bounded and even unitary on a dense domain of the Hilbert space, can be extended painlessly to the entire space, even step functions. So it makes more sense to say that the solution to Schroedinger's equation with a given initial condition $\psi_o$ is $$\psi_t (x) = e^{itH}\cdot \psi_o (x)$$ and there is no need to bring in distributional weak solutions. These considerations are called the Stone--von Neumann theorem. But such functions are not very important and indeed it is possible to do all of Quantum Mechanics with smooth functions, especially if you take the attitude that, for example, a square well potential would also be unphysical and is really just a simplified approximation of a physical potential which smoothed off those square corners but had a formula that was unmanageable.... See Anthony Sudbery, Quantum Mechanics and the Particles of Nature, which, since it is written by a mathematician, is careful about unimportant issues like this. That family of operators I wrote down is called the time-evolution operators, and they are an example of a one-parameter unitary group, the parameter being time. It is easy to see that if $\psi_o$, the initial condition, the state of the quantum system at time $t=0$, is nice and smooth, then all the future states will be nice and smooth too.
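The unitarity being appealed to here can be checked directly in a toy two-level system (a hypothetical finite-dimensional stand-in with ħ = 1, not the unbounded operator discussed in the answer). For any Hamiltonian H with H² = I, summing the power series gives exp(itH) = cos(t)·I + i·sin(t)·H:

```python
import math

def evolve(t: float):
    """U = exp(i t H) for H = sigma_x = [[0,1],[1,0]], via the closed
    form cos(t) I + i sin(t) H (valid because sigma_x^2 = I)."""
    c, s = math.cos(t), 1j * math.sin(t)
    return [[c, s], [s, c]]

def conj_transpose(m):
    return [[m[j][i].conjugate() for j in range(2)] for i in range(2)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

U = evolve(0.7)
UdU = matmul(conj_transpose(U), U)  # U is unitary, so this is the identity
print([[round(abs(x), 12) for x in row] for row in UdU])

# Unitarity means the norm of any state is preserved under time evolution,
# which is why the evolution extends to all of the Hilbert space.
psi = [0.6, 0.8j]  # a normalized state
psi_t = [sum(U[i][j] * psi[j] for j in range(2)) for i in range(2)]
print(round(sum(abs(a) ** 2 for a in psi_t), 12))
```

This is only an analogy: in the finite-dimensional case H is bounded, so the subtleties about domains that the answer discusses do not arise, but the norm-preservation it illustrates is the same property.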
Furthermore, all the usual quantum observables have eigenstates which are nice and smooth, so if you perform a future measurement, you will get a function which is nice and smooth and its future time evolution will remain that way, until the next measurement, etc. until Doomsday. That said, for all practical purposes you may assume all wave functions are smooth and that the only reason you study discontinuous ones is as convenient approximations. The comment one sometimes hears is that a wave function that was not in the domain of the Hamiltonian would « have infinite energy » but this is nonsense. In Quantum Mechanics, you are not allowed to talk about a quantum system as having a definite value of an observable unless it is in an eigenstate of that observable. What you can ask is, what would be the expectation of that observable. If the wave function $\psi$ is discontinuous and not in the domain of the Hamiltonian, it cannot be an eigenstate, but if its energy is measured, the answer will always be finite. Yet, the expectation of its energy does not exist, or you could say, the expectation « is infinite ». Not the energy, its expectation. There is nothing very unphysical about this because expectation itself is not very directly physical: you cannot measure the expectation unless you make infinitely many measurements, and your estimated answer, even for this discontinuous function, will always be a finite expectation. It's just that those estimates are way inaccurate, the expectation really is infinite (like the Cauchy distribution in statistics). But even for such a « bad » wavefunction, all the axioms of Quantum Mechanics apply: the probability that the energy, if measured, will be 7 erg, is calculated the usual way. But these bad wave functions never arise in elementary systems or exercises so most people think they are « unphysical ». And, as I said, if the initial condition is a « good » wave function, the system will never evolve outside of that. 
This, I think, is connected with the fact that in QM, all systems have a finite number of degrees of freedom: this would no longer be true for quantum systems with infinitely many degrees of freedom such as are studied in Statistical Mechanics.

Right, there's nothing wrong about step functions, delta-functions (the derivatives of the former), and others, and that's why physicists freely work with them and never mention artificial mathematical constraints. Still, some discontinuities may make the kinetic energy infinite, so they don't exist in the finite-energy spectrum. I would add that the most natural space of functions to consider is $L^2$, all square-integrable functions. They may be Fourier-transformed or converted to other (discrete...) bases. A subset also has a finite (expectation value of) energy. –  Luboš Motl Jan 18 '12 at 7:22

Here we want to show that there is an easy mathematical bootstrap argument why solutions to the time-independent 1D Schrödinger equation $$-\frac{\hbar^2}{2m} \psi^{\prime\prime}(x) + V(x) \psi(x) ~=~ E \psi(x) \qquad\qquad (1)$$ tend to be rather nice. First rewrite eq. (1) in integral form $$ \psi(x)~=~ \frac{2m}{\hbar^2} \int^{x}\mathrm{d}y \int^{y}\mathrm{d}z\ (V(z)-E)\psi(z) .\qquad\qquad (2)$$ There are various cases.

1. Case $V \in {\cal L}^2_{\rm loc}(\mathbb{R})$ is a locally square integrable function. Assume the wavefunction $\psi \in {\cal L}^2_{\rm loc}(\mathbb{R})$ as well. Then the product $(V-E)\psi\in {\cal L}^1_{\rm loc}(\mathbb{R})$ due to the Cauchy–Schwarz inequality. Then the integral $y\mapsto \int^{y}\mathrm{d}z\ (V(z)-E)\psi(z)$ is continuous, and hence the wavefunction $\psi$ on the lhs of eq. (2) is smooth, $\psi\in C^{1}(\mathbb{R}).$

2. Case $V \in C^{p}(\mathbb{R})$ for a non-negative integer $p\in\mathbb{N}_0$.
A similar bootstrap argument shows that $\psi\in C^{p+2}(\mathbb{R}).$

The above two cases do not cover a couple of often-used mathematically idealized potentials $V(x)$, e.g.,

1. the infinite wall $V(x)=\infty$ in some region. (The wavefunction must vanish, $\psi(x)=0$, in this region.)

2. or a Dirac delta distribution $V(x)=V_0\delta(x)$.
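For the delta-function case just mentioned, the effect on the wavefunction can be checked numerically. Taking the standard attractive well V(x) = -V0 δ(x) in units 2m = ħ = 1 (a common textbook convention, not something fixed by the answer above), the bound state ψ(x) = exp(-κ|x|) with κ = V0/2 is continuous but has a kink at the origin: integrating eq. (1) across x = 0 gives the derivative jump ψ'(0+) - ψ'(0-) = -V0 ψ(0). The sketch below verifies this with one-sided finite differences:

```python
import math

V0 = 3.0           # illustrative strength of the attractive delta well
kappa = V0 / 2.0   # bound-state decay rate (units 2m = hbar = 1)

def psi(x: float) -> float:
    """Bound state of V(x) = -V0*delta(x): continuous, kinked at x = 0."""
    return math.exp(-kappa * abs(x))

h = 1e-7
# One-sided derivatives on either side of the kink:
d_right = (psi(h) - psi(0.0)) / h
d_left = (psi(0.0) - psi(-h)) / h
jump = d_right - d_left

print(f"psi'(0+)-psi'(0-) = {jump:.6f}")    # close to -2*kappa = -V0
print(f"-V0 * psi(0)      = {-V0 * psi(0.0):.6f}")
# Away from the origin psi is perfectly smooth, consistent with the
# bootstrap argument for regions where V is an ordinary function.
```

So the wavefunction stays in $C^0$ but drops out of $C^1$ exactly at the support of the delta distribution, matching the answer's point that these idealized potentials fall outside the two smooth cases.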
Erwin Schrödinger From The Infosphere, the Futurama Wiki Tertiary character Erwin Schrödinger Facing URL after a car crash (6ACV16). Date of birth: 12 August, 1887 Planet of origin: Earth, Europe, Austria, Vienna First appearance: "Law and Oracle" (6ACV16) Voiced by: Maurice LaMarche Wikipedia has information unrelated to Futurama Erwin Schrödinger is a physicist considered one of the fathers of quantum mechanics. Though widely believed to have been born in August 1887 and to have died in January 1961, Schrödinger was seen in New New York in July 3011 (6ACV16), so it is possible that he never actually died, but was rather frozen at (for example) Applied Cryogenics. Schrödinger wears his hair slicked back, a pair of glasses, a black bow tie, a brown jacket, a white shirt, brown pants, and red shoes; speaks with a German accent; and has a malevolent-looking expression. His present-day likeness is similar to photographs of him from the 1940s. He is described as being "a major violator of the laws of Physics" by Chief O'Mannahan, who claims that "guys like [him] really bust [her] uterus". 19th and 20th centuries According to mainstream belief, Erwin Rudolf Josef Alexander Schrödinger was born on 12 August, 1887, in Vienna and died of tuberculosis on 4 January, 1961, also in Vienna. He received the Nobel Prize in Physics in 1933 for the Schrödinger equation and proposed the thought experiment Schrödinger's cat in 1935, among many other accomplishments. 31st century In July 3011, Schrödinger broke the speed limit, possibly in an attempt to secure his box, attracting the attention of two NNYPD officers. After detecting that Schrödinger's car was going at fifteen miles per hour over the speed of light, policemen Fry and URL engaged in pursuit of the physicist, following him to Circuit City on their motorcycles. Schrödinger did not stop the vehicle and a Tron-like race took place.
Although he managed to evade them for a while, Schrödinger was tricked into entering the Fresnel Circle. With the city's lights, the Circle created a rainbow of duplicates of Schrödinger's car and of Schrödinger himself, which ultimately caused the car to crash. Upon crawling out of his car, Schrödinger was approached by the policemen, who learned his identity by examining his DNA and career chip. Schrödinger was then asked what was in the box that he had in the car and responded that it was "a cat, some poison, and a cesium atom", going on to say that the cat was half-dead, half-alive. Doubting his words, Fry opened the box and was attacked by the cat, the status of which was therefore confirmed. URL found the poison immediately afterwards. Fry and URL later took Schrödinger to the NNYPD and Chief O'Mannahan rewarded them with a promotion to the Future Crimes Division. Image gallery Additional Info • Prior to appearing in "Law and Oracle", Schrödinger's cat was parodied in "Mars University" as "Witten's Dog", and he was also referenced in "A Clone of My Own", where a club called Schrödinger's Kit-Kat Club is seen. The name of the club also references his thought experiment. • URL pronounced his first name incorrectly. It is actually pronounced "Er-VIN", and not "Er-WIN". Fry: DNA and career chip, please. URL: Answer him, fool. [Fry enters the car.] Fry: Says you. URL: There's also a lotta drugs in there. [Chief O'Mannahan's office. Chief O'Mannahan is shaving above a drawing of Schrödinger, whom Fry and URL have brought to the NNYPD.] Chief O'Mannahan: You boys did good. Nailed a major violator of the laws of Physics. URL: He's goin' down. [URL lifts up Schrödinger's cat.] Cat's gonna testify. [Chief O'Mannahan lifts up the drawing of Schrödinger, revealing it to read WANTED.] Chief O'Mannahan: Guys like this really bust my uterus. You're both getting a promotion! Ever heard of the Future Crimes Division?
From New World Encyclopedia

Name, Symbol, Number: hydrogen, H, 1
Chemical series: nonmetals
Group, Period, Block: 1, 1, s
Appearance: colorless
Atomic mass: 1.00794(7) g/mol
Electron configuration: 1s1
Electrons per shell: 1
Physical properties — Phase: gas
Density (0 °C, 101.325 kPa): 0.08988 g/L
Melting point: 14.01 K (−259.14 °C, −434.45 °F)
Boiling point: 20.28 K (−252.87 °C, −423.17 °F)
Triple point: 13.8033 K, 7.042 kPa
Critical point: 32.97 K, 1.293 MPa
Heat of fusion (H2): 0.117 kJ/mol
Heat of vaporization (H2): 0.904 kJ/mol
Heat capacity (25 °C) (H2): 28.836 J/(mol·K)
Atomic properties — Crystal structure: hexagonal
Oxidation states: 1, −1 (amphoteric oxide)
Electronegativity: 2.20 (Pauling scale)
Ionization energies: 1st: 1312.0 kJ/mol
Atomic radius: 25 pm
Atomic radius (calc.): 53 pm (Bohr radius)
Covalent radius: 37 pm
Van der Waals radius: 120 pm
Thermal conductivity (300 K): 180.5 mW/(m·K)
CAS registry number: 1333-74-0 (H2)

Notable isotopes (Main article: Isotopes of hydrogen):
1H — 99.985% — stable with 0 neutrons
2H — 0.0115% — stable with 1 neutron
3H — trace — half-life 12.32 years, β decay (0.019 MeV) to 3He

Hydrogen (chemical symbol H, atomic number 1) is the lightest chemical element and the most abundant of all elements, constituting roughly 75 percent of the elemental mass of the universe.[1] Stars in the main sequence are mainly composed of hydrogen in its plasma state. In the Earth's natural environment, free (uncombined) hydrogen is relatively rare. At standard temperature and pressure, it takes the form of a colorless, odorless, tasteless, highly flammable gas made up of diatomic molecules (H2). On the other hand, the element is widely distributed in combination with other elements, and many of its compounds are vital for living systems. Its most familiar compound is water (H2O).
Elemental hydrogen is industrially produced from hydrocarbons such as methane, after which most elemental hydrogen is used "captively" (meaning locally, at the production site). The largest markets are about equally divided between fossil fuel upgrading (such as hydrocracking) and ammonia production (mostly for the fertilizer market). The most common naturally occurring isotope of hydrogen, known as protium, has a single proton and no neutrons. In ionic compounds, it can take on either a positive charge (becoming a cation, H+, which is a bare proton) or a negative charge (becoming an anion, H−, called a hydride). It plays a particularly important role in acid-base chemistry, in which many reactions involve the exchange of protons between soluble molecules. Because the hydrogen atom is the only neutral atom for which the Schrödinger equation can be solved analytically, the study of its energetics and bonding has played a key role in the development of quantum mechanics. The term hydrogen (Latin: 'hydrogenium') can be traced to a combination of the ancient Greek words hydor, meaning "water," and genes, meaning "forming." This refers to the observation that when hydrogen burns, it produces water. Natural occurrence Hydrogen is the most abundant element in the universe, making up 75 percent of normal matter by mass and over 90 percent by number of atoms.[2] This element is found in great abundance in stars and gas giant planets. Molecular clouds of H2 are associated with star formation. Hydrogen plays a vital role in powering stars through proton-proton nuclear fusion. Under ordinary conditions on Earth, elemental hydrogen exists as the diatomic gas, H2 (for data, see the table above). However, hydrogen gas is very rare in the Earth's atmosphere (1 part per million by volume) because of its light weight, which enables it to escape Earth's gravity more easily than heavier gases.
Although H atoms and H2 molecules are abundant in interstellar space, they are difficult to generate, concentrate, and purify on Earth. Still, hydrogen is the third most abundant element on the Earth's surface.[3] Most of the Earth's hydrogen is in the form of chemical compounds such as hydrocarbons and water.[4] Hydrogen gas is produced by some bacteria and algae and is a natural component of flatus. Methane is a hydrogen source of increasing importance. Discovery of H2 Role in history of quantum theory The hydrogen atom Electron energy levels Depiction of a hydrogen atom showing the diameter as about twice the Bohr model radius (image not to scale) • 1H is the most common hydrogen isotope, with an abundance of more than 99.98 percent. Because the nucleus of this isotope consists of only a single proton, it is given the descriptive but rarely used formal name protium. • 2H, the other stable hydrogen isotope, is known as deuterium and contains one proton and one neutron in its nucleus. Deuterium comprises 0.0026–0.0184 percent (by mole-fraction or atom-fraction) of hydrogen samples on Earth, with the lower number tending to be found in samples of hydrogen gas and the higher enrichments (0.015 percent or 150 parts per million) typical of ocean water. Deuterium is not radioactive, and does not represent a significant toxicity hazard. Water enriched in molecules that include deuterium instead of normal hydrogen is called heavy water. Deuterium and its compounds are used as a non-radioactive label in chemical experiments and in solvents for 1H-NMR spectroscopy. Heavy water is used as a neutron moderator and coolant for nuclear reactors. Deuterium is also a potential fuel for commercial nuclear fusion. Hydrogen is the only element that has different names for its isotopes in common use today (during the early study of radioactivity, various heavy radioactive isotopes were given names, but such names are no longer used).
The symbols D and T (instead of 2H and 3H) are sometimes used for deuterium and tritium, but the corresponding symbol P is already in use for phosphorus and thus is not available for protium. IUPAC states that while this use is common, it is not preferred. Elemental molecular forms First tracks observed in liquid hydrogen bubble chamber at the Bevatron There are two different types of diatomic hydrogen molecules that differ by the relative spin of their nuclei.[9] In the orthohydrogen form, the spins of the two protons are parallel and form a triplet state; in the parahydrogen form the spins are antiparallel and form a singlet. At standard temperature and pressure, hydrogen gas contains about 25 percent of the para form and 75 percent of the ortho form, also known as the "normal form."[10] The equilibrium ratio of orthohydrogen to parahydrogen depends on temperature, but since the ortho form is an excited state and has a higher energy than the para form, it is unstable and cannot be purified. At very low temperatures, the equilibrium state is composed almost exclusively of the para form. The physical properties of pure parahydrogen differ slightly from those of the normal form.[11] The ortho/para distinction also occurs in other hydrogen-containing molecules or functional groups, such as water and methylene. Hydrogen is the lightest element in the periodic table, with an atomic mass of 1.00794 g/mol. For lack of a better place, it is generally shown at the top of group 1 (former group 1A). It is, however, a nonmetal, whereas the other members of group 1 are alkali metals. Hydrogen can combust rapidly in air, as it did in the Hindenburg airship disaster of May 6, 1937. Hydrogen gas is highly flammable and will burn at concentrations as low as four percent H2 in air. The combustion reaction may be written as follows: 2 H2(g) + O2(g) → 2 H2O(l) The reaction generates a large amount of heat. The enthalpy of combustion is −286 kJ/mol.
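As a quick sanity check on that figure, a small sketch (my own, not from the article) converts the quoted molar enthalpy into heat released per gram of hydrogen burned, taking M(H2) ≈ 2.016 g/mol:

```python
# Heat released by burning hydrogen, using the enthalpy of combustion
# quoted above (magnitude 286 kJ per mole of H2) and M(H2) ~ 2.016 g/mol.
DELTA_H_KJ_PER_MOL = 286.0   # |enthalpy of combustion| from the text
M_H2_G_PER_MOL = 2.016       # molar mass of H2

def combustion_heat_kj(mass_g: float) -> float:
    """Heat released (kJ) when mass_g grams of H2 burn completely
    via 2 H2 + O2 -> 2 H2O."""
    moles = mass_g / M_H2_G_PER_MOL
    return moles * DELTA_H_KJ_PER_MOL

heat = combustion_heat_kj(1.0)  # heat from one gram of H2
```

This works out to roughly 142 kJ per gram, which is why hydrogen's energy density by mass is so high compared with hydrocarbon fuels.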
When mixed with oxygen across a wide range of proportions, hydrogen explodes upon ignition. Pure hydrogen-oxygen flames are nearly invisible to the naked eye, as illustrated by the faintness of flame from the main space shuttle engines (as opposed to the easily visible flames from the shuttle boosters). Thus it is difficult to visually detect if a hydrogen leak is burning. The Hindenburg airship flames were hydrogen flames colored with material from the covering skin of the zeppelin, which contained carbon and pyrophoric aluminum powder, as well as other combustible materials.[18] Regardless of the cause of this fire, it was clearly primarily a hydrogen fire, since the skin of the airship alone would have taken many hours to burn.[19] Another characteristic of hydrogen fires is that the flames tend to ascend rapidly with the gas in air, as illustrated by the Hindenburg flames, causing less damage than hydrocarbon fires. For example, two-thirds of the Hindenburg passengers survived the hydrogen fire, and many of the deaths that occurred were from falling or from gasoline burns.[20] Reaction with halogens Covalent and organic compounds Hydrogen forms a vast array of compounds with carbon. Because of their general association with living things, these compounds came to be called organic compounds; the study of their properties is known as organic chemistry and their study in the context of living organisms is known as biochemistry. By some definitions, "organic" compounds are only required to contain carbon, but most of them also contain hydrogen, and the carbon-hydrogen bond is responsible for many of their chemical characteristics. Compounds of hydrogen are often called hydrides, a term that is used fairly loosely. To chemists, the term "hydride" usually implies that the H atom has acquired a negative or anionic character, denoted H−. The existence of the hydride anion, suggested by G. N.
Lewis in 1916 for group I and II salt-like hydrides, was demonstrated by Moers in 1920 with the electrolysis of molten lithium hydride (LiH), which produced a stoichiometric quantity of hydrogen at the anode.[21] For hydrides other than group I and II metals, the term is quite misleading, considering the low electronegativity of hydrogen. An exception in group II hydrides is BeH2, which is polymeric. In lithium aluminum hydride, the AlH4− anion carries hydridic centers firmly attached to the Al(III). Although hydrides can be formed with almost all main-group elements, the number and combination of possible compounds varies widely; for example, there are over one hundred binary borane hydrides known, but only one binary aluminum hydride.[22] Binary indium hydride has not yet been identified, although larger complexes exist.[23] "Protons" and acids H2 is produced in chemistry and biology laboratories, often as a byproduct of other reactions; in industry for the hydrogenation of unsaturated substrates; and in nature as a means of expelling reducing equivalents in biochemical reactions. Laboratory syntheses In the laboratory, H2 is often prepared by the reaction of dilute acids on metals such as zinc: Zn + 2 H+ → Zn2+ + H2 Aluminum produces H2 upon treatment with an acid or a base; with a base, for example: 2 Al + 6 H2O + 2 OH− → 2 Al(OH)4− + 3 H2 The electrolysis of water is a simple method of producing hydrogen, although the resulting hydrogen necessarily has less energy content than was required to produce it. A low-voltage current is run through the water, and gaseous oxygen forms at the anode while gaseous hydrogen forms at the cathode. Typically the cathode is made from platinum or another inert metal when producing hydrogen for storage. If, however, the gas is to be burnt on site, oxygen is desirable to assist the combustion, and so both electrodes would be made from inert metals (iron, for instance, would oxidize, and thus decrease the amount of oxygen given off). The theoretical maximum efficiency (electricity used vs.
energetic value of hydrogen produced) is between 80 and 94 percent.[26] In 2007 it was discovered that an alloy of aluminum and gallium in pellet form added to water could be used to generate hydrogen.[27] The process also creates alumina, but the expensive gallium, which prevents the formation of an oxide skin on the pellets, can be reused. This potentially has important implications for a hydrogen economy, since hydrogen can be produced on-site and does not need to be transported. Industrial syntheses Hydrogen can be prepared in several different ways, but the economically most important processes involve removal of hydrogen from hydrocarbons. Commercial bulk hydrogen is usually produced by the steam reforming of natural gas.[28] At high temperatures (700–1100 °C; 1,300–2,000 °F), steam (water vapor) reacts with methane to yield carbon monoxide and H2: CH4 + H2O → CO + 3 H2 At these temperatures methane can also decompose to carbon: CH4 → C + 2 H2 Consequently, steam reforming typically employs an excess of H2O. Additional hydrogen can be recovered from the carbon monoxide via the water-gas shift reaction: CO + H2O → CO2 + H2 Other important methods for H2 production include partial oxidation of hydrocarbons: CH4 + 0.5 O2 → CO + 2 H2 and the coal reaction, which can serve as a prelude to the shift reaction above:[28] C + H2O → CO + H2 Hydrogen is sometimes produced and consumed in the same industrial process, without being separated. In the Haber process for the production of ammonia (the world's fifth-most produced industrial compound), hydrogen is generated from natural gas. Biological syntheses H2 is a product of some types of anaerobic metabolism and is produced by several microorganisms, usually via reactions catalyzed by iron- or nickel-containing enzymes called hydrogenases. These enzymes catalyze the reversible redox reaction between H2 and its component two protons and two electrons.
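The same two-electron stoichiometry that hydrogenases exploit (H2 ⇌ 2 H+ + 2 e−) also fixes the yield of the electrolysis described earlier: each mole of H2 requires two moles of electrons. A rough Faraday's-law sketch (my own illustration; the current, time, and current-efficiency figures are arbitrary):

```python
# Faraday's-law estimate of electrolytic H2 yield: each mole of H2
# takes 2 moles of electrons (H2 <-> 2 H+ + 2 e-).
FARADAY_C_PER_MOL = 96485.0  # charge of one mole of electrons, coulombs
M_H2_G_PER_MOL = 2.016       # molar mass of H2

def h2_grams(current_a: float, seconds: float, efficiency: float = 1.0) -> float:
    """Mass of H2 (g) produced at the cathode by a cell passing
    current_a amperes for the given time; efficiency is a hypothetical
    current-efficiency factor (1.0 = every electron makes H2)."""
    charge = current_a * seconds * efficiency      # total charge, coulombs
    moles_h2 = charge / (2.0 * FARADAY_C_PER_MOL)  # 2 F per mole of H2
    return moles_h2 * M_H2_G_PER_MOL

mass = h2_grams(10.0, 3600.0)  # e.g. 10 A for one hour
```

Ten amperes for an hour yields well under a gram of H2, which is one way to see why electrolysis is a minor industrial source next to steam reforming.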
Evolution of hydrogen gas occurs in the transfer of reducing equivalents produced during pyruvate fermentation to water.[29] The triple point temperature of equilibrium hydrogen is a defining fixed point on the International Temperature Scale of 1990 (ITS-90). Hydrogen as an energy carrier See also 1. Hydrogen in the Universe, NASA. Retrieved December 27, 2007. 2. Steve Gagnon, It's Elemental: Hydrogen, Jefferson Lab. Retrieved December 27, 2007. 3. Basic Research Needs for the Hydrogen Economy, Argonne National Laboratory, U.S. Department of Energy, Office of Science Laboratory. Retrieved December 27, 2007. 4. 4.0 4.1 4.2 G. L. Miessler and D. A. Tarr, Inorganic Chemistry, 3rd ed. (Upper Saddle River, NJ: Pearson Prentice Hall, 2004, ISBN 0130354716). 5. Webelements – Hydrogen Historical Information. Retrieved December 27, 2007. 6. R. Berman, A. H. Cooke and R. W. Hill, "Cryogenics," Ann. Rev. Phys. Chem. 7 (1956): 1–20. 7. Y. B. Gurov, D. V. Aleshkin, M. N. Berh, S. V. Lapushkin, et al., Spectroscopy of superheavy hydrogen isotopes in stopped-pion absorption by nuclei, Physics of Atomic Nuclei 68 (3) (2004): 491–497. 8. A. A. Korsheninnikov, et al., Experimental evidence for the existence of 7H and for a specific structure of 8He, Phys. Rev. Lett. 90 (2003): 082501. 9. Hydrogen (H2) Applications and Uses, Universal Industrial Gases, Inc. Retrieved December 27, 2007. 10. V. I. Tikhonov and A. A. Volkov, Separation of water into its ortho and para isomers, Science 296 (5577) (2002): 2363. 11. Ch. 6 – NASA Glenn Research Center Glenn Safety Manual: Hydrogen, Document GRC-MQSA.001, March 2006. Retrieved December 27, 2007. 12. Y. Y. Milenko, R. M. Sibileva and M. A. Strzhemechny, Natural ortho-para conversion rate in liquid and gaseous hydrogen, J. Low Temp. Phys. 107 (1-2) (1997): 77–92. 13. R. E. Svadlenak and A. B. Scott, The conversion of ortho- to parahydrogen on iron oxide-zinc oxide catalysts, J. Am. Chem. Soc. 79 (20) (1957): 5385–5388. 14. H3+ Resource Center.
Universities of Illinois and Chicago. Retrieved December 27, 2007. 15. T. Takeshita, W. E. Wallace and R. S. Craig, Hydrogen solubility in 1:5 compounds between yttrium or thorium and nickel or cobalt, Inorg. Chem. 13 (9) (1974): 2282. 16. R. Kirchheim, T. Mutschele and W. Kieninger, Hydrogen in amorphous and nanocrystalline metals, Mater. Sci. Eng. 99 (1988): 457–462. 17. R. Kirchheim, Hydrogen solubility and diffusivity in defective and amorphous metals, Prog. Mater. Sci. 32 (4) (1988): 262–325. 18. A. Bain and W. D. Van Vorst, The Hindenburg tragedy revisited: the fatal flaw exposed, International Journal of Hydrogen Energy 24 (5) (1999): 399–403. 19. John Dziadecki, Hindenburg Hydrogen Fire. Retrieved December 27, 2007. 20. The Hindenburg Disaster, Swiss Hydrogen Association. Retrieved December 27, 2007. 21. K. Moers, Z. Anorg. Allgem. Chem. 113 (1920): 191. 22. A. J. Downs and C. R. Pulham, The hydrides of aluminium, gallium, indium, and thallium: a re-evaluation, Chem. Soc. Rev. 23 (1994): 175–183. 23. D. E. Hibbs, C. Jones and N. A. Smithies, A remarkably stable indium trihydride complex: Synthesis and characterization of [InH3{P(C6H11)3}], Chem. Commun. (1999): 185–186. 24. M. Okumura, L. I. Yeh, J. D. Myers and Y. T. Lee, Infrared spectra of the solvated hydronium ion: Vibrational predissociation spectroscopy of mass-selected H3O+·(H2O)n·(H2)m (1990). 25. A. Carrington and I. R. McNab, The infrared predissociation spectrum of triatomic hydrogen cation (H3+), Accounts of Chemical Research 22 (1989): 218–222. 26. Bellona Report on Hydrogen. Retrieved December 27, 2007. 27. "New process generates hydrogen from aluminum alloy to run engines, fuel cells," Physorg.com (May 16, 2007). Retrieved December 27, 2007. 28. 28.0 28.1 28.2 D. W. Oxtoby, H. P. Gillis and N. H. Nachtrieb, Principles of Modern Chemistry, 5th ed. (Belmont, CA: Thomson Brooks/Cole, 2002, ISBN 0030353734). 29. R. Cammack, M. Frey and R.
Robson, Hydrogen as a Fuel: Learning from Nature (London: Taylor & Francis, 2001). 30. O. Kruse, J. Rupprecht, K. P. Bader, S. Thomas-Hal, P. M. Schenk, G. Finazzi and B. Hankamer, Improved photobiological H2 production in engineered green algal cells, J. Biol. Chem. 280 (40) (2005): 34170–34177. 31. H.O. Smith and Q. Xu, Hydrogen from Water in a Novel Recombinant Oxygen-Tolerant Cyanobacteria System, United States Department of Energy FY2005 Progress Report, IV.E.6. Retrieved December 27, 2007. 32. Hydrogen, Los Alamos National Laboratory. Retrieved December 27, 2007. 33. Joseph Romm, The Hype About Hydrogen: Fact and Fiction in the Race to Save the Climate (New York: Island Press, 2004, ISBN 1559637048). • General Electric. 1989. Chart of the Nuclides. General Electric Company. Retrieved December 27, 2007. • Ferreira-Aparicio, P., M. J. Benito, and J. L. Sanz. 2005. New Trends in Reforming Technologies: from Hydrogen Industrial Plants to Multifuel Microreformers. Catalysis Reviews 47: 491–588. • Krebs, Robert E. 1998. The History and Use of Our Earth's Chemical Elements: A Reference Guide. Westport, CT: Greenwood Press. ISBN 0313301239 • Newton, David E. 1994. The Chemical Elements. New York: Franklin Watts. ISBN 0531125017 • Rigden, John S. 2002. Hydrogen: The Essential Element. Cambridge, MA: Harvard University Press. ISBN 0531125017 • Romm, Joseph J. 2004. The Hype about Hydrogen, Fact and Fiction in the Race to Save the Climate. Washington, D.C.: Island Press. ISBN 155963703X • Stwertka, Albert. 2002. A Guide to the Elements. New York: Oxford University Press. ISBN 0195150279 External links All links retrieved March 29, 2014.
IJPAP Vol.47(08) [August 2009] : [11] Collection home page

Issue Date | Title | Author(s)
Aug-2009 | Economical and thermal optimization of possible options to control visible plume from wet cooling towers | Tyagi, S K; Park, S R; Tyagi, V V; Anand, S
Aug-2009 | Temperature dependent study of volume and thermal expansivity of solids based on equation of state | Kapoor, Kamal; Dass, Narsingh
Aug-2009 | Effect of grinding on the crystal structure of recently excavated dolomite | Ramasamy, V; Ponnusamy, V; Sabari, S; Anishia, S R; Gomathi, S S
Aug-2009 | Analysis of sound ray theory and FEM for ultrasonic propagation in a finite rod | Chen, Youxing; Wang, Zhaoba; Zheng, Jianli; Zhao, Xia; Li, Yuan
Aug-2009 | Shear viscosity of dense fluid | Srivastava, Rajat; Tewari, Ashutosh; Khanna, K N
Aug-2009 | Vibrational spectra and normal coordinate analysis of diethyl carbamazine | Gunasekaran, S; Anita, B
Aug-2009 | ESR, infrared and optical absorption studies of Cu2+ ion doped in 60B2O3-10TeO2-(30-x)MO-xPbO (M = Zn, Cd) glasses | Upender, G; Kamalaker, V; Vardhani, C P; Mouli, V Chandra
Aug-2009 | Pair correction function for square-well fluids | Tiwari, Ashutosh; Khanna, K N
Aug-2009 | Density, viscosity and speed of sound of binary liquid mixtures of sulpholane with aliphatic amines at T = 308.15 K | Krishna, P Murali; Kumar, B Ranjith; Sathyanarayana, B; Jyothi, K Amara; Satyanarayana, N
Aug-2009 | Computational studies on the structure and vibrational spectra of 2-hydroxy-5-methyl-3-nitropyridine | Singh, Hari Ji; Srivastava, Priyanka
Aug-2009 | Exact solution of relativistic Schrödinger equation for the central complex potential V(r) = iar + (b/r) | Srivastava, V K; Bose, (Late) S K
Does Quantum Physics Make it Easier to Believe in God? Schrödinger equation (1927) "Not in any direct way. That is, it doesn't provide an argument for the existence of God. But it does so indirectly, by providing an argument against the philosophy called materialism (or "physicalism"), which is the main intellectual opponent of belief in God in today's world." (Big Questions Online)
Nobel Prizes and Laureates The Nobel Prize in Physics 1965 Sin-Itiro Tomonaga, Julian Schwinger, Richard P. Feynman Nobel Lecture, December 11, 1965 The Development of the Space-Time View of Quantum Electrodynamics We have a habit in writing articles published in scientific journals to make the work as finished as possible, to cover all the tracks, to not worry about the blind alleys or to describe how you had the wrong idea first, and so on. So there isn't any place to publish, in a dignified manner, what you actually did in order to get to do the work, although, there has been in these days, some interest in this kind of thing. Since winning the prize is a personal thing, I thought I could be excused in this particular situation, if I were to talk personally about my relationship to quantum electrodynamics, rather than to discuss the subject itself in a refined and finished fashion. Furthermore, since there are three people who have won the prize in physics, if they are all going to be talking about quantum electrodynamics itself, one might become bored with the subject. So, what I would like to tell you about today are the sequence of events, really the sequence of ideas, which occurred, and by which I finally came out the other end with an unsolved problem for which I ultimately received a prize. I realize that a truly scientific paper would be of greater value, but such a paper I could publish in regular journals. So, I shall use this Nobel Lecture as an opportunity to do something of less value, but which I cannot do elsewhere. I ask your indulgence in another manner. I shall include details of anecdotes which are of no value either scientifically, nor for understanding the development of ideas. They are included only to make the lecture more entertaining. I worked on this problem about eight years until the final publication in 1947.
The beginning of the thing was at the Massachusetts Institute of Technology, when I was an undergraduate student reading about the known physics, learning slowly about all these things that people were worrying about, and realizing ultimately that the fundamental problem of the day was that the quantum theory of electricity and magnetism was not completely satisfactory. This I gathered from books like those of Heitler and Dirac. I was inspired by the remarks in these books; not by the parts in which everything was proved and demonstrated carefully and calculated, because I couldn't understand those very well. At the young age what I could understand were the remarks about the fact that this doesn't make any sense, and the last sentence of the book of Dirac I can still remember, "It seems that some essentially new physical ideas are here needed." So, I had this as a challenge and an inspiration. I also had a personal feeling, that since they didn't get a satisfactory answer to the problem I wanted to solve, I don't have to pay a lot of attention to what they did do. I did gather from my readings, however, that two things were the source of the difficulties with the quantum electrodynamical theories. The first was an infinite energy of interaction of the electron with itself. And this difficulty existed even in the classical theory. The other difficulty came from some infinities which had to do with the infinite numbers of degrees of freedom in the field. As I understood it at the time (as nearly as I can remember) this was simply the difficulty that if you quantized the harmonic oscillators of the field (say in a box) each oscillator has a ground state energy of (½)ℏω and there is an infinite number of modes in a box of ever-increasing frequency ω, and therefore there is an infinite energy in the box. I now realize that that wasn't a completely correct statement of the central problem; it can be removed simply by changing the zero from which energy is measured.
At any rate, I believed that the difficulty arose somehow from a combination of the electron acting on itself and the infinite number of degrees of freedom of the field. Well, it seemed to me quite evident that the idea that a particle acts on itself, that the electrical force acts on the same particle that generates it, is not a necessary one - it is a sort of a silly one, as a matter of fact. And, so I suggested to myself, that electrons cannot act on themselves, they can only act on other electrons. That means there is no field at all. You see, if all charges contribute to making a single common field, and if that common field acts back on all the charges, then each charge must act back on itself. Well, that was where the mistake was, there was no field. It was just that when you shook one charge, another would shake later. There was a direct interaction between charges, albeit with a delay. The law of force connecting the motion of one charge with another would just involve a delay. Shake this one, that one shakes later. The sun atom shakes; my eye electron shakes eight minutes later, because of a direct interaction across. Now, this has the attractive feature that it solves both problems at once. First, I can say immediately, I don't let the electron act on itself, I just let this act on that, hence, no self-energy! Secondly, there is not an infinite number of degrees of freedom in the field. There is no field at all; or if you insist on thinking in terms of ideas like that of a field, this field is always completely determined by the action of the particles which produce it. You shake this particle, it shakes that one, but if you want to think in a field way, the field, if it's there, would be entirely determined by the matter which generates it, and therefore, the field does not have any independent degrees of freedom and the infinities from the degrees of freedom would then be removed. 
As a matter of fact, when we look out anywhere and see light, we can always "see" some matter as the source of the light. We don't just see light (except recently some radio reception has been found with no apparent material source). You see then that my general plan was to first solve the classical problem, to get rid of the infinite self-energies in the classical theory, and to hope that when I made a quantum theory of it, everything would just be fine. Then I went to graduate school and somewhere along the line I learned what was wrong with the idea that an electron does not act on itself. When you accelerate an electron it radiates energy and you have to do extra work to account for that energy. The extra force against which this work is done is called the force of radiation resistance. The origin of this extra force was identified in those days, following Lorentz, as the action of the electron itself. The first term of this action, of the electron on itself, gave a kind of inertia (not quite relativistically satisfactory). But that inertia-like term was infinite for a point-charge. Yet the next term in the sequence gave an energy loss rate, which for a point-charge agrees exactly with the rate you get by calculating how much energy is radiated. So, the force of radiation resistance, which is absolutely necessary for the conservation of energy would disappear if I said that a charge could not act on itself. So, I learned in the interim when I went to graduate school the glaringly obvious fault of my own theory. But, I was still in love with the original theory, and was still thinking that with it lay the solution to the difficulties of quantum electrodynamics. So, I continued to try on and off to save it somehow. I must have some action develop on a given electron when I accelerate it to account for radiation resistance. But, if I let electrons only act on other electrons the only possible source for this action is another electron in the world. 
So, one day, when I was working for Professor Wheeler and could no longer solve the problem that he had given me, I thought about this again and I calculated the following. Suppose I have two charges - I shake the first charge, which I think of as a source and this makes the second one shake, but the second one shaking produces an effect back on the source. And so, I calculated how much that effect back on the first charge was, hoping it might add up to the force of radiation resistance. It didn't come out right, of course, but I went to Professor Wheeler and told him my ideas. He said, - yes, but the answer you get for the problem with the two charges that you just mentioned will, unfortunately, depend upon the charge and the mass of the second charge and will vary inversely as the square of the distance R between the charges, while the force of radiation resistance depends on none of these things. I thought, surely, he had computed it himself, but now, having become a professor, I know that one can be wise enough to see immediately what some graduate student takes several weeks to develop. He also pointed out something that also bothered me, that if we had a situation with many charges all around the original source at roughly uniform density and if we added the effect of all the surrounding charges the inverse R square would be compensated by the R² in the volume element and we would get a result proportional to the thickness of the layer, which would go to infinity. That is, one would have an infinite total effect back at the source. And, finally he said to me, and you forgot something else, when you accelerate the first charge, the second acts later, and then the reaction back here at the source would be still later. In other words, the action occurs at the wrong time. I suddenly realized what a stupid fellow I am, for what I had described and calculated was just ordinary reflected light, not radiation reaction.
But, as I was stupid, so was Professor Wheeler that much more clever. For he then went on to give a lecture as though he had worked this all out before and was completely prepared, but he had not, he worked it out as he went along. First, he said, let us suppose that the return action by the charges in the absorber reaches the source by advanced waves as well as by the ordinary retarded waves of reflected light; so that the law of interaction acts backward in time, as well as forward in time. I was enough of a physicist at that time not to say, "Oh, no, how could that be?" For today all physicists know from studying Einstein and Bohr, that sometimes an idea which looks completely paradoxical at first, if analyzed to completion in all detail and in experimental situations, may, in fact, not be paradoxical. So, it did not bother me any more than it bothered Professor Wheeler to use advanced waves for the back reaction - a solution of Maxwell's equations, which previously had not been physically used. Professor Wheeler used advanced waves to get the reaction back at the right time and then he suggested this: If there were lots of electrons in the absorber, there would be an index of refraction n, so, the retarded waves coming from the source would have their wave lengths slightly modified in going through the absorber. Now, if we shall assume that the advanced waves come back from the absorber without an index - why? I don't know, let's assume they come back without an index - then, there will be a gradual shifting in phase between the return and the original signal so that we would only have to figure that the contributions act as if they come from only a finite thickness, that of the first wave zone. (More specifically, up to that depth where the phase in the medium is shifted appreciably from what it would be in vacuum, a thickness proportional to 1/(n-1).
) Now, the less the number of electrons in here, the less each contributes, but the thicker will be the layer that effectively contributes, because with less electrons, the index differs less from 1. The higher the charges of these electrons, the more each contributes, but the thinner the effective layer, because the index would be higher. And when we estimated it (calculated without being careful to keep the correct numerical factor), sure enough, it came out that the action back at the source was completely independent of the properties of the charges that were in the surrounding absorber. Further, it was of just the right character to represent radiation resistance, but we were unable to see if it was just exactly the right size. He sent me home with orders to figure out exactly how much advanced and how much retarded wave we need to get the thing to come out numerically right, and after that, figure out what happens to the advanced effects that you would expect if you put a test charge here close to the source. For if all charges generate advanced, as well as retarded effects, why would that test charge not be affected by the advanced waves from the source? I found that you get the right answer if you use half-advanced and half-retarded as the field generated by each charge. That is, one is to use the solution of Maxwell's equation which is symmetrical in time, and the reason we got no advanced effects at a point close to the source in spite of the fact that the source was producing an advanced field is this. Suppose the source is surrounded by a spherical absorbing wall ten light seconds away, and that the test charge is one second to the right of the source. Then the source is as much as eleven seconds away from some parts of the wall and only nine seconds away from other parts. The source acting at time t = 0 induces motions in the wall at time t = +10. Advanced effects from this can act on the test charge as early as eleven seconds earlier, or at t = -1.
This is just at the time that the direct advanced waves from the source should reach the test charge, and it turns out the two effects are exactly equal and opposite and cancel out! At the later time t = +1, effects on the test charge from the source and from the walls are again equal, but this time are of the same sign and add to convert the half-retarded wave of the source to full retarded strength. Thus, it became clear that there was the possibility that if we assume all actions are via half-advanced and half-retarded solutions of Maxwell's equations and assume that all sources are surrounded by material absorbing all the light which is emitted, then we could account for radiation resistance as a direct action of the charges of the absorber acting back by advanced waves on the source. Many months were devoted to checking all these points. I worked to show that everything is independent of the shape of the container, and so on, that the laws are exactly right, and that the advanced effects really cancel in every case. We always tried to increase the efficiency of our demonstrations, and to see with more and more clarity why it works. I won't bore you by going through the details of this. Because of our using advanced waves, we also had many apparent paradoxes, which we gradually reduced one by one, and saw that there was in fact no logical difficulty with the theory. It was perfectly satisfactory. We also found that we could reformulate this thing in another way, and that is by a principle of least action. Since my original plan was to describe everything directly in terms of particle motions, it was my desire to represent this new theory without saying anything about fields. It turned out that we found a form for an action directly involving the motions of the charges only, which upon variation would give the equations of motion of these charges.
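The timing bookkeeping in this argument is worth checking explicitly. Below is a minimal arithmetic sketch of my own (the wall radius, test-charge distance, and units with c = 1 are the illustrative numbers from the example above):

```python
# Geometry of the thought experiment: the source at the origin acts at t = 0,
# a spherical absorbing wall sits 10 light-seconds away, and a test charge
# sits 1 light-second from the source.  Units: c = 1 (light-seconds, seconds).
R_wall = 10.0
r_test = 1.0

# The source's own half-advanced wave reaches the test charge BEFORE it acts:
t_direct_advanced = -r_test                      # t = -1

# The source's retarded wave excites the wall at:
t_wall = R_wall                                  # t = +10

# Wall points lie between 9 and 11 light-seconds from the test charge, so the
# advanced waves they generate arrive back between t = 10 - 11 and t = 10 - 9:
t_earliest_return = t_wall - (R_wall + r_test)   # from the far side,  t = -1
t_latest_return = t_wall - (R_wall - r_test)     # from the near side, t = +1
```

The earliest advanced return from the wall lands at exactly t = -1, the same instant as the direct advanced wave from the source, which is what makes the cancellation possible.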
The expression for this action A is

$$A=\sum_i m_i\int\left(\dot X_\mu^i\,\dot X_\mu^i\right)^{1/2} d\alpha_i \;+\; \frac{1}{2}\sum_{i\neq j} e_i e_j \iint \delta\!\left(I_{ij}^2\right)\, \dot X_\mu^i(\alpha_i)\, \dot X_\mu^j(\alpha_j)\; d\alpha_i\, d\alpha_j \qquad (1)$$

where $X_\mu^i(\alpha_i)$ is the four-vector position of the $i$th particle as a function of some parameter $\alpha_i$, $\dot X_\mu^i = dX_\mu^i/d\alpha_i$, and $I_{ij}^2 = \left[X_\mu^i(\alpha_i)-X_\mu^j(\alpha_j)\right]\left[X_\mu^i(\alpha_i)-X_\mu^j(\alpha_j)\right]$. The first term is the integral of proper time, the ordinary action of relativistic mechanics of free particles of mass $m_i$. (We sum in the usual way on the repeated index $\mu$.) The second term represents the electrical interaction of the charges. It is summed over each pair of charges (the factor ½ is to count each pair once; the term $i=j$ is omitted to avoid self-action). The interaction is a double integral over a delta function of the square of the space-time interval $I^2$ between two points on the paths. Thus, interaction occurs only when this interval vanishes, that is, along light cones. The fact that the interaction is exactly one-half advanced and half-retarded meant that we could write such a principle of least action, whereas interaction via retarded waves alone cannot be written in such a way. So, all of classical electrodynamics was contained in this very simple form. It looked good, and therefore, it was undoubtedly true, at least to the beginner. It automatically gave half-advanced and half-retarded effects and it was without fields. By omitting the term in the sum when $i=j$, I omit self-interaction and no longer have any infinite self-energy. This then was the hoped-for solution to the problem of ridding classical electrodynamics of the infinities. It turns out, of course, that you can reinstate fields if you wish to, but you have to keep track of the field produced by each particle separately. This is because to find the right field to act on a given particle, you must exclude the field that it creates itself. A single universal field to which all contribute will not do. This idea had been suggested earlier by Frenkel and so we called these Frenkel fields. This theory which allowed only particles to act on each other was equivalent to Frenkel's fields using half-advanced and half-retarded solutions.
There were several suggestions for interesting modifications of electrodynamics. We discussed lots of them, but I shall report on only one. It was to replace this delta function in the interaction by another function, say, $f(I_{ij}^2)$, which is not infinitely sharp. Instead of having the action occur only when the interval between the two charges is exactly zero, we would replace the delta function of $I^2$ by a narrow peaked thing. Let's say that $f(Z)$ is large only near $Z = 0$, with a width of order $a^2$. Interactions will now occur roughly when $T^2 - R^2$ is of order $a^2$, where T is the time difference and R is the separation of the charges. This might look like it disagrees with experience, but if $a$ is some small distance, like $10^{-13}$ cm, it says that the time delay T in action is approximately, if R is much larger than $a$, $T = R \pm a^2/2R$. This means that the deviation of time T from the ideal theoretical time R of Maxwell gets smaller and smaller, the further the pieces are apart. Therefore, all theories involved in analyzing generators, motors, etc., in fact, all of the tests of electrodynamics that were available in Maxwell's time, would be adequately satisfied if $a$ were $10^{-13}$ cm. If R is of the order of a centimeter this deviation in T is only one part in $10^{26}$. So, it was possible, also, to change the theory in a simple manner and to still agree with all observations of classical electrodynamics. You have no clue of precisely what function to put in for $f$, but it was an interesting possibility to keep in mind when developing quantum electrodynamics. It also occurred to us that if we did that (replace $\delta$ by $f$) we could not reinstate the term $i=j$ in the sum because this would now represent in a relativistically invariant fashion a finite action of a charge on itself. In fact, it was possible to prove that if we did do such a thing, the main effect of the self-action (for not too rapid accelerations) would be to produce a modification of the mass.
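The smallness of this deviation is easy to verify numerically. A quick sketch of my own, using the illustrative values $a = 10^{-13}$ cm and R = 1 cm from the text, with c = 1:

```python
a = 1e-13   # width scale of f, in cm (illustrative value from the text)
R = 1.0     # separation of the charges, in cm

# For R much larger than a, the delay is T = R +/- a**2 / (2*R),
# so the fractional deviation from Maxwell's ideal delay T = R is:
fractional_deviation = (a**2 / (2 * R)) / R
```

This gives 5 × 10⁻²⁷, comfortably below one part in 10²⁶, so no classical test available in Maxwell's time could have noticed the modification.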
In fact, there need be no mass term $m_i$; all the mechanical mass could be electromagnetic self-action. So, if you would like, we could also have another theory with a still simpler expression for the action A. In expression (1) only the second term is kept, the sum extended over all i and j, and some function $f$ replaces the delta function. Such a simple form could represent all of classical electrodynamics, which aside from gravitation is essentially all of classical physics. Although it may sound confusing, I am describing several different alternative theories at once. The important thing to note is that at this time we had all these in mind as different possibilities. There were several possible solutions of the difficulty of classical electrodynamics, any one of which might serve as a good starting point to the solution of the difficulties of quantum electrodynamics. I would also like to emphasize that by this time I was becoming used to a physical point of view different from the more customary point of view. In the customary view, things are discussed as a function of time in very great detail. For example, you have the field at this moment, a differential equation gives you the field at the next moment and so on; a method, which I shall call the Hamilton method, the time differential method. We have, instead (in (1) say) a thing that describes the character of the path throughout all of space and time. The behavior of nature is determined by saying her whole space-time path has a certain character. For an action like (1) the equations obtained by variation (of $X_\mu^i(\alpha_i)$) are no longer at all easy to get back into Hamiltonian form. If you wish to use as variables only the coordinates of particles, then you can talk about the property of the paths - but the path of one particle at a given time is affected by the path of another at a different time.
If you try to describe, therefore, things differentially, telling what the present conditions of the particles are, and how these present conditions will affect the future you see, it is impossible with particles alone, because something the particle did in the past is going to affect the future. Therefore, you need a lot of bookkeeping variables to keep track of what the particle did in the past. These are called field variables. You will, also, have to tell what the field is at this present moment, if you are to be able to see later what is going to happen. From the overall space-time view of the least action principle, the field disappears as nothing but bookkeeping variables insisted on by the Hamiltonian method. To summarize, when I was done with this, as a physicist I had gained two things. One, I knew many different ways of formulating classical electrodynamics, with many different mathematical forms. I got to know how to express the subject every which way. Second, I had a point of view - the overall space-time point of view - and a disrespect for the Hamiltonian method of describing physics. I would like to interrupt here to make a remark. The fact that electrodynamics can be written in so many ways - the differential equations of Maxwell, various minimum principles with fields, minimum principles without fields, all different kinds of ways, was something I knew, but I have never understood. It always seems odd to me that the fundamental laws of physics, when discovered, can appear in so many different forms that are not apparently identical at first, but, with a little mathematical fiddling you can show the relationship. An example of that is the Schrödinger equation and the Heisenberg formulation of quantum mechanics. I don't know why this is - it remains a mystery, but it was something I learned from experience. There is always another way to say the same thing that doesn't look at all like the way you said it before. 
I don't know what the reason for this is. I think it is somehow a representation of the simplicity of nature. A thing like the inverse square law is just right to be represented by the solution of Poisson's equation, which, therefore, is a very different way to say the same thing that doesn't look at all like the way you said it before. I don't know what it means, that nature chooses these curious forms, but maybe that is a way of defining simplicity. Perhaps a thing is simple if you can describe it fully in several different ways without immediately knowing that you are describing the same thing. I was now convinced that since we had solved the problem of classical electrodynamics (and completely in accordance with my program from M.I.T., only direct interaction between particles, in a way that made fields unnecessary) that everything was definitely going to be all right. I was convinced that all I had to do was make a quantum theory analogous to the classical one and everything would be solved. So, the problem is only to make a quantum theory, which has as its classical analog, this expression (1). Now, there is no unique way to make a quantum theory from classical mechanics, although all the textbooks make believe there is. What they would tell you to do was find the momentum variables and replace them by $(\hbar/i)(\partial/\partial x)$, but I couldn't find a momentum variable, as there wasn't any. The character of quantum mechanics of the day was to write things in the famous Hamiltonian way - in the form of a differential equation, which described how the wave function changes from instant to instant, and in terms of an operator, H. If the classical physics could be reduced to a Hamiltonian form, everything was all right. Now, least action does not imply a Hamiltonian form if the action is a function of anything more than positions and velocities at the same moment.
If the action is of the form of the integral of a function, (usually called the Lagrangian) of the velocities and positions at the same time then you can start with the Lagrangian and then create a Hamiltonian and work out the quantum mechanics, more or less uniquely. But this thing (1) involves the key variables, positions, at two different times and therefore, it was not obvious what to do to make the quantum-mechanical analogue. I tried - I would struggle in various ways. One of them was this; if I had harmonic oscillators interacting with a delay in time, I could work out what the normal modes were and guess that the quantum theory of the normal modes was the same as for simple oscillators and kind of work my way back in terms of the original variables. I succeeded in doing that, but I hoped then to generalize to other than a harmonic oscillator, but I learned to my regret something, which many people have learned. The harmonic oscillator is too simple; very often you can work out what it should do in quantum theory without getting much of a clue as to how to generalize your results to other systems. So that didn't help me very much, but when I was struggling with this problem, I went to a beer party in the Nassau Tavern in Princeton. There was a gentleman, newly arrived from Europe (Herbert Jehle) who came and sat next to me. Europeans are much more serious than we are in America because they think that a good place to discuss intellectual matters is a beer party. So, he sat by me and asked, "what are you doing" and so on, and I said, "I'm drinking beer." Then I realized that he wanted to know what work I was doing and I told him I was struggling with this problem, and I simply turned to him and said, "listen, do you know any way of doing quantum mechanics, starting with action - where the action integral comes into the quantum mechanics?" "No", he said, "but Dirac has a paper in which the Lagrangian, at least, comes into quantum mechanics. 
I will show it to you tomorrow." Next day we went to the Princeton Library, they have little rooms on the side to discuss things, and he showed me this paper. What Dirac said was the following: There is in quantum mechanics a very important quantity which carries the wave function from one time to another, besides the differential equation but equivalent to it, a kind of kernel, which we might call $K(x', x)$, which carries the wave function $\psi(x)$ known at time $t$ to the wave function $\psi(x')$ at time $t+\varepsilon$. Dirac points out that this function $K$ was analogous to the quantity in classical mechanics that you would calculate if you took the exponential of $i\varepsilon$ multiplied by the Lagrangian, $e^{i\varepsilon L/\hbar}$, imagining that these two positions $x, x'$ corresponded to $t$ and $t+\varepsilon$. In other words, Professor Jehle showed me this, I read it, he explained it to me, and I said, "what does he mean, they are analogous; what does that mean, analogous? What is the use of that?" He said, "you Americans! You always want to find a use for everything!" I said, that I thought that Dirac must mean that they were equal. "No", he explained, "he doesn't mean they are equal." "Well", I said, "let's see what happens if we make them equal." So I simply put them equal, taking the simplest example where the Lagrangian is $\frac{1}{2}M\dot{x}^2 - V(x)$, but soon found I had to put a constant of proportionality $A$ in, suitably adjusted. When I substituted $A e^{i\varepsilon L/\hbar}$ for $K$ to get

$$\psi(x', t+\varepsilon) = \int A \exp\!\left[\frac{i\varepsilon}{\hbar}\, L\!\left(\frac{x'-x}{\varepsilon},\, x'\right)\right] \psi(x, t)\, dx \qquad (3)$$

and just calculated things out by Taylor series expansion, out came the Schrödinger equation. So, I turned to Professor Jehle, not really understanding, and said, "well, you see Professor Dirac meant that they were proportional." Professor Jehle's eyes were bugging out - he had taken out a little notebook and was rapidly copying it down from the blackboard, and said, "no, no, this is an important discovery. You Americans are always trying to find out how something can be used. That's a good way to discover things!"
So, I thought I was finding out what Dirac meant, but, as a matter of fact, had made the discovery that what Dirac thought was analogous, was, in fact, equal. I had then, at least, the connection between the Lagrangian and quantum mechanics, but still with wave functions and infinitesimal times. It must have been a day or so later when I was lying in bed thinking about these things, that I imagined what would happen if I wanted to calculate the wave function at a finite interval later. I would put one of these factors $e^{i\varepsilon L/\hbar}$ in here, and that would give me the wave function the next moment, $t+\varepsilon$, and then I could substitute that back into (3) to get another factor of $e^{i\varepsilon L/\hbar}$ and give me the wave function the next moment, $t+2\varepsilon$, and so on and so on. In that way I found myself thinking of a large number of integrals, one after the other in sequence. In the integrand was the product of the exponentials, which, of course, was the exponential of the sum of terms like $\varepsilon L$. Now, $L$ is the Lagrangian and $\varepsilon$ is like the time interval $dt$, so that if you took a sum of such terms, that's exactly like an integral. That's like Riemann's formula for the integral $\int L\,dt$; you just take the value at each point and add them together. We are to take the limit as $\varepsilon \to 0$, of course. Therefore, the connection between the wave function of one instant and the wave function of another instant a finite time later could be obtained by an infinite number of integrals (because $\varepsilon$ goes to zero, of course) of the exponential $e^{iS/\hbar}$, where $S$ is the action expression (2), $S = \int L(\dot{x}, x)\,dt$. At last, I had succeeded in representing quantum mechanics directly in terms of the action $S$. This led later on to the idea of the amplitude for a path; that for each possible way that the particle can go from one point to another in space-time, there's an amplitude. That amplitude is $e$ to the $(i/\hbar)$ times the action for the path. Amplitudes from various paths superpose by addition.
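The short-time kernel just described can be tried numerically. The following is a sketch of my own (not from the lecture): it takes the free-particle case with ħ = M = 1 and V = 0, assumes the standard normalization A = (M/2πiħε)^(1/2), applies the kernel of (3) once to a Gaussian wave packet on a grid, and checks that the step is unitary - the norm of the wave function is preserved while the packet spreads slightly:

```python
import numpy as np

hbar = M = 1.0
eps = 0.1                          # one short time step
x = np.linspace(-8.0, 8.0, 1601)   # spatial grid, dx = 0.01
dx = x[1] - x[0]

# Initial Gaussian wave packet
psi = np.exp(-x**2)

# Short-time kernel K(x', x) = A exp(i eps L / hbar) with the free-particle
# Lagrangian L = (M/2) ((x'-x)/eps)**2 and A = sqrt(M / (2*pi*i*hbar*eps))
A = np.sqrt(M / (2j * np.pi * hbar * eps))
K = A * np.exp(1j * M * (x[:, None] - x[None, :])**2 / (2 * hbar * eps))

# psi(x', t + eps) = integral of K(x', x) psi(x, t) dx, as a discrete sum
psi_next = (K @ psi) * dx

norm_before = np.sum(np.abs(psi)**2) * dx
norm_after = np.sum(np.abs(psi_next)**2) * dx
```

Composing many such steps, as in the argument above, is exactly the discretized path integral: each application multiplies in one more factor of $e^{i\varepsilon L/\hbar}$.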
This then is another, a third way, of describing quantum mechanics, which looks quite different than that of Schrödinger or Heisenberg, but which is equivalent to them. Now immediately after making a few checks on this thing, what I wanted to do, of course, was to substitute the action (1) for the other (2). The first trouble was that I could not get the thing to work with the relativistic case of spin one-half. However, although I could deal with the matter only nonrelativistically, I could deal with the light or the photon interactions perfectly well by just putting the interaction terms of (1) into any action, replacing the mass terms by the non-relativistic $\int (M\dot{x}^2/2)\,dt$. When the action has a delay, as it now had, and involved more than one time, I had to lose the idea of a wave function. That is, I could no longer describe the program as: given the amplitude for all positions at a certain time, compute the amplitude at another time. However, that didn't cause very much trouble. It just meant developing a new idea. Instead of wave functions we could talk about this; that if a source of a certain kind emits a particle, and a detector is there to receive it, we can give the amplitude that the source will emit and the detector receive. We do this without specifying the exact instant that the source emits or the exact instant that any detector receives, without trying to specify the state of anything at any particular time in between, but by just finding the amplitude for the complete experiment. And, then we could discuss how that amplitude would change if you had a scattering sample in between, as you rotated and changed angles, and so on, without really having any wave functions. It was also possible to discover what the old concepts of energy and momentum would mean with this generalized action. And, so I believed that I had a quantum theory of classical electrodynamics - or rather of this new classical electrodynamics described by action (1).
I made a number of checks. If I took the Frenkel field point of view, which you remember was more differential, I could convert it directly to quantum mechanics in a more conventional way. The only problem was how to specify in quantum mechanics the classical boundary conditions to use only half-advanced and half-retarded solutions. By some ingenuity in defining what that meant, I found that the quantum mechanics with Frenkel fields, plus a special boundary condition, gave me back this action, (1) in the new form of quantum mechanics with a delay. So, various things indicated that there wasn't any doubt I had everything straightened out. It was also easy to guess how to modify the electrodynamics, if anybody ever wanted to modify it. I just changed the delta to an $f$, just as I would for the classical case. So, it was very easy, a simple thing. To describe the old retarded theory without explicit mention of fields I would have to write probabilities, not just amplitudes. I would have to square my amplitudes and that would involve double path integrals in which there are two $S$'s and so forth. Yet, as I worked out many of these things and studied different forms and different boundary conditions, I got a kind of funny feeling that things weren't exactly right. I could not clearly identify the difficulty and in one of the short periods during which I imagined I had laid it to rest, I published a thesis and received my Ph.D. During the war, I didn't have time to work on these things very extensively, but wandered about on buses and so forth, with little pieces of paper, and struggled to work on it and discovered indeed that there was something wrong, something terribly wrong. I found that if one generalized the action from the nice Lagrangian forms (2) to these forms (1) then the quantities which I defined as energy, and so on, would be complex. The energy values of stationary states wouldn't be real and probabilities of events wouldn't add up to 100%.
That is, if you took the probability that this would happen and that would happen - everything you could think of would happen, it would not add up to one. Another problem on which I struggled very hard, was to represent relativistic electrons with this new quantum mechanics. I wanted to do it in a unique and different way - and not just by copying the operators of Dirac into some kind of an expression and using some kind of Dirac algebra instead of ordinary complex numbers. I was very much encouraged by the fact that in one space dimension, I did find a way of giving an amplitude to every path by limiting myself to paths, which only went back and forth at the speed of light. The amplitude was simple: $(i\varepsilon)$ to a power equal to the number of velocity reversals, where I have divided the time into steps of $\varepsilon$ and I am allowed to reverse velocity only at such a time. This gives (as $\varepsilon$ approaches zero) Dirac's equation in two dimensions - one dimension of space and one of time. Dirac's wave function has four components in four dimensions, but in this case, it has only two components and this rule for the amplitude of a path automatically generates the need for two components. Because if this is the formula for the amplitude of a path, it will not do you any good to know the total amplitude of all paths, which come into a given point to find the amplitude to reach the next point. This is because for the next time, if it came in from the right, there is no new factor $i\varepsilon$ if it goes out to the right, whereas, if it came in from the left there was a new factor $i\varepsilon$. So, to continue this same information forward to the next moment, it was not sufficient information to know the total amplitude to arrive, but you had to know the amplitude to arrive from the right and the amplitude to arrive from the left, independently. If you did, however, you could then compute both of those again independently and thus you had to carry two amplitudes to form a differential equation (first order in time).
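The counting rule can be made concrete with a brute-force toy enumeration of my own (not a program from the lecture; I write the reversal factor as i·εm with the mass folded into one dimensionless number, and εm = 1 below is an arbitrary illustrative choice):

```python
import itertools

def checkerboard_amplitude(x, n_steps, eps_m):
    """Sum the amplitude (i * eps_m) ** (number of velocity reversals)
    over all paths of n_steps light-speed steps (+1 or -1) from 0 to x."""
    total = 0.0 + 0.0j
    for steps in itertools.product((+1, -1), repeat=n_steps):
        if sum(steps) != x:
            continue                  # path does not end at x
        reversals = sum(1 for a, b in zip(steps, steps[1:]) if a != b)
        total += (1j * eps_m) ** reversals
    return total

# Two steps ending back at the origin: only the paths (+1,-1) and (-1,+1)
# qualify, each with one reversal, so the amplitude is about 2i for eps_m = 1.
amp = checkerboard_amplitude(0, 2, 1.0)
```

Note that summing over all paths loses exactly the information the text says you need: to propagate forward you must keep the amplitudes arriving from the right and from the left separately, which is the two-component bookkeeping described above.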
And, so I dreamed that if I were clever, I would find a formula for the amplitude of a path that was beautiful and simple for three dimensions of space and one of time, which would be equivalent to the Dirac equation, and for which the four components, matrices, and all those other mathematical funny things would come out as a simple consequence - I have never succeeded in that either. But, I did want to mention some of the unsuccessful things on which I spent almost as much effort, as on the things that did work. To summarize the situation a few years after the war, I would say, I had much experience with quantum electrodynamics, at least in the knowledge of many different ways of formulating it, in terms of path integrals of actions and in other forms. One of the important by-products, for example, of much experience in these simple forms, was that it was easy to see how to combine together what was in those days called the longitudinal and transverse fields, and in general, to see clearly the relativistic invariance of the theory. Because of the need to do things differentially there had been, in the standard quantum electrodynamics, a complete split of the field into two parts, one of which is called the longitudinal part and the other mediated by the photons, or transverse waves. The longitudinal part was described by a Coulomb potential acting instantaneously in the Schrödinger equation, while the transverse part had an entirely different description in terms of quantization of the transverse waves. This separation depended upon the relativistic tilt of your axes in space-time. People moving at different velocities would separate the same field into longitudinal and transverse fields in a different way. Furthermore, the entire formulation of quantum mechanics insisting, as it did, on the wave function at a given time, was hard to analyze relativistically.
Somebody else in a different coordinate system would calculate the succession of events in terms of wave functions on differently cut slices of space-time, and with a different separation of longitudinal and transverse parts. The Hamiltonian theory did not look relativistically invariant, although, of course, it was. One of the great advantages of the overall point of view, was that you could see the relativistic invariance right away - or as Schwinger would say - the covariance was manifest. I had the advantage, therefore, of having a manifestly covariant form for quantum electrodynamics with suggestions for modifications and so on. I had the disadvantage that if I took it too seriously - I mean, if I took it seriously at all in this form, - I got into trouble with these complex energies and the failure of adding probabilities to one and so on. I was unsuccessfully struggling with that. Then Lamb did his experiment, measuring the separation of the $2S_{1/2}$ and $2P_{1/2}$ levels of hydrogen, finding it to be about 1000 megacycles of frequency difference. Professor Bethe, with whom I was then associated at Cornell, is a man who has this characteristic: If there's a good experimental number you've got to figure it out from theory. So, he forced the quantum electrodynamics of the day to give him an answer to the separation of these two levels. He pointed out that the self-energy of an electron itself is infinite, so that the calculated energy of a bound electron should also come out infinite. But, when you calculated the separation of the two energy levels in terms of the corrected mass instead of the old mass, it would turn out, he thought, that the theory would give convergent finite answers. He made an estimate of the splitting that way and found out that it was still divergent, but he guessed that was probably due to the fact that he used an unrelativistic theory of the matter.
Assuming it would be convergent if relativistically treated, he estimated he would get about a thousand megacycles for the Lamb-shift, and thus, made the most important discovery in the history of the theory of quantum electrodynamics. He worked this out on the train from Ithaca, New York to Schenectady and telephoned me excitedly from Schenectady to tell me the result, which I don't remember fully appreciating at the time. Returning to Cornell, he gave a lecture on the subject, which I attended. He explained that it gets very confusing to figure out exactly which infinite term corresponds to what in trying to make the correction for the infinite change in mass. If there were any modifications whatever, he said, even though not physically correct, (that is not necessarily the way nature actually works) but any modification whatever at high frequencies, which would make this correction finite, then there would be no problem at all to figuring out how to keep track of everything. You just calculate the finite mass correction Δm to the electron mass m0, substitute the numerical value of m0 + Δm for m in the results for any other problem and all these ambiguities would be resolved. If, in addition, this method were relativistically invariant, then we would be absolutely sure how to do it without destroying relativistic invariance. After the lecture, I went up to him and told him, "I can do that for you, I'll bring it in for you tomorrow." I guess I knew every way to modify quantum electrodynamics known to man, at the time. So, I went in next day, and explained what would correspond to the modification of the delta-function to f and asked him to explain to me how you calculate the self-energy of an electron, for instance, so we can figure out if it's finite. I want you to see an interesting point. I did not take the advice of Professor Jehle to find out how it was useful. I never used all that machinery which I had cooked up to solve a single relativistic problem. 
I hadn't even calculated the self-energy of an electron up to that moment, and was studying the difficulties with the conservation of probability, and so on, without actually doing anything, except discussing the general properties of the theory. But now I went to Professor Bethe, who explained to me on the blackboard, as we worked together, how to calculate the self-energy of an electron. Up to that time when you did the integrals they had been logarithmically divergent. I told him how to make the relativistically invariant modifications that I thought would make everything all right. We set up the integral which then diverged at the sixth power of the frequency instead of logarithmically! So, I went back to my room and worried about this thing and went around in circles trying to figure out what was wrong because I was sure physically everything had to come out finite, I couldn't understand how it came out infinite. I became more and more interested and finally realized I had to learn how to make a calculation. So, ultimately, I taught myself how to calculate the self-energy of an electron working my patient way through the terrible confusion of those days of negative energy states and holes and longitudinal contributions and so on. When I finally found out how to do it and did it with the modifications I wanted to suggest, it turned out that it was nicely convergent and finite, just as I had expected. Professor Bethe and I have never been able to discover what we did wrong on that blackboard two months before, but apparently we just went off somewhere and we have never been able to figure out where. It turned out, that what I had proposed, if we had carried it out without making a mistake would have been all right and would have given a finite correction. Anyway, it forced me to go back over all this and to convince myself physically that nothing can go wrong. 
At any rate, the correction to mass was now finite, depending on a, the width of that function f which was substituted for δ. If you wanted an unmodified electrodynamics, you would have to take a equal to zero, getting an infinite mass correction. But, that wasn't the point. Keeping a finite, I simply followed the program outlined by Professor Bethe and showed how to calculate all the various things, the scatterings of electrons from atoms without radiation, the shifts of levels and so forth, calculating everything in terms of the experimental mass, and noting that the results as Bethe suggested, were not sensitive to a in this form and even had a definite limit as a → 0. The rest of my work was simply to improve the techniques then available for calculations, making diagrams to help analyze perturbation theory quicker. Most of this was first worked out by guessing - you see, I didn't have the relativistic theory of matter. For example, it seemed to me obvious that the velocities in non-relativistic formulas have to be replaced by Dirac's matrix α or in the more relativistic forms by the operators γμ. I just took my guesses from the forms that I had worked out using path integrals for nonrelativistic matter, but relativistic light. It was easy to develop rules of what to substitute to get the relativistic case. I was very surprised to discover that it was not known at that time, that every one of the formulas that had been worked out so patiently by separating longitudinal and transverse waves could be obtained from the formula for the transverse waves alone, if instead of summing over only the two perpendicular polarization directions you would sum over all four possible directions of polarization. It was so obvious from the action (1) that I thought it was general knowledge and would do it all the time. 
I would get into arguments with people, because I didn't realize they didn't know that; but, it turned out that all their patient work with the longitudinal waves was always equivalent to just extending the sum on the two transverse directions of polarization over all four directions. This was one of the amusing advantages of the method. In addition, I included diagrams for the various terms of the perturbation series, improved notations to be used, worked out easy ways to evaluate integrals, which occurred in these problems, and so on, and made a kind of handbook on how to do quantum electrodynamics. But one step of importance that was physically new was involved with the negative energy sea of Dirac, which caused me so much logical difficulty. I got so confused that I remembered Wheeler's old idea about the positron being, maybe, the electron going backward in time. Therefore, in the time dependent perturbation theory that was usual for getting self-energy, I simply supposed that for a while we could go backward in the time, and looked at what terms I got by running the time variables backward. They were the same as the terms that other people got when they did the problem a more complicated way, using holes in the sea, except, possibly, for some signs. These, I, at first, determined empirically by inventing and trying some rules. I have tried to explain that all the improvements of relativistic theory were at first more or less straightforward, semi-empirical shenanigans. Each time I would discover something, however, I would go back and I would check it so many ways, compare it to every problem that had been done previously in electrodynamics (and later, in weak coupling meson theory) to see if it would always agree, and so on, until I was absolutely convinced of the truth of the various rules and regulations which I concocted to simplify all the work. During this time, people had been developing meson theory, a subject I had not studied in any detail. 
I became interested in the possible application of my methods to perturbation calculations in meson theory. But, what was meson theory? All I knew was that meson theory was something analogous to electrodynamics, except that particles corresponding to the photon had a mass. It was easy to guess that the δ-function in (1), which was a solution of the d'Alembertian equals zero, was to be changed to the corresponding solution of the d'Alembertian equals m². Next, there were different kinds of mesons - the ones in closest analogy to photons, coupled via γμ, are called vector mesons - there were also scalar mesons. Well, maybe that corresponds to putting unity in place of the γμ; I would then speak of "pseudo vector coupling" and I would guess what that probably was. I didn't have the knowledge to understand the way these were defined in the conventional papers because they were expressed at that time in terms of creation and annihilation operators, and so on, which, I had not successfully learned. I remember that when someone had started to teach me about creation and annihilation operators, that this operator creates an electron, I said, "how do you create an electron? It disagrees with the conservation of charge", and in that way, I blocked my mind from learning a very practical scheme of calculation. Therefore, I had to find as many opportunities as possible to test whether I guessed right as to what the various theories were. One day a dispute arose at a Physical Society meeting as to the correctness of a calculation by Slotnick of the interaction of an electron with a neutron using pseudo scalar theory with pseudo vector coupling and also, pseudo scalar theory with pseudo scalar coupling. He had found that the answers were not the same, in fact, by one theory, the result was divergent, although convergent with the other. Some people believed that the two theories must give the same answer for the problem. 
This was a welcome opportunity to test my guesses as to whether I really did understand what these two couplings were. So, I went home, and during the evening I worked out the electron neutron scattering for the pseudo scalar and pseudo vector coupling, saw they were not equal and subtracted them, and worked out the difference in detail. The next day at the meeting, I saw Slotnick and said, "Slotnick, I worked it out last night, I wanted to see if I got the same answers you do. I got a different answer for each coupling - but, I would like to check in detail with you because I want to make sure of my methods." And, he said, "what do you mean you worked it out last night, it took me six months!" And, when we compared the answers he looked at mine and he asked, "what is that Q in there, that variable Q?" (I had expressions like (tan⁻¹Q)/Q etc.). I said, "that's the momentum transferred by the electron, the electron deflected by different angles." "Oh", he said, "no, I only have the limiting value as Q approaches zero; the forward scattering." Well, it was easy enough to just substitute Q equals zero in my form and I then got the same answers as he did. But, it took him six months to do the case of zero momentum transfer, whereas, during one evening I had done the finite and arbitrary momentum transfer. That was a thrilling moment for me, like receiving the Nobel Prize, because that convinced me, at last, I did have some kind of method and technique and understood how to do something that other people did not know how to do. That was my moment of triumph in which I realized I really had succeeded in working out something worthwhile. At this stage, I was urged to publish this because everybody said it looks like an easy way to make calculations, and wanted to know how to do it. I had to publish it, missing two things; one was proof of every statement in a mathematically conventional sense. 
Often, even in a physicist's sense, I did not have a demonstration of how to get all of these rules and equations from conventional electrodynamics. But, I did know from experience, from fooling around, that everything was, in fact, equivalent to the regular electrodynamics and had partial proofs of many pieces, although, I never really sat down, like Euclid did for the geometers of Greece, and made sure that you could get it all from a single simple set of axioms. As a result, the work was criticized, I don't know whether favorably or unfavorably, and the "method" was called the "intuitive method". For those who do not realize it, however, I should like to emphasize that there is a lot of work involved in using this "intuitive method" successfully. Because no simple clear proof of the formula or idea presents itself, it is necessary to do an unusually great amount of checking and rechecking for consistency and correctness in terms of what is known, by comparing to other analogous examples, limiting cases, etc. In the face of the lack of direct mathematical demonstration, one must be careful and thorough to make sure of the point, and one should make a perpetual attempt to demonstrate as much of the formula as possible. Nevertheless, a very great deal more truth can become known than can be proven. It must be clearly understood that in all this work, I was representing the conventional electrodynamics with retarded interaction, and not my half-advanced and half-retarded theory corresponding to (1). I merely use (1) to guess at forms. And, one of the forms I guessed at corresponded to changing δ to a function f of width a², so that I could calculate finite results for all of the problems. This brings me to the second thing that was missing when I published the paper, an unresolved difficulty. With δ replaced by f the calculations would give results which were not "unitary", that is, for which the sum of the probabilities of all alternatives was not unity. 
The deviation from unity was very small, in practice, if a was very small. In the limit that I took a very tiny, it might not make any difference. And, so the process of the renormalization could be made, you could calculate everything in terms of the experimental mass and then take the limit and the apparent difficulty that unitarity is violated temporarily seems to disappear. I was unable to demonstrate that, as a matter of fact, it does. It is lucky that I did not wait to straighten out that point, for as far as I know, nobody has yet been able to resolve this question. Experience with meson theories with stronger couplings and with strongly coupled vector photons, although not proving anything, convinces me that if the coupling were stronger, or if you went to a higher order (137th order of perturbation theory for electrodynamics), this difficulty would remain in the limit and there would be real trouble. That is, I believe there is really no satisfactory quantum electrodynamics, but I'm not sure. And, I believe, that one of the reasons for the slowness of present-day progress in understanding the strong interactions is that there isn't any relativistic theoretical model, from which you can really calculate everything. Although, it is usually said, that the difficulty lies in the fact that strong interactions are too hard to calculate, I believe, it is really because strong interactions in field theory have no solution, have no sense - they're either infinite, or, if you try to modify them, the modification destroys the unitarity. I don't think we have a completely satisfactory relativistic quantum-mechanical model, even one that doesn't agree with nature, but, at least, agrees with the logic that the sum of probability of all alternatives has to be 100%. Therefore, I think that the renormalization theory is simply a way to sweep the difficulties of the divergences of electrodynamics under the rug. I am, of course, not sure of that. 
This completes the story of the development of the space-time view of quantum electrodynamics. I wonder if anything can be learned from it. I doubt it. It is most striking that most of the ideas developed in the course of this research were not ultimately used in the final result. For example, the half-advanced and half-retarded potential was not finally used, the action expression (1) was not used, the idea that charges do not act on themselves was abandoned. The path-integral formulation of quantum mechanics was useful for guessing at final expressions and at formulating the general theory of electrodynamics in new ways - although, strictly it was not absolutely necessary. The same goes for the idea of the positron being a backward moving electron, it was very convenient, but not strictly necessary for the theory because it is exactly equivalent to the negative energy sea point of view. We are struck by the very large number of different physical viewpoints and widely different mathematical formulations that are all equivalent to one another. The method used here, of reasoning in physical terms, therefore, appears to be extremely inefficient. On looking back over the work, I can only feel a kind of regret for the enormous amount of physical reasoning and mathematical re-expression which ends by merely re-expressing what was previously known, although in a form which is much more efficient for the calculation of specific problems. Would it not have been much easier to simply work entirely in the mathematical framework to elaborate a more efficient expression? This would certainly seem to be the case, but it must be remarked that although the problem actually solved was only such a reformulation, the problem originally tackled was the (possibly still unsolved) problem of avoidance of the infinities of the usual theory. Therefore, a new theory was sought, not just a modification of the old. 
Although the quest was unsuccessful, we should look at the question of the value of physical ideas in developing a new theory. Many different physical ideas can describe the same physical reality. Thus, classical electrodynamics can be described by a field view, or an action at a distance view, etc. Originally, Maxwell filled space with idler wheels, and Faraday with field lines, but somehow the Maxwell equations themselves are pristine and independent of the elaboration of words attempting a physical description. The only true physical description is that describing the experimental meaning of the quantities in the equation - or better, the way the equations are to be used in describing experimental observations. This being the case perhaps the best way to proceed is to try to guess equations, and disregard physical models or descriptions. For example, McCullough guessed the correct equations for light propagation in a crystal long before his colleagues using elastic models could make head or tail of the phenomena, or again, Dirac obtained his equation for the description of the electron by an almost purely mathematical proposition. A simple physical view by which all the contents of this equation can be seen is still lacking. So what happened to the old theory that I fell in love with as a youth? Well, I would say it's become an old lady, that has very little attractive left in her and the young today will not have their hearts pound anymore when they look at her. But, we can say the best we can for any old woman, that she has been a very good mother and she has given birth to some very good children. And, I thank the Swedish Academy of Sciences for complimenting one of them. Thank you. Copyright © The Nobel Foundation 1965
The de Broglie-Bohm Causal Interpretation of Quantum Mechanics and its Application to some Simple Systems by Colijn, Caroline Abstract (Summary) The de Broglie-Bohm causal interpretation of quantum mechanics is discussed, and applied to the hydrogen atom in several contexts. Prominent critiques of the causal program are noted and responses are given; it is argued that the de Broglie-Bohm theory is of notable interest to physics. Using the causal theory, electron trajectories are found for the conventional Schrödinger, Pauli and Dirac hydrogen eigenstates. In the Schrödinger case, an additional term is used to account for the spin; this term was not present in the original formulation of the theory but is necessary for the theory to be embedded in a relativistic formulation. In the Schrödinger, Pauli and Dirac cases, the eigenstate trajectories are shown to be circular, with electron motion revolving around the z-axis. Electron trajectories are also found for the 1s-2p0 transition problem under the Schrödinger equation; it is shown that the transition can be characterized by a comparison of the trajectory to the relevant eigenstate trajectories. The structures of the computed trajectories are relevant to the question of the possible evolution of a quantum distribution towards the standard quantum distribution (quantum equilibrium); this process is known as quantum relaxation. The transition problem is generalized to include all possible transitions in hydrogen stimulated by semi-classical radiation, and all of the trajectories found are examined in light of their implications for the evolution of the distribution to the standard distribution. Several promising avenues for future research are discussed. 
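The trajectories the abstract describes come from the standard de Broglie-Bohm guidance equation, dx/dt = (ħ/m) Im(∂ₓψ/ψ). As a minimal sketch of how such trajectories are computed (this toy example uses a freely spreading one-dimensional Gaussian packet, not the hydrogen eigenstates treated in the thesis, and the units ħ = m = 1 are an assumption made for illustration):

```python
import numpy as np

# Hypothetical illustration: integrate the de Broglie-Bohm guidance equation
#   dx/dt = (hbar/m) * Im( dpsi/dx / psi )
# for a freely spreading Gaussian packet, in units hbar = m = 1.
HBAR = M = 1.0
SIGMA0 = 1.0  # initial packet width (assumed)

def psi(x, t):
    # Analytic free-particle Gaussian packet centered at the origin.
    s = SIGMA0 * (1 + 1j * HBAR * t / (2 * M * SIGMA0**2))
    return (2 * np.pi * s**2) ** (-0.25) * np.exp(-x**2 / (4 * SIGMA0 * s))

def velocity(x, t, h=1e-6):
    # Guidance equation, with dpsi/dx approximated by central differences.
    dpsi = (psi(x + h, t) - psi(x - h, t)) / (2 * h)
    return (HBAR / M) * np.imag(dpsi / psi(x, t))

def trajectory(x0, t_max=4.0, dt=1e-3):
    # Simple Euler integration of the guidance equation from x(0) = x0.
    x, t = x0, 0.0
    while t < t_max:
        x += velocity(x, t) * dt
        t += dt
    return x

# Trajectories fan out as the packet spreads, and they never cross.
starts = [0.5, 1.0, 1.5]
ends = [trajectory(x0) for x0 in starts]
```

The non-crossing of the computed trajectories is the property that makes the comparison of a transition trajectory with eigenstate trajectories, as described above, well defined.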
Bibliographical Information: School: University of Waterloo. School Location: Canada - Ontario. Source Type: Master's Thesis. Keywords: mathematics, quantum, causal, de Broglie-Bohm interpretation, hydrogen, foundations, trajectories, relaxation. Date of Publication: 01/01/2003.
Quantum Mechanics/Time Independent Schrödinger

Consider a particle confined to a one-dimensional box with impenetrable walls. When you solve the Schrödinger equation for the wavefunctions you get two sets of solutions: those of positive parity, and those of negative parity:

\Psi_{P=1} = A \cos \left[\frac{(2n+1) \pi x}{a}\right] and \Psi_{P=-1} = A \sin \left(\frac{2n \pi x}{a}\right),

where n is a non-negative integer (positive for the sine solutions) and A is a normalisation constant. Now, we can have all of these infinite states and if you've ever studied Fourier Analysis you may have noticed, with these states you can form any function you wish---that is, the wavefunctions are complete. So what have we learned? Well, a lot actually: we have discovered the eigenstates of the Hamiltonian, which can be used to determine the particle's time dependence.

Derivation of the Time-Independent Schrödinger Equation

We start with the general Schrödinger Equation, and use separation of variables. We have

H \Psi = \hat \epsilon \Psi

We separate \Psi into two functions:

\Psi ( x , t ) = T ( t ) X ( x )

So now the Schrödinger Equation is

H T X = \hat \epsilon T X

We know from earlier that the "interesting" part of the energy operator \hat \epsilon is a partial derivative with respect to time, and the "interesting" part of the Hamiltonian H is a partial derivative with respect to position. As T does not depend on position, it is not affected by H. Similarly, X is not affected by \hat \epsilon. So we have:

T H X = X \hat \epsilon T

We can multiply on the left by T^{-1} X^{-1} to obtain

X^{-1} H X = T^{-1} \hat \epsilon T

Note that the left side depends only on x and the right side only on t. We have two functions which are totally independent of each other, but are somehow equal to each other. This is only possible if both functions are equal to a constant, which we call E. 
X^{-1} H X = E

T^{-1} \hat \epsilon T = E

Naturally this implies

H X = E X

\hat \epsilon T = E T

We can then expand H and \hat \epsilon and solve these equations.
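The separated eigenvalue equation H X = E X can be checked numerically for the box eigenfunctions of both parities. In this sketch the units ħ = 2m = 1 are an assumption made for convenience, so the equation reads -X'' = k²X with E = k²:

```python
import numpy as np

# Sketch: check numerically that the box eigenfunctions of both parities
# satisfy H X = E X, i.e. -X'' = k^2 X, in assumed units hbar = 2m = 1,
# for a box of width a with walls at x = +/- a/2.
a = 2.0
x = np.linspace(-a/2, a/2, 2001)[1:-1]  # interior grid points
dx = x[1] - x[0]

def eigen_residual(X, k):
    # Second derivative by central differences; compare -X'' with k^2 X.
    d2 = (X[2:] - 2*X[1:-1] + X[:-2]) / dx**2
    return np.max(np.abs(-d2 - k**2 * X[1:-1]))

n = 1
k_even = (2*n + 1) * np.pi / a   # positive parity: cos(k x)
k_odd = 2*n * np.pi / a          # negative parity: sin(k x)
err_even = eigen_residual(np.cos(k_even * x), k_even)
err_odd = eigen_residual(np.sin(k_odd * x), k_odd)
```

Both residuals shrink as the grid is refined, confirming that the cosine and sine branches are eigenstates with E = k² and that both vanish at the walls.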
In our discussion of manifolds, it became clear that there were various notions we could talk about as soon as the manifold was defined; we could define functions, take their derivatives, consider parameterized paths, set up tensors, and so on. Other concepts, such as the volume of a region or the length of a path, required some additional piece of structure, namely the introduction of a metric. It would be natural to think of the notion of "curvature", which we have already used informally, as something that depends on the metric. Actually this turns out to be not quite true, or at least incomplete. In fact there is one additional structure we need to introduce - a "connection" - which is characterized by the curvature. We will show how the existence of a metric implies a certain connection, whose curvature may be thought of as that of the metric. The connection becomes necessary when we attempt to address the problem of the partial derivative not being a good tensor operator. What we would like is a covariant derivative; that is, an operator which reduces to the partial derivative in flat space with Cartesian coordinates, but transforms as a tensor on an arbitrary manifold. It is conventional to spend a certain amount of time motivating the introduction of a covariant derivative, but in fact the need is obvious; equations such as $\partial_\mu T^{\mu\nu} = 0$ are going to have to be generalized to curved space somehow. So let's agree that a covariant derivative would be a good thing to have, and go about setting it up. In flat space in Cartesian coordinates, the partial derivative operator $\partial_\mu$ is a map from $(k, l)$ tensor fields to $(k, l + 1)$ tensor fields, which acts linearly on its arguments and obeys the Leibniz rule on tensor products. 
All of this continues to be true in the more general situation we would now like to consider, but the map provided by the partial derivative depends on the coordinate system used. We would therefore like to define a covariant derivative operator $\nabla$ to perform the functions of the partial derivative, but in a way independent of coordinates. We therefore require that $\nabla$ be a map from $(k, l)$ tensor fields to $(k, l + 1)$ tensor fields which has these two properties:

1. linearity: $\nabla(T + S) = \nabla T + \nabla S$ ;
2. Leibniz (product) rule: $\nabla(T \otimes S) = (\nabla T) \otimes S + T \otimes (\nabla S)$ .

If $\nabla$ is going to obey the Leibniz rule, it can always be written as the partial derivative plus some linear transformation. That is, to take the covariant derivative we first take the partial derivative, and then apply a correction to make the result covariant. (We aren't going to prove this reasonable-sounding statement, but Wald goes into detail if you are interested.) Let's consider what this means for the covariant derivative of a vector $V^\nu$. It means that, for each direction $\mu$, the covariant derivative $\nabla_\mu$ will be given by the partial derivative $\partial_\mu$ plus a correction specified by a matrix $(\Gamma_\mu)^\rho{}_\sigma$ (an $n \times n$ matrix, where $n$ is the dimensionality of the manifold, for each $\mu$). In fact the parentheses are usually dropped and we write these matrices, known as the connection coefficients, with haphazard index placement as $\Gamma^\rho_{\mu\sigma}$. We therefore have

$$\nabla_\mu V^\nu = \partial_\mu V^\nu + \Gamma^\nu_{\mu\lambda} V^\lambda . \qquad (3.1)$$

Notice that in the second term the index originally on $V$ has moved to the $\Gamma$, and a new index is summed over. 
If this is the expression for the covariant derivative of a vector in terms of the partial derivative, we should be able to determine the transformation properties of $\Gamma^\nu_{\mu\lambda}$ by demanding that the left hand side be a (1, 1) tensor. That is, we want the transformation law to be

$$\nabla_{\mu'} V^{\nu'} = \frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^{\nu'}}{\partial x^\nu} \nabla_\mu V^\nu . \qquad (3.2)$$

Let's look at the left side first; we can expand it using (3.1) and then transform the parts that we understand:

$$\nabla_{\mu'} V^{\nu'} = \partial_{\mu'} V^{\nu'} + \Gamma^{\nu'}_{\mu'\lambda'} V^{\lambda'} = \frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^{\nu'}}{\partial x^\nu} \partial_\mu V^\nu + \frac{\partial x^\mu}{\partial x^{\mu'}} V^\nu \partial_\mu \frac{\partial x^{\nu'}}{\partial x^\nu} + \Gamma^{\nu'}_{\mu'\lambda'} \frac{\partial x^{\lambda'}}{\partial x^\lambda} V^\lambda . \qquad (3.3)$$

The right side, meanwhile, can likewise be expanded:

$$\frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^{\nu'}}{\partial x^\nu} \nabla_\mu V^\nu = \frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^{\nu'}}{\partial x^\nu} \partial_\mu V^\nu + \frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^{\nu'}}{\partial x^\nu} \Gamma^\nu_{\mu\lambda} V^\lambda . \qquad (3.4)$$

These last two expressions are to be equated; the first terms in each are identical and therefore cancel, so we have

$$\Gamma^{\nu'}_{\mu'\lambda'} \frac{\partial x^{\lambda'}}{\partial x^\lambda} V^\lambda + \frac{\partial x^\mu}{\partial x^{\mu'}} V^\lambda \partial_\mu \frac{\partial x^{\nu'}}{\partial x^\lambda} = \frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^{\nu'}}{\partial x^\nu} \Gamma^\nu_{\mu\lambda} V^\lambda , \qquad (3.5)$$

where we have changed a dummy index from $\nu$ to $\lambda$. This equation must be true for any vector $V^\lambda$, so we can eliminate that on both sides. Then the connection coefficients in the primed coordinates may be isolated by multiplying by $\partial x^\lambda / \partial x^{\lambda'}$. The result is

$$\Gamma^{\nu'}_{\mu'\lambda'} = \frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^\lambda}{\partial x^{\lambda'}} \frac{\partial x^{\nu'}}{\partial x^\nu} \Gamma^\nu_{\mu\lambda} - \frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^\lambda}{\partial x^{\lambda'}} \frac{\partial^2 x^{\nu'}}{\partial x^\mu \partial x^\lambda} . \qquad (3.6)$$

This is not, of course, the tensor transformation law; the second term on the right spoils it. That's okay, because the connection coefficients are not the components of a tensor. They are purposefully constructed to be non-tensorial, but in such a way that the combination (3.1) transforms as a tensor - the extra terms in the transformation of the partials and the $\Gamma$'s exactly cancel. This is why we are not so careful about index placement on the connection coefficients; they are not a tensor, and therefore you should try not to raise and lower their indices. What about the covariant derivatives of other sorts of tensors? By similar reasoning to that used for vectors, the covariant derivative of a one-form can also be expressed as a partial derivative plus some linear transformation. But there is no reason as yet that the matrices representing this transformation should be related to the coefficients $\Gamma^\nu_{\mu\lambda}$. 
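The inhomogeneous transformation law can be checked symbolically in a simple case. In the flat two-dimensional plane the Cartesian connection coefficients vanish, so (3.6) says the polar-coordinate coefficients come entirely from its second, non-tensorial term. This sketch (my own illustration, not from the notes, and assuming the sympy library) reproduces the well-known results Γ^r_θθ = -r and Γ^θ_rθ = 1/r:

```python
import sympy as sp

# Sketch: in the flat 2D plane the Cartesian connection vanishes, so the
# transformation law (3.6) gives the polar coefficients purely from its
# second (non-tensorial) term:
#   Gamma^{nu'}_{mu'lam'} = -(dx^mu/dx^{mu'})(dx^lam/dx^{lam'})
#                            * d^2 x^{nu'} / (dx^mu dx^lam)
# with primed = polar (r, theta), unprimed = Cartesian (x, y).
x, y, r, th = sp.symbols('x y r theta', positive=True)
cart, polar = [x, y], [r, th]
cart_of_polar = {x: r * sp.cos(th), y: r * sp.sin(th)}
polar_of_cart = {r: sp.sqrt(x**2 + y**2), th: sp.atan2(y, x)}

def gamma_polar(nup, mup, lamp):
    total = sp.Integer(0)
    for mu in range(2):
        for lam in range(2):
            j1 = sp.diff(cart_of_polar[cart[mu]], polar[mup])   # dx^mu/dx^{mu'}
            j2 = sp.diff(cart_of_polar[cart[lam]], polar[lamp]) # dx^lam/dx^{lam'}
            d2 = sp.diff(polar_of_cart[polar[nup]], cart[mu], cart[lam])
            total += -j1 * j2 * d2.subs(cart_of_polar)
    return sp.simplify(total)

# gamma_polar(0, 1, 1) is Gamma^r_{theta theta}, expected -r;
# gamma_polar(1, 0, 1) is Gamma^theta_{r theta}, expected 1/r.
```

The nonzero results in polar coordinates, despite the connection vanishing identically in Cartesian coordinates, are exactly the non-tensorial behavior the text describes.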
In general we could write something like

$$\nabla_\mu \omega_\nu = \partial_\mu \omega_\nu + \widetilde{\Gamma}^\lambda_{\mu\nu} \omega_\lambda , \qquad (3.7)$$

where $\widetilde{\Gamma}^\lambda_{\mu\nu}$ is a new set of matrices for each $\mu$. (Pay attention to where all of the various indices go.) It is straightforward to derive that the transformation properties of $\widetilde{\Gamma}$ must be the same as those of $\Gamma$, but otherwise no relationship has been established. To do so, we need to introduce two new properties that we would like our covariant derivative to have (in addition to the two above):

3. commutes with contractions: $\nabla_\mu (T^\lambda{}_{\lambda\rho}) = (\nabla T)_\mu{}^\lambda{}_{\lambda\rho}$ ,
4. reduces to the partial derivative on scalars: $\nabla_\mu \phi = \partial_\mu \phi$ .

There is no way to "derive" these properties; we are simply demanding that they be true as part of the definition of a covariant derivative. Let's see what these new properties imply. 
Given some one-form field $\omega_\mu$ and vector field $V^\mu$, we can take the covariant derivative of the scalar defined by $\omega_\lambda V^\lambda$ to get

$$\nabla_\mu(\omega_\lambda V^\lambda) = (\nabla_\mu \omega_\lambda) V^\lambda + \omega_\lambda (\nabla_\mu V^\lambda) = (\partial_\mu \omega_\lambda) V^\lambda + \widetilde{\Gamma}^\sigma_{\mu\lambda} \omega_\sigma V^\lambda + \omega_\lambda (\partial_\mu V^\lambda) + \omega_\lambda \Gamma^\lambda_{\mu\rho} V^\rho . \qquad (3.8)$$

But since $\omega_\lambda V^\lambda$ is a scalar, this must also be given by the partial derivative:

$$\nabla_\mu(\omega_\lambda V^\lambda) = \partial_\mu(\omega_\lambda V^\lambda) = (\partial_\mu \omega_\lambda) V^\lambda + \omega_\lambda (\partial_\mu V^\lambda) . \qquad (3.9)$$

This can only be true if the terms in (3.8) with connection coefficients cancel each other; that is, rearranging dummy indices, we must have

$$\widetilde{\Gamma}^\sigma_{\mu\lambda} \omega_\sigma V^\lambda + \Gamma^\sigma_{\mu\lambda} \omega_\sigma V^\lambda = 0 . \qquad (3.10)$$

But both $\omega_\sigma$ and $V^\lambda$ are completely arbitrary, so

$$\widetilde{\Gamma}^\sigma_{\mu\lambda} = - \Gamma^\sigma_{\mu\lambda} . \qquad (3.11)$$

The two extra conditions we have imposed therefore allow us to express the covariant derivative of a one-form using the same connection coefficients as were used for the vector, but now with a minus sign (and indices matched up somewhat differently):

$$\nabla_\mu \omega_\nu = \partial_\mu \omega_\nu - \Gamma^\lambda_{\mu\nu} \omega_\lambda . \qquad (3.12)$$

It should come as no surprise that the connection coefficients encode all of the information necessary to take the covariant derivative of a tensor of arbitrary rank. The formula is quite straightforward; for each upper index you introduce a term with a single $+\Gamma$, and for each lower index a term with a single $-\Gamma$:

$$\nabla_\sigma T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} = \partial_\sigma T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} + \Gamma^{\mu_1}_{\sigma\lambda} T^{\lambda \mu_2 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} + \cdots - \Gamma^\lambda_{\sigma\nu_1} T^{\mu_1 \cdots \mu_k}{}_{\lambda \nu_2 \cdots \nu_l} - \cdots . \qquad (3.13)$$

This is the general expression for the covariant derivative. You can check it yourself; it comes from the set of axioms we have established, and the usual requirements that tensors of various sorts be coordinate-independent entities. Sometimes an alternative notation is used; just as commas are used for partial derivatives, semicolons are used for covariant ones:

$$\nabla_\sigma T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} \equiv T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l ; \sigma} . \qquad (3.14)$$

Once again, I'm not a big fan of this notation. To define a covariant derivative, then, we need to put a "connection" on our manifold, which is specified in some coordinate system by a set of coefficients $\Gamma^\lambda_{\mu\nu}$ ($n^3 = 64$ independent components in $n = 4$ dimensions) which transform according to (3.6). 
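The cancellation forced by (3.10) and (3.11) can be verified symbolically. This sketch (my own check, not from the notes, assuming sympy) uses the standard flat-plane polar connection coefficients Γ^r_θθ = -r and Γ^θ_rθ = Γ^θ_θr = 1/r, together with arbitrary made-up test fields, and confirms that with the one-form rule (3.12) the covariant derivative of the scalar ω_λV^λ reduces to its partial derivative:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
n = 2

# Flat-plane polar connection coefficients (a standard result, assumed here):
# Gam[rho][mu][nu] = Gamma^rho_{mu nu}.
Gam = [[[sp.Integer(0)] * n for _ in range(n)] for _ in range(n)]
Gam[0][1][1] = -r
Gam[1][0][1] = Gam[1][1][0] = 1 / r

# Arbitrary smooth test fields, made up for the check.
V = [r**2 * sp.sin(th), r + sp.cos(th)]   # vector components V^nu
w = [sp.exp(r), r * th]                   # one-form components omega_nu

def cov_vec(mu, nu):
    # (3.1): the covariant derivative of a vector gets a +Gamma term.
    return sp.diff(V[nu], coords[mu]) + sum(Gam[nu][mu][l] * V[l] for l in range(n))

def cov_form(mu, nu):
    # (3.12): the covariant derivative of a one-form gets a -Gamma term.
    return sp.diff(w[nu], coords[mu]) - sum(Gam[l][mu][nu] * w[l] for l in range(n))

# Leibniz rule + contraction: nabla_mu(w_l V^l) must equal d_mu(w_l V^l).
scalar = sum(w[l] * V[l] for l in range(n))
residuals = [sp.simplify(
    sum(cov_form(mu, l) * V[l] + w[l] * cov_vec(mu, l) for l in range(n))
    - sp.diff(scalar, coords[mu])) for mu in range(n)]
```

The residuals vanish identically: the +Γ and -Γ terms cancel for any connection, which is exactly the content of (3.10).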
(The name "connection" comes from the fact that it is used to transport vectors from one tangent space to another, as we will soon see.) There are evidently a large number of connections we could define on any manifold, and each of them implies a distinct notion of covariant differentiation. In general relativity this freedom is not a big concern, because it turns out that every metric defines a unique connection, which is the one used in GR. Let's see how that works. The first thing to notice is that the difference of two connections is a (1, 2) tensor. If we have two sets of connection coefficients, $\Gamma^{\lambda}_{\mu\nu}$ and $\widehat{\Gamma}^{\lambda}_{\mu\nu}$, their difference $S_{\mu\nu}{}^{\lambda} = \Gamma^{\lambda}_{\mu\nu} - \widehat{\Gamma}^{\lambda}_{\mu\nu}$ (notice index placement) transforms as

$$ S_{\mu'\nu'}{}^{\lambda'} = \frac{\partial x^\mu}{\partial x^{\mu'}} \frac{\partial x^\nu}{\partial x^{\nu'}} \frac{\partial x^{\lambda'}}{\partial x^\lambda}\, S_{\mu\nu}{}^{\lambda}\ , \qquad (3.15) $$

since the inhomogeneous terms in (3.6) are the same for both connections and cancel in the difference. This is just the tensor transformation law, so $S_{\mu\nu}{}^{\lambda}$ is indeed a tensor. This implies that any set of connections can be expressed as some fiducial connection plus a tensorial correction. Next notice that, given a connection specified by $\Gamma^{\lambda}_{\mu\nu}$, we can immediately form another connection simply by permuting the lower indices. That is, the set of coefficients $\Gamma^{\lambda}_{\nu\mu}$ will also transform according to (3.6) (since the partial derivatives appearing in the last term can be commuted), so they determine a distinct connection. There is thus a tensor we can associate with any given connection, known as the torsion tensor, defined by

$$ T_{\mu\nu}{}^{\lambda} = \Gamma^{\lambda}_{\mu\nu} - \Gamma^{\lambda}_{\nu\mu} = 2\Gamma^{\lambda}_{[\mu\nu]}\ . \qquad (3.16) $$

It is clear that the torsion is antisymmetric in its lower indices, and a connection which is symmetric in its lower indices is known as "torsion-free."
We can now define a unique connection on a manifold with a metric $g_{\mu\nu}$ by introducing two additional properties:

- torsion-free: $\Gamma^{\lambda}_{\mu\nu} = \Gamma^{\lambda}_{(\mu\nu)}$,
- metric compatibility: $\nabla_\rho g_{\mu\nu} = 0$.

A connection is metric compatible if the covariant derivative of the metric with respect to that connection is everywhere zero. This implies a couple of nice properties. First, it's easy to show that the inverse metric also has zero covariant derivative,

$$ \nabla_\rho g^{\mu\nu} = 0\ . \qquad (3.17) $$

Second, a metric-compatible covariant derivative commutes with raising and lowering of indices. Thus, for some vector field $V^\lambda$,

$$ g_{\mu\lambda}\nabla_\rho V^\lambda = \nabla_\rho (g_{\mu\lambda}V^\lambda) = \nabla_\rho V_\mu\ . \qquad (3.18) $$

With non-metric-compatible connections one must be very careful about index placement when taking a covariant derivative. Our claim is therefore that there is exactly one torsion-free connection on a given manifold which is compatible with some given metric on that manifold. We do not want to make these two requirements part of the definition of a covariant derivative; they simply single out one of the many possible ones. We can demonstrate both existence and uniqueness by deriving a manifestly unique expression for the connection coefficients in terms of the metric. To accomplish this, we expand out the equation of metric compatibility for three different permutations of the indices:

$$ \nabla_\rho g_{\mu\nu} = \partial_\rho g_{\mu\nu} - \Gamma^{\lambda}_{\rho\mu} g_{\lambda\nu} - \Gamma^{\lambda}_{\rho\nu} g_{\mu\lambda} = 0 $$
$$ \nabla_\mu g_{\nu\rho} = \partial_\mu g_{\nu\rho} - \Gamma^{\lambda}_{\mu\nu} g_{\lambda\rho} - \Gamma^{\lambda}_{\mu\rho} g_{\nu\lambda} = 0 $$
$$ \nabla_\nu g_{\rho\mu} = \partial_\nu g_{\rho\mu} - \Gamma^{\lambda}_{\nu\rho} g_{\lambda\mu} - \Gamma^{\lambda}_{\nu\mu} g_{\rho\lambda} = 0\ . \qquad (3.19) $$

We subtract the second and third of these from the first, and use the symmetry of the connection to obtain

$$ \partial_\rho g_{\mu\nu} - \partial_\mu g_{\nu\rho} - \partial_\nu g_{\rho\mu} + 2\Gamma^{\lambda}_{\mu\nu} g_{\lambda\rho} = 0\ . \qquad (3.20) $$

It is straightforward to solve this for the connection by multiplying by $g^{\sigma\rho}$. The result is

$$ \Gamma^{\sigma}_{\mu\nu} = \frac{1}{2} g^{\sigma\rho} \left( \partial_\mu g_{\nu\rho} + \partial_\nu g_{\rho\mu} - \partial_\rho g_{\mu\nu} \right)\ . \qquad (3.21) $$

This is one of the most important formulas in this subject; commit it to memory. Of course, we have only proved that if a metric-compatible and torsion-free connection exists, it must be of the form (3.21); you can check for yourself (for those of you without enough tedious computation in your lives) that the right hand side of (3.21) transforms like a connection.
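As a concrete illustration, (3.21) can be evaluated numerically for any metric given in components. The following Python sketch (the function name `christoffel` and the finite-difference step `h` are my own choices, not part of the notes) differentiates the metric by central differences and contracts with the inverse metric exactly as in the formula; applied to the plane in polar coordinates it recovers the familiar symbols $\Gamma^{r}_{\theta\theta} = -r$ and $\Gamma^{\theta}_{r\theta} = 1/r$.

```python
import numpy as np

def christoffel(metric, x, h=1e-6):
    """Christoffel symbols Gamma^sigma_{mu nu} at point x, from (3.21),
    with the metric derivatives taken by central finite differences."""
    n = len(x)
    ginv = np.linalg.inv(metric(x))
    # dg[rho, mu, nu] = partial_rho g_{mu nu}
    dg = np.zeros((n, n, n))
    for rho in range(n):
        xp, xm = np.array(x, float), np.array(x, float)
        xp[rho] += h
        xm[rho] -= h
        dg[rho] = (metric(xp) - metric(xm)) / (2 * h)
    Gamma = np.zeros((n, n, n))  # Gamma[sigma, mu, nu]
    for s in range(n):
        for mu in range(n):
            for nu in range(n):
                Gamma[s, mu, nu] = 0.5 * sum(
                    ginv[s, r] * (dg[mu, nu, r] + dg[nu, r, mu] - dg[r, mu, nu])
                    for r in range(n))
    return Gamma

# Plane in polar coordinates, ds^2 = dr^2 + r^2 dtheta^2, at (r, theta) = (2, 0.7):
polar = lambda x: np.diag([1.0, x[0] ** 2])
G = christoffel(polar, [2.0, 0.7])
# G[0, 1, 1] is Gamma^r_{theta theta} = -r = -2,
# G[1, 0, 1] is Gamma^theta_{r theta} = 1/r = 0.5
```

This is only a sanity check of the formula, not a substitute for the tensor-calculus derivation; symbolic packages do the same computation exactly.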
This connection we have derived from the metric is the one on which conventional general relativity is based (although we will keep an open mind for a while longer). It is known by different names: sometimes the Christoffel connection, sometimes the Levi-Civita connection, sometimes the Riemannian connection. The associated connection coefficients are sometimes called Christoffel symbols and written as $\left\{ {}^{\,\sigma}_{\mu\nu} \right\}$; we will sometimes call them Christoffel symbols, but we won't use the funny notation. The study of manifolds with metrics and their associated connections is called "Riemannian geometry." As far as I can tell the study of more general connections can be traced back to Cartan, but I've never heard it called "Cartanian geometry." Before putting our covariant derivatives to work, we should mention some miscellaneous properties. First, let's emphasize again that the connection does not have to be constructed from the metric. In ordinary flat space there is an implicit connection we use all the time - the Christoffel connection constructed from the flat metric. But we could, if we chose, use a different connection, while keeping the metric flat. Also notice that the coefficients of the Christoffel connection in flat space will vanish in Cartesian coordinates, but not in curvilinear coordinate systems. Consider for example the plane in polar coordinates, with metric

$$ ds^2 = dr^2 + r^2\, d\theta^2\ . \qquad (3.22) $$

The nonzero components of the inverse metric are readily found to be $g^{rr} = 1$ and $g^{\theta\theta} = r^{-2}$. (Notice that we use $r$ and $\theta$ as indices in an obvious notation.) We can compute a typical connection coefficient:

$$ \Gamma^{r}_{rr} = \frac{1}{2} g^{r\rho} \left( \partial_r g_{r\rho} + \partial_r g_{\rho r} - \partial_\rho g_{rr} \right) = \frac{1}{2} g^{rr}\, \partial_r g_{rr} = 0\ . \qquad (3.23) $$

Sadly, it vanishes.
But not all of them do:

$$ \Gamma^{r}_{\theta\theta} = \frac{1}{2} g^{r\rho} \left( \partial_\theta g_{\theta\rho} + \partial_\theta g_{\rho\theta} - \partial_\rho g_{\theta\theta} \right) = -\frac{1}{2} g^{rr}\, \partial_r g_{\theta\theta} = -r\ . \qquad (3.24) $$

Continuing to turn the crank, we eventually find

$$ \Gamma^{\theta}_{r\theta} = \Gamma^{\theta}_{\theta r} = \frac{1}{r}\ , \qquad (3.25) $$

with all of the other coefficients vanishing. The existence of nonvanishing connection coefficients in curvilinear coordinate systems is the ultimate cause of the formulas for the divergence and so on that you find in books on electricity and magnetism. Contrariwise, even in a curved space it is still possible to make the Christoffel symbols vanish at any one point. This is just because, as we saw in the last section, we can always make the first derivative of the metric vanish at a point; so by (3.21) the connection coefficients derived from this metric will also vanish. Of course this can only be established at a point, not in some neighborhood of the point. Another useful property is that the formula for the divergence of a vector (with respect to the Christoffel connection) has a simplified form. The covariant divergence of $V^\mu$ is given by

$$ \nabla_\mu V^\mu = \partial_\mu V^\mu + \Gamma^{\mu}_{\mu\lambda} V^\lambda\ . \qquad (3.26) $$

It's easy to show (see pp. 106-108 of Weinberg) that the Christoffel connection satisfies

$$ \Gamma^{\mu}_{\mu\lambda} = \frac{1}{\sqrt{|g|}}\, \partial_\lambda \sqrt{|g|}\ , \qquad (3.27) $$

and we therefore obtain

$$ \nabla_\mu V^\mu = \frac{1}{\sqrt{|g|}}\, \partial_\mu \left( \sqrt{|g|}\, V^\mu \right)\ . \qquad (3.28) $$

As the last factoid we should mention about connections, let us emphasize (once more) that the exterior derivative is a well-defined tensor in the absence of any connection. The reason this needs to be emphasized is that, if you happen to be using a symmetric (torsion-free) connection, the exterior derivative (defined to be the antisymmetrized partial derivative) happens to be equal to the antisymmetrized covariant derivative:

$$ \nabla_{[\mu} \omega_{\nu_1 \cdots \nu_p]} = \partial_{[\mu} \omega_{\nu_1 \cdots \nu_p]}\ . \qquad (3.29) $$

This has led some misfortunate souls to fret about the "ambiguity" of the exterior derivative in spaces with torsion, where the above simplification does not occur. There is no ambiguity: the exterior derivative does not involve the connection, no matter what connection you happen to be using, and therefore the torsion never enters the formula for the exterior derivative of anything.
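The simplified divergence formula (3.28) is easy to check numerically in polar coordinates, where $\sqrt{|g|} = r$. The sketch below (my own illustration; the helper name `cov_div` is hypothetical) evaluates $(1/r)\,\partial_\mu(r V^\mu)$ by finite differences for the vector field with Cartesian components $(x, y)$, whose polar components are $V^r = r$, $V^\theta = 0$, and whose flat-space divergence is 2 everywhere.

```python
import numpy as np

def cov_div(Vr, Vtheta, r, theta, h=1e-6):
    """Covariant divergence (3.28) on the plane in polar coordinates,
    where sqrt|g| = r, using central finite differences."""
    d_r = ((r + h) * Vr(r + h, theta) - (r - h) * Vr(r - h, theta)) / (2 * h)
    d_th = (r * Vtheta(r, theta + h) - r * Vtheta(r, theta - h)) / (2 * h)
    return (d_r + d_th) / r

# V^r = r, V^theta = 0 is the polar form of the Cartesian field (x, y);
# its divergence should be 2 at any point, e.g. (r, theta) = (1.7, 0.3):
div = cov_div(lambda r, th: r, lambda r, th: 0.0, 1.7, 0.3)
```

The same computation with the naive partial-derivative divergence $\partial_\mu V^\mu = 1$ would give the wrong answer, which is exactly the point of the connection-coefficient correction in (3.26).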
Before moving on, let's review the process by which we have been adding structures to our mathematical constructs. We started with the basic notion of a set, which you were presumed to know (informally, if not rigorously). We introduced the concept of open subsets of our set; this is equivalent to introducing a topology, and promoted the set to a topological space. Then by demanding that each open set look like a region of $\mathbf{R}^n$ (with $n$ the same for each set) and that the coordinate charts be smoothly sewn together, the topological space became a manifold. A manifold is simultaneously a very flexible and powerful structure, and comes equipped naturally with a tangent bundle, tensor bundles of various ranks, the ability to take exterior derivatives, and so forth. We then proceeded to put a metric on the manifold, resulting in a manifold with metric (or sometimes "Riemannian manifold"). Independently of the metric we found we could introduce a connection, allowing us to take covariant derivatives. Once we have a metric, however, there is automatically a unique torsion-free metric-compatible connection. (In principle there is nothing to stop us from introducing more than one connection, or more than one metric, on any given manifold.) The situation is thus as portrayed in the diagram on the next page.

[Figure 3.1]

Having set up the machinery of connections, the first thing we will do is discuss parallel transport. Recall that in flat space it was unnecessary to be very careful about the fact that vectors were elements of tangent spaces defined at individual points; it is actually very natural to compare vectors at different points (where by "compare" we mean add, subtract, take the dot product, etc.). The reason why it is natural is because it makes sense, in flat space, to "move a vector from one point to another while keeping it constant." Then once we get the vector from one point to another we can do the usual operations allowed in a vector space.
[Figure 3.2]

The concept of moving a vector along a path, keeping it constant all the while, is known as parallel transport. As we shall see, parallel transport is defined whenever we have a connection; the intuitive manipulation of vectors in flat space makes implicit use of the Christoffel connection on this space. The crucial difference between flat and curved spaces is that, in a curved space, the result of parallel transporting a vector from one point to another will depend on the path taken between the points. Without yet assembling the complete mechanism of parallel transport, we can use our intuition about the two-sphere to see that this is the case. Start with a vector on the equator, pointing along a line of constant longitude. Parallel transport it up to the north pole along a line of longitude in the obvious way. Then take the original vector, parallel transport it along the equator by an angle $\theta$, and then move it up to the north pole as before. It is clear that the vector, parallel transported along two paths, arrived at the same destination with two different values (rotated by $\theta$).

[Figure 3.3]

It therefore appears as if there is no natural way to uniquely move a vector from one tangent space to another; we can always parallel transport it, but the result depends on the path, and there is no natural choice of which path to take. Unlike some of the problems we have encountered, there is no solution to this one - we simply must learn to live with the fact that two vectors can only be compared in a natural way if they are elements of the same tangent space. For example, two particles passing by each other have a well-defined relative velocity (which cannot be greater than the speed of light). But two particles at different points on a curved manifold do not have any well-defined notion of relative velocity - the concept simply makes no sense.
Of course, in certain special situations it is still useful to talk as if it did make sense, but it is necessary to understand that occasional usefulness is not a substitute for rigorous definition. In cosmology, for example, the light from distant galaxies is redshifted with respect to the frequencies we would observe from a nearby stationary source. Since this phenomenon bears such a close resemblance to the conventional Doppler effect due to relative motion, it is very tempting to say that the galaxies are "receding away from us" at a speed defined by their redshift. At a rigorous level this is nonsense, what Wittgenstein would call a "grammatical mistake" - the galaxies are not receding, since the notion of their velocity with respect to us is not well-defined. What is actually happening is that the metric of spacetime between us and the galaxies has changed (the universe has expanded) along the path of the photon from here to there, leading to an increase in the wavelength of the light. As an example of how you can go wrong, naive application of the Doppler formula to the redshift of galaxies implies that some of them are receding faster than light, in apparent contradiction with relativity. The resolution of this apparent paradox is simply that the very notion of their recession should not be taken literally. Enough about what we cannot do; let's see what we can. Parallel transport is supposed to be the curved-space generalization of the concept of "keeping the vector constant" as we move it along a path; similarly for a tensor of arbitrary rank. Given a curve $x^\mu(\lambda)$, the requirement of constancy of a tensor $T$ along this curve in flat space is simply

$$ \frac{dT}{d\lambda} = \frac{dx^\mu}{d\lambda} \frac{\partial T}{\partial x^\mu} = 0\ . $$
We therefore define the covariant derivative along the path to be given by an operator

$$ \frac{D}{d\lambda} = \frac{dx^\mu}{d\lambda} \nabla_\mu\ . \qquad (3.30) $$

We then define parallel transport of the tensor $T$ along the path $x^\mu(\lambda)$ to be the requirement that, along the path,

$$ \left( \frac{D}{d\lambda} T \right)^{\mu_1\cdots\mu_k}{}_{\nu_1\cdots\nu_l} \equiv \frac{dx^\sigma}{d\lambda} \nabla_\sigma T^{\mu_1\cdots\mu_k}{}_{\nu_1\cdots\nu_l} = 0\ . \qquad (3.31) $$

This is a well-defined tensor equation, since both the tangent vector $dx^\mu/d\lambda$ and the covariant derivative $\nabla T$ are tensors. This is known as the equation of parallel transport. For a vector it takes the form

$$ \frac{d}{d\lambda} V^\mu + \Gamma^{\mu}_{\sigma\rho} \frac{dx^\sigma}{d\lambda} V^\rho = 0\ . \qquad (3.32) $$

We can look at the parallel transport equation as a first-order differential equation defining an initial-value problem: given a tensor at some point along the path, there will be a unique continuation of the tensor to other points along the path such that the continuation solves (3.31). We say that such a tensor is parallel transported. The notion of parallel transport is obviously dependent on the connection, and different connections lead to different answers. If the connection is metric-compatible, the metric is always parallel transported with respect to it:

$$ \frac{D}{d\lambda} g_{\mu\nu} = \frac{dx^\sigma}{d\lambda} \nabla_\sigma g_{\mu\nu} = 0\ . \qquad (3.33) $$

It follows that the inner product of two parallel-transported vectors is preserved. That is, if $V^\mu$ and $W^\nu$ are parallel-transported along a curve $x^\sigma(\lambda)$, we have

$$ \frac{D}{d\lambda} \left( g_{\mu\nu} V^\mu W^\nu \right) = 0\ . \qquad (3.34) $$

One thing they don't usually tell you in GR books is that you can write down an explicit and general solution to the parallel transport equation, although it's somewhat formal.
First notice that for some path $\gamma : \lambda \rightarrow x^\sigma(\lambda)$, solving the parallel transport equation for a vector $V^\mu$ amounts to finding a matrix $P^\mu{}_\rho(\lambda, \lambda_0)$ which relates the vector at its initial value $V^\mu(\lambda_0)$ to its value somewhere later down the path:

$$ V^\mu(\lambda) = P^\mu{}_\rho(\lambda, \lambda_0)\, V^\rho(\lambda_0)\ . \qquad (3.35) $$

Of course the matrix $P^\mu{}_\rho(\lambda, \lambda_0)$, known as the parallel propagator, depends on the path $\gamma$ (although it's hard to find a notation which indicates this without making $\gamma$ look like an index). If we define

$$ A^\mu{}_\rho(\lambda) = -\Gamma^{\mu}_{\sigma\rho} \frac{dx^\sigma}{d\lambda}\ , \qquad (3.36) $$

where the quantities on the right hand side are evaluated at $x^\nu(\lambda)$, then the parallel transport equation becomes

$$ \frac{d}{d\lambda} V^\mu = A^\mu{}_\rho V^\rho\ . \qquad (3.37) $$

Since the parallel propagator must work for any vector, substituting (3.35) into (3.37) shows that $P^\mu{}_\rho(\lambda, \lambda_0)$ also obeys this equation:

$$ \frac{d}{d\lambda} P^\mu{}_\rho(\lambda, \lambda_0) = A^\mu{}_\sigma(\lambda)\, P^\sigma{}_\rho(\lambda, \lambda_0)\ . \qquad (3.38) $$

To solve this equation, first integrate both sides:

$$ P^\mu{}_\rho(\lambda, \lambda_0) = \delta^\mu_\rho + \int_{\lambda_0}^{\lambda} A^\mu{}_\sigma(\eta)\, P^\sigma{}_\rho(\eta, \lambda_0)\, d\eta\ . \qquad (3.39) $$

The Kronecker delta, it is easy to see, provides the correct normalization for $\lambda = \lambda_0$. We can solve (3.39) by iteration, taking the right hand side and plugging it into itself repeatedly, giving

$$ P(\lambda, \lambda_0) = 1 + \int_{\lambda_0}^{\lambda} A(\eta_1)\, d\eta_1 + \int_{\lambda_0}^{\lambda} \int_{\lambda_0}^{\eta_2} A(\eta_2) A(\eta_1)\, d\eta_1\, d\eta_2 + \cdots\ . \qquad (3.40) $$

The $n$th term in this series is an integral over an $n$-dimensional right triangle, or $n$-simplex.

[Figure 3.4]

It would simplify things if we could consider such an integral to be over an $n$-cube instead of an $n$-simplex; is there some way to do this? There are $n!$ such simplices in each cube, so we would have to multiply by $1/n!$ to compensate for this extra volume. But we also want to get the integrand right; using matrix notation, the integrand at $n$th order is $A(\eta_n) A(\eta_{n-1}) \cdots$
$A(\eta_1)$, but with the special property that $\eta_n \geq \eta_{n-1} \geq \cdots \geq \eta_1$. We therefore define the path-ordering symbol, $\mathcal{P}$, to ensure that this condition holds. In other words, the expression

$$ \mathcal{P}\left[ A(\eta_n) A(\eta_{n-1}) \cdots A(\eta_1) \right] \qquad (3.41) $$

stands for the product of the $n$ matrices $A(\eta_i)$, ordered in such a way that the largest value of $\eta_i$ is on the left, and each subsequent value of $\eta_i$ is less than or equal to the previous one. We then can express the $n$th-order term in (3.40) as

$$ \frac{1}{n!} \int_{\lambda_0}^{\lambda} \cdots \int_{\lambda_0}^{\lambda} \mathcal{P}\left[ A(\eta_n) A(\eta_{n-1}) \cdots A(\eta_1) \right] d^n\eta\ . \qquad (3.42) $$

This expression contains no substantive statement about the matrices $A(\eta_i)$; it is just notation. But we can now write (3.40) in matrix form as

$$ P(\lambda, \lambda_0) = 1 + \sum_{n=1}^{\infty} \frac{1}{n!} \int_{\lambda_0}^{\lambda} \mathcal{P}\left[ A(\eta_n) \cdots A(\eta_1) \right] d^n\eta\ . \qquad (3.43) $$

This formula is just the series expression for an exponential; we therefore say that the parallel propagator is given by the path-ordered exponential

$$ P(\lambda, \lambda_0) = \mathcal{P} \exp\left( \int_{\lambda_0}^{\lambda} A(\eta)\, d\eta \right)\ , \qquad (3.44) $$

where once again this is just notation; the path-ordered exponential is defined to be the right hand side of (3.43). We can write it more explicitly as

$$ P^\mu{}_\rho(\lambda, \lambda_0) = \mathcal{P} \exp\left( -\int_{\lambda_0}^{\lambda} \Gamma^{\mu}_{\sigma\rho} \frac{dx^\sigma}{d\eta}\, d\eta \right)\ . \qquad (3.45) $$

It's nice to have an explicit formula, even if it is rather abstract. The same kind of expression appears in quantum field theory as "Dyson's Formula," where it arises because the Schrödinger equation for the time-evolution operator has the same form as (3.38). As an aside, an especially interesting example of the parallel propagator occurs when the path is a loop, starting and ending at the same point. Then if the connection is metric-compatible, the resulting matrix will just be a Lorentz transformation on the tangent space at the point. This transformation is known as the "holonomy" of the loop. If you know the holonomy of every possible loop, that turns out to be equivalent to knowing the metric. This fact has led Ashtekar and his collaborators to examine general relativity in the "loop representation," where the fundamental variables are holonomies rather than the explicit metric.
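The parallel transport equation (3.32), and the holonomy of a loop, can both be seen concretely on the unit two-sphere, $ds^2 = d\theta^2 + \sin^2\theta\, d\phi^2$, whose nonzero Christoffel symbols are $\Gamma^{\theta}_{\phi\phi} = -\sin\theta\cos\theta$ and $\Gamma^{\phi}_{\theta\phi} = \cot\theta$. The sketch below (my own numerical illustration; the integrator and step count are arbitrary choices) transports a vector around a circle of constant colatitude $\theta_0$; the vector returns rotated by the standard holonomy angle $2\pi\cos\theta_0$.

```python
import numpy as np

def transport_around_latitude(theta0, steps=20000):
    """Integrate (3.32) along theta = theta0, phi = lambda in [0, 2*pi]
    on the unit sphere, starting from V = d/dtheta, using classical RK4."""
    c, s = np.cos(theta0), np.sin(theta0)
    def rhs(V):
        Vth, Vph = V
        # dV^theta/dphi = -Gamma^theta_{phi phi} V^phi = sin*cos * V^phi
        # dV^phi/dphi   = -Gamma^phi_{phi theta} V^theta = -cot * V^theta
        return np.array([s * c * Vph, -(c / s) * Vth])
    V = np.array([1.0, 0.0])
    h = 2 * np.pi / steps
    for _ in range(steps):
        k1 = rhs(V); k2 = rhs(V + h / 2 * k1)
        k3 = rhs(V + h / 2 * k2); k4 = rhs(V + h * k3)
        V = V + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return V

theta0 = np.pi / 3
V = transport_around_latitude(theta0)
# V^theta comes back as cos(2*pi*cos(theta0)), the holonomy rotation of (3.34)'s
# preserved-norm vector; for theta0 = pi/3 that is cos(pi) = -1.
```

Note that the norm $g_{\mu\nu}V^\mu V^\nu = (V^\theta)^2 + \sin^2\theta_0\,(V^\phi)^2$ is preserved along the loop, as (3.34) requires for a metric-compatible connection.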
They have made some progress towards quantizing the theory in this approach, although the jury is still out about how much further progress can be made. With parallel transport understood, the next logical step is to discuss geodesics. A geodesic is the curved-space generalization of the notion of a "straight line" in Euclidean space. We all know what a straight line is: it's the path of shortest distance between two points. But there is an equally good definition -- a straight line is a path which parallel transports its own tangent vector. On a manifold with an arbitrary (not necessarily Christoffel) connection, these two concepts do not quite coincide, and we should discuss them separately. We'll take the second definition first, since it is computationally much more straightforward. The tangent vector to a path $x^\mu(\lambda)$ is $dx^\mu/d\lambda$. The condition that it be parallel transported is thus

$$ \frac{D}{d\lambda} \frac{dx^\mu}{d\lambda} = 0\ , \qquad (3.46) $$

or alternatively

$$ \frac{d^2 x^\mu}{d\lambda^2} + \Gamma^{\mu}_{\rho\sigma} \frac{dx^\rho}{d\lambda} \frac{dx^\sigma}{d\lambda} = 0\ . \qquad (3.47) $$

This is the geodesic equation, another one which you should memorize. We can easily see that it reproduces the usual notion of straight lines if the connection coefficients are the Christoffel symbols in Euclidean space; in that case we can choose Cartesian coordinates in which $\Gamma^{\mu}_{\rho\sigma} = 0$, and the geodesic equation is just $d^2 x^\mu / d\lambda^2 = 0$, which is the equation for a straight line. That was embarrassingly simple; let's turn to the more nontrivial case of the shortest distance definition. As we know, there are various subtleties involved in the definition of distance in a Lorentzian spacetime; for null paths the distance is zero, for timelike paths it's more convenient to use the proper time, etc. So in the name of simplicity let's do the calculation just for a timelike path - the resulting equation will turn out to be good for any path, so we are not losing any generality.
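The claim that (3.47) reproduces straight lines can also be checked in a coordinate system where the Christoffel symbols do not vanish. The following Python sketch (my own illustration, with an arbitrary RK4 integrator) integrates the geodesic equation on the plane in polar coordinates, using $\Gamma^{r}_{\theta\theta} = -r$ and $\Gamma^{\theta}_{r\theta} = 1/r$; the resulting curve is a straight line in the underlying Cartesian coordinates.

```python
import numpy as np

def geodesic_polar(y0, lam_max=1.0, steps=10000):
    """Integrate (3.47) in polar coordinates on the plane.
    State y = (r, theta, dr/dlam, dtheta/dlam)."""
    def rhs(y):
        r, th, dr, dth = y
        # d^2 r / dlam^2     = -Gamma^r_{theta theta} (dtheta)^2 = r (dtheta)^2
        # d^2 theta / dlam^2 = -2 Gamma^theta_{r theta} dr dtheta = -(2/r) dr dtheta
        return np.array([dr, dth, r * dth ** 2, -2.0 * dr * dth / r])
    y = np.array(y0, float)
    h = lam_max / steps
    for _ in range(steps):  # classical RK4
        k1 = rhs(y); k2 = rhs(y + h / 2 * k1)
        k3 = rhs(y + h / 2 * k2); k4 = rhs(y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

# Start at (r, theta) = (1, 0) with unit Cartesian velocity (0, 1), i.e.
# dr/dlam = 0, dtheta/dlam = 1. The straight line x = 1, y = lam should
# reach (r, theta) = (sqrt(2), pi/4) at lam = 1.
r, th, _, _ = geodesic_polar([1.0, 0.0, 0.0, 1.0])
```

Since the affine parameter here is Cartesian arc length, this also illustrates the point made below about affine parameterization: the parameterization is fixed by the transport condition, up to the linear freedom of (3.58).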
We therefore consider the proper time functional,

$$ \tau = \int \left( -g_{\mu\nu} \frac{dx^\mu}{d\lambda} \frac{dx^\nu}{d\lambda} \right)^{1/2} d\lambda\ , \qquad (3.48) $$

where the integral is over the path. To search for shortest-distance paths, we will do the usual calculus of variations treatment to seek extrema of this functional. (In fact they will turn out to be curves of maximum proper time.) We consider the change in the path and the resulting change in the metric,

$$ x^\mu \rightarrow x^\mu + \delta x^\mu\ , \qquad g_{\mu\nu} \rightarrow g_{\mu\nu} + \delta x^\sigma\, \partial_\sigma g_{\mu\nu}\ . \qquad (3.49) $$

(The second expression comes from Taylor expansion in curved spacetime, which as you can see uses the partial derivative, not the covariant derivative.) Plugging this into (3.48), we get

$$ \tau + \delta\tau = \int \left( -g_{\mu\nu} \frac{dx^\mu}{d\lambda} \frac{dx^\nu}{d\lambda} - \partial_\sigma g_{\mu\nu}\, \delta x^\sigma \frac{dx^\mu}{d\lambda} \frac{dx^\nu}{d\lambda} - 2 g_{\mu\nu} \frac{dx^\mu}{d\lambda} \frac{d(\delta x^\nu)}{d\lambda} \right)^{1/2} d\lambda\ . \qquad (3.50) $$

Since $\delta x^\sigma$ is assumed to be small, we can expand the square root of the expression in square brackets to find

$$ \delta\tau = \int \left( -g_{\mu\nu} \frac{dx^\mu}{d\lambda} \frac{dx^\nu}{d\lambda} \right)^{-1/2} \left( -\frac{1}{2} \partial_\sigma g_{\mu\nu}\, \delta x^\sigma \frac{dx^\mu}{d\lambda} \frac{dx^\nu}{d\lambda} - g_{\mu\nu} \frac{dx^\mu}{d\lambda} \frac{d(\delta x^\nu)}{d\lambda} \right) d\lambda\ . \qquad (3.51) $$

It is helpful at this point to change the parameterization of our curve from $\lambda$, which was arbitrary, to the proper time $\tau$ itself, using

$$ \frac{d\tau}{d\lambda} = \left( -g_{\mu\nu} \frac{dx^\mu}{d\lambda} \frac{dx^\nu}{d\lambda} \right)^{1/2}\ . \qquad (3.52) $$

We plug this into (3.51) (note: we plug it in for every appearance of $d\lambda$) to obtain

$$ \delta\tau = \int \left( -\frac{1}{2} \partial_\sigma g_{\mu\nu} \frac{dx^\mu}{d\tau} \frac{dx^\nu}{d\tau}\, \delta x^\sigma - g_{\mu\sigma} \frac{dx^\mu}{d\tau} \frac{d(\delta x^\sigma)}{d\tau} \right) d\tau = \int \left( -\frac{1}{2} \partial_\sigma g_{\mu\nu} \frac{dx^\mu}{d\tau} \frac{dx^\nu}{d\tau} + \frac{d}{d\tau}\left( g_{\mu\sigma} \frac{dx^\mu}{d\tau} \right) \right) \delta x^\sigma\, d\tau\ , \qquad (3.53) $$

where in the last line we have integrated by parts, avoiding possible boundary contributions by demanding that the variation $\delta x^\sigma$ vanish at the endpoints of the path. Since we are searching for stationary points, we want $\delta\tau$ to vanish for any variation; this implies

$$ -\frac{1}{2} \partial_\sigma g_{\mu\nu} \frac{dx^\mu}{d\tau} \frac{dx^\nu}{d\tau} + \frac{dg_{\mu\sigma}}{d\tau} \frac{dx^\mu}{d\tau} + g_{\mu\sigma} \frac{d^2 x^\mu}{d\tau^2} = 0\ , \qquad (3.54) $$

where we have used $dg_{\mu\sigma}/d\tau = (dx^\nu/d\tau)\,\partial_\nu g_{\mu\sigma}$. Some shuffling of dummy indices reveals

$$ g_{\mu\sigma} \frac{d^2 x^\mu}{d\tau^2} + \frac{1}{2} \left( \partial_\mu g_{\nu\sigma} + \partial_\nu g_{\sigma\mu} - \partial_\sigma g_{\mu\nu} \right) \frac{dx^\mu}{d\tau} \frac{dx^\nu}{d\tau} = 0\ , \qquad (3.55) $$

and multiplying by the inverse metric finally leads to

$$ \frac{d^2 x^\rho}{d\tau^2} + \frac{1}{2} g^{\rho\sigma} \left( \partial_\mu g_{\nu\sigma} + \partial_\nu g_{\sigma\mu} - \partial_\sigma g_{\mu\nu} \right) \frac{dx^\mu}{d\tau} \frac{dx^\nu}{d\tau} = 0\ . \qquad (3.56) $$

We see that this is precisely the geodesic equation (3.47), but with the specific choice of Christoffel connection (3.21). Thus, on a manifold with metric, extremals of the length functional are curves which parallel transport their tangent vector with respect to the Christoffel connection associated with that metric.
It doesn't matter if there is any other connection defined on the same manifold. Of course, in GR the Christoffel connection is the only one which is used, so the two notions are the same. The primary usefulness of geodesics in general relativity is that they are the paths followed by unaccelerated particles. In fact, the geodesic equation can be thought of as the generalization of Newton's law $\mathbf{f} = m\mathbf{a}$ for the case $\mathbf{f} = 0$. It is also possible to introduce forces by adding terms to the right hand side; in fact, looking back to the expression (1.103) for the Lorentz force in special relativity, it is tempting to guess that the equation of motion for a particle of mass $m$ and charge $q$ in general relativity should be

$$ \frac{d^2 x^\mu}{d\tau^2} + \Gamma^{\mu}_{\rho\sigma} \frac{dx^\rho}{d\tau} \frac{dx^\sigma}{d\tau} = \frac{q}{m} F^\mu{}_\nu \frac{dx^\nu}{d\tau}\ . \qquad (3.57) $$

We will talk about this more later, but in fact your guess would be correct. Having boldly derived these expressions, we should say some more careful words about the parameterization of a geodesic path. When we presented the geodesic equation as the requirement that the tangent vector be parallel transported, (3.47), we parameterized our path with some parameter $\lambda$, whereas when we found the formula (3.56) for the extremal of the spacetime interval we wound up with a very specific parameterization, the proper time. Of course from the form of (3.56) it is clear that a transformation

$$ \tau \rightarrow \lambda = a\tau + b\ , \qquad (3.58) $$

for some constants $a$ and $b$, leaves the equation invariant. Any parameter related to the proper time in this way is called an affine parameter, and is just as good as the proper time for parameterizing a geodesic. What was hidden in our derivation of (3.47) was that the demand that the tangent vector be parallel transported actually constrains the parameterization of the curve, specifically to one related to the proper time by (3.58).
In other words, if you start at some point and with some initial direction, and then construct a curve by beginning to walk in that direction and keeping your tangent vector parallel transported, you will not only define a path in the manifold but also (up to linear transformations) define the parameter along the path. Of course, there is nothing to stop you from using any other parameterization you like, but then (3.47) will not be satisfied. More generally you will satisfy an equation of the form

$$ \frac{d^2 x^\mu}{d\alpha^2} + \Gamma^{\mu}_{\rho\sigma} \frac{dx^\rho}{d\alpha} \frac{dx^\sigma}{d\alpha} = f(\alpha) \frac{dx^\mu}{d\alpha}\ , \qquad (3.59) $$

for some parameter $\alpha$ and some function $f(\alpha)$. Conversely, if (3.59) is satisfied along a curve you can always find an affine parameter $\lambda(\alpha)$ for which the geodesic equation (3.47) will be satisfied. An important property of geodesics in a spacetime with Lorentzian metric is that the character (timelike/null/spacelike) of the geodesic (relative to a metric-compatible connection) never changes. This is simply because parallel transport preserves inner products, and the character is determined by the inner product of the tangent vector with itself. This is why we were consistent to consider purely timelike paths when we derived (3.56); for spacelike paths we would have derived the same equation, since the only difference is an overall minus sign in the final answer. There are also null geodesics, which satisfy the same equation, except that the proper time cannot be used as a parameter (some set of allowed parameters will exist, related to each other by linear transformations). You can derive this fact either from the simple requirement that the tangent vector be parallel transported, or by extending the variation of (3.48) to include all non-spacelike paths. Let's now explain the earlier remark that timelike geodesics are maxima of the proper time. The reason we know this is true is that, given any timelike curve (geodesic or not), we can approximate it to arbitrary accuracy by a null curve.
To do this all we have to do is to consider "jagged" null curves which follow the timelike one:

[Figure 3.5]

As we increase the number of sharp corners, the null curve comes closer and closer to the timelike curve while still having zero path length. Timelike geodesics cannot therefore be curves of minimum proper time, since they are always infinitesimally close to curves of zero proper time; in fact they maximize the proper time. (This is how you can remember which twin in the twin paradox ages more - the one who stays home is basically on a geodesic, and therefore experiences more proper time.) Of course even this is being a little cavalier; actually every time we say "maximize" or "minimize" we should add the modifier "locally." It is often the case that between two points on a manifold there is more than one geodesic. For instance, on $S^2$ we can draw a great circle through any two points, and imagine travelling between them either the short way or the long way around. One of these is obviously longer than the other, although both are stationary points of the length functional. The final fact about geodesics before we move on to curvature proper is their use in mapping the tangent space at a point $p$ to a local neighborhood of $p$. To do this we notice that any geodesic $x^\mu(\lambda)$ which passes through $p$ can be specified by its behavior at $p$; let us choose the parameter value to be $\lambda(p) = 0$, and the tangent vector at $p$ to be

$$ \frac{dx^\mu}{d\lambda}(\lambda = 0) = k^\mu\ , \qquad (3.60) $$

for $k^\mu$ some vector at $p$ (some element of $T_p$). Then there will be a unique point on the manifold $M$ which lies on this geodesic where the parameter has the value $\lambda = 1$. We define the exponential map at $p$, $\exp_p : T_p \rightarrow M$, via

$$ \exp_p(k^\nu) = x^\nu(\lambda = 1)\ , \qquad (3.61) $$

where $x^\nu(\lambda)$ solves the geodesic equation subject to (3.60).
[Figure 3.6]

For some set of tangent vectors $k^\mu$ near the zero vector, this map will be well-defined, and in fact invertible. Thus in the neighborhood of $p$ given by the range of the map on this set of tangent vectors, the tangent vectors themselves define a coordinate system on the manifold. In this coordinate system, any geodesic through $p$ is expressed trivially as

$$ x^\mu(\lambda) = \lambda k^\mu\ , \qquad (3.62) $$

for some appropriate vector $k^\mu$. We won't go into detail about the properties of the exponential map, since in fact we won't be using it much, but it's important to emphasize that the range of the map is not necessarily the whole manifold, and the domain is not necessarily the whole tangent space. The range can fail to be all of $M$ simply because there can be two points which are not connected by any geodesic. (In a Euclidean signature metric this is impossible, but not in a Lorentzian spacetime.) The domain can fail to be all of $T_p$ because a geodesic may run into a singularity, which we think of as "the edge of the manifold." Manifolds which have such singularities are known as geodesically incomplete. This is not merely a problem for careful mathematicians; in fact the "singularity theorems" of Hawking and Penrose state that, for reasonable matter content (no negative energies), spacetimes in general relativity are almost guaranteed to be geodesically incomplete. As examples, the two most useful spacetimes in GR - the Schwarzschild solution describing black holes and the Friedmann-Robertson-Walker solutions describing homogeneous, isotropic cosmologies - both feature important singularities. Having set up the machinery of parallel transport and covariant derivatives, we are at last prepared to discuss curvature proper. The curvature is quantified by the Riemann tensor, which is derived from the connection.
The idea behind this measure of curvature is that we know what we mean by "flatness" of a connection - the conventional (and usually implicit) Christoffel connection associated with a Euclidean or Minkowskian metric has a number of properties which can be thought of as different manifestations of flatness. These include the fact that parallel transport around a closed loop leaves a vector unchanged, that covariant derivatives of tensors commute, and that initially parallel geodesics remain parallel. As we shall see, the Riemann tensor arises when we study how any of these properties are altered in more general contexts. We have already argued, using the two-sphere as an example, that parallel transport of a vector around a closed loop in a curved space will lead to a transformation of the vector. The resulting transformation depends on the total curvature enclosed by the loop; it would be more useful to have a local description of the curvature at each point, which is what the Riemann tensor is supposed to provide. One conventional way to introduce the Riemann tensor, therefore, is to consider parallel transport around an infinitesimal loop. We are not going to do that here, but take a more direct route. (Most of the presentations in the literature are either sloppy, or correct but very difficult to follow.) Nevertheless, even without working through the details, it is possible to see what form the answer should take. Imagine that we parallel transport a vector $V^\sigma$ around a closed loop defined by two vectors $A^\nu$ and $B^\mu$:

[Figure 3.7]

The (infinitesimal) lengths of the sides of the loop are $\delta a$ and $\delta b$, respectively. Now, we know the action of parallel transport is independent of coordinates, so there should be some tensor which tells us how the vector changes when it comes back to its starting point; it will be a linear transformation on a vector, and therefore involve one upper and one lower index.
But it will also depend on the two vectors A and B which define the loop; therefore there should be two additional lower indices to contract with $A^\nu$ and $B^\mu$. Furthermore, the tensor should be antisymmetric in these two indices, since interchanging the vectors corresponds to traversing the loop in the opposite direction, and should give the inverse of the original answer. (This is consistent with the fact that the transformation should vanish if A and B are the same vector.) We therefore expect that the expression for the change $\delta V^\rho$ experienced by this vector when parallel transported around the loop should be of the form
$$\delta V^\rho = (\delta a)(\delta b)\, A^\nu B^\mu\, R^\rho{}_{\sigma\mu\nu}\, V^\sigma\,, \qquad (3.63)$$
where $R^\rho{}_{\sigma\mu\nu}$ is a (1, 3) tensor known as the Riemann tensor (or simply "curvature tensor"). It is antisymmetric in the last two indices:
$$R^\rho{}_{\sigma\mu\nu} = - R^\rho{}_{\sigma\nu\mu}\,. \qquad (3.64)$$
(Of course, if (3.63) is taken as a definition of the Riemann tensor, there is a convention that needs to be chosen for the ordering of the indices. There is no agreement at all on what this convention should be, so be careful.) Knowing what we do about parallel transport, we could very carefully perform the necessary manipulations to see what happens to the vector under this operation, and the result would be a formula for the curvature tensor in terms of the connection coefficients. It is much quicker, however, to consider a related operation, the commutator of two covariant derivatives. The relationship between this and parallel transport around a loop should be evident; the covariant derivative of a tensor in a certain direction measures how much the tensor changes relative to what it would have been if it had been parallel transported (since the covariant derivative of a tensor in a direction along which it is parallel transported is zero).
The commutator of two covariant derivatives, then, measures the difference between parallel transporting the tensor first one way and then the other, versus the opposite ordering. [Figure 3.8: the two orderings of transport.] The actual computation is very straightforward. Considering a vector field $V^\rho$, we take
$$[\nabla_\mu, \nabla_\nu]V^\rho = \nabla_\mu\nabla_\nu V^\rho - \nabla_\nu\nabla_\mu V^\rho$$
$$= \partial_\mu(\nabla_\nu V^\rho) - \Gamma^\lambda_{\mu\nu}\nabla_\lambda V^\rho + \Gamma^\rho_{\mu\sigma}\nabla_\nu V^\sigma - (\mu \leftrightarrow \nu)$$
$$= \left(\partial_\mu\Gamma^\rho_{\nu\sigma} - \partial_\nu\Gamma^\rho_{\mu\sigma} + \Gamma^\rho_{\mu\lambda}\Gamma^\lambda_{\nu\sigma} - \Gamma^\rho_{\nu\lambda}\Gamma^\lambda_{\mu\sigma}\right)V^\sigma - 2\Gamma^\lambda_{[\mu\nu]}\nabla_\lambda V^\rho\,. \qquad (3.65)$$
In the last step we have relabeled some dummy indices and eliminated some terms that cancel when antisymmetrized. We recognize that the last term is simply the torsion tensor, and that the left hand side is manifestly a tensor; therefore the expression in parentheses must be a tensor itself. We write
$$[\nabla_\mu, \nabla_\nu]V^\rho = R^\rho{}_{\sigma\mu\nu}V^\sigma - T_{\mu\nu}{}^\lambda\,\nabla_\lambda V^\rho\,, \qquad (3.66)$$
where the Riemann tensor is identified as
$$R^\rho{}_{\sigma\mu\nu} = \partial_\mu\Gamma^\rho_{\nu\sigma} - \partial_\nu\Gamma^\rho_{\mu\sigma} + \Gamma^\rho_{\mu\lambda}\Gamma^\lambda_{\nu\sigma} - \Gamma^\rho_{\nu\lambda}\Gamma^\lambda_{\mu\sigma}\,. \qquad (3.67)$$
There are a number of things to notice about this expression: it is built from manifestly non-tensorial objects (partial derivatives and connection coefficients), yet the left hand side of (3.66) guarantees that the combination is a tensor; it is antisymmetric in the last two indices, as anticipated; and it holds for any connection, whether or not it is metric compatible or torsion free.

A useful notion is that of the commutator of two vector fields X and Y, which is a third vector field with components
$$[X, Y]^\mu = X^\lambda\partial_\lambda Y^\mu - Y^\lambda\partial_\lambda X^\mu\,. \qquad (3.69)$$
Both the torsion tensor and the Riemann tensor, thought of as multilinear maps, have elegant expressions in terms of the commutator. Thinking of the torsion as a map from two vector fields to a third vector field, we have
$$T(X, Y) = \nabla_X Y - \nabla_Y X - [X, Y]\,, \qquad (3.70)$$
and thinking of the Riemann tensor as a map from three vector fields to a fourth one, we have
$$R(X, Y)Z = \nabla_X\nabla_Y Z - \nabla_Y\nabla_X Z - \nabla_{[X,Y]}Z\,. \qquad (3.71)$$
In these expressions, the notation $\nabla_X$ refers to the covariant derivative along the vector field X; in components, $\nabla_X = X^\mu\nabla_\mu$. Note that the two vectors X and Y in (3.71) correspond to the two antisymmetric indices in the component form of the Riemann tensor. The last term in (3.71), involving the commutator [X, Y], vanishes when X and Y are taken to be the coordinate basis vector fields (since $[\partial_\mu, \partial_\nu] = 0$), which is why this term did not arise when we originally took the commutator of two covariant derivatives.
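Since (3.67) gives the Riemann tensor purely in terms of the connection coefficients, it can be evaluated mechanically. The following sketch (not part of the original notes; it assumes sympy is available, and the function names are chosen for illustration) computes the Christoffel symbols of a metric and then applies (3.67):

```python
# A sketch (not from the notes; assumes sympy) of the Riemann tensor
# formula (3.67), starting from the Christoffel symbols of a metric.
import sympy as sp

def christoffel(g, coords):
    """Gamma[rho][mu][nu] = (1/2) g^{rho lam} (d_mu g_{lam nu} + d_nu g_{lam mu} - d_lam g_{mu nu})."""
    dim = len(coords)
    ginv = g.inv()
    return [[[sp.simplify(sum(ginv[r, l]*(sp.diff(g[l, n], coords[m])
                + sp.diff(g[l, m], coords[n]) - sp.diff(g[m, n], coords[l]))
                for l in range(dim))/2)
              for n in range(dim)] for m in range(dim)] for r in range(dim)]

def riemann(g, coords):
    """R[rho][sig][mu][nu] = R^rho_{sig mu nu}, the four terms of (3.67)."""
    dim = len(coords)
    G = christoffel(g, coords)
    return [[[[sp.simplify(sp.diff(G[r][n][s], coords[m]) - sp.diff(G[r][m][s], coords[n])
               + sum(G[r][m][l]*G[l][n][s] - G[r][n][l]*G[l][m][s] for l in range(dim)))
               for n in range(dim)] for m in range(dim)] for s in range(dim)] for r in range(dim)]

# Flat plane in polar coordinates: the connection is nonzero, the curvature is not.
r, th = sp.symbols('r theta', positive=True)
g_polar = sp.diag(1, r**2)
G = christoffel(g_polar, [r, th])
R = riemann(g_polar, [r, th])
assert G[0][1][1] == -r    # Gamma^r_{theta theta} = -r
assert all(R[a][b][c][d] == 0 for a in range(2) for b in range(2)
           for c in range(2) for d in range(2))
```

The polar-coordinate check makes the earlier point about flatness concrete: nonvanishing Christoffel symbols do not by themselves signal curvature; the tensor combination (3.67) does.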
We will not use this notation extensively, but you might see it in the literature, so you should be able to decode it. Having defined the curvature tensor as something which characterizes the connection, let us now admit that in GR we are most concerned with the Christoffel connection. In this case the connection is derived from the metric, and the associated curvature may be thought of as that of the metric itself. This identification allows us to finally make sense of our informal notion that spaces for which the metric looks Euclidean or Minkowskian are flat. In fact it works both ways: if the components of the metric are constant in some coordinate system, the Riemann tensor will vanish, while if the Riemann tensor vanishes we can always construct a coordinate system in which the metric components are constant.

The first of these is easy to show. If we are in some coordinate system such that $\partial_\sigma g_{\mu\nu} = 0$ (everywhere, not just at a point), then $\Gamma^\rho_{\mu\nu} = 0$ and $\partial_\sigma\Gamma^\rho_{\mu\nu} = 0$; thus $R^\rho{}_{\sigma\mu\nu} = 0$ by (3.67). But this is a tensor equation, and if it is true in one coordinate system it must be true in any coordinate system. Therefore, the statement that the Riemann tensor vanishes is a necessary condition for it to be possible to find coordinates in which the components of $g_{\mu\nu}$ are constant everywhere. It is also a sufficient condition, although we have to work harder to show it. Start by choosing Riemann normal coordinates at some point p, so that $g_{\mu\nu} = \eta_{\mu\nu}$ at p. (Here we are using $\eta_{\mu\nu}$ in a generalized sense, as a matrix with either +1 or -1 for each diagonal element and zeroes elsewhere.
The actual arrangement of the +1's and -1's depends on the canonical form of the metric, but is irrelevant for the present argument.) Denote the basis vectors at p by $\hat e_{(\mu)}$, with components $\hat e^\sigma_{(\mu)}$. Then by construction we have
$$g_{\sigma\rho}\,\hat e^\sigma_{(\mu)}\,\hat e^\rho_{(\nu)}\,(p) = \eta_{\mu\nu}\,. \qquad (3.72)$$
Now let us parallel transport the entire set of basis vectors from p to another point q; the vanishing of the Riemann tensor ensures that the result will be independent of the path taken between p and q. Since parallel transport with respect to a metric compatible connection preserves inner products, we must have
$$g_{\sigma\rho}\,\hat e^\sigma_{(\mu)}\,\hat e^\rho_{(\nu)}\,(q) = \eta_{\mu\nu}\,. \qquad (3.73)$$
We therefore have specified a set of vector fields which everywhere define a basis in which the metric components are constant. This is completely unimpressive; it can be done on any manifold, regardless of what the curvature is. What we would like to show is that this is a coordinate basis (which can only be true if the curvature vanishes). We know that if the $\hat e_{(\mu)}$'s are a coordinate basis, their commutator will vanish:
$$\left[\hat e_{(\mu)}, \hat e_{(\nu)}\right] = 0\,. \qquad (3.74)$$
What we would really like is the converse: that if the commutator vanishes we can find coordinates $y^\mu$ such that $\hat e_{(\mu)} = \frac{\partial}{\partial y^\mu}$. In fact this is a true result, known as Frobenius's Theorem. It's something of a mess to prove, involving a good deal more mathematical apparatus than we have bothered to set up. Let's just take it for granted (skeptics can consult Schutz's Geometrical Methods book). Thus, we would like to demonstrate (3.74) for the vector fields we have set up. Let's use the expression (3.70) for the torsion:
$$\left[\hat e_{(\mu)}, \hat e_{(\nu)}\right] = \nabla_{\hat e_{(\mu)}}\hat e_{(\nu)} - \nabla_{\hat e_{(\nu)}}\hat e_{(\mu)} - T\!\left(\hat e_{(\mu)}, \hat e_{(\nu)}\right)\,. \qquad (3.75)$$
The torsion vanishes by hypothesis. The covariant derivatives will also vanish, given the method by which we constructed our vector fields; they were made by parallel transporting along arbitrary paths.
If the fields are parallel transported along arbitrary paths, they are certainly parallel transported along the vectors $\hat e_{(\mu)}$, and therefore their covariant derivatives in the direction of these vectors will vanish. Thus (3.70) implies that the commutator vanishes, and therefore that we can find a coordinate system $y^\mu$ for which these vector fields are the partial derivatives. In this coordinate system the metric will have components $\eta_{\mu\nu}$, as desired.

The Riemann tensor, with four indices, naively has $n^4$ independent components in an n-dimensional space. In fact the antisymmetry property (3.64) means that there are only $n(n-1)/2$ independent values these last two indices can take on, leaving us with $n^3(n-1)/2$ independent components. When we consider the Christoffel connection, however, there are a number of other symmetries that reduce the independent components further. Let's consider these now. The simplest way to derive the additional symmetries is to examine the Riemann tensor with all lower indices,
$$R_{\rho\sigma\mu\nu} = g_{\rho\lambda}\,R^\lambda{}_{\sigma\mu\nu}\,. \qquad (3.76)$$
Let us further consider the components of this tensor in Riemann normal coordinates established at a point p. Then the Christoffel symbols themselves will vanish, although their derivatives will not. We therefore have
$$R_{\rho\sigma\mu\nu} = g_{\rho\lambda}\left(\partial_\mu\Gamma^\lambda_{\nu\sigma} - \partial_\nu\Gamma^\lambda_{\mu\sigma}\right)$$
$$= \frac{1}{2}\left(\partial_\mu\partial_\nu g_{\sigma\rho} + \partial_\mu\partial_\sigma g_{\nu\rho} - \partial_\mu\partial_\rho g_{\nu\sigma} - \partial_\nu\partial_\mu g_{\sigma\rho} - \partial_\nu\partial_\sigma g_{\mu\rho} + \partial_\nu\partial_\rho g_{\mu\sigma}\right)$$
$$= \frac{1}{2}\left(\partial_\mu\partial_\sigma g_{\rho\nu} - \partial_\mu\partial_\rho g_{\nu\sigma} - \partial_\nu\partial_\sigma g_{\rho\mu} + \partial_\nu\partial_\rho g_{\mu\sigma}\right)\,. \qquad (3.77)$$
In the second line we have used $\partial_\mu g_{\lambda\tau} = 0$ in RNC's, and in the third line the fact that partials commute.
From this expression we can notice immediately two properties of $R_{\rho\sigma\mu\nu}$; it is antisymmetric in its first two indices,
$$R_{\rho\sigma\mu\nu} = - R_{\sigma\rho\mu\nu}\,, \qquad (3.78)$$
and it is invariant under interchange of the first pair of indices with the second:
$$R_{\rho\sigma\mu\nu} = R_{\mu\nu\rho\sigma}\,. \qquad (3.79)$$
With a little more work, which we leave to your imagination, we can see that the sum of cyclic permutations of the last three indices vanishes:
$$R_{\rho\sigma\mu\nu} + R_{\rho\mu\nu\sigma} + R_{\rho\nu\sigma\mu} = 0\,. \qquad (3.80)$$
This last property is equivalent to the vanishing of the antisymmetric part of the last three indices:
$$R_{\rho[\sigma\mu\nu]} = 0\,. \qquad (3.81)$$
All of these properties have been derived in a special coordinate system, but they are all tensor equations; therefore they will be true in any coordinates. Not all of them are independent; with some effort, you can show that (3.64), (3.78) and (3.81) together imply (3.79). The logical interdependence of the equations is usually less important than the simple fact that they are true.

Given these relationships between the different components of the Riemann tensor, how many independent quantities remain? Let's begin with the facts that $R_{\rho\sigma\mu\nu}$ is antisymmetric in the first two indices, antisymmetric in the last two indices, and symmetric under interchange of these two pairs. This means that we can think of it as a symmetric matrix $R_{[\rho\sigma][\mu\nu]}$, where the pairs $\rho\sigma$ and $\mu\nu$ are thought of as individual indices. An m × m symmetric matrix has m(m + 1)/2 independent components, while an n × n antisymmetric matrix has n(n - 1)/2 independent components. We therefore have
$$\frac{1}{2}\left[\frac{1}{2}n(n-1)\right]\left[\frac{1}{2}n(n-1) + 1\right] = \frac{1}{8}\left(n^4 - 2n^3 + 3n^2 - 2n\right) \qquad (3.82)$$
independent components. We still have to deal with the additional symmetry (3.81).
An immediate consequence of (3.81) is that the totally antisymmetric part of the Riemann tensor vanishes,
$$R_{[\rho\sigma\mu\nu]} = 0\,. \qquad (3.83)$$
In fact, this equation plus the other symmetries (3.64), (3.78) and (3.79) are enough to imply (3.81), as can be easily shown by expanding (3.83) and messing with the resulting terms. Therefore imposing the additional constraint of (3.83) is equivalent to imposing (3.81), once the other symmetries have been accounted for. How many independent restrictions does this represent? Let us imagine decomposing
$$R_{\rho\sigma\mu\nu} = X_{\rho\sigma\mu\nu} + R_{[\rho\sigma\mu\nu]}\,, \qquad (3.84)$$
where $X_{\rho\sigma\mu\nu}$ has vanishing totally antisymmetric part. It is easy to see that any totally antisymmetric 4-index tensor is automatically antisymmetric in its first two and last two indices, and symmetric under interchange of the two pairs. Therefore these properties are independent restrictions on $X_{\rho\sigma\mu\nu}$, unrelated to the requirement (3.83). Now a totally antisymmetric 4-index tensor has $n(n-1)(n-2)(n-3)/4!$ terms, and therefore (3.83) reduces the number of independent components by this amount. We are left with
$$\frac{1}{8}\left(n^4 - 2n^3 + 3n^2 - 2n\right) - \frac{n(n-1)(n-2)(n-3)}{24} = \frac{1}{12}\,n^2\left(n^2 - 1\right) \qquad (3.85)$$
independent components of the Riemann tensor. In four dimensions, therefore, the Riemann tensor has 20 independent components. (In one dimension it has none.) These twenty functions are precisely the 20 degrees of freedom in the second derivatives of the metric which we could not set to zero by a clever choice of coordinates. This should reinforce your confidence that the Riemann tensor is an appropriate measure of curvature.

In addition to the algebraic symmetries of the Riemann tensor (which constrain the number of independent components at any point), there is a differential identity which it obeys (which constrains its relative values at different points).
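The counting in (3.82)-(3.85) is easy to verify numerically. A small standalone check (not part of the notes, plain Python):

```python
# Counting independent components of the Riemann tensor, following (3.82)-(3.85).
def riemann_components(n):
    pairs = n*(n - 1)//2                        # antisymmetric pairs [rho sigma], [mu nu]
    symmetric = pairs*(pairs + 1)//2            # symmetric "matrix" in the two pairs, (3.82)
    antisym4 = n*(n - 1)*(n - 2)*(n - 3)//24    # constraints from R_[rho sigma mu nu] = 0, (3.83)
    return symmetric - antisym4

print([riemann_components(n) for n in (1, 2, 3, 4)])   # [0, 1, 6, 20]

# Agreement with the closed form n^2 (n^2 - 1)/12 of (3.85):
assert all(riemann_components(n) == n*n*(n*n - 1)//12 for n in range(1, 20))
```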
Consider the covariant derivative of the Riemann tensor, evaluated in Riemann normal coordinates:
$$\nabla_\lambda R_{\rho\sigma\mu\nu} = \partial_\lambda R_{\rho\sigma\mu\nu} = \frac{1}{2}\,\partial_\lambda\left(\partial_\mu\partial_\sigma g_{\rho\nu} - \partial_\mu\partial_\rho g_{\nu\sigma} - \partial_\nu\partial_\sigma g_{\rho\mu} + \partial_\nu\partial_\rho g_{\mu\sigma}\right)\,. \qquad (3.86)$$
We would like to consider the sum of cyclic permutations of the first three indices:
$$\nabla_\lambda R_{\rho\sigma\mu\nu} + \nabla_\rho R_{\sigma\lambda\mu\nu} + \nabla_\sigma R_{\lambda\rho\mu\nu} = 0\,; \qquad (3.87)$$
the third-derivative terms from (3.86) cancel in pairs when the sum is written out. Once again, since this is an equation between tensors it is true in any coordinate system, even though we derived it in a particular one. We recognize by now that the antisymmetry $R_{\rho\sigma\mu\nu} = - R_{\sigma\rho\mu\nu}$ allows us to write this result as
$$\nabla_{[\lambda}R_{\rho\sigma]\mu\nu} = 0\,. \qquad (3.88)$$
This is known as the Bianchi identity. (Notice that for a general connection there would be additional terms involving the torsion tensor.) It is closely related to the Jacobi identity, since (as you can show) it basically expresses
$$\left[\left[\nabla_\lambda, \nabla_\rho\right], \nabla_\sigma\right] + \left[\left[\nabla_\rho, \nabla_\sigma\right], \nabla_\lambda\right] + \left[\left[\nabla_\sigma, \nabla_\lambda\right], \nabla_\rho\right] = 0\,. \qquad (3.89)$$

It is frequently useful to consider contractions of the Riemann tensor. Even without the metric, we can form a contraction known as the Ricci tensor:
$$R_{\mu\nu} = R^\lambda{}_{\mu\lambda\nu}\,. \qquad (3.90)$$
Notice that, for the curvature tensor formed from an arbitrary (not necessarily Christoffel) connection, there are a number of independent contractions to take. Our primary concern is with the Christoffel connection, for which (3.90) is the only independent contraction (modulo conventions for the sign, which of course change from place to place). The Ricci tensor associated with the Christoffel connection is symmetric,
$$R_{\mu\nu} = R_{\nu\mu}\,, \qquad (3.91)$$
as a consequence of the various symmetries of the Riemann tensor. Using the metric, we can take a further contraction to form the Ricci scalar:
$$R = R^\mu{}_\mu = g^{\mu\nu}R_{\mu\nu}\,. \qquad (3.92)$$
An especially useful form of the Bianchi identity comes from contracting twice on (3.87):
$$0 = g^{\nu\sigma}g^{\mu\lambda}\left(\nabla_\lambda R_{\rho\sigma\mu\nu} + \nabla_\rho R_{\sigma\lambda\mu\nu} + \nabla_\sigma R_{\lambda\rho\mu\nu}\right) = 2\nabla^\mu R_{\rho\mu} - \nabla_\rho R\,, \qquad (3.93)$$
or
$$\nabla^\mu R_{\rho\mu} = \frac{1}{2}\nabla_\rho R\,. \qquad (3.94)$$
(Notice that, unlike the partial derivative, it makes sense to raise an index on the covariant derivative, due to metric compatibility.)
If we define the Einstein tensor as
$$G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2}R\,g_{\mu\nu}\,, \qquad (3.95)$$
then we see that the twice-contracted Bianchi identity (3.94) is equivalent to
$$\nabla^\mu G_{\mu\nu} = 0\,. \qquad (3.96)$$
The Einstein tensor, which is symmetric due to the symmetry of the Ricci tensor and the metric, will be of great importance in general relativity.

The Ricci tensor and the Ricci scalar contain information about "traces" of the Riemann tensor. It is sometimes useful to consider separately those pieces of the Riemann tensor which the Ricci tensor doesn't tell us about. We therefore invent the Weyl tensor, which is basically the Riemann tensor with all of its contractions removed. It is given in n dimensions by
$$C_{\rho\sigma\mu\nu} = R_{\rho\sigma\mu\nu} - \frac{2}{n-2}\left(g_{\rho[\mu}R_{\nu]\sigma} - g_{\sigma[\mu}R_{\nu]\rho}\right) + \frac{2}{(n-1)(n-2)}\,g_{\rho[\mu}g_{\nu]\sigma}\,R\,. \qquad (3.97)$$
This messy formula is designed so that all possible contractions of $C_{\rho\sigma\mu\nu}$ vanish, while it retains the symmetries of the Riemann tensor:
$$C_{\rho\sigma\mu\nu} = C_{[\rho\sigma][\mu\nu]}\,,\qquad C_{\rho\sigma\mu\nu} = C_{\mu\nu\rho\sigma}\,,\qquad C_{\rho[\sigma\mu\nu]} = 0\,. \qquad (3.98)$$
The Weyl tensor is only defined in three or more dimensions, and in three dimensions it vanishes identically. For $n \geq 4$ it satisfies a version of the Bianchi identity,
$$\nabla^\rho C_{\rho\sigma\mu\nu} = 2\,\frac{n-3}{n-2}\left(\nabla_{[\mu}R_{\nu]\sigma} + \frac{1}{2(n-1)}\,g_{\sigma[\mu}\nabla_{\nu]}R\right)\,. \qquad (3.99)$$
One of the most important properties of the Weyl tensor is that it is invariant under conformal transformations. This means that if you compute $C^\rho{}_{\sigma\mu\nu}$ for some metric $g_{\mu\nu}$, and then compute it again for a metric given by $\Omega^2(x)\,g_{\mu\nu}$, where $\Omega(x)$ is an arbitrary nonvanishing function of spacetime, you get the same answer. For this reason it is often known as the "conformal tensor."

After this large amount of formalism, it might be time to step back and think about what curvature means for some simple examples. First notice that, according to (3.85), in 1, 2, 3 and 4 dimensions there are 0, 1, 6 and 20 components of the curvature tensor, respectively.
(Everything we say about the curvature in these examples refers to the curvature associated with the Christoffel connection, and therefore the metric.) This means that one-dimensional manifolds (such as $S^1$) are never curved; the intuition you have that tells you that a circle is curved comes from thinking of it embedded in a certain flat two-dimensional plane. (There is something called "extrinsic curvature," which characterizes the way something is embedded in a higher dimensional space. Our notion of curvature is "intrinsic," and has nothing to do with such embeddings.)

The distinction between intrinsic and extrinsic curvature is also important in two dimensions, where the curvature has one independent component. (In fact, all of the information about the curvature is contained in the single component of the Ricci scalar.) Consider a cylinder, $\mathbf{R}\times S^1$. [Figure 3.9: a cylinder.] Although this looks curved from our point of view, it should be clear that we can put a metric on the cylinder whose components are constant in an appropriate coordinate system: simply unroll it and use the induced metric from the plane. In this metric, the cylinder is flat. (There is also nothing to stop us from introducing a different metric in which the cylinder is not flat, but the point we are trying to emphasize is that it can be made flat in some metric.) The same story holds for the torus: [Figure 3.10: a torus as a square region with opposite sides identified.] We can think of the torus as a square region of the plane with opposite sides identified (in other words, $S^1\times S^1$), from which it is clear that it can have a flat metric even though it looks curved from the embedded point of view. A cone is an example of a two-dimensional manifold with nonzero curvature at exactly one point. We can see this also by unrolling it; the cone is equivalent to the plane with a "deficit angle" removed and opposite sides identified: [Figure 3.11: the cone unrolled into a plane with a deficit angle.] In the metric inherited from this description as part of the flat plane, the cone is flat everywhere but at its vertex.
This can be seen by considering parallel transport of a vector around various loops; if a loop does not enclose the vertex, there will be no overall transformation, whereas a loop that does enclose the vertex (say, just one time) will lead to a rotation by an angle which is just the deficit angle. [Figure 3.12: parallel transport around loops on the cone.]

Our favorite example is of course the two-sphere, with metric
$$ds^2 = a^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)\,, \qquad (3.100)$$
where a is the radius of the sphere (thought of as embedded in $\mathbf{R}^3$). Without going through the details, the nonzero connection coefficients are
$$\Gamma^\theta_{\phi\phi} = -\sin\theta\cos\theta\,,\qquad \Gamma^\phi_{\theta\phi} = \Gamma^\phi_{\phi\theta} = \cot\theta\,. \qquad (3.101)$$
Let's compute a promising component of the Riemann tensor:
$$R^\theta{}_{\phi\theta\phi} = \partial_\theta\Gamma^\theta_{\phi\phi} - \partial_\phi\Gamma^\theta_{\theta\phi} + \Gamma^\theta_{\theta\lambda}\Gamma^\lambda_{\phi\phi} - \Gamma^\theta_{\phi\lambda}\Gamma^\lambda_{\theta\phi}$$
$$= \left(\sin^2\theta - \cos^2\theta\right) - (0) + (0) - (-\sin\theta\cos\theta)(\cot\theta) = \sin^2\theta\,. \qquad (3.102)$$
(The notation is obviously imperfect, since the Greek letter $\lambda$ is a dummy index which is summed over, while the Greek letters $\theta$ and $\phi$ represent specific coordinates.) Lowering an index, we have
$$R_{\theta\phi\theta\phi} = g_{\theta\lambda}R^\lambda{}_{\phi\theta\phi} = a^2\sin^2\theta\,. \qquad (3.103)$$
It is easy to check that all of the components of the Riemann tensor either vanish or are related to this one by symmetry. We can go on to compute the Ricci tensor via $R_{\mu\nu} = g^{\alpha\beta}R_{\alpha\mu\beta\nu}$. We obtain
$$R_{\theta\theta} = 1\,,\qquad R_{\theta\phi} = R_{\phi\theta} = 0\,,\qquad R_{\phi\phi} = \sin^2\theta\,. \qquad (3.104)$$
The Ricci scalar is similarly straightforward:
$$R = g^{\theta\theta}R_{\theta\theta} + g^{\phi\phi}R_{\phi\phi} = \frac{2}{a^2}\,. \qquad (3.105)$$
Therefore the Ricci scalar, which for a two-dimensional manifold completely characterizes the curvature, is a constant over this two-sphere. This is a reflection of the fact that the manifold is "maximally symmetric," a concept we will define more precisely later (although it means what you think it should). In any number of dimensions the curvature of a maximally symmetric space satisfies (for some constant a)
$$R_{\rho\sigma\mu\nu} = \frac{1}{a^2}\left(g_{\rho\mu}g_{\sigma\nu} - g_{\rho\nu}g_{\sigma\mu}\right)\,, \qquad (3.106)$$
which you may check is satisfied by this example. Notice that the Ricci scalar is not only constant for the two-sphere, it is manifestly positive.
We say that the sphere is "positively curved" (of course a convention or two came into play, but fortunately our conventions conspired so that spaces which everyone agrees to call positively curved actually have a positive Ricci scalar). From the point of view of someone living on a manifold which is embedded in a higher-dimensional Euclidean space, if they are sitting at a point of positive curvature the space curves away from them in the same way in any direction, while in a negatively curved space it curves away in opposite directions. Negatively curved spaces are therefore saddle-like. [Figure 3.13: positively and negatively curved surfaces.]

Enough fun with examples. There is one more topic we have to cover before introducing general relativity itself: geodesic deviation. You have undoubtedly heard that the defining property of Euclidean (flat) geometry is the parallel postulate: initially parallel lines remain parallel forever. Of course in a curved space this is not true; on a sphere, certainly, initially parallel geodesics will eventually cross. We would like to quantify this behavior for an arbitrary curved space. The problem is that the notion of "parallel" does not extend naturally from flat to curved spaces. Instead what we will do is to construct a one-parameter family of geodesics, $\gamma_s(t)$. That is, for each $s \in \mathbf{R}$, $\gamma_s$ is a geodesic parameterized by the affine parameter t. The collection of these curves defines a smooth two-dimensional surface (embedded in a manifold M of arbitrary dimensionality). The coordinates on this surface may be chosen to be s and t, provided we have chosen a family of geodesics which do not cross. The entire surface is the set of points $x^\mu(s, t) \in M$.
We have two natural vector fields: the tangent vectors to the geodesics,
$$T^\mu = \frac{\partial x^\mu}{\partial t}\,, \qquad (3.107)$$
and the "deviation vectors"
$$S^\mu = \frac{\partial x^\mu}{\partial s}\,. \qquad (3.108)$$
This name derives from the informal notion that $S^\mu$ points from one geodesic towards the neighboring ones. [Figure 3.14: a family of geodesics with tangent vectors $T^\mu$ and deviation vectors $S^\mu$.] The idea that $S^\mu$ points from one geodesic to the next inspires us to define the "relative velocity of geodesics,"
$$V^\mu = (\nabla_T S)^\mu = T^\rho\nabla_\rho S^\mu\,, \qquad (3.109)$$
and the "relative acceleration of geodesics,"
$$a^\mu = (\nabla_T V)^\mu = T^\rho\nabla_\rho V^\mu\,. \qquad (3.110)$$
You should take the names with a grain of salt, but these vectors are certainly well-defined. Since S and T are basis vectors adapted to a coordinate system, their commutator vanishes: [S, T] = 0. We would like to consider the conventional case where the torsion vanishes, so from (3.70) we then have
$$S^\rho\nabla_\rho T^\mu = T^\rho\nabla_\rho S^\mu\,. \qquad (3.111)$$
With this in mind, let's compute the acceleration:
$$a^\mu = T^\rho\nabla_\rho\left(T^\sigma\nabla_\sigma S^\mu\right)$$
$$= T^\rho\nabla_\rho\left(S^\sigma\nabla_\sigma T^\mu\right)$$
$$= \left(T^\rho\nabla_\rho S^\sigma\right)\left(\nabla_\sigma T^\mu\right) + T^\rho S^\sigma\,\nabla_\rho\nabla_\sigma T^\mu$$
$$= \left(T^\rho\nabla_\rho S^\sigma\right)\left(\nabla_\sigma T^\mu\right) + T^\rho S^\sigma\left(\nabla_\sigma\nabla_\rho T^\mu + R^\mu{}_{\nu\rho\sigma}T^\nu\right)$$
$$= \left(T^\rho\nabla_\rho S^\sigma\right)\left(\nabla_\sigma T^\mu\right) + S^\sigma\nabla_\sigma\left(T^\rho\nabla_\rho T^\mu\right) - \left(S^\sigma\nabla_\sigma T^\rho\right)\left(\nabla_\rho T^\mu\right) + R^\mu{}_{\nu\rho\sigma}T^\nu T^\rho S^\sigma$$
$$= R^\mu{}_{\nu\rho\sigma}T^\nu T^\rho S^\sigma\,. \qquad (3.112)$$
Let's think about this line by line. The first line is the definition of $a^\mu$, and the second line comes directly from (3.111). The third line is simply the Leibniz rule. The fourth line replaces a double covariant derivative by the derivatives in the opposite order plus the Riemann tensor. In the fifth line we use Leibniz again (in the opposite order from usual), and then we cancel two identical terms and notice that the term involving $T^\rho\nabla_\rho T^\mu$ vanishes because $T^\mu$ is the tangent vector to a geodesic. The result,
$$a^\mu = \frac{D^2}{dt^2}S^\mu = R^\mu{}_{\nu\rho\sigma}T^\nu T^\rho S^\sigma\,, \qquad (3.113)$$
is known as the geodesic deviation equation. It expresses something that we might have expected: the relative acceleration between two neighboring geodesics is proportional to the curvature. Physically, of course, the acceleration of neighboring geodesics is interpreted as a manifestation of gravitational tidal forces. This reminds us that we are very close to doing physics by now.
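As a concrete illustration (a worked example added here, not in the original text), consider the two-sphere of radius a. Using the maximally symmetric form (3.106) of its curvature, the geodesic deviation equation (3.113) can be evaluated exactly:

```latex
% Geodesic deviation on the two-sphere of radius a.
% Raising an index on R_{\rho\sigma\mu\nu} = (1/a^2)(g_{\rho\mu}g_{\sigma\nu} - g_{\rho\nu}g_{\sigma\mu}) gives
R^\mu{}_{\nu\rho\sigma} = \frac{1}{a^2}\left(\delta^\mu_\rho\, g_{\nu\sigma} - \delta^\mu_\sigma\, g_{\nu\rho}\right).
% Insert this into (3.113), for unit-speed geodesics (g_{\mu\nu}T^\mu T^\nu = 1) with the
% deviation vector orthogonal to them (g_{\mu\nu}T^\mu S^\nu = 0):
\frac{D^2 S^\mu}{dt^2} = R^\mu{}_{\nu\rho\sigma} T^\nu T^\rho S^\sigma
  = \frac{1}{a^2}\left[T^\mu \left(T_\sigma S^\sigma\right) - S^\mu \left(T_\nu T^\nu\right)\right]
  = -\frac{1}{a^2}\, S^\mu .
% The separation obeys a harmonic-oscillator equation, S \propto \sin(t/a):
% great circles leaving a common point spread apart, reach maximum separation a
% quarter of the way around, and reconverge at the antipodal point.
```

This is the promised behavior on a positively curved space: initially parallel geodesics focus toward one another.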
There is one last piece of formalism which it would be nice to cover before we move on to gravitation proper. What we will do is to consider once again (although much more concisely) the formalism of connections and curvature, but this time we will use sets of basis vectors in the tangent space which are not derived from any coordinate system. It will turn out that this slight change in emphasis reveals a different point of view on the connection and curvature, one in which the relationship to gauge theories in particle physics is much more transparent. In fact the concepts to be introduced are very straightforward, but the subject is a notational nightmare, so it looks more difficult than it really is.

Up until now we have been taking advantage of the fact that a natural basis for the tangent space $T_p$ at a point p is given by the partial derivatives with respect to the coordinates at that point, $\hat e_{(\mu)} = \partial_\mu$. Similarly, a basis for the cotangent space $T^*_p$ is given by the gradients of the coordinate functions, $\hat\theta^{(\mu)} = dx^\mu$. There is nothing to stop us, however, from setting up any bases we like. Let us therefore imagine that at each point in the manifold we introduce a set of basis vectors $\hat e_{(a)}$ (indexed by a Latin letter rather than Greek, to remind us that they are not related to any coordinate system). We will choose these basis vectors to be "orthonormal", in a sense which is appropriate to the signature of the manifold we are working on. That is, if the canonical form of the metric is written $\eta_{ab}$, we demand that the inner product of our basis vectors be
$$g\!\left(\hat e_{(a)}, \hat e_{(b)}\right) = \eta_{ab}\,, \qquad (3.114)$$
where g( , ) is the usual metric tensor. Thus, in a Lorentzian spacetime $\eta_{ab}$ represents the Minkowski metric, while in a space with positive-definite metric it would represent the Euclidean metric.
The set of vectors comprising an orthonormal basis is sometimes known as a tetrad (from Greek tetras, "a group of four") or vielbein (from the German for "many legs"). In different numbers of dimensions it occasionally becomes a vierbein (four), dreibein (three), zweibein (two), and so on. (Just as we cannot in general find coordinate charts which cover the entire manifold, we will often not be able to find a single set of smooth basis vector fields which are defined everywhere. As usual, we can overcome this problem by working in different patches and making sure things are well-behaved on the overlaps.)

The point of having a basis is that any vector can be expressed as a linear combination of basis vectors. Specifically, we can express our old basis vectors $\hat e_{(\mu)} = \partial_\mu$ in terms of the new ones:
$$\hat e_{(\mu)} = e^a{}_\mu\,\hat e_{(a)}\,. \qquad (3.115)$$
The components $e^a{}_\mu$ form an n × n invertible matrix. (In accord with our usual practice of blurring the distinction between objects and their components, we will refer to the $e^a{}_\mu$ as the tetrad or vielbein, and often in the plural as "vielbeins.") We denote their inverse by switching indices to obtain $e^\mu{}_a$, which satisfy
$$e^\mu{}_a\,e^a{}_\nu = \delta^\mu_\nu\,,\qquad e^a{}_\mu\,e^\mu{}_b = \delta^a_b\,. \qquad (3.116)$$
These serve as the components of the vectors $\hat e_{(a)}$ in the coordinate basis:
$$\hat e_{(a)} = e^\mu{}_a\,\hat e_{(\mu)}\,. \qquad (3.117)$$
In terms of the inverse vielbeins, (3.114) becomes
$$g_{\mu\nu}\,e^\mu{}_a\,e^\nu{}_b = \eta_{ab}\,, \qquad (3.118)$$
or equivalently
$$g_{\mu\nu} = e^a{}_\mu\,e^b{}_\nu\,\eta_{ab}\,. \qquad (3.119)$$
This last equation sometimes leads people to say that the vielbeins are the "square root" of the metric. We can similarly set up an orthonormal basis of one-forms in $T^*_p$, which we denote $\hat\theta^{(a)}$.
They may be chosen to be compatible with the basis vectors, in the sense that
$$\hat\theta^{(a)}\!\left(\hat e_{(b)}\right) = \delta^a_b\,. \qquad (3.120)$$
It is an immediate consequence of this that the orthonormal one-forms are related to their coordinate-based cousins $\hat\theta^{(\mu)} = dx^\mu$ by
$$\hat\theta^{(a)} = e^a{}_\mu\,\hat\theta^{(\mu)} \qquad (3.121)$$
and
$$\hat\theta^{(\mu)} = e^\mu{}_a\,\hat\theta^{(a)}\,. \qquad (3.122)$$
The vielbeins $e^a{}_\mu$ thus serve double duty as the components of the coordinate basis vectors in terms of the orthonormal basis vectors, and as components of the orthonormal basis one-forms in terms of the coordinate basis one-forms; while the inverse vielbeins serve as the components of the orthonormal basis vectors in terms of the coordinate basis, and as components of the coordinate basis one-forms in terms of the orthonormal basis. Any other vector can be expressed in terms of its components in the orthonormal basis. If a vector V is written in the coordinate basis as $V^\mu\hat e_{(\mu)}$ and in the orthonormal basis as $V^a\hat e_{(a)}$, the sets of components will be related by
$$V^a = e^a{}_\mu\,V^\mu\,. \qquad (3.123)$$
So the vielbeins allow us to "switch from Latin to Greek indices and back." The nice property of tensors, that there is usually only one sensible thing to do based on index placement, is of great help here. We can go on to refer to multi-index tensors in either basis, or even in terms of mixed components:
$$V^a{}_\nu = e^a{}_\mu\,V^\mu{}_\nu\,. \qquad (3.124)$$
Looking back at (3.118), we see that the components of the metric tensor in the orthonormal basis are just those of the flat metric, $\eta_{ab}$. (For this reason the Greek indices are sometimes referred to as "curved" and the Latin ones as "flat.") In fact we can go so far as to raise and lower the Latin indices using the flat metric and its inverse $\eta^{ab}$. You can check for yourself that everything works okay (e.g., that lowering an index with the metric commutes with changing from orthonormal to coordinate bases).
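As a sanity check on (3.116) and (3.119) (an added example, not in the original text; it assumes sympy is available), take the obvious orthonormal basis on the two-sphere, $\hat\theta^{(1)} = a\,d\theta$ and $\hat\theta^{(2)} = a\sin\theta\,d\phi$:

```python
# Vielbeins for the two-sphere: e^1_theta = a, e^2_phi = a sin(theta).
import sympy as sp

a, th = sp.symbols('a theta', positive=True)
eta = sp.eye(2)                       # canonical (Euclidean) form of the metric
e = sp.Matrix([[a, 0],                # e^a_mu: rows a = 1,2; columns mu = theta, phi
               [0, a*sp.sin(th)]])

# (3.119): g_{mu nu} = e^a_mu e^b_nu eta_{ab}
g = e.T*eta*e
assert sp.simplify(g - sp.diag(a**2, a**2*sp.sin(th)**2)) == sp.zeros(2, 2)

# (3.116): the inverse vielbein e^mu_a
e_inv = e.inv()
assert sp.simplify(e_inv*e - sp.eye(2)) == sp.zeros(2, 2)
assert sp.simplify(e*e_inv - sp.eye(2)) == sp.zeros(2, 2)
```

The recovered $g_{\mu\nu}$ is exactly the round metric (3.100), illustrating the "square root of the metric" slogan.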
By introducing a new set of basis vectors and one-forms, we necessitate a return to our favorite topic of transformation properties. We've been careful all along to emphasize that the tensor transformation law was only an indirect outcome of a coordinate transformation; the real issue was a change of basis. Now that we have non-coordinate bases, these bases can be changed independently of the coordinates. The only restriction is that the orthonormality property (3.114) be preserved. But we know what kind of transformations preserve the flat metric - in a Euclidean signature metric they are orthogonal transformations, while in a Lorentzian signature metric they are Lorentz transformations. We therefore consider changes of basis of the form
$$\hat e_{(a)} \rightarrow \hat e_{(a')} = \Lambda_{a'}{}^{a}(x)\,\hat e_{(a)}\,, \qquad (3.125)$$
where the matrices $\Lambda_{a'}{}^{a}(x)$ represent position-dependent transformations which (at each point) leave the canonical form of the metric unaltered:
$$\Lambda_{a'}{}^{a}\,\Lambda_{b'}{}^{b}\,\eta_{ab} = \eta_{a'b'}\,. \qquad (3.126)$$
In fact these matrices correspond to what in flat space we called the inverse Lorentz transformations (which operate on basis vectors); as before we also have ordinary Lorentz transformations $\Lambda^{a'}{}_{a}$, which transform the basis one-forms. As far as components are concerned, as before we transform upper indices with $\Lambda^{a'}{}_{a}$ and lower indices with $\Lambda_{a'}{}^{a}$.

So we now have the freedom to perform a Lorentz transformation (or an ordinary Euclidean rotation, depending on the signature) at every point in space. These transformations are therefore called local Lorentz transformations, or LLT's. We still have our usual freedom to make changes in coordinates, which are called general coordinate transformations, or GCT's. Both can happen at the same time, resulting in a mixed tensor transformation law:
$$T^{a'\mu'}{}_{b'\nu'} = \Lambda^{a'}{}_{a}\,\frac{\partial x^{\mu'}}{\partial x^\mu}\,\Lambda_{b'}{}^{b}\,\frac{\partial x^\nu}{\partial x^{\nu'}}\,T^{a\mu}{}_{b\nu}\,. \qquad (3.127)$$
Translating what we know about tensors into non-coordinate bases is for the most part merely a matter of sticking vielbeins in the right places.
The crucial exception comes when we begin to differentiate things. In our ordinary formalism, the covariant derivative of a tensor is given by its partial derivative plus correction terms, one for each index, involving the tensor and the connection coefficients. The same procedure will continue to be true for the non-coordinate basis, but we replace the ordinary connection coefficients $\Gamma^\lambda_{\mu\nu}$ by the spin connection, denoted $\omega_\mu{}^a{}_b$. Each Latin index gets a factor of the spin connection in the usual way:
$$\nabla_\mu X^a{}_b = \partial_\mu X^a{}_b + \omega_\mu{}^a{}_c\,X^c{}_b - \omega_\mu{}^c{}_b\,X^a{}_c\,. \qquad (3.128)$$
(The name "spin connection" comes from the fact that this can be used to take covariant derivatives of spinors, which is actually impossible using the conventional connection coefficients.) In the presence of mixed Latin and Greek indices we get terms of both kinds.

The usual demand that a tensor be independent of the way it is written allows us to derive a relationship between the spin connection, the vielbeins, and the $\Gamma^\nu_{\mu\lambda}$'s. Consider the covariant derivative of a vector X, first in a purely coordinate basis:
$$\nabla X = \left(\nabla_\mu X^\nu\right)dx^\mu\otimes\partial_\nu = \left(\partial_\mu X^\nu + \Gamma^\nu_{\mu\lambda}X^\lambda\right)dx^\mu\otimes\partial_\nu\,. \qquad (3.129)$$
Now find the same object in a mixed basis, and convert into the coordinate basis:
$$\nabla X = \left(\partial_\mu X^a + \omega_\mu{}^a{}_b\,X^b\right)dx^\mu\otimes\hat e_{(a)} = \left(\partial_\mu\!\left(e^a{}_\lambda X^\lambda\right) + \omega_\mu{}^a{}_b\,e^b{}_\lambda X^\lambda\right)e^\nu{}_a\,dx^\mu\otimes\partial_\nu\,. \qquad (3.130)$$
Comparison with (3.129) reveals
$$\Gamma^\nu_{\mu\lambda} = e^\nu{}_a\,\partial_\mu e^a{}_\lambda + e^\nu{}_a\,e^b{}_\lambda\,\omega_\mu{}^a{}_b\,, \qquad (3.131)$$
or equivalently
$$\omega_\mu{}^a{}_b = e^a{}_\nu\,e^\lambda{}_b\,\Gamma^\nu_{\mu\lambda} - e^\lambda{}_b\,\partial_\mu e^a{}_\lambda\,. \qquad (3.132)$$
A bit of manipulation allows us to write this relation as the vanishing of the covariant derivative of the vielbein,
$$\nabla_\mu e^a{}_\nu = \partial_\mu e^a{}_\nu - \Gamma^\lambda_{\mu\nu}\,e^a{}_\lambda + \omega_\mu{}^a{}_b\,e^b{}_\nu = 0\,, \qquad (3.133)$$
which is sometimes known as the "tetrad postulate." Note that this is always true; we did not need to assume anything about the connection in order to derive it. Specifically, we did not need to assume that the connection was metric compatible or torsion free.

Since the connection may be thought of as something we need to fix up the transformation law of the covariant derivative, it should come as no surprise that the spin connection does not itself obey the tensor transformation law.
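For the natural two-sphere vielbein $e^1{}_\theta = a$, $e^2{}_\phi = a\sin\theta$, the relation (3.132) can be evaluated explicitly (an added check, not in the original text; it assumes sympy is available). One finds a single independent spin connection component, $\omega_\phi{}^1{}_2 = -\cos\theta$:

```python
# Spin connection of the two-sphere from (3.132):
# omega_mu^a_b = e^a_nu e^lam_b Gamma^nu_{mu lam} - e^lam_b d_mu e^a_lam.
import sympy as sp

a, th, ph = sp.symbols('a theta phi', positive=True)
x = [th, ph]
g = sp.diag(a**2, a**2*sp.sin(th)**2)
ginv = g.inv()
e = sp.Matrix([[a, 0], [0, a*sp.sin(th)]])    # e^a_mu
einv = e.inv()                                # e^mu_a

Gam = [[[sum(ginv[r, l]*(sp.diff(g[l, n], x[m]) + sp.diff(g[l, m], x[n])
             - sp.diff(g[m, n], x[l])) for l in range(2))/2
         for n in range(2)] for m in range(2)] for r in range(2)]

def omega(mu, A, b):
    """omega_mu^A_b via (3.132); A, b = 0, 1 label the flat indices 1, 2."""
    return sp.simplify(
        sum(e[A, nu]*einv[lam, b]*Gam[nu][mu][lam] for nu in range(2) for lam in range(2))
        - sum(einv[lam, b]*sp.diff(e[A, lam], x[mu]) for lam in range(2)))

assert sp.simplify(omega(1, 0, 1) + sp.cos(th)) == 0   # omega_phi^1_2 = -cos(theta)
assert sp.simplify(omega(1, 1, 0) - sp.cos(th)) == 0   # omega_phi^2_1 = +cos(theta)
assert omega(0, 0, 1) == 0                             # omega_theta^1_2 = 0
```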
Actually, under GCT's the one lower Greek index does transform in the right way, as a one-form. But under LLT's the spin connection transforms inhomogeneously, as

$$\omega_{\mu}{}^{a'}{}_{b'} = \Lambda^{a'}{}_{a}\, \Lambda_{b'}{}^{b}\, \omega_{\mu}{}^{a}{}_{b} - \Lambda_{b'}{}^{c}\, \partial_\mu \Lambda^{a'}{}_{c}\ . \qquad (3.134)$$

So far we have done nothing but empty formalism, translating things we already knew into a new notation. But the work we are doing does buy us two things. The first, which we already alluded to, is the ability to describe spinor fields on spacetime and take their covariant derivatives; we won't explore this further right now. The second is a change in viewpoint, in which we can think of various tensors as tensor-valued differential forms. For example, an object like $X_{\mu}{}^{a}$, which we think of as a (1, 1) tensor written with mixed indices, can also be thought of as a "vector-valued one-form." It has one lower Greek index, so we think of it as a one-form, but for each value of the lower index it is a vector. Similarly a tensor $A_{\mu\nu}{}^{a}{}_{b}$, antisymmetric in $\mu$ and $\nu$, can be thought of as a "(1, 1)-tensor-valued two-form." Thus, any tensor with some number of antisymmetric lower Greek indices and some number of Latin indices can be thought of as a differential form, but taking values in the tensor bundle. (Ordinary differential forms are simply scalar-valued forms.) The usefulness of this viewpoint comes when we consider exterior derivatives. If we want to think of $X_{\mu}{}^{a}$ as a vector-valued one-form, we are tempted to take its exterior derivative:

$$(dX)_{\mu\nu}{}^{a} = \partial_\mu X_{\nu}{}^{a} - \partial_\nu X_{\mu}{}^{a}\ . \qquad (3.135)$$

It is easy to check that this object transforms like a two-form (that is, according to the transformation law for (0, 2) tensors) under GCT's, but not as a vector under LLT's (the Lorentz transformations depend on position, which introduces an inhomogeneous term into the transformation law). But we can fix this by judicious use of the spin connection, which can be thought of as a one-form.
(Not a tensor-valued one-form, due to the nontensorial transformation law (3.134).) Thus, the object

$$(dX)_{\mu\nu}{}^{a} + (\omega \wedge X)_{\mu\nu}{}^{a} = \partial_\mu X_{\nu}{}^{a} - \partial_\nu X_{\mu}{}^{a} + \omega_{\mu}{}^{a}{}_{b}\, X_{\nu}{}^{b} - \omega_{\nu}{}^{a}{}_{b}\, X_{\mu}{}^{b}\ , \qquad (3.136)$$

as you can verify at home, transforms as a proper tensor. An immediate application of this formalism is to the expressions for the torsion and curvature, the two tensors which characterize any given connection. The torsion, with two antisymmetric lower indices, can be thought of as a vector-valued two-form $T_{\mu\nu}{}^{a}$. The curvature, which is always antisymmetric in its last two indices, is a (1, 1)-tensor-valued two-form, $R^{a}{}_{b\mu\nu}$. Using our freedom to suppress indices on differential forms, we can write the defining relations for these two tensors as

$$T^{a} = de^{a} + \omega^{a}{}_{b} \wedge e^{b}\ , \qquad (3.137)$$

and

$$R^{a}{}_{b} = d\omega^{a}{}_{b} + \omega^{a}{}_{c} \wedge \omega^{c}{}_{b}\ . \qquad (3.138)$$

These are known as the Maurer-Cartan structure equations. They are equivalent to the usual definitions; let's go through the exercise of showing this for the torsion, and you can check the curvature for yourself. We have

$$T_{\mu\nu}{}^{\lambda} = e_{a}{}^{\lambda}\, T_{\mu\nu}{}^{a} = e_{a}{}^{\lambda}\left(\partial_\mu e^{a}{}_{\nu} - \partial_\nu e^{a}{}_{\mu} + \omega_{\mu}{}^{a}{}_{b}\, e^{b}{}_{\nu} - \omega_{\nu}{}^{a}{}_{b}\, e^{b}{}_{\mu}\right) = \Gamma^{\lambda}_{\mu\nu} - \Gamma^{\lambda}_{\nu\mu}\ , \qquad (3.139)$$

which is just the original definition we gave. Here we have used (3.131), the expression for the $\Gamma^{\lambda}_{\mu\nu}$'s in terms of the vielbeins and spin connection. We can also express identities obeyed by these tensors as

$$dT^{a} + \omega^{a}{}_{b} \wedge T^{b} = R^{a}{}_{b} \wedge e^{b}\ , \qquad (3.140)$$

and

$$dR^{a}{}_{b} + \omega^{a}{}_{c} \wedge R^{c}{}_{b} - R^{a}{}_{c} \wedge \omega^{c}{}_{b} = 0\ . \qquad (3.141)$$

The first of these is the generalization of $R^{\rho}{}_{[\sigma\mu\nu]} = 0$, while the second is the Bianchi identity $\nabla_{[\lambda|} R^{\rho}{}_{\sigma|\mu\nu]} = 0$. (Sometimes both equations are called Bianchi identities.) The form of these expressions leads to an almost irresistible temptation to define a "covariant-exterior derivative", which acts on a tensor-valued form by taking the ordinary exterior derivative and then adding appropriate terms with the spin connection, one for each Latin index.
Although we won't do that here, it is okay to give in to this temptation, and in fact the right hand side of (3.137) and the left hand sides of (3.140) and (3.141) can be thought of as just such covariant-exterior derivatives. But be careful, since (3.138) cannot; you can't take any sort of covariant derivative of the spin connection, since it's not a tensor. So far our equations have been true for general connections; let's see what we get for the Christoffel connection. The torsion-free requirement is just that (3.137) vanish; this does not lead immediately to any simple statement about the coefficients of the spin connection. Metric compatibility is expressed as the vanishing of the covariant derivative of the metric: $\nabla g = 0$. We can see what this leads to when we express the metric in the orthonormal basis, where its components are simply $\eta_{ab}$:

$$\nabla_\mu \eta_{ab} = \partial_\mu \eta_{ab} - \omega_{\mu}{}^{c}{}_{a}\, \eta_{cb} - \omega_{\mu}{}^{c}{}_{b}\, \eta_{ac} = -\omega_{\mu ab} - \omega_{\mu ba}\ . \qquad (3.142)$$

Then setting this equal to zero implies

$$\omega_{\mu ab} = -\omega_{\mu ba}\ . \qquad (3.143)$$

Thus, metric compatibility is equivalent to the antisymmetry of the spin connection in its Latin indices. (As before, such a statement is only sensible if both indices are either upstairs or downstairs.) These two conditions together allow us to express the spin connection in terms of the vielbeins. There is an explicit formula which expresses this solution, but in practice it is easier to simply solve the torsion-free condition

$$de^{a} = -\omega^{a}{}_{b} \wedge e^{b}\ , \qquad (3.144)$$

using the antisymmetry of the spin connection, to find the individual components. We now have the means to compare the formalism of connections and curvature in Riemannian geometry to that of gauge theories in particle physics. (This is an aside, which is hopefully comprehensible to everybody, but not an essential ingredient of the course.) In both situations, the fields of interest live in vector spaces which are assigned to each point in spacetime.
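As a concrete illustration (the standard two-sphere example, supplied here rather than taken from the text), solving (3.144) together with the antisymmetry (3.143) can be done by inspection:

```latex
% Orthonormal one-forms for the two-sphere ds^2 = a^2(d\theta^2 + \sin^2\theta\, d\phi^2):
\begin{align*}
  e^1 &= a\, d\theta\ , & e^2 &= a \sin\theta\, d\phi\ , \\
  de^1 &= 0\ , & de^2 &= a \cos\theta\, d\theta \wedge d\phi\ .
\end{align*}
% Antisymmetry leaves one independent component, \omega^1{}_2 = -\omega^2{}_1,
% and the torsion-free condition de^a = -\omega^a{}_b \wedge e^b then forces
\begin{equation*}
  \omega^2{}_1 = \cos\theta\, d\phi\ ,
  \qquad
  R^1{}_2 = d\omega^1{}_2 = \sin\theta\, d\theta \wedge d\phi
          = \frac{1}{a^2}\, e^1 \wedge e^2\ ,
\end{equation*}
% recovering the constant curvature 1/a^2 of the sphere.
```

One can check both components of the torsion-free condition directly: $de^2 = -\cos\theta\, d\phi \wedge a\, d\theta = a\cos\theta\, d\theta \wedge d\phi$, and $de^1 = \cos\theta\, d\phi \wedge a\sin\theta\, d\phi = 0$.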
In Riemannian geometry the vector spaces include the tangent space, the cotangent space, and the higher tensor spaces constructed from these. In gauge theories, on the other hand, we are concerned with "internal" vector spaces. The distinction is that the tangent space and its relatives are intimately associated with the manifold itself, and were naturally defined once the manifold was set up; an internal vector space can be of any dimension we like, and has to be defined as an independent addition to the manifold. In math lingo, the union of the base manifold with the internal vector spaces (defined at each point) is a fiber bundle, and each copy of the vector space is called the "fiber" (in perfect accord with our definition of the tangent bundle). Besides the base manifold (for us, spacetime) and the fibers, the other important ingredient in the definition of a fiber bundle is the "structure group," a Lie group which acts on the fibers to describe how they are sewn together on overlapping coordinate patches. Without going into details, the structure group for the tangent bundle in a four-dimensional spacetime is generally $GL(4, \mathbf{R})$, the group of real invertible 4 × 4 matrices; if we have a Lorentzian metric, this may be reduced to the Lorentz group SO(3, 1). Now imagine that we introduce an internal three-dimensional vector space, and sew the fibers together with ordinary rotations; the structure group of this new bundle is then SO(3). A field that lives in this bundle might be denoted $\phi^{A}(x^{\mu})$, where A runs from one to three; it is a three-vector (an internal one, unrelated to spacetime) for each point on the manifold. We have freedom to choose the basis in the fibers in any way we wish; this means that "physical quantities" should be left invariant under local SO(3) transformations such as

$$\phi^{A} \rightarrow \phi^{A'} = O^{A'}{}_{A}(x^{\mu})\, \phi^{A}\ , \qquad (3.145)$$

where $O^{A'}{}_{A}(x^{\mu})$ is a matrix in SO(3) which depends on spacetime.
Such transformations are known as gauge transformations, and theories invariant under them are called "gauge theories." For the most part it is not hard to arrange things such that physical quantities are invariant under gauge transformations. The one difficulty arises when we consider partial derivatives, $\partial_\mu \phi^{A}$. Because the matrix $O^{A'}{}_{A}(x^{\mu})$ depends on spacetime, it will contribute an unwanted term to the transformation of the partial derivative. By now you should be able to guess the solution: introduce a connection to correct for the inhomogeneous term in the transformation law. We therefore define a connection on the fiber bundle to be an object $A_{\mu}{}^{A}{}_{B}$, with two "group indices" and one spacetime index. Under GCT's it transforms as a one-form, while under gauge transformations it transforms as

$$A_{\mu}{}^{A'}{}_{B'} = O^{A'}{}_{A}\, O_{B'}{}^{B}\, A_{\mu}{}^{A}{}_{B} - O_{B'}{}^{C}\, \partial_\mu O^{A'}{}_{C}\ . \qquad (3.146)$$

(Beware: our conventions are so drastically different from those in the particle physics literature that I won't even try to get them straight.) With this transformation law, the "gauge covariant derivative"

$$D_\mu \phi^{A} = \partial_\mu \phi^{A} + A_{\mu}{}^{A}{}_{B}\, \phi^{B} \qquad (3.147)$$

transforms "tensorially" under gauge transformations, as you are welcome to check. (In ordinary electromagnetism the connection is just the conventional vector potential. No indices are necessary, because the structure group U(1) is one-dimensional.) It is clear that this notion of a connection on an internal fiber bundle is very closely related to the connection on the tangent bundle, especially in the orthonormal-frame picture we have been discussing. The transformation law (3.146), for example, is exactly the same as the transformation law (3.134) for the spin connection. We can also define a curvature or "field strength" tensor which is a two-form,

$$F^{A}{}_{B} = dA^{A}{}_{B} + A^{A}{}_{C} \wedge A^{C}{}_{B}\ , \qquad (3.148)$$

in exact correspondence with (3.138).
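For the U(1) case just mentioned, where the connection is the ordinary vector potential, the "tensorial" transformation of the gauge covariant derivative can be checked symbolically. This is a sketch using the assumed convention $D_\mu\phi = \partial_\mu\phi + iA_\mu\phi$; as the text warns, sign conventions differ across the literature.

```python
import sympy as sp

# Symbolic sketch (added illustration; the convention D = d + iA is assumed,
# and as the text warns, conventions vary) of U(1) gauge covariance in 1-d.
x = sp.symbols('x', real=True)
alpha = sp.Function('alpha', real=True)(x)  # local gauge parameter alpha(x)
phi = sp.Function('phi')(x)                 # charged field
A = sp.Function('A', real=True)(x)          # vector potential (the connection)

def D(f, a):
    """Gauge covariant derivative D f = df/dx + i a f."""
    return sp.diff(f, x) + sp.I * a * f

# Gauge transformation: phi -> e^{i alpha} phi,  A -> A - d(alpha)/dx.
phi_t = sp.exp(sp.I * alpha) * phi
A_t = A - sp.diff(alpha, x)

# Covariance: the inhomogeneous terms from differentiating e^{i alpha}
# and from shifting A cancel, so D'phi' = e^{i alpha} D phi.
diff_expr = sp.simplify(D(phi_t, A_t) - sp.exp(sp.I * alpha) * D(phi, A))
print(diff_expr)  # 0
```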
We can parallel transport things along paths, and there is a construction analogous to the parallel propagator; the trace of the matrix obtained by parallel transporting a vector around a closed curve is called a "Wilson loop." We could go on in the development of the relationship between the tangent bundle and internal vector bundles, but time is short and we have other fish to fry. Let us instead finish by emphasizing the important difference between the two constructions. The difference stems from the fact that the tangent bundle is closely related to the base manifold, while other fiber bundles are tacked on after the fact. It makes sense to say that a vector in the tangent space at p "points along a path" through p; but this makes no sense for an internal vector bundle. There is therefore no analogue of the coordinate basis for an internal space -- partial derivatives along curves have nothing to do with internal vectors. It follows in turn that there is nothing like the vielbeins, which relate orthonormal bases to coordinate bases. The torsion tensor, in particular, is only defined for a connection on the tangent bundle, not for any gauge theory connections; it can be thought of as the covariant exterior derivative of the vielbein, and no such construction is available on an internal bundle. You should appreciate the relationship between the different uses of the notion of a connection, without getting carried away.
Quantum mechanics
From Wikipedia, the free encyclopedia

Quantum mechanics (QM – also known as quantum physics, or quantum theory) is a branch of physics which deals with physical phenomena at nanoscopic scales where the action is on the order of the Planck constant. It departs from classical mechanics primarily at the quantum realm of atomic and subatomic length scales. Quantum mechanics provides a mathematical description of much of the dual particle-like and wave-like behavior and interactions of energy and matter. Quantum mechanics provides a substantially useful framework for many features of the modern periodic table of elements including the behavior of atoms during chemical bonding and has played a significant role in the development of many modern technologies. In advanced topics of quantum mechanics, some of these behaviors are macroscopic (see macroscopic quantum phenomena) and emerge at only extreme (i.e., very low or very high) energies or temperatures (such as in the use of superconducting magnets). For example, the angular momentum of an electron bound to an atom or molecule is quantized. In contrast, the angular momentum of an unbound electron is not quantized. In the context of quantum mechanics, the wave–particle duality of energy and matter and the uncertainty principle provide a unified view of the behavior of photons, electrons, and other atomic-scale objects. The mathematical formulations of quantum mechanics are abstract. A mathematical function, the wavefunction, provides information about the probability amplitude of position, momentum, and other physical properties of a particle. Mathematical manipulations of the wavefunction usually involve bra–ket notation which requires an understanding of complex numbers and linear functionals.
In its simplest applications, the wavefunction formulation treats the particle much like a quantum harmonic oscillator, and the mathematics is akin to that describing acoustic resonance. Many of the results of quantum mechanics are not easily visualized in terms of classical mechanics. For instance, in a quantum mechanical model the lowest energy state of a system, the ground state, has non-zero energy, as opposed to a more "traditional" ground state with zero kinetic energy (all particles at rest). Instead of a traditional static, unchanging zero energy state, quantum mechanics allows for far more dynamic, chaotic possibilities, according to John Wheeler. The earliest versions of quantum mechanics were formulated in the first decade of the 20th century. About this time, the atomic theory and the corpuscular theory of light (as updated by Einstein)[1] first came to be widely accepted as scientific fact; these latter theories can be viewed as quantum theories of matter and electromagnetic radiation, respectively. Early quantum theory was significantly reformulated in the mid-1920s by Werner Heisenberg, Max Born and Pascual Jordan, (matrix mechanics); Louis de Broglie and Erwin Schrödinger (wave mechanics); and Wolfgang Pauli and Satyendra Nath Bose (statistics of subatomic particles). Moreover, the Copenhagen interpretation of Niels Bohr became widely accepted. By 1930, quantum mechanics had been further unified and formalized by the work of David Hilbert, Paul Dirac and John von Neumann[2] with a greater emphasis placed on measurement in quantum mechanics, the statistical nature of our knowledge of reality, and philosophical speculation about the role of the observer. Quantum mechanics has since permeated throughout many aspects of 20th-century physics and other disciplines including quantum chemistry, quantum electronics, quantum optics, and quantum information science.
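The nonzero ground-state energy is easy to exhibit numerically. The following sketch (an illustration, not from the article) diagonalizes a finite-difference harmonic-oscillator Hamiltonian in units where ħ = m = ω = 1, so the exact spectrum is E_n = n + 1/2:

```python
import numpy as np

# Finite-difference sketch (illustration, not from the article) showing the
# nonzero ground-state energy of the harmonic oscillator. Units: hbar = m =
# omega = 1, so the exact levels are E_n = n + 1/2.
n = 2000
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]

# H = -(1/2) d^2/dx^2 + (1/2) x^2, second-order central differences.
main = 1.0 / dx**2 + 0.5 * x**2
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:3]
print(E)  # close to [0.5, 1.5, 2.5]: the ground-state energy is not zero
```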
Much 19th-century physics has been re-evaluated as the "classical limit" of quantum mechanics and its more advanced developments in terms of quantum field theory, string theory, and speculative quantum gravity theories. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations.[3] In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper entitled "On the nature of light and colours". This experiment played a major role in the general acceptance of the wave theory of light. These studies were followed by the 1838 discovery of cathode rays by Michael Faraday, the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system can be discrete, and the 1900 quantum hypothesis of Max Planck.[4] Planck's hypothesis that energy is radiated and absorbed in discrete "quanta" (or "energy elements") precisely matched the observed patterns of black-body radiation. According to Planck, each energy element, E, is proportional to its frequency, ν:

E = hν,

where h is Planck's constant. (Max Planck is considered the father of quantum theory.) Planck (cautiously) insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself.[7] In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a fundamental discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect in which shining light on certain materials can eject electrons from the material.
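Planck's relation combines with Einstein's photoelectric picture in one line of arithmetic: a photon of frequency ν carries E = hν, and an electron is ejected only if that exceeds the material's work function. The numbers below are a sketch; the 400 nm wavelength and the sodium work function are assumed example values, not taken from the article:

```python
# Arithmetic sketch of E = h*nu applied to the photoelectric effect; the
# 400 nm light and the sodium work function are assumed example values.
h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19    # joules per electron-volt

wavelength = 400e-9                   # violet light; nu = c / wavelength
E_photon = h * c / wavelength / eV    # photon energy in eV, ~3.10 eV

work_function = 2.28                  # sodium, eV (assumed value)
E_kinetic = E_photon - work_function  # max KE of the ejected electron

print(f"photon energy   : {E_photon:.2f} eV")
print(f"ejected e max KE: {E_kinetic:.2f} eV")
```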
The other exemplar that led to quantum mechanics was the study of electromagnetic waves, such as visible and ultraviolet light. When it was found in 1900 by Max Planck that the energy of waves could be described as consisting of small packets or "quanta", Albert Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon) with a discrete quantum of energy that was dependent on its frequency.[8] Einstein was able to use the photon theory of light to explain the photoelectric effect for which he won the 1921 Nobel Prize in Physics. This led to a theory of unity between subatomic particles and electromagnetic waves in which particles and waves are neither simply particle nor wave but have certain properties of each. This originated the concept of wave–particle duality. While quantum mechanics traditionally described the world of the very small, it is also needed to explain certain recently investigated macroscopic systems such as superconductors, superfluids, and large organic molecules.[9] The word quantum derives from the Latin, meaning "how great" or "how much".[10] In quantum mechanics, it refers to a discrete unit that quantum theory assigns to certain physical quantities, such as the energy of an atom at rest (see Figure 1). The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics dealing with atomic and sub-atomic systems which is today called quantum mechanics. 
It underlies the mathematical framework of many fields of physics and chemistry, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational physics, computational chemistry, quantum chemistry, particle physics, nuclear chemistry, and nuclear physics.[11] Some fundamental aspects of the theory are still actively studied.[12] Quantum mechanics is essential to understanding the behavior of systems at atomic length scales and smaller. If the physical nature of an atom were solely described by classical mechanics, electrons would not "orbit" the nucleus, since orbiting electrons emit radiation (due to circular motion) and would eventually collide with the nucleus due to this loss of energy. This framework was unable to explain the stability of atoms. Instead, electrons remain in an uncertain, non-deterministic, "smeared", probabilistic, wave–particle orbital about the nucleus, defying the traditional assumptions of classical mechanics and electromagnetism.[13]

Mathematical formulations

See also: Quantum logic

In the mathematically rigorous formulation of quantum mechanics developed by Paul Dirac,[14] David Hilbert,[15] John von Neumann,[16] and Hermann Weyl,[17] the possible states of a quantum mechanical system are represented by unit vectors (called "state vectors"). Formally, these reside in a complex separable Hilbert space—variously called the "state space" or the "associated Hilbert space" of the system—that is well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system—for example, the state space for position and momentum states is the space of square-integrable functions, while the state space for the spin of a single proton is just the product of two complex planes.
Each observable is represented by a Hermitian (more precisely, self-adjoint) linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. If the operator's spectrum is discrete, the observable can attain only those discrete eigenvalues.

Fig. 1: Probability densities corresponding to the wavefunctions of an electron in a hydrogen atom possessing definite energy levels (increasing from the top of the image to the bottom: n = 1, 2, 3, ...) and angular momenta (increasing across from left to right: s, p, d, ...). Brighter areas correspond to higher probability density in a position measurement. Such wavefunctions are directly comparable to Chladni's figures of acoustic modes of vibration in classical physics, and are modes of oscillation as well, possessing a sharp energy and, thus, a definite frequency. The angular momentum and energy are quantized, and take only discrete values like those shown (as is the case for resonant frequencies in acoustics).

Some wave functions produce probability distributions that are constant, or independent of time, such as in a stationary state of constant energy, where time drops out of the absolute square of the wave function. Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics it is described by a static, spherically symmetric wavefunction surrounding the nucleus (Fig. 1) (note, however, that only the lowest angular momentum states, labeled s, are spherically symmetric).[30] The Schrödinger equation acts on the entire probability amplitude, not merely its absolute value.
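The operator language above can be made concrete in a two-state system. This sketch (an illustration, not from the article) takes the spin-1/2 observable S_x as a Hermitian matrix: its eigenvalues are the allowed measurement outcomes, and the squared overlaps of the state with the eigenvectors give the outcome probabilities (the Born rule):

```python
import numpy as np

# Two-state sketch (illustration, not from the article): an observable as a
# Hermitian matrix. Eigenvalues = possible outcomes; the Born rule gives
# their probabilities as squared overlaps with the eigenvectors.
S_x = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)  # spin-1/2 S_x, hbar = 1

vals, vecs = np.linalg.eigh(S_x)          # discrete outcomes -1/2 and +1/2

state = np.array([1, 0], dtype=complex)   # "spin up" along the z axis
probs = np.abs(vecs.conj().T @ state)**2  # Born-rule outcome probabilities

print(vals)   # [-0.5  0.5]
print(probs)  # [0.5 0.5]: each S_x outcome equally likely for this state
assert np.isclose(probs.sum(), 1.0)
```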
Whereas the absolute value of the probability amplitude encodes information about probabilities, its phase encodes information about the interference between quantum states. This gives rise to the "wave-like" behavior of quantum states. As it turns out, analytic solutions of the Schrödinger equation are available for only a very small number of relatively simple model Hamiltonians, of which the quantum harmonic oscillator, the particle in a box, the hydrogen molecular ion, and the hydrogen atom are the most important representatives. Even the helium atom—which contains just one more electron than does the hydrogen atom—has defied all attempts at a fully analytic treatment. There exist several techniques for generating approximate solutions, however. In the important method known as perturbation theory, one uses the analytic result for a simple quantum mechanical model to generate a result for a more complicated model that is related to the simpler model by (for one example) the addition of a weak potential energy. Another method is the "semi-classical equation of motion" approach, which applies to systems for which quantum mechanics produces only weak (small) deviations from classical behavior. These deviations can then be computed based on the classical motion. This approach is particularly important in the field of quantum chaos.

Mathematically equivalent formulations of quantum mechanics

There are numerous mathematically equivalent formulations of quantum mechanics.
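Perturbation theory as described above can be illustrated numerically (a sketch with assumed parameters, not an example from the article): add a weak quartic term λx⁴ to the harmonic oscillator, for which first-order theory predicts E₀ ≈ 1/2 + (3/4)λ (using ⟨0|x⁴|0⟩ = 3/4 in units ħ = m = ω = 1), and compare with direct diagonalization:

```python
import numpy as np

# Sketch of first-order perturbation theory (assumed example, not from the
# article): harmonic oscillator plus a weak quartic term lam * x^4, with
# hbar = m = omega = 1. First order predicts E0 = 1/2 + (3/4) lam, since
# <0|x^4|0> = 3/4 in these units.
lam = 0.01
n = 2000
x = np.linspace(-8.0, 8.0, n)
dx = x[1] - x[0]

def ground_energy(V):
    """Lowest eigenvalue of H = -(1/2) d^2/dx^2 + V on the grid."""
    H = (np.diag(1.0 / dx**2 + V)
         + np.diag(-0.5 / dx**2 * np.ones(n - 1), 1)
         + np.diag(-0.5 / dx**2 * np.ones(n - 1), -1))
    return np.linalg.eigvalsh(H)[0]

E_exact = ground_energy(0.5 * x**2 + lam * x**4)
E_pert = 0.5 + 0.75 * lam

print(E_exact, E_pert)  # agree to a few parts in 10^4 for this small lam
```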
One of the oldest and most commonly used formulations is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics - matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger).[31] Especially since Werner Heisenberg was awarded the Nobel Prize in Physics in 1932 for the creation of quantum mechanics, the role of Max Born in the development of QM was overlooked until the 1954 Nobel award. The role is noted in a 2005 biography of Born, which recounts his role in the matrix formulation of quantum mechanics, and the use of probability amplitudes. Heisenberg himself acknowledged having learned matrices from Born, as published in a 1940 festschrift honoring Max Planck.[32] In the matrix formulation, the instantaneous state of a quantum system encodes the probabilities of its measurable properties, or "observables". Examples of observables include energy, position, momentum, and angular momentum. Observables can be either continuous (e.g., the position of a particle) or discrete (e.g., the energy of an electron bound to a hydrogen atom).[33] An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics.

Interactions with other scientific theories

The rules of quantum mechanics are fundamental. They assert that the state space of a system is a Hilbert space, and that observables of that system are Hermitian operators acting on that space—although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system.
An important guide for making these choices is the correspondence principle, which states that the predictions of quantum mechanics reduce to those of classical mechanics when a system moves to higher energies or, equivalently, larger quantum numbers: whereas a single particle exhibits a degree of randomness, in systems incorporating millions of particles averaging takes over and, in the high-energy limit, the statistical probability of random behaviour approaches zero. In other words, classical mechanics is simply a quantum mechanics of large systems. This "high energy" limit is known as the classical or correspondence limit. One can even start from an established classical model of a particular system, then attempt to guess the underlying quantum model that would give rise to the classical model in the correspondence limit.

List of unsolved problems in physics: In the correspondence limit of quantum mechanics, is there a preferred interpretation of quantum mechanics? How does the quantum description of reality, which includes elements such as the "superposition of states" and "wavefunction collapse", give rise to the reality we perceive?

When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator. Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles.
A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been employed since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical Coulomb potential $-e^2/(4\pi\epsilon_0 r)$. This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles. Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg. These three men shared the Nobel Prize in Physics in 1979 for this work.[34] It has proven difficult to construct quantum models of gravity, the remaining fundamental force. Semi-classical approximations are workable, and have led to predictions such as Hawking radiation. However, the formulation of a complete theory of quantum gravity is hindered by apparent incompatibilities between general relativity (the most accurate theory of gravity currently known) and some of the fundamental assumptions of quantum theory.
The resolution of these incompatibilities is an area of active research, and theories such as string theory are among the possible candidates for a future theory of quantum gravity. Classical mechanics has also been extended into the complex domain, with complex classical mechanics exhibiting behaviors similar to quantum mechanics.[35]

Quantum mechanics and classical physics

Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy.[36] According to the correspondence principle between classical and quantum mechanics, all objects obey the laws of quantum mechanics, and classical mechanics is just an approximation for large systems of objects (or a statistical quantum mechanics of a large collection of particles).[37] The laws of classical mechanics thus follow from the laws of quantum mechanics as a statistical average at the limit of large systems or large quantum numbers.[38] However, chaotic systems do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems. Quantum coherence is an essential difference between classical and quantum theories as illustrated by the Einstein–Podolsky–Rosen (EPR) paradox — an attempt to disprove quantum mechanics by an appeal to local realism.[39] Quantum interference involves adding together probability amplitudes, whereas classical "waves" add together in intensity. For microscopic bodies, the extension of the system is much smaller than the coherence length, which gives rise to long-range entanglement and other nonlocal phenomena characteristic of quantum systems.[40] Quantum coherence is not typically evident at macroscopic scales, though an exception to this rule may occur at extremely low temperatures (i.e.
approaching absolute zero) at which quantum behavior may manifest itself macroscopically.[41] This is in accordance with the following observations:

• Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics.[42]

• While the seemingly "exotic" behavior of matter posited by quantum mechanics and relativity theory becomes more apparent when dealing with particles of extremely small size or velocities approaching the speed of light, the laws of classical, often considered "Newtonian", physics remain accurate in predicting the behavior of the vast majority of "large" objects (on the order of the size of large molecules or bigger) at velocities much smaller than the velocity of light.[43]

Relativity and quantum mechanics

Even with the defining postulates of both Einstein's theory of general relativity and quantum theory being indisputably supported by rigorous and repeated empirical evidence, and while they do not directly contradict each other theoretically (at least with regard to their primary claims), they have proven extremely difficult to incorporate into one consistent, cohesive model.[44] Einstein himself is well known for rejecting some of the claims of quantum mechanics. While clearly contributing to the field, he did not accept many of the more "philosophical consequences and interpretations" of quantum mechanics, such as the lack of deterministic causality. He is famously quoted as saying, in response to this aspect, "God does not play dice". He also had difficulty with the assertion that a single subatomic particle can occupy numerous areas of space at one time.
However, he was also the first to notice some of the apparently exotic consequences of entanglement, and used them to formulate the Einstein–Podolsky–Rosen paradox in the hope of showing that quantum mechanics had unacceptable implications if taken as a complete description of physical reality. This was 1935, but in 1964 it was shown by John Bell (see Bell inequality) that - although Einstein was correct in identifying seemingly paradoxical implications of quantum mechanical nonlocality - these implications could be experimentally tested. Alain Aspect's initial experiments in 1982, and many subsequent experiments since, have definitively verified quantum entanglement. According to the paper of J. Bell and the Copenhagen interpretation - the common interpretation of quantum mechanics by physicists since 1927 - and contrary to Einstein's ideas, quantum mechanics cannot be, at the same time, a "realistic" theory and a "local" theory. The Einstein–Podolsky–Rosen paradox shows in any case that there exist experiments by which one can measure the state of one particle and instantaneously change the state of its entangled partner - although the two particles can be an arbitrary distance apart. However, this effect does not violate causality, since no transfer of information happens. Quantum entanglement forms the basis of quantum cryptography, which is used in high-security commercial applications in banking and government. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th and 21st century physics.
Many prominent physicists, including Stephen Hawking, have labored for many years in the attempt to discover a theory underlying everything. This TOE would combine not only the different models of subatomic physics, but also derive the four fundamental forces of nature - the strong force, electromagnetism, the weak force, and gravity - from a single force or phenomenon. While Stephen Hawking was initially a believer in the Theory of Everything, after considering Gödel's Incompleteness Theorem, he concluded that one is not obtainable, and stated so publicly in his lecture "Gödel and the End of Physics" (2002).[45] Attempts at a unified field theory[edit] Main article: Grand unified theory The quest to unify the fundamental forces through quantum mechanics is still ongoing. Quantum electrodynamics (or "quantum electromagnetism"), which is currently (in the perturbative regime at least) the most accurately tested physical theory,[46] has been successfully merged with the weak nuclear force into the electroweak force, and work is currently being done to merge the electroweak and strong force into the electrostrong force. Current predictions state that at around 10^14 GeV the three aforementioned forces are fused into a single unified field.[47] Beyond this "grand unification", it is speculated that it may be possible to merge gravity with the other three gauge symmetries, expected to occur at roughly 10^19 GeV. However - and while special relativity is parsimoniously incorporated into quantum electrodynamics - general relativity, currently the best theory describing the gravitational force, has not been fully incorporated into quantum theory. One of those searching for a coherent TOE is Edward Witten, a theoretical physicist who formulated M-theory, which is an attempt at describing supersymmetry-based string theory.
M-theory posits that our apparent 4-dimensional spacetime is, in reality, an 11-dimensional spacetime containing 10 spatial dimensions and 1 time dimension, although 7 of the spatial dimensions are - at lower energies - completely "compactified" (or infinitely curved) and not readily amenable to measurement or probing. Another popular theory is loop quantum gravity (LQG), a theory that describes the quantum properties of gravity. It is also a theory of quantum space and quantum time, because in general relativity the geometry of spacetime is a manifestation of gravity. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. The main output of the theory is a physical picture of space where space is granular. The granularity is a direct consequence of the quantization. It is of the same nature as the granularity of the photons in the quantum theory of electromagnetism or the discrete levels of the energy of the atoms. But here it is space itself which is discrete. More precisely, space can be viewed as an extremely fine fabric or network "woven" of finite loops. These networks of loops are called spin networks. The evolution of a spin network over time is called a spin foam. The predicted size of this structure is the Planck length, which is approximately 1.616×10^−35 m. According to the theory, there is no meaning to length shorter than this (cf. Planck scale energy). Therefore, LQG predicts that not just matter, but also space itself, has an atomic structure. Loop quantum gravity was first proposed by Carlo Rovelli. Philosophical implications[edit] Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. Even fundamental issues, such as Max Born's basic rules concerning probability amplitudes and probability distributions, took decades to be appreciated by society and many leading scientists.
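Born's probability-amplitude rules mentioned above can be made concrete with a small numerical sketch (the amplitude values are invented for illustration, not taken from the text): probabilities are squared magnitudes of complex amplitudes, so alternatives interfere rather than simply adding their intensities.

```python
# Two paths with complex amplitudes a1, a2 (hypothetical values).
import cmath

a1 = cmath.exp(1j * 0.0) / cmath.sqrt(2)       # path 1 amplitude
a2 = cmath.exp(1j * cmath.pi) / cmath.sqrt(2)  # path 2, phase-shifted by pi

# Quantum rule: add amplitudes first, then square (interference appears).
p_quantum = abs(a1 + a2) ** 2
# "Classical" intensity addition: square first, then add (no interference).
p_classical = abs(a1) ** 2 + abs(a2) ** 2

print(p_quantum)    # destructive interference: essentially 0
print(p_classical)  # 1.0
```

Changing the relative phase between the two amplitudes sweeps the quantum probability between 0 and 2·(1/2) = 1 times the classical value, which is exactly the interference pattern classical intensity addition cannot produce.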
Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics."[48] According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics."[49] The Copenhagen interpretation - due largely to the Danish theoretical physicist Niels Bohr - remains the quantum mechanical formalism that is currently most widely accepted amongst physicists, some 75 years after its enunciation. According to this interpretation, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but instead must be considered a final renunciation of the classical idea of "causality". It is also believed therein that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations. Albert Einstein, himself one of the founders of quantum theory, disliked this loss of determinism in measurement. Einstein held that there should be a local hidden variable theory underlying quantum mechanics and, consequently, that the present theory was incomplete. He produced a series of objections to the theory, the most famous of which has become known as the Einstein–Podolsky–Rosen paradox. John Bell showed that this "EPR" paradox led to experimentally testable differences between quantum mechanics and local realistic theories. Experiments have been performed confirming the accuracy of quantum mechanics, thereby demonstrating that the physical world cannot be described by any local realistic theory.[50] The Bohr-Einstein debates provide a vibrant critique of the Copenhagen Interpretation from an epistemological point of view.
The Everett many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.[51] This is not accomplished by introducing some "new axiom" to quantum mechanics, but on the contrary, by removing the axiom of the collapse of the wave packet. All of the possible consistent states of the measured system and the measuring apparatus (including the observer) are present in a real physical - not just formally mathematical, as in other interpretations - quantum superposition. Such a superposition of consistent state combinations of different systems is called an entangled state. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we can observe only the universe (i.e., the consistent state contribution to the aforementioned superposition) that we, as observers, inhabit. Everett's interpretation is perfectly consistent with John Bell's experiments and makes them intuitively understandable. However, according to the theory of quantum decoherence, these "parallel universes" will never be accessible to us. The inaccessibility can be understood as follows: once a measurement is done, the measured system becomes entangled with both the physicist who measured it and a huge number of other particles, some of which are photons flying away at the speed of light towards the other end of the universe. In order to prove that the wave function did not collapse, one would have to bring all these particles back and measure them again, together with the system that was originally measured. 
Not only is this completely impractical, but even if one could theoretically do this, it would have to destroy any evidence that the original measurement took place (including the physicist's memory!). In light of these Bell tests, Cramer (1986) formulated his transactional interpretation.[52] Relational quantum mechanics appeared in the late 1990s as the modern derivative of the Copenhagen Interpretation. Quantum mechanics has had enormous success[53] in explaining many of the features of our world. Quantum mechanics is often the only tool available that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Quantum mechanics has strongly influenced string theories, candidates for a Theory of Everything (see reductionism). Quantum mechanics is also critically important for understanding how individual atoms combine covalently to form molecules. The application of quantum mechanics to chemistry is known as quantum chemistry. Relativistic quantum mechanics can, in principle, mathematically describe most of chemistry. Quantum mechanics can also provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others, and the magnitudes of the energies involved.[54] Furthermore, most of the calculations performed in modern computational chemistry rely on quantum mechanics. A working mechanism of a resonant tunneling diode device, based on the phenomenon of quantum tunneling through potential barriers Many modern technological inventions operate at a scale where quantum effects are significant. Examples include the laser, the transistor (and thus the microchip), the electron microscope, and magnetic resonance imaging (MRI). The study of semiconductors led to the invention of the diode and the transistor, which are indispensable parts of modern electronics systems and devices.
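The tunneling devices just mentioned can be given a rough quantitative feel. This sketch assumes the standard textbook thick-barrier approximation for a rectangular barrier, T ≈ 16(E/V0)(1 − E/V0)·exp(−2κL) with κ = √(2m(V0 − E))/ħ; the barrier height and widths are invented, illustrative numbers, not taken from the text.

```python
# Approximate tunneling probability of an electron through a rectangular
# barrier (thick-barrier approximation; valid for E < V0 and kappa*L >> 1).
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # joules per electron-volt

def transmission(E_eV, V0_eV, L_m):
    """Approximate transmission probability for E < V0."""
    E, V0 = E_eV * eV, V0_eV * eV
    kappa = math.sqrt(2 * m_e * (V0 - E)) / hbar
    return 16 * (E / V0) * (1 - E / V0) * math.exp(-2 * kappa * L_m)

# Illustrative numbers: 1 eV electron, 3 eV oxide-like barrier.
t_thin = transmission(1.0, 3.0, 1e-9)   # 1 nm barrier
t_thick = transmission(1.0, 3.0, 2e-9)  # 2 nm barrier
print(t_thin, t_thick)  # doubling the width suppresses T by many orders
```

The exponential dependence on barrier width is why a slightly thicker oxide layer makes tunneling (and hence leakage or erase currents in flash memory) dramatically weaker.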
Researchers are currently seeking robust methods of directly manipulating quantum states. Efforts are being made to more fully develop quantum cryptography, which will theoretically allow guaranteed secure transmission of information. A more distant goal is the development of quantum computers, which are expected to perform certain computational tasks exponentially faster than classical computers. Another active research topic is quantum teleportation, which deals with techniques to transmit quantum information over arbitrary distances. Quantum tunneling is vital to the operation of many devices - even the simple light switch, since otherwise the electrons in the electric current could not penetrate the potential barrier made up of a layer of oxide. Flash memory chips found in USB drives use quantum tunneling to erase their memory cells. While quantum mechanics primarily applies to the atomic regimes of matter and energy, some systems exhibit quantum mechanical effects on a large scale. Superfluidity, the frictionless flow of a liquid at temperatures near absolute zero, is one well-known example. So is the closely related phenomenon superconductivity, the frictionless flow of an electron gas in a conducting material (an electric current) at sufficiently low temperature. Quantum theory also provides accurate descriptions for many previously unexplained phenomena, such as black-body radiation and the stability of the orbitals of electrons in atoms. It has also given insight into the workings of many different biological systems, including smell receptors and protein structures.[55] Recent work on photosynthesis has provided evidence that quantum correlations play an essential role in this fundamental process of the plant kingdom.[56] Even so, classical physics can often provide good approximations to results otherwise obtained by quantum physics, typically in circumstances with large numbers of particles or large quantum numbers.
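A quick numerical illustration of that classical limit (the masses and speeds below are assumed for illustration): the de Broglie wavelength λ = h/p is appreciable for an electron but vanishingly small for a macroscopic object, which is why interference effects are unobservable for everyday bodies.

```python
# Compare de Broglie wavelengths of a microscopic and a macroscopic object.
h = 6.62607015e-34  # Planck's constant, J*s

def de_broglie(mass_kg, speed_m_s):
    """Wavelength lambda = h / p for a particle of momentum p = m*v."""
    return h / (mass_kg * speed_m_s)

electron = de_broglie(9.109e-31, 1e6)  # electron at 10^6 m/s
baseball = de_broglie(0.145, 40.0)     # 145 g ball at 40 m/s

print(electron)  # ~7e-10 m: comparable to atomic spacings, so diffraction shows
print(baseball)  # ~1e-34 m: immeasurably small, classical mechanics suffices
```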
Free particle[edit] For example, consider a free particle. In quantum mechanics, there is wave–particle duality, so the properties of the particle can be described as the properties of a wave. Therefore, its quantum state can be represented as a wave of arbitrary shape and extending over space as a wave function. The position and momentum of the particle are observables. The Uncertainty Principle states that both the position and the momentum cannot simultaneously be measured with complete precision. However, one can measure the position (alone) of a moving free particle, creating an eigenstate of position with a wavefunction that is very large (a Dirac delta) at a particular position x, and zero everywhere else. If one performs a position measurement on such a wavefunction, the resultant x will be obtained with 100% probability (i.e., with full certainty, or complete precision). This is called an eigenstate of position—or, stated in mathematical terms, a generalized position eigenstate (eigendistribution). If the particle is in an eigenstate of position, then its momentum is completely unknown. On the other hand, if the particle is in an eigenstate of momentum, then its position is completely unknown.[57] In an eigenstate of momentum having a plane wave form, it can be shown that the wavelength is equal to h/p, where h is Planck's constant and p is the momentum of the eigenstate.[58] 3D confined electron wave functions for each eigenstate in a Quantum Dot. Here, rectangular and triangular-shaped quantum dots are shown. Energy states in rectangular dots are more ‘s-type’ and ‘p-type’. However, in a triangular dot, the wave functions are mixed due to confinement symmetry. Step potential[edit] Scattering at a finite potential step of height V0, shown in green. The amplitudes and direction of left- and right-moving waves are indicated. Yellow is the incident wave, blue are reflected and transmitted waves, red does not occur. E > V0 for this figure. 
The potential in this case is given by: V(x)= \begin{cases} 0, & x < 0, \\ V_0, & x \ge 0. \end{cases} The solutions are superpositions of left- and right-moving waves: \psi_1(x)= \frac{1}{\sqrt{k_1}} \left(A_\rightarrow e^{i k_1 x} + A_\leftarrow e^{-ik_1x}\right)\quad x<0 \psi_2(x)= \frac{1}{\sqrt{k_2}} \left(B_\rightarrow e^{i k_2 x} + B_\leftarrow e^{-ik_2x}\right)\quad x>0 where the wave vectors are related to the energy via k_1=\sqrt{2m E/\hbar^2}, and k_2=\sqrt{2m (E-V_0)/\hbar^2} with coefficients A and B determined from the boundary conditions and by imposing a continuous derivative on the solution. Each term of the solution can be interpreted as an incident, reflected, or transmitted component of the wave, allowing the calculation of transmission and reflection coefficients. Notably, in contrast to classical mechanics, incident particles with energies greater than the potential step are partially reflected. Rectangular potential barrier[edit] This is a model for the quantum tunneling effect which plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy. Quantum tunneling is central to physical phenomena involved in superlattices. Particle in a box[edit] 1-dimensional potential energy box (or infinite potential well) Main article: Particle in a box The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and infinite potential energy everywhere outside that region. For the one-dimensional case in the x direction, the time-independent Schrödinger equation may be written[59] - \frac {\hbar ^2}{2m} \frac {d ^2 \psi}{dx^2} = E \psi. 
With the differential operator defined by \hat{p}_x = -i\hbar\frac{d}{dx} the previous equation is evocative of the classic kinetic energy analogue, \frac{1}{2m} \hat{p}_x^2 = E, with state \psi in this case having energy E coincident with the kinetic energy of the particle. The general solutions of the Schrödinger equation for the particle in a box are \psi(x) = A e^{ikx} + B e ^{-ikx} \qquad\qquad E = \frac{\hbar^2 k^2}{2m} or, from Euler's formula, \psi(x) = C \sin kx + D \cos kx.\! The infinite potential walls of the box determine the values of C, D, and k at x = 0 and x = L where ψ must be zero. Thus, at x = 0, \psi(0) = 0 = C\sin 0 + D\cos 0 = D\! and D = 0. At x = L, \psi(L) = 0 = C\sin kL.\! in which C cannot be zero as this would conflict with the Born interpretation. Therefore, since sin(kL) = 0, kL must be an integer multiple of π, k = \frac{n\pi}{L}\qquad\qquad n=1,2,3,\ldots. The quantization of energy levels follows from this constraint on k, since E = \frac{\hbar^2 \pi^2 n^2}{2mL^2} = \frac{n^2h^2}{8mL^2}. Finite potential well[edit] Main article: Finite potential well A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wavefunction is not pinned to zero at the walls of the well. Instead, the wavefunction must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Harmonic oscillator[edit] Some trajectories of a harmonic oscillator (i.e. a ball attached to a spring) in classical mechanics (A-B) and quantum mechanics (C-H). In quantum mechanics, the position of the ball is represented by a wave (called the wavefunction), with the real part shown in blue and the imaginary part shown in red. Some of the trajectories (such as C,D,E,and F) are standing waves (or "stationary states"). 
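The particle-in-a-box quantization just derived, E_n = n²h²/(8mL²), is easy to evaluate numerically. The electron-in-a-1-nm-well numbers below are an assumed example, not taken from the text.

```python
# Energy levels of a particle in a 1D infinite well: E_n = n^2 h^2 / (8 m L^2).
h = 6.62607015e-34      # Planck's constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19    # joules per electron-volt

def box_energy(n, L):
    """nth energy level (n = 1, 2, ...) for a well of width L (meters)."""
    return n**2 * h**2 / (8 * m_e * L**2)

L = 1e-9  # assumed 1 nm well
for n in (1, 2, 3):
    print(n, box_energy(n, L) / eV)  # levels grow as n^2 (roughly 0.38, 1.5, 3.4 eV)
```

The n² spacing is the signature of this potential; the electron-volt scale of the levels for a nanometer well is why quantum dots of that size absorb and emit visible light.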
Each standing-wave frequency is proportional to a possible energy level of the oscillator. This "energy quantization" does not occur in classical physics, where the oscillator can have any energy. As in the classical case, the potential for the quantum harmonic oscillator is given by V(x) = \frac{1}{2}m\omega^2 x^2. This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by \psi_n(x) = \sqrt{\frac{1}{2^n\,n!}} \cdot \left(\frac{m\omega}{\pi \hbar}\right)^{1/4} \cdot e^{ - \frac{m\omega x^2}{2 \hbar}} \cdot H_n\left(\sqrt{\frac{m\omega}{\hbar}} x \right), \qquad n = 0,1,2,\ldots. where Hn are the Hermite polynomials, H_n(x)=(-1)^n e^{x^2}\frac{d^n}{dx^n}\left(e^{-x^2}\right) and the corresponding energy levels are E_n = \hbar \omega \left(n + {1\over 2}\right). This is another example illustrating the quantization of energy for bound states. See also[edit] 1. ^ Ben-Menahem, Ari (2009). Historical Encyclopedia of Natural and Mathematical Sciences, Volume 1. Springer. p. 3678. ISBN 3540688315. , Extract of page 3678 2. ^ van Hove, Leon (1958). "Von Neumann's contributions to quantum mechanics" (PDF). Bulletin of the American Mathematical Society 64: Part2:95–99.  3. ^ Max Born & Emil Wolf, Principles of Optics, 1999, Cambridge University Press 4. ^ Mehra, J.; Rechenberg, H. (1982). The historical development of quantum theory. New York: Springer-Verlag. ISBN 0387906428.  5. ^ Kragh, Helge (2002). Quantum Generations: A History of Physics in the Twentieth Century. Princeton University Press. p. 58. ISBN 0-691-09552-3. , Extract of page 58 6. ^ E Arunan (2010). "Peter Debye". Resonance (journal) (Indian Academy of Sciences) 15 (12).  7. ^ Kuhn, T. S. (1978). Black-body theory and the quantum discontinuity 1894-1912. Oxford: Clarendon Press. ISBN 0195023838.  8. ^ Einstein, A. (1905).
"Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" [On a heuristic point of view concerning the production and transformation of light]. Annalen der Physik 17 (6): 132–148. Bibcode:1905AnP...322..132E. doi:10.1002/andp.19053220607.  Reprinted in The collected papers of Albert Einstein, John Stachel, editor, Princeton University Press, 1989, Vol. 2, pp. 149-166, in German; see also Einstein's early work on the quantum hypothesis, ibid. pp. 134-148. 9. ^ "Quantum interference of large organic molecules". Nature.com. Retrieved April 20, 2013.  10. ^ "Quantum - Definition and More from the Free Merriam-Webster Dictionary". Merriam-webster.com. Retrieved 2012-08-18.  11. ^ http://mooni.fccj.org/~ethall/quantum/quant.htm 12. ^ Compare the list of conferences presented here 13. ^ Oocities.com at the Wayback Machine (archived October 26, 2009)[dead link] 14. ^ P.A.M. Dirac, The Principles of Quantum Mechanics, Clarendon Press, Oxford, 1930. 15. ^ D. Hilbert Lectures on Quantum Theory, 1915–1927 16. ^ J. von Neumann, Mathematische Grundlagen der Quantenmechanik, Springer, Berlin, 1932 (English translation: Mathematical Foundations of Quantum Mechanics, Princeton University Press, 1955). 17. ^ H.Weyl "The Theory of Groups and Quantum Mechanics", 1931 (original title: "Gruppentheorie und Quantenmechanik"). 18. ^ Greiner, Walter; Müller, Berndt (1994). Quantum Mechanics Symmetries, Second edition. Springer-Verlag. p. 52. ISBN 3-540-58080-8. , Chapter 1, p. 52 19. ^ "Heisenberg - Quantum Mechanics, 1925–1927: The Uncertainty Relations". Aip.org. Retrieved 2012-08-18.  20. ^ a b Greenstein, George; Zajonc, Arthur (2006). The Quantum Challenge: Modern Research on the Foundations of Quantum Mechanics, Second edition. Jones and Bartlett Publishers, Inc. p. 215. ISBN 0-7637-2470-X. , Chapter 8, p. 215 21. ^ "[Abstract] Visualization of Uncertain Particle Movement". Actapress.com. Retrieved 2012-08-18.  22. ^ Hirshleifer, Jack (2001). 
The Dark Side of the Force: Economic Foundations of Conflict Theory. Cambridge University Press. p. 265. ISBN 0-521-80412-4. , Chapter , p. 23. ^ Dict.cc 24. ^ "Topics: Wave-Function Collapse". Phy.olemiss.edu. 2012-07-27. Retrieved 2012-08-18.  25. ^ "Collapse of the wave-function". Farside.ph.utexas.edu. Retrieved 2012-08-18.  26. ^ "Determinism and Naive Realism : philosophy". Reddit.com. 2009-06-01. Retrieved 2012-08-18.  27. ^ Michael Trott. "Time-Evolution of a Wavepacket in a Square Well — Wolfram Demonstrations Project". Demonstrations.wolfram.com. Retrieved 2010-10-15.  28. ^ Michael Trott. "Time Evolution of a Wavepacket In a Square Well". Demonstrations.wolfram.com. Retrieved 2010-10-15.  29. ^ Mathews, Piravonu Mathews; Venkatesan, K. (1976). A Textbook of Quantum Mechanics. Tata McGraw-Hill. p. 36. ISBN 0-07-096510-2. , Chapter 2, p. 36 30. ^ "Wave Functions and the Schrödinger Equation" (PDF). Retrieved 2010-10-15. [dead link] 31. ^ [1][dead link] 32. ^ Nancy Thorndike Greenspan, "The End of the Certain World: The Life and Science of Max Born" (Basic Books, 2005), pp. 124-8 and 285-6. 33. ^ http://ocw.usu.edu/physics/classical-mechanics/pdf_lectures/06.pdf 34. ^ "The Nobel Prize in Physics 1979". Nobel Foundation. Retrieved 2010-02-16.  35. ^ Carl M. Bender, Daniel W. Hook, Karta Kooner (2009-12-31). "Complex Elliptic Pendulum". arXiv:1001.0131 [hep-th]. 36. ^ See, for example, Precision tests of QED. The relativistic refinement of quantum mechanics known as quantum electrodynamics (QED) has been shown to agree with experiment to within 1 part in 10^8 for some atomic properties. 37. ^ Tipler, Paul; Llewellyn, Ralph (2008). Modern Physics (5 ed.). W. H. Freeman and Company. pp. 160–161. ISBN 978-0-7167-7550-8.  38. ^ "Quantum mechanics course iwhatisquantummechanics". Scribd.com. 2008-09-14. Retrieved 2012-08-18.  39. ^ A. Einstein, B. Podolsky, and N. Rosen, Can quantum-mechanical description of physical reality be considered complete? Phys. Rev.
47 777 (1935). [2] 40. ^ "Between classical and quantum" (PDF). Retrieved 2012-08-19.  41. ^ (see macroscopic quantum phenomena, Bose–Einstein condensate, and Quantum machine) 42. ^ "Atomic Properties". Academic.brooklyn.cuny.edu. Retrieved 2012-08-18.  43. ^ http://assets.cambridge.org/97805218/29526/excerpt/9780521829526_excerpt.pdf 44. ^ "There is as yet no logically consistent and complete relativistic quantum field theory.", p. 4.  — V. B. Berestetskii, E. M. Lifshitz, L P Pitaevskii (1971). J. B. Sykes, J. S. Bell (translators). Relativistic Quantum Theory 4, part I. Course of Theoretical Physics (Landau and Lifshitz) ISBN 0-08-016025-5 45. ^ Stephen Hawking; Gödel and the end of physics 46. ^ "Life on the lattice: The most accurate theory we have". Latticeqcd.blogspot.com. 2005-06-03. Retrieved 2010-10-15.  47. ^ Parker, B. (1993). Overcoming some of the problems. pp. 259–279.  48. ^ The Character of Physical Law (1965) Ch. 6; also quoted in The New Quantum Universe (2003), by Tony Hey and Patrick Walters 49. ^ Weinberg, S. "Collapse of the State Vector", Phys. Rev. A 85, 062116 (2012). 50. ^ "Action at a Distance in Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Plato.stanford.edu. 2007-01-26. Retrieved 2012-08-18.  51. ^ "Everett's Relative-State Formulation of Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Plato.stanford.edu. Retrieved 2012-08-18.  52. ^ The Transactional Interpretation of Quantum Mechanics by John Cramer. Reviews of Modern Physics 58, 647-688, July (1986) 53. ^ See, for example, the Feynman Lectures on Physics for some of the technological applications which use quantum mechanics, e.g., transistors (vol III, pp. 14-11 ff), integrated circuits, which are follow-on technology in solid-state physics (vol II, pp. 8-6), and lasers (vol III, pp. 9-13). 54. ^ Introduction to Quantum Mechanics with Applications to Chemistry - Linus Pauling, E. Bright Wilson. Books.google.com. 1985-03-01. ISBN 9780486648712.
Retrieved 2012-08-18.  55. ^ Anderson, Mark (2009-01-13). "Is Quantum Mechanics Controlling Your Thoughts? | Subatomic Particles". DISCOVER Magazine. Retrieved 2012-08-18.  56. ^ "Quantum mechanics boosts photosynthesis". physicsworld.com. Retrieved 2010-10-23.  57. ^ Davies, P. C. W.; Betts, David S. (1984). Quantum Mechanics, Second edition. Chapman and Hall. p. 79. ISBN 0-7487-4446-0. , Chapter 6, p. 79 58. ^ Baofu, Peter (2007-12-31). The Future of Complexity: Conceiving a Better Way to Understand Order and Chaos. Books.google.com. ISBN 9789812708991. Retrieved 2012-08-18.  59. ^ Derivation of particle in a box, chemistry.tidalswan.com
Journal of Mathematics Volume 2013 (2013), Article ID 520214, 105 pages Research Article Two Parameters Deformations of Ninth Peregrine Breather Solution of the NLS Equation and Multi-Rogue Waves Institut de Mathématiques de Bourgogne, UMR 5584 CNRS, Université de Bourgogne, Faculté des Sciences Mirande, 9 Avenue Alain Savary, BP 47870, 21078 Dijon Cedex, France Received 17 November 2012; Accepted 8 February 2013 Academic Editor: S. T. Ali This paper is a continuation of a recent paper on the solutions of the focusing NLS equation. The representation in terms of a quotient of two determinants gives a very efficient method for determining the famous Peregrine breathers and their deformations. Here we construct Peregrine breathers of order and multi-rogue waves associated by deformation of parameters. The analytical expression corresponding to Peregrine breather is completely given. 1. Introduction From the fundamental work of Zakharov and Shabat, who in 1972 solved the nonlinear Schrödinger equation (NLS) using the inverse scattering method, many studies have been carried out on this equation. Its and Kotlyarov studied the case of periodic and almost periodic algebrogeometric solutions to the focusing NLS equation and constructed these solutions in 1976 [1]. Peregrine constructed the first quasi-rational solutions of the NLS equation in 1983; these are nowadays known worldwide as Peregrine breathers. In 1985, Akhmediev et al. obtained the two-phase almost periodic solution to the NLS equation and obtained the first higher order analogue of the Peregrine breather [2]. Other families of higher order were constructed in a series of articles by Akhmediev et al. [3, 4] using Darboux transformations. In 2010, it was shown in [5] that rational solutions of the NLS equation can be written as a quotient of two Wronskians.
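For orientation, the first-order Peregrine breather mentioned above has a well-known closed form. In one common normalization of the focusing NLS equation, i u_t + (1/2) u_xx + |u|² u = 0, it reads u(x,t) = e^{it}[1 − 4(1 + 2it)/(1 + 4x² + 4t²)], with peak amplitude 3 on a unit background (the 2n+1 amplitude rule for order n = 1, consistent with the maximum 19 claimed for order 9). Normalizations differ between papers, so the following is only an illustrative check, not this paper's construction:

```python
# First-order Peregrine breather (one standard normalization of focusing NLS).
import cmath

def peregrine(x, t):
    """u(x,t) = e^{it} [1 - 4(1 + 2it) / (1 + 4x^2 + 4t^2)]."""
    return cmath.exp(1j * t) * (1 - 4 * (1 + 2j * t) / (1 + 4 * x**2 + 4 * t**2))

peak = abs(peregrine(0.0, 0.0))       # amplitude at the central spike
background = abs(peregrine(50.0, 0.0))  # far from the spike

print(peak)        # 3.0: peak = 2n + 1 with n = 1
print(background)  # ~1.0: solution decays to the plane-wave background
```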
Recently, in [6] a new representation of the solutions of the NLS equation has been constructed in terms of a ratio of two Wronskian determinants of even order composed of elementary functions; the related solutions of NLS are of order . Passing to the limit as some parameter tends to , we obtain families of multi-rogue wave solutions of the focusing NLS equation depending on a certain number of parameters. This allows one to recognize the famous Peregrine breather [7] and also the higher-order Peregrine breathers constructed by Akhmediev et al. [3, 8]. Recently, another representation of the solutions of the focusing NLS equation, as a ratio of two determinants, has been given in [9] using a generalized Darboux transform. A new approach was taken in [10], which gives a determinant representation of solutions of the focusing NLS equation, obtained from the Hirota bilinear method, derived by reduction of the Gram determinant representation for the Davey-Stewartson system. Here, we construct the breather of order , which shows the efficiency of this method. 2. Expression of Solutions of NLS Equation in terms of Wronskian Determinant and Quasi-Rational Limit 2.1. Solutions of the NLS Equation in terms of Functions The solution of the NLS equation is given in terms of truncated theta function by (see [11]) where In this formula, , , , and are functions of the parameters , ; they are defined by the formulas The parameters , , are real numbers such that Condition (5) implies that Complex numbers are defined in the following way: , , are arbitrary real numbers. 2.2. Relation between and Fredholm Determinant The function defined in (3) can be rewritten with a summation in terms of subsets of , We choose in formula (3) as for , and for . Let be the unit matrix and the matrix defined by Then has the following form: From the beginning of this section, has the same expression as in (10), so we have clearly the equality Then the solution of NLS equation takes the form 2.3.
Link between Fredholm Determinants and Wronskians We consider the following functions: We use the following notations: is the Wronskian . We consider the matrix defined by Then we have the following statement. Theorem 1. Consider where Proof. We start by removing the factor in each row in the Wronskian for . Then with The determinant can be written as where , , and , , , , , and , , . Denoting , , the determinant of is clearly equal to Then we use the following lemma. Lemma 2. Let , , and , the matrix formed by replacing the th row of by the th row of . Then Proof. For , the transposed matrix in the cofactors of , we have the well-known formula . So it is clear that . The general term of the product can be written as We get Thus, . According to the relation (22) of the previous lemma, we get where is the matrix formed by replacing the th row of by the th row of defined previously. We compute and we get We can simplify the quotient So can be expressed as Then dividing each column by , , and multiplying each row by , , we get and therefore the Wronskian can be written as It follows that So, the solution of NLS equation takes the form 2.4. Wronskian Representation of Solutions of NLS Equation From the previous section, we get the following result. Theorem 3. Function defined by is a smooth solution of the focusing NLS equation depending on two real parameters, and . 2.5. Quasi-Rational Solutions of NLS Equation in terms of a Limit of a Ratio of Wronskian Determinants In the following, we take the limit when the parameters for and for . For simplicity, we denote the term by . We consider the parameter written in the form When goes to , we realize limited expansions at order , for , of the terms The parameters and , for , are chosen in the form Then we have the following result. Theorem 4. With the parameters defined by (35), and chosen as in (37), for , the function defined by is a quasi-rational solution of the NLS equation (1) depending on two parameters. 3.
Quasi-Rational Solutions of Order 9

We have already constructed in [6] solutions for the cases up to , and this method gives the same results; we do not reproduce them here. We give only the solutions of the NLS equation in the case . Because of the length of the expressions of the polynomials and in the solutions of the NLS equation defined by we give them only in the appendix. In the following cases, we give only the plots of the modulus of in the coordinates.

For , , we obtain Akhmediev’s breather; we get the expected amplitude of for the spike (Figure 1).

Figure 1: Solution of NLS, , , .

If we choose , , we obtain Figure 2.

Figure 2: Solution of NLS, , , .

If we choose , , we have Figure 3.

Figure 3: Solution of NLS, , , .

It can be noted that Figures 2 and 3 are closely analogous to Figure 2(b) in the paper [12] of Kedziora et al. In that work ( ), it was pointed out that the shift (here corresponding to and nonzero) pulls out a ring of fundamental rogue elements, corresponding to 15 of them there and to 17 here. It leaves behind a rogue wave of order , that is, 6 there (amplitude = 13) and 7 here (amplitude = 15). Of course, Figure 1 here is analogous to Figure 2(a) there (amplitudes 19 and 17, resp.).

4. Conclusion

The method used in the present paper provides a powerful tool, as the explicit analytical formulation of the ninth order solution shows. To my knowledge, it is the first time that the analytical expression of the Peregrine breather of order nine has been presented. It confirms the conjecture about the shape of the breather in the coordinates, the maximum of amplitude equal to , and the degree of the polynomials in and here equal to . For and nonzero, the maximum is less than that, as discussed above and seen in Figures 2 and 3. In the following, we choose all the parameters and equal to ; here . The solution of NLS equation takes the form with
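For orientation (several displayed formulas above were lost in extraction), a standard form of the focusing NLS equation and of the amplitude/degree statement discussed in the conclusion can be written as follows; the normalization is an assumption, not taken from the recovered text:

```latex
% One common normalization of the focusing NLS equation (assumed):
i v_t + v_{xx} + 2\,|v|^{2} v = 0 .

% Conjectured features of the order-N Peregrine breather P_N referred to
% above, written here for the order N = 9 treated in this paper:
\max_{x,t} |v| = 2N+1 = 19 , \qquad \deg_x = \deg_t = N(N+1) = 90 .
```

These values are consistent with the amplitudes 19 and 17 quoted above for the order-9 and order-8 breathers, respectively.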
The Input File (CTRL)

This web page explains the structure of the main input file (called the ctrl file). You can automatically generate a template for this file from bare structural and chemical data using the blm utility. Input can also be supplied through a parallel input stream, namely the command-line switches. Switches are flagged by command-line arguments beginning with a dash. They serve many purposes: some switches apply to all executables, others are specific to one or a few of them. Command-line arguments can also modify the contents of the input file described on this page: variables can be assigned from the command line before the input file is parsed. Switches are documented in the command-line documentation for most executables; also, any executable provides a brief summary of most switches it recognizes if you run it with --help, e.g. lmf --help.

As explained below, data is identified by a label called a token. A token is part of a tag, which is the full label with multiple parts, in the logical structure of a tree. The top-level or first part (branch) we denote as the category; the last is the token, and the data to be read immediately follows the token. The tag does not itself appear in the input file, only its branches such as the category and token, as explained below. This web page documents the contents of each token, organized by category, in the same way the ctrl file is structured. A more detailed description of the syntax can be found in the input file manual.

Note also that the reader does not parse lines directly as read from the ctrl file. The file is first passed through a preprocessor, which can modify the contents of the input. See here for complete documentation of the preprocessor’s syntax.

Note: the full name of the input file is ctrl.ext; you supply the extension on the command line. (If the extension is omitted, dat is used.) The same extension will be tacked onto the names of most other files read or generated by the codes.
Table of Contents

1. Input File Structure

Here is a sample input file for the compound Bi2Te3 written for the lmf code.

VERS  LM:7 FP:7
HAM   AUTOBAS[PNU=1 LOC=1 LMTO=3 MTO=1 GW=0]
ITER  MIX=B2,b=.3 NIT=10 CONVC=1e-5
BZ    NKABC=3 METAL=5 N=2 W=.01
STRUC NSPEC=2 NBAS=5 NL=4
      PLAT=  1          0          4.0154392
            -0.5        0.8660254  4.0154392
            -0.5       -0.8660254  4.0154392
SPEC  ATOM=Te Z= 52 R= 2.870279
      ATOM=Bi Z= 83 R= 2.856141
SITE  ATOM=Te POS=  0.0000000  0.0000000  0.0000000
      ATOM=Te POS= -0.5000000 -0.8660254  1.4616199
      ATOM=Te POS=  0.5000000  0.8660254 -1.4616199
      ATOM=Bi POS=  0.5000000  0.8660254  0.8030878
      ATOM=Bi POS= -0.5000000 -0.8660254 -0.8030878

Each element of data follows a token. The token tells the reader what the data signifies. Each token belongs to a category. In the sample input file above, VERS, HAM, ITER, BZ, STRUC, SPEC, SITE are categories that organize the input by topic. Any text that begins in the first column is a category. The full identifier is called the tag and it has the logical structure of a tree. The tag’s trunk (or top level) is the category and the last branch is the token, e.g. GMAX associated with HAM and PLAT associated with STRUC. After the token comes the data to be parsed. In most cases category and token comprise the entire tag, e.g. BZ_METAL. Thus the category groups tags into themes, and the token identifies a particular type of data within the theme. Sometimes a tag has three branches, e.g. HAM_AUTOBAS_LOC.

Note: the input files described here (ctrl.ext) can be automatically constructed from init files using the blm utility. init files and ctrl files are structured with categories and tokens in essentially the same way. For another description of categories and tokens, see the init file documentation.

Tags, Categories and Tokens

The input file offers a very flexible free format: tags identify data to be read by a program; e.g. the parser reads a number (.01) from token W=. In this case W= belongs to the BZ category, so the full tag name is BZ_W.
A category holds information for a family of data; for example BZ contains parameters associated with Brillouin zone integration. The entire input system has at present a grand total of 18 categories, though any one program uses only a subset of them.

Consider the Brillouin zone integration category. You plan to carry out the BZ integration using the Methfessel-Paxton sampling method. M-P integration has two parameters: polynomial order n and gaussian width w. Two tags are used to identify them: BZ_N and BZ_W; they are usually expressed in the input file as follows:

BZ N=2 W=.01

This format style is the most commonly used because it is clean and easy to read; but it conceals the tree structure a little. The same data can equally be written:

BZ[ N=2 W=.01]

Now the tree structure is apparent: [..] delimits the scope of tag BZ. Any tag that starts in the first column is a category, so any non-white character appearing in the first column automatically starts a new category, and also terminates any prior category. N= and W= mark tokens BZ_N and BZ_W. Apart from the special use of the first column to identify categories, data is largely free-format, though there are a few mild exceptions. Thus:

BZ N=2
BZ W=.01 N=2
BZ[ W=.01 N=2]

all represent the same information. Note: if two categories appear in an input file, only the first is used. Subsequent categories are ignored. Generally, only the first tag is used when more than one appears within a given scope.

Usually the tag tree has only two levels (category and token) but not always. For example, data associated with atomic sites must be supplied for each site. In this case the tree has three levels, e.g. SITE_ATOM_POS. Site data is typically represented in a format along the following lines:

SITE ATOM=Ga POS= 0 0 0 RELAX=T
     ATOM=As POS= .25 .25 .25

The scope of SITE starts at “SITE” and terminates just before “END”. There will be multiple instances of the SITE_ATOM tag, one for each site.
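To make the category/token tree concrete, here is a small illustrative sketch (not Questaal code; the function name is invented) that flattens the simple two-level style shown above into full tag names:

```python
# Illustrative sketch only -- not the actual Questaal parser.
# It handles the simple "CATEGORY TOKEN=value ..." style shown above.

def parse_tags(text):
    """Map 'BZ N=2 W=.01' style input to {'BZ_N': 2.0, 'BZ_W': 0.01}."""
    tags = {}
    category = None
    for line in text.splitlines():
        if not line.strip():
            continue
        # Any non-white character in the first column starts a new category.
        if line[0] not in " \t":
            category, _, rest = line.partition(" ")
        else:
            rest = line
        for item in rest.split():
            if "=" in item:
                token, _, value = item.partition("=")
                # Only the first occurrence of a tag is used.
                tags.setdefault(f"{category}_{token}", float(value))
    return tags

print(parse_tags("BZ N=2 W=.01"))   # {'BZ_N': 2.0, 'BZ_W': 0.01}
```

The real parser is far more general (bracketed scopes, three-level tags, non-numeric data); the sketch only illustrates how a category plus a token combine into one tag name.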
The scope of the first instance begins with the first occurrence of ATOM and terminates just before the second:

ATOM=Ga POS= 0 0 0 RELAX=T

And the scope of the second SITE_ATOM is

ATOM=As POS= .25 .25 .25

Note that ATOM simultaneously acts like a token pointing to data (e.g. Ga) and as a tag holding tokens within it, in this case SITE_ATOM_POS and (for the first site) SITE_ATOM_RELAX. Some tags are required; others are optional; still others (in fact most) may not be used at all by a particular program. If a code needs site data, SITE_ATOM_POS is required, but SITE_ATOM_RELAX is probably optional, or not read at all.

Note: this manual contains a more careful description of the input file’s syntax.

Input lines are passed through a preprocessor, which provides wide flexibility in how input files are structured. The preprocessor has many features in common with a programming language, including the ability to declare and assign variables and evaluate algebraic expressions; and it has constructs for branching and looping, to make possible multiple or conditional reading of input lines. For example, suppose that through a prior preprocessor instruction you have declared a variable range and assigned it the value 3. This line in the input file: is turned into: The preprocessor treats text inside brackets {…} as an expression (usually an algebraic expression), which is evaluated and rendered back as an ASCII string. See this annotated lmf output for an example.

The preprocessor’s programming language makes it possible for a single file to serve as input for many materials systems in the manner of a database, or as documentation. Also you can easily vary input conditions in a parametric fashion. Other files besides ctrl.ext are first parsed by the preprocessor — files for site positions and Euler angles for noncollinear magnetism, among others, are also read through the preprocessor.

2.
Help with finding tokens

Seeing the effect of the preprocessor

The preprocessor can act in nontrivial ways. To see the effect of the preprocessor, use the --showp command-line option. See this annotated output for an example.

Finding what tags the parser seeks

It is often the case that you want to input some information but don’t know the name of the tag you need. Try searching this page for a keyword. You can list each tag a particular tool reads, together with a synopsis of its function, by adding --input to the command line. Search for keywords in the text to find what you need. Take for example:

lmchk --input

This switch tells the parser not to try to read anything, but to print out information about what it would try to read. Several useful bits of information are given, including a brief description of each tag in the following format. A snippet of the output is reproduced below:

Tag               Input  cast (size,min)
IO_VERBOS         opt    i4v  5, 1      default = 35
  Verbosity stack for printout. May also be set from the command-line: --pr#1[,#2]
IO_IACTIV         opt    i4   1, 1      default = 0
  Turn on interactive mode. May also be controlled from the command-line: --iactiv or --iactiv=no
STRUC_FILE        opt    chr  1, 0
  (Not used if data read from EXPRESS_file)
  Name of site file containing basis and lattice information.
  Read NBAS, PLAT, and optionally ALAT from site file, if specified.
  Otherwise, they are read from the ctrl file.
STRUC_PLAT        reqd   r8v  9, 9
  Primitive lattice vectors, in units of alat
SPEC_ATOM_LMX     opt    i4   1, 1      (default depends on prior input)
  l-cutoff for basis
SITE_ATOM_POS     reqd*  r8v  3, 1
  Atom coordinates, in units of alat
  - If preceding token is not parsed, attempt to read the following:
SITE_ATOM_XPOS    reqd   r8v  3, 1
  Atom coordinates, as (fractional) multiples of the lattice vectors

The table tells you IO_VERBOS and IO_IACTIV are optional tags; default values are 35 and 0, respectively.
A single integer will be read from the latter tag, and between one and five integers from IO_VERBOS. There is a brief synopsis explaining the function of each. For these particular cases, the output gives alternative means to perform equivalent functions through command-line switches.

STRUC_FILE=fname is optional. Here fname is a character string: it should be the site file name fname.ext from which lattice information is read. If you do use this tag, other tags in the STRUC category (NBAS, PLAT, ALAT) may be omitted. Otherwise, STRUC_PLAT is required input; the parser requires 9 numbers. The synopsis also tells you that you can specify the same information using EXPRESS_file=fname (see the EXPRESS category below).

SPEC_ATOM_LMX is optional input whose default value depends on other input (in this case, the atomic number).

SITE_ATOM_POS is required input in the sense that you must supply either it or SITE_ATOM_XPOS. The * in reqd* indicates that the information in SITE_ATOM_POS can be supplied by an alternate tag – SITE_ATOM_XPOS in this case. Note: if site data is given through a site file, all the other tags in the SITE category will be ignored.

The cast (real, integer, character) of each tag is indicated, and also how many numbers are to be read. Sometimes tags will look for more than one number, but allow you to supply fewer. For example, BZ_NKABC in the snippet below looks for three numbers to determine the k-mesh, which are the number of divisions along each of the reciprocal lattice vectors. If you supply only one number it is copied to elements 2 and 3.

BZ_NKABC          reqd   i4v  3, 1
  (Not used if data read from EXPRESS_nkabc)
  No. qp along each of 3 lattice vectors.
  Supply one number for all vectors or a separate number for each vector.

Command-line options

--help performs a similar function for the command-line arguments: it prints out a brief summary of arguments effective in the executable you are using.
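The fill-in behavior for list tokens like BZ_NKABC described above can be sketched as follows (illustrative only; the helper name is invented):

```python
# Illustrative sketch, assuming the documented rule: a token that wants
# three numbers copies the first entry into any missing trailing entries.
def fill_kmesh(values, size=3):
    """'NKABC=3' -> [3, 3, 3]; 'NKABC=4 4 2' -> [4, 4, 2]."""
    values = list(values)
    if not values:
        raise ValueError("at least one number is required")
    while len(values) < size:
        values.append(values[0])
    return values[:size]

print(fill_kmesh([3]))        # [3, 3, 3]
print(fill_kmesh([4, 4, 2]))  # [4, 4, 2]
```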
A more complete description of general-purpose command-line options can be found on this page. See this annotated lmfa output for an example.

Displaying tags read by the parser

To see what is actually read by a particular tool, run your tool with --show=2 or --show. See the annotated lmf output for an example. These special modes are summarized here.

3. The EXPRESS category

There is one special category, EXPRESS, whose purpose is to simplify and streamline input files. Tags in EXPRESS are effectively aliases for tags in other categories; e.g. reading EXPRESS_gmax reads the same input as HAM_GMAX. If you put a tag into EXPRESS, it will be read there and any tag appearing in its usual location will be ignored. Thus including GMAX in HAM would have no effect if gmax is present in EXPRESS. EXPRESS collects the most commonly used tags in one place. There is usually a one-to-one correspondence between a tag in EXPRESS and its usual location. The sole exception is EXPRESS_file, which performs the same function as the pair of tags STRUC_FILE and SITE_FILE. Thus when using EXPRESS_file all structural data is supplied through the site file.

4. Input File Categories

This section documents the tokens read from the input file, arranged by category, and provides some description of the input and purpose of the tags in each category. Remember that each executable reads only the tokens specific to it.

Note: The tables below list the input system’s tokens and their function. Tables are organized by category.

• The Arguments column refers to the cast belonging to the token (“l”, “i”, “r”, and “c” refer to logical, integer, floating-point and character data, respectively)
• The Program column indicates which programs the token is specific to, if any
• The Optional column indicates whether the token is optional (Y) or required (N)
• The Default column indicates the default value, if any
• The Explanation column describes the token’s function.
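The per-token metadata these columns describe (cast, count, optionality, default) can be modeled as a small record; an illustrative sketch with invented field names, not part of any Questaal tool:

```python
from dataclasses import dataclass
from typing import Optional, Sequence

# Illustrative model of one row of the token tables below (names invented).
@dataclass
class TokenSpec:
    tag: str                 # full tag, e.g. "BZ_NKABC"
    cast: str                # "l", "i", "r", or "c"
    size: int                # how many values the parser looks for
    minimum: int             # fewest values you may supply
    optional: bool
    default: Optional[Sequence] = None

# BZ_NKABC: required, integer cast, wants 3 numbers but accepts 1.
nkabc = TokenSpec(tag="BZ_NKABC", cast="i", size=3, minimum=1, optional=False)
print(nkabc.tag, nkabc.size, nkabc.minimum)  # BZ_NKABC 3 1
```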
See Table of Contents

Category BZ holds information concerning the numerical integration of quantities such as energy bands over the Brillouin Zone (BZ). The LMTO programs permit both sampling and tetrahedron integration methods. Both are described in bzintegration, where the relative merits of the two methods are discussed. As implemented, both methods use a uniform, regularly spaced mesh of k-points, which divides the BZ into microcells as described here. Normally you specify this mesh by the number of divisions of each of the three primitive reciprocal lattice vectors (which are the inverse transpose of the lattice vectors PLAT); see NKABC below. These tokens are read by programs that make hamiltonians in periodic crystals (lmf, lm, lmgf, lmpg, tbe). Some tokens apply only to codes that make energy bands (lmf, lm, tbe).

GETQP  l  (optional, default F)
  Read the list of k-points from disk file qpts.ext. This is a special mode, and you normally would let the program choose its own mesh by specifying the number of divisions (see NKABC). If this token is not parsed, the program will attempt to parse NKABC.

NKABC  1 to 3 i  (required)
  The number of divisions in the three directions of the reciprocal lattice vectors. k-points are generated along a uniform mesh on each of these axes. (This is the optimal general-purpose quadrature for periodic functions, as it integrates the largest number of sine and cosine functions exactly for a specified number of points.) The parser will attempt to read three integers. If only one number is read, the missing second and third entries assume the value of the first. Information from NKABC, together with BZJOB below, contains specifications equivalent to the widely used “Monkhorst-Pack” scheme, but it is more transparent and easier to understand. The number of k-points in the full BZ is the product of these numbers; the number of irreducible k-points may be reduced by symmetry operations.
PUTQP  l  (optional, default F)
  If T, write out the list of irreducible k-points to file qpts, and the weights for tetrahedron integration if available.

BZJOB  1 to 3 i  (optional, default 0)
  Controls the centering of the k-points in the BZ:
  0: the mesh is centered so that one point lies at the origin.
  1: points symmetrically straddle the origin.
  Three numbers are supplied, corresponding to each of the three primitive reciprocal lattice vectors. As with NKABC, if only one number is read the missing second and third entries assume the value of the first.

I123  l  (optional, default T)
  Controls looping order when generating the regular mesh of k-points.
  T: inner loop is along QLAT(1); outer loop along QLAT(3)
  F: inner loop is along QLAT(3); outer loop along QLAT(1)

METAL  i  (lmf, lm, tbe; optional, default 5)
  Specifies how the weights are generated for Brillouin zone integration. For a detailed description, see this page. The METAL token accepts the following:
  0: System assumed to be an insulator; weights determined a priori.
  2: Integration weights are read from file wkp.ext, which will have been generated in a prior band pass. If wkp.ext is unavailable, the program will temporarily switch to METAL=3.
  5: Like METAL=3 in which two passes are made, but eigenvectors are kept in memory, so there is no additional overhead in making the first pass. This is the recommended mode for lmf unless you are working with a large system and need to conserve memory.

TETRA  i  (lmf, lm, tbe; optional, default 1)
  Selects the BZ integration method.
  0: Methfessel-Paxton sampling integration. Tokens NPTS, N, W, EF0, DELEF, DOS (see below) are relevant to this integration scheme.
  1: tetrahedron integration, with Bloechl weights.

N  i  (lmf, lm, tbe; optional, default 0)
  Polynomial order for M-P sampling integration. (Not used with tetrahedron integration or for insulators.)
  0: integration uses the standard gaussian method.
  >0: integration uses generalized gaussian functions, i.e. a polynomial of order N × gaussian to generate integration weights.
  −1: use the Fermi function rather than gaussians to broaden the δ-function.
  This generates the actual electron (Fermi) distribution for a finite temperature.
  Add 100: by default, if a gap is found separating occupied and unoccupied states, the program will treat the system as an insulator, even when METAL>0. To suppress this, add 100 to N (use −101 for the Fermi distribution).

W  r  (lmf, lm, tbe; optional, default 5e-3)
  Case BZ_N≥0: broadening (gaussian width) for gaussian sampling integration (Ry).
  Case BZ_N<0: kBT (Ry), where kB is the Boltzmann constant and T the temperature.
  W is not used for insulators or when using tetrahedron integration.

EF0  r  (lmf, lm, tbe; optional, default 0)
  Initial guess at the Fermi energy. Used when TETRA=0, or when BZ_METAL=4 (which does not use the tetrahedron method for the density).

DELEF  r  (lmf, lm, tbe; optional, default 0.05)
  Initial uncertainty in the Fermi level for sampling integration. Used when TETRA=0, or when BZ_METAL=4 (which does not use the tetrahedron method for the density). As the system approaches self-consistency this window is reduced.

ZBAK  r  (lmf, lm; optional, default 0)
  Homogeneous background charge.

SAVDOS  i  (lmf, lm, tbe; optional, default 0)
  0: does not save the dos on disk.
  1: writes the total density of states on NPTS energy mesh points to disk file dos.ext.
  2: write weights to disk for partial DOS (does not work for lmf; in the ASA this occurs automatically).
  4: same as (2), but write weights m-resolved (ASA).
  Notes: 1. SAVDOS>0 uses the DOS and NPTS tags also. 2. You may also cause lm or lmf to generate m-resolved dos from the command line (see --pdos).

DOS  2 r  (optional, default -1,0)
  Energy window over which the DOS is accumulated (Ry). Needed either for sampling integration or if SAVDOS>0.

NPTS  i  (optional, default 1001)
  Number of points in the density-of-states energy mesh used in conjunction with sampling integration. Needed either for sampling integration or if SAVDOS>0.

EFMAX  r  (lmf, lm, tbe; optional, default 2)
  Only eigenvectors whose eigenvalues are less than EFMAX are computed; this improves execution efficiency.

NEVMX  i  (lmf, lm, tbe; optional, default 0)
  >0: find at most NEVMX eigenvectors.
  =0: program uses an internal default.
  <0: no eigenvectors are generated (and correspondingly, nothing associated with eigenvectors, such as the density).
  Caution: if you want to look at partial DOS well above the Fermi level (which usually comes out around 0), you must set EFMAX and NEVMX high enough to encompass the range of interest.

ZVAL  r  (optional, default: all LDA)
  Number of electrons to accumulate in the BZ integration. Normally zval is computed by the program.

NOINV  l  (lmf, lm, tbe; optional, default F)
  Suppress the automatic addition of the inversion to the list of point group operations. Usually the inversion symmetry can be included in the determination of the irreducible part of the BZ because of time-reversal symmetry. There may be cases where this symmetry is broken: e.g. when spin-orbit coupling is included or when the (beyond-LDA) self-energy breaks time-reversal symmetry. In most cases, the program will automatically disable this addition in cases where it knows the symmetry is broken.

FSMOM  2 r  (lmf, lm; optional, default 0 0)
  Set the global magnetic moment (collinear magnetic case). In the fixed-spin-moment method, a spin-dependent potential shift Beff is added to constrain the total magnetic moment to the value assigned by FSMOM. No constraint is imposed if this value is zero (the default). The optional second argument #2 supplies an initial Beff. It is applied whether or not the first argument #1 is 0. If #1 ≠ 0, Beff is made consistent with it.

DMATK  l  (lmf, lmgf; optional, default F)
  Calculate the density matrix. Implementation still not ready.

INVIT  l  (lmf, lm; optional, default T)
  Generate eigenvectors by inverse iteration (this is the default). It is more efficient than the QL method, but occasionally fails to find all the vectors. When this happens, the program stops with the message:
  DIAGNO: tinvit cannot find all evecs
  If you encounter this message set INVIT=F.

EMESH  r  (lmgf, lmpg; optional, default 10,0,-1,…)
  Parameters defining the contour integration for Green’s function methods. See also the GF documentation.
  1. number of energy points n.
  2.
  contour type:
    0: uniform mesh of nz points; real part of z between emin and emax.
    1: same as 0, but reverse sign of Im z.
    10: elliptical contour.
    11: same as 10, but reverse sign of Im z.
    100s digit used for special modifications:
      add 100 for nonequilibrium part using Im(z)=delne;
      add 200 for nonequilibrium part using Im(z)=del00;
      add 300 for mixed elliptical contour + real axis to find the Fermi level;
      add 1000 to set the nonequilibrium part only.
  3. lower bound for energy contour emin (on the real axis).
  4. upper bound for energy contour emax, e.g. Fermi level (on the real axis).
  5. (elliptical contour) eccentricity: ranges between 0 (circle) and 1 (line).
     (uniform contour) Im z.
  6. (elliptical contour) bunching parameter eps: ranges between 0 (distributed symmetrically) and 1 (bunched toward emax).
     (uniform contour) not used.
  7. (nonequilibrium GF, lmpg) nzne = number of points on the nonequilibrium contour.
  8. (nonequilibrium GF, lmpg) vne = difference in Fermi energies of right and left leads.
  9. (nonequilibrium GF, lmpg) delne = Im part of E for the nonequilibrium contour.
  10. (nonequilibrium GF, lmpg) substitutes for delne when making the surface self-energy.

MULL  i  (tbe; optional, default 0)
  Mulliken population analysis. Mulliken population analysis is also implemented in lmf, but you specify the analysis with a command-line argument.

See Table of Contents

This category enables users to declare variables in algebraic expressions. The syntax is a string of declarations inside the category, e.g.:

CONST a=10.69 nspec=4+2

Variables declared this way are similar to, but distinct from, variables declared for the preprocessor, such as

% const nbas=5

In the latter case the preprocessor makes a pass, and may use expressions involving variables declared by e.g. “% const nbas=5” to alter the structure of the input file. Variables declared for use by the preprocessor lose their definition after the preprocessor completes.
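The brace-substitution step performed by the preprocessor (e.g. turning NBAS={nbas} into NBAS=5 once nbas has been declared) can be sketched as follows (illustrative only; the real preprocessor's expression language is much richer, and the function name is invented):

```python
import re

# Illustrative sketch of {...} expression substitution, assuming variables
# have already been declared in a prior preprocessor pass.
def substitute(line, variables):
    """Evaluate {...} expressions against declared variables and render
    the result back into the line as an ASCII string."""
    def render(match):
        expr = match.group(1)
        # eval() stands in for the preprocessor's own expression evaluator.
        return str(eval(expr, {"__builtins__": {}}, dict(variables)))
    return re.sub(r"\{([^{}]+)\}", render, line)

print(substitute("STRUC ALAT=a NSPEC=nspec NBAS={nbas}", {"nbas": 5}))
# STRUC ALAT=a NSPEC=nspec NBAS=5
```

Note that the CONST-style variables (a, nspec) pass through untouched: they are resolved later, when the categories are parsed, not by the preprocessor.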
The following code segment illustrates both types:

% const nbas=5
CONST a=10.69 nspec=4
STRUC ALAT=a NSPEC=nspec NBAS={nbas}

After the preprocessor compiles, the input file appears as:

CONST a=10.69 nspec=4
STRUC ALAT=a NSPEC=nspec NBAS=5

When the CONST category is read (it is read before other categories), variables a and nspec are defined and used in the STRUC category.

See Table of Contents

Contains parameters for molecular statics and dynamics. For a tutorial with molecular statics, see this page.

NIT  i  (lmf, lmmc, tbe; optional)
  Maximum number of relaxation steps (molecular statics).

SSTAT[…]  (lm, lmgf; optional)
  (noncollinear magnetism) Parameters specifying how spin statics (rotation of the quantization axes to minimize the energy) is carried out.

SSTAT_MODE  i  (lm, lmgf; default 0)
  0: no spin statics or dynamics.
  -1: Landau-Gilbert spin dynamics.
  1: spin statics: quantization axis determined by making the output density matrix diagonal.
  2: spin statics: size and direction of relaxation determined from the spin torque.
  Add 10 to mix angles independently of P,Q (Euler angles are mixed with prior iterations to accelerate convergence).
  Add 1000 to mix Euler angles independently of P,Q.

SSTAT_SCALE  i  (lm, lmgf; default 0)
  (used with mode=2) scale factor amplifying the magnetic forces.

SSTAT_MAXT  i  (lm, lmgf; default 0)
  Maximum allowed change in angle.

SSTAT_TAU  i  (lm, lmgf; default 0)
  (used with mode=-1) time step.

SSTAT_ETOL  i  (lm, lmgf; default 0)
  (used with mode=-1) Set tau=0 this iteration if etot-ehf>ETOL.

MSTAT[…]  (lmf, lmmc, tbe; optional)
  (molecular statics) Parameters specifying how site positions are relaxed given the internuclear forces.

MSTAT_MODE  i  (lmf, lmmc, tbe; default 0)
  0: no relaxation.
  4: relax with the conjugate-gradients algorithm (not generally recommended).
  5: relax with the Fletcher-Powell algorithm. A minimum is found along a line; then a new line is chosen. The Hessian matrix is updated only at the start of a new line minimization. Fletcher-Powell is more stable but usually less efficient than Broyden.
  6: relax with the Broyden algorithm.
  This is essentially a Newton-Raphson algorithm, where the Hessian matrix and the direction of descent are updated each iteration.

MSTAT_HESS  l  (lmf, lmmc, tbe; default T)
  T: read the Hessian matrix from file, if it exists.
  F: assume the initial Hessian is the unit matrix.

MSTAT_XTOL  r  (lmf, lmmc, tbe; optional, default 1e-3)
  Convergence criterion for the change in atomic displacements.
  >0: criterion satisfied when xtol > net shift (shifts summed over all sites).
  <0: criterion satisfied when xtol > max shift of any site.
  0: do not use this criterion to check convergence.
  Note: When molecular statics are performed, either GTOL or XTOL must be specified. Both may be specified.

MSTAT_GTOL  r  (lmf, lmmc, tbe; optional, default 0)
  Convergence criterion for the tolerance in forces.
  >0: criterion satisfied when gtol > “net” force (forces summed over all sites).
  <0: criterion satisfied when gtol > max absolute force at any site.
  0: do not use this criterion to check convergence.

MSTAT_STEP  r  (lmf, lmmc, tbe; optional, default 0.015)
  Initial (and maximum) step length.

MSTAT_NKILL  i  (lmf, lmmc, tbe; optional, default 0)
  0: never remove the Hessian.
  >0: remove the Hessian after NKILL iterations.
  <0: remove the Hessian after -NKILL iterations, and also remove all memory of the Hessian in the relaxation algorithm.

MSTAT_LASTR  i  (lmf; optional, default -1)
  Controls how positions are set in the restart file, on final exit from the relaxation algorithm.
  −1: restore to the position of minimum gradient.
  0: retain the positions of the last cycle.
  1: use the algorithm’s estimates for the next cycle (same positions written with --wpos).

MSTAT_PDEF=  r  (lmf, lmmc, tbe; optional, default 0 0 0 …)
  Lattice deformation modes (not documented).

MD[…]  (lmmc, tbe; optional)
  Parameters for molecular dynamics.

MD_MODE  i  (lmmc; default 0)
  0: no MD
  1: NVE
  2: NVT
  3: NPT

MD_TSTEP  r  (lmmc; optional, default 20.671)
  Time step (a.u.). NB: 1 fs = 20.67098 a.u.

MD_TEMP  r  (lmmc; optional, default 0.00189999)
  Temperature (a.u.). NB: 1 deg K = 6.3333e-6 a.u.

MD_TAUP  r  (lmmc; optional, default 206.71)
  Thermostat relaxation time (a.u.)

MD_TIME  r  (lmmc; default 20671000)
  Total MD time (a.u.)

MD_TAUB  r  (lmmc; optional, default 2067.1)
  Barostat relaxation time (a.u.)

The following are specific to lmmag.
SDYN[…]  (required)
  Subcategory setting parameters for spin dynamics (LL equations) with thermostat.

SDYN_KT  r  (required)
  Temperature, in a.u.

SDYN_TS  r  (required)
  Time step, in a.u.

SDYN_TEQU  r  (optional, default 0)
  Equilibration time, a.u.

SDYN_TTOT  r  (required)
  Duration of the total simulation, in a.u.

MMHAM  c  (required)
  Rules for the micromagnetics hamiltonian.

GD[…]  (optional)
  Subcategory to set parameters for thermostat global daemons.

GD_NTHERM  i  (optional, default 3)
  Number of thermostats.

GD_MODET  i  (optional, default 31,32,33)
  Thermostat mode(s).

GD_CT  r  (optional, default 1)
  Thermostat coefficient(s).

BSINT[…]  (optional)
  Subcategory to set parameters for Bulirsch-Stoer integration.

BSINT_NEW  l  (optional, default T)
  Start a new SD run, Bulirsch-Stoer integration.

BSINT_TOL  r  (required)
  Tolerance in the numerical integration.

BSINT_TS0  r  (optional, default 0)
  Minimum time step in units of TS.

BSINT_MX  i  (optional, default 7)
  Maximum order of rational-function extrapolation.

BSINT_MI  i  (optional, default 11)
  Maximum number of midpoint rules to invoke.

BSINT_NSEQ  i  (optional, default 2)
  Sequence of number of midpoint divisions.

See Table of Contents

Category EWALD holds information controlling the Ewald sums for structure constants entering into, e.g., the Madelung summations and Bloch-summed structure constants (lmf). Most programs use quantities in this category to carry out Ewald sums (exceptions are lmstr and the molecules code lmmc).

AS  r  (optional, default 2)
  Controls the relative number of lattice vectors in real and reciprocal space.

TOL  r  (optional, default 1e-8)
  Tolerance in the Ewald sums. At times you may need to set this to a small value, like 10^−12 — when the overlap matrix may have a small eigenvalue. Instances of this are when you use the PMT method or when calculating long superlattices.

NKDMX  i  (optional, default 800)
  The maximum number of real-space lattice vectors entering into the Ewald sum, used for memory allocation. Normally you should not need this token. Increase NKDMX if you encounter an error message like this one:
  xlgen: too many vectors, n=…

RPAD  r  (optional, default 0)
  Scale rcutoff by RPAD when lattice vectors are padded in oblong geometries.

See Table of Contents

This category contains parameters defining the one-particle hamiltonian.
Portions of HAM are read by these codes:

NSPIN  i  (ALL; optional, default 1)
  1 for non-spin-polarized calculations.
  2 for spin-polarized calculations.
  NB: for the magnetic parameters below to be active, use NSPIN=2.

REL  i  (ALL; optional, default 1)
  0: nonrelativistic Schrödinger equation.
  1: scalar relativistic approximation to the Dirac equation.
  2: Dirac equation (ASA only).
  11: compute cores with the Dirac equation (lmfa only).

SO  i  (ALL; optional, default 0)
  0: no SO coupling.
  1: add L·S to the hamiltonian. However, only the spin-diagonal part of the density is retained.
  2: add Lz·Sz only to the hamiltonian, so the spin channels remain distinct.
  3: like 2, but L·S−LzSz is also included perturbatively, in the eigenvalues only, and in a manner that preserves the independence of the spin channels. This generates eigenvalues very close to the full L·S case for a given potential, but the eigenfunctions are generated from H+LzSz only. As a result the eigenfunctions (and thus the density) remain spin-diagonal. There is some effect on the density, but the approximation seems to be rather good since the error in the eigenfunctions is of 2nd order in the perturbation.
  11: same as 1, but additionally decompose the SO coupling by site.
  See here for analysis and a description of the different approximations. The GW-based codes at present require the spin channels to be kept separate, and thus work with SO=2,3 only.

NONCOL  l  (ASA; optional, default F)
  F: collinear magnetism.
  T: noncollinear magnetism.

SS  4 r  (ASA; optional, default 0)
  Magnetic spin spiral, direction vector and angle. Example: nc/test/ 1

BFIELD  i  (lm, lmf; optional, default 0)
  0: no external magnetic field applied.
  1: add a site-dependent constant external Zeeman field (requires NONCOL=T). Fields are read from file bfield.ext.
  2: add Bz·Sz only to the hamiltonian.
  Examples: fp/test/test.fp gdn, nc/test/ 5

BXCSCAL  i  (lm, lmgf; optional, default 0)
  This tag provides an alternative means to add an effective external magnetic field in the LDA.
  0: no special scaling of the exchange-correlation field.
  1: scale the magnetic part of the LDA XC field by a site-dependent factor 1 + sbxci as described below.
2: scale the magnetic part of the LDA XC field by a site-dependent factor as described below. This is a special mode used to impose constraining fields on rotations, used, e.g. by the CPA code. Site-dependent scalings sbxci are read from file bxc.ext. XCFUNiALLY2Specifies the local part of the exchange-correlation functional. 0,#2,#3: Use libxc exchange functional #2 and correlation functional #3 1: Ceperly-Alder 2: Barth-Hedin (ASW fit) 3: PW91 (same as PBE) 4: PBE (same as PW91) GGAiALLY0Specifies gradient additions to exchange-correlation functional (not used when XCFUN=0,#2,#3). 0. No GGA (LDA only) 1. Langreth-Mehl 2. PW91 3. PBE 4. PBE with Becke exchange This tutorial uses the PBE functional. To compare the internally coded PBE functional with libxc, try fp/test/test.fp te PWMODEilmf, lmfgwdY0Controls how APWs are added to the LMTO basis. 1s digit: 0. LMTO basis only 1. Mixed LMTO+PW 2. PW basis only Examples: fp/test/test.fp srtio3  and  fp/test/test.fp felz 4 10s digit: 0. PW basis fixed (less accurate, but simpler) 1. PW basis symmetry-consistent, but basis depends on k. Example:  fp/test/test.fp te PWEMINrlmf, lmfgwdY0Include APWs with energy E > PWEMIN (Ry) PWEMAXrlmf, lmfgwdY Include APWs with energy E < PWEMAX (Ry) NPWPADilmf, lmfgwdY-1If >0, overrides default padding of variable basis dimension. Certain arrays have fixed dimension that must be at least as large as the rank of the hamiltonian. The APW basis depends on k if PWMODE>10, so some padding must be added to this fixed dimension to ensure that these arrays can accommodate any k. Normally the code will internally select a sensible default. In the event it is not large enough (the program will stop), you can enlarge the padding with this token. RDSIGilmf, lmfgwd, lm, lmgfY0Controls how the QSGW self-energy Σ0 substitutes for the LDA exchange correlation functional. Note: the GW codes store Σ0 in file sigm.ext.
1s digit:  0 do not read Σ0  1 read file sigm.ext, if it exists, and add it to the LDA potential  2 same as 1 but symmetrize sigm after reading  Add 4 to retain only real part of real-space sigma 10s digit:  0 simple interpolation (not recommended).  1 approximate high energy parts of sigm by diagonal. Optionally add the following (the same functionality using --rsig on the command line): 10000 to indicate the sigma file was stored in the full BZ (no symmetry operations are assumed). 20000 to use the minimum neighbor table (only one translation vector at the surfaces or edges; cannot be used with symmetrization). 40000 to allow mismatch between expected k-points and file values. RSSTOLrALLY5e-6Max tolerance in Bloch sum error for real-space Σ0. Σ0 is read in k-space and is immediately converted to real space by inverse Bloch transform. The real space form is forward Bloch summed and checked against the original k-space Σ0. If the difference exceeds RSSTOL the program will abort. The conversion should be exact to machine precision unless the range of Σ0 is truncated. You can control the range of real-space Σ0 with RSRNGE below. RSRNGErALLY5Maximum range of connecting vectors for real-space Σ0 (units of ALAT). NMTOiASAY0Order of polynomial approximation for NMTO hamiltonian. KMTOrASAY Corresponding NMTO kinetic energies. Read NMTO values, or skip if NMTO=0. EWALDllmYFMake strux by Ewald summation (NMTO only). VMTZrASAY0Muffin-tin zero defining wave functions. QASAiASAY3A parameter specifying the definition of ASA moments Q0,Q1,Q2 0. band code accumulates Q1, Q2 from true energy moments of sphere charges (KKR style).  Sphere code generates density from Q0× + Q2×.  This (Methfessel convention) is approximate but decouples potential parameters from charges. 1. Sphere code generates density from Q0× + Q2×; thus Q0 is the sphere charge. 2. Q1,Q2 accumulated from and , rather than power moments (not applicable to lmgf, lmpg). 3. 1+2 (Standard conventions). 
Add 4 to cause the sphere integrator to construct and by outward radial integration only. PMINr,r,…ALLY0 0 0 …Global minimum in fractional part of the continuous principal quantum number . Enter values for l=0,..lmx. 0: no minimum constraint. # : with #<1, fractional part of . 1: use free-electron value as minimum. Note: lmf always uses a minimum constraint, the free-electron value (or slightly higher if AUTOBAS_GW is set). You can set the floor still higher with PMIN=#. PMAXr,r,…ALLY0 0 0 …Global maximum in fractional part of the continuous principal quantum number . Enter values for l=0,..lmx. 0 : no maximum constraint. #: with #<1, upper bound of fractional P is #. OVEPSrALLY0The overlap is diagonalized and the Hilbert space is contracted, discarding the part with eigenvalues of overlap < OVEPS. Especially useful with the PMT basis, where the combination of smooth Hankel functions and APWs has a tendency to make the basis overcomplete. OVNCUTiALLY0This tag has a similar objective to OVEPS. The overlap is diagonalized and the Hilbert space is contracted, discarding the part belonging to the lowest OVNCUT evals of overlap. Supersedes OVEPS, if present. GMAXrlmf, lmfgwdN G-vector cutoff used to create the mesh for the interstitial density (Ry^(1/2)). A uniform mesh is generated, with spacing between points in the three directions as homogeneous as possible, including all G vectors with |G| < GMAX. This input is required, but you may omit it if you supply information with the FTMESH token. FTMESHi1 [i2 i3]FPN The number of divisions specifying the uniform mesh density along the three lattice vectors. The second and third arguments default to the value of the first one, if they are not specified. This input is used only if the parser failed to read the GMAX token. TOLrFPY1e-6Specifies the precision to which the generalized LMTO envelope functions are expanded in a Fourier expansion of G vectors. FRZWFlFPYFSet to T to freeze the shape of the augmented part of the wave functions.
Normally their shape is updated as the potential changes, but with FRZWF=T the potential used to make augmentation wave functions is frozen at what is read from the restart file (or the free-atom potential if starting from superposed free atoms). This is not normally necessary, and freezing wave functions makes the basis slightly less accurate. However, there are slight inconsistencies when these orbitals are allowed to change shape. Notably the calculated forces do not take this shape change into account, and they will be slightly inconsistent with the total energy. FORCESiFPY0Controls how forces are to be calculated, and how the second-order corrections are to be evaluated. Through the variational principle, the total energy is correct to second order in deviations from self-consistency, but forces are correct only to first order. To obtain forces to second order, it is necessary to know how the density would change with a (virtual) displacement of the core+nucleus, which requires a linear response treatment. lmf estimates this change using the following ansatz: 1. the free-atom density is subtracted from the total density for nuclei centered at the original position and added back again at the (virtually) displaced position. The core+nucleus is shifted and screened assuming a Lindhard dielectric response. You must also specify ELIND, below. ELINDrlmfY-1A parameter in the Lindhard response function (the Fermi level for a free-electron gas relative to the bottom of the band). You can specify this energy directly by using a positive number for the parameter. If you instead use a negative number, the program chooses a default value computed from the total number of valence electrons assuming a free-electron gas, and scales that default by the absolute value of the number you specify. If you have a simple sp-bonded system, the default value is a good choice. If you have d or f electrons, it tends to overestimate the response. Use something smaller, e.g. ELIND=-0.7.
ELIND is used in three contexts: (1) in the force correction term; see FORCES= above. (2) to estimate a self-consistent density from the input and output densities after a band pass. (3) to estimate a reasonable smooth density from a starting density after atoms are moved in a relaxation step. SIGP[…] lmf, lmfgwdY Parameters used to interpolate the self-energy Σ. Used in conjunction with the GW package. See gw for description. Default: not used. SIGP_MODEilmf, lmfgwdY4Specifies the linear function used for matrix elements of Σ at high-lying energies. High-lying states should be far enough away from the Fermi level that their effect should be small, and the result should depend very little on the choice of the constraint. By approximating Σ for these states, one ensures that the LDA and quasiparticle eigenvectors for those states are the same. 0. constrain Σnn to be greater than A + B×E. 1. constrain Σnn to be equal to A + B×E. 2. constrain Σnn to be defined in the interval [A,B]. 3. constrain as in SIGP_MODE=1. The difference between modes 1 and 3 is merely informational. 4. constrain Σnn to be a constant. Its value is calculated by the GW package and read from sigm.ext. This mode requires no information from the user. It is the recommended mode, available in version 7.7 or later. SIGP_NMAXilmf, lmfgwdY0Integer specifying which of the highest self-energy matrix elements are to be approximated. States higher than SIGP_NMAX have the off-diagonal part of sigma stripped; unlike the low-lying states, the diagonal part of Σ is constrained (see SIGP_MODE above). If SIGP_NMAX ≤ 0, it is not used; see SIGP_EMAX below. SIGP_EMAXrlmf, lmfgwdY2.0Alternative way to specify approximation of high-lying elements of the self-energy matrix. It is only used if SIGP_NMAX ≤ 0, in which case SIGP_EMAX is an energy cutoff: states above SIGP_EMAX are approximated.
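As a concrete illustration of the SIGP switches, a hypothetical ctrl-file fragment selecting the recommended constant-Σ constraint together with an energy cutoff for the approximated states (values illustrative):

```
HAM     SIGP[MODE=4 EMAX=2.0]
```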
SIGP_NMINilmf, lmfgwdY0Integer specifying how many of the lowest-lying states are approximated by discarding the off-diagonal parts in the basis of LDA functions. If SIGP_NMIN is zero, no low-lying states are approximated. SIGP_EMINrlmf, lmfgwdY0.0Alternative way to specify approximations of low-lying elements of the self-energy matrix. It is only used if SIGP_NMIN<0, in which case SIGP_EMIN is an energy cutoff: states below SIGP_EMIN are approximated. SIGP_Arlmf, lmfgwdY0.02Coefficient in the linear fit (see SIGP_MODE=0,…,3). If SIGP_MODE=4, SIGP_A is not used. In the linear constraints (SIGP_MODE=0,1) it is the constant coefficient; for SIGP_MODE=2, it is the lower bound. Note that its default value is a good estimate for Si. SIGP_Brlmf, lmfgwdY0.06Coefficient in the linear fit (see SIGP_MODE=0,…,3). If SIGP_MODE=4, SIGP_B is not used. In the linear constraints (SIGP_MODE=0,1) it is the linear coefficient; for SIGP_MODE=2, it is the upper bound. Note that its default value is a good estimate for Si. SIGP_EFITrlmf, lmfgwdY0Lower bound for the least-squares fit required for a reasonable evaluation of the above coefficients SIGP_A and SIGP_B when SIGP_MODE=0,…,3. For SIGP_MODE<3, lmf will make a least-squares fit to Σnn for states higher than SIGP_EFIT. For SIGP_MODE=3, lmf will make a least-squares fit for states between SIGP_EFIT and SIGP_EMAX, which must be used if one is going to evaluate Σ for states above some SIGP_EMAX. For the case SIGP_MODE<3 one must invoke lmf on the mesh of k-points for which the self-energy is known (there appear to be fewer problems with interpolation on that mesh). lmf accumulates the minimum, maximum, and least-squares fit of Σnn for all the states above the cutoff. Look in the output for a line beginning with “hambls:”. Also, setting the verbosity above 45, lmf will print out the calculated Σnn for each of these states, together with the constrained value.
lmf will write to file sigii.ext the data used to make the fit, and summarize the fit at the end of the file. If SIGP_MODE=4, SIGP_EFIT is not needed. SXiASAY0Calculate screened exchange potential 0. Do nothing 1. Calculate SX self-energy Sigma 2. Calculate SX Sigma with onsite term W. SXOPTScASAY-Options for SX, e.g. SXOPTS=rssig;nit=3 AUTOBAS[…] lmfa, lmf, lmfgwdY Parameters associated with the automatic determination of the basis set. These switches greatly simplify the creation of an input file for lmf. Note: Programs lmfa and lmf both use tokens in the AUTOBAS tag but they mean different things, as described below. This is because lmfa generates the parameters while lmf uses them. Default: not used. AUTOBAS_GWilmfaY0Set to 1 to tailor the autogenerated basis set file basp0.ext to a somewhat larger basis, better suited for GW. AUTOBAS_GWilmfY0Set to 1 to float log derivatives a bit more conservatively — better suited to GW calculations. AUTOBAS_LMTOilmfaY0lmfa autogenerates a trial basis set, saving the result into basp0.ext. LMTO is used in an algorithm to determine how large a basis it should construct: the number of orbitals increases as you increase LMTO. This algorithm also depends on which states in the free atom carry charge. Let lq be the highest l which carries charge in the free atom. There are the following choices for LMTO: 0. standard minimal basis; same as LMTO=3. 1. The hyperminimal basis, which consists of envelope functions corresponding to those l which carry charge in the free atom, e.g. Ga sp and Mo sd (this basis is only sensible when used in conjunction with APWs). 2. All l up to lq+1 if lq<2; otherwise all l up to lq. 3. All l up to min(lq+1, 3). For elements lighter than Kr, restrict l≤2. For elements heavier than Kr, include l up to 3. 4. (Standard basis) Same as LMTO=3, but restrict l≤2 for elements lighter than Ar. 5. (Large basis) All l up to max(lq+1,3) except for H, He, Li, B (use l=spd).
Use the MTO token (see below) in combination with this one. MTO controls whether the LMTO basis is 1-κ or 2-κ, meaning whether 1 or 2 envelope functions are allowed per l channel. AUTOBAS_MTOilmfaY0Autogenerate parameters that control which LMTO basis functions are to be included, and their shape. Tokens RSMH,EH (and possibly RSMH2,EH2) determine the shape of the MTO basis. lmfa will determine a reasonable set of RSMH,EH automatically (and RSMH2,EH2 for a 2-κ basis), fitting to radial wave functions of the free atom. Note: lmfa can generate parameters and write them to file basp0.ext. lmf can read parameters from basp.ext. You must manually create basp.ext, e.g. by copying basp0.ext into basp.ext. You can tailor basp.ext with a text editor. The choices for MTO are: 0: do not autogenerate basis parameters. 1 or 3 : 1-κ parameters with Z-dependent LMX. 2 or 4: 2-κ parameters with Z-dependent LMX. For lmfa 1 and 3 are equivalent, as are 2 and 4. AUTOBAS_MTOilmf, lmfgwdY0Read parameters RSMH,EH,RSMH2,EH2 that control which LMTO basis functions enter the basis. Once initial values have been generated you can tune these parameters automatically for the solid, using lmf with the --optbas switch; see here (or for a simple input file guide, here) and here. The --optbas step is not essential, especially for large basis sets, but it is a way to improve on the basis without increasing the size. The choices for MTO are: 0 Parameters not read from basp.ext; they are specified in the input file ctrl.ext. 1 or 3: 1-κ parameters may be read from the basis file basp.ext, if they exist. 2 or 4: 2-κ parameters may be read from the basis file basp.ext, if they exist. 1 or 2: Parameters read from ctrl.ext take precedence over basp.ext. 3 or 4: Parameters read from basp.ext take precedence over those read from ctrl.ext.
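Combining the switches above, a hypothetical basis-autogeneration setup in the ctrl file (the same AUTOBAS tokens are read by lmfa, which writes basp0.ext, and by lmf, which reads basp.ext; the particular values here are one illustrative choice, not a recommendation):

```
HAM     AUTOBAS[LMTO=5 MTO=4 PNU=1 LOC=1]
```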
AUTOBAS_PNUilmfaY0Autoset boundary condition for the augmentation part of the basis, through specification of the continuous principal quantum number . 0 do not make P 1 Find P for l < SPEC_LMXA from the free-atom wave function; save in basp0.ext. AUTOBAS_PNUilmf, lmfgwdY0Autoset boundary condition for the augmentation part of the basis, through specification of the continuous principal quantum number . 0 do not attempt to read P from basp.ext. 1 Read P from basp.ext, for species for which P is supplied. AUTOBAS_LOCilmfa, lmf, lmfgwdY0Autoset local orbital parameters PZ, which determine which deep or high-lying states are to be included as local orbitals (P.Q.N. differs by ±1). For deep states, local orbitals can be either conventional or extended (extended LOs spill into the interstitial). PZ can be read either from the ctrl file or the basp file. Used by lmfa to control whether parameters PZ are to be sought: 0 (or 3): do not autogenerate PZ. Option 3 writes the ctrl file’s PZ to the basp file. 1 (or 5): autogenerate extended (or conventional) PZ. Nonzero values from the ctrl file take precedence over the basp file. 2 (or 6): same as 1 or 5, but contents of the ctrl file are ignored. Default: 0 Used by lmf and lmfgwd to control how PZ is read: 0 or 3: do not read parameters PZ from the basp file. 1-3: read parameters. 1: Nonzero values from the ctrl file take precedence over the basp file; otherwise basp takes precedence. Adding 4 to this parameter has no effect. Default: 1 AUTOBAS_RSMMXrlmfaY2/3Sets an upper bound to the LMTO smoothing radius RSMH, when autogenerating a basis set. Value is a multiple of the MT radius. AUTOBAS_EHMXrlmfaYSets an upper bound to the LMTO smoothed Hankel energy EH, when autogenerating a basis set. Default depends on whether AUTOBAS_GW is set. AUTOBAS_ELOCrlmfaY-2 RyThe first of two criteria to decide which orbitals should be included in the valence as local orbitals. If the energy of the free-atom wave function exceeds (is more shallow than) ELOC, the orbital is included as a local orbital.
AUTOBAS_QLOCrlmfaY0.005The second of two criteria to decide which orbitals should be included in the valence as local orbitals. If the fraction of the free atom wave function’s charge outside the augmentation radius exceeds QLOC, the orbital is included as a local orbital. AUTOBAS_PFLOATi1 i2lmf, lmfgwdy1 1Governs how the Pnu are set and floated in the course of a self-consistency cycle. The 1st argument controls default starting values of P and lower bounds to P when it is floated. 0: Use pre-2002 (i.e. version 6) lower bound for P (lmf only). 1: Use defaults and float lower bound designed for LDA. 2: Use defaults and float lower bound designed for GW. The 2nd argument controls how the band center of gravity (CG) is determined — used when floating P. 0: band CG is found by a traditional method. 1: band CG is found from the true energy moment of the density. See Table of Contents Category GF is intended for parameters specific to the Green’s function code lmgf. and is read by that code. See the Green’s function web page and also the Introductory Tutorial for lmgf. MODEiASAY0Tells lmgf what function to perform. See also the Green’s function web page 0: do nothing. 1: self-consistent cycle. 10: Transverse magnetic exchange interactions J(q). 11: Read J(q) from disk and analyze results. 14: Longitudinal exchange interactions. 20: Transverse χ+− from ASA Green’s function. 21: Read χ from disk and analyze results. 20: Transverse χ++, χ−− from ASA Green’s function Caution: Modes 14 and higher have not been maintained. GFOPTScASAY ASCII string with switches governing execution of lmgf or lmpg. Use  ’;’  to separate the switches, e.g. GFOPTS=p3;padtol=1e-7 . Switches in GFOPTS are documented on the Green’s function web page. DLMiALLY0Disordered local moments for CPA. Governs self-consistency for both chemical CPA and magnetic CPA. 12 : normal CPA/DLM calculation: charge and coherent potential Ω both iterated to self-consistency. 
32 : Ω alone is iterated to self-consistency. BXY1ALLYF(DLM) Setting this switch to T generates a site-dependent constraining field to properly align magnetic moments. In this context the constraining field is applied by scaling the LDA exchange-correlation field. The scaling factor is [1+bxc(ib)²]^(1/2). A table of bxc is kept for each site in the first column of file shfac.ext. TEMPrALLY0(DLM) spin temperature. See Table of Contents Category GW holds parameters specific to GW calculations, particularly for the GW driver lmfgwd. Most of these tokens supply values for tags in the GWinput template when lmfgwd generates it (--jobgw -1). CODEilmfgwdY2This token tells what GW code you are creating input files for. lmfgwd serves as a driver to several GW codes. 0. First GW version v033a5 (code still works but it is no longer maintained). 2. Current version of GW codes. 1. Driver for the Julich spex code (not fully debugged or maintained). NKABC1 to 3 ilmfgwdY k-mesh for GW. This token serves the same function for GW as BZ_NKABC does for lmf, and the input format is the same. When generating a GWinput template, lmfgwd passes the contents of NKABC to the n1n2n3 tag. Note: Shell scripts lmgw and lmgwsc used for the GW codes may also use this token. When invoked with switches --getsigp or --getnk, they will modify the n1n2n3 tag in GWinput. The data they use is taken from GW_NKABC. MKSIGilmfgwdY3(self-consistent calculations only). Controls the form of Σ0, the QSGW approximation to the dynamical self-energy Σ(E), where Σnn′(E) refers to a matrix element of Σ between eigenstates n and n′, at energy E relative to EF. When generating a GWinput template, lmfgwd passes MKSIG to the iSigMode tag. Values of this tag have the following meanings. 0. do not make Σ0 1. Σ0nn′ = Σnn′(EF) if n≠n′, and Σnn(En) if n=n′: mode B, Eq. (11) in Phys. Rev. B 76, 165106 (2007) 3. Σ0nn′ = ½[Σnn′(En) + Σnn′(En′)]: mode A, Eq. (10) in Phys. Rev. B 76, 165106 (2007) 5.
“Eigenvalue-only” self-consistency: Σ0nn′ = δnn′Σnn(En) GCUTBrlmfgwdY2.7G-vector cutoff for basis envelope functions as used in the GW package (Ry^(1/2)). When generating a GWinput template, lmfgwd passes GCUTB to the QpGcut_psi tag in GWinput. GCUTXrlmfgwdY2.2G-vector cutoff for the interstitial part of two-particle objects such as the screened coulomb interaction (Ry^(1/2)). When generating a GWinput template, lmfgwd passes GCUTX to the QpGcut_cou tag. ECUTSrlmfgwdY2.5 Ry(for self-consistent calculations only). Maximum energy for which to calculate the Σ0 described in MKSIG above. This energy should be larger than HAM_SIGP_EMAX, which is used to interpolate Σ. When generating a GWinput template, lmfgwd passes ECUTS+1/2 to the emax_sigm tag in the GWinput file. ECUTPBrlmfgwdY-6.5 RyFor core states with energy >ECUTPB, include this state in the ‘occ’ column of the core product basis setup. NIMEilmfgwdY6Number of frequencies on the imaginary integration axis when making the correlation part of Σ. When generating a GWinput template, lmfgwd passes NIME to the niw tag. DELRErlmfgwdY0.01, 0.1Frequency mesh parameters DW and OMG defining the real-axis mesh in the calculation of Im χ0. The ith mesh point is given by: ωi = DW×(i−1) + [DW×(i−1)]²/OMG/2 Points are approximately uniformly spaced, separated by DW, up to frequency OMG, around which point the spacing begins to increase linearly with frequency. When generating a GWinput template, lmfgwd passes DELRE(1) to the dw tag and DELRE(2) to the omg_c tag. Note the similarity to OPTICS_DW used by the optics part of lmf and lm. DELTArlmfgwdY-1e-4δ-function broadening for calculating χ0, in atomic units. Tetrahedron integration is used if DELTA<0. When generating a GWinput template, lmfgwd passes DELTA to the delta tag. DELTAWrlmfgwdY0.02Width for finite difference in the energy differentiation of Σ(ω) for the Z factor. GSMEARrlmfgwdY0.003Broadening width for smearing the pole in the Green’s function when calculating Σ.
This parameter is sometimes important in metals, e.g. Fe. When generating a GWinput template, lmfgwd passes GSMEAR to the esmr tag. The tag is described in this manual. PBTOLrlmfgwdY0.001Overlap criterion for product basis functions inside augmentation spheres. The overlap matrix of the basis of product functions is generated and diagonalized for each l. Functions with overlaps less than PBTOL are removed from the product basis. When generating a GWinput template, lmfgwd passes PBTOL to the second line after the start of the PRODUCT_BASIS section. USEBSEWilmfgwdY0If 1, include ladder diagram contributions to W. QOFFPilmfgwdY1Not documented. See Table of Contents Category DMFT holds parameters for the interface to DMFT, particularly for the DMFT driver lmfdmft. Unless otherwise specified, the only code reading tags from the DMFT category is lmfdmft. NKABC1 to 3 i Y Defines the k-mesh on each of 3 lattice vectors for the DMFT driver. If not present, substitute BZ_NKABC. NLOHI2 i Y-first and last eigenstates to include in projector, relative to EF WLOHI2 r Y-(used only if NLOHI not found) lower, upper bound to frequency to include in projector, relative to EF PROJi Y-DMFT projector type KNORMi Y0How local projectors are normalized: 0: k-independent normalization 1: k-dependent normalization BROADr N0.0025(for ω on real axis only) additional broadening of sigma, in eV BETARr --Inverse temperature, in Ry−1 BETAKr --(read only if the preceding tag is missing) Inverse temperature, in K−1 BETAr --(read only if the preceding two tags are missing) Inverse temperature, in eV−1 NOMEGAi Y2000Number of points on the frequency mesh The following tokens are read for each inequivalent correlated subblock. Data sandwiched between successive occurrences of token BLOCK within DMFT apply to one DMFT correlated block. Li N l quantum number defining this correlated subblock QSPLITi Y2for compatibility with Haule.
For now, QSPLIT should always be 2. SITESi, i, … N List of sites with this correlated block. Note: you can use a negative number for a site index. The minus sign indicates that spins 1 and 2 are to be flipped. In the nonmagnetic case this should have no effect, but for a magnetic site, sites with negative indices are antiferromagnetic (same moment amplitude) to their counterparts with positive index. SIDXDi, i, … N (diagonal Σ only)   list of (2l+1) components of diagonal ΣDMFT(ω) to calculate (see tutorial). Equal values imply equivalent elements, and 0 value implies the matrix element is not calculated. List must contain contiguous numbers. SIDXMi, i, … - (full Σ, read only if SIDXD is missing)   (2l+1)² components of ΣDMFT(ω) to calculate. Read in (11, 12, 13, … 21, 22, 23, …) order. SIDXA  - (full Σ, read only if SIDXM is missing)   populate all (2l+1)² elements of the Σ matrix. UMODEi Y10specifies the approximation for U. The 1s, 10s, and 100s digits are independent numbers. For now, only the 10s digit is implemented. 1s digit:  0: u(1) = Hubbard U, u(2) = Hubbard J  1: u(1) = F0, u(2) = F2, u(3) = F4  2: u(1) = screening length, Yukawa potential 10s digit:  0: density-density  1: full matrix U 100s digit:  0: U is static  1: U is dynamical Alternatively, specify by strings separated by ~, one string for each of the 1s digit (uj, slater, yukawa), 10s digit (density, full), and 100s digit (static, dynamic). Thus UMODE=full~static is equivalent to UMODE=10. See Table of Contents This category is optional, and merely prints to the standard output whatever text is in the category. For example: HEADER This line and the following one are printed to standard output whenever a program is run. HEADER [ In this form only two lines reside within the category delimiters,] and only two lines are printed. See Table of Contents This optional category controls what kind of information, and how much, is written to the standard output file.
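The digit composition of the DMFT UMODE token described above can be sketched in Python (a hypothetical helper; the option strings are those listed in the UMODE entry):

```python
# Map the UMODE option strings to their digit values, as listed above.
ONES = {'uj': 0, 'slater': 1, 'yukawa': 2}      # 1s digit: form of U
TENS = {'density': 0, 'full': 1}                # 10s digit: density-density vs full matrix U
HUNDREDS = {'static': 0, 'dynamic': 1}          # 100s digit: static vs dynamical U

def umode_from_string(spec):
    """Convert a '~'-separated UMODE string to its numeric equivalent."""
    val = 0
    for tok in spec.split('~'):
        if tok in ONES:
            val += ONES[tok]
        elif tok in TENS:
            val += 10 * TENS[tok]
        elif tok in HUNDREDS:
            val += 100 * HUNDREDS[tok]
        else:
            raise ValueError('unknown UMODE string: ' + tok)
    return val
```

For example, `umode_from_string('full~static')` returns 10, matching the equivalence stated above.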
SHOW1allYFEcho lines as they are read from the input file and parsed by the preprocessor. Command-line argument --show provides the same functionality. HELP1allYFShow what input would be sought, without attempting to read data. Command-line argument --input provides the same functionality. VERBOS1 to 3allY30Sets the verbosity. 20 is terse, 30 slightly terse, 40 slightly verbose, 50 verbose, and so on. If more than one number is given, later numbers control verbosity in subsections of the code, notably the parts dealing with augmentation spheres. IACTIV1allYFTurn on interactive mode. Programs will prompt you with queries, in various contexts. TIM1 or 2allY0, 0Prints out CPU usage of blocks of code in a tree format. First value sets tree depth. Second value, if present, prints timings on the fly. May also be controlled from the command line: --time=#1[,#2] See Table of Contents The ITER category contains parameters that control the requirements to reach self-consistency. It applies to all programs that iterate to self-consistency: lm, lmf, lmmc, lmgf, lmpg, tbe, and lmfa. A detailed discussion can be found at the end of this document. NITiallY1Maximum number of iterations in the self-consistency cycle. MIXcallY A string of mixing rules for mixing input and output densities in the self-consistency cycle. The syntax is given below. See here for a detailed description of the mixing. CONVrallY1e-5Maximum energy change from the prior iteration for self-consistency to be reached. See annotated lmf output. CONVCrallY3e-5Maximum in the RMS difference between the output and input densities nout−nin. See below. UMIXrallY1Mixing parameter for the density matrix; used with LDA+U TOLUrallY0Tolerance for the density matrix; used with LDA+U NITUiallY0Maximum number of LDA+U iterations of the density matrix AMIXcASAY Mixing rules when extra degrees of freedom, e.g. Euler angles, are mixed independently. Uses the same syntax as MIX. NRMIXi1 i2ASA, lmfaY80, 2Used when self-consistency is needed inside an augmentation sphere.
This occurs when the density is determined from the moments Q0,Q1,Q2 in the ASA; or in the free-atom code, just Q0. i1: max number of iterations i2: number of prior iterations for Anderson mixing of the sphere density Note: You will probably never need to use this token. See Table of Contents Optics functions are available with the ASA extension package OPTICS. The category is read by lm and lmf. MODEiOPTICSY00: make no optics calculations 1: generate linear ε 20: generate second harmonic ε  Example: optics/test/test.optics sic The following cases (MODE<0) generate joint or single density-of-states. Note: MODE<0 works only with LTET=3 described below. −1: generate joint density-of-states  (ASA) optics/test/test.optics --all 4  (FP) fp/test/test.fp zbgan −2: generate joint density-of-states, spin 2  Example: optics/test/test.optics fe 6 −3: generate up-down joint density-of-states −4: generate down-up joint density-of-states −5: generate spin-up single density-of-states  Example: optics/test/test.optics --all 7 −6: generate spin-dn single density-of-states LTETiOPTICSY00: Integration by Methfessel-Paxton sampling 1: standard tetrahedron integration 3: enhanced tetrahedron integration Note: In the metallic case, states near the Fermi level must be treated with partial occupancy. LTET=3 is the only scheme that handles this properly. It was adapted from the GW package and has extensions, e.g. the ability to handle non-vertical transitions. WINDOWr1 r2OPTICSN0 1Energy (frequency) window over which to calculate Im[ε(ω)]. Im ε is calculated on a mesh of points ωi. The mesh spacing is specified by NPTS or DW, below. NPTSiOPTICSN501Number of mesh points in the energy (frequency) window. Together with WINDOW, NPTS specifies the frequency mesh as: ωi = WINDOW(1) + DW×(i−1), where DW = (WINDOW(2)−WINDOW(1))/(NPTS−1) Note: you may alternatively specify DW below. DWr1 [r2]OPTICSY Frequency mesh spacing DW[,OMG]. You can supply either one argument, or two.
If one argument (DW) is supplied, the mesh will consist of evenly spaced points separated by DW. If a second argument (OMG) is supplied, points are spaced quadratically as: ωi = WINDOW(1) + DW×(i−1) + [DW×(i−1)]²/OMG/2 Spacing is approximately uniform up to frequency OMG, beyond which it increases linearly. Note: The quadratic spacing can be used only with LTET=3. FILBNDi1 [i2]OPTICSY0 no. electronsi1[,i2] occupied energy bands from which to calculate ε using first-order perturbation theory, without local fields. i1 = lowest occupied band i2 = highest occupied band (defaults to no. electrons) EMPBNDi1 [i2]OPTICSY0 no. bandsi1[,i2] empty energy bands from which to calculate ε using first-order perturbation theory, without local fields. i1 = lowest unoccupied band i2 = highest unoccupied band (defaults to no. bands) PARTiOPTICSY0Resolve ε or joint DOS into band-to-band contributions, or by k. Result is output into file popt.ext. 0. No decomposition 1. Resolve ε or DOS into individual (occ,unocc) contributions  Example: optics/test/test.optics ogan 5 2. Resolve ε or DOS by k  Example: optics/test/test.optics --all 6 3. Both 1 and 2 Add 10 to write popt as a binary file. CHI2[..] lmY Tag containing parameters for second harmonic generation. Not calculated unless the tag is parsed.  Example: optics/test/test.optics sic CHI2_NCHI2ilmN0Number of direction vectors for which to calculate χ2, i.e. the nonlinear susceptibility tensor. CHI2_AXESi1, i2, i3lmN Direction vectors for each of the NCHI2 sets ESCISSrOPTICSY0Scissors operator (constant energy added to unoccupied levels, in Ry) ECUTrOPTICSY0.2Energy safety margin for determining the (occ,unocc) window. lmf will attempt to reduce the number of (occ,unocc) pairs by restricting, for each k, transitions that contribute to the response, i.e. to those inside the optics WINDOW. The window is padded by ECUT to include states outside, but near the edge of, the window. States outside the window may nevertheless make a contribution, e.g.
because they can be part of a tetrahedron that does contribute. If you do not want lmf to restrict the range, use ECUT<0. NMPiOPTICSYBZ_NIf present, supersedes BZ_N for the optics energy integration WrOPTICSYBZ_WIf present, supersedes BZ_W for the energy integration entering into the dielectric function MEFACiOPTICSY0Contribution from the nonlocal self-energy to the velocity operator. 1. include 2. approximate the correction to the velocity operator using the ratio of QP to LDA eigenvalues. (Approximation is exact if LDA and QP eigenvalues are the same). FFMTiOPTICSY0Governs formatting of the optics file 0. fortran F format 1. fortran E format IQi1, i2, i3OPTICSY0q vector for JDOS(q), in multiples of qlat/BZ_NKABC ESMRrOPTICSY0.05Energy smearing width for determining the (occ,unocc) window. States are excluded for which occ<EF-ESMR or unocc>EF+ESMR. ALLTRANSlOPTICSYFDo not limit allowed transitions to occ<EF-ESMR and unocc>EF+ESMR FERMIrOPTICSYNULLIf not NULL, supersede the calculated Fermi level with the given value when calculating the dielectric function. IMREFr1 r2OPTICSYNULLIf not NULL, quasi-Fermi levels for occ and unocc states (nonequilibrium optics) KTrOPTICSY-Temperature for Fermi functions (Ry). Used when NMP<0. See Table of Contents Portions of OPTIONS are read by these codes: HF1lm, lmfYFIf T, use the Harris-Foulkes functional only; do not evaluate the output density. SHARM1ASA, lmf, lmfgwdYFIf T, use true spherical harmonics, rather than real harmonics. FRZlallYF(ASA) If T, freezes core wave functions. (FP) If T, freezes the potential used to make augmented partial waves, so that the basis set does not change with the potential. SAVVEC1lmYFSave eigenvectors on disk. (This may be enabled automatically in some circumstances) Q=strncallY  Q=SHOW,  Q=ATOM,  Q=HAM,  Q=POT,  Q=BAND,  Q=DOS,  Q=RHO  make the program stop at selected points without completing a full iteration. SCRiASAY0Is connected with the generation or use of the q->0 ASA dielectric response function.
It is useful in cases when there is difficulty in making the density self-consistent. See here for documentation. 0. Do not screen qout−qin. 1. Make the ASA response function P0. 2. Use P0 to screen qout−qin and the change in ves. 3. 1+2 (lmgf only). 4. Screen qout−qin from a model P0. 5. Illegal input. 6. Use P0 to screen the change in ves only. P0 and U should be updated every iteration, but this is expensive and generally not worth the cost. However, you can: Add 10k to recompute the intra-site contribution U every kth iteration, 0<k≤9. Add 100k to recompute P0 every kth iteration (lmgf only).  Examples: testing/test.scr and gf/test/ mnpt 6 ASA[…]rASAN Parameters associated with ASA-specific input. ASA_ADNF1ASAYFEnables automatic downfolding of orbitals. ASA_NSPH1ASAY0Set to 1 to generate l>0 contributions (from neighboring sites) to the l=0 electrostatic potential ASA_TWOCiASAY0Set to 1 to use the two-center approximation to the ASA hamiltonian ASA_GAMMAiASAY0Set to 1 to rotate to the (orthogonal) gamma representation. This should have no effect on the eigenvalues for the usual three-center hamiltonian, but converts the two-center hamiltonian from first order to second order. Set to 2 to rotate to the spin-averaged gamma representation. The lm code does not allow downfolding with GAMMA≠0. ASA_CCORllmYTIf F, suppresses the combined correction. By default it is enabled. Note: if any orbitals are downfolded, CCOR is automatically enabled. ASA_NEWREPilmYFSet to 1 to rotate structure constants to a user-specified representation.
It requires special compilation to be effective. ASA_NOHYB1lmYFSet to 1 to turn off hybridization ASA_MTCOR1lmYFSet to T to turn on the Ewald MT correction ASA_QMTrNCY0Override standard background charge for the Ewald MT correction Input only meaningful if MTCOR=T RMINESrlmchkN1Minimum augmentation radius when finding new empty sites (--getwsr) RMAXESrlmchkN2Maximum augmentation radius when finding new empty sites (--getwsr) NESABCi,i,ilmchkN100Number of mesh divisions when searching for empty spheres (--getwsr) NEPHDilmf,lmfgwdY0Controls the number of energy points for computing the energy derivative of the partial wave by finite difference. 0: lmf uses 2 points, lmfgwd uses 4 points (for historical compatibility) 2: use 2 points for both lmf and lmfgwd 4: use 4 points for both lmf and lmfgwd See Table of Contents Category PGF concerns calculations with the layer Green’s function program lmpg. It is read by lmpg and lmstr. MODEiASAY 0: do nothing. 1: diagonal layer GF.  Examples: pgf/test/test.pgf -all 5 and pgf/test/test.pgf -all 6 2: left- and right-bulk GF. 3: find k(E) for the left bulk.  Example: pgf/test/test.pgf 2 4: find k(E) for the right bulk. 5: Calculate ballistic current.  Example: pgf/test/test.pgf femgo SPARSEiASAY00: Calculate G layer by layer using Dyson’s equation  Example: pgf/test/test.pgf -all 5 1: Calculate G using LU decomposition  Example: pgf/test/test.pgf -all 6 PLATLrASAN The third lattice vector of the left bulk region PLATRrASAN The third lattice vector of the right bulk region GFOPTScASAY ASCII string with switches governing execution of lmgf or lmpg. Use  ‘;’ to separate the switches.
Available switches: p1 First order of potential function p3 Third order of potential function pz Exact potential function (some problems; not recommended) Use only one of the above; if none are used, the code makes second order potential functions idos integrated DOS (by principal layer in the lmpg case) noidos suppress calculation of integrated DOS pdos accumulate partial DOS emom accumulate output moments; use noemom to suppress noemom suppresses accumulation of output moments sdmat make site density-matrix dmat make density-matrix frzvc do not update potential shift needed to obtain charge neutrality padtol Tolerance in Pade correction to charge. If the tolerance is exceeded, lmgf will repeat the band pass with an updated Fermi level omgtol (CPA) tolerance criterion for convergence in the coherent potential omgmix (CPA) linear mixing parameter for iterating convergence in the coherent potential nitmax (CPA) maximum number of iterations when iterating for the coherent potential lotf (CPA) dz (CPA) See Table of Contents Category SITE holds site information. As in the SPEC category, tokens must be read for each site entry; a similar restriction applies to the order of tokens. Token ATOM= must be the first token for each site, and all tokens defining parameters for that site must occur before a subsequent ATOM=. FILEcallY Provides a mechanism to read site data from a separate file. File subs/iosite.f documents the syntax of the site file structure. The recommended (standard) format has the following syntax: The first line should contain a ‘%’ in the first column, and a ‘version’ token vn=#. Structural data (see category STRUC documentation) may also be included in this line. Each subsequent line supplies input for one site. In the simplest format, a line would have the following: spid x y z where spid is the species identifier (same information would otherwise be specified by token ATOM= below) and x y z are the site positions.
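A minimal reader for the simple site-file format sketched above could look like this. This is an illustration only: the real reader is subs/iosite.f, which handles many more fields, and the function name is mine:

```python
def read_site_file(text):
    """Parse the simple 'spid x y z' site-file format (illustrative sketch).

    First line: a '%' in column 1 with a version token vn=# (and possibly
    structural data); each following line: species id and site position.
    """
    lines = [l for l in text.splitlines() if l.strip()]
    header = lines[0]
    assert header.lstrip().startswith('%'), "first line must start with %"
    sites = []
    for line in lines[1:]:
        spid, x, y, z = line.split()[:4]
        sites.append((spid, float(x), float(y), float(z)))
    return sites
```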
Examples: fp/test/test.fp er and fp/test/test.fp tio2 Bug: when you read site data from an alternate file, the reader doesn’t compute the reference energy. Kotani format (documented here but no longer maintained). In this alternative format the first four lines always specify data read in the STRUC category; see FILE= in STRUC. Then follow lines, one line for each site: ib iclass spid x y z The first number is merely a basis index and should increment 1,2,3,4,… in successive lines. The second number, a class index, is ignored by these programs. The remaining columns are the species identifier and the site positions. If SITE_FILE is missing, the following are read from the ctrl file: ATOMcallN Identifies the species (by label) to which this atom belongs. It is a fatal error for the species not to have been defined. ATOM_POSr1 r2 r3allN The basis vector (3 elements), in dimensionless Cartesian coordinates. As with the primitive lattice translation vectors, the true vectors (in atomic units) are scaled from these by ALAT in category STRUC. NB: XPOS and POS are alternative forms of input. One or the other is required. ATOM_XPOSr1 r2 r3allN Atom coordinates, as (fractional) multiples of the lattice vectors. ATOM_DPOSr1 r2 r3allY0 0 0Shift in atom coordinates to POS ATOM_RELAXi1 i2 i3allY1 1 1Relax site positions (lattice dynamics or molecular statics) or Euler angles (spin dynamics, ASA). The three numbers correspond to the x, y, z Cartesian components. 0 constrains a component not to move; 1 allows it to move. ATOM_RMAXSrFPY Site-dependent radial cutoff for structure constants, in a.u. ATOM_ROTcASAY Rotation of the spin quantization axis at this site ATOM_PLilmpgY0(lmpg) Assign principal layer number to this site See Table of Contents Category SPEC contains species-specific information. Because data must be read for each species, tokens are repeated (once for each species). For this reason, there is some restriction as to the order of tokens.
Data for a specific species (Z=, R=, R/W=, LMX=, IDXDN= and the like described below) begins with a token ATOM=;  input of tokens specific to that species must precede the next occurrence of ATOM=. The following tokens apply to the automatic sphere resizer: SCLWSRrALLY0SCLWSR>0 turns on the automatic sphere resizer. It defaults to 0, which turns off the resizer. The 10’s digit tells the resizer how to deal with resizing empty spheres; see this page. OMAX1r1 r2 r3ALLY0.16, 0.18, 0.2Constrains maximum allowed values of sphere overlaps. This overlap is defined as (r1+r2−d)/d, where r1 and r2 are the two sphere radii and d is the bond length. See this page. You may input up to three numbers, which correspond to atom-atom, atom-empty-sphere, and empty-sphere-empty-sphere overlaps respectively. OMAX2r1 r2 r3ALLY0.4, 0.45, 0.5Constrains maximum allowed values of sphere overlaps defined as (r1+r2−d)/min(r1,r2); see this page. Both constraints are applied. WSRMAXrALLY0Imposes an upper limit on any one sphere radius The following tokens are input for each species. Data sandwiched between successive occurrences of ATOM apply to one species. ATOMcallN A character string (8 characters or fewer) that labels this species. This label is used, e.g. by the SITE category to associate a species with an atom at a given site. The species ID also names a disk file with information about that atom (potential parameters, moments, potential and some sundry other information). More precisely, species are split into classes; the program differentiates class names by appending integers to the species label. The first class associated with the species has the species label; subsequent ones have integers appended.  Example: testing/test.ovlp 3 ZrallN Nuclear charge. Normally an integer, but Z can be a fractional number. A fractional number implies a virtual crystal approximation to an alloy with some Z intermediate between the two integers sandwiching it. RrallN The augmentation sphere radius, in atomic units.
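A quick check of both overlap measures for a sphere pair can be sketched as below. The definitions assumed here — (r1+r2−d)/d for the OMAX1-style overlap and (r1+r2−d)/min(r1,r2) for the OMAX2-style overlap — are my reconstruction of the garbled formulas; verify them against the page the text refers to:

```python
def overlaps(r1, r2, d):
    """Two sphere-overlap measures for radii r1, r2 and bond length d.

    Assumed definitions (reconstruction, not verified Questaal source):
    OMAX1-style: (r1+r2-d)/d;  OMAX2-style: (r1+r2-d)/min(r1,r2).
    """
    lin = (r1 + r2 - d) / d            # OMAX1-style overlap
    rel = (r1 + r2 - d) / min(r1, r2)  # OMAX2-style overlap
    return lin, rel
```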
This is a required input for most programs: choose one of R=, R/W= or R/A=. Read the descriptions of R/W and R/A below for further remarks; also see this page for a more complete discussion on the choice of sphere radii. lmchk can find sphere radii automatically. Invoke lmchk with --getwsr. You can also rescale as-given radii to meet constraints with the SCLWSR token. R/WrallN R/W= ratio of the augmentation sphere radius to the average Wigner Seitz radius W. W is the radius of a sphere such that (4πW³/3) = V/N, where V/N is the volume per atom. Thus if all radii are equal with R/W=1, the sum of sphere volumes would fill space, as is usual in the ASA. You must choose the radii so that the sum of sphere volumes (Σᵢ 4πRᵢ³/3) equals the unit cell volume V; otherwise results may become unreliable. The space-filling requirement means spheres may overlap quite a lot, particularly in open systems. If sphere overlaps get too large (>20% or so), accuracy becomes an issue. In such a case you should add “empty spheres” to fill space. Use lmchk to print out sphere overlaps. lmchk also has an automatic empty spheres finder, which you invoke with the --findes switch; see here for a discussion. Example: testing/test.ovlp 3 FP results are much less sensitive to the choice of sphere radii. Strictly, the spheres should not overlap, but because of lmf’s unique augmentation scheme, overlaps of up to 10% cause negligibly small errors as a rule. (This does not apply to GW calculations!) Even so, it is not advisable to let the overlaps get too large. As a general rule the L-cutoff should increase as the sphere radius increases. Also it has been found in practice that self-consistency is harder to accomplish when spheres overlap significantly. R/ArallN R/A = ratio of the augmentation sphere radius to the lattice constant ArallY0.025Radial mesh point spacing parameter. All programs dealing with augmentation spheres represent the density on a shifted logarithmic radial mesh.
The ith point on the mesh is rᵢ = b[exp(A(i−1)) − 1]. b is determined from the number of radial mesh points specified by NR. NRiallYDepends on other inputNumber of radial mesh points LMXiallYNL-1Basis l-cutoff inside the sphere. If not specified, it defaults to NL−1 RSMHr,r,…lmf, lmfgwdY0Smoothing radii defining the basis (a.u.), one radius for each l. RSMH and EH together define the shape of the basis functions in lmf. To optimize, try running lmf with --optbas. EHr,r,…lmf, lmfgwdY Hankel energies for the basis (Ry), one energy for each l. RSMH and EH together define the shape of the basis functions in lmf. RSMH2r,r,…lmf, lmfgwdY0Basis smoothing radii, second group EH2r,r,…lmf, lmfgwdY Basis Hankel function energies, second group LMXAiFPYNL - 1Angular momentum l-cutoff for projection of wave function tails centered at other sites in this sphere. Must be at least the basis l-cutoff (specified by LMX=). IDXDNiASAY1A set of integers, one for each l-channel, marking which orbitals should be downfolded. 0 use automatic downfolding in this channel. 1 leaves the orbitals in the basis. 2 folds down about the inverse potential function at the linearization energy. 3 folds down about the screening constant alpha. In the FP case, 1 includes the orbital in the basis; >1 removes it KMXAilmf, lmfgwdY3Polynomial cutoff for projection of wave functions in the sphere. Smoothed Hankels are expanded in polynomials around other sites instead of Bessel functions as in the case of normal Hankels. RSMArlmf, lmfgwdYR * 0.4Smoothing radius for projection of smoothed Hankel tails onto augmentation spheres. These functions are expanded in polynomials by integrating with Gaussians of radius RSMA at that site. RSMA very small reduces the polynomial expansion to a Taylor series expansion about the origin. For large KMXA the choice is irrelevant, but RSMA is best chosen as the value that maximizes the convergence of smooth Hankel functions with KMXA. LMXLilmf, lmfgwdYNL - 1Angular momentum l-cutoff for explicit representation of local charge on a radial mesh.
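Returning to the radial mesh controlled by tokens A and NR: a numerical sketch of the shifted logarithmic mesh, assuming the form rᵢ = b[exp(A(i−1)) − 1] with b fixed so the last point lands on the sphere radius (the function name and NR default are illustrative, not Questaal's):

```python
import math

def radial_mesh(rmax, a=0.025, nr=351):
    """Shifted logarithmic mesh r_i = b*(exp(a*(i-1)) - 1), i = 1..nr.

    b is fixed so that r_nr = rmax.  Sketch only, assuming the mesh form
    reconstructed in the text; a corresponds to token A, nr to token NR.
    """
    b = rmax / (math.exp(a * (nr - 1)) - 1.0)
    return [b * (math.exp(a * i) - 1.0) for i in range(nr)]
```

The mesh starts exactly at r=0, ends at rmax, and the spacing grows exponentially toward the sphere boundary, giving dense sampling near the nucleus.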
RSMGrlmf, lmfgwdYR/4Smoothing radius for Gaussians added to sphere densities to correct multipole moments needed for electrostatics. The value should be as large as possible but small enough that the Gaussian doesn’t spill out significantly beyond the muffin-tin radius (RMT). LFOCAiFPY1Prescribes how the core density is treated. 0 confines the core to within RMT. Usually the least accurate. 1 treats the core as frozen but lets it spill into the interstitial 2 same as 1, but the interstitial contribution to vxc is treated perturbatively. RFOCArFPYR × 0.4Smoothing radius fitting the tails of the core density. A large radius produces a smoother interstitial charge, but a less accurate fit. RSMFArFPYR/2Smoothing radius for the tails of the free-atom charge density. Relevant only for the first iteration (non-self-consistent calculations using the Harris functional). A large radius produces a smoother interstitial charge, but a somewhat less accurate fit. RS3rFPY1Minimum allowed smoothing radius for local orbitals HCRrlmY Hard sphere radii for structure constants. If the token is not parsed, attempt to read HCR/R below HCR/RrlmY0.7Hard sphere radii for structure constants, in units of R ALPHArASAY Screening parameters for structure constants DVrASAY0Artificial constant potential shift added to spheres belonging to this species MIX1ASAYFSet to suppress self-consistency of classes in this species IDMODiallY00: floats Pl (aka continuous principal quantum number) to the band center of gravity 1: freezes Pl 2: freezes the linearization energy. lmf,lmfgwd only: Add 10 to orthogonalize partial waves to the highest core level for a particular l. CSTRMX1allYFSet to T to exclude this species when automatically resizing sphere radii GRP2iASAY0Species with a common nonzero value of GRP2 are symmetrized, independent of symmetry operations.
The sign of GRP2 is used as a switch, so species with negative GRP2 are symmetrized but with spins flipped (NSPIN=2) FRZWF1FPYFSet to T to freeze augmentation wave functions for this species IDUi[,i…]allY0LDA+U mode. IDU is a vector, with one number for each l corresponding to s, p, d, f. A number signifies: 0. No LDA+U 1. LDA+U with Around Mean Field limit double counting 2. LDA+U with Fully Localized Limit double counting 3. LDA+U with mixed double counting. 4. U is treated as a potential, not a Coulomb interaction. A fixed potential U is added to both spin channels. 5. Same as 4, but U is a potential applied to the majority channel and J to the minority channel. UHr[,r…]allY0Hubbard U for LDA+U (Ry). UH is a vector, with one number for each l. JHr[,r…]allY0Exchange parameter J for LDA+U (Ry). JH is a vector, with one number for each l. EREF=rallY0Reference energy subtracted from the total energy AMASS=rFPY Nuclear mass in a.u. (for dynamics) C-HOLEclmf, lmY Channel for core hole. You can force partial core occupation. Syntax consists of two characters: the first is the principal quantum number, and the second is one of ‘s’, ‘p’, ‘d’, ‘f’ for the l quantum number, e.g. ‘2s’ See Partially occupied core holes for description and examples. Default: nothing C-HQr[,r]allY-1 0The first number specifies the number of electrons to remove from the l channel specified by C-HOLE=. The second (optional) number specifies the hole magnetic moment. See Partially occupied core holes for description and examples. Pr,r,…allY Starting values for Pl, aka “continuous principal quantum number”, one for each l=0..LMXA Default: taken from an internal table. PZr,r,…FPY0Starting values for local orbitals’ potential functions, one for each of l=0..LMX. Setting PZ=0 for any l means that no local orbital is specified for this l. Each integer part of PZ must be either one less than P (semicore state) or one greater (high-lying state).
Qr,r,…allY Charges for each l-channel making up the free-atom density Default: taken from an internal table. MMOMr,r,…allY0Magnetic moments for each l-channel making up the free-atom density Relevant only for the spin-polarized case. See Table of Contents Category STR contains information connected with real-space structure constants, used by the ASA programs. It is read by lmstr, lmxbs, lmchk, and tbe. RMAXSrallY Radial cutoff for strux, in a.u. If the token is not parsed, attempt to read RMAXA, below RMAXArallY Radial cutoff for strux, in units of the lattice constant. If the token is not parsed, attempt to read RMAX, below RMAXrallY0The maximum sphere radius (in units of the average Wigner Seitz radius) over which neighbors will be included in the generation of structure constants. This takes a default value and is not required input. It is an interesting exercise to see how much the structure constants and eigenvalues change when this radius is increased. NEIGHBiFPY30Minimum number of neighbors in the cluster ENV_MODEiFP, lm, lmstrY0lmf: 1 turns on the screened version of the basis lm: Type of envelope functions: 0 2nd generation 1 SSSW (3rd generation) 3 SSSW and val-lap basis ENV_NELiFP, lm, lmstrY lmf: Number of screened envelope functions. In the ASA context (lm): number of NMTOs (SSSW) ENV_ELrFP, lm, lmstrN0lmf: Energies of the screened smooth Hankel functions. In the ASA context (lm): NMTO (SSSW) energies, in a.u. DELRXrASAY3Range of the screened function beyond the last site in the cluster TOLGrFPY1e-6Tolerance in l=0 gaussians, which determines their range RVL/RrallY0.7Radial cutoff for the val-lap basis (this is experimental) VLFUNiallY0Functions for the val-lap basis (this is experimental) 0 G0 + G1 1 G0 + Hsm 2 G0 + Hsm-dot MXNBRiASAY0Make lmstr allocate enough memory in dimensioning arrays for MXNBR neighbors in the neighbor table. This is rarely needed.
SHOW1lmstrYFShow strux after generating them EQUIV1lmstrYFIf true, try to find equivalent neighbor tables, to reduce the computational effort in generating strux. Not generally recommended LMAXWilmstrY-1l-cutoff for the (optional) Watson sphere, used to help localize strux DELRWrlmstrY0.1Range extending beyond the cluster radius for the Watson sphere IINV_NITilmstrY0Number of iterations IINV_NCUTilmstrY0Number of sites for the inner block IINV_TOLrlmstrY0Tolerance in errors *IINV parameters govern iterative solutions to screened strux See Table of Contents This category is specific to the ASA. It controls whether the code starts with moments P,Q or potential parameters; also P,Q may be input in this category. It is read by lm, lmgf, lmpg, and tbe. BEGMOMiASAY1When true, causes program lm to begin with moments from which potential parameters are generated. If false, the potential parameters are used and the program proceeds directly to the band calculation. FREE1ASAYFIs intended to facilitate a self-consistent free-atom calculation. When FREE is true, the program uses rmax=30 for the sphere radius rather than whatever rmax is passed to it; the boundary conditions at rmax are taken to be value=slope=0 (rmax=30 should be large enough that these boundary conditions are sufficiently close to those of a free atom); subroutine atscpp does not calculate potential parameters or save anything to disk; and lm terminates after all the atoms have been calculated. CNTROL1ASAYFWhen CNTROL=T, the parser attempts to read the “continuously variable principal quantum numbers” P and moments Q0,Q1,Q2 for each l channel; see P,Q below. ATOMcASAY Class label. P,Q (and possibly other data) is given by class. Tokens following a class label and preceding the next class label belong to that class. ATOM_P= and ATOM_QcASAY Read “continuously variable principal quantum numbers” for this class (P=…), or energy moments Q0,Q1,Q2 (Q=…).
P consists of one number per l channel, Q of three numbers (Q0,Q1,Q2) for each l. Note: In spin-polarized calculations, a second set of parameters must follow the first, and the moments should all be half of what they are in non-spin-polarized calculations. In this sample input file for Si, P,Q is given as: ATOM=SI P=3.5 3.5 3.5 Q=1 0 0 2 0 0 0 0 0 ATOM=ES P=1.5 2.5 3.5 Q=.5 0 0 .5 0 0 0 0 0 One electron is put in the Si s orbital, 2 in the p and none in the d, while 0.5 electrons are put in the s and p channels for the empty sphere. All first and second moments are zero. This rough guess produces a correspondingly rough potential. You do not have to supply information here for every class; but for classes you do, you must supply all of (P,Q0,Q1,Q2). Data read in START supersedes whatever may have been read from disk. Remarks below provide further information about how P,Q is read and printed. RDVES1ASAYFRead Ves(RMT) from the START category along with P,Q ATOM_ENUrASAY Linearization energies Sample START category The following is taken from the distribution’s test of La2CuO4. ATOM=LA P= 6.3055046 6.3000000 5.2308707 Q= 0.4770507 0.0000000 0.0610692 0.9882047 -0.3905638 0.2327244 2.0252993 0.0000000 0.1272500 ATOM=CU P= 4.6331214 4.3438861 3.8947075 Q= 0.4910799 0.0000000 0.0974578 0.6087341 0.0000000 0.1140513 9.4164169 0.0000000 0.2018023 ATOM=OX P= 2.8833091 2.8438183 3.1896353 Q= 1.6741779 0.0000000 0.0653497 4.2304006 0.0000000 0.1036699 0.0404676 0.0000000 0.0023966 ATOM=OX2 P= 2.8840328 2.8447249 3.1806967 Q= 1.6660490 0.0000000 0.0257208 4.1318836 0.0000000 0.0365535 0.0083512 0.0000000 0.0003608 Notes on parsing P and Q In the ASA, knowledge of P and Q is sufficient to completely determine the ASA density. Several ways are available to read these important quantities.
The parser returns (P,Q) as a set according to the following priorities: • The (P,Q) set is read from disk, if supplied (possibly along with other quantities such as potential parameters El, C, Δ, γ). One file is created for each class that contains this data and other class-specific information. Some or all of the data may be missing from the disk files. Alternatively, you may read these data from a restart file rsta.ext, which if it exists contains data for all classes in one file. The program will not read this data by default; use --rs=1 to have it read from the rsta file. To write class data to rsta, use --rs=#,1 (# must be 0 or 1) • If START_CNTROL=T, (P,Q) (and possibly other quantities) are read from START for classes you supply (usually all classes). Data read from this category supersedes any that might have been read from disk. If class data is read from either of these sources, the input system returns it. For classes where none is available the parser will pick a default: • If data from a different class but in the same species is available, use it. • Otherwise use some preset default values for (P,Q). After a calculation finishes you can run lmctl to read (P,Q) from disk and format it in a form ready to insert into the START category, e.g. ATOM=SI P= 3.8303101 3.7074067 3.2545634 Q= 1.1694276 0.0000000 0.0297168 1.8803181 0.0000000 0.0489234 0.1742629 0.0000000 0.0063520 ATOM=ES P= 1.4162942 2.2521617 3.1546386 Q= 0.2873686 0.0000000 0.0129888 0.3485430 0.0000000 0.0165416 0.1400664 0.0000000 0.0055459 Thus all the information needed to generate a self-consistent ASA density can be embedded in the ctrl file. Because the P’s float to the band center of gravity (i.e. the center of gravity of the occupied states for a particular site and l channel) the corresponding first moments Q1 vanish. P’s are floated by default since this minimizes the linearization error.
Caution: Sometimes it is necessary to override this default: If the band CG (of the occupied states) is far removed from the natural CG of a particular channel, you must restrict how far P can be shifted to the band CG. In some cases, allowing P to float completely will result in “ghost bands”. The high-lying Ga 4d state is a classic example. To restrict P to a fixed value, see SPEC_ATOM_IDMOD. In such cases, you want to pick the fractional part of P to be small, but not so low as to cause problems (about 0.5 for s orbitals and 0.15 for d orbitals; see here). See Table of Contents By default structural information is read through the ctrl file. But some of the essential data can be read in multiple ways, in particular from a site file. Questaal has utilities that will import this information from other formats such as cif files. FILEcallY Read structural data (ALAT, NBAS, PLAT) from an independent site file. The file structure is documented here; see also this tutorial. Note: EXPRESS_file performs the same function as STRUC_FILE, and supersedes STRUC_FILE if it is present. NBASiallN† Number of sites in the primitive unit cell. NSPECiallY Number of atom species ALATrallN† A scaling, in atomic units, of the lattice and basis vectors DALATrallY0Is added to ALAT. It can be useful in contexts where certain quantities that depend on ALAT are to be kept fixed (e.g. SPEC_ATOM_R/A) while ALAT varies. PLATr,r,…allN† (dimensionless) primitive translation vectors SLATr,r,…lmscellN Superlattice vectors NLiallY3Sets a global default value for l-cutoffs lcut = NL−1. NL is used for both basis set and augmentation cutoffs. SHEARr,r,r,rallY Enables shearing of the lattice in a volume-conserving manner. If SHEAR=#1,#2,#3,#4,  #1,#2,#3=direction vector;  #4=distortion amplitude. Example: SHEAR=0,0,1,0.01 distorts a lattice of initially cubic symmetry to tetragonal symmetry, with 0.01 shear.
ROTcallY Rotates the lattice and basis vectors, and the symmetry group operations, by a unitary matrix. Example: ROT=z:pi/4,y:pi/3,z:pi/2 generates a rotation matrix corresponding to the Euler angles α=π/4, β=π/3, γ=π/2. See this document for the general syntax. Lattice and basis vectors, and point group operations (SYMGRP) are all rotated. The direction of rotation is such that a +π/2 rotation around z transforms (x,y) into (−y,x). DEFGRDr,r,…allY A 3×3 matrix defining a general linear transformation of the lattice and basis vectors. STRAINr,r,…allY A sequence of six numbers defining a general distortion of the lattice and basis vectors. ALPHArallN Amount of Voigt strain. †Information may be obtained from a site file See Table of Contents Category SYMGRP provides symmetry information; it helps in two ways. First, it provides the relevant information to find which sites are equivalent, which makes for simpler and more accurate band calculations. Second, it reduces the number of k-points needed in Brillouin zone integrations. Normally you don’t need SYMGRP; the program is capable of finding its own symmetry operations. However, there are cases where it is useful or even necessary to manually specify them. For example, when including spin-orbit coupling or noncollinear magnetism, the symmetry group isn’t specified by the atomic positions alone. In this case you need to supply extra information. You can use SYMGRP to explicitly declare a set of generators from which the entire group can be created. For example, the three operations R4X, MX and R3D are sufficient to generate all 48 elements of cubic symmetry. Unless conditions are set for noncollinear magnetism and/or SO coupling, the inversion is assumed by default as a consequence of time-reversal symmetry. A tag describing a generator for a point group operation has the form O(nx,ny,nz) where O is one of M, I, Rj, or E, for the mirror, inversion, j-fold rotation, and identity operations, respectively.
nx,ny,nz are a triplet of indices specifying the axis of rotation. You may use X, Y, Z or D as shorthand for (1,0,0), (0,1,0), (0,0,1), and (1,1,1) respectively. You may also enter products of rotations, such as I*R4X. For example, SYMGRP R4X MX R3D specifies three generators (4-fold rotation around x, mirror in x, 3-fold rotation around (1,1,1)). Generating all possible combinations of these rotations will result in the 48 symmetry operations of the cube. To suppress all symmetry operations, use SYMGRP E. In the ASA, owing to the spherical approximation to the potential, only the point group is required for self-consistency. But in general you must specify the full space group. The translation part gets appended to the rotation part in one of the following forms:  :(x1,x2,x3)  or alternatively  ::(p1,p2,p3)  with the double ‘::’. The first defines the translation in Cartesian coordinates in units of ALAT, the second in crystal coordinates. These two lines (taken from testing/ctrl.cr3si6) provide equivalent specifications: SYMGRP r6z:(0,0,0.4778973) r2(1/2,sqrt(3)/2,0) SYMGRP r6z::(0,0,1/3) r2(1/2,sqrt(3)/2,0) Keywords in the SYMGRP category SYMGRP accepts, in addition to symmetry operations, the following keywords: • find tells the program to determine its own symmetry operations. Thus: SYMGRP find amounts to the same as not including a SYMGRP category in the input at all. You can also specify a mix of generators you supply, and tell the program to find any others that might exist. For example: SYMGRP r4x find specifies that the 4-fold rotation be included, and  find  tells the program to look for any additional symops that might exist. • AFM: For certain antiferromagnets, certain translation operations exist provided the rotation/shift is accompanied by a spin flip. Say a translation of (-1/2,1/2,1/2)a restores the crystal structure, but all atoms after translation have opposite spin. Specify this symmetry with: SYMGRP ... AFM::-1/2,1/2,1/2 This operation is used only by lmf. 
• SOC or SOC=2: Tells the symmetry group generator to exclude operations that do not preserve the z axis. This is used particularly for spin-orbit coupling where the crystal symmetry is reduced (z is the quantization axis). SOC=2 is like SOC but allows operations that preserve z or flip z to −z. This works in some cases. Note: This keyword is only active when the two spin channels are linked, e.g. SO coupling or noncollinear magnetism. • GRP2 turns on a switch that can force the density among inequivalent classes that share a common species to be averaged. In the ASA codes the density is spherical and the averaging is complete; in the FP case only the spherical part of the densities can be averaged. This helps sometimes with stabilizing difficult cases in the path to self-consistency. You specify which species are to be averaged with the SPEC_ATOM_GRP2 token. GRP2 averages the input density; GRP2=2 averages the output density; GRP2=3 averages both the input and the output density. • RHOPOS turns on a switch that forces the density positive at all points. You can also accomplish this with the command-line switch --rhopos. See Table of Contents This category is used for version control. As of version 7, the input file must contain the token LM:7 for any program in the suite. It tells the input system that you have a v7-style input file. For a particular program you need an additional token to tell the parser that this file is set up for that program. Thus your VERS category should read: VERS LM:7 ASA:7 for lm, lmgf or lmpg VERS LM:7 FP:7 for lmf or lmfgwd VERS LM:7 MOL:3 for a molecules code such as lmmc VERS LM:7 TB:9 for the empirical tight-binding code tbe and so on. Add version control tokens for whatever programs your input file supports. 
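The SYMGRP discussion above states that R4X, MX and R3D generate all 48 cubic symmetry operations. This can be checked by closing a generator set under matrix multiplication; the sketch below is mine (matrix conventions and function name are illustrative, not Questaal code):

```python
def close_group(gens):
    """Close a set of 3x3 integer matrices under multiplication.

    In a finite group, the multiplicative closure of any generator set
    is the subgroup they generate (powers of each element reach the
    identity and the inverse).  Sketch for illustration only.
    """
    def mul(A, B):
        return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3))
                           for j in range(3)) for i in range(3))
    group = set(gens)
    frontier = set(gens)
    while frontier:
        new = ({mul(a, b) for a in frontier for b in group} |
               {mul(a, b) for a in group for b in frontier})
        frontier = new - group
        group |= frontier
    return group

R4X = ((1, 0, 0), (0, 0, -1), (0, 1, 0))   # 4-fold rotation about x
MX  = ((-1, 0, 0), (0, 1, 0), (0, 0, 1))   # mirror in x
R3D = ((0, 0, 1), (1, 0, 0), (0, 1, 0))    # 3-fold rotation about (1,1,1)
```

Closing {R4X, MX, R3D} yields 48 operations, the full cubic group, consistent with the statement in the text.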
See Table of Contents

Notes on gradient-corrected functionals

The semilocal exchange-correlation potential and energy density are built from their local counterparts through an enhancement factor,

ε_xc^SL(n,∇n) = ε_xc^HEG(n) F_xc(s)

where v_xc^HEG and ε_xc^HEG are the exchange or correlation potentials and energy densities, respectively, for the Homogeneous Electron Gas (HEG), which are determined by the options XCFUN=1 or XCFUN=2. F_xc is the exchange or correlation GGA enhancement factor, which introduces semilocal effects and can be chosen using the token GGA. s = |∇n| / [2 (3π²n)^(1/3) n] is the reduced gradient. The final exchange-correlation potential/functional is then determined using two different tokens, i.e. one for the local (HEG) part and the other for the semilocal (GGA) correction. Note that hybrid and meta-GGA exchange-correlation functional schemes are not implemented in the Questaal package. As a consequence, only LDA and GGA libXC functionals can be used.

Interactive mode

For the most part Questaal codes are designed to run without receiving information through standard input. The various editors are an exception (though even editor instructions can be run in batch mode; see e.g. the dynamical self-energy editor tutorial). It is often convenient to have some interactive facility, e.g. to decide whether to limit the number of iterations in a self-consistency cycle. The Questaal codes have an interactive mode, which you can turn on with the token IO_IACTIV in the ctrl file, or on the command line with the  --iactive  switch. For example, if you run lm or lmf interactively you will be prompted with

QUERY: beta (def=0.3)?

and the program will wait for input. To see what your options are, enter  ? <RET>. You should see

(A)bort (I)active (V)erb (C)pu (T)iming (W)ork (S)et value
QUERY: beta (def=0.3)?

Here it is asking if you want to modify the existing value for the charge mixing parameter beta.
Enter one of:

• a   Program aborts execution
• i   Toggles interactive mode
• v #  Sets verbosity to #
• c   Prints out CPU usage so far
• t   Turns on timing printout
• w   Not used now
• s #  Sets parameter to #

If you enter

s .5 <RET>

in this instance, the program will modify its value for beta to 0.5 and prompt you again. If you don’t want to make any (further) changes, just enter <RET>. The most commonly changed parameter is the number of iterations, called  maxit. You can increase or decrease it; if you decrease maxit below the current iteration number, the program will stop.

Simulacrum of interactive mode

You can supply interactive-mode instructions, normally read from standard input, by entering them into a file iact.ext. The executable first looks for that file, reads its contents, and executes its instructions before prompting you. It will perform the instruction, e.g. set the value of a parameter, without turning on true interactive mode. However, if you do turn it on, e.g. by putting  i  into iact.ext, the program will revert to true interactive mode and prompt you for instructions. There is an important difference between the normal and simulacrum operations when setting a parameter. In the latter case, you must tell the program which parameter. Do this by naming the parameter after the  s. Thus iact.ext would contain a line like

s maxit 3

Interactive mode with MPI

Normal interactive mode is not available when running with multiple processors. The simulacrum mode does work, but only for a subset of parameters. Most importantly,  maxit  is one parameter that is read; thus you can adjust the number of iterations a job will do after execution starts. slatsm/query.f contains the source code controlling this mode.

See Table of Contents

ITER_MIX is a token in the ITER category that controls how Questaal codes iterate to self-consistency in the charge density.
Its contents are a string consisting of mixing options separated by a delimiter, as described here. Questaal codes follow the usual procedure of mixing a linear combination of the input density  nin  and output density  nout  to make a trial guess  n*  for the self-consistent density (see for example Chapter 9 in Richard Martin’s book1):

n* = (1 − β) nin + β nout   (4)

Questaal uses two independent techniques to accelerate convergence to the self-consistency condition  nout = nin. First, the quantities are mixed making use of a model for the dielectric function. Second, multiple (nin, nout) pairs (taken from prior iterations) can be used to accelerate convergence. They are typically used in combination, and the contents of ITER_MIX control options for both kinds of approaches. Both ideas are explained on this page. See also the ASA NbFe superlattice tutorial.

The ITER_MIX tag and how to use it

In practice, mixing proceeds as described on this page, but additionally by combining multiple instances of (Xin,Xout) pairs. Each iteration produces a new pair, and methods exist to use this information and take a linear combination of pairs, from both current and prior iterations, that is a better choice than any one pair. You can choose between the Broyden3 and Anderson2 methods. The string belonging to ITER_MIX should begin with one of  An  or  Bn, which tells the mixer which scheme to use. slatsm/amix.f describes the mathematics behind the Anderson scheme. n is the maximum number of prior iterations to include in the mix. As programs proceed to self-consistency, they dump prior iterations to disk, to read them the next time through. Data is read from and written to mixm.ext. The Anderson scheme is particularly simple to monitor. How much of δX from prior iterations is included in the final mixed vector is printed to stdout as parameter tj, e.g.
tj: 0.47741              ← iteration 2
tj:-0.39609 -0.44764     ← iteration 3
tj:-0.05454  0.01980     ← iteration 4
tj: 0.24975
tj: 0.48650

In the second iteration, one prior iteration was mixed; in the third and fourth, two; and after that, only one. (When the normal matrix picks up a small eigenvalue, the Anderson mixing algorithm reduces the number of prior iterations.) Consider the case when a single prior iteration was mixed.

• If tj=0, the new X is composed entirely of the current iteration. This means self-consistency is proceeding in an optimal manner.
• If tj=1, the new X is composed 100% of the prior iteration. This means that the algorithm doesn’t like how the mixing is proceeding, and is discarding the current iteration. If you see successive iterations where tj is close to (or worse, larger than) unity, you should change something, e.g. reduce beta.
• If tj<0, the algorithm thinks you can mix more of Xout and less of Xin. If you see successive iterations where tj is significantly negative (less than −1), increase beta.

Broyden mixing3 uses a more sophisticated procedure, in which it tries to build up the Hessian matrix. It usually works better but has more pitfalls than Anderson. Broyden has an additional parameter,  wc, that controls how much weight is given to prior iterations in the mix (see below). Remember also that (Xout − Xin) is screened. In a simple metal, the Lindhard function describes the actual dielectric function pretty well, and tj should be small, as seen in this tutorial. As for the dielectric function in the ASA, the codes (lm, lmgf, lmpg) offer two options:

1. A rough ε is obtained from eigenvalues of the Madelung matrix (OPTIONS_SCR=4).
2. The q=0 discretized polarization is explicitly calculated (OPTIONS_SCR=11 generates it and OPTIONS_SCR=2 uses it; see OPTIONS_SCR).

The general syntax for ITER_MIX is

An[,b=beta][,b2=b2][,bv=betv][,n=nit][,w=w1,w2][,fn=name][,k=nkill][,elind=#][;...]

or

Bn[,b=beta][,wc=wc][,n=nit][,w=w1,w2][,fn=name][,k=nkill][,elind=#][;...]

The options are described below.
They are parsed in routine subs/parmxp.f. Parameters (b, wc, etc.) may occur in any order:

• An or Bn:  maximum number of prior iterations to include in the mix (the mixing file may contain more than n prior iterations). n=0 implies linear mixing. Default: B2.
• b=beta:  the mixing parameter beta in Eq. 4 above. Default: 0.3.
• b2=b2:  not documented. The ASA code does not use this tag.
• n=nit:  the number of iterations to mix with this set of parameters before passing on to the next set. After the last set is exhausted, it starts over with the first set.
• fn=name:  mixing file name (mixm is the default). Must be eight characters or fewer.
• k=nkill:  kill the mixing file after nkill iterations. This is helpful when the mixing runs out of steam, or when the mixing parameters change. Default: 7.
• wc=wc:  (Broyden only) controls how much weight is given to prior iterations in estimating the Jacobian. wc=1 is fairly conservative. Choosing wc<0 assigns a floating value to the actual wc, proportional to wc/rms-error. This increases wc as the error becomes small. wc defaults to −1 if it is not specified. See Johnson’s paper3 for the definition of  wc.
• w=w1,w2:  (spin-polarized calculations only) The up- and down-spin channels are not mixed independently; instead the sum (up+down) and difference (up−down) are mixed. The two combinations are weighted by w1 and w2 in the mixing, emphasizing the more heavily weighted combination. As special cases, w1=0 freezes the charge and mixes the magnetic moments only, while w2=0 freezes the moments and mixes the charge only.
• elind=elind:  the Fermi energy entering into the Lindhard dielectric function. elind<0: use the free-electron gas value, scaled by elind. The default value is −1.
• wa:  (ASA only) weight for extra quantities included with P,Q in the mixing procedure. For noncollinear magnetism, this includes the Euler angles.
• locm:  (FP only) not documented yet.
• r=expr:  continue this block of the mixing sequence until the rms error < expr.

Example:  MIX=A4,b=.2,k=4  uses the Anderson method2, killing the mixing file every fourth iteration. The mixing  beta  is 0.2.

You can string together several rules: one set of rules applies for a certain number of iterations, followed by another set. Rules are separated by a “;”.

Example:  MIX=B10,n=8,w=2,1,fn=mxm,wc=11,k=4;A2,b=1  does 8 iterations of Broyden mixing, followed by Anderson mixing. The Broyden iterations weight (up+down) double that of (up−down) in the magnetic case, and iterations are saved in file mxm, which is deleted at the end of every fourth iteration. wc is 11; beta assumes the default value. The Anderson rules mix two prior iterations with beta=1.

See Table of Contents

1 R. M. Martin, Electronic Structure, Cambridge University Press (2004).
2 D. G. Anderson, Iterative procedures for nonlinear integral equations, J. Assoc. Comput. Mach. 12, 547–560 (1965).
3 D. D. Johnson, Phys. Rev. B 38, 12807 (1988).
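The Anderson scheme2 and the role of the printed tj can be illustrated on a toy fixed-point problem. This sketch is purely illustrative: the map g, beta, and the tolerances are arbitrary choices, not Questaal internals, and only one prior pair is mixed (analogous to A1):

```python
import numpy as np

def g(x):
    return np.cos(x)          # stand-in for the density iteration; x = cos(x) at self-consistency

def anderson(x0, beta=0.3, tol=1e-10, maxit=100):
    x_in, prev = x0, None     # prev holds the (x_in, x_out) pair of the last iteration
    for it in range(1, maxit + 1):
        x_out = g(x_in)
        F = x_out - x_in      # residual; vanishes at self-consistency
        if abs(F) < tol:
            return x_in, it
        if prev is not None:
            xp_in, xp_out = prev
            Fp = xp_out - xp_in
            # tj minimizes |(1-tj)*F + tj*Fp|: tj=0 keeps only the current
            # iteration; tj near 1 discards it in favor of the prior one.
            tj = F / (F - Fp) if F != Fp else 0.0
            xb_in = (1 - tj) * x_in + tj * xp_in
            xb_out = (1 - tj) * x_out + tj * xp_out
        else:
            xb_in, xb_out = x_in, x_out
        prev = (x_in, x_out)
        x_in = (1 - beta) * xb_in + beta * xb_out   # linear mix with beta
    return x_in, maxit

x, it = anderson(0.0)
print(round(x, 6))            # fixed point of cos: 0.739085
```

With a single prior pair the optimal tj reduces to a secant step; a tj printed near zero means the current iteration dominates the mix, matching the discussion of tj above.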
Epigenetics: the light and the way? October 24, 2010 • 6:44 am How often do you see an editor of a scientific journal complain that a field is overhyped?   Well, you can see it this week in Current Biology, where Florian Maderspacher, the senior reviews editor, takes out after the current penchant of  journalists to see epigenetics as the Great Missing Piece of Biology—a field that will completely revolutionize Darwinism and our view of inheritance. (I take epigenetics to mean “inheritance not based on coding changes in the DNA”.)  The title of Maderspacher’s piece pretty much says it all: “Lysenko rising.” (Sadly, it seems to be behind a paywall.) Maderspacher was cheesed off because the latest issue of his German magazine Der Spiegel devoted ten pages to epigenetics, including a racy cover of a nude nymph whose naughty bits were conveniently occluded by DNA-shaped splashes: Florian translates this as “THE VICTORY OVER THE GENES. Smarter, healthier, happier: how we can outwit our genome.”  And he explains that this is not a one-off bit of hype: Epigenetics is of course being considered ‘sexy’ in vast circles of the scientific world (and has attracted the funding to go with it), but that Spiegel cover was a different type of ‘sexy’. This kind of public attention seemed unusual: molecular biology rarely makes it to the front page. And what’s more, this wasn’t just some German oddity: Newsweek had last year a similar cover story, touting a revolution in biology in gonzo-journalism style: “Roll over, Mendel. Watson and Crick? They are so your old man’s version of DNA”. Likewise, the New York Times is in tune, as a news piece last year celebrated the role of the ‘epigenome’ in controlling “which genes are on or off”; nor is the hype confined to the popular press, as a recent editorial in Nature also noted that: “genome sequences, within and across species, were too similar to be able to explain the diversity of life. 
It was instead clear that epigenetics — those changes to gene expression caused by chemical modification of DNA and its associated proteins — could explain much about how these similar genetic codes are expressed uniquely in different cells, in different environmental conditions and at different times”. And the wonders of epigenetics, at least in this piece, came down to the same tired old data: The article itself was mainly concerned with listing examples supporting the notion that ‘genes aren’t everything’: on the one hand, cases where genetic predisposition, e.g. for adiposity, does not lead to the development of that phenotype, as well as the much-discussed weaknesses in genome-wide association studies to pick up causative genetic agents for common diseases; on the other hand, examples of how the environment can influence the genome, evident for instance as differences in DNA modifications between monozygotic twins in different environments and lifestyles. The piece culminated in bold statements like: “Epigenetics is the long sought link through which the environment influences the hereditary material [… and it] currently leads to a dramatic new understanding of human biology”. There is thus no need to construe a dichotomy between the power of the genes and the power of the environment — a molecular version of the ancient nature vs. nurture debate. The environment influences the phenotype through the genes. There is no contrast, no one over whom to achieve ‘victory’. Indeed.  I’m not sure I agree with Maderspacher’s analysis of the reasons why this misconception is so important.  He floats the idea that Germans are particularly fond of  it “for historic reasons,” that is, because it apparently contradicts the hegemony of genetic determinism that undergirded Nazi racist ideology. But ultimately he lays the hype at the feet of Marxism, which trumpets the malleability of the individual by the environment. 
(This is where Lysenko comes in—the Russian agronomist whose fraudulent claim that one could permanently modify crops by environmental manipulation so impressed Stalin, and so ruined Russian agriculture.)  Maderspacher sees epigenetics as “a kind of lysenkoism for the molecular age.” Well, maybe the popularity of epigenetics is a vulgarised environmentalist response to the “vulgarised genetic determinism” that so dominates our times.  And maybe that’s why journalists love it, though, as Maderspacher notes, they love anything that smacks of an overthrown paradigm—especially Darwinian evolution.  Regardless, he sees this kind of buzz-journalism as injurious to the public understanding of science, and I agree 100%: Therefore, a larger frame has to be invoked, far-fetched as it may be. Building around the story is a legitimate literary technique to some extent, but becomes dangerous when the frame interferes with the presentation and interpretation of empirical data. In effect, it’s not far from what Lysenko did, and makes the whole purpose of science journalism questionable. It won’t cost lives as Lysenko’s mad ideas — after all, it’s only molecular biology — but the public have a right to be informed correctly. First, because they pay for the research. Second, because at the very least they need to know that science, and genetics in particular, cannot give them simple answers about who they are and how they should live, and neither can epigenetics. They’ll have to work that out for themselves and let Lysenko lie. Well done!  But I’d go further and include among the miscreants those scientists—especially those evolutionists—who argue in the face of the data that epigenetics will overthrow conventional ideas about evolution and natural selection. Unlike journalists, they know better. 40 thoughts on “Epigenetics: the light and the way? 1. Wow. I just did a PubMed search on “epigenetics” and got 39,157 hits, including 6,567 reviews and 16,930 free text articles. 
Any way to help the clueless but curious in the crowd (ie, me) separate the wheat from the chaff? I can’t read 16,930 articles between now and kick-off. 2. Excellent article! I too, wouldn’t know where to start when it comes to epigenetics. It does appeal as an elegant explanation for environmental influences. However, I’m skeptical enough of my own, limited knowledge of genetics to think that any misgivings I might have are due to a misunderstanding or incomplete knowledge of ‘traditional’ evolutionary theory… You know, it would be a great topic for a book… 😉 3. “…maybe the popularity of epigenetics is a vulgarized environmentalist response to the ‘vulgarized genetic determinism’ that so dominates our times.” Right. Anything that seems to defeat the imagined specter of determinism will be eagerly snapped up by a culture wedded to the myth of contra-causal freedom. But whether genetic or environmental or their interaction, it’s all deterministic, at least for practical purposes having to do with behavior (and indeterminism doesn’t help establish originative agency). To point this out isn’t being vulgar, but simply stating the obvious. Eventually we’re going to have to come to terms with determinism – the fact of our complete inclusion in natural cause and effect – as science closes gaps in our understanding. Killing off the “little god” of libertarian free will is the next step for atheists wanting to be pro-active in developing and promoting a fully naturalistic worldview. Sam Harris does a nice job of it pp. 102-112 in The Moral Landscape, plus he draws out some of the progressive implications for criminal justice. Hope his fans (and critics) take note, whatever the merits of his thesis about science and morality. 1. Quantum Mechanics shows that the world is fundamentally stochastic not deterministic. 
DNA is molecular and molecules are in the quantum domain; this is why Schrödinger in the forties was able to predict that genetic information was coded at the molecular level rather than at the level of super-macromolecular assemblies, as most biologists believed at the time. He merely deduced that this was a consequence of X-rays causing mutations. As chemists we tend to use the time-independent Schrödinger equation as a starting point for our calculations, so we only see a time-averaged result in order to obtain molecular properties. This is fine for systems in equilibrium. Biological systems are in principle out of equilibrium. So what we see is dependent upon the evolution of the state vector in time. It doesn’t matter if you believe in state vector reduction or take its unitary evolution seriously as Everett did in the relative state formulation of QM: what we see is fundamentally stochastic. Our world is not deterministic. Determinism is dead.

1. Indeterminism may not of itself necessarily establish “originative agency” but it does establish potential and, in a sense, really existing counterfactual histories. This has important implications.

2. Our world is not deterministic. Determinism is dead.

Doesn’t that depend on your interpretation of quantum mechanics? My understanding is that on interpretations like Many Worlds, the randomness is purely subjective—it’s a matter of which universe you’re in, and the seemingly random events always happen and don’t happen in different universes. As I understand it, this general kind of interpretation is not dead, and is common among quantum cosmologists in particular.

1. The ensemble as a totality is deterministic, just as the unitary evolution of the state vector is deterministic. That is the bird’s eye view, as Tegmark puts it. For any individual history it appears to be stochastic; this is the frog’s eye view, as Tegmark puts it. Personally I favour the MWI (the Everett relative state formulation).
However it still comes down to the fact that the world we live in is stochastic. This is the lesson of QM and what separates it from classical theories like relativity.

1. Oh yes, as the adventures of cGh Tompkins show, you just never know what might happen next in this world of quantum randomness. Oops… I wasn’t reading carefully enough, and missed that you made clear what you meant by “fundamentally,” which was not what I meant by “fundamentally.” I’m still confused about what you find particularly interesting about whether the “world we see” is fundamentally random, or just pseudo-random. I think we need some clearer vocabulary for this sort of thing, but I don’t know what it would be. Many people seem to think that it’s a big philosophical deal whether things are “fundamentally” deterministic vs. nondeterministic, strictly speaking. I don’t see it. If my behavior is driven by a random number generator, it doesn’t generally matter whether it’s really, really fundamentally random, e.g., quantum noise, or if it’s pseudo-random. Either way, it’s noise for my purposes, and dependency on noise is not the same thing as any useful concept of freedom, or any particularly interesting high-level concept like that. I’m not clear on what “interesting implications” you see in quantum randomness. Whether things are effectively random in the middle seems much more important than whether they’re random at the bottom. Either way, higher-level systems may be resilient in the face of the underlying or external noise, or sensitive to it, and that’s usually the difference that matters.

3. Quantum Mechanics shows that the world is fundamentally stochastic, not deterministic.

Curiously this is an analog to the genetic-epigenetics discussion. As you yourself admit, QM is a deterministic theory par excellence where ‘stochastic modifications are not inherited past one or two Planck times’. It is inherent in the unitary evolution that we can’t lose causality.
And it is explicit determinism in the MW theory, which is the realistic QM. At the same time the fact that we live in one world of the many means that indeed what we see is fundamentally stochastic (in the “one or two Planck times” sense).

However it still comes down to the fact the world we live in is stochastic. This is the lesson of QM and what separates it from classical theories like relativity.

And this is the, again curious, analog to the nature-nurture discussion. It is actually classical systems which are unboundedly stochastic, as demonstrated by deterministic chaotic systems’ exponential divergence. Conversely, non-classical quantum systems can demonstrate but limited chaos from other sources (say, “hockey rink” geometric effects) as they diverge much more orderly in their linear fashion. So long-time environmental stochasticity is overruling inherent short-time stochasticity, and it has nothing to do with QM. In fact QM tries its best to avoid it, and it mostly slips in through the classical regime. I put that the ability of classical systems to act stochastic is, or should be, well known since the discovery of probability theory. And that, to continue on the string of analogs, the promotion of QM as the origin of stochasticity is overhyped. This hype is further a source of quantum woo, and should be discouraged for the same reason as the overhyping of epigenetics should be. To paraphrase Coyne, one can see “this kind of buzz-philosophizing as injurious to the public understanding of science”.

4. Just to make myself clear, I note that I myself used to argue inherent stochasticity of QM specifically, out of irreversible environmental interaction during decoherence as I understand it today. But never, I hope, argued that “the world is fundamentally stochastic, not deterministic”. Even including decoherence and uncertainty, even including stochastic distributions, all of our laws evolve deterministically. (Or we would lose causality.)
If anything, causality and the concordant determinism is fundamental AFAIU, since having energy is to evolve “in time”, or time and energy wouldn’t be classically complementary. For other systems that putatively jump all over time, see supernaturalism. 😀

Also, speaking of QM overhyping, the sibling to overselling its stochasticity is to oversell its discreteness. From a classic axiomatic instrumentalist theory of QM one can see that what is taking place is actually a seamless stitching together of discrete and continuous states by way of their boundedness. Bounded systems exhibit discreteness (say, electron energy levels in atoms), unbounded systems exhibit continuous properties (say, energy of free electrons).

More fundamentally, the work of Lucien Hardy, translating classical and quantum theory into classical respectively quantum probability theory, shows that it is the classical world that is, this time genuinely, discrete. Quantum physics is the continuous physics, necessitating a continuous transformation between pre- and post-observation states. Talk about putting old prejudices on their head!

Interestingly, and tying back to stochasticity, I believe Hardy’s work has been implicitly tested by modern measurements of, tentatively suggested as, decoherence. It turns out that decoherence may reject the “Copenhagen collapse” of states (and the whole Copenhagen theory with it). It seems one can take systems gradually in and out of decoherence. (Measurements on photon traps, IIRC. I’ll have to get back to you if asked for the exact reference.) This would, as I understand it, directly correspond to Hardy’s continuous transformation of pre- to post-observation states.

Now I ask, does this gradual, more precisely deterministic, irretrievable loss of information to the environment remind of stochasticity? To me it suggests that decoherence is but an entropic process.
To wax philosophically, since the 2nd law of thermodynamics that allows entropy to increase fundamentally goes back to the inflationary caused cosmological expansion, it smacks of determinism to me. Determinism isn’t dead. Neither is stochasticity. They are married. (Puns on marital states vs death aside.)

4. I think epigenetics is important! But you’re right, it’s been overhyped and isn’t going to revolutionize evolutionary theory at all. My usual answer when people ask me about epigenetics is that it’s a phenomenon that blurs the effect of the genotype over several generations…but fundamentally it’s still all about the inheritance of genetic traits. Did anyone freak out over maternal effect genes and decide they were going to overturn evolutionary biology? No. They’re also important but they aren’t going to change the way we think about evolution.

1. Oh, I absolutely agree about their importance, even in evolution (that’s why I said it twice). After all, differential imprinting by males and females may be an evolutionary strategy to alter gene expression in adaptive ways. But that’s a fillip of evolution, not a new paradigm.

1. I think you mean the now famous “Beard & Boot”. The other paraphernalia is “Tentacle & Paw”. Or should be paraphernalia, I’m not so sure in PZ’s case. … no, wait, there is “Puss in Boots”. Has anyone seen Jerry’s feet? His cordwainer, perhaps!?

5. I think it’s currently very important in medical research since it is involved in the ongoing evolution of malignant cells. In terms of species evolution, well, there is the problem that you tend to lose the epigenetic imprint during meiosis. As Jerry mentioned, it is very likely that you will completely lose any trace of a previous epigenetic imprint within a couple of generations. The current model of how this affects evolution is more along the lines of the imprint itself increasing particular mutations.
In other words, a specific epigenetic imprint usually means a ‘mark’ of some sort, either a methylation of a cytosine in a CpG dinucleotide or a histone modification, within a gene promoter region. This mark has the function, while it is present, of changing the rate of transcription of the gene in question. Cytosine methylation also has the effect of altering the mutation rate at that nucleotide – thus the epigenetic mark may lead to a nucleotide mutation at the same point, and therefore the ‘temporary’ epigenetic signal may be altered to a permanent ‘genetic’ signal. Obviously this is much less efficient than a straightforward mutation at the same point, but it has the advantage that the mutation is ‘targeted’ for a promoter that seems to be affected by epigenetic alteration. This model at least provides some degree of plausibility as to how an environmental condition might ‘quickly’ cause a mutation in genes that respond to it. I’m not sure how well all of this has been proven experimentally but I think I’ve outlined the current model.

1. I don’t know a real lot about evolution (I really only come to this blog because of Prof Coyne’s Puss ‘n’ Boots), but it did always strike me that organisms – especially big, complex organisms with complex needs – wouldn’t be ideally placed for ‘thriving’, or even surviving, if they couldn’t make some kind of short-term temporary response to a world that can change far more rapidly than either ‘natural selection’ or ‘gene drift’ or whatever could possibly respond to (even in a very short punctuation of an equilibrium). So firstly I thought, aha, epigenetics is that very much needed ‘rapid response’ mechanism. And, or so I thought, that the response was short-lived was a good thing (who knows how the world will change in the next 5 minutes in this totally random quantum world?). But, if the world doesn’t change back real quick, then an epigenetic change would be lost and the organisms would have to ‘reinvent’ it.
Except that you now say that an epigenetic change can end up being ‘fixed’. Most interesting.

6. How about epigenetic gene therapy? Transform my mass (and I have enough for two!) into the nymph(s) and I would change my name to “Doc Nymph.” Sort of like Thing 1 and Thing 2.

7. Do they know better? Perhaps they should, but (to mention names, but who did you mean?) Eva Jablonka and Massimo Pigliucci are not getting off their hobby horse.

8. “Epigenetics is the long sought link through which the environment influences the hereditary material”

Well, it’s the sort of thing Lamarck and later Lysenko would dream about. I’ve met a few Lamarckians (and biologists at that) and am shocked that such people still exist. Oh well, I hear there are still flat-earthers out there.

1. It would be nice to have a term for these new pseudoscientists of old dead theories. Say, if we call them “phlogindeadhorseys”, we would include them all, wouldn’t we?

1. Very good: much more poetic than archosis and much less sordidly pedestrian than ‘zombie’. Carried by acclaim!

9. Maderspacher says “Science journalism, where it still exists, is part of the news industry, and thus needs to be newsy; ironically, that the environment can influence the phenotype and the genes is terribly old news, no news at all, really. […] Therefore, a larger frame has to be invoked, far-fetched as it may be. Building around the story is a legitimate literary technique to some extent, but becomes dangerous when the frame interferes with the presentation and interpretation of empirical data. In effect, it’s not far from what Lysenko did, and makes the whole purpose of science journalism questionable.”

Well, all true – but it is not just the journalists – scientists & universities need to shoulder some blame. This links in with competition for research grants & getting funding plus staying ‘contemporary’.
People want all research to be ‘ground breaking’ rather than ‘ground covering’ as it should be as well, if you see what I mean.

1. Possibly, but it should be a safe environment outside of press releases and interviews IMHO. But it is your show. (Btw, as it is today, I believe I will settle for “grounded” or at least “ground pointing”. (¬_¬) )

10. Not surprised at all. I am always wondering why the SPIEGEL is hailed as Germany’s best and most sophisticated magazine. All their articles are based on argumentation by carefully selected anecdote, and whenever they report about a topic that you are familiar with yourself, you scratch your head and go: “What are they talking about? And did they not do any research whatsoever apart from interviewing one or two people promoting an isolated extreme view of the matter?”

11. As a computer scientist viewing the genome as a kind of program—not a von Neumann program, of course—I’m wondering what computational abilities are added by things like methylation. There’s already a capacity for passing state information from one generation of cells to the next, from the basic way genes work and the way that cells divide to make two—right? Genes interact by producing transcription products that switch other genes on or off. (Or change their rates of firing in an analog way.) Those transcription products diffuse around the surrounding plasm and dock to repressor and promoter sites of other genes to influence their rates of transcription. When the plasm is split along with the cell, some of those products go into each of the resulting cells, and hey, presto, that state information is transferred from the original cell to both copies. Right? That’s the usual thing that happens by default, right? And it’s under genetic control, too—you can have genes switch on or off and quiesce before the cell divides, to make sure that state information is transferred neatly.
(Ensuring that the relevant transcription-product concentrations are appropriately high or appropriately low for whatever state you want to transfer—e.g., that certain genes will stay on, and others will stay off, across the split.) In computer nerd terms, this is like the fork() operation in UNIX, which creates a new process by copying the entire state of the parent process, and setting the copy running. (But passing it one piece of information to tell it it’s a new copy, rather than the parent.) Other than that, all the state information for the parent process is transferred to the child process by default.

Given my mental model of these things, the ability to pass state information from one generation of cells to the next is the furthest thing from new—it’s the absolutely normal, default thing that happens when cells divide, just as it’s the default when forking processes in UNIX. Methylation and whatnot just give you another way of doing something you could do before. Am I mistaken? What is the significance of using those mechanisms rather than the default one, i.e., transfer of transcription products through the plasm surrounding the genes? (E.g., if I were designing a synthetic organism, would it be neater to do it all the default way, such that methylation is just a redundant kludge that evolution happened to come up with, or is there some good reason to transfer some information one way and other information the other?)

12. I’d like to point out that there IS a form of non-genomic inheritance that may actually be more prevalent than currently known (as no one really looked) – cellular (cytoplasmic and cortical) inheritance of organisation of components (eg some cytoskeletal systems).
Sonneborn and Beisson (1965, PNAS) have a classic experiment wherein a row of cilia in paramecium was inverted surgically, and inherited in the following generations (for thousands of them – the original strain may well still be alive, haven’t checked though), even though no genetic content was changed. Subsequent experiments showed that modifying the genetic content via multiple crosses, etc (in case somehow the genome was ‘altered’ by the surgery – which would also have been amazing) did not alter the inverted row; meanwhile, this row was only inherited cortically, so the modified cilia were not transmitted to the other sex partner. Thus, it is a solid and fascinating example of non-genomic inheritance in the most definite sense. Sadly, since most cell biologists work on big things and couldn’t care less about evolution, this phenomenon has been only barely investigated, and mostly in ciliates.

Speaking of ciliates, their nuclear dimorphism is absolutely the most amazing example of epigenetics, ever. I highly recommend reading up on the effect of the old [somatic] macronucleus on the development of the new macronucleus from the [germline] micronucleus. And the gene-silencing methylation is just boring. Mostly restricted to embryonic plants and animals, from what I know, possibly as a result of conflict between the male and the female parents, if such theories are to be believed. I’m annoyed they hijacked “all” of epigenetics that way… I do agree with the hype aspect. They hype the wrong things. Cellular inheritance, on the other hand, is completely obscure and desperately in need of attention. (You can also see the book by Grimes & Aufderheide 1991 “Cellular aspects of pattern formation: the problem of assembly”; as well as Cavalier-Smith 2002 “The membranome and membrane heredity in development and evolution” in “Organelles, genomes and eukaryote phylogeny” for further discussion)

13.
@MadScientist I’m shocked people make such a big deal over “Lamarckian” vs “Darwinian”. Both terms, and associated accusations, are bullshit. Both of the guys are long dead, and have little to offer to modern evolutionary biology. We have moved on. I hope it doesn’t shock you that I am apparently “Lamarckian” in some ways, for reasons described in my post above. But seriously, who cares? The data is what it is, no need to soak everything in philosophy and intradisciplinary politics!
Phase-space formulation

The phase-space formulation of quantum mechanics places the position and momentum variables on equal footing in phase space. In contrast, the Schrödinger picture uses the position or momentum representations (see also position and momentum space). The two key features of the phase-space formulation are that the quantum state is described by a quasiprobability distribution (instead of a wave function, state vector, or density matrix) and that operator multiplication is replaced by a star product.

The theory was fully developed by Hilbrand Groenewold in 1946 in his PhD thesis,[1] and independently by Joe Moyal,[2] each building on earlier ideas by Hermann Weyl[3] and Eugene Wigner.[4] The chief advantage of the phase-space formulation is that it makes quantum mechanics appear as similar to Hamiltonian mechanics as possible by avoiding the operator formalism, thereby "'freeing' the quantization of the 'burden' of the Hilbert space".[5] This formulation is statistical in nature and offers logical connections between quantum mechanics and classical statistical mechanics, enabling a natural comparison between the two (see classical limit). Quantum mechanics in phase space is often favored in certain quantum optics applications (see optical phase space), or in the study of decoherence and a range of specialized technical problems, though otherwise the formalism is less commonly employed in practical situations.[6] The conceptual ideas underlying the development of quantum mechanics in phase space have branched into mathematical offshoots such as algebraic deformation theory (see Kontsevich quantization formula) and noncommutative geometry.

Phase-space distribution

The phase-space distribution f(x, p) of a quantum state is a quasiprobability distribution.
In the phase-space formulation, the phase-space distribution may be treated as the fundamental, primitive description of the quantum system, without any reference to wave functions or density matrices.[7] There are several different ways to represent the distribution, all interrelated.[8][9] The most noteworthy is the Wigner representation, W(x, p), discovered first.[4] Other representations (in approximately descending order of prevalence in the literature) include the Glauber–Sudarshan P,[10][11] Husimi Q,[12] Kirkwood–Rihaczek, Mehta, Rivier, and Born–Jordan representations.[13][14] These alternatives are most useful when the Hamiltonian takes a particular form, such as normal order for the Glauber–Sudarshan P-representation. Since the Wigner representation is the most common, this article will usually stick to it, unless otherwise specified.

The phase-space distribution possesses properties akin to the probability density in a 2n-dimensional phase space. For example, it is real-valued, unlike the generally complex-valued wave function. The probability of lying within a position interval [a, b], for example, is obtained by integrating the Wigner function over all momenta and over the position interval:

P(a ≤ x ≤ b) = ∫_a^b dx ∫ W(x, p) dp.

If Â is an operator representing an observable, it may be mapped to phase space as A(x, p) through the Wigner transform. Conversely, this operator may be recovered by the Weyl transform. The expectation value of the observable with respect to the phase-space distribution is[2][15]

⟨Â⟩ = ∫∫ A(x, p) W(x, p) dx dp.

A point of caution, however: despite the similarity in appearance, W(x, p) is not a genuine joint probability distribution, because regions under it do not represent mutually exclusive states, as required in the third axiom of probability theory. Moreover, it can, in general, take negative values even for pure states, with the unique exception of (optionally squeezed) coherent states, in violation of the first axiom.
Regions of such negative value are provably "small": they cannot extend to compact regions larger than a few ħ, and hence disappear in the classical limit. They are shielded by the uncertainty principle, which does not allow precise localization within phase-space regions smaller than ħ, and thus renders such "negative probabilities" less paradoxical. If the left side of the equation is to be interpreted as an expectation value in the Hilbert space with respect to an operator, then in the context of quantum optics this equation is known as the optical equivalence theorem. (For details on the properties and interpretation of the Wigner function, see its main article.)

An alternative phase-space approach to quantum mechanics seeks to define a wave function (not just a quasiprobability density) on phase space, typically by means of the Segal–Bargmann transform. To be compatible with the uncertainty principle, the phase-space wave function cannot be an arbitrary function, or else it could be localized into an arbitrarily small region of phase space. Rather, the Segal–Bargmann transform of the state is a holomorphic function of a complex combination of x and p. There is a quasiprobability density associated to the phase-space wave function; it is the Husimi Q representation of the position wave function.

Star product

The fundamental noncommutative binary operator in the phase-space formulation that replaces the standard operator multiplication is the star product, represented by the symbol ⋆.[1] Each representation of the phase-space distribution has a different characteristic star product. For concreteness, we restrict this discussion to the star product relevant to the Wigner–Weyl representation.

For notational convenience, we introduce the notion of left and right derivatives. For a pair of functions f and g, the left and right derivatives are defined as

f ←∂x g = (∂f/∂x) g,   f →∂x g = f (∂g/∂x).

The differential definition of the star product is

f ⋆ g = f exp( (iħ/2)(←∂x →∂p − ←∂p →∂x) ) g,

where the argument of the exponential function can be interpreted as a power series.
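The Wigner representation and its marginals can be checked numerically. Below is a minimal sketch (assuming NumPy is available; the grid range, the 241-point resolution, and the ground-state Gaussian test state are illustrative choices, not from the article). It evaluates the standard Wigner transform W(x, p) = (1/πħ) ∫ ψ*(x+y) ψ(x−y) e^{2ipy/ħ} dy by direct quadrature and verifies that integrating over all momenta recovers the position density, as described above.

```python
import numpy as np

HBAR = 1.0

def psi(t):
    # Ground-state Gaussian wave packet (m = omega = 1), unit-normalized.
    return np.pi ** -0.25 * np.exp(-t ** 2 / 2)

def trap_weights(t):
    # Trapezoidal-rule quadrature weights for a uniform grid.
    w = np.full(len(t), t[1] - t[0])
    w[0] = w[-1] = (t[1] - t[0]) / 2
    return w

x = np.linspace(-6, 6, 241)
p = np.linspace(-6, 6, 241)
y = np.linspace(-6, 6, 241)              # integration variable

wy = trap_weights(y)
E = np.exp(2j * np.outer(y, p) / HBAR)   # e^{2ipy/hbar}, shape (len(y), len(p))

# W(x, p) = (1/(pi*hbar)) * Integral dy conj(psi)(x+y) psi(x-y) e^{2ipy/hbar}
W = np.empty((len(x), len(p)))
for i, xi in enumerate(x):
    integrand = np.conj(psi(xi + y)) * psi(xi - y)
    W[i] = ((wy * integrand) @ E).real / (np.pi * HBAR)

# Integrating over all momenta recovers the position density |psi(x)|^2 ...
marginal = W @ trap_weights(p)
assert np.allclose(marginal, np.abs(psi(x)) ** 2, atol=5e-3)

# ... total probability is 1, and for this (coherent-state) Gaussian the
# Wigner function is nowhere negative -- the exception noted in the text.
assert abs(trap_weights(x) @ marginal - 1.0) < 5e-3
assert W.min() > -1e-6
```

Replacing the ground state with a state that has a node (e.g., the first excited state) would produce the negative regions discussed above.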
Additional differential relations allow this to be written in terms of a change in the arguments of f and g (the "Bopp shifts"):

(f ⋆ g)(x, p) = f(x + (iħ/2)∂p, p − (iħ/2)∂x) g(x, p).

It is also possible to define the ⋆-product in a convolution integral form,[16] essentially through the Fourier transform. (Thus, for example, Gaussians compose hyperbolically.[7])

The energy eigenstate distributions are known as stargenstates (⋆-genstates) or stargenfunctions (⋆-genfunctions), and the associated energies are known as stargenvalues (⋆-genvalues). These are solved, analogously to the time-independent Schrödinger equation, by the ⋆-genvalue equation,[17][18]

H ⋆ W = E W,

where H is the Hamiltonian, a plain phase-space function, most often identical to the classical Hamiltonian.

Time evolution

The time evolution of the phase-space distribution is given by a quantum modification of Liouville flow.[2][9][19] This formula results from applying the Wigner transformation to the density matrix version of the quantum Liouville equation, the von Neumann equation. In any representation of the phase-space distribution with its associated star product, this is

iħ ∂f/∂t = H ⋆ f − f ⋆ H,

or, for the Wigner function in particular,

∂W/∂t = {{H, W}} = (H ⋆ W − W ⋆ H)/(iħ),

where {{ , }} is the Moyal bracket, the Wigner transform of the quantum commutator, while { , } is the classical Poisson bracket.[2] This yields a concise illustration of the correspondence principle: this equation manifestly reduces to the classical Liouville equation ∂W/∂t = {H, W} in the limit ħ → 0. In the quantum extension of the flow, however, the density of points in phase space is not conserved; the probability fluid appears "diffusive" and compressible.[2] The concept of quantum trajectory is therefore a delicate issue here.[20] See the movie for the Morse potential, below, to appreciate the nonlocality of quantum phase flow. N.B. Given the restrictions placed by the uncertainty principle on localization, Niels Bohr vigorously denied the physical existence of such trajectories on the microscopic scale.
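Because the exponential series in the star product terminates on polynomials, the ⋆-product can be verified exactly on small examples. A minimal sketch in plain Python follows (the dict-based polynomial encoding {(i, j): coefficient} for Σ c·x^i·p^j is an illustrative choice). It expands f ⋆ g = Σ_n ((iħ/2)^n/n!) Σ_k C(n, k) (−1)^k (∂x^{n−k} ∂p^k f)(∂p^{n−k} ∂x^k g) and checks that x ⋆ p − p ⋆ x = iħ, the Wigner transform of the canonical commutator.

```python
from math import comb, factorial

HBAR = 1.0

def deriv(poly, var, n=1):
    """n-th partial derivative of a polynomial {(i, j): c} ~ sum c x^i p^j.
    var = 0 differentiates in x, var = 1 in p."""
    for _ in range(n):
        new = {}
        for (i, j), c in poly.items():
            e = (i, j)[var]
            if e:
                key = (i - 1, j) if var == 0 else (i, j - 1)
                new[key] = new.get(key, 0) + c * e
        poly = new
    return poly

def mul(f, g):
    """Ordinary (pointwise) polynomial product."""
    out = {}
    for (i, j), a in f.items():
        for (k, l), b in g.items():
            out[(i + k, j + l)] = out.get((i + k, j + l), 0) + a * b
    return out

def add(f, g):
    out = dict(f)
    for key, c in g.items():
        out[key] = out.get(key, 0) + c
    return {k: c for k, c in out.items() if c != 0}

def star(f, g):
    """Moyal star product; the power series terminates on polynomials."""
    max_n = max((i + j for fg in (f, g) for (i, j) in fg), default=0)
    out = {}
    for n in range(max_n + 1):
        pref = (1j * HBAR / 2) ** n / factorial(n)
        for k in range(n + 1):
            term = mul(deriv(deriv(f, 0, n - k), 1, k),
                       deriv(deriv(g, 1, n - k), 0, k))
            coef = pref * comb(n, k) * (-1) ** k
            out = add(out, {key: coef * c for key, c in term.items()})
    return out

X = {(1, 0): 1}   # the phase-space function x
P = {(0, 1): 1}   # the phase-space function p

# x * p = xp + i*hbar/2, and the star commutator reproduces [x, p] = i*hbar:
assert star(X, P) == {(1, 1): 1, (0, 0): 0.5j * HBAR}
assert add(star(X, P), {k: -c for k, c in star(P, X).items()}) == {(0, 0): 1j * HBAR}
```

As ħ → 0 only the n = 0 term survives, and ⋆ reduces to ordinary pointwise multiplication, mirroring the classical limit discussed in the text.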
By means of formal phase-space trajectories, the time evolution problem of the Wigner function can be rigorously solved using the path-integral method[21] and the method of quantum characteristics,[22] although there are severe practical obstacles in both cases.

Simple harmonic oscillator

The Hamiltonian for the simple harmonic oscillator in one spatial dimension in the Wigner–Weyl representation is

H = p²/(2m) + mω²x²/2.

The ⋆-genvalue equation for the static Wigner function then reads H ⋆ W = E W.

[Animation: Time evolution of combined ground and first excited state Wigner function for the simple harmonic oscillator. Note the rigid motion in phase space corresponding to the conventional oscillations in coordinate space.]

[Animation: Wigner function for the harmonic oscillator ground state, displaced from the origin of phase space, i.e., a coherent state. Note the rigid rotation, identical to classical motion: this is a special feature of the SHO, illustrating the correspondence principle. From the general pedagogy web-site.[23]]

Consider, first, the imaginary part of the ⋆-genvalue equation. It implies that one may write the ⋆-genstates as functions of a single argument, conveniently u = 4H/ħω. With this change of variables, it is possible to write the real part of the ⋆-genvalue equation in the form of a modified Laguerre equation (not Hermite's equation!), the solution of which involves the Laguerre polynomials as[18]

W_n(u) = ((−1)^n/πħ) e^{−u/2} L_n(u),

introduced by Groenewold in his paper,[1] with associated ⋆-genvalues

E_n = ħω(n + 1/2).

For the harmonic oscillator, the time evolution of an arbitrary Wigner distribution is simple. An initial W(x, p; t = 0) = F(u) evolves by the above evolution equation driven by the oscillator Hamiltonian by simply rotating rigidly in phase space,[1]

W(x, p; t) = W(x cos ωt − (p/mω) sin ωt, p cos ωt + mωx sin ωt; 0).

Typically, a "bump" (or coherent state) of energy E ≫ ħω can represent a macroscopic quantity and appear like a classical object rotating uniformly in phase space, a plain mechanical oscillator (see the animated figures).
Integrating over all phases (starting positions at t = 0) of such objects, a continuous "palisade", yields a time-independent configuration similar to the above static ⋆-genstates F(u), an intuitive visualization of the classical limit for large-action systems.[6]

Free particle angular momentum

Suppose a particle is initially in a minimally uncertain Gaussian state, with the expectation values of position and momentum both centered at the origin in phase space. The Wigner function for such a state propagating freely involves a parameter α describing the initial width of the Gaussian, and a time scale τ = m/(α²ħ). Initially, the position and momentum are uncorrelated. Thus, in 3 dimensions, we expect the position and momentum vectors to be twice as likely to be perpendicular to each other as parallel. However, the position and momentum become increasingly correlated as the state evolves, because portions of the distribution farther from the origin in position require a larger momentum to be reached asymptotically. (This relative "squeezing" reflects the spreading of the free wave packet in coordinate space.) Indeed, it is possible to show that the kinetic energy of the particle becomes asymptotically radial only, in agreement with the standard quantum-mechanical notion of the ground-state nonzero angular momentum specifying orientation independence.[24]

Morse potential

The Morse potential is used to approximate the vibrational structure of a diatomic molecule.

Quantum tunneling

Tunneling is a hallmark quantum effect in which a quantum particle, not having sufficient energy to pass over a barrier, still goes through it. This effect does not exist in classical mechanics.

Quartic potential

Schrödinger cat state

1. H. J. Groenewold, "On the Principles of elementary quantum mechanics", Physica, 12 (1946) pp. 405–460. doi:10.1016/S0031-8914(46)80059-4. 2. J. E. Moyal, "Quantum mechanics as a statistical theory", Proceedings of the Cambridge Philosophical Society, 45 (1949) pp. 99–124.
doi:10.1017/S0305004100000487. 3. H. Weyl, "Quantenmechanik und Gruppentheorie", Zeitschrift für Physik, 46 (1927) pp. 1–46, doi:10.1007/BF02055756. 4. E. P. Wigner, "On the quantum correction for thermodynamic equilibrium", Phys. Rev. 40 (June 1932) 749–759. doi:10.1103/PhysRev.40.749. 5. S. T. Ali, M. Engliš, "Quantization Methods: A Guide for Physicists and Analysts". Rev. Math. Phys., 17 (2005) pp. 391–490. doi:10.1142/S0129055X05002376. 6. Curtright, T. L.; Zachos, C. K. (2012). "Quantum Mechanics in Phase Space". Asia Pacific Physics Newsletter. 01: 37–46. arXiv:1104.5269. doi:10.1142/S2251158X12000069. 7. C. Zachos, D. Fairlie, and T. Curtright, "Quantum Mechanics in Phase Space" (World Scientific, Singapore, 2005) ISBN 978-981-238-384-6. 8. Cohen, L. (1966). "Generalized Phase-Space Distribution Functions". Journal of Mathematical Physics. 7 (5): 781–786. Bibcode:1966JMP.....7..781C. doi:10.1063/1.1931206. 9. G. S. Agarwal and E. Wolf "Calculus for Functions of Noncommuting Operators and General Phase-Space Methods in Quantum Mechanics. II. Quantum Mechanics in Phase Space", Phys. Rev. D,2 (1970) pp. 2187–2205. doi:10.1103/PhysRevD.2.2187. 10. E. C. G. Sudarshan "Equivalence of Semiclassical and Quantum Mechanical Descriptions of Statistical Light Beams", Phys. Rev. Lett.,10 (1963) pp. 277–279. doi:10.1103/PhysRevLett.10.277. 11. R. J. Glauber "Coherent and Incoherent States of the Radiation Field", Phys. Rev.,131 (1963) pp. 2766–2788. doi:10.1103/PhysRev.131.2766. 12. Kôdi Husimi (1940). "Some Formal Properties of the Density Matrix", Proc. Phys. Math. Soc. Jpn. 22: 264–314. 13. G. S. Agarwal and E. Wolf "Calculus for Functions of Noncommuting Operators and General Phase-Space Methods in Quantum Mechanics. I. Mapping Theorems and Ordering of Functions of Noncommuting Operators", Phys. Rev. D,2 (1970) pp. 2161–2186. doi:10.1103/PhysRevD.2.2161. 14. K. E. Cahill and R. J. Glauber "Ordered Expansions in Boson Amplitude Operators", Phys. Rev.,177 (1969) pp. 
1857–1881. doi:10.1103/PhysRev.177.1857; K. E. Cahill and R. J. Glauber "Density Operators and Quasiprobability Distributions", Phys. Rev.,177 (1969) pp. 1882–1902. doi:10.1103/PhysRev.177.1882. 15. M. Lax "Quantum Noise. XI. Multitime Correspondence between Quantum and Classical Stochastic Processes", Phys. Rev.,172 (1968) pp. 350–361. doi:10.1103/PhysRev.172.350. 16. G. Baker, "Formulation of Quantum Mechanics Based on the Quasi-probability Distribution Induced on Phase Space," Physical Review, 109 (1958) pp. 2198–2206. doi:10.1103/PhysRev.109.2198 17. Fairlie, D. B. (1964). "The formulation of quantum mechanics in terms of phase space functions". Mathematical Proceedings of the Cambridge Philosophical Society. 60 (3): 581–586. Bibcode:1964PCPS...60..581F. doi:10.1017/S0305004100038068. 18. Curtright, T.; Fairlie, D.; Zachos, C. (1998). "Features of time-independent Wigner functions". Physical Review D. 58 (2): 025002. arXiv:hep-th/9711183. Bibcode:1998PhRvD..58b5002C. doi:10.1103/PhysRevD.58.025002. 19. C. L. Mehta "Phase‐Space Formulation of the Dynamics of Canonical Variables", J. Math. Phys.,5 (1964) pp. 677–686. doi:10.1063/1.1704163 20. M. Oliva, D. Kakofengitis, and O. Steuernagel (2018). "Anharmonic quantum mechanical systems do not feature phase space trajectories". Physica A. 502: 201–210. arXiv:1611.03303. Bibcode:2018PhyA..502..201O. doi:10.1016/j.physa.2017.10.047. 21. Marinov, M.S. (1991). "A new type of phase-space path integral". Physics Letters A. 153 (1): 5–11. Bibcode:1991PhLA..153....5M. doi:10.1016/0375-9601(91)90352-9. 22. Krivoruchenko, M. I.; Faessler, Amand (2007). "Weyl's symbols of Heisenberg operators of canonical coordinates and momenta as quantum characteristics". Journal of Mathematical Physics. 48 (5): 052107. arXiv:quant-ph/0604075. Bibcode:2007JMP....48e2107K. doi:10.1063/1.2735816. 23. Curtright, T. L. Time-dependent Wigner Functions 24. J. P. Dahl and W. P.
Schleich, "Concepts of radial and angular kinetic energies", Phys. Rev. A,65 (2002). doi:10.1103/PhysRevA.65.022109
The Language of the Atom

[Sidebar images: Ford Delivery Department (1925); Le Corbusier, Villa Savoye, France (1928–29); Desk Set (1928); Salvador Dali, The Persistence of Memory (1931); 1931 Cord; Bird in Space]

The Roaring Twenties were a boisterous era of prosperity, fast cars, jazz, popular radio, and illegal drinking. Before they ended with the crash of the stock market in 1929, which triggered the Great Depression, the twenties produced such human and technological accomplishments as the invention of television and the jet engine, and the first transatlantic solo flight by Charles Lindbergh in 1927. Out of range of public clamor, this exhilarating atmosphere also produced what might be called the greatest achievement in the history of physics: the development of quantum mechanics.

[Photo: Werner Heisenberg]

Frustrated by the inconsistencies of the patchwork quantum theory pioneered by Einstein and Bohr, the 23-year-old German physicist Werner Heisenberg started from scratch. In the summer of 1925 he decided that atoms should be described without assuming anything about unmeasurable quantities such as the positions and speeds of electrons inside atoms. Instead, he arranged the measurable quantities, such as the discrete frequencies of light emitted by the atom, in arrays of numbers not unlike spreadsheets. By manipulating these spreadsheets, which mathematicians call matrices, Heisenberg was able to recover the successes of the older quantum theory, without encountering its contradictions. Heisenberg's matrices give the right answers, but convey no visual image of the interior of the atom. In the winter of 1925-26 the Austrian physicist Erwin Schrödinger succeeded in finding a more intuitively appealing description. In this approach, de Broglie's waves are solutions of an equation which came to be called the Schrödinger equation.
Discrete colors of light emitted by glowing matter reflect the fact that electron waves confined in an atom have specific frequencies, just as sound waves inside a flute can only have discrete frequencies.

The birth of the new theory culminated in Schrödinger's amazing proof, in March 1926, that his and Heisenberg's formulations, which appeared so different, are actually mathematically equivalent. Henceforth Quantum Mechanics, in either guise, replaced Newtonian mechanics as the correct description of atomic particles. It incorporates wave/particle duality and substitutes probability for certainty in dealing with the building blocks of matter. It broke with classical physics even more radically than Special and General Relativity -- and for three quarters of a century it has passed all experimental tests.
Activation energy

From Encyclopedia of Mathematics

A concept originating in the theory of chemical reactions. It plays an important role in combustion theory. The evolution of a chemical reaction is determined by its specific-reaction rate constant, usually denoted by $ k $. This quantity depends mainly on the temperature, $ T $. For a one-step chemical reaction, its functional dependence is given empirically by the Arrhenius expression: $ k = A \exp ( - E / ( RT ) ) $. Here, $ R $ is the universal gas constant, $ E $ the activation energy and $ A $ the frequency factor for the reaction step; $ E $ is independent of $ T $, while $ A $ may depend weakly (for example, polynomially) on $ T $.

At the molecular level, a chemical reaction is a collision process between reactant molecules, from which reaction products emerge. The molecules move on a potential-energy surface, whose shape is determined by a solution of the Schrödinger equation. A configuration of the reactant molecules corresponds to a local minimum in one region, and a configuration of the reaction products to a local minimum in another region, where the two minima are generally separated by a barrier in the potential-energy surface. At a saddle point on the barrier, the height of the surface above the energy of the reactant region assumes a minimum value. A collision of the reactant molecules can produce products only if the energy of the reactants (for example, their kinetic energy) exceeds this minimum height. The minimum barrier height defines the activation energy, $ E $. It is the energy the reactants must acquire before they can react. In practice, the activation energy is determined experimentally, by measuring $ k $ at various values of $ T $ and making a best straight-line fit through the data $ \ln k $ versus $ 1 / T $.

Activation-energy asymptotics.
Activation-energy asymptotics play an important role in combustion theory, in the study of diffusion flames. A diffusion flame is a non-pre-mixed, quasi-steady, nearly isobaric flame, where most of the reaction occurs in a narrow zone. The structure of a diffusion flame, especially in a multi-step reaction, is very complicated, but can be analyzed in model problems by means of activation-energy asymptotics. The small parameter in activation-energy asymptotics is the reciprocal of the Zel'dovich number. The Zel'dovich number, $ \beta $, is a non-dimensional measure of the temperature sensitivity of the reaction rate:

$$ \beta = \alpha \frac{E _ {1} }{R T _ \infty } , \qquad \alpha = \frac{T _ \infty - T _ {0} }{T _ \infty } , $$

where $ T _ {0} $ and $ T _ \infty $ denote the temperature far upstream and far downstream of the moving flame, respectively, and $ E _ {1} $ is a reference energy. The graph of the reaction rate has a single peak, which narrows and increases in magnitude and whose location approaches the fully burnt condition ( $ T _ \infty $) as $ \beta \rightarrow \infty $. The governing equations for the temperature and the concentration of the reaction-limiting component define a singular perturbation problem (cf. also Perturbation theory) as $ \beta \rightarrow \infty $. The regular (or "outer" ) expansion yields the temperature and concentration profiles in the convective-diffusive zone outside the flame, and the "singular" (or "inner" ) expansion the profiles in the reactive-diffusive zone inside the flame. The complete profiles are then found by the method of matched asymptotic expansions (cf. Perturbation theory). Activation-energy asymptotics were first introduced in combustion theory by Yu.B. Zel'dovich and D.A. Frank-Kamenetskii [a1]. The method was formalized in [a2]. Many results are summarized in [a3]. [a1] Y.B. Zel'dovich, D.A. Frank-Kamenetskii, "The theory of thermal flame propagation" Zhur. Fiz. Khim. , 12 (1938) pp.
100 (In Russian) [a2] W.B. Bush, F.E. Fendell, "Asymptotic analysis of laminar flame propagation for general Lewis numbers" Comb. Sci. and Technol. , 1 (1970) pp. 421 [a3] Ya.B. Zel'dovich, G.I. Barenblatt, V.B. Librovich, G.M. Makhviladze, "The mathematical theory of combustion and explosions" , Consultants Bureau (1985) (In Russian) How to Cite This Entry: Activation energy. Encyclopedia of Mathematics. URL: This article was adapted from an original article by H.G. Kaper (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
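The experimental procedure described in the article, measuring $ k $ at several temperatures and making a best straight-line fit of $ \ln k $ versus $ 1/T $, can be sketched in a few lines of Python. Since $ \ln k = \ln A - (E/R)(1/T) $, the slope of the fit is $ -E/R $. (The numerical values of $ A $, $ E $, and the temperatures below are illustrative, not from the article.)

```python
import math

R = 8.314          # universal gas constant, J/(mol K)
A = 1.2e13         # frequency factor (illustrative value)
E_true = 8.0e4     # activation energy, J/mol (illustrative value)

# "Measured" rate constants at several temperatures, k = A exp(-E/(R T)).
temps = [300.0, 350.0, 400.0, 450.0, 500.0]
ks = [A * math.exp(-E_true / (R * T)) for T in temps]

# Least-squares straight-line fit of ln k versus 1/T.
xs = [1.0 / T for T in temps]
ys = [math.log(k) for k in ks]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
slope = (sum((xv - xbar) * (yv - ybar) for xv, yv in zip(xs, ys))
         / sum((xv - xbar) ** 2 for xv in xs))
intercept = ybar - slope * xbar

E_fit = -slope * R          # slope is -E/R
A_fit = math.exp(intercept) # intercept is ln A

# With noise-free data the fit recovers the inputs essentially exactly.
assert abs(E_fit - E_true) / E_true < 1e-9
assert abs(A_fit - A) / A < 1e-6
```

With real measurements the data scatter about the line, and the uncertainty of the slope translates directly into the uncertainty of $ E $.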
Sunday, February 10, 2013 First Post: A Response to Awet on Philosophy and Character I hastily tried to post this entry as a casual comment and realized the formatting had been sterilized & changed past acceptable limits. So, I decided to make my own blog and start a dialogue/monologue about the problem of evil, atheism, and better consciousness, with a focus on the philosophy of Arthur Schopenhauer. Here is the original post from an excellent and thoughtful writer, Awet, who shares many of my philosophical interests and motivated me to take my contemplation online: Heterodoxia - Philosophy Can [Not] Change You The italicized remarks are Awet's and some are from 1 or 2 other blog posts from his site. The basic thrust of my post is to (1) push Awet on whether he represented Schopenhauer's theory of agency correctly, as well as Schopenhauer's character, since he sprinkled in a little ad hominem; (2) argue for the impotence of philosophy/ethics to change character as esse (and invite a dialogue on whether concepts like esse even hold water, since Awet believes metaphysics is totally illusory); (3) loosely assert some pessimistic conclusions about human nature and speculate about quantum agency, again, for the sake of future edifying dialogue.  If free choice or an act of will is taken as an event, and all events take place in time, then the idea of an intelligible choice, or the act of choice that takes place in the timeless domain of the Kantian thing in itself is completely incoherent. Whatsoever is incoherent cannot sustain as a solution.  Awet does not accept the intelligible/empirical solution to the free will / determinism problem and his understanding of Kant and Schopenhauer is = or > mine. However, Awet also seems to say that Schopenhauer believes philosophy cannot change our character because it cannot motivate us. 
This seems to misrepresent Schopenhauer's thought and Awet's own understanding of Schopenhauer: "Motives, however, can influence character through knowledge, and that is how a person’s manner can change while his character remains the same. Motives can influence the will, alter its direction, but not change the will. Therefore, pace Seneca, willing cannot be taught, and always remains inscrutable. Motives themselves are concepts, abstract representations of reason, and through the conflict of several motives, the strongest emerges and determines the will with necessity." Character + Motive = Action, is my understanding of Schopenhauer's teaching, i.e. motives do not change character, but the influence they bring changes our behavior; Awet seems to say that no influence is possible since our character does not change, in the linked blog post. I do believe that philosophy cannot teach morality/virtue in the sense that concepts do not change our being/essence (esse). So, in modern speech, people are guided by motives that they value more than others; the operari is affected by motives, which reveals the transcendentally free esse appearing fixed in time/phenomenal experience. The phenomenal conscious motives "direct" our transcendental choice through the ephemeral world of representation. I think Awet should treat this linkage in greater depth. Awet says: Schopenhauer argues that there is no bridge between the heart and the mind because all theoretical knowledge acquired from books or instruction cannot motivate — their concepts are dead. It seems to me that Schopenhauer is trying to hoist his own petard here, by prescribing how philosophy should be done. No? But Schopenhauer is arguing that prescriptive philosophy/ethics has no effect on the character of particular persons, which I take to be an incredibly insightful and true description of human nature. 
Schopenhauer is admitting that no person will become better or worse, morally, for reading his work, which is quite the opposite of hoisting his own petard. When Schopenhauer writes about why he writes (why the genius writes according to Schopenhauer), he does not give a very clear reason at all (an instinct of a unique sort): "The motive which moves genius to productivity is, on the other hand, less easy to determine (compared to the motive which moves talent, i.e. money and fame). It isn't money, for genius seldom gets any. It isn't fame: fame is too uncertain and, more closely considered, of too little worth. Nor is it strictly for its own pleasure, for the great exertion involved almost outweighs the pleasure. It is rather an instinct of a unique sort by virtue of which the individual possessed of genius is impelled to express what he has seen and felt in enduring works without being conscious of any further motivation. It takes place, by and large, with the same sort of necessity that a tree brings forth fruit, and demands of the world no more than a soil on which the individual can flourish. More closely considered, it is as if in such an individual the will to live, as the spirit of the human species, had become conscious of having, by a rare accident, attained for a brief span of time to a greater clarity of intellect, and now endeavors to acquire the products of this clear thought and vision for the whole species, which indeed is the intrinsic being of the individual, so that their light may continue to illumine the darkness and stupor of the ordinary human consciousness. It is from this that there arises that instinct which impels genius to labor in solitude to complete its work without regard for reward, applause or sympathy, but neglectful rather even of its own well-being. 
To make its work, as a sacred trust and the true fruit of its existence, the property of mankind, laying it down for a posterity better able to appreciate it: this becomes for genius a goal more important than any other, a goal for which it wears the crown of thorns that shall one day blossom into a laurel wreath. Its striving to complete and safeguard its work is just as resolute as that of the insect to safeguard its eggs and provide for the brood it will never live to see: it deposits its eggs where it knows they will find life and nourishment, and dies contented". --Vol. 2, "On Philosophy and the Intellect", as translated by R. J. Hollingdale in Essays and Aphorisms (1970). If Schopenhauer were hoisting himself with his own petard, wouldn't he try to demonstrate a rational + prescriptive ethics of compassion, i.e. that we ought to (and can) be good/virtuous by doing x, y, z? Schopenhauer believed the ultimate significance of life was moral, and he knowingly failed his own standard by a vast margin. Thinkers as great as Kierkegaard gleefully claimed "Schopenhauer is not who he thinks he is" (paraphrase) in an attempt to prove that Schopenhauer's metaphysics was wrong because Schopenhauer didn't become an ascetic/saint. Nietzsche did the same from the other direction, e.g. Schopenhauer negates God, and the world, but preserves morality; is this a pessimist? (paraphrase again, sorry). Critics love to talk about Schopenhauer's arrogance/pride, but I believe he humbly recognized his own moral imperfection, his distance from his own conception of salvation, his unbecoming attachment to life & lack of philosophical equanimity. His expression that the great sculptor need not be beautiful, nor the philosopher a saint, seems to capture his sense of self-condemnation. Almost every philosopher attempts to portray himself as living up to the requirements of his own ethical knowledge; Schopenhauer wasn't even in the ballpark of his own standards.
With regard to humanity, what should we make of the fact that almost nobody philosophizes; that the very idea of a person being in dead earnest about philosophy occurs to no one? (Schopenhauer paraphrase again, sorry) Rational arguments do not change what we believe about the meaning of life (they are all tautologies anyway, right?); everyone searches their feelings in light of arguments/experience to determine their ultimate disposition. Schopenhauer thinks that direct/intuitive understanding is the source of the innate moral disposition upon which we graft rational dogma. This explains why asceticism is practiced very similarly (in terms of self-denial) in many (all?) world religions despite disparate dogmatic commitments. I think Schopenhauer is right that no philosophy/religion changes our character and, much more radically, that we are fundamentally evil (or life is meaningless). The insoluble problem of evil seems to be the only real problem of existence; for Schopenhauer, the original astonishment that anything exists, followed by horror at the ubiquity of suffering and death, grounded his fixation on the riddle of existence. But almost nobody gives a damn about the suffering and death inherent to life and its implications. Instead, the majority justifies egoism, and two minorities pursue (1) the well-being of all and (2) the woe of all, with an infinite spectrum in between. Yet Awet praises Schopenhauer for recognizing the primacy of the incoherent: "Contra the dogma of philosophers, Arthur Schopenhauer realized that reason is not the basic essence of man." Awet calls this Schopenhauer's greatest insight in a separate post. Moreover, the incoherent is simply a fact of all attempts at complete explanation.
Schopenhauer does not believe he has proven that the intelligible character / empirical character theory is true; he simply speculates that metaphysically we are free transcendentally (or life has no moral significance) because empirically we appear determined by the principle of sufficient reason / causality. None of Schopenhauer's transcendental claims are put forth as certain truths; Schopenhauer is explicit that the "Will" is only a best guess and not exhaustive of ultimate reality; in fact, if salvation exists apart from Will, then Will cannot constitute the whole of ultimate reality. All transcendental language is necessarily incoherent; however, I do not agree that because a proposition is incoherent it cannot sustain a metaphysical solution. One suggestion Awet offered for the riddle of existence is to "establish your lucidity in the middle of what negates it", which hardly seems coherent, but I found it deeply meaningful. (Awet's remark was from a personal email, not his blog.) Awet also claims that although causation was taken as universal and absolutely necessary during the heyday of Newtonian science, in our post-modern times, quantum indeterminacy provides an escape hatch [from the strict Kantian determinism used in transcendental idealism's solution to the free-will problem]. I do not believe that quantum indeterminacy necessarily provides an "escape hatch" allowing empirical freedom from causation, because physicists are focused on how to describe results rather than explain why one result happened over another. In other words, the physical ontology isn't the focus of quantum physics because we are still struggling to develop the resources to accurately say what is happening. Perhaps quantum physics affirms an epistemic/material determinism trapped within ontological freedom/uncertainty.
Tim Maudlin's "Distilling Metaphysics from Quantum Physics", from the Oxford Handbook of Metaphysics, has a section on determinism that may be illuminating: "Historically, the most widely remarked metaphysical innovation of quantum theory over classical physics is the rejection of determinism in favor of chance. Events such as the decay of a radioactive atom are typically held to be fundamentally random: there is no reason at all that the decay takes place at one time rather than another. Atoms that are physically identical in every respect may nonetheless behave differently. Einstein was resistant to the idea that God plays dice, and his insistence on determinism is taken to be a mark of a reactionary inability to accept the quantum theory. Things are not quite so simple. Does either the pragmatic formalism or the empirical result of any experiment require us to abandon determinism? No. The pragmatic formalism requires an interpretation, and some interpretations posit deterministic laws while others employ fundamentally stochastic dynamics. Further, little can be said in the way of generalization." Maudlin goes on to note that the Schrödinger equation itself is deterministic, so any interpretation not employing wave collapse at a fundamental level must find its indeterminism apart from this equation, if at all. The main problem to be solved in quantum theory is not "an explanation of why one result happened rather than another (restoring determinism), but rather to have the theoretical resources to describe the experiment as having had one result rather than another. That problem is answered in the first place simply by having more than the wavefunction in the physical ontology, irrespective of the dynamics." Maudlin concludes the section by explaining that "the question of determinism is only tangential to the motives of the enterprise ... So we can't say that quantum theory forces indeterminism on us.
Furthermore, the whole issue looks more like a case of spoils to the victor than a fundamental point of contention: if some consideration militates in favour of a specific interpretation, the question of determinism will simply follow suit, and it seems very unlikely that determinism itself will be a decisive consideration." With the frontier of physics so foggy, we simply aren't in a position to talk about physical ontology yet. I would love to explore the possibility of reworking Schopenhauer's epistemology in light of quantum physics, and I am not yet convinced that his theory is obsolete/irrelevant. It is curious that Schrödinger and Einstein were both avid readers and enthusiasts of Schopenhauer; perhaps his fundamental insights can survive emerging discoveries.
Tuesday, December 18, 2018 Killing The Schrodinger's Cat, at last and for good: part II This post is one of a series of posts listed in the Appendix below. After writing my first reflection on the first two chapters of the book (Part I), I continued reading the book and keeping my notes while reading. In general, I enjoyed the reading, especially the parts about the personal history of various people. In that part my expectation turned out to be correct: Adam Becker offers a good account of the history. Once in a while the reading initiated an argument, and those arguments I present below. Page 39 “When an electron is shot out into the tube, its wave function obeys the Schrödinger equation, undulating and propagating outward like a wave”. This statement makes us think that a wave function describes an actual physical field, like an electric field. This is simply wrong, because a wave function describes a number distribution ("an amplitude") in space and time (related to the probability distribution). “So sometimes the electron behaves like a wave, and sometimes it behaves like a particle”. This statement is wrong, because an electron never behaves like a wave and always behaves like a particle. However, that particle demonstrates different macroscopic behavior under the same macroscopic conditions – which is different from the behavior of macroscopic particles, which always demonstrate the same macroscopic behavior under the same macroscopic conditions. Specifically, an electron hits a screen at different locations. Many electrons under the same macroscopic conditions demonstrate collective behavior visually similar to the behavior demonstrated by macroscopic waves. For example, when many electrons hit a screen at different locations, the resulting picture may look similar to the picture formed by waves traveling on the surface of water through two narrow slits.
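The point that many single-particle arrivals add up to a wave-like picture can be illustrated numerically. The following Python sketch (mine, not from the book; the dimensionless fringe pattern and envelope width are idealized assumptions) draws each electron's hit position independently from a two-slit intensity distribution: every electron lands at exactly one point, yet the fringes emerge statistically.

```python
import math
import random

random.seed(0)

# Idealized, dimensionless two-slit intensity on the screen:
# cos^2 fringes (spacing 1) under an assumed single-slit envelope.
def intensity(x):
    s = math.pi * x / 4
    envelope = 1.0 if s == 0 else (math.sin(s) / s) ** 2
    return math.cos(math.pi * x) ** 2 * envelope

xs = [i / 100 for i in range(-400, 401)]      # screen positions (fringe units)
weights = [intensity(x) for x in xs]

# 10,000 independent single-electron arrivals; each electron is a particle
# that hits ONE spot, sampled from the distribution above.
hits = random.choices(xs, weights=weights, k=10_000)

# Arrivals pile up at bright fringes (x near 0) and avoid dark ones (x near 0.5).
bright = sum(1 for h in hits if abs(h) < 0.1)
dark = sum(1 for h in hits if abs(abs(h) - 0.5) < 0.1)
```

Histogramming `hits` reproduces the familiar fringe picture even though no individual electron ever behaves like a wave, which is exactly the author's point.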
The difference between the waves in water and electrons is that every electron actually travels in space from a source to a screen, while water waves happen due to water molecules slowly moving about their equilibrium positions and pushing on each other. The whole idea of “wave-particle duality” was developed as an attempt to make sense of theoretical concepts which could not fit into a well-developed classical picture. But since then physics has grown, and today, almost a hundred years later, we do not need to hold on to this mental bridge anymore. At the dawn of quantum mechanics the fact that a particle cannot demonstrate its location and velocity at the same time was a shock. Today, we just accept it as a fact: a quantum object cannot demonstrate (note: I am not saying “have” – that is a different conversation about possible interpretations of quantum mechanics) its location and velocity at the same time. The real “mystery” is why macroscopic objects, which are made of quantum objects, do demonstrate their location and velocity at the same time; how does that ability of the whole come from an inability of its parts? About a hundred years ago, when physicists would say something like “an incomplete description”, “incompatible variables”, “complementary”, they simply meant “different from classical”. Page 59 “For any entangled system, Einstein’s choice applied: either the system is nonlocal, or quantum physics can’t fully describe all the features of that system”. There is a third choice: the parts of a system interact via a physical interaction of some sort which has a speed high enough to explain the behavior of the system – assuming the experiment is feasible at least in principle. The “high enough speed” condition may include interaction via agents which travel above the speed of light. Page 100 “How do … the photons … know you’re watching them at all?” (in a double-slit experiment).
Answer – because “watching” means having photons interact with a device which has one state in the absence of a photon and changes its state in the presence of a photon, and that interaction changes the photon as well. Placing a detector by each slit creates the necessity of including those detectors in the mathematical description of the experiment. The much more intriguing question is how does a photon “know” – after traveling through one of the slits (and we don’t know which) – where to hit a screen, or more importantly, where NOT to hit it? It seems like a photon “knows” that well before reaching a screen. A photon “knows” must mean that there is an interaction between a photon and the environment which affects its motion toward a screen. But the Schrödinger equation gives NO information about such an interaction. Here is where Bohm’s theory steps in. Page 124 “Everett … insisted that a single universal wave function was all there was”. The idea of the existence of a single universal wave function for the whole existing universe is no different from the idea of a single universal Lagrangian for the whole world. It should be natural to every physicist who believes that our understanding of the universe should reflect the existence of the universe. However, the idea of the “many-worlds” is not logically connected with the idea of a single universal wave function; these two ideas do not demand each other. The term “many-worlds” implies the existence of many different worlds – at the same time at the same place (however one may see it). However, the passage (page 126) “universal wave function splits into more and more noninteracting parts” shows that those many worlds just represent different parts of the whole world, parts which exist at the same time at different locations. This picture is no different from any classical view of the world.
The notion that every single event in the universe creates new universes which correspond to all possible outcomes of the event, and that an observer in each universe observes his own outcome, may be seen as an innovation, but it has nothing to do with science, because it does not help make predictions, does not lead to new insights, and instead of making things easier and clearer, makes them harder. This is a situation when a treatment is worse than a disease. Page 145 Bell’s quote: “The great von Neumann … made assumptions in his proof that were entirely unwarranted”. Many thought experiments about entanglement make the same mistake. The human mind can imagine things which may seem natural, but are physically “unwarranted”. It is not enough just to say “let’s assume these particles are entangled”; there has to be a specific physical mechanism in place for that to happen. If that specific mechanism of entanglement does not exist, the whole thought experiment makes no sense. Page 149 “Bell used … Bohm’s version of EPR involving photons with entangled polarization. … When a photon hits a polarizer, it either passes through or gets blocked”. This is an example of a very commonly used interaction between a photon and an optical device (a polarizer, a mirror, a lens, etc.). And it is an example of a very common misunderstanding of the physical phenomenon happening during this interaction. Every author bases his/her logic on the options of what may happen with a photon during this interaction; for example, a photon may be reflected, deflected, transmitted, blocked. And then the same photon keeps traveling (and something else happens to it). The fact of the matter is that the photon traveling away from a device is simply not the same photon which was approaching the device.
When a photon starts interacting with a device, it means it collides with an atom inside the device (at least one); it most probably gets absorbed, then – after some microscopic time interval – a new photon is emitted, which may be again absorbed, emitted, absorbed, emitted, etc., and such a process eventually leads to a photon – a new one! – leaving the device. Any conclusion on what property that final photon has is probabilistic and has to be derived based on quantum electrodynamics (in general). Until this description is provided, any conclusions on the results of an experiment involving a photon-device interaction may be plausible, but not necessarily definite. A polarization axis (a transmission axis) of a polarizer is a macroscopic property of a device. When one photon encounters a polarizer, it encounters one atom or molecule at a time. How would a photon “know” the direction of a transmission axis when it meets with only one atom? That atom absorbs a photon, emits a new one, etc. The final result is probabilistic. Hence, when a single photon interacts with a polarizer, there is always a non-zero probability for a new photon to be emitted by the polarizer on the other side (what we call “passing through”). The phrase “a photon is polarized perpendicularly to the transmission axis” simply makes no sense. Hence, the statement that (page 150) “the two [entangled] photons will always pass through together or be blocked together” is just wrong. Even if one polarizer completely absorbs one photon, there is a non-zero probability to see a photon on the other side of a second polarizer. And that ruins the whole idea of the experiment, of any experiment with entangled photons (and also of the example with the casino).
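For reference, the textbook model the author is arguing against treats a single photon-polarizer encounter as one Bernoulli trial: a photon whose polarization makes angle theta with the transmission axis passes with probability cos²(theta) (Malus's law at the single-photon level), and passes with probability exactly zero at 90 degrees. A minimal Python sketch of that standard model (the angle and trial count are arbitrary illustrative choices):

```python
import math
import random

random.seed(1)

# Standard single-photon model: transmission is a Bernoulli trial with
# probability cos^2(theta), where theta is the angle between the photon's
# polarization and the polarizer's transmission axis.
def passes(theta_rad):
    return random.random() < math.cos(theta_rad) ** 2

n = 100_000
frac = sum(passes(math.radians(60)) for _ in range(n)) / n
# For theta = 60 degrees, cos^2(theta) = 0.25, so frac should be near 0.25.
```

The author's objection is precisely that this model assigns probability exactly zero to transmission at 90 degrees, whereas on his absorption-and-re-emission picture that probability is small but non-zero.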
Page 153 “That suggests a need for a radical revision of our conception of space and time, far beyond Einstein’s relativity”. Bell’s theorem may have pointed in that direction, but today “a need for a radical revision” is simply obvious, because, clearly, that seems the only way toward quantum gravitation – nothing else has been working. Page 198 “further work on the subject would extinguish his academic career”. The book provides many insights into the world of science, but it also provides many insights into the world of scientists. Those two worlds are not identical. The world of scientists is actually not much different from the world of actors, or politicians. “You are wrong” does not mean “you made a logical mistake here and there” (as it should in the world of science), but “you think differently from me, and that is wrong” (as it often is in the world of scientists). And if you are not a member of “a pack”, you have a slim chance to find a good position. Page 231 “What causes the collapse of the system-apparatus-environment combined wave function?” The answer is – instability of immeasurable states (e.g. those states in a hydrogen atom that would have energy NOT equal to the energy of one of the Bohr energy levels). It seems to me that many Western physicists think of a wave function as an actual real physical field, e.g. like an electric field. Having this view naturally makes them wonder what happens to the field if just before a measurement it existed in a huge area of space but right after the measurement it exists only at a specific point. That is why they call it a "collapse" and rack their brains trying to understand what happened. However, a wave function is not an actual physical field, but just a mathematical abstraction – like the Dirac delta function. Yes, it does have strange behavior, but what is not strange about the quantum world?
Page 245 “The idea that the universe as a whole was a suitable subject for scientific investigation was difficult for some physicists to swallow”. A good example to demonstrate the difference between one who is paid for doing something in the field called “physics”, and a physicist (just as not everyone who has the job title “a teacher” is actually a teacher). Page 291 “But how can the photon “decide” whether to travel down just one path after it’s already passed through the first beam splitter?” The answer is (again) – the photon does not need to “decide” anything. That photon disappears, being absorbed by the material of the beam splitter. Gone. The rest of the process does not include the original photon anymore. Appendix I After writing Part I, but before writing Part II, I also wrote two more pieces on the matter, which provide some additional points of view, including on probability, entanglement, “many-worlds”, philosophy, a “delayed choice experiment”, and more. I also have short pieces on the scientific method, and three old pieces on physics. Appendix II The mission of a scientist as an agent of that human practice is discovering the truth about the universe and representing it in a testable form (e.g. verifiable, or falsifiable). When a faculty member tells students "Quantum Mechanics is a complete theory, and it means this ..." he or she is simply lying – hence, he or she stops being a scientist. The truth (the fact) is that there are (exist, whether one likes it or not) different views on the state of Quantum Mechanics, and denying that fact is not a scientific action. The mere fact that someone is involved in scientific research does not automatically make that one a scientist.
Appendix III So many people and so much energy have been focused on a photon traveling through two slits, or on two entangled photons or electrons, etc., yet no one asks why waves on trillions of strongly interacting atoms behave in a way similar to the behavior of weakly interacting atoms in a dilute gas. A macroscopic number of strongly interacting microscopic particles does not follow the laws of a macroscopic world. Instead, there is a trick, a recipe – called “quantization” – which works like a charm. Why? The recipe works, what else do you need?
The Mathematica Journal Coverage versus Confidence Wed, 03 Mar 2021 23:04:02 +0000 This article is intended to help students understand the concept of a coverage probability involving confidence intervals. Mathematica is used as a language for describing an algorithm to compute the coverage probability for a simple confidence interval based on the binomial distribution. Then, higher-level functions are used to compute probabilities of expressions in order to obtain coverage probabilities. Several examples are presented: two confidence intervals for a population proportion based on the binomial distribution, an asymptotic confidence interval for the mean of the Poisson distribution, and an asymptotic confidence interval for a population proportion based on the negative binomial distribution. 1. Introduction Introductory courses in mathematical statistics present the rudimentary concepts behind confidence intervals. The creation of confidence intervals often involves the use of maximum likelihood estimation and the central limit theorem along with estimated standard errors. This is described in Casella and Berger [1] p. 497. Consequently, the level of confidence is often only approximate. This is particularly the case when continuous probability models are used to approximate discrete probabilities. The probability that the interval surrounds the unknown parameter depends on the value of the unknown parameter. Such a probability is called a coverage probability. Confidence is defined as the infimum of the coverage probabilities. The following definitions can be found in Casella and Berger [1] p. 418. Definition (Coverage and Confidence) Let X = (X_1, …, X_n), where the X_i are all independent from a distribution with probability density (or discrete mass) function given by f(x | θ). The support of each X_i is 𝒳 and the parameter space is Θ. Let L(X) and U(X) be the lower and upper limits of a confidence interval.
Then the coverage probability of the interval evaluated at θ is C(θ) = P_θ( L(X) < θ < U(X) ). The level of confidence is inf_θ C(θ). Students are often confused about how to compute coverage probabilities. This tutorial is intended to help students understand them. We give a detailed explanation of calculating one particular coverage probability. This also allows one to perform the calculations with a minimum of distraction involving programming. We then compute coverage probabilities using higher-level built-in functions that allow specifying a function of a random variable along with its distribution. In both cases these functions allow one to focus on the higher-level ideas rather than the low-level nuts and bolts of programming. Coverage probabilities are best calculated by computer. This necessitates the choice of a programming language and programming environment. Statisticians are generally familiar with one or more statistical programming languages such as SAS, R and so on. Such languages are necessary productivity tools due to their significant data handling capabilities as well as their statistical methods. They are indispensable to the statistician. However, they are not as useful as a language for describing algorithms. Small “bookkeeping” matters often obscure the algorithm or method to be calculated. This tutorial uses Mathematica as a language to describe the computation of coverage probabilities. With a little additional effort, one can produce graphs of coverage probabilities as well as dynamic demonstrations that use a slider to illustrate the effect of the sample size on the graph. The Wolfram Demonstrations Project website contains numerous Demonstrations involving a wide variety of topics. One such Demonstration, provided by Heiner and Wagon [2], involves coverage probabilities for a population proportion using a Wald approach as well as a Bayesian approach. This article takes a different approach than Heiner and Wagon.
We illustrate the idea of coverage (and hence confidence) with several examples. Section 2 describes two asymptotically justified confidence intervals for estimating a population proportion based on the binomial distribution. The first confidence interval is a simple hand-calculation interval contained in many textbooks. We present a step-by-step algorithm for computing the coverage probability for one specific value of the population parameter. We stress clarity of computation rather than efficiency. The approach is adequate for a population described by a discrete distribution with a finite number of possible values. We then compute the coverage probability using a much higher-level built-in function to automatically compute the probability associated with an inequality. We also use it for subsequent calculations. We produce a typical graph of coverage probabilities found in some textbooks. The second confidence interval for a population proportion (again based on the binomial distribution) is more complicated but has gained popularity. Naturally, it will be seen that coverage probabilities generally differ from the stated level of confidence when approximations are used to create a confidence interval. This is illustrated in the examples below. Section 3 presents an asymptotically justified confidence interval for the mean of a population described by a Poisson distribution. The Poisson distribution has infinitely many possible observable values. The function used to evaluate coverage probabilities automatically takes this into account. Section 4 presents a graph of coverage probabilities based on an asymptotically justified confidence interval for estimating a population proportion based on the negative binomial distribution. Section 5 presents a summary. 2. A Population Proportion and the Binomial Distribution The Simplest Confidence Interval A population has a proportion p of members with a given characteristic.
In order to estimate p, one randomly selects n members of the population with replacement, say X_1, …, X_n, where the X_i are independent and identically distributed random variables, each with a Bernoulli distribution with parameter p. If Y is the number of members in the random sample possessing the target characteristic, that is, Y = X_1 + ⋯ + X_n, then Y has a binomial distribution with parameters n and p. The sample proportion of members with the characteristic is p̂ = Y/n. Two large sample confidence intervals for p are typically given. We start with the simplest. A large sample confidence interval of size 1 − α for p is given by

p̂ ± z_{α/2} √( p̂ (1 − p̂) / n ),   (1)

where z_{α/2} is the upper α/2 part of the standard normal distribution. Let SE = √( p̂ (1 − p̂) / n ), the standard error of p̂. So, we may shorten (1) by writing it as

p̂ ± z_{α/2} SE.   (2)

One can find the confidence interval in expression (2) in virtually any statistics book; in particular, see Devore and Berk [3] p. 396. Also, coverage probabilities for this confidence interval are described in Brown, Cai and DasGupta [4]. The derivation of the interval leads one to believe that the level of confidence is 1 − α. However, two approximations are used to derive the interval in expression (2). One approximation uses the central limit theorem. A second approximation uses an estimated variance for the sampling distribution of the sample proportion p̂. We want to compute the actual coverage probability for any possible value of the true population proportion p. The coverage probability is

C(p) = P_p( p̂ − z_{α/2} SE < p < p̂ + z_{α/2} SE ),   (3)

where p̂ = Y/n. Books are sometimes vague about whether or not to include the endpoints in the inequality. We exclude the endpoints in order to be consistent with typical hypothesis testing methods. The definition of coverage confuses many students. For a given value of p with 0 < p < 1, one must determine the set of values of Y satisfying the inequality in expression (3) and compute the probability of observing such values of Y. We will describe how to determine the set of values and then compute their probability.
Once we know what is actually being computed, we will move on to higher-level functions that perform the computations automatically. We use an example with n = 10. A plot will show how bad the approximation can be and also displays the output of each step of the algorithm. We will compute the coverage probability for p = 0.5. The input and output are presented in a conversational style with some editorial comments along the way. We wish to determine the upper 0.025 percentage value z from the standard normal distribution; z is often called a critical value. The result is a floating-point number, which restricts the accuracy and precision of all calculations that use it. Here is the confidence interval inequality for sample size 10 and general p. The support of the random variable Y is the set of values for which the probability mass function is positive. They also represent the observable values of Y for a discrete random variable. We represent the support of Y with a programming variable. This tests whether the inequality is true for each value of Y with probability p = 0.5. These are the positions that yield True; flattening eliminates one level of parentheses. We wish to compute the probabilities of Y at those positions and sum them. These are the appropriate values of the variable Y. Now one computes the probabilities for the individual values of Y satisfying the inequality. Finally, the values of the individual probabilities are summed to create the actual coverage probability for p = 0.5. The steps have been broken down so that students can easily understand what is needed. A large sample justification leads us to believe that this number should be about 0.95. The coverage probability is about 0.89 rather than 0.95. Here is a much more transparent manner in which to compute the coverage probability. We may use a system function for evaluating the probability of expressions of a random variable.
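Since the original Mathematica input cells did not survive in this copy, here is an equivalent Python sketch of the step-by-step computation for n = 10, p = 0.5 and a nominal 95% level:

```python
from math import comb, sqrt

n, p, z = 10, 0.5, 1.959964   # z: upper 0.025 critical value of the standard normal

def binom_pmf(y):
    # P(Y = y) for Y ~ Binomial(n, p)
    return comb(n, y) * p**y * (1 - p) ** (n - y)

def covers(y):
    # Does the interval phat +/- z*SE strictly surround the true p?
    phat = y / n
    se = sqrt(phat * (1 - phat) / n)
    return phat - z * se < p < phat + z * se

# The support of Y is {0, 1, ..., n}; sum P(Y = y) over the values of y
# whose interval covers the true p. This sum is the coverage probability.
coverage = sum(binom_pmf(y) for y in range(n + 1) if covers(y))
# coverage is 0.890625: about 0.89 rather than the nominal 0.95.
```

Only y = 3, 4, 5, 6, 7 satisfy the inequality here, so the coverage probability is P(3 ≤ Y ≤ 7) = 0.890625.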
Apparently, the system function automatically tests each possible value of the random variable to determine the ones that satisfy the inequality. (This works quite well for a discrete random variable with a finite number of observable values.) The relevant probabilities are then summed. This approach is not efficient in cases with infinitely many observable values of a random variable. However, it is straightforward and easy for a student to understand. We evaluate the probability of an expression involving the binomial random variable. The expression of the binomial random variable is the confidence interval inequality. Let us define a function that constructs the inequality more explicitly. Define the function that computes the coverage. We now plot the coverage probabilities for a range of values of p in Figure 1 below. We also create a horizontal line at a level of 0.95 for comparison purposes. The graph is symmetric due to properties of the binomial distribution and the large sample approximation involved in the confidence interval justification. Figure 1. Coverage plot for the first binomial confidence interval, n = 10. Examining Figure 1 indicates several points. First, the coverage probabilities are in general not equal to the nominal level of confidence, namely 0.95. Moreover, coverage probabilities near p = 0 and p = 1 are effectively zero. Finally, the coverage probability function is discontinuous. All this with a minimum level of programming. In fact, the programming statements presented are simply a good description of the algorithm. More is available. We wish to be able to change the plot by varying the sample size with a slider. A dynamic demonstration can easily be created with the Manipulate function. The manipulate variable is the sample size n, which you can vary with a slider from 5 to 100. The graph is in Figure 2. The computer processing time increases with the value of the sample size because the inequality must be tested for each possible value of Y.
Figure 2. Coverage plot as a function of sample size. A larger sample size improves the coverage probabilities, as one expects. After all, the confidence interval formula is justified by a large sample argument. However, it is very clear that the coverage probability is small when p is close to either 0 or 1, even with . For some sample sizes it is even more obvious that this function contains discontinuities. A Better Confidence Interval for a Population Proportion This subsection presents coverage probabilities for an improved confidence interval for a population proportion. The improvement makes coverage probabilities generally larger. Devore and Berk [3, p. 395] give a better large sample confidence interval for a population proportion. Based on the same assumptions as expression (1), this confidence interval for a population proportion is obtained by solving the following inequality for p: This defines the new inequality accordingly. Just as with the previous kind of inequality, define . Figure 3 is the corresponding plot, again with . Figure 3. Coverage plot for the better confidence interval, . The inequality in (5) is supposed to hold with probability approximately equal to the nominal confidence level before sampling the population. We can of course compute the true probability with respect to the correct binomial distribution. The Mathematica code follows, along with a dynamic graph in Figure 4. Figure 4. Coverage probabilities for the superior asymptotic confidence interval for a population proportion. Figure 5 contains the code and plot for the dynamic version of the plot. This plot allows for an easy comparison of the coverage probabilities for the two types of confidence intervals. Figure 5. A comparison of coverage probabilities for the two binomial intervals. The coverage probabilities for this improved confidence interval for a population proportion are indeed superior to those of the simpler interval.
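The coverage of the improved interval can be sketched the same way: an observed count x yields an interval covering p exactly when the underlying score-type inequality holds. This is a hedged Python sketch of that idea (not the article's code; names and z = 1.96 are mine).

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def score_covers(x, n, p, z=1.96):
    # the improved interval covers p exactly when the inequality
    # (phat - p)^2 <= z^2 * p * (1 - p) / n  holds
    phat = x / n
    return (phat - p) ** 2 <= z * z * p * (1 - p) / n

def score_coverage(n, p):
    return sum(binom_pmf(x, n, p) for x in range(n + 1) if score_covers(x, n, p))

print(score_coverage(35, 0.01))  # much better than the simple interval near p = 0
```

Testing the defining inequality directly avoids ever forming the interval's endpoints, which keeps the coverage computation one line long.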
In particular, the coverage probabilities are quite large when p is close to 0 or 1. One can see this even with a small sample size, for which the large sample approximation is not appropriate. The difference in coverage probabilities between the simple interval (displayed in Figure 2) and this improved interval is striking. 3. The Mean of the Poisson Distribution We now turn our attention to the Poisson distribution. The book by Devore and Berk [3, p. 400] presents a homework exercise for determining a confidence interval for the mean of a population described by a Poisson distribution. The observations are independent and identically distributed with a Poisson distribution with parameter (mean) μ. Ideally, we would solve the corresponding inequality exactly to obtain the desired confidence interval. However, if we have a large enough sample, we may replace the true standard error in the denominator with its estimate. Again, this produces a less than ideal result. The resulting simple confidence interval for the mean has only an approximate level of confidence; the parameter in the denominator of the standard error was replaced by the sample mean. Figure 6 contains the code and graph for the coverage probabilities. The total of the observations has a Poisson distribution with mean equal to the sample size times μ. In principle, the inequality must be tested for each of its infinitely many possible values. Coverage probabilities are evaluated at a discrete set of points in order to save computational time. Figure 6. Coverage probabilities for the confidence interval for the Poisson mean μ. Unless μ is close to zero, this large sample approximation is quite good, as is easily seen in Figure 6. Given the two approximations used, it is not surprising that the coverage probability is small when μ is close to zero. 4.
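The Poisson coverage computation just described can be sketched in Python. Since the total of the observations has infinitely many possible values, the sum over its pmf is truncated once essentially all of the probability mass has been accumulated. This is my own hedged sketch (names, z = 1.96 and the truncation tolerance are mine, not the article's).

```python
from math import exp, log, lgamma

def pois_pmf(k, lam):
    # Poisson pmf computed on the log scale to avoid overflow for large lam
    return exp(k * log(lam) - lam - lgamma(k + 1))

def coverage_poisson(n, mu, z=1.96, tail=1e-10):
    # the total T of the n observations is Poisson with mean n*mu;
    # sum the pmf over a finite range of T carrying essentially all the mass
    lam, total, cover, t = n * mu, 0.0, 0.0, 0
    while total < 1 - tail and t < 10 * lam + 200:
        prob = pois_pmf(t, lam)
        total += prob
        xbar = t / n
        half = z * (xbar / n) ** 0.5  # estimated standard error sqrt(xbar/n)
        if xbar - half <= mu <= xbar + half:
            cover += prob
        t += 1
    return cover

print(coverage_poisson(35, 4.0))    # close to the nominal 0.95
print(coverage_poisson(35, 0.01))   # far below 0.95 when the mean is near zero
```

The truncation is what makes the infinite-support case tractable; the tail left out is below the stated tolerance.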
The Population Proportion and the Negative Binomial Distribution This section addresses the situation of estimating a population proportion when the negative binomial distribution is appropriate. Let X be the sum of r independent and identically distributed geometric random variables with parameter p. It is well known that X has a negative binomial distribution with parameters r and p (see [5], p. 127). Consequently, we use the negative binomial distribution for estimating a population proportion. There are many ways to define the negative binomial distribution. We use the version described in Kinney [5, p. 125]. Conduct independent success/failure trials, each with a probability of success p. Let X be the total number of trials needed to obtain r successes. The probability mass function for X is given by where . Some authors count the number of trials before the rth success. Other authors count the number of failures before the rth success. There are still other possibilities. Mathematica uses the number of failures before the rth success. Consequently, . Casella and Berger [1, p. 496] describe large sample confidence intervals based on maximum likelihood. It is easily shown that the maximum likelihood estimator for p is r/X. Moreover, the asymptotic variance of this estimator is the reciprocal of the Fisher information. Fisher information is described in Casella and Berger [1, p. 388]. This variance expression is not useful for creating a confidence interval for p since it depends on p. So, we estimate the large sample variance by replacing p with its maximum likelihood estimate. This leads to the large sample confidence interval: In order to conveniently perform the calculations, we note that . We evaluate the coverage probability for p in steps of 0.01. We based the calculations on . The calculation can take some time depending on the computer. When p is small, particular values of X are extremely unlikely, and this makes the internal algorithm take quite a while. We can help speed up the calculations by using a numerical value rather than the symbolic .
The speedup occurs by reducing the required number of digits in calculations. Even so, this calculation takes some time (about four minutes on the author's computer). A graph of the coverage probabilities is contained in Figure 7. Figure 7. Coverage probabilities for the confidence interval for a population proportion based on the negative binomial distribution. We see from Figure 7 that the approximation is quite good for values of p close to 0.2. We infer that the approximation is also quite good if p is close to 0. The approximation generally gets worse as p increases (though not monotonically). A large sample approximation was used. Also, an approximate standard error was used. One sees that the coverage probability is essentially zero when p is close to 1. 5. Summary Large sample confidence intervals are often quite easy to derive. This is particularly true when using an estimate for the standard error of an estimator. However, the actual probability of surrounding the parameter value (the coverage) can be quite different from the nominal value. It is helpful to graph the coverage probabilities to see this. Mathematica is particularly useful in performing these calculations and providing a language for describing the algorithms. The author wishes to thank the anonymous reviewer and the editor for their help in improving this article. [1] G. Casella and R. Berger, Statistical Inference, 2nd ed., United States: Brooks/Cole Cengage Learning, 2002. [2] K. Heiner and S. Wagon, "Wald and Bayesian Confidence Intervals," from the Wolfram Demonstrations Project, A Wolfram Web Resource. [3] J. Devore and K. Berk, Modern Mathematical Statistics with Applications, 2nd ed., New York: Springer, 2012. [4] L. D. Brown, T. T. Cai and A. DasGupta, "Confidence Intervals for a Binomial Proportion and Asymptotic Expansions," The Annals of Statistics, 30(1), 2002 pp. 160–201. [5] J. Kinney, Probability: An Introduction with Statistical Applications, New York: John Wiley and Sons, 1997. P.
Cook, Coverage versus Confidence, The Mathematica Journal, 2021. About the Author Peyton Cook earned a B.A. in Psychology, a B.S. in Mathematics, and an M.S. and Ph.D. in Statistics. He is an Associate Professor at The University of Tulsa. Peyton Cook Department of Mathematics The University of Tulsa 800 Tucker Drive Tulsa, Oklahoma 74104 Structural Equation Modeling Mon, 28 Dec 2020 23:39:03 +0000 Structural equation modeling is a very popular statistical technique in the social sciences, as it is very flexible and includes factor analysis, path analysis and others as special cases. While usually done with specialized programs, the same can be achieved in Mathematica, which has the benefit of allowing control of any aspect of the calculation. Moreover, a second, more flexible approach to calculating these models is described that is conceptually much easier yet potentially more powerful. This second approach is used to describe a solution of the attenuation problem of regression. The SEM Method Linear structural equation modeling (SEM) is a technique that has found widespread use in many sciences in the last decades. An early foundational work is Bollen [1]; a more recent overview is provided by Hoyle [2]. The basic idea is to model the linear structure of observed variables of cases (observations, subjects) by linear equations that may involve latent variables. These variables are not measured directly but inferred from the observed variables by their linear relation to them. Many commercial programs (including LISREL, Amos, Mplus) and free ones (including lavaan, sem, OpenMX) have been developed to carry out the estimation procedure. From my perspective, the R package lavaan [3, 4] by Yves Rosseel is the most reliable and convenient one among the free programs. I use it as the gold standard to judge results of my own code.
This article first gives a quick overview of the standard SEM theory, then shows how to perform the calculations in Mathematica. In the last section, a second approach is discussed. The Standard Example There is a standard example due to Bollen that is also used in the lavaan manual. The dataset consists of observations of 11 manifest variables. SEM models are usually depicted graphically. In the lavaan documentation, this is displayed as in Figure 1. Figure 1. Bollen's democracy model (image from the lavaan documentation [4]). Three of the observed variables measure the construct of industrialization in 1960, which is described by the latent variable ind60. This means that the level of industrialization is assumed to be representable by one number for each country, but this number cannot be measured directly; it has to be inferred from its linear relation to gross national product, energy consumption per capita and share of industrial workers. Next, dem60 and dem65 are the democracy levels in 1960 and 1965, each measured by four indicators (freedom of the press, etc.). The data matrix consists of these 11 numbers for each of 75 countries (cases). The data is delivered with the lavaan package for R. The aim of estimating the model is twofold. First, the weights of the linear connections (represented in the picture by arrows) are estimated. These arrows encode linear equations by the rule that all arrows that end in a variable indicate a linear combination that yields the value of this variable plus some error term variable. To bring this mysterious language down to earth, the structural equations represented in Figure 1 are dem60 = b1 ind60 + u1 and dem65 = b2 ind60 + b3 dem60 + u2, together with one measurement equation for each indicator variable. The variable ind60 is called an exogenous latent variable because no arrow ends there. It has no associated error variable. However, its manifest (measured) indicator variables have associated error variables (they are called δ in [1]).
The indicator variables of the two endogenous latent variables (those latent variables where arrows end) have error variables (called ε in [1]). The equations that relate latent and manifest variables define the measurement part of the model. The two equations (coming from three arrows) between the latent variables are the structure model, usually of most interest. Fitting the model to the data gives estimates for the weights of the arrows. The second goal of SEM modeling is to check how well the structure of the model fits the data; that is, SEM is also a hypothesis-testing method. The equations given do not yet identify all variables. Assume we have a solution; then rescaling a latent variable by any nonzero number, while dividing its loadings by the same number, would give a solution, too. To avoid this problem, we either fix the variance of the latent variables to be 1 or we fix some of the weights to be 1. The latter is the default in lavaan and we adopt it here: the first loading of each latent variable is fixed to 1. The Standard Way of Estimating SEM Ever since SEM's invention, SEM models have been estimated by calculating the model's covariance matrix. From the data, we get the empirical covariance matrix S. On the other hand, from the model, we can calculate a theoretical covariance matrix Σ between the observed variables. (Σ depends on the model and thus on the parameters.) For example, one entry in this matrix would be the model-implied covariance of two of the indicator variables. Using linearity and other properties of the covariance, this boils down to a matrix with entries that are polynomials in the model parameters and the covariances and variances of latent variables and error variables. However, without further assumptions, this gives a lot of covariances (e.g. between error variables) that are not determined by the model and hence must be estimated. As this usually leads to too much freedom, the broad assumption is that most error variables are uncorrelated. Only some covariances between error variables are not assumed to be 0; those are marked in the diagram by two-headed arrows between the observed variables.
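As a toy illustration of a model-implied covariance matrix (much smaller than Bollen's model, and with invented numbers), consider a single latent factor ξ with indicators x_i = λ_i ξ + ε_i and uncorrelated errors. Then the implied covariance of x_i and x_j is λ_i λ_j Var(ξ), plus the error variance on the diagonal. A hedged Python sketch:

```python
def implied_cov(lam, phi, psi):
    # model-implied covariance matrix for x_i = lam_i * xi + eps_i:
    # one latent factor with variance phi, uncorrelated errors with variances psi_i
    p = len(lam)
    return [[lam[i] * lam[j] * phi + (psi[i] if i == j else 0.0)
             for j in range(p)] for i in range(p)]

# invented loadings, factor variance and error variances
S = implied_cov([1.0, 0.8, 0.6], 2.0, [0.3, 0.4, 0.5])
print(S[0][1])  # lam_1 * lam_2 * phi = 1.0 * 0.8 * 2.0
```

The full SEM machinery does exactly this kind of computation symbolically for every pair of observed variables, only with many more parameters.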
For every pair of observed variables, we calculate the covariance by using the model equations given above as replacement rules and applying linearity and independence assumptions. In the end, we get a covariance matrix Σ that depends on the model parameters and on the variances of the latent variables and the covariances of error variables that are not assumed to be 0. Details can be found in Bollen [1]. To fit the empirical and the theoretical covariance matrix, we have to choose these parameters to minimize some distance function. The three most common are uniform least-square, F_ULS = (1/2) tr[(S − Σ)^2], generalized least-square, F_GLS = (1/2) tr[(I − Σ S^-1)^2] (I is the identity matrix), and maximum likelihood, F_ML = log det Σ + tr(S Σ^-1) − log det S − p (here p is the number of manifest variables). Now we are in the position to define a Mathematica function that performs SEM. First, we define a helper function that gets all variables contained in an expression, in such a way that an indexed quantity counts as one variable. Here is an example. The method will be explained with Bollen's democracy dataset, so first, we need to load this dataset. The file bollen.csv contains headers (the names of the variables) and a first column numbering the cases, which is dropped. The data has 75 rows. Here is the first row of 11 numbers. The model itself has to be specified as a list of replacement rules that mirror the model equations discussed. The code for the estimation function includes some utilities. For example, it defines its own covariance and variance functions that take into account which variables are assumed to be uncorrelated. The input of the estimation function is the data matrix, a matrix of numerical values, one row per case. The structural equations are given in the format detailed in the previous section, The Standard Example. Moreover, the function needs: the list of free parameters (e.g.
path weights), the endogenous latent variables, the exogenous latent variables, the list of error variables of latent variables, the errors of exogenous manifest variables, the errors of endogenous manifest variables, and a list of pairs of error variables specifying which error variables are allowed to be correlated. The code after the definition of the covariance function can be omitted on a first reading; it is only needed to calculate some fit indices (there is an option that requests the fit index (FI) calculation; similarly, an option requests the maximum likelihood estimation). The estimation is done at the end of the function. The goal of the first half of the program is the definition of the covariance function that takes into account the SEM assumptions: most error variables are uncorrelated (except those specified to be correlated), and variances of latent variables are left as symbolic entities to be estimated. This function is then used to calculate the model-implied covariance matrix. Applying the model equation rules repeatedly gives a matrix that depends only on parameters, variances of latent variables and error variables and some allowed covariances of error variables. The code from the line defining the degrees of freedom onward is only important for getting fit indices. If we are only interested in estimating the model parameters, the next interesting lines are where the minimization is applied to estimate the model. As described in the introduction, there are several strategies to measure the deviation of covariance matrices; for example, the uniform least-square code is a straightforward minimization of the sum of squared entry-wise differences. Let us run the code on Bollen’s model in a simplified version where no correlation of error variables is assumed. This may take several minutes. The result combines parameter, variance and covariance estimations according to the various estimating strategies.
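Stripped down to a single free parameter, the fitting step can be mimicked as follows. This is a hedged Python sketch with invented numbers (the article of course uses Mathematica's minimizers over many parameters): two indicators load on one latent variable with loadings fixed to 1, and only the latent variance φ is free.

```python
def implied(phi):
    # two indicators of one latent variable, loadings fixed to 1,
    # error variances fixed at invented values 0.5 and 0.7
    return [[phi + 0.5, phi], [phi, phi + 0.7]]

def uls(S, Sigma):
    # uniform least squares: sum of squared entry-wise differences
    return sum((S[i][j] - Sigma[i][j]) ** 2 for i in range(2) for j in range(2))

S = [[2.5, 2.0], [2.0, 2.7]]  # invented "empirical" covariance matrix
# brute-force scan over the single free parameter phi = Var(latent variable)
best = min((uls(S, implied(k / 1000)), k / 1000) for k in range(4001))[1]
print(best)  # the scan recovers phi = 2.0
```

Here the "empirical" matrix was generated by the model itself, so the discrepancy attains zero at the true parameter; with real data the minimum is positive and its size feeds the fit indices discussed next.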
To judge how well the model fits the data, you can set the option to request some fit indices: RMSEA is the root mean square error of approximation, CFI is the comparative fit index, TLI is the Tucker–Lewis fit index and NFI is the normed fit index. RMSEA should be less than 0.1 (or better, less than 0.05), and the last three should all be greater than 0.9 or 0.95 for good model fit. The results of estimating using the three different methods differ somewhat. This is not a bug in our program; lavaan determines the same numbers up to several decimal places. There are results in the literature about which methods are equivalent under which conditions. For these fit indices to be interpretable, we need to assume that the data is multivariate normally distributed. If this assumption is violated, then we should judge model fit by other indices, which is beyond the scope of this article; however, they could be calculated based on the current approach as well. The book edited by Hoyle [2] gives some information on these methods. For the original model that allows some covariances between error variables, the runtime gets worse, especially for maximum likelihood estimation. Hence, maximum likelihood is turned off in the following code. The results of both models are exactly the same as calculated with lavaan. An Alternative Approach: Case-based Estimation When I first learned about SEM, I was puzzled by the many notions (e.g. exogenous, endogenous) and the assumptions needed. For example, I felt that the correlation of error variables should be calculated by the estimation algorithm and not be set at will when specifying the model. However, these difficulties seem to play no large role in practice and there are thousands of research papers (mainly) in the social sciences that use these methods with great success. Yet, there are some reasons why the standard approach to SEM via covariance matrices can be criticized (a more detailed discussion is given in [5]).
Traditional SEM is well suited only for linear models (there are some nonlinear extensions, but they have not yet become mainstream); it does not give estimates of the values of latent variables for each case (Bayesian variants can do this); it requires the covariance matrix of observed data to be nonsingular (however, improving measurement methods may result in highly correlated indicators of a latent variable, in the extreme case with identical vectors of measured values, and hence their covariance matrix will be almost singular); its parameter estimates depend a lot on the estimation method used; and it forbids certain linear models that are not identified in this approach, even though the model itself is sensible and well defined (e.g. the number of covariances of error variables allowed to be nonzero is limited, although in practice there may be correlations). You may then wonder why the covariance matrix-based approach is so popular. I suppose that more than 40 years ago, computers were not powerful enough to deal with a full dataset, so that the information reduction achieved by calculating the correlation matrix was essential. Since then, many powerful programs have been developed and research has been carried out that gave a good understanding of the conditions under which the method works well. Moreover, the psychometric community reached a consensus on how model fit should be judged, and thus studies using this method faced no problem being published. After this discussion of pros and cons, it is time to present the following case-based approach to SEM estimation, which is very easy (one may even call it naive) to implement but is also very flexible; with today’s computing power, it is feasible in many real-world situations. Hence, I propose to do SEM case-based by least-square optimization of the defects of the equations. Assume we have observations (cases) of several manifest variables.
A general equational model consists of equations that involve the data, latent variables and parameters. Then the latent variables and the parameters are estimated by minimizing the sum of the squared defects of the equations over all cases. Another twist is needed to get the best results, however. The above objective function gives all equations the same weight. However, it turned out (by working with simulated data where it is clear which parameters should be found) that we get better results by multiplying each equation's contribution by a factor that gives the equations different weights. The factor can be modified by an option in the code that follows. Best results are obtained when the weight decreases with the number of latent variables in the equation. The idea behind this choice is that an equation that involves only one latent variable links this variable directly to the manifest data and thus should have a high weight. In contrast, equations with many latent variables are not so close to the manifest observations and are thus more hypothetical, so they should have a lower weight. The model equations are not formulated as rules as for the first SEM, but as equations with the name of the error variable attached to each equation. Moreover, the dataset is not normalized, so there are nonzero intercepts in the linear equations. In the first approach this had no consequences, because such additive values are eliminated by calculating the covariance matrix, but in the SEM2 approach, intercepts must be modeled explicitly (and we have the benefit of getting estimates for them as well). The function SEM2 that carries out the model estimation takes as input the data and the names of the manifest and latent variables. At the technical heart of the function is a subroutine that takes an equation involving latent variables (e.g. dem60 = b1 ind60 + u1) and adds to the objective function the appropriate term for each case (i.e. with values from the data replacing the names of manifest variables): (dem60[1] - (b1 ind60[1] + u1[1]))^2 + (dem60[2] - (b1 ind60[2] + u1[2]))^2 + ... + (dem60[n] - (b1 ind60[n] + u1[n]))^2. There is one option. This code estimates Bollen's model. As mentioned, there is a version that weights equations according to the number of latent variables they contain. The results for the estimates differ somewhat from those calculated in the traditional covariance matrix-based approach. A simulation study that compares the two approaches [5] showed that in many situations the case-based approach gives better results, especially when the assumption of independent errors is violated. Moreover, the case-based approach is easily applied to nonlinear equations. However, in certain situations it may be necessary to perform the minimization with higher accuracy than provided by standard hardware floating-point numbers. Application to Measurement Error In standard linear regression, one assumes that the independent variables are measured exactly, while the dependent variable has an error that is ideally normally distributed. If the independent variables are measured with error too, standard linear regression underestimates the regression coefficient. This is the famous attenuation problem and I will show how to solve it. Let us first simulate a dataset with error on both variables. Then linear regression underestimates the slope, which should be 0.5. When using case-based modeling, several strategies are possible. We may use one or two latent variables for the true values. As the true dependent variable is a function of the true independent variable, the following code uses just one latent variable. Another twist is that the equations are divided by the empirical standard deviations to put them on an equal footing. This example shows both the power of this method and the responsibility of the modeler to set up sensible equations. If we are sure that the errors are uncorrelated, we may add zero error correlation as another constraint to further improve the estimate. This may also be done automatically with an extended version of SEM2, which will be published when its development is completed.
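The attenuation effect itself is easy to reproduce. The following hedged Python sketch (my own simulation, not the article's Mathematica code; all numbers invented) measures the regressor with error and shows that the naive least-squares slope falls well short of the true value 0.5.

```python
import random

random.seed(1)
n, true_slope = 2000, 0.5
xi = [random.gauss(0, 1) for _ in range(n)]           # true regressor values
x = [v + random.gauss(0, 1) for v in xi]              # regressor measured with error
y = [true_slope * v + random.gauss(0, 0.2) for v in xi]

def ols_slope(x, y):
    # ordinary least-squares slope of y on x
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# with Var(true x) = Var(measurement error) = 1, the attenuation factor is 1/2,
# so the naive estimate lands near 0.25 instead of the true 0.5
print(ols_slope(x, y))
```

The expected attenuation factor Var(ξ)/(Var(ξ) + Var(error)) is the quantity a latent-variable treatment such as SEM2 corrects for.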
Two methods for the estimation of structural equation models are presented. One uses the traditional covariance matrix-based approach and is therefore restricted to linear equations, while the other approach is more general but not yet established in practice. Estimating the models is rather easy in Mathematica, but the numerical problems that arise can be demanding. The new case-based approach is very flexible and promising in certain situations where the standard approach shows limitations. Case-based calculation of SEM looks very promising given the numerical power of today's computers and might give insight in situations where the restrictions of the traditional approach urge researchers into making assumptions that may not be warranted. It is my pleasure to thank Ed Merkle and Yves Rosseel for many explanations of SEM. [1] K. A. Bollen, Structural Equations with Latent Variables, New York: Wiley, 1989. [2] R. H. Hoyle (ed.), Handbook of Structural Equation Modeling, New York: Guilford Press, 2012. [3] K. Gana and G. Broc, Structural Equation Modeling with lavaan, Hoboken: John Wiley & Sons, 2019. [4] Y. Rosseel. lavaan. (Aug 25, 2019) [5] R. Oldenburg, Case-based vs. Covariance-based SEM, forthcoming. R. Oldenburg, Structural Equation Modeling, The Mathematica Journal, 2020. About the Author Reinhard Oldenburg has studied physics and mathematics and received a PhD in algebra. He has been a high-school teacher and now holds a professorship in Mathematics Education at Augsburg University. His research interests are computer algebra, the logic of elementary algebra and real-world applications. Reinhard Oldenburg Augsburg University Mathematics Department Universitätsstraße 14 86159 Augsburg, Germany Generating Minimally Unsatisfiable Conjunctive Normal Forms Thu, 29 Oct 2020 20:06:45 +0000 Constructing Unsatisfiable CNFs Minimally Unsatisfiable CNFs Define the partition of the variables. Here is the partition for our example.
Next we generate, negate and partition the second family of variables. We join the two partitions and form all sets of the required size from the result. A function puts these steps together; its argument is a permutation. Equivalently, here is a longer form. This tests whether C3 is minimally unsatisfiable. We define the function that does derangement experiments. About the Author Robert Cowen 16422 75th Avenue Fresh Meadows, NY 11366 Degree versus Dimension for Rational Parametric Curves Tue, 22 Sep 2020 15:25:54 +0000 Given a rationally parameterized curve in real or complex affine space, where the coordinates are quotients of polynomials, we find the dimension of the smallest linear subset containing the curve. If all the numerators and the common denominator are of degree d or less, then it is known abstractly that this dimension is d or less, and rational normal curves play a key role in the argument. We consider this from a computational point of view, with Mathematica playing an essential part in the discussion. 1. Introduction The ancients were confused about the concepts of degree and dimension. As late as 1545 in his famous book Ars Magna [1], Cardano, who did not hesitate to invent imaginary numbers, in reference to his assistant Ferrari's solution of the quartic gives the following disclaimer: Although a long series of rules might be added and a long discourse given about them, we conclude our detailed consideration with the cubic, others being merely mentioned, even if generally, in passing. For as the first power refers to a line, the square to a surface, and the cube to a solid body, it would be very foolish to go beyond this point. Nature does not permit it. The distinction between degree and dimension was later resolved by Descartes's algebraic notation. But, in the context of parametric curves, I recently noticed a simple linear algebra proof of the following theorem: Theorem A. Let P(t) be a curve in real or complex n-space whose coordinate functions are polynomials of degree d or less. Then the curve lies in a linear subset of dimension d or less.
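Theorem A can be checked numerically: sample points on a polynomial curve and compute the dimension of their affine span. The following is my own Python sketch (the article works in Mathematica), using an invented degree-3 curve in 4-space and a small Gaussian elimination to find the rank of the differences from a base point.

```python
def affine_rank(points, tol=1e-9):
    # dimension of the smallest linear subset containing the points:
    # the rank of the differences from the first point (Gaussian elimination)
    base = points[0]
    rows = [[c - b for c, b in zip(p, base)] for p in points[1:]]
    n = len(base)
    rank = col = 0
    while rows and col < n:
        piv = max(range(len(rows)), key=lambda i: abs(rows[i][col]))
        if abs(rows[piv][col]) < tol:
            col += 1
            continue
        rows[0], rows[piv] = rows[piv], rows[0]
        top = rows.pop(0)
        rows = [[r[j] - r[col] / top[col] * top[j] for j in range(n)] for r in rows]
        rank += 1
        col += 1
    return rank

# an invented curve of degree 3 in 4-space: every coordinate is a cubic in t
curve = lambda t: (t**3 + 1, 2 * t**3 - t, t**2 + t, t**3 - t**2)
pts = [curve(t) for t in (-2, -1, 0, 1, 2, 3, 7)]
print(affine_rank(pts))  # 3, as Theorem A predicts for degree 3
```

The rank is at most 3 because every difference of points is a combination of the vectors of t^3-, t^2- and t-coefficients, which is exactly the coefficient-matrix argument given in Section 2.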
This theorem, as well as many of the other facts in this article, is given in Joe Harris's book [2] from a projective geometry point of view. He also considers the degree versus dimension issue in a number of other situations. We give the linear algebra proof in Section 2. Unfortunately, projective geometry is not computationally friendly. Instead we can view these results from an affine point of view using the built-in function TransformationFunction [3], which we discuss in Section 3. We then generalize and rephrase our result in Section 4 as Theorem B. The generalization is to rational curves, and we can give the dimension of the smallest linear space containing the curve. Theorem B does clarify that, while the degree bounds the size of a linear set, the curve may lie in a smaller-dimensional linear set. In Section 5 we observe that the rational normal curve of degree d is universal for rational curves; that is, every rational curve is a transform of a normal curve. This is very easily seen via the transformation matrix. This lets us rephrase Theorem B in another useful form, where the dimension can be found directly from the expression of a rational function whose numerators and common denominator are all polynomials of degree d or less written in descending degree. To simplify notation we generally work with coefficients in the real numbers, but it should be understood that one could work in any subfield of the complex numbers as well. But, as immediately below, in some cases we must consider parameter values in the algebraic closure of the subfield. Sections 5 and 6 give two applications. The first discusses the recognition problem: given a point, is it on the curve for some parameter value? This is equivalent to the well-studied problem of finding a common solution of a family of univariate polynomials, which we do not consider here. We show that, modulo a linear transformation, the recognition problem can often be solved in a linear space of smaller dimension.
The second example is the implicitization problem for rational functions, which is to find an implicit system that describes the ideal of the rational curve. We only sketch this, as there is no room to carefully describe the routines in [4]. In fact, this article was motivated by the author's work on implicitization of parametric curves: I noticed that an unexpectedly large number of linear equations appeared in the implicit systems. 2. Special Case of Polynomial Parameters In this article a linear subset of real n-space is a set defined by a system of linear equations, not necessarily homogeneous. A linear subset is distinguished from a linear subspace, which is a subspace of the vector space and is defined by homogeneous equations. The big difference is that a subspace contains the origin. A linear subset is a coset of a linear subspace under the operation of vector addition. A polynomial parametric curve is a function each of whose coordinate functions is a polynomial that we write in descending degree; the largest degree of a coordinate polynomial is the degree of the parameterization. The constant terms act merely as a basepoint; a different basepoint gives a curve that is a translation of the first. Thus the basepoint does not affect the geometry. We say our parameterization is stripped if the basepoint is the origin (alternatively, if each constant term is zero). Each polynomially parameterized curve is then a translate of a stripped curve, so we first consider those. We strip a polynomial parameterized curve by dropping all the constant terms. We now create a stripped coefficient matrix from the stripped polynomials: its rows hold the coefficients of each coordinate polynomial. Consider the following equation where points are column vectors. This shows that every point on the parameterized curve is in the vector space spanned by the columns of the coefficient matrix.
So Theorem A is true for a stripped parameterization, but adding back the constant simply moves this subspace to a linear subset. To describe the smallest linear set containing a finite set of points in terms of a system of equations, here is a short routine. A longer version of this with error detecting is in [4]. Example 1: We know a linear set containing this curve must be of dimension no greater than three, since this set is contained in , so it is generated as a linear set by four or fewer points. Therefore it is enough to take four random points on this curve and calculate the smallest linear set containing them. Here are the four random points. Here is the linear expression for the linear set. A linear set defined by one linear equation in three variables is of dimension two. This curve lies in the linear set defined by setting the linear expression to zero. 3. Rational Parametric Curves via The central concept in this article is the built-in Wolfram Language function . When we say transformation function we mean a function given by . Basically these are affine versions of projective linear transformations, which can include translations along with the usual transformations of linear algebra. They appeared in Lecture 2 of Abhyankar [5] and much of the authors work [4, 6] as fractional linear transformations; they are also known in the literature as linear fractional transformations. Our major use of these transformations is to be able to access projective geometry where points are cosets of -tuples, while working in affine geometry where points are merely -tuples, which are easy to manipulate computationally. A transformation function can be described by an matrix. The matrix of the associated projective linear transformation is called the transformation matrix in the Wolfram Language. 
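The article's Mathematica code for this step does not survive the text extraction above. As a stand-in, here is a Python/NumPy sketch of the same idea for a hypothetical stripped cubic curve: the rows of the coefficient matrix are the coordinate polynomials' coefficients in descending degree, every curve point lies in the column space of that matrix, and a vector in its left null space gives a linear equation satisfied by the whole curve.

```python
import numpy as np

# Hypothetical stripped cubic curve in R^3 (constant terms already dropped):
#   x(t) = t^3 + 2t,  y(t) = 2t^3 + 4t,  z(t) = t^2
# Rows of C are the coefficients of each coordinate in descending degree.
C = np.array([[1.0, 0.0, 2.0],
              [2.0, 0.0, 4.0],
              [0.0, 1.0, 0.0]])

def point(t):
    # P(t) = C . (t^3, t^2, t): every curve point lies in the column space of C.
    return C @ np.array([t**3, t**2, t])

rank = np.linalg.matrix_rank(C)   # dimension of the smallest linear subspace
# The null space of C^T is the left null space of C; each basis vector v
# gives a linear equation v . x = 0 holding on the entire curve.
u, s, vt = np.linalg.svd(C.T)
v = vt[rank:]                     # basis of the left null space
assert rank == 2
for t in (-1.0, 0.5, 2.0, 3.0):
    assert np.allclose(v @ point(t), 0.0)
```

Here the curve of degree three in 3-space lies in a plane (rank 2), illustrating that degree only bounds, and need not equal, the dimension of the smallest containing linear set.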
Thus the of an matrix takes an affine -tuple, appends 1 to represent this in projective -space, applies the projective linear transformation defined by and then specializes by dividing by the component. Here is an example. These transformations in the special case are discussed in detail in Chapter 6 of my book [4]. A transformation function is affine if the last row is ; the denominators are always 1, the upper-left submatrix gives a linear transformation and the first entries of the last column describe a translation. In particular, the domain of an affine transformation is all of . Otherwise we call the transformation function projective. If the last row of the transformation matrix is , then the hyperplane of given by is not in the domain of the transformation function. In the context of an affine transformation, it is understood that the equation defines the empty set. In this article we assume that a rational parametric curve has coordinates that are quotients of two polynomials in . We insist that the parametric curve be given with a common denominator , so, for example, is of the form for polynomials . The degrees of may be greater than, equal to or less than the degree of . In particular, could be the constant polynomial 1, in which case is a polynomial curve that we can treat as a special case of a rational curve. The degree of is the largest degree of . The advantage of writing polynomials in the parameter in descending degree is that writing a transformation matrix for a rational function is easy. Suppose in equation (1) that for , where we write . Then the transformation matrix for is Example 2. Here is the transformation matrix. This is the curve. Both [2] and [5] mention the fact that every rationally parameterized curve is a projective transformation applied to a polynomially parameterized curve. In particular, [2] notes that this polynomial curve can be the rational normal curve of degree 4. 
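Since the Wolfram Language examples are lost in this extraction, the lift-apply-divide recipe just described can be sketched in Python (the matrices below are my own illustrative examples, not the article's):

```python
import numpy as np

def transformation_function(A):
    """Mimic the behavior described above for an (n+1) x (n+1) matrix A
    acting on affine n-tuples: append 1, multiply, divide by last entry."""
    def f(p):
        q = A @ np.append(np.asarray(p, dtype=float), 1.0)  # lift to projective coords
        if np.isclose(q[-1], 0.0):
            raise ZeroDivisionError("point lies on the excluded hyperplane")
        return q[:-1] / q[-1]                               # back to affine coords
    return f

# Affine case (last row [0 0 1]): rotate 90 degrees, then translate by (1, 2).
A_affine = np.array([[0.0, -1.0, 1.0],
                     [1.0,  0.0, 2.0],
                     [0.0,  0.0, 1.0]])
f = transformation_function(A_affine)
assert np.allclose(f([1.0, 0.0]), [1.0, 3.0])

# Projective case (last row [1 0 1]): the line x = -1 is not in the domain.
A_proj = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [1.0, 0.0, 1.0]])
g = transformation_function(A_proj)
assert np.allclose(g([1.0, 2.0]), [0.5, 1.0])
```

The affine/projective distinction shows up exactly as in the text: for `A_affine` the denominator is always 1, while for `A_proj` the last row determines a hyperplane excluded from the domain.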
Theorem B Before we state Theorem B, we note that every linear transformation can be factored into a projection on some coordinates followed by an embedding. This is accomplished in a special way using Mathematica by the following matrix reduction algorithm we call . This takes an matrix of rank and outputs an matrix and an matrix consisting of rows of such that . This implies that rows of are what the Wolfram Language calls ; that is, contains an identity matrix as a submatrix. In the code, the functions and defined in the statements invert the lists and viewed as functions from their index sets. The tests whether is in the domain of . We can now state and prove our main theorem; we write . It may seem counterintuitive that we can strip the constant off the denominator, in particular for polynomially parameterized curves (so stripping it gives ). But projectively the denominator is just another coordinate so we can still do that. So if is the matrix from the previous section and , where is the of , then the projective stripped coefficient matrix of is just the submatrix of with the last column removed. Theorem B Let be a parametric curve in of degree . Suppose the projective stripped coefficient matrix of has rank . Then there are components of defining a stripped polynomial parametric curve in and a transformation function taking to . We apply the algorithm to the projective stripped coefficient matrix of , obtaining a list of rows forming a basis of the row space of and a matrix of size , where the rows corresponding to this basis are replaced by rows of the identity matrix. Multiplying by the vector gives the parametric function . Appending a last column to with the constant terms of the original gives a transformation matrix . By the above comments it is easy to see that the defined by takes to . 
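The matrix-reduction step used in the proof (its Wolfram Language name is stripped from this extraction) is a rank factorization: an m x n matrix M of rank r is written as M = E R, where R consists of r linearly independent rows of M. A Python sketch under that description, with a hypothetical function name:

```python
import numpy as np

def rank_factor(M, tol=1e-10):
    """Factor an m x n matrix M of rank r as M = E @ R, where R consists of
    r linearly independent rows of M, so M acts as a projection onto those
    coordinates followed by an embedding."""
    M = np.asarray(M, dtype=float)
    rows, r = [], 0
    for i in range(M.shape[0]):
        cand = M[rows + [i], :]
        if np.linalg.matrix_rank(cand, tol=tol) > r:   # row i is independent
            rows.append(i)
            r += 1
    R = M[rows, :]
    # E @ R = M holds exactly because the rows of R span the row space of M.
    E = M @ np.linalg.pinv(R)
    return E, R, rows

M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 4.0]])
E, R, rows = rank_factor(M)
assert rows == [0, 2]
assert np.allclose(E @ R, M)
```

The rows of E corresponding to the selected rows of M form an identity matrix, matching the "full rank with identity submatrix" property described above.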
One can paraphrase this theorem as: Given a parametric curve P(t) of degree d, there is an , a stripped parametric polynomial curve rho in and a so that the following diagram commutes. We ask the reader not to take this diagram literally in the case of a rational parameterization, as the domains of , , may not be the full spaces indicated. But if is a polynomial parameterization, then the domains are the full spaces and is an embedding. Example 3: We illustrate this proof by fully working out the following degree-two curve in . The decomposition can be easily done by hand. So we add the constant row; remember that the constant in the last row is 1. Theorem B tells us the composition of and is . So this curve is contained in a plane. Example 2 (continued): We now consider the rational parameterization of example 2. We check that this lies in a two-dimensional plane in . The step in the proof of Theorem B where we use to obtain the curve from the rational normal curve can also be done by using an affine transformation function obtained by adding the row and column to . In Example 2 we have the following. This gives: Theorem C Let be a rational (or polynomial) curve parameterization of degree . Suppose the projective stripped coefficient matrix of has rank . Then the transformation function in Theorem B can be decomposed into transformation functions as in the following diagram. Here is an affine transformation function of onto and is a possibly projective transformation of into . In particular, the parametric curve given by lies in a linear subset of of dimension less than or equal to the minimum of , , . As in Theorem B, we let be the projective stripped matrix of and apply to to get of sizes and , respectively. Appending a row of zeros and then a column of zeros with last component 1 to make into an affine transformation matrix of size , let be the of . Appending the column of constants to , we get a transformation matrix of size . Then is the of . 
One can check that . This recovers the known result [2] that every rational parameterization is a projective linear transformation of the rational normal curve, but here we have a constructive approach. Example 4: For an easy but nontrivial (i.e. not conic) example we use the piriform [7]. Here , . Here is the stripped projective matrix. A trivial application of in that is of full rank gives the following. Notice here that , and In this case, the curve lies in , a two-dimensional space. The numbers , , are important values in describing a rational parameterized curve. Even though the transformation matrix for contains the identity matrix, it is not injective, which is typical in the case of a rational parameterization, even when , but this does not occur for a polynomial parameterization. 5. The Recognition Problem The recognition problem is: given a parameterized curve and a point in , is in the curve; that is, does there exist with ? There are two obvious methods to solve this problem. The first is to directly solve the over-determined system using . This works surprisingly well, failing mostly with poorly conditioned systems for which the other methods following may not work well either. The biggest problem with this approach is that when it does not work, it gives a false negative to the recognition problem. One can, of course, solve component by component and see if any solutions are numerically close. Example 2 continued. So the first point is on the curve but the second point is not. In general, finding a common zero of a set of polynomial or rational equations is an interesting problem, but we do not consider that here. The second method is to find a system of equations whose solution set is the Zariski closure of the point set . All that then needs to be done, in principle, is to evaluate this system at and check that the value is 0. We consider this issue in Section 5. 
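The component-by-component approach to recognition can be sketched numerically. Since the article's Mathematica code is not reproduced here, the following is a Python stand-in (function and example are mine): a point q lies on the curve with coordinates x_i = n_i(t)/w(t) exactly when the polynomials n_i(t) - q_i w(t) share a root, which we detect by intersecting root sets numerically.

```python
import numpy as np

def on_curve(numerators, denominator, q, tol=1e-8):
    """Is q on the curve x_i = numerators[i]/denominator?  Polynomials are
    coefficient lists in descending degree (numpy.roots convention)."""
    def poly_sub(num, qi):
        # numerator(t) - qi * denominator(t), aligned by degree
        den = np.asarray(denominator, dtype=float) * qi
        n = max(len(num), len(den))
        p = np.zeros(n)
        p[n - len(num):] += num
        p[n - len(den):] -= den
        return p
    # Roots of the first component, then keep only common roots.
    candidates = np.roots(poly_sub(np.asarray(numerators[0], float), q[0]))
    for num, qi in zip(numerators[1:], q[1:]):
        p = poly_sub(np.asarray(num, float), qi)
        candidates = candidates[np.abs(np.polyval(p, candidates)) < tol]
    real = candidates[np.abs(candidates.imag) < tol].real
    return (len(real) > 0), real

# Simple polynomial example (denominator 1): P(t) = (t^2, t^3).
nums = [[1, 0, 0], [1, 0, 0, 0]]
den = [1]
hit, roots = on_curve(nums, den, [4.0, 8.0])    # t = 2 works
assert hit and np.any(np.isclose(roots, 2.0))
miss, _ = on_curve(nums, den, [4.0, 9.0])       # no common root
assert not miss
```

As the text warns, a purely numerical test like this can return a false negative for poorly conditioned systems; the tolerance must be chosen with care.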
As we have seen, a parameterized curve in may lie in a linear subset of dimension less than . Using Theorem C and the algorithm, we can get some additional information about the problem and perhaps reduce this to a problem in a smaller . Example 5. We would like to find out which, if any, of the following points are on this curve. We first find the transformation functions. Here is the projective stripped coefficient matrix. Apply . Augment these matrices to get transformation matrices. Generate some random points. This says is not contained in any proper subspace, but the image of lies in a three-dimensional subspace of . The points and do lie in the image of , so may be points on the curve, but we can eliminate . We find the fibers (preimage) of and in . These conveniently are singleton points. Thus we have reduced this rational recognition problem in to a polynomial recognition problem in . So but is not on the curve. 6. Implicitization of Rational Parametric Space Curves As mentioned, the motivation for this article is my work on implicitization of rational parametric space curves. In this section I only sketch my algorithms; details are in [4]. The key here is that by the material discussed, especially Theorem C, every such curve is simply a fractional linear transformation of the rational normal curve. By implicitization I mean describing these parametric curves by way of algebraic equations. A problem that arises is that while one expects a curve in to be given by equations, this is often not enough to fully describe the curve pointwise or algebraically. The standard counterexample is the twisted cubic, which is just the rational normal curve of degree three, . A system of three equations in the variables , , describing the twisted cubic, given in [2], is . 
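The displayed system did not survive the extraction; the standard quadrics for the twisted cubic (x, y, z) = (t, t^2, t^3) are y = x^2, z = xy and y^2 = xz, which are assumed in this quick Python sanity check. It verifies that all three vanish on the curve, and that one pair of them also vanishes on an extra line not on the cubic:

```python
# Assumed standard quadrics cutting out the twisted cubic (t, t^2, t^3):
#   f1 = y - x^2,  f2 = z - x*y,  f3 = y^2 - x*z
f1 = lambda x, y, z: y - x * x
f2 = lambda x, y, z: z - x * y
f3 = lambda x, y, z: y * y - x * z

# All three vanish on the curve itself.
for t in (-2, -1, 0, 1, 3):
    p = (t, t * t, t ** 3)
    assert f1(*p) == f2(*p) == f3(*p) == 0

# But the pair {f2, f3} alone also vanishes on the line y = z = 0,
# which is not on the cubic (f1 fails there for x != 0).
for x in (1, 2, 5):
    p = (x, 0, 0)
    assert f2(*p) == 0 and f3(*p) == 0 and f1(*p) != 0
```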
An exercise in [2] is to show that the zero set of any pair of these three equations contains not only the twisted cubic, but also a line, but note that the extra line in the last pair lies in the plane at infinity of projective three-space. Any implicitization problem has infinitely many possible answers, but the best answers are systems of equations that form an H-basis. This idea goes back to F. S. Macaulay in 1916, who was studying homogeneous equations, hence the H; basically in our context it means that any equation of total degree containing the parametric curve in its zero set is a polynomial combination , where the are in the H-basis and the are polynomials so that each term has total degree at most . Thus for an H-basis, the ideal membership problem reduces to linear algebra. If one has a system with zero set describing the parametric curve , then the Gröbner basis with respect to a degree ordering is an H-basis, perhaps larger than necessary. In practical terms one can simply use the following format. In the case of the rational normal curve of degree , Harris [2] claims that using quadratic equations is sufficient, so we can proceed as follows: we first give a procedure for finding the total degree d of a polynomial of several variables. Then we use the following code, say for . This defines . Likewise we get the following for . The size of the H-basis is , which gets much larger than . The numbers are binomial coefficients and can be enumerated recursively; however, does not contain , so there is no obvious recursive construction of these bases. In [4] I construct, for the fractional linear transformation given by an transformation matrix A, a transformation . This takes the system in with variables given by the list X to a system in with variable list Y such that for a solution of , is a solution of . 
Unfortunately, this works numerically and the user must provide a number that bounds the degrees of the polynomials used and a small tolerance , but for an appropriate choice of these parameters the system is often an H-basis if is. Thus, a possible method for finding an implicit system describing the rational parameterized curve is to write it in the form , where is a , and use . We use Example 4 to illustrate this. Example 4 continued; define and so on. Then we get the implicitization directly using a related function (fractional linear transformation, i.e. ) that takes not points to points but equation systems to equation systems. In [6] this is simple, because all transformation matrices used are invertible. In this context the 3×5 transformation matrix is not invertible, so finding the equation system for the image of a transformation function becomes quite involved. Essentially this is the subject of all of Chapter 2 in [4]. For instance, in this case we are compacting six equations into one. The following non-executable code and result are copied from [4, Section 3.1]. For executable code, see the GlobalFunctionsMD.nb notebook of [4]. Once this is done we can check that this works. We have shown how the Wolfram function simplifies the study of rational parameterized curves. [1] J. Cardano, The Great Art (T. R. Witmer, trans.), Boston: MIT Press, 1968. [2] J. Harris, Algebraic Geometry, A First Course, Springer Graduate Texts in Mathematics 133, New York: Springer, 1992. [3] S. Wolfram, An Elementary Introduction to the Wolfram Language, Champaign, IL: Wolfram Media, 2015. See also [4] B. H. Dayton. Space Curve Book. (Sep 4, 2020). Code in [5] S. Abhyankar, Algebraic Geometry for Scientists and Engineers, Providence, RI: American Mathematical Society, 1990. [6] B. H. Dayton, A Numerical Approach to Real Algebraic Curves with the Wolfram Language, Champaign, IL: Wolfram Media, 2018. [7] E. W. Weisstein. 
"Piriform Curve." From Wolfram MathWorld, A Wolfram Web Resource. B. Dayton, "Degree versus Dimension for Rational Parametric Curves," The Mathematica Journal, 2020. About the Author Barry Dayton is the author of A Numerical Approach to Real Algebraic Curves with the Wolfram Language and is Professor Emeritus at Northeastern Illinois University in Chicago, IL. He lives in Ridgefield, CT. Barry H. Dayton Department of Mathematics Northeastern Illinois University Chicago, Illinois 60625-4699 Foundations of Computational Finance Tue, 18 Aug 2020 21:56:00 +0000 The Wolfram Language has numerous knowledge-based built-in functions to support financial computations. This article introduces many built-in and other financial functions that are based on concepts and models covered in undergraduate-level finance courses. Examples are taken from a wide range of finance areas. They emphasize importing and visualization of data from many sources, valuation, capital budgeting, analysis of stock returns, portfolio optimization and analysis of bonds and stock options. We hope that all the functions selected in this article are very useful for analyzing real-world financial data. All examples provide a unique set of tools for users to engage with real-world financial data and solve practical problems. The feature of automatic data retrieval from online sources and its analysis makes all results reproducible without any modifications in the code. We hope this feature will attract new users from the finance community. 1. Introduction Finance is computational in nature and often involves the analysis and visualization of complex data, optimization, simulation and use of data for risk management. Without the proper use of technology, it is almost impossible to perform these functions of modern finance. Moreover, the field of finance has become far more driven by data and technology since the 1990s, which has made large-scale data analysis the norm. 
Data-driven decision-making and predictive modeling are now at the heart of every strategic financial decision. Since the publication of Varian [1, 2], Shaw [3] and Stojanovic [4], there have been many updates and new functions, but no new articles or books have been written to cover wide areas of computational finance. This article provides a comprehensive overview of functions related to finance and introduces many functions that are useful for real-world financial data analysis using Mathematica 12. We have provided all the custom functions in the text so that users can make changes as they learn how to program in the Wolfram Language. Furthermore, we minimize the explanation of any financial concepts in this article as our focus is on introducing financial applications of the Wolfram Language. We begin by defining some symbols that are frequently used as input arguments of custom functions in the article. The article uses or , , and as arguments in many functions defined in this article. Most of these symbols are used as input in the built-in function . All these arguments must be specified in the format acceptable in the function. We use or to represent a company's or companies' stock ticker symbol or symbols. It could be a string or a list of strings. The format represents the start date of the sample period specified and represents the last date of the analysis period. Both must be specified as date objects in any date format supported by . Similarly, represents data frequency. It may include , , or . In the subsequent functions defined in this article, we will not describe them when they are used as arguments. The article is organized into 13 sections: 1. this Introduction 2. importing and visualizing data from different sources 3. capital budgeting and business valuation 4. functions for the analysis of security returns 5. rolling-window performance analysis 6. financial application of optimization 7. decomposing the risk of a portfolio into its components 8. 
importing factor data and running factor models 9. computing different types of portfolio performance measures 10. technical analysis of stock prices 11. bond analysis 12. analyzing derivative products 13. concluding remarks 2. Accessing Financial Data and Basic Data Visualization The most commonly used built-in functions for retrieving company-specific financial data are , and . For example, imports Facebook’s financial statement data and imports Facebook’s price-related data. Similarly, the function can be used to get data about stocks and other financial instruments. or can be used to chart prices against time. or can be used to make interactive plots with additional features of adding different technical indicators. Other functions such as , , or can also be used to visualize financial data. In the remaining part of this section, we are going to show you how to import data from different sources and visualize it. We download Apple's return on assets (ROA), return on equity (ROE) and revenue growth over the period January 1, 2001, to January 1, 2019, and plot them. Similarly, we define the function to compare any specified property of different companies. The function takes a list of stock symbols, beginning period, end period and a property to consider as its arguments. We plot the revenue growth of Apple, Facebook, Walmart and Bank of America over the period January 1, 2000, to January 1, 2019. Second, we import and visualize data from the Federal Reserve Bank of St. Louis, as it is one of the most important data sources when it comes to economic data. The built-in function can be used to request the data from the Federal Reserve Economic Data API. Its argument structure is: where is a series ID or a list of IDs. It returns a time series containing data for the specified series. It is often of interest to plot the economic time series with the recession dates. The function downloads and plots the selected series along with the shaded recession period. 
The function takes series ID, start date, end date and title as inputs and returns a graph. It uses recession indicators based on USREC (US recession) data from the National Bureau of Economic Research (NBER) for the United States from the period following the peak through the trough to indicate the recession period. The Federal Reserve Bank of St. Louis may require the API key to download its data. The API key can be obtained freely by creating a user account at (click “my account” and follow the instructions). Now we can download any series and plot it. For example, we download and plot the Leading Index for the United States (USSLIND) over the period January 30, 1990, to January 30, 2019. Please use the API key 207071a5f2e90e7816259d3c32c1ab81 if needed. The shaded regions indicate recession periods. We download and plot the historical real S&P 500 prices by month (MULTPL/SP500_REAL_PRICE _MONTH) over the period March 31, 1975, to May 30, 2019. Finally, we show how to create a dataset. The built-in function is very useful for organizing large or small sets of data. The function can be used to get a company’s fundamental data. After the data is stored, we can organize and analyze the data. For example, we download return on assets (), return on equity () and revenue growth () for Apple Inc. (AAPL) over the period January 1, 2000, to January 1, 2019, and make a dataset. After the dataset is constructed, we can pull data and do further analysis using a rich set of built-in Wolfram knowledge. 3. Common Financial Decision Making, Capital Budgeting and Business Valuation Basic concepts used in common financial decision making are important for learning and understanding the finance discipline. Many functions such as , , , and are directly related to finance. Other functions such as and can be used to find one of the unknowns when relevant information is given. 
All these functions are useful for solving time value of money, capital budgeting and business valuation problems. The Mathematica documentation provides numerous examples of how to use these functions. In this section, we are going to focus on a few examples concerning loan amortization, capital budgeting and business valuation. A loan amortization table is often used to visualize periodic payments of the loan, loan balance and the payment breakdown into principal payment and interest payment. The function returns an amortization table given its input arguments. It takes four arguments plus one optional argument: the current value of the loan amount; the loan term in years; the annual percentage interest rate; the frequency of loan payments per year (12 for monthly payment, 1 for annual payment, and so on); and, optionally, the future value of the loan amount, which is assumed to be zero if no value is provided. Using this function, we compute an amortization table for a loan of $40,000 with a 1-year loan term and 5% APR paid monthly. The most commonly used decision tools in capital budgeting are net present value (NPV), internal rate of return (IRR), modified internal rate of return (MIRR) and profitability index (PI). These are defined in terms of the cash flows , , the discount rate and the reinvestment rate by: We use the built-in Mathematica functions , and in the function to compute these measures. It takes cash flows (a list), discount rate and reinvestment rate as its arguments. We illustrate the use of the function with an example. Say a project requires a $50,000 initial investment and is expected to produce, after tax, a cash flow of $15,000, $8,000, $10,000, $12,000, $14,000 and $16,000 over the next six years. The discount rate is 10% and the reinvestment rate is 11%. We compute the project's NPV, IRR, MIRR and PI. 
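The article's Wolfram Language implementation is not reproduced in this extraction, so here is a self-contained Python sketch of the four capital-budgeting measures applied to the example above (the function name and the IRR bisection are my own choices):

```python
def capital_budgeting(cash_flows, discount_rate, reinvest_rate):
    """NPV, IRR, MIRR and PI for a list of cash flows, where cash_flows[0]
    is the initial outlay at t = 0; IRR is found by simple bisection."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))

    lo, hi = -0.99, 10.0                       # bracket containing a sign change
    for _ in range(200):                       # bisection on NPV(r) = 0
        mid = (lo + hi) / 2
        if npv(lo) * npv(mid) <= 0:
            hi = mid
        else:
            lo = mid
    irr = (lo + hi) / 2

    n = len(cash_flows) - 1
    fv_in = sum(cf * (1 + reinvest_rate) ** (n - t)       # inflows compounded forward
                for t, cf in enumerate(cash_flows) if cf > 0)
    pv_out = -sum(cf / (1 + discount_rate) ** t           # outflows discounted back
                  for t, cf in enumerate(cash_flows) if cf < 0)
    mirr = (fv_in / pv_out) ** (1 / n) - 1
    pi = (npv(discount_rate) + pv_out) / pv_out           # PV of inflows / PV of outflows
    return npv(discount_rate), irr, mirr, pi

# The example from the text: $50,000 outlay, six years of inflows,
# 10% discount rate, 11% reinvestment rate.
flows = [-50000, 15000, 8000, 10000, 12000, 14000, 16000]
npv_val, irr, mirr, pi = capital_budgeting(flows, 0.10, 0.11)
assert abs(npv_val - 3681.73) < 1.0      # NPV is about $3,682
assert abs(pi - 1.0736) < 0.001          # PI slightly above 1
```

Since the NPV is positive (and PI exceeds 1), the project would be accepted under either criterion.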
One of the most widely used business valuation models is the discounted cash flow model, in which the value of any asset is obtained by discounting the expected cash flows on that asset at a rate that reflects its riskiness. In its most general form, the value of a company is the present value of the expected free cash flows the company can generate in perpetuity. Because we cannot estimate free cash flows (FCF) in perpetuity, we generally allow for a period where FCF can grow at extraordinary rates, but we allow for closure in the model by assuming that the growth rate will decline to a stable rate that can be sustained forever at some point in the future. If we assume that the discount rate is the weighted average cost of capital (WACC), FCF grows at the rate of per year and that the last year's free cash flow is , then the value of the firm can be defined as If we assume that FCF grows at the rate for the next years and at the rate thereafter, then the value of the firm can be written as We implement formula (1) with that takes five arguments: 1. last year's free cash flows; 2. the annual growth rate of free cash flows in the first growth period; 3. the number of years in the first growth period; 4. the stable growth rate; 5. the weighted average cost of capital. Suppose that a company has $100 as its past year's FCF, its FCF is expected to grow at 5% for the next five years and 2.5% thereafter and that its WACC is 9.5%. We compute its value. The most practical approach is to assume that a company goes through three phases: growth, transition and maturity. First, a company's growth increases. In the transition phase, the growth rate decreases. In the mature phase, a company grows at the same rate as that of the overall economy. Assume that the FCF is positive and grew at the rate last year. Assume further that it grows at the higher rate each year for the next years and that after years, it declines at the rate each year for the next years. 
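The two-stage valuation just described can be sketched in Python (the original Mathematica code is not reproduced here; the formula below is the standard two-stage form consistent with the description, with a Gordon-growth terminal value):

```python
def firm_value_two_stage(fcf0, g1, n, g, wacc):
    """Two-stage DCF value: FCF grows at g1 for n years,
    then at the stable rate g forever (requires g < wacc)."""
    # Explicit forecast period, discounted year by year.
    value = sum(fcf0 * (1 + g1) ** t / (1 + wacc) ** t for t in range(1, n + 1))
    # Terminal (Gordon growth) value at year n, then discounted to today.
    fcf_n = fcf0 * (1 + g1) ** n
    terminal = fcf_n * (1 + g) / (wacc - g)
    return value + terminal / (1 + wacc) ** n

# The example from the text: last year's FCF = $100, 5% growth for five
# years, 2.5% thereafter, 9.5% WACC.
v = firm_value_two_stage(100, 0.05, 5, 0.025, 0.095)
assert abs(v - 1628.77) < 0.1   # value of roughly $1,629
```

Note how sensitive the result is to the spread between WACC and the stable rate: the terminal value contributes nearly three quarters of the total here.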
Finally, assume the company grows at the stable positive rate per year after years and that the cost of capital is represented by WACC. Then the value of the firm can be written as We implement formula (2) with that takes eight arguments: 1. last year's free cash flows; 2. last year's FCF growth rate; 3. the incremental growth rate in the high growth period; 4. the number of years in the high growth period; 5. the declining growth rate in the transitional growth period; 6. the number of years in the transitional growth period; 7. the stable growth rate in the maturity growth period; 8. the weighted average cost of capital. To apply the function, consider a company in the early stage of its life cycle, assuming that the company experienced 10 percent growth in the past year. The company is expected to grow by 8% more each year for the next 7 years and its growth will start to decline by 5% each year after the seventh year for 5 years. After 12 years, the company is expected to grow at the same rate as that of the overall economy, which is 2.5% per year. Suppose the past year's FCF was $100 million and the weighted average cost of capital is 9.5%. We compute the value of the company. When using , we assume that the growth rate in the stable phase is never negative. Therefore, do not assume the declining growth rate to be too high. Otherwise, the growth rate of FCF in the maturity phase may be negative. 4. Analysis of Stock Returns There are various methods in analyzing historical stock prices and returns data. We can use different kinds of charts and graphs as well as descriptive statistics. Similarly, it is also important to understand whether the stock returns distribution is normal. In the next two subsections, we explain some of the most common charts and descriptive measures. 
4.1 Individual Stock Return Analysis Commonly used charts for historical performance analysis are time series plots of prices, normalized prices (historical prices divided by the price at the beginning), continuous draw-downs (cumulative continuous returns) and cumulative returns. The function takes four arguments as defined in Section 1 and returns four different plots: historical prices, normalized prices, continuous draw-downs and cumulative returns. We can apply it to any symbol and period. For example, we plot the historical stock prices and returns of Walmart Inc. (WMT) over the period October 10, 2018, to June 7, 2019. A histogram and an empirical plot of kernel density estimates are often used to describe the general shape of the data. The function takes four arguments as defined in Section 1 and returns density and histogram plots of returns. Using the function, we download the daily closing price of the S&P 500 index over the period October 1, 2000, to October 1, 2019, and plot the histogram, the empirical density function and the density function of a normal distribution with the same mean and variance. Many built-in functions can be used to compute different descriptive statistics. These descriptive statistics describe properties of distributions, such as location, dispersion and shape. The most common measures are computed by : holding period return, average return, geometric mean return, cumulative returns, standard deviation, minimum return, maximum return, historical value at risk and historical conditional value at risk. For example, we compute the descriptive statistics for Walmart Inc. (WMT) using monthly returns over the period October 10, 2010, to June 6, 2019. It is also informative to examine the historical performance of an individual stock. In such an analysis, we calculate monthly statistics using daily returns and report them on a monthly basis. 
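Since the Wolfram Language code for these descriptive measures is not reproduced in this extraction, here is a Python/NumPy sketch computing the list of statistics named above from an array of simple periodic returns (the function name is mine; VaR and CVaR are empirical at a 5% tail level):

```python
import numpy as np

def describe_returns(returns, var_level=0.05):
    """Descriptive statistics for a 1-D array of simple periodic returns;
    VaR and CVaR are historical (empirical) at the given tail level."""
    r = np.asarray(returns, dtype=float)
    growth = np.cumprod(1 + r)                   # growth of $1 invested
    var = np.quantile(r, var_level)              # historical value at risk
    cvar = r[r <= var].mean()                    # average loss beyond VaR
    return {
        "holding period return": growth[-1] - 1,
        "average return": r.mean(),
        "geometric mean return": growth[-1] ** (1 / len(r)) - 1,
        "cumulative returns": growth - 1,
        "standard deviation": r.std(ddof=1),
        "minimum return": r.min(),
        "maximum return": r.max(),
        "historical VaR (5%)": var,
        "historical CVaR (5%)": cvar,
    }

stats = describe_returns([0.02, -0.01, 0.03, -0.04, 0.01])
assert abs(stats["holding period return"] - 0.0084751) < 1e-5
assert stats["minimum return"] == -0.04
```

With real data one would feed in monthly or daily returns downloaded for a ticker; the toy series here just verifies the arithmetic.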
We define the function to download historical stock prices, compute desired statistics and return a dataset. The function takes four arguments: stock ticker symbol, start date, end date and a statistical function such as , or , and returns a dataset. For the function to work, it requires more than two years of data. For example, we compute the monthly cumulative returns for Walmart Inc. (WMT) using daily returns over the period October 1, 2010, to June 30, 2019. Once we compute the statistics, we can take specific columns by specifying their names. We are often interested in knowing whether returns data follows a normal distribution because understanding whether stock returns are normal or not is very important in investment management. One way to check whether returns are normally distributed or not is to compare the empirical quantiles of the data with normal distribution. The function can be used to produce quantile-quantile plots. Many other built-in functions can help to assess whether returns are normally distributed. The function can be used to test whether data is normally distributed and can also be used to assess the goodness of fit of data to any distribution. 4.2 Analysis of Multiple Stocks Returns As a majority of financial data is multivariate, it is advantageous to perform comparative analysis of multiple security returns. In some cases, one has to compare one series with another. In other cases, many variables might have to be simultaneously measured to capture the complex nature of the relationship among variables. Comparing the complexities of these factors gives the analyst a more detailed account of the relationships between selected returns, thus allowing for a better interpretation of their values and behaviors. In this section, we first compare the performance of one asset with another using graphs, then compute descriptive statistics as well as correlation matrices. 
The two most commonly used graphs for comparing the historical performance of more than one stock/ETF are time series plots of normalized prices and cumulative returns. The next two functions take four arguments as defined in Section 1 and compute normalized prices and cumulative returns. We get normalized prices and cumulative returns and plot them for three stocks (Facebook, Inc. (FB), Costco Wholesale Corporation (COST) and Walmart Inc. (WMT)) over the period May 1, 2000, to May 30, 2019. Besides graphs, we can also compute descriptive statistics and compare their performance. We define a function for that purpose. It takes four arguments as defined in Section 1 and returns a table with different types of descriptive statistics. For example, we download historical data and compute different descriptive statistics for three stocks (Walmart Inc. (WMT), Apple Inc. (AAPL) and Microsoft Corporation (MSFT)) using monthly data over the period January 1, 2010, to March 30, 2019. Similarly to how we calculated an individual stock's monthly statistics in Section 4.1, we define the function to compute monthly statistics for more than one stock given the arguments: stock ticker symbols, start date, end date and a statistical function such as , or . For example, we compute the monthly cumulative returns for four stocks (Walmart Inc. (WMT), Apple Inc. (AAPL), Microsoft Corporation (MSFT) and Netflix, Inc. (NFLX)) over the period January 1, 2010, to June 30, 2019, and create a dataset. The first column represents the year and month, with the first four digits for the year and the last two digits for the month. Similarly, box-and-whisker charts, paired histograms, paired smooth histograms and matrix scatterplots are often used to examine multivariate data. The function can be used to make a box plot that gives a glimpse of the distribution of the given dataset. You can see the statistical information by hovering over the boxes in the plot. 
The and functions are used to create paired histogram and smooth distribution plots. They can be used to compare how two datasets are distributed. The function from the Statistical Plots package can be used to make scatter plots of multivariate data. It creates scatter plots comparing the data in each column against the other columns. More complex analysis of multivariate data can be done using functions from the Multivariate Statistics package. The package contains functions to compute descriptive statistics for multivariate data and distributions derived from the multivariate normal distribution. All these functions are well explained in the official documentation. 5. Rolling-Window Performance Analysis Rolling-window performance analysis is a simple technique to assess the variability of statistical performance measures. For example, if we want to assess the stability of the mean or standard deviation of returns on a stock over time, we can choose a rolling window (the number of consecutive observations per window), estimate the mean or standard deviation and plot the series of estimates. A little fluctuation is normal, but large fluctuations indicate a shift in the values of the estimate. Built-in functions such as , and are useful for rolling-window performance analysis. In this section, we are going to show a few examples of how to compute rolling-window-based performance statistics. We define a function that can be used to plot rolling-window statistics given its five inputs: stock ticker symbol, start date, end date, size of window in days and function to apply, which can be any built-in or user-defined function. For example, we compute and plot the 90-day rolling mean to standard deviation ratio on Walmart’s daily stock returns over the period January 1, 2001, to March 30, 2019. Similarly, we define the function to compute the rolling correlation of two series and apply it together with to the desired data. 
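A minimal sketch of the rolling-window idea (names are ours, not the article's): slide a fixed-size window over the series and apply a function, here the mean-to-standard-deviation ratio used in the example.

```python
# Sketch of rolling-window statistics over a return series.
from statistics import mean, stdev

def rolling_apply(series, window, func):
    return [func(series[i:i + window]) for i in range(len(series) - window + 1)]

def mean_to_sd(xs):
    return mean(xs) / stdev(xs)

returns = [0.01, -0.02, 0.015, 0.005, -0.01, 0.02, 0.0]
print(rolling_apply(returns, 3, mean_to_sd))  # one value per 3-observation window
```

A rolling correlation works the same way, applied to aligned windows of two series.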
We use the function to plot the 90-day rolling correlation of daily returns on two stocks, WMT and COST, for the period from March 30, 2009, to March 30, 2019. Sometimes, it is also useful to store these time-varying descriptive statistics as a dataset so that we can use them in subsequent analysis. The function, given its input, computes the geometric mean, standard deviation and the ratio of the arithmetic mean to the standard deviation on a rolling-window basis. For example, we compute the 90-day rolling-window geometric mean (GM), standard deviation (Std. Dev.) and arithmetic mean to standard deviation ratio (AM/Std. Dev.) of Walmart's daily stock returns over the period July 1, 2018, to October 30, 2019. You can scroll through the dataset. 6. Portfolio Optimization Mean-variance analysis is one of the foundations of financial economics. Portfolio optimization is essential, whether it be in professional or personal financial planning. In this section, we are going to show how to implement the most commonly used optimization techniques in finance using historical returns. We want to point out that future returns on investment depend on expected returns and other conditioning information, not on past returns. Past returns are used only for illustration and do not guarantee future returns. Define the following variables: the risk-free rate; the proportion of wealth invested in each security; the average return on each security; the variance of each security; the covariance between each pair of securities; and the correlation between each pair of securities. Then we define the vectors of mean returns and weights and the covariance matrix. The formulas for the portfolio mean and variance are and , respectively. The corresponding Mathematica code is and . In order to compute portfolio statistics, we need returns data. We can use the function to download historical returns data. It takes four arguments as defined in Section 1 and gives a matrix of returns. 
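The standard portfolio mean and variance formulas (the dot products of the weight vector with the mean vector and with the covariance matrix) can be sketched with numpy; `portfolio_stats` and the synthetic data are illustrative, not the article's function.

```python
# Sketch of portfolio mean, variance, standard deviation and Sharpe ratio.
import numpy as np

def portfolio_stats(returns, weights, rf=0.0):
    """returns: T x N matrix of periodic returns; weights: length-N vector."""
    w = np.asarray(weights)
    mu = returns.mean(axis=0)              # vector of mean returns
    sigma = np.cov(returns, rowvar=False)  # sample covariance matrix
    p_mean = w @ mu
    p_var = w @ sigma @ w
    p_sd = np.sqrt(p_var)
    sharpe = (p_mean - rf) / p_sd
    return p_mean, p_var, p_sd, sharpe

rng = np.random.default_rng(0)
R = rng.normal(0.01, 0.05, size=(120, 3))  # 120 months, 3 synthetic assets
print(portfolio_stats(R, [0.5, 0.3, 0.2]))
```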
Most functions in this section use the function, so run it before you run the other functions. To compute basic portfolio statistics such as portfolio mean, variance, standard deviation and Sharpe ratio, we can use , which takes six arguments. The first four arguments are as defined in Section 1 and the other two are a list of weights and the optional risk-free rate. For example, we compute the portfolio mean, variance, standard deviation and Sharpe ratio for the portfolio that consists of the stock returns of five companies: Apple (AAPL), Walmart (WMT), Boeing (BA), 3M (MMM) and Exxon Mobil (XOM), using monthly returns over the period January 1, 2009, to May 30, 2019. The function plots the Markowitz portfolio frontier; it takes a matrix of returns obtained from as its only argument. The function uses the fact that any two efficient portfolios are enough to establish the whole portfolio frontier, as first proved by Black [5]. It accepts any plotting option. For example, we plot the portfolio frontier for the portfolio that consists of the stock returns of five companies: Apple (AAPL), Walmart (WMT), Boeing (BA), 3M (MMM) and Exxon Mobil (XOM), using monthly returns over the period January 1, 2009, to May 30, 2019. Next, we solve two kinds of portfolio problems: the global minimum variance portfolio and the tangency portfolio. In terms of the notation defined earlier in this section, the global minimum variance portfolio can be obtained by minimizing subject to and solving for . Its solution can be obtained with the built-in Mathematica function . The function computes the weights, returning the portfolio allocation across the stocks considered for a global minimum variance portfolio. We compute the global minimum variance portfolio weights using the monthly stock returns of five companies: Apple (AAPL), Walmart (WMT), Boeing (BA), 3M (MMM) and Exxon Mobil (XOM), over the period January 1, 2009, to May 30, 2019. 
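As a sketch of the global minimum variance problem (our own helper, not the article's function): the minimizer of portfolio variance subject to fully invested weights has the well-known closed form obtained by solving a linear system with the covariance matrix.

```python
# Global minimum variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1),
# the solution of: minimize w' Sigma w subject to sum(w) = 1.
import numpy as np

def gmv_weights(sigma):
    ones = np.ones(sigma.shape[0])
    x = np.linalg.solve(sigma, ones)
    return x / x.sum()

sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])  # illustrative covariance matrix
w = gmv_weights(sigma)
print(w, w.sum())  # weights sum to 1
```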
Similarly, the tangency portfolio can be obtained by maximizing , where is the risk-free rate (a constant in this case), subject to and solving for . The solution uses the built-in Mathematica function . The function computes the tangency portfolio weights given its five inputs, four as defined in Section 1 and the risk-free rate. Assuming a monthly risk-free rate of 0.1667 percent and using monthly data over the period January 1, 2009, to May 30, 2019, we calculate the tangency portfolio weights for our portfolio of five stocks: Apple (AAPL), Walmart (WMT), Boeing (BA), 3M (MMM) and Exxon Mobil (XOM). Portfolio optimization using the Wolfram Language is very flexible. We can formulate any kind of portfolio and use built-in functions such as , or to get numerical solutions to the portfolio problem. 7. Portfolio Risk Decomposition In this section, we concentrate on how to decompose a measure of portfolio risk (portfolio standard deviation) into the risk contributions from the individual assets included in the portfolio. This helps to see how individual assets influence portfolio risk. When risk is measured by standard deviation, we can use Euler's theorem to decompose risk into asset-specific risk contributions. Euler's theorem provides an additive decomposition of a homogeneous function. For reference, see Campolieti and Makarov [6]. Using Euler's theorem, we can define the percentage contribution to portfolio standard deviation of an asset as , where is the marginal contribution of the asset, is the weight of the asset, and is the portfolio standard deviation. We define the function for portfolio risk decomposition. It takes five arguments, four as defined in Section 1 and a list of portfolio weights; it returns a bar chart representing each individual asset's risk contribution to the portfolio standard deviation. 
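The Euler decomposition just described can be sketched as follows (an illustrative helper): the fractional contribution of asset i to portfolio standard deviation is w_i (Sigma w)_i / (w' Sigma w), and the contributions sum to one because standard deviation is homogeneous of degree one in the weights.

```python
# Sketch of Euler risk decomposition into per-asset contributions.
import numpy as np

def risk_contributions(sigma, weights):
    w = np.asarray(weights)
    port_var = w @ sigma @ w
    marginal = sigma @ w            # proportional to d(sd)/dw
    return w * marginal / port_var  # fractional contributions

sigma = np.array([[0.04, 0.01], [0.01, 0.09]])  # illustrative covariance matrix
rc = risk_contributions(sigma, [0.6, 0.4])
print(rc, rc.sum())  # contributions sum to 1.0
```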
We calculate the risk contribution of each asset in a portfolio that consists of five stocks using the historical monthly returns over the period January 30, 2010, to May 30, 2019, and make a bar chart. 8. Factor Models Currently, factor models are widely accepted and used in finance to construct portfolios, to evaluate portfolio performance and for risk analysis. Factor models are regression models. We can use the built-in function to estimate and evaluate the appropriateness of the regression models. In addition, we can download all factor data directly from Prof. Kenneth French's data library. Before we apply factor models to real-world data, we need data in the form , where is a matrix of values of independent variables and is a vector of values of a dependent variable. Commonly used factor models are summarized in Table 1. We can find more about the factor models in Fama and French [7]. We use the following notation: for the excess return on the security or portfolio, MKT for the excess return on the value-weighted market portfolio, SMB for the return on a diversified portfolio of small-capitalization stocks minus the return on a diversified portfolio of large-capitalization stocks, HML for the difference in the returns on diversified portfolios of high-book-to-market stocks and low-book-to-market stocks, MOM for the difference in returns on diversified portfolios of the prior year's winners and losers, RMW for the difference between the returns on diversified portfolios of stocks with robust and weak profitability, CMA for the difference between the returns on diversified portfolios of the stocks of low- and high-investment firms, for the risk-adjusted return on the security or portfolio and , , , , and for the betas or factor loadings. 
Capital asset pricing model (CAPM): CAPM = alpha + beta_MKT MKT
Fama-French three-factor model: FF3 = CAPM + beta_SMB SMB + beta_HML HML
Carhart four-factor model: C4 = FF3 + beta_MOM MOM
Fama-French five-factor model: FF5 = FF3 + beta_RMW RMW + beta_CMA CMA
Table 1. Overview of factor models for the excess return r - rf. Before we estimate these models using real data, we need the factors data. We download the five factors and the momentum factor data from Kenneth French's website and define variables to store them. The function takes only two arguments, the start date and end date, and returns time series of all factors data. The start date and end date must be specified as date objects. Similarly, the function can be used to get a stock's monthly returns data. It takes three arguments: a ticker symbol of any publicly traded company, and the start and end dates for the analysis period. 
Using and , we define to combine the factors and returns data. The function takes four arguments (the symbol of the stock for which we want to estimate the factor model, the start date, the end date and an integer that represents the number of factors) and returns a data matrix suitable for . Next, we estimate different factor models for Apple's stock using monthly data from October 1, 2008, to March 30, 2019. We estimate the capital asset pricing model (CAPM) with the market factor (MKT). We estimate the Fama-French three-factor model with market, size and value factors (MKT, SMB, HML). Similarly, we estimate the Carhart four-factor model with market, size, value and momentum factors (MKT, SMB, HML, MOM). Finally, the Fama-French five-factor model with market, size, value, profitability and investment factors (MKT, SMB, HML, RMW, CMA) is estimated as follows. Once the model is estimated, we can access different properties related to the data and fitted models. To assess how well the model fits the data and how well the model meets the assumptions, there are many built-in functions. To learn more about obtaining diagnostic information, see the properties of . 9. Stock Portfolio Performance Measures There are various measures to evaluate the performance and risk of portfolios. Most of these measures are used to evaluate a portfolio of interest against a chosen benchmark, by taking a snapshot of the past or considering the entire historical picture. We will compute some common metrics often employed by investors while analyzing performance measures. All the computations are based on the formulas developed in Bacon [8]. The most common measures are summarized in the function . The function takes four arguments: ticker symbols for the test assets and the benchmark asset, the periodic risk-free rate, the time period and the frequency of data. We repeat the definition of the function defined earlier to make this section self-contained. 
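Estimating any of these factor models is ordinary least squares of excess returns on the chosen factors. As a hedged sketch (synthetic data and numpy's `lstsq` stand in for the article's built-in regression on real Kenneth French data):

```python
# Sketch: OLS estimation of a three-factor model on synthetic data.
import numpy as np

rng = np.random.default_rng(42)
T = 240
mkt = rng.normal(0.005, 0.04, T)   # synthetic market excess returns
smb = rng.normal(0.001, 0.02, T)   # synthetic size factor
hml = rng.normal(0.001, 0.02, T)   # synthetic value factor
true_alpha, betas = 0.002, np.array([1.1, 0.4, -0.3])
excess_r = true_alpha + np.column_stack([mkt, smb, hml]) @ betas \
    + rng.normal(0, 0.01, T)       # stock excess returns with noise

X = np.column_stack([np.ones(T), mkt, smb, hml])  # intercept column = alpha
coef, *_ = np.linalg.lstsq(X, excess_r, rcond=None)
print("alpha:", coef[0], "betas:", coef[1:])
```

Dropping or adding factor columns in `X` gives the CAPM, four-factor or five-factor variants.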
The next example uses stocks, although these measures are also used to evaluate the performance of portfolios, mutual funds and exchange-traded funds. We evaluate the performance of three stocks (Walmart Inc. (WMT), Apple Inc. (AAPL) and Microsoft Corporation (MSFT)) against the S&P 500 index (^SPX) using 0.0016 as a monthly risk-free rate and month as the data frequency over the period January 1, 1995, to March 30, 2019. 10. Interactive Graphics and Technical Analysis of Stock Prices Short-term traders commonly use interactive graphics and technical indicators of stock prices to profit from stocks that may be overbought or oversold. Much is based on market sentiment, but also on market timing. When a stock is oversold, the price is low and people want to buy. In comparison, when a stock is overbought, the price is at the high end of, or above, its normal range, and people do not want to buy, or may want to short sell. Many technical indicators are used to determine a given stock's peak or bottom price and how to take advantage of that information. Three of the most useful functions for technical analysis of stocks are , and . The documentation provides a comprehensive set of examples of how to use them. We show one example of how to use the function and one example of how to use . You can choose the chart type and choose from over 100 technical indicators, which are divided into eight groups, such as moving average and market strength. The function displays all the available technical indicators. Here is the basic format: Alternatively, use: The time series data must be of the form: The historical open, high, low, close and volume data retrieved from can also be used as data input. The function has many options that can be used to enhance graphics. We produce a chart using historical prices of Apple’s stock and volume over the period January 1, 2018, to March 30, 2019. The top of the chart shows a plot of historical prices and the 50-day and 200-day moving averages. The second part shows the historical volume. 
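The moving averages drawn in the chart can be sketched directly (a simple moving average; the helper name and toy data are illustrative only):

```python
# Sketch: simple moving average, as used for the 50-day and 200-day overlays.
def sma(prices, window):
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

prices = list(range(1, 11))   # 1, 2, ..., 10 as toy "prices"
print(sma(prices, 3))         # 3-period moving average
```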
The last two parts show plots of two indicators, the commodity channel index and the relative strength index. A good introduction to technical indicators can be found in standard references, including at the Fidelity Learning Center [9]. The function provides a point-and-click interactive chart, with a similar setup: Alternatively, use: For example, we make a chart showing prices, volume and indicators for historical data of Apple’s stock over the period January 1, 2018, to March 30, 2019. The function provides a user-friendly environment where you can drag a slider to view different parts of the chart or choose different indicators with point-and-click. 11. Essentials of Bond Mathematics A bond is a long-term debt instrument in which a borrower agrees to make payments of principal and interest, on specific dates, to the holders of the bond. When it comes to the analysis and pricing of bonds and computing returns, convexity and duration are important concepts. When a bond is traded between coupon payment dates, its price has two parts: the quoted price and the accrued interest. The quoted price is net of accrued interest and does not include it. Accrued interest is the pro-rated share of the next coupon payment. The full price is the price of a bond including any interest that has accrued since issue or since the most recent coupon payment. Similarly, yield to maturity is the rate of return earned on a bond if it is held to maturity. Duration is a measure of the average length of time for which money is invested in a coupon bond. Convexity estimates the change in the bond price given a change in the yield to maturity, assuming a nonlinear relationship between the two. The built-in functions and can be used to compute various properties including the value of the bond, accrued interest, yield, duration, modified duration, convexity and so on. 
This section provides a few examples of how to use and how the concepts of bond convexity and duration can be used in bond portfolio management. We discuss zero-coupon bonds first. A zero-coupon bond does not make coupon payments. The only cash payment is the face value of the bond on the maturity date. The yield to maturity () for a zero-coupon bond with periods to maturity, current price and face value can be obtained by solving . For example, we compute the yield to maturity of a zero-coupon bond with a $10,000 face value, time to maturity 4 years and current price $9,662 using and . Similarly, can also be used to compute the yield to maturity of a coupon-paying bond. For example, we compute the yield to maturity of a $1,000 par value 10-year bond with 5% semiannual coupons issued on June 20, 2013, with a maturity date of June 20, 2023, selling for $920 on September 15, 2018. can also be used to compute the price, duration, modified duration and convexity of a bond. For example, we compute those values for a bond with 8% yield, 8% annual coupons, 10-year maturity and $1,000 face value. There are different approaches to bond portfolio management. We concentrate here on a liability-driven portfolio strategy, in which the characteristics of the bonds that are held in the portfolio are coordinated with those of the liabilities the investor is obligated to pay. The matching techniques can range from an attempt to exactly match the levels and timing of the required cash payments to more general approaches that focus on other investment characteristics, such as setting the average duration or convexity of the bond portfolio equal to that of the underlying liabilities. One specific example would be to construct the portfolio so that the duration of the bond portfolio is equal to the duration of the cash obligation and the total money invested in the bond portfolio today is equal to the present value of the future cash obligations. 
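The bond formulas used in this section can be sketched in plain Python (illustrative helpers, not Mathematica's built-in bond functions): zero-coupon yield, annual-coupon bond price, Macaulay duration and the duration-matching weights of a two-bond immunized portfolio.

```python
# Sketch of elementary bond mathematics with annual periods.
def zero_coupon_ytm(price, face, n):
    # solve price = face / (1 + y)^n for y
    return (face / price) ** (1.0 / n) - 1

def bond_price(face, coupon_rate, ytm, years):
    c = face * coupon_rate  # annual coupon payment
    return sum(c / (1 + ytm) ** t for t in range(1, years + 1)) \
        + face / (1 + ytm) ** years

def macaulay_duration(face, coupon_rate, ytm, years):
    c = face * coupon_rate
    weighted = sum(t * c / (1 + ytm) ** t for t in range(1, years + 1)) \
        + years * face / (1 + ytm) ** years
    return weighted / bond_price(face, coupon_rate, ytm, years)

def immunization_weights(d1, d2, target_duration):
    # solve w1*d1 + w2*d2 = target and w1 + w2 = 1
    w1 = (target_duration - d2) / (d1 - d2)
    return w1, 1 - w1

# the text's examples: a 4-year zero with face $10,000 priced at $9,662,
# and an 8%-annual-coupon, 10-year, $1,000-face bond at an 8% yield
print(zero_coupon_ytm(9662, 10000, 4))
print(bond_price(1000, 0.08, 0.08, 10))         # trades at par
print(macaulay_duration(1000, 0.08, 0.08, 10))  # about 7.25 years
```

With the durations 11.88 and 6.75 from the immunization example that follows, `immunization_weights(11.88, 6.75, 10)` allocates about 63% to the first bond.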
To illustrate the concept of bond portfolio management, assume that we have an obligation to pay $1,000,000 in 10 years and there are two bonds available for investment. The first bond matures in 30 years with a $100 face value and an annual coupon payment of 6%. The second bond matures in 10 years with a $100 face value and an annual coupon payment of 5%. The yield to maturity is 9% on both bonds. We can decide how much to invest in each bond so that the overall portfolio is immunized against changes in the interest rate. We compute the duration of each bond using , which gives that the duration of bond 1 is 11.88 and that of bond 2 is 6.75. Assuming that the proportions of money invested in bonds 1 and 2 are and , the immunized portfolio is found by solving the simultaneous equations: These two equations can be solved using . The result shows how much money should be allocated to each bond. More examples can be found in Benninga [10]. A more general approach to bond portfolio management can be implemented using linear programming. It is beyond the scope of this article to introduce linear programming. 12. Binomial and Black-Scholes-Merton Stock Option Pricing Models The most popular option pricing models are the binomial model and the Black-Scholes-Merton option pricing formulas for European options. In the next two subsections, we discuss these models and their implementation. 12.1 Binomial Option Pricing Model Following the notation from Hull [11], define: S_0: the current stock price; K: the strike price; rf: the annual risk-free rate; sigma: the annual standard deviation of the stock returns; T: the time to expiration of the option in years; n: the total number of up and down moves; j: the number of upward moves, so that n - j is the number of down moves on the tree. 
In terms of these variables, we can define: the time per period, the up factor, the down factor, the probability of an up move, the probability of a down move, the stock price at each node, the payoff from a European call and the payoff from a European put. In the risk-neutral world, the prices of the call and put using the -period binomial option pricing model can be computed as: The functions and calculate the prices of European call and put options; they output the option price. Each function takes six arguments as defined at the beginning of this subsection. We use these functions to find the prices of call and put options when the current stock price is $50, the strike price is $45, the annual volatility is 40%, the risk-free rate is 10%, the time to maturity is half a year and the total number of up and down moves is 500. 12.2 Black-Scholes-Merton Option Pricing Model Similar to the binomial option pricing formula defined in the last subsection, we follow Hull [11] to explain the Black-Scholes-Merton option pricing formulas. Define the variables: c_0: the current call option value; p_0: the current put option value; S_0: the current stock price; K: the exercise price; T: the time in years until the option expires; rf: the annual risk-free interest rate; sigma: the annual standard deviation of the rate of return of the underlying stock; delta: the annual dividend yield of the underlying stock. Furthermore, assume that is the standard normal density function (where e ≈ 2.71828 is the base of the natural logarithm) and let be the standard normal cumulative distribution function, so that denotes the probability that a random variable drawn from a standard normal distribution is less than . Then the call and put values can be computed as follows. Table 2 summarizes the price sensitivity measures of call and put options (denoted by Greek symbols) with respect to their major price determinants; here stands for the value of the option. 
Delta (Δ), evaluated as ∂V/∂S: estimated change in value relative to a change in the underlying asset price.
Vega (υ), evaluated as ∂V/∂σ: estimated change in value relative to a change in the underlying asset volatility.
Rho (ρ), evaluated as ∂V/∂r: estimated change in value relative to a change in the risk-free interest rate.
Theta (θ), evaluated as ∂V/∂T: estimated change in value relative to the time to expiration (for a negative change).
Gamma (Γ), evaluated as ∂²V/∂S²: estimated change in delta relative to a change in the underlying asset price.
Table 2. Greeks as measures of sensitivity. The built-in Mathematica function computes the values and other price sensitivity measures for common types of derivative contracts. The function can compute the value of an option, any of delta, gamma, theta and vega, as well as the implied volatility of the contract. The Mathematica documentation provides many examples of how to use . Here are the first 10 of a list of 101 available contracts. For example, we compute the price and Greeks of a European-style put option with strike price $50, expiration date 0.3846 years, interest rate 5%, annual volatility 20%, no annual dividend and current price $49. Similarly, we compute the implied volatility of an American-style call option with the same values of the parameters. One interesting application of is to get real-world data and compute related option measures. 
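The two pricing models of Section 12 can be sketched in Python (illustrative function names, not the article's Wolfram Language definitions): an n-period binomial tree in the Cox-Ross-Rubinstein parameterization, and the Black-Scholes-Merton formulas with a continuous dividend yield.

```python
# Sketch of binomial and Black-Scholes-Merton European option pricing.
import math

def binomial_option(s0, k, rf, sigma, t, n, kind="call"):
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))    # up factor
    d = 1 / u                              # down factor
    p = (math.exp(rf * dt) - d) / (u - d)  # risk-neutral up-move probability
    payoff = (lambda s: max(s - k, 0.0)) if kind == "call" else (lambda s: max(k - s, 0.0))
    value = sum(math.comb(n, j) * p ** j * (1 - p) ** (n - j)
                * payoff(s0 * u ** j * d ** (n - j))
                for j in range(n + 1))
    return math.exp(-rf * t) * value

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def bsm(s0, k, rf, sigma, t, q=0.0):
    # q is the continuous dividend yield (delta in the text)
    d1 = (math.log(s0 / k) + (rf - q + sigma ** 2 / 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    call = s0 * math.exp(-q * t) * norm_cdf(d1) - k * math.exp(-rf * t) * norm_cdf(d2)
    put = k * math.exp(-rf * t) * norm_cdf(-d2) - s0 * math.exp(-q * t) * norm_cdf(-d1)
    return call, put

# binomial example: S0 = 50, K = 45, sigma = 40%, rf = 10%, T = 0.5, n = 500
print(binomial_option(50, 45, 0.10, 0.40, 0.5, 500, "call"))
# Black-Scholes-Merton example: S0 = 49, K = 50, rf = 5%, sigma = 20%, T = 0.3846
call, put = bsm(49, 50, 0.05, 0.20, 0.3846)
print(call, put)
```

Both prices satisfy European put-call parity, which is a convenient sanity check on any implementation.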
We define a function that computes the theoretical value of options and their Greeks; it takes five arguments: the ticker symbol, the strike price, the expiration date, the risk-free rate and the option type. We compute the European-style option parameters for Boeing (BA), assuming that the option expires on December 28, 2020, with exercise price $145 and risk-free rate 0.0187. Make sure that the expiration date is later than the current date, since the function uses historical data. We strongly encourage you to explore the built-in or online documentation for the powerful function. 13. Conclusion The article provides a brief overview of built-in functions and introduces many functions especially designed for the analysis of financial data. In particular, we have focused on functions that are most relevant to introductory computational finance concepts. We emphasize importing company fundamental data and its visualization, analysis of individual stock and portfolio returns, factor models and the use of built-in functions for bond and financial derivative analysis. The functions we have provided are just a few examples. The Wolfram Language can do much more than what we have shown in this article. Interested readers can start exploring the Wolfram Language via Mathematica's extensive documentation. [1] H. R. Varian, ed., Economic and Financial Modeling with Mathematica, New York: Springer-Verlag, 1993. [2] H. R. Varian, ed., Computational Economics and Finance: Modeling and Analysis with Mathematica, New York: Springer-Verlag, 1996. [3] W. Shaw, Modelling Financial Derivatives with Mathematica, Cambridge, UK: Cambridge University Press, 1998. [4] S. Stojanovic, Computational Financial Mathematics Using MATHEMATICA: Optimal Trading in Stocks and Options, Boston: Birkhäuser, 2003. [5] F. Black, Capital Market Equilibrium with Restricted Borrowing, Journal of Business, 45(3), 1972, pp. 444-455. [6] G. Campolieti and R. 
Makarov, Financial Mathematics: A Comprehensive Treatment, London: Chapman and Hall/CRC Press, 2014. [7] E. F. Fama and K. R. French, A Five-Factor Asset Pricing Model, Journal of Financial Economics, 116(1), 2015, pp. 1-22. doi:10.1016/j.jfineco.2014.10.010. [8] C. R. Bacon, Practical Risk-Adjusted Performance Measurement, 2nd ed., Hoboken: Wiley, 2013. [9] Fidelity Learning Center. Technical Indicator Guide. (Jul 29, 2020) [10] S. Benninga, Financial Modeling, 4th ed., Cambridge, MA: The MIT Press, 2014. [11] J. C. Hull, Options, Futures and Other Derivatives, 10th ed., New York: Pearson Education Limited, 2018. R. Adhikari, Foundations of Computational Finance, The Mathematica Journal, 2020. About the Author Ramesh Adhikari is an assistant professor of finance at Humboldt State University. Prior to coming to HSU, he taught undergraduate and graduate students at Tribhuvan University and worked at the Central Bank of Nepal. He was also a research fellow at Osaka Sangyo University, Osaka, Japan. He earned a Ph.D. in Financial Economics from the University of New Orleans. He is interested in the areas of computational finance and high-dimensional statistics. Ramesh Adhikari, School of Business, Humboldt State University, 1 Harpst Street, Arcata, CA 95521

Sectional Curvature in Riemannian Manifolds
Thu, 19 Mar 2020 15:08:19 +0000

The metric structure on a Riemannian or pseudo-Riemannian manifold is entirely determined by its metric tensor, which has a matrix representation in any given chart. Encoded in this metric is the sectional curvature, which is often of interest to mathematical physicists, differential geometers and geometric group theorists alike. In this article, we provide a function to compute the sectional curvature for a Riemannian manifold given its metric tensor. We also define a function to obtain the Ricci tensor, a closely related object. 
A Riemannian manifold is a differentiable manifold together with a Riemannian metric tensor that takes any point in the manifold to a positive-definite inner product function on its tangent space, which is a vector space representing geodesic directions from that point [1]. We can treat this tensor as a symmetric matrix with entries denoted by representing the relationship between tangent vectors at a point in the manifold, once a system of local coordinates has been chosen [2, 3]. In the case of a parameterized surface, we can use the parameters to compute the full metric tensor. A classical parametrization of a surface is the standard parameterization of the sphere. We compute the metric tensor of the standard sphere below. This also works for more complicated surfaces. The following is an example taken from [4]. Denoting the coordinates by , we can then define , where the are functions of the coordinates ; this definition uses Einstein notation, which will also apply wherever applicable in the following. From this surprisingly dense description of distance, we can extract many properties of a given Riemannian manifold, including sectional curvature, which will be given an explicit formula later. In particular, two-dimensional manifolds, also called surfaces, carry a value that measures at any given point how far they are from being flat. This value can be positive, negative or zero. For intuition, we give examples of each of these types of behavior. The sphere is the prototypical example of a surface of positive curvature. Any convex subspace of Euclidean space has zero curvature everywhere. The monkey saddle is an example of a two-dimensional figure with negative curvature. Sectional curvature is a locally defined value that gives the curvature of a special type of two-dimensional subspace at a point, where the two dimensions defining the surface are input as tangent vectors. 
Manifolds may have points that admit sections of both negative and positive curvature simultaneously, as is the case for the Schwarzschild metric discussed in the section Applications in Physics. An important property of sectional curvature is that on a Riemannian manifold it varies smoothly with respect to both the point in the manifold being considered and the choice of tangent vectors. Sectional curvature is given by $K(u,v) = \dfrac{R(u,v,v,u)}{\lVert u \wedge v \rVert^2}$, where $\lVert u \wedge v \rVert^2 = \langle u,u \rangle \langle v,v \rangle - \langle u,v \rangle^2$. In this formula, $R$ represents the purely covariant Riemannian curvature tensor, a function on tangent vectors that is completely determined by the $g_{ij}$. Both $R$ and the $g_{ij}$ are treated more thoroughly in the following section, as well as in [1]. Some immediate properties of the curvature formula are that $K$ is symmetric in its two entries, is undefined if the vectors $u$ and $v$ are linearly dependent, and does not change when either vector is scaled. Moreover, any two tangent vectors that define the same subspace of the tangent space give the same value. This is important because curvature should only depend on the embedded surface itself and not on how it was determined. While we are primarily concerned with Riemannian manifolds, it is worth noting that all calculations are valid for pseudo-Riemannian manifolds, in which the assumption that the metric tensor is positive-definite is dropped. This generalization is especially important in areas such as general relativity, where the metric tensors that represent spacetime have a different signature than that of traditional Riemannian manifolds. We explore this connection more in the section Applications in Physics.

Coordinate Systems and the Representation of the Metric Tensors

For a differentiable manifold, an atlas is a collection of homeomorphisms, called charts, from open sets in Euclidean space to the manifold, such that overlapping charts can be made compatible by a differentiable transition map between them.
Via these homeomorphisms, we can define coordinates in an open set around any point by adopting the coordinates in the corresponding Euclidean neighborhood. By convention, these coordinates are labelled $x^i$, and unless important, we omit the point giving rise to the coordinates. In some cases of interest, it is possible to adopt a coordinate system that is valid over the whole manifold. From such a coordinate system, whether local or global, we can define a basis for the tangent space using a coordinate frame [5]. This will be the basis consisting of the partial derivative operators in each of the coordinate directions, that is, $\{\partial/\partial x^1, \ldots, \partial/\partial x^n\}$. Considering the tangent space as a vector space, this set is sometimes referred to in mathematical physics as a holonomic basis for the manifold. We use this expression then to define the symmetric matrix $g$ by the following expression for $g_{ij}$:

$g_{ij} = \left\langle \frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j} \right\rangle$

From here, we define one more tensor of interest for the purposes of calculating curvature. Using Einstein notation, the Riemannian curvature tensor is

$R^l{}_{ijk} = \partial_j \Gamma^l_{ki} - \partial_k \Gamma^l_{ji} + \Gamma^l_{js} \Gamma^s_{ki} - \Gamma^l_{ks} \Gamma^s_{ji}$

The various $\Gamma^l_{jk}$ are the Christoffel symbols, for which code is presented in the next section. In light of these definitions, we recall sectional curvature once again from the introduction as the following, now considering the special case of the tangent vectors being chosen in coordinate directions:

$K_{ij} = \frac{R(\partial_i, \partial_j, \partial_j, \partial_i)}{\lVert \partial_i \rVert^2 \lVert \partial_j \rVert^2 - \langle \partial_i, \partial_j \rangle^2}$

The norm in the denominator is the norm of the tangent vector associated to that partial derivative in the holonomic basis, which is induced by the associated inner product from $g$.

Sectional Curvature

We now create functions to compute these tensors and sectional curvature itself. These values depend on a set of coordinates and a Riemannian metric tensor, so that will be the information that serves as the input for these functions. Coordinates should be a list of coordinate names, and the metric should be a square symmetric matrix whose size matches the length of the coordinate list.
Some not inconsiderable inspiration for the first half of this code was taken from Professor Leonard Parker’s Mathematica notebook “Curvature and the Einstein Equation,” which is available online as a supplement to [6]. We can now define a function for the Christoffel symbols from the previous section. This calculation consists of taking partial derivatives of the metric tensor components and one tensor operation. In Mathematica, the dot product, typically used for vectors and matrices, is also able to take tensors and contract indices. We can now use the formulas stated in the second section to define both the covariant and contravariant forms of the Riemannian curvature tensor. We perform one more tensor operation using the dot product to transform our partially contravariant tensor into one that is purely covariant. Both of these will be called at various points later. The full function to return the sectional curvatures consists of computing a scaled version of the covariant Riemannian curvature tensor. The output consists of a symmetric matrix with zero diagonal entries representing curvatures in the coordinate directions. These diagonal values should not be taken literally, as curvature is undefined given two linearly dependent directions. While this of course does not give all possible sectional curvatures, one may perform a linear transformation on the basis in order to obtain a new metric tensor with arbitrary (linearly independent) vectors as basis elements. From here, the new tensor may be used for computation. Here is an example with diagonal entries that are functions of the last coordinate. Any good computation in mathematics must stand up to scrutiny against known cases, so we evaluate our function with the input of hyperbolic 3-space. The two in the exponent should be imagined as the squaring of the exponential function. Checking with [7] verifies that this is indeed a global metric tensor for hyperbolic 3-space.
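The original notebook code is not reproduced in this text-only version, so here is a hedged illustration of the same pipeline in Python with SymPy rather than the authors' Mathematica implementation (function names and sign conventions are ours, chosen so that the checks below come out right): Christoffel symbols, the purely covariant curvature tensor, and coordinate-direction sectional curvatures, verified against hyperbolic 3-space in its upper half-space model, which has constant curvature $-1$:

```python
import sympy as sp

def christoffel(g, coords):
    """Christoffel symbols G[l][j][k] = Gamma^l_{jk} of the metric g."""
    n = len(coords)
    ginv = g.inv()
    return [[[sp.simplify(sum(ginv[l, s] * (sp.diff(g[s, j], coords[k])
                                            + sp.diff(g[s, k], coords[j])
                                            - sp.diff(g[j, k], coords[s]))
                              for s in range(n)) / 2)
              for k in range(n)] for j in range(n)] for l in range(n)]

def riemann_covariant(g, coords):
    """Purely covariant curvature tensor R[m][i][j][k] = g_{ml} R^l_{ijk}."""
    n = len(coords)
    G = christoffel(g, coords)
    up = [[[[sp.diff(G[l][k][i], coords[j]) - sp.diff(G[l][j][i], coords[k])
             + sum(G[l][j][s] * G[s][k][i] - G[l][k][s] * G[s][j][i] for s in range(n))
             for k in range(n)] for j in range(n)] for i in range(n)] for l in range(n)]
    return [[[[sp.simplify(sum(g[m, l] * up[l][i][j][k] for l in range(n)))
               for k in range(n)] for j in range(n)] for i in range(n)] for m in range(n)]

def sectional(g, coords, i, j):
    """Sectional curvature of the plane spanned by the coordinate directions i, j."""
    R = riemann_covariant(g, coords)
    return sp.simplify(R[i][j][i][j] / (g[i, i] * g[j, j] - g[i, j]**2))

# Hyperbolic 3-space in the upper half-space model has constant curvature -1:
x, y, z = sp.symbols('x y z', positive=True)
hyp = sp.diag(1/z**2, 1/z**2, 1/z**2)
print([sectional(hyp, (x, y, z), i, j) for i, j in [(0, 1), (0, 2), (1, 2)]])
# -> [-1, -1, -1]
```

The same functions recover curvature $+1$ for the unit sphere with metric $\mathrm{diag}(1, \sin^2\theta)$, so both signs of curvature check out.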
As such, we know that it has constant sectional curvature of $-1$ (recall that the diagonal entries do not represent any curvature information).

Applications in Topology

Continuing with the hyperbolic space metric tensor, it is a well-known result in hyperbolic geometry that one is able to scale the first two dimensions to vary the curvature and produce a pinched curvature manifold. If we allow for new constant coefficients in the exponents, given by positive real numbers, then we should see explicit bounds on the curvatures. In this vein, the Riemannian structure for complex hyperbolic space is similar to the real case, except for a modification to allow for complex variables. In this setting, a formula for the metric tensor valid over the entire manifold is available from [8], among other places. One can verify that, although not constant, the entries in the upper-left block are always bounded between $-4$ and $-1$. This result agrees with sectional curvature in complex hyperbolic space, and so serves as an example of sectional curvature computation where the underlying tensor is not diagonal. A careful review of [8] reminds us that this metric is only well-defined up to rescaling, which can change the values of the sectional curvature. What does not change, however, is the ratio of the largest and smallest curvatures, which is always exactly 4. The introduction in [9] takes considerable care to remind us that definitions and curvature normalizations change between sources.

Applications in Physics

Perhaps the most interesting applications of differentiable manifolds and curvature to physics lie in the area of relativity. This discipline uses the idea of a Lorentzian manifold, which is defined as a manifold equipped with a Lorentzian metric that has signature $(-,+,+,+)$ instead of the signature $(+,+,+,+)$ of four-dimensional Riemannian manifolds. As noted in the introduction, however, this has no impact on the computations of sectional curvature.
Examples of such Lorentzian metrics include the Minkowski flat spacetime metric, $ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2$, where $c$ is the familiar constant speed of light. Justifying the name of flat spacetime, our curvature calculation guarantees all sectional curvatures are identically zero. More generic Lorentzian manifolds may have nonzero curvature. To this end, we examine the Schwarzschild metric, which describes spacetime outside a spherical mass such that the gravitational field outside the mass satisfies Einstein’s field equations. This is most commonly viewed in the context of a black hole and how spacetime behaves nearby. More details on the following tensor can be found in [10]. In the following, $r$, $\theta$ and $\phi$ are standard spherical coordinates for three-dimensional space and $t$ represents time. With this setup, we can calculate the sectional curvature of spacetime for areas outside such a spherical mass. This result indicates that the sectional curvature is directly proportional to the mass and inversely proportional to the distance from the object. In particular, there is a singularity at $r = 0$, indicating that curvature blows up near the center of the mass. Indeed, these results are in line with Flamm’s paraboloid, the graphical representation of a constant-time equatorial slice of the Schwarzschild metric, whose details can be found in [11].

Ricci Curvature

In fact, the calculations we have done already allow us to compute one further object of interest for a Riemannian or pseudo-Riemannian manifold: the Ricci curvature. The Ricci curvature is a tensor that contracts the curvature tensor and is computable when one has the contravariant Riemannian curvature tensor. Below we use a built-in function for tensors to contract the first and third indices of the contravariant Riemannian curvature tensor to obtain a matrix containing condensed curvature information (see [12] for more information). The values 1 and 3 above refer to the dimensions we are contracting.
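As an illustrative SymPy sketch (not the article's Mathematica code; the function name and conventions are ours), the contraction of the first and third indices of the contravariant curvature tensor can be written out directly. For the hyperbolic 3-space metric in the upper half-space model it returns $\mathrm{Ric} = -2g$, as expected for constant curvature $-1$ in dimension 3, and for a flat constant-coefficient metric such as Minkowski space it returns the zero matrix:

```python
import sympy as sp

def ricci(g, coords):
    """Ricci tensor by contracting the first (upper) and third indices of the
    contravariant curvature tensor R^l_{ijk}: Ric_{ik} = sum over l of R^l_{ilk}."""
    n = len(coords)
    ginv = g.inv()
    G = [[[sum(ginv[l, s] * (sp.diff(g[s, j], coords[k]) + sp.diff(g[s, k], coords[j])
                             - sp.diff(g[j, k], coords[s])) for s in range(n)) / 2
           for k in range(n)] for j in range(n)] for l in range(n)]
    def R_up(l, i, j, k):  # contravariant component R^l_{ijk}
        return (sp.diff(G[l][k][i], coords[j]) - sp.diff(G[l][j][i], coords[k])
                + sum(G[l][j][s] * G[s][k][i] - G[l][k][s] * G[s][j][i] for s in range(n)))
    return sp.Matrix(n, n, lambda i, k: sp.simplify(sum(R_up(l, i, l, k) for l in range(n))))

x, y, z = sp.symbols('x y z', positive=True)
hyp = sp.diag(1/z**2, 1/z**2, 1/z**2)  # hyperbolic 3-space, upper half-space model
print(ricci(hyp, (x, y, z)))
# -> Matrix([[-2/z**2, 0, 0], [0, -2/z**2, 0], [0, 0, -2/z**2]])
```

Note that the diagonal entries carry all the information here, matching the remark about diagonal metric tensors.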
In general, the corresponding indices must vary over sets of the same size; here all dimensions have indices that vary over a set whose size is the number of coordinates. We compute the Ricci curvature for some of the previous examples. The fact that the Ricci curvature vanishes for the above solution to the Einstein field equation is a consequence of its types of symmetries. In general, the Ricci curvature for other solutions is nonzero. Notice that for the hyperbolic example (and, trivially, the flat case), all information from the Ricci tensor is contained in the diagonal elements. This is always the case for a diagonal metric tensor [12]. As such, we may sometimes be interested only in these values, so we take the diagonal in such a case. The supervising author would like to thank Dr. Nicolas Robles for suggesting the submission of this article to The Mathematica Journal. We would also like to thank Leonard Parker, who authored the notebook file available at [6], which greatly illuminated some of the calculations. We are also very grateful to the referee and especially the editor, whose contributions have made this article much more accurate, legible and efficient.

[1] M. do Carmo, Differential Geometry of Curves & Surfaces, Mineola, NY: Dover Publications, Inc., 2018.
[2] J. M. Lee, Introduction to Smooth Manifolds, Graduate Texts in Mathematics, 218, New York: Springer, 2003.
[3] C. Stover and E. W. Weisstein, Metric Tensor, from MathWorld, a Wolfram Web Resource.
[4] ParametricPlot3D, from the Wolfram Language & System Documentation Center, a Wolfram Web Resource.
[5] F. Catoni, D. Boccaletti, R. Cannata, V. Catoni, E. Nichelatti and P. Zampetti, The Mathematics of Minkowski Space-Time, Frontiers in Mathematics, Basel: Birkhäuser Verlag, 2008.
[6] J. B. Hartle, Gravity: An Introduction to Einstein's General Relativity, San Francisco: Addison-Wesley, 2003.
[7] J. G.
Ratcliffe, Foundations of Hyperbolic Manifolds, 2nd ed., Graduate Texts in Mathematics, 149, New York: Springer, 2006.
[8] J. Parker, Notes on Complex Hyperbolic Geometry (Jan 10, 2020).
[9] W. M. Goldman, Complex Hyperbolic Geometry, Oxford Mathematical Monographs, Oxford Science Publications, New York: Oxford University Press, 1999.
[10] R. Adler, M. Bazin and M. Schiffer, Introduction to General Relativity, New York: McGraw-Hill, 1965.
[11] R. T. Eufrasio, N. A. Mecholsky and L. Resca, Curved Space, Curved Time, and Curved Space-Time in Schwarzschild Geodetic Geometry, General Relativity and Gravitation, 50(159), 2018. doi:10.1007/s10714-018-2481-2.
[12] L. A. Sidorov, Ricci Tensor, Encyclopedia of Mathematics (M. Hazewinkel, ed.), Netherlands: Springer, 1990.

E. Fairchild, F. Owen and B. Burns Healy, Sectional Curvature in Riemannian Manifolds, The Mathematica Journal, 2020.

About the Authors

Elliott Fairchild is a high-school student at Cedarburg High School. He particularly enjoys problems in analysis, and is always looking for more research opportunities. Francis Owen is an undergraduate student at the University of Wisconsin-Milwaukee. His major is Applied Mathematics and Computer Science, and he is eager to find new programming opportunities. Brendan Burns Healy is a Visiting Assistant Professor at the University of Wisconsin-Milwaukee. Though a geometric group theorist and low-dimensional topologist by training, he also enjoys problems of computation and coding.

Elliott Fairchild
Department of Mathematical Sciences
University of Wisconsin-Milwaukee
3200 N. Cramer St.
Milwaukee, WI 53211

Francis Owen
Department of Mathematical Sciences
University of Wisconsin-Milwaukee
3200 N. Cramer St.
Milwaukee, WI 53211

Brendan Burns Healy, PhD
Department of Mathematical Sciences
University of Wisconsin-Milwaukee
3200 N. Cramer St.
Milwaukee, WI 53211

From Discrete to Continuous Spectra
Thu, 31 Oct 2019 20:10:02 +0000

We study the distribution of eigenspectra for operators of the form $-y'' + q(x)y$ with self-adjoint boundary conditions on both bounded and unbounded interval domains. With integrable potentials $q$, we explore computational methods for calculating spectral density functions involving cases of discrete and continuous spectra where discrete eigenvalue distributions approach a continuous limit as the domain becomes unbounded. We develop methods from classic texts in ODE analysis and spectral theory in a concrete, visually oriented way as a supplement to introductory literature on spectral analysis. As a main result of this study, we develop a routine for computing eigenvalues as an alternative to the built-in eigensystem solver, resulting in fast approximations to implement in our demonstrations of spectral distribution. We follow methods of the texts by Coddington and Levinson [1] and by Titchmarsh [2] (both publicly available online) in our study of the operator $-\frac{d^2}{dx^2} + q(x)$ and the associated problem

(1) $-y'' + q(x)\,y = \lambda y$

on the interval $[0, \infty)$ with real parameter $\lambda$, and boundary condition

(2) $y(0)\cos\alpha + y'(0)\sin\alpha = 0$

for fixed $\alpha$. For continuous $q \in L^1([0,\infty))$ (the set of absolutely integrable functions on $[0,\infty)$), we study the spectral function $\rho$ associated with (1) and (2) using two main methods: First, following [1], we approximate $\rho$ by step functions associated with related eigenvalue problems on finite intervals $[0, b]$ for some sufficiently large positive $b$; then, we apply asymptotic solution estimates along with an explicit formula for spectral density [2]. For some motivation and clarification of terms, we recall a major application: For certain solutions of (1) and (2) and for any $f \in L^2([0,\infty))$ (the set of square-integrable functions on $[0,\infty)$), a corresponding solution to (1) may take the form of a spectral expansion (in a sense described in Theorem 3.1 of Chapter 9 [1]); here, $g$ is said to be a spectral transform of $f$.
By way of such spectral transforms, the differential operator may be represented alternatively in an integral form in which the spectral function $\rho$ induces a measure by which $g \in L^2(d\rho)$ (roughly, the set of square-integrable functions when integrated against $d\rho$) and by which Parseval's equality holds. Typical examples are the complete set of orthogonal eigenfunctions on a finite interval and the corresponding Fourier sine transform in the limiting case (cf. Chapter 9, Section 1 [1]). For a fixed, large finite interval $[0, b]$, we consider the problem (1), (2) along with the boundary condition (3) at $x = b$, which together admit an eigensystem in which the eigenvalues $\lambda_0 < \lambda_1 < \lambda_2 < \cdots$ satisfy $\lambda_k \to \infty$ and the eigenfunctions form a complete basis for $L^2([0, b])$. Since the associated spectral function is a step function with jumps at the various $\lambda_k$, we first estimate these by way of a related equation arising from Prüfer (phase-space) variables and compute the corresponding jumps. Then, we use interpolation to approximate the continuous spectral function using data from a case of large $b$, imposing a monotonicity condition at the interpolation points. We compare our results with those of a well-known formula [2] appropriate to our case, which we outline as follows: For fixed $\lambda > 0$, let $\phi$ be the solution to (1) with boundary values determined by (2) for which an asymptotic formula of the form $\phi(x) \sim R(\lambda)\cos(\sqrt{\lambda}\,x + \delta(\lambda))$ holds as $x \to \infty$. Then we have an explicit spectral density formula from Section 3.5 [2]. Finally, in the last section, we apply the above techniques to extend our study to operators on large domains and on $(-\infty, \infty)$, where spectral matrices take the place of spectral functions as a matrix analog of spectral transforms on these types of intervals (cf. equation (5.5) [1]). The techniques are described in detail below, but it is of particular interest that our computations uncover an interesting pattern in a discrete-spectrum case, as we are forced to reformulate our approach according to certain eigen-subspaces involved: our desired spectral approximations are resolved by way of an averaging procedure in forming Riemann sums.
Various sections of Chapters 7–9 [1] (see also [3] and related articles) present useful introductory discussion applied to material presented in this article; yet, with our focus on equations (1)–(6), one may proceed given basic understanding of Riemann–Stieltjes integration along with knowledge of ordinary differential equations and linear algebra, commensurate with (say) the use of standard numerical differential equation and eigensystem solvers.

An Eigenvalue Estimator

We compute eigenvalues by first computing solutions on $[0, b]$ to the following Prüfer phase equation (equation 2.4, Chapter 8 [1]):

(7) $\theta' = \cos^2\theta + (\lambda - q(x))\sin^2\theta$

Here, $\tan\theta = y/y'$, where $y$ is a nontrivial solution to (1), (2) and (3), and the eigenvalues are those $\lambda$ for which the terminal phase satisfies

(8) $\theta(b; \lambda) = \beta + k\pi$

for positive integers $k$. We interpolate to approximate such solutions as an efficient means to invert (8) in the variable $\lambda$. And we use the following function on (7) throughout this article. Consider an example with given endpoints, boundary conditions and potential. We create an interpolation approximation for eigenvalues. It is instructive to graphically demonstrate the theory behind this method. Here, we consider the eigenvalues as those values of $\lambda$ where the graph of $\theta(b; \lambda)$ intersects the various horizontal lines $\beta + k\pi$, with the maximum index depending on the range of $\lambda$ considered. We choose these boundary conditions so that we may compare our results with those of the built-in eigensystem solver applied to the corresponding problem (1) and (2). We now compare and contrast the methods in this case. The percent differences of the corresponding eigenvalues are all less than 0.2%, even within our limits of accuracy. In contrast, our interpolation method allows some direct control of which eigenvalues are to be computed, whereas the built-in solver (in the default setting) outputs a list of up to 39 values, starting from the first. Moreover, our method admits nonhomogeneous boundary conditions, whereas the built-in solver admits only homogeneous conditions, Dirichlet or Neumann.

Spectral Density: Discrete Approximation

We proceed to build our approximate spectral density function for the problem (1) and (2) with the same potential as above.
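The Prüfer-phase eigenvalue estimator described in the Eigenvalue Estimator section can be sketched in Python with SciPy in place of the article's Mathematica routine. This is a hedged illustration, not the article's code: the Dirichlet case $\theta(0) = 0$, $\theta(b) = k\pi$ is chosen for simplicity, and the free potential $q = 0$ in the sanity check is ours (its Dirichlet eigenvalues on $[0, \pi]$ are exactly $k^2$):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def pruefer_phase(lam, q, L):
    """Terminal phase theta(L) of theta' = cos^2(theta) + (lam - q(x)) sin^2(theta),
    theta(0) = 0, which encodes a solution of -y'' + q y = lam y with y(0) = 0."""
    rhs = lambda x, th: np.cos(th)**2 + (lam - q(x)) * np.sin(th)**2
    sol = solve_ivp(rhs, (0.0, L), [0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

def dirichlet_eigenvalue(k, q, L):
    """k-th eigenvalue of -y'' + q y = lam y with y(0) = y(L) = 0.
    Since theta(L; lam) increases strictly in lam, bracket theta(L) = k*pi and bisect."""
    f = lambda lam: pruefer_phase(lam, q, L) - k * np.pi
    lo, hi = -1.0, 1.0
    while f(lo) > 0:
        lo *= 2
    while f(hi) < 0:
        hi *= 2
    return brentq(f, lo, hi)

# Sanity check on the free case q = 0 on [0, pi], whose eigenvalues are k^2:
approx = [dirichlet_eigenvalue(k, lambda x: 0.0, np.pi) for k in (1, 2, 3)]
print(approx)  # close to [1.0, 4.0, 9.0]
```

The monotonicity of the terminal phase in $\lambda$ is what makes the bracketing step safe, exactly as in the graphical picture of intersecting horizontal lines above.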
We compute eigenvalues likewise but now on a larger interval and with nonhomogeneous boundary conditions (albeit the spectral function does not depend on this choice). We compute eigenvalues via our interpolation method and compute a minimum as well as a maximum index so as to admit only positive eigenvalues; the spectral function is supported on the positive half-line, and negative eigenvalues result in dubious approximations. We now compute the jump values.

Fitting Method

We now apply the method of [2] as outlined in equation (6). We include data from an interval near the endpoint that includes at least one half-period of the fitting functions. The function may return non-numerical results among the first few, in which case we recommend that the parameters be readjusted or that the minimum index be set large enough to disregard such results. We now compare our results of the discrete and continuous (asymptotic fit) spectral density approximations. We compare the results by plotting percent differences, all being less than 0.1%.

Check with Exact Calculation

We chose this potential because, in part, the solutions can be computed in terms of well-known (modified Bessel) functions. After a change of variables, the solutions are linear combinations of modified Bessel functions. From asymptotic estimates (cf. equation 9.6.7 [4]), we see that the former is dominant and the latter is recessive as $x \to \infty$. Then, from Chapter 9 [1], equation 2.13 and Theorem 3.1, we obtain the density function by computing a limit involving a solution as above and a solution with prescribed boundary values. (Here, $m$ is commonly known as the Titchmarsh–Weyl $m$-function.) In the following code, we produce the density function in exact form by replacing functions from (9), the dominant by 1 and the recessive by 0, to compute the inside limit and thereafter simply allowing $\lambda$ to be real.
We likewise compare the exact formula for the continuous spectrum with the discrete results, noting that the exact graph appears to be essentially the same as that obtained by our asymptotic fitting method (not generally expecting the fits to be accurate for small $\lambda$!).

Extension to Unbounded Domains: A Proof of Concept

For the operator $-\frac{d^2}{dx^2} + q(x)$, we now extend our study to large domains in the discrete-spectrum case and to the domain $(-\infty, \infty)$ in the continuous-limit case. We choose an odd function potential with positive constant parameters. We focus on the spectral density associated with specific boundary values at $x = 0$ and an associated pair of solutions to (1): namely, we consider expansions in a pair of solutions normalized at the origin. We apply the above computational methods to the analytical constructs from Chapter 5 [1] in both the discrete and continuous cases. First, for the discrete case, we compute spectral matrices associated with self-adjoint boundary-value problems and the pair as in (11): We estimate eigenvalues for an alternative two-point boundary-value problem on $[-b, b]$ for (moderately) large $b$ to compute the familiar jumps of the various components. These components induce measures that appear in a form of Parseval's equality for square-integrable functions (taken in a certain limiting sense) in the real-valued case. Second, we compute the various densities as limits as $b \to \infty$ by formulas involving certain limits of $m$-functions, related to equation (10), but for our ODE problem on the left and right half-line domains, respectively. The densities are computed by procedures more elaborate than (6), as discussed later. Then, we compare results of the discrete case as in (4).

Discrete Case

After choosing (self-adjoint) boundary conditions (of which the limits happen to be independent) on an interval $[-b, b]$, we estimate eigenvalues and compute coefficients from the linear combinations for the associated orthonormal (complete) set of eigenfunctions, whereby Parseval's equality holds in the real-valued case.
Here, the functions result by normalizing eigenfunctions satisfying (14). We are ready to demonstrate. Let us choose the constants in the potential and an (arbitrary) boundary condition. Much of the procedure follows as above, with minor modification (the next result may take around three minutes on a laptop). We now approximate the density functions by plotting difference quotients at the various jumps, over even and odd indices separately, and assigning the corresponding sums $D_{ij;k}$ to the midpoints of the corresponding intervals $[\lambda_k, \lambda_{k+1}]$. We give the plots below, in comparison with those of the continuous spectra, and give a heuristic argument in the Appendix as to why this approach works.

Continuous Case

First, we apply the asymptotic fitting method using the pair of solutions. Here, we have to compute full complex-valued formulas for the corresponding $m$-functions (cf. Section 5.7 [2]), where a slight modification of the derivation, via a change of variables and a complex conjugation, yields the needed formulas (see Appendix). We now compare the results of the discrete and asymptotic fitting methods for the matrix elements. We have deferred some discussion on our use of numerical options, comparison of eigenvalue computations, discrete eigenspace decomposition and Weyl $m$-functions to this section. First, we have suppressed messages warning that some solutions may not be found. From Chapter 8 [1], we expect unique solutions since the phase functions are strictly increasing. We have also suppressed various messages from the numerical solvers regarding small values to be expected with short-range potentials and large domains. Second, our formulation of the midpoints as in (15) arises from a decomposition of the eigenspace by even and odd indices. We motivate this decomposition by an example plot, where the dichotomous behavior is quite pronounced, certainly for large index.
We are thus inspired to compute the quotients over even and odd indices separately. Then, we consider, say, a relevant expression from Parseval's equality: for appropriate Fourier coefficients $g_{i;k}$ associated with the respective solutions, we write the corresponding sums. We suppose that the even- and odd-indexed sums converge to the corresponding transforms in the limit $b \to \infty$. Of course, a rigorous argument is beyond the scope of this article. Finally, we elaborate on the calculations of the $m$-functions: Given the asymptotic expressions as $x \to \pm\infty$ (respectively), we follow Section 5.7 of [2], making changes as needed, with a modification via complex conjugation to arrive at the stated formulas. The author would like to thank the members of MAST for helpful and motivating discussions concerning preliminary results of this work in particular and Mathematica computing in general.

[1] E. A. Coddington and N. Levinson, Theory of Ordinary Differential Equations, New York: McGraw-Hill, 1955.
[2] E. C. Titchmarsh, Eigenfunction Expansions Associated with Second-Order Differential Equations, 2nd ed., London: Oxford University Press, 1962.
[3] E. W. Weisstein, Operator Spectrum, from MathWorld, a Wolfram Web Resource.
[4] M. Abramowitz and I. A. Stegun, eds., Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, New York: Wiley, 1972.

C. Winfield, From Discrete to Continuous Spectra, The Mathematica Journal, 2019.

About the Author

C. Winfield holds an MS in physics and a PhD in mathematics and is a member of the Madison Area Science and Technology amateur science organization, based in Madison, WI.

Christopher J. Winfield
Madison Area Science and Technology
3783 US Hwy. 45
Conover, WI 54519

The Arithmetic of Points on a Conic and Projectivities
Sun, 04 Aug 2019 15:06:58 +0000

H. S. M. Coxeter wrote several geometry film scripts that were produced between 1965 and 1971 [1]. In 1992, Coxeter gave George Beck mimeographs of two scripts that had not been made. Beck wrote Mathematica code for the stills and animations.
This material was added to the third edition of Coxeter's The Real Projective Plane [2]. This article updates the Mathematica code.

Run This Code First

The Arithmetic of Points on a Conic

The example of a thermometer makes it easy to see how the real numbers (positive, zero and negative) can be represented by the points of a straight line. On the $x$ axis of ordinary analytic geometry, the number $a$ is represented by the point $(a, 0)$. Given any two such numbers, $a$ and $b$, we can set up geometrical constructions for their sum, difference, product and quotient. However, these constructions require a scaffolding of extra points and lines. It is by no means obvious that a different choice of scaffolding would yield the same final results. The object of the present program is to make use of a circle (or any other conic) instead of the line, so that the constructions can all be performed with a straight edge, and the only arbitrariness is in the choice of the positions of three of the numbers (for instance, 0, 1 and 2). Although this is strictly a chapter in projective geometry, let us begin with a prologue in which the scale of abscissas on the $x$ axis is transferred to a circle by the familiar process of stereographic projection. A circle of any radius (say 1, for convenience) rests on the axis at the origin 0, and the numbers are transferred from this axis to the circle by lines drawn through the opposite point, that is, the point at the top. In this manner, a definite number is assigned to every point on the circle except the topmost point itself. The positive numbers come closer and closer to this point on one side, and the negative numbers come closer and closer on the other side. So it is natural to assign the special symbol $\infty$ (infinity) to this exceptional point: the only point for which no proper number is available. The tangent at this exceptional point is, of course, parallel to the axis; that is, parallel to the tangent at the point 0.
Having transferred all the numbers to the circle, we can forget about the axis; but the tangent at the point infinity will play an important role in the construction of sums. For instance, there is one point on this tangent that lies on the line joining points 1 and 2, also on the line joining 0 and 3, and on the line joining $-1$ and 4. We notice that these pairs of numbers all have the same sum: $1 + 2 = 0 + 3 = -1 + 4 = 3$. Similarly, the tangent at 1 meets the tangent at infinity in a point that lies on the lines joining 0 and 2, $-1$ and 3, $-2$ and 4, in accordance with the equations $1 + 1 = 0 + 2 = -1 + 3 = -2 + 4 = 2$. These results could all be verified by elementary analytic geometry, but there is no need to do this, because we shall see later that a general principle is involved. Having finished the Euclidean prologue, let us see how far we can go with the methods of projective geometry. Let symbols 0, 1, infinity be assigned to any three distinct points on a given conic. There is a certain line through 0 concurrent with the tangents at infinity and 1; let this line meet the conic again in 2. (Alternatively, if we had been given 0, 1, 2 instead of 0, 1, infinity, we could have reconstructed infinity as the point of contact of the remaining tangent from the point where the tangent at 1 meets the line 02.) We now have the beginning of a geometrical interpretation of all the real numbers. To obtain 3, we join 1 and 2, see where this line meets the tangent at infinity, join this point of intersection to 0, and assign the symbol 3 to the point where this line meets the conic again. Thus the line joining 0 and 3 and the line joining 1 and 2 both meet the tangent at infinity in the same point. More generally, we define addition in such a way that two pairs of points have the same sum if their joins are concurrent with the tangent at the point infinity.
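The elementary analytic verification mentioned above can be sketched in Python (our own illustration, not code from the film script, using exact rational arithmetic; the circle is the unit circle resting on the axis at the origin, with the top point $(0, 2)$ playing the role of infinity, so the tangent at infinity is the line $y = 2$):

```python
from fractions import Fraction

def on_circle(a):
    """Stereographic image of the number a: the second intersection of the line
    through the top point (0, 2) and (a, 0) with the circle x^2 + (y - 1)^2 = 1."""
    a = Fraction(a)
    d = a * a + 4
    return (4 * a / d, 2 * a * a / d)

def meet_tangent_at_infinity(a, b):
    """x-coordinate where the chord joining the points a and b on the circle
    meets the tangent at infinity, i.e. the horizontal line y = 2."""
    (x1, y1), (x2, y2) = on_circle(a), on_circle(b)
    t = (2 - y1) / (y2 - y1)   # chord parameter at height y = 2
    return x1 + t * (x2 - x1)

# pairs with the same sum meet the tangent at infinity in the same point:
print(meet_tangent_at_infinity(1, 2))   # sum 3 -> 4/3
print(meet_tangent_at_infinity(0, 3))   # sum 3 -> 4/3
print(meet_tangent_at_infinity(-1, 4))  # sum 3 -> 4/3
```

In exact arithmetic all three chords cross the tangent at infinity at $x = 4/3$; more generally the crossing point works out to $x = 4/(a + b)$, a function of the sum alone, which is the general principle the script appeals to.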
In other words, we define the sum of any two points $a$ and $b$ to be the remaining point of intersection of the conic with the line joining 0 to the point where the tangent at infinity meets the join of $a$ and $b$. To justify this definition, we must make sure that it agrees with our usual requirements for the addition of numbers: the commutative law $a + b = b + a$, a unique solution for every equation of the form $a + x = b$, and the associative law $(a + b) + c = a + (b + c)$. The commutative law is satisfied immediately, as our definition for $a + b$ involves $a$ and $b$ symmetrically. The equation $a + x = b$ is solved by choosing $x$ so that $a$ and $x$ have the same sum as 0 and $b$. Thus the only possible cause of trouble is the associative law; we must make sure that for any three points $a$, $b$, $c$ (not necessarily distinct), the sum of $a + b$ and $c$ is the same as the sum of $a$ and $b + c$. For this purpose, we make use of a special case of Pascal's theorem, which says that if $ABCDEF$ is a hexagon inscribed in a conic, the pairs of opposite sides (namely $AB$ and $DE$, $BC$ and $EF$, $CD$ and $FA$) meet in three points that lie on a line, called the Pascal line of the given hexagon. In 1639, when Blaise Pascal was sixteen years old, he discovered this theorem as a property of a circle. He then deduced the general result by joining the circle to a point outside the plane by a cone and then considering the section of this cone by an arbitrary plane. We do not know how he proved this property of a hexagon inscribed in a circle, because his original treatise was lost, but we do know how he might have done it, using only the first three books of Euclid’s Elements. In our own time, an easier proof can be found in any textbook on projective geometry. Each hexagon has its own Pascal line. If we fix five of the six vertices and let the sixth vertex run round the conic, we see the Pascal line rotating about a fixed point. If this fixed point is outside the conic, we can stop the motion at a stage when the Pascal line is a tangent. This is the special case that concerns us in the geometrical theory of addition.
An appropriately chosen inscribed hexagon then shows that the sum of a + b and c is equal to the sum of a and b + c. Beginning with 0, 1 and infinity, we can now construct the remaining positive integers 2, 3, 4 and so on. We can also construct the negative integers −1, −2, −3, given by (−1) + 1 = 0, (−2) + 2 = 0 and so on. Alternatively, the negative integers can be constructed step by step from the integers already obtained. By fixing a while letting x vary, we obtain a vivid picture of the transformation that adds a to every number x. The points x and x + a chase each other round the conic, irrespective of whether a happens to be positive or negative. In our construction for the point 2, we tacitly assumed that the tangent at 1 can be regarded as the join of 1 and 1. More generally, the join of a and b meets the tangent at infinity in a point from which the remaining tangent has, for its point of contact, a point c such that c + c = a + b, namely c = (a + b)/2, which is the arithmetic mean (or average) of a and b. This result holds not only when a + b is even but also when it is odd; for instance, when a and b are consecutive integers. In this way we can interpolate 1/2 between 0 and 1, 1 1/2 between 1 and 2 and so on. We shall find it convenient to work in the scale of 2 (or binary scale), so that the number 2 itself is written as 10, one half as 0.1, one quarter as 0.01, three quarters as 0.11 and so on. We can now interpolate 1.1 between 1 and 10, 1.01 between 1 and 1.1, and so on to the eighths between 1 and 10. In fact, we can construct a point for every number that can be expressed as a terminating "decimal" in the binary scale. By a limiting process, we can thus theoretically assign a position to every real number. For instance, the square root of two, being 1.0110101… in the binary scale, is the limit of a certain sequence of constructible numbers. Conversely, by a process of repeated bisection, we can assign a binary "decimal" to any given point but one on the conic. (The "but one" is, of course, the point to which we arbitrarily assigned the symbol infinity.) We can now define multiplication in terms of the same three points 0, 1 and infinity.
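The repeated-bisection assignment of a binary "decimal" to a point can be sketched numerically; a minimal illustration (helper name is hypothetical) recovering the leading binary digits of the square root of two by halving, exactly the arithmetic-mean interpolation described above:

```python
def binary_digits(target_sq, n_bits):
    """Approximate the square root of target_sq in [1, 2) by repeated
    bisection, recording one binary digit per halving step."""
    lo, hi, bits = 1.0, 2.0, []
    for _ in range(n_bits):
        mid = (lo + hi) / 2          # the arithmetic mean, as in the text
        if mid * mid <= target_sq:   # root lies in the upper half
            bits.append("1")
            lo = mid
        else:                        # root lies in the lower half
            bits.append("0")
            hi = mid
    return "1." + "".join(bits)

print(binary_digits(2, 7))  # -> 1.0110101
```

Seven bisections already reproduce the expansion 1.0110101… quoted in the text.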
Two pairs of points have the same product if their joins are concurrent with the line joining 0 and infinity. The geometrical theory of projectivities is somewhat too complicated to describe here, so let us be content to remark that, if we pursued it, we could prove that our definition for addition is consistent with this definition for multiplication. The product is positive if the point of concurrence is outside, negative if it is inside the conic. In other words, we define the product of any two points a and b on the conic to be the remaining point of intersection of the conic with the line joining 1 to the point where the line joining 0 and infinity meets the line joining a and b. Of course, the question arises as to whether this definition agrees with our usual requirements for the multiplication of numbers:
• the commutative law ab = ba
• a unique solution for every equation of the form ax = b (with a ≠ 0)
• the associative law (ab)c = a(bc)
The equation ax = b is solved by choosing x so that a and x have the same product as 1 and b. Finally, another application of Pascal's theorem suffices to show the associative law: for any three points a, b, c, the product of ab and c is equal to the product of a and bc. By fixing a while letting x vary, we obtain a vivid picture of the transformation that multiplies every number x by a. If a is positive, the points x and ax chase each other round the conic. But if a is negative, they go round in opposite directions. The familiar identity 2 × 2 = 1 × 4 is illustrated by the concurrence of the tangent at 2 with the line joining 1 and 4 and the line joining 0 and infinity. More generally, if a and b are any two numbers having the same sign, the join of the corresponding points meets the line joining 0 to infinity in a point from which the two tangents have, for their points of contact, points c such that c² = ab, namely c = ±√(ab), where √(ab) is the geometric mean of a and b.
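The multiplication criterion can be checked in the same illustrative parabola model used above for sums (an assumption, not the article's figure): with the conic y = x² and infinity at the parabola's ideal point, the line joining 0 and infinity is the y axis, and the chord through the points labeled a and b is the line y = (a + b)x − ab, which meets that axis at y = −ab. Two chords therefore meet the axis in the same point exactly when the products agree.

```python
def chord_meets_axis(a, b):
    """y-coordinate where the chord of y = x^2 through the points labeled
    a and b meets x = 0 (the line joining 0 and infinity in this model)."""
    # chord: y = (a + b) x - a b, so at x = 0 we get y = -a b
    return -(a * b)

# Pairs with equal products meet the axis in one point: 2 * 2 = 1 * 4 = 4.
# With a = b = 2 the "chord" is the tangent at 2, as in the text.
assert chord_meets_axis(2, 2) == chord_meets_axis(1, 4) == -4
```

This mirrors the concurrence of the tangent at 2, the join of 1 and 4, and the line joining 0 to infinity.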
Setting a = 1 and b = 2, we obtain a construction for the square root of two without having recourse to any limiting process. In fact, we have finite constructions for all the "quadratic" numbers commonly associated with Euclid's straight-edge and compass. One of the most fruitful ideas of the nineteenth century is that of one-to-one correspondence. It is well illustrated by the example of cups and saucers. Suppose we have about a hundred cups and about a hundred saucers and wish to know whether the number of cups is actually equal to the number of saucers. This can be determined, without counting, by the simple device of putting each cup on a saucer, that is, by establishing a one-to-one correspondence between the cups and saucers. In our first application of this idea to plane geometry, the cups are points, the saucers are lines and the relation "cup on saucer" is incidence. As we know, a line is determined by any two of its points and is of unlimited extent. We say that a point and a line are "incident" if the point lies on the line; that is, if the line passes through the point. It is natural to ask whether the number of points on a line is actually equal to the number of lines through a point. In ordinary geometry both numbers are infinite, but this fact need not trouble us: if we can establish a one-to-one correspondence between the points and lines, there are equally many of each. The set of all points on a line is called a range and the set of all lines through a point is called a pencil. If the line o and the point O are not incident, we can establish an elementary correspondence between the range and the pencil by means of the relation of incidence. Each point of the range lies on a corresponding line of the pencil. The range is a section of the pencil (namely the section by the line o) and the pencil projects the range (from the point O).
In our picture, the range is represented by a red point moving along the fixed line o (which, for convenience, is taken to be horizontal) and the pencil is represented by a green line rotating around the fixed point O. There is evidently a green line for each position of the red point. But we must admit that for some positions of the green line the red point cannot be seen because it is too far away; in fact, when the green line is parallel to o (that is, horizontal), the red point is one of the ideal "points at infinity" that we agree to add to the ordinary plane so as to make the projective plane. Without this ideal point, our elementary correspondence would not be one-to-one: the number of points in the range would be one less than the number of lines in the pencil. In other words, the postulation of ideal points makes it possible for us to express the axioms for the projective plane in such a way that they remain valid when we consistently interchange the words "point" and "line" (and consequently also certain other pairs of words such as "join" and "meet", "on" and "through", "collinear" and "concurrent" and so forth). It follows that the same kind of interchange can be made in all the theorems that can be deduced from the axioms. This principle of duality is characteristic of projective geometry. In the plane we interchange points and lines. In space, the same principle enables us to interchange points and planes, while lines remain lines. When we regard the elementary correspondence as taking us from a point X of the range to the corresponding line x of the pencil, we write the capital letter before the small one. The inverse correspondence, from x to X, is denoted by the same sign with the small letter before the capital. If A, B, C are particular positions of X, and a, b, c of x, we write all these letters before and after the sign, taking care to keep them in their corresponding order (which need not be the order in which they appear to occur in the figure).
This notation enables us to exhibit the principle of duality as the possibility of consistently interchanging capital and small letters. By combining two elementary correspondences, one relating a range to a pencil and the other a pencil to a range, we obtain a perspectivity. This either relates two ranges that are different sections of one pencil, or two pencils that project one range from different centers. In the former case, the two symbols with one bar can be abbreviated to one with two bars; if we wish to specify the point O that carries the pencil, we put O above the two bars. In the latter case (when two pencils project one range from different centers), the two symbols with one bar are again abbreviated to one with two bars, and if we wish to specify the line o that carries the range, we put o above the bars. We can easily go on to combine three or more elementary correspondences. But then we prefer not to increase the complication of the symbols. Instead, we retain the simple symbol (with just one bar) for the product of any number of elementary correspondences. Such a transformation is called a projectivity. Thus elementary correspondences and perspectivities are the two simplest instances of a projectivity. The product of three elementary correspondences is the simplest instance of a correspondence relating a range to a pencil in such a way that the range is not merely a section of the pencil. The product of four elementary correspondences, being the product of two perspectivities, shares with a simple perspectivity the property of relating a range to a range or a pencil to a pencil. Now there is the interesting possibility that the initial and final range (or pencil) may be on the same line (or through the same point). We see two moving red points on one line, related by perspectivities through an auxiliary red point on a second line. When the two moving points come together, we have another invariant point; the three red points all come together.
Such a projectivity, having two distinct invariant points, is said to be hyperbolic. On the other hand, the three lines concerned may all meet in a single point, so that the second invariant point coincides with the first and there is only one invariant point. Such a projectivity is said to be parabolic. A third possibility is an elliptic projectivity that has no invariant point, but this is more complicated, requiring three perspectivities (i.e., six elementary correspondences). The green lines, rotating around the centers of the three perspectivities, yield four red points. Two of the red points chase each other along the bottom line. These two points are related by the elliptic projectivity. However, this is not the most general elliptic projectivity. There is a special feature arising from the fact that the three centers lie on the sides of the green triangle. When one of the two red points is at a certain position A, the other is at a certain position B, and vice versa: the projectivity interchanges A and B and is consequently called an involution. Thus we are watching an elliptic involution. Looking closely, we see that it not only interchanges A and B but also interchanges every pair of related points. For instance, it interchanges a point C with a point D. An important theorem tells us that for any four collinear points A, B, C, D, there is just one involution that interchanges A with B and C with D. We denote it by (AB)(CD). At any instant, the two red points are a pair belonging to this involution. Call them X and Y. We now have three pairs of points, AB, CD, XY, on the bottom dark blue line, all belonging to one involution. The other lines form the six sides of a complete quadrangle, which consists of four points (no three collinear) and the six lines that join them in pairs. Two sides are said to be opposite if their point of intersection is not a vertex; for instance, the join of two of the vertices and the join of the remaining two are a pair of opposite sides.
We see now that the six points named on the bottom dark blue line are sections of the six sides of the quadrangle, and that each related pair comes from a pair of opposite sides. Accordingly the six points, paired in this particular way, are said to form a quadrangular set. Here is another version of the quadrangle and the corresponding quadrangular set AB, CD, XY. As before, XY is a pair of the involution (AB)(CD). This remains true when we move the bottom dark blue line to a new position so that C coincides with A and D with B. Now A and B are invariant points, and we have a hyperbolic involution, which still interchanges X and Y. The quadrangular set of six points has become a harmonic set of four points. We say that X and Y are harmonic conjugates of each other with respect to A and B, and that the four points satisfy the relation H(AB, XY). This means that there is a quadrangle having two opposite sides through A and two opposite sides through B, while one of the remaining two sides passes through X and the other through Y. Given A, B and X, we can construct Y by drawing a triangle whose sides pass through these three points; joining its vertices suitably to A, B and X completes a quadrangle whose remaining side meets the line in Y. Of course, the hyperbolic involution can still be constructed as the product of three perspectivities. But the invariant points A and B enable us to replace these three perspectivities by two. Another product of two perspectivities relates ranges on two distinct lines. The fundamental theorem of projective geometry tells us that a projectivity relating ranges on two such lines is uniquely determined by any three points of the first range and the corresponding three points of the second. There are, of course, many ways to construct the projectivity as the product of two or more perspectivities, but the final result will always be the same. For instance, there is a unique projectivity relating three points A, B, C on the first line to three points A′, B′, C′ on the second. This means that for any point X on the first line there is a definite corresponding point X′ on the second.
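The harmonic relation described above is equivalent to the statement that the cross-ratio of the four collinear points is −1; here is a small coordinate sketch (using the cross-ratio characterization rather than the quadrangle construction itself, and with hypothetical helper names) for points given by coordinates on a number line:

```python
def harmonic_conjugate(a, b, x):
    """Coordinate y on the line such that the cross-ratio (A, B; X, Y) = -1,
    i.e. X and Y are harmonic conjugates with respect to A and B."""
    return (b * (x - a) + a * (x - b)) / (2 * x - a - b)

def cross_ratio(a, b, x, y):
    """Cross-ratio (A, B; X, Y) of four collinear points by coordinate."""
    return ((x - a) * (y - b)) / ((x - b) * (y - a))

y = harmonic_conjugate(0, 2, 3)
assert y == 1.5
assert cross_ratio(0, 2, 3, y) == -1
```

Note the degenerate case: when X is the midpoint of AB the denominator vanishes, and the harmonic conjugate is the ideal point of the line, consistent with the projective setting.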
The simplest way to construct this projectivity is by means of two perspectivities, so that X is first related to an auxiliary point on an intermediate line and then to X′. We can regard these three points as the vertices of a variable triangle whose vertices run along fixed lines, while two of the sides rotate around fixed points. The third side joins the projectively related points X and X′. This construction remains valid when the two fixed points are of general position, instead of lying on the lines that carry the related ranges. We then have a construction for the unique projectivity that relates A, B, C to A′, B′, C′. As before, the vertices of the variable triangle run along fixed lines, while two of the sides rotate around the fixed points. The possible positions for the third side include, in turn, each of the five sides of a pentagon. Carefully watching this line, we see that it envelops a beautiful curve. This is the same kind of curve that was constructed quite differently by Menaechmus about 340 BC. Since that time it has been known everywhere as a conic. One important property is that a conic is uniquely determined by any five of its tangents, and that these may be any five lines of which no three are concurrent. Since the possible positions for our variable line include, in turn, each side of the pentagon, we call its envelope the conic inscribed in this pentagon. To sum up: Let a variable point run along a diagonal of a given pentagon; joining it to suitable vertices and intersecting with suitable sides determines a line whose envelope is the inscribed conic. For any particular position of the variable point, we see a hexagon whose six sides all touch the conic. The three lines which join pairs of opposite vertices are naturally called diagonals of the hexagon. Thus, if the diagonals of a hexagon are concurrent, the six sides all touch a conic. Conversely, if all the sides of a hexagon touch a conic, five of them can be identified with the five fixed tangent lines.
Since the given conic is the only one that touches these fixed lines, the sixth side must coincide with one of the lines that we have constructed. We thus have Brianchon's theorem: If a hexagon is circumscribed about a conic, the three diagonals are concurrent. All these results can, of course, be dualized. (Now all the letters that we use are lowercase, representing lines.) For any pentagon of lines, there is correspondingly a unique projectivity relating one pencil to another. The sides of the variable triangle rotate about fixed points, while two of the vertices run along fixed lines. The possible positions for the third vertex include, in turn, each of the five vertices of the pentagon. Carefully watching this moving point, we see that it traces out a curve through these five fixed points (no three collinear). What is this curve, the dual of a conic? One of the many possible definitions for a conic exhibits it as a self-dual figure, with the interesting result that the dual of a conic (regarded as the envelope of its tangents) is again a conic (regarded as the locus of the points of contact of these tangents). Thus the locus of the moving point is a conic, and this is the only conic that can be drawn through the five vertices of the pentagon. To sum up: Let a variable line pass through the intersection of two non-adjacent sides of a given pentagon; the lines joining its intersections with suitable sides to suitable vertices determine a point whose locus is the circumscribed conic. The hexagon, suitably relabeled, yields the dual of Brianchon's theorem, namely Pascal's theorem: If a hexagon is inscribed in a conic, the three points where pairs of opposite sides meet are collinear. The hexagon that we see is, perhaps, unusual, because its sides cross one another. From the standpoint of projective geometry, this feature is irrelevant. A convex hexagon would serve just as well, but the "diagonal points" would be inconveniently far away.
Another natural observation is that our conic looks like the familiar circle. In fact, this famous theorem was first proved for a circle in 1639, when its discoverer, Blaise Pascal, was only sixteen years old. Nobody knows just how he did it, because his original treatise has been lost. But there is no possible doubt about how he deduced the analogous property of the general conic. He joined the circle and lines to a point outside the plane, obtaining a cone and planes. Then he took the section of this solid figure by an arbitrary plane. We change the position of the points of the hexagon. In this way the conic appears in one of its most ancient aspects: as the section of a circular cone by a plane of general position. Thanks to Gregory Robbins, who sparked this update and was able to read the files from an old diskette.

H. S. M. Coxeter and G. Beck, "The Arithmetic of Points on a Conic and Projectivities," The Mathematica Journal, 2018.

About the Authors

H. S. M. Coxeter (1907-2003) was a Canadian geometer.

George Beck earned a B.Sc. (Honours Math) from McGill University and an MA in math from the University of British Columbia. He has been the managing editor of The Mathematica Journal since 1997. He has worked for Wolfram Research, Inc. since 1993 in a variety of roles.

George Beck, 102-1944 Riverside Drive, Courtenay, B.C., V9N 0E5

Peak Response of Single-Degree-of-Freedom Systems to Swept-Frequency Excitation (19 Apr 2019)

A comprehensive discussion is presented of the closed-form solutions for the responses of single-degree-of-freedom systems subject to swept-frequency harmonic excitation.
The closed-form solutions for linear and octave swept-frequency excitation are presented and these are compared to results obtained by direct numerical integration of the equations of motion. Included is an in-depth discussion of the numerical difficulties associated with the complex error functions and incomplete gamma functions, which are part of the closed-form solutions, and how these difficulties were overcome by employing exact arithmetic. The closed-form solutions allowed the in-depth study of several interesting phenomena. These include the scalloped behavior of the peak response (with multiple discontinuities in the derivative), the significant attenuation of the peak response if the sweep frequency is started at frequencies near or above the natural frequency, and the fact that the swept-excitation response could exceed the steady-state harmonic response.

Nomenclature
• t: time (also used as a dummy integration variable); T: upper limit of search for peak values
• f_n: natural frequency in Hz; ω_n: natural frequency in radians per second
• ζ: critical damping ratio
• α: linear sweep rate in radians per second per second (also quoted in Hz per minute)
• β: octave sweep rate in octaves per second (also quoted in octaves per minute); ω₁: nonzero start frequency for an octave sweep
• φ(t): general phase function; φ₀: initial phase value
• erf, erfi: error function and imaginary error function; Γ(a, z): incomplete gamma function
• x, ẋ, ẍ: single-degree-of-freedom displacement, velocity and acceleration responses; initial displacement and initial velocity
• times at which the instantaneous frequency of excitation equals the natural frequency, for the linear and octave sweeps; the new independent variable for the octave sweep and its value at that crossing
• various complex variables, dummy integration variables, a generalized sweep forcing function, and composite parameters appearing in the closed-form solutions for linear and octave sweep

1. Introduction

Harmonic excitation is a fact of life in systems with rotating machinery, such as liquid rocket engine turbopumps, spacecraft momentum wheels, aircraft turbojet engines, electric plant steam turbines and liquid-transport turbine compressor trains. Associated with high performance are high shaft speeds and the resulting excitation caused by imbalances in the rotating components and imperfections in the shafts and ball bearings. Furthermore, phenomena such as shaft whirl and rotor dynamic instability are critical design aspects. Although performance requirements dictate design parameters such as shaft speed, avoiding certain speeds due to dynamic interactions within the system is also a critical design consideration. Completely avoiding critical speeds may not be possible. For example, if the critical speeds are below the operational shaft speed, then at startup and shutdown, the rotation rate sweeps through them. The magnitude of the response is a function of the sweep rate, system damping and modal gains at the excitation and response locations. In addition, bearing imperfection can produce excitation above and below the operational frequency, and responses to these imperfections are also a function of the sweep rate associated with the startup and shutdown of the system.
In addition to rotating machinery considerations, frequency sweep effects are a critical aspect of harmonic base shake vibration testing, as employed in the aerospace industry, for example. Therefore, it has been recognized that being able to predict the vibration response of systems to swept-frequency excitation is critical (e.g., [1-7]). In 1932 Lewis presented the first analysis of the response of a single-degree-of-freedom system to linear frequency sweep excitation [1]. He derived an expression for the envelope functions that contained the peak values. The limited quantitative results presented by Lewis were obtained by graphical integration for various levels of damping and sweep rate. Lewis concluded that the greater the sweep rate, the larger the attenuation relative to steady-state response, and the higher the instantaneous frequency of excitation would be at which the peak envelope response occurs. In 1967 Fearn [2] developed an algebraic expression for the time at which the peak displacement response of a single-degree-of-freedom system subjected to a linear frequency sweep would occur, and an approximate magnitude of the displacement response. Until Cronin's dissertation [3], published in 1965, analytical studies were generally restricted to linear frequency sweep, and exponential sweep-excitation studies were mostly experimental in nature. Cronin did provide results for relatively slow sweep rates; his work included analog studies involving linear and exponential excitation frequency sweeps. In addition to spring-mass single-degree-of-freedom systems, work has also been done on unbalanced flexible rotors whose spin rate is swept through their critical speeds, e.g., [4]. In these types of systems the modes of vibration would be a function of the spin rate and the resulting gyroscopic moments. In 1964 Hawkes [5] described an approach for obtaining the envelope function of the response of single-degree-of-freedom systems subjected to octave sweep rates.
He credits the solution approach to an unpublished document written in April 1961 by T. J. Harvey. From the publication, it is unclear how all required initial conditions were obtained for the resulting differential equations that were solved by numerical integration. The results, however, are consistent with subsequent work published by Lollock [6], who extended the work for both linear and octave sweep rates to useful damping and natural frequency ranges. In approaches where the envelope function is used to identify the peak response, several factors need to be considered. First, the peak of the envelope function may not coincide with the peak of the time history response; this could lead to an incorrect estimate of the instantaneous excitation frequency that coincides with the peak response. The discrepancy would be greatest for low-frequency systems and decrease as the natural frequency increases relative to the starting frequency of the sweep. Another peculiar feature of this approach is that, whereas the original equation of motion is a second-order differential equation with two initial conditions (say, on the function and its derivative), the envelope equations turn out to be two coupled second-order differential equations, each of which requires two initial conditions. There are physical arguments one could make regarding what these four necessary initial conditions ought to be, but there does not appear to be any way to derive them mathematically from the original two for the equation of motion. It is the purpose of this article to extend and complement previously published work by proposing explicit closed-form solutions to both linear and octave frequency-sweep excitation. This allows the computation of the peak response, not just the peak of the envelope function.
The closed-form solutions involve error functions and incomplete gamma functions of complex arguments, computations of which require numerical precision far exceeding ordinary machine-precision floating-point arithmetic. The approach used to overcome this will be described. The closed-form solutions are compared to solutions obtained by numerical integration of the equations of motion. Having the ability to compute closed-form solutions, studies were performed to explore the impact of the frequency separation between the start frequency of the sweeps and the natural frequency of the system. In addition, results are presented showing the fine structure of the peak response in relation to the steady-state resonance response as a function of natural frequencies and critical damping ratios. This includes some unexpected results, in that the peak response curves exhibit highly nonlinear behavior with discontinuities in the derivative.

2. Equations of Motion

The differential equation for the motion of a single-degree-of-freedom system driven by harmonic excitation with a linear frequency sweep is given by

ẍ + 2 ζ ω_n ẋ + ω_n² x = sin(α t² / 2)    (1)

where ζ is the critical damping ratio, ω_n is the natural frequency, α is the sweep rate in radians per second per second, and the dots indicate differentiation with respect to time. Assume, without any loss of generality, a sweep starting frequency of zero, a force magnitude equal to the mass of the system and initial conditions of x(0) = 0 and ẋ(0) = 0. The differential equation of motion of a single-degree-of-freedom system driven by harmonic excitation with an octave frequency sweep is

ẍ + 2 ζ ω_n ẋ + ω_n² x = sin((ω₁ / (β ln 2)) (2^(β t) − 1))    (2)

where the instantaneous excitation frequency is ω₁ 2^(β t), β is the octave sweep rate in octaves/sec, and ω₁ is the nonzero start frequency of the sweep. As for the linear sweep case, assume a force magnitude equal to the mass of the system and initial conditions of x(0) = 0 and ẋ(0) = 0. It is helpful to also write both the linear sweep and the octave sweep equations in the following form:

ẍ + 2 ζ ω_n ẋ + ω_n² x = sin(φ(t) + φ₀)    (3)

where φ(t) is a general phase function and φ₀ is the initial phase.
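As a sketch of the direct numerical integration used for comparison (the parameter values below are illustrative, not the article's, and the forcing sin(α t²/2) is the standard zero-start linear sweep), the linear-sweep equation of motion with zero initial conditions can be integrated with classical RK4; for a slow sweep the displacement peaks shortly after the instantaneous frequency α t passes the natural frequency:

```python
import math

def sweep_response(fn=2.0, zeta=0.02, alpha=0.6, t_end=30.0, dt=0.001):
    """Integrate x'' + 2 zeta wn x' + wn^2 x = sin(alpha t^2 / 2)
    (linear frequency sweep starting from 0 Hz) with classical RK4.
    Returns a list of (t, x) samples."""
    wn = 2 * math.pi * fn

    def deriv(t, x, v):
        a = math.sin(0.5 * alpha * t * t) - 2 * zeta * wn * v - wn * wn * x
        return v, a

    t, x, v = 0.0, 0.0, 0.0
    history = []
    while t < t_end:
        k1x, k1v = deriv(t, x, v)
        k2x, k2v = deriv(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = deriv(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = deriv(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        history.append((t, x))
    return history

hist = sweep_response()
t_peak, x_peak = max(hist, key=lambda p: abs(p[1]))
```

With fn = 2 Hz and α = 0.6 rad/s², the instantaneous frequency reaches ω_n near t = ω_n/α ≈ 21 s, and the computed peak lands in that neighborhood, well above the pre-resonance response.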
Both the linear and octave sweep equations of motion can be put into the following more general form, which will be useful for constructing closed-form solutions:

ẍ + 2 ζ ω_n ẋ + ω_n² x = Ψ(t)    (4)

where Ψ is a generalized sweep forcing function.

3. Closed-Form Solution: Linear Sweep

The solution to equation (4) can be expressed as the convolution

x(t) = (1/ω_d) ∫₀ᵗ e^(−ζ ω_n (t − τ)) sin(ω_d (t − τ)) Ψ(τ) dτ, with ω_d = ω_n √(1 − ζ²).

For linear sweep, this becomes the same integral with Ψ(τ) = sin(α τ² / 2). If the sine terms are expanded in terms of complex exponentials, then the resulting integrals can be computed in terms of the error function, erf, and the imaginary error function, erfi, each with complex argument. Conceptually, the process proceeds as follows: 1. After converting the sine terms to complex exponentials, expand out the products of sums of exponentials, splitting the integral accordingly into a sum of several integrals of exponentials and pulling the parts of each integrand that do not depend on the integration variable outside the integral; the resulting integrals will all have the form ∫ e^(a τ² + b τ + c) dτ. 2. With some algebraic manipulation, these integrals can be put into the form ∫ e^(−(p τ + q)²) dτ or ∫ e^((p τ + q)²) dτ, where p and q are, in general, complex valued. 3. Choosing u = p τ + q as the new integration variable, the first of these integrals becomes a multiple of erf(p τ + q) evaluated between the limits of integration. An identical procedure can be applied to the second of these integrals, leading to an expression involving imaginary error functions. Performing the indicated calculations (including the associated algebra) gives the following closed-form solution for the linear frequency-sweep excitation case. In the interests of compactness, it is helpful to first introduce a number of auxiliary parameters; the closed-form solution for the linear sweep case, equation (7), is then a combination of error functions and imaginary error functions of complex arguments. In order to verify that this equation for x(t) does in fact satisfy the equation of motion, we make use of the fact that the derivatives of the error function and the imaginary error function are given by the exponentials d erf(z)/dz = (2/√π) e^(−z²) and d erfi(z)/dz = (2/√π) e^(z²).
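Step 3 above can be sanity-checked for real p and q (the article's actual parameters are complex, so this is only a sketch of the reduction, with hypothetical helper names): the substitution u = pτ + q turns the Gaussian-type integral into a difference of error functions.

```python
import math

def gaussian_integral_closed_form(p, q, T):
    """Closed form of int_0^T exp(-(p t + q)^2) dt via the substitution
    u = p t + q: sqrt(pi)/(2 p) * (erf(p T + q) - erf(q))."""
    return math.sqrt(math.pi) / (2 * p) * (math.erf(p * T + q) - math.erf(q))

def gaussian_integral_numeric(p, q, T, n=2000):
    """Composite Simpson quadrature of the same integral (n must be even)."""
    h = T / n
    f = lambda t: math.exp(-((p * t + q) ** 2))
    s = f(0) + f(T)
    s += 4 * sum(f(i * h) for i in range(1, n, 2))
    s += 2 * sum(f(i * h) for i in range(2, n, 2))
    return s * h / 3

p, q, T = 1.3, -0.7, 2.0
assert abs(gaussian_integral_closed_form(p, q, T)
           - gaussian_integral_numeric(p, q, T)) < 1e-9
```

Differentiating the closed form with respect to T reproduces the integrand, which is the same check applied symbolically to verify equation (7).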
Then substituting equation (7) and its derivatives into the equation of motion yields an expression involving all of the original erf and erfi functions, plus a number of terms that do not contain any error functions. Collecting terms with respect to the various error functions, which is relatively straightforward although algebraically tedious, verifies that the coefficient of each of the error functions is zero, and that the terms that do not contain any error functions sum to sin(α t² / 2), which is the forcing function on the right-hand side of the equation. Since we are interested in the peak acceleration response, the second derivative of the solution, equation (7), is the sought-after response time history.

4. Closed-Form Solution: Octave Sweep

For the case of octave sweep, it is helpful to make a change of independent variable in equation (2), replacing t by a new variable proportional to 2^(β t), that is, proportional to the instantaneous excitation frequency (the octave sweep rate may equivalently be quoted in octaves per minute). With this change of variable, the initial conditions and the expression for the second time derivative transform correspondingly. The advantage of making this change of variable, from the perspective of numerical integration, is the absence of exponential functions of time in the forcing function in equation (8); rather, the forcing function is a constant-frequency sine wave, and the coefficients in the equation are at most quadratic in the new variable. This greatly improves the stability and reliability of the numerical integration. It is helpful to write equation (8) for the octave sweep in a more general form as well. Then using the variation of parameters method, we obtain an integral expression for the response in the new variable. Substituting for the forcing function and then expanding the sines in terms of complex exponentials yields integrals of the form ∫ z^(a−1) e^(−z) dz, which are readily expressed in terms of incomplete gamma functions after algebraic transformation.
The incomplete gamma function is given by Γ(a, z) = ∫_z^∞ t^(a−1) e^(−t) dt. For compactness, it is helpful to first introduce the following auxiliary parameters: , , , and . Then the resulting expression for reduces to Substituting yields the corresponding solution in the time domain: Computing the first and second derivatives of equation (13) and substituting them into the original equation of motion, equation (2), one discovers, after some algebra and collecting terms with respect to the various incomplete gamma functions, that the resulting equation can be put into the form . Since we are interested in oscillatory motion, which implies , it follows that reduces to zero, thereby showing that equation (13) does indeed satisfy equation (2). 5. Challenges in Separating Real and Imaginary Parts of Closed-Form Solutions The sought-after solutions are the real parts of equation (7) and equation (13). For the linear sweep, series expressions exist for the real and imaginary parts of both the error function and the imaginary error function, in terms of Hermite polynomials as well as hypergeometric functions. In practice, these series have very slow and highly nonmonotonic convergence properties, with the partial sums fluctuating over many orders of magnitude as successive terms are added. Furthermore, numerical evaluation of these partial sums using exact numbers as inputs is extremely slow and computation time increases nonlinearly with the number of terms, while evaluation using finite-precision numbers yields erroneous results. Since one does not know ahead of time how many terms will be needed for an accurate computation, this approach is impractical. As with the error function, there are similar numerical challenges in computing the incomplete gamma function of complex arguments. Accordingly, the closed-form solutions will be computed using equations (7) and (13) directly. 6. 
Challenges in Numerical Evaluation of the Exact Closed-Form Solutions There are also numerical challenges associated with the exact solutions because of the complex arguments of the error and gamma functions. Recall that the error function is given by and observe that the magnitude (i.e. the absolute value) of is the same as the magnitude of , since . However, once the argument becomes complex, we would need to integrate expressions of the form , and the presence of the term in the exponent means that the real part of the exponent grows very quickly with , that is, as . Since is analytic in the complex plane, we can use the Cauchy integral theorem for line integrals [8] to break the integral from 0 to (complex) into two parts: the integral from to plus the integral from to . In the integral from to , we are in effect integrating from to . Thus, both and increase very quickly, as shown in the plots in Figures 1 and 2. In order for the end result of the combinations of and that appear in the exact solution to sum to an oscillatory function, very precise cancellations are needed, meaning that extremely high precision is needed in order to do the numerical evaluations correctly. Figure 1. Plot of . Observe that over the range and , the magnitude of increases to about . Figure 2. Plot of . The behavior of is similar to that of . Because of the extremely high numerical precision requirements, Mathematica, which implements arbitrary-precision arithmetic, was chosen to compute the closed-form solutions. This made it possible to experiment with different levels of computational precision. Some results were computed with hundreds or thousands of digits of precision. 
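The cancellation problem can be illustrated with a much simpler Python analogue (our toy construction, not the actual erf/erfi computations): the closed-form solution combines astronomically large terms whose combination must collapse to an O(1) oscillatory result, and fixed-precision floating point destroys exactly the part that survives the cancellation.

```python
from fractions import Fraction
from decimal import Decimal, getcontext

# Toy cancellation: two ~1e20 quantities whose difference is O(1).
big = 10 ** 20

# 53-bit floating point loses the O(1) part entirely.
float_diff = float(big + 1) - float(big)        # 0.0

# Exact rational arithmetic keeps it.
exact_diff = Fraction(big + 1) - Fraction(big)  # 1

# Arbitrary-precision decimals also keep it, given enough digits.
getcontext().prec = 40
decimal_diff = Decimal(big + 1) - Decimal(big)  # 1
```

In the actual solutions the competing terms grow like exp(z²) rather than 1e20, so the number of digits needed for a correct cancellation depends on the inputs, which is precisely the difficulty discussed next.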
Depending on the values of the input parameters (sweep rate, natural frequency, damping coefficient, etc.), it was found that different levels of precision were needed in order to get reliable results, which is not a very attractive situation, since it is impossible to know ahead of time how much precision would be needed for any particular set of inputs. Fortunately, Mathematica also allows exact arithmetic (using rational and/or exact symbolic numbers as inputs), and this made it possible to use the exact analytic solutions in a computationally tractable form. More specifically, one can evaluate functions numerically using exact arithmetic by means of the following steps: 1. Convert all of the inputs to integers, rational numbers or exact symbolic numbers such as or , or rational multiples thereof, all of which are treated as having infinite precision. 2. Set the global variable , which specifies the maximum number of extra digits of precision, to . This enables as much extra precision as possible. 3. Evaluate the function of interest with the desired (exact) inputs. This will, in general, yield a very complicated exact expression. 4. Evaluate this exact value to the desired number of digits of precision for the output in order to get a recognizable numerical value, with the understanding that any imaginary “dust” arising from this numerical truncation will be ignored. (For the results presented later, we used 30 digits of output precision.) 7. Comparison of Exact and Numerical Solutions To build confidence in the closed-form solution, the equation of motion was also solved by direct numerical integration. For the linear sweep case, the results presented herein were obtained from the closed-form solution, equation (7), as well as direct integration of the differential equation of motion, equation (1). 
The closed-form solution was evaluated by first rationalizing all of the inputs to (7) (other than integers, rational numbers and multiples of and ) using (which converts any number to rational form), and then evaluating the real part of the result (to eliminate any very small imaginary numbers) to the desired number of digits of precision (typically 30, 50 or 100) with the function. The numerical solution was obtained by integrating the equation of motion (1) with out to some desired maximum time (typically some time after the sweep frequency hits the natural frequency of the system), with , , and . Figure 3 shows the response time histories for a system with a natural frequency of 5 Hz and a critical damping ratio of 1%. The sweep frequency was started at zero Hz and the sweep rate was 150 Hz/min, or . In the figure, the dashed orange line is the closed-form solution and the dotted blue line is the direct numerical integration solution. Clearly, the differences are imperceptible. Table 1 shows the numerical values for both solutions for a randomly selected subset of the time points used in plotting Figure 3. Again, it is evident that for all practical purposes, the solutions are identical. Figure 3. Acceleration response time histories of a single-degree-of-freedom system, , excited by a harmonic force with a linear sweep rate frequency of 150 Hz/min. Table 1. Selected acceleration response values from the time histories shown in Figure 3. For the octave sweep case, the results in Table 2 were obtained from the closed-form solution, equation (13), as well as by direct integration of the differential equation of motion in the -domain, equation (8). The procedure for evaluating the closed-form solution for the octave sweep case was identical to that described for the closed-form solution in the linear sweep case. 
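As an aside, the kind of direct time integration used for the Figure 3 comparison can be sketched in a few lines of stdlib Python. This is an illustrative sketch, not the article's Mathematica computation: it assumes the standard single-degree-of-freedom form x'' + 2ζωₙx' + ωₙ²x = sin φ(t) with unit forcing amplitude and zero initial conditions, a linear sweep starting from 0 Hz (so φ(t) = πRt² with R in Hz/s), fixed-step RK4 rather than an adaptive integrator, and Figure 3's parameters as defaults.

```python
import math

def swept_sine_response(fn=5.0, zeta=0.01, rate_hz_per_min=150.0,
                        t_max=4.0, dt=1e-4):
    """Fixed-step RK4 integration of
        x'' + 2*zeta*wn*x' + wn**2 * x = sin(phi(t)),
    with a linear sweep from 0 Hz: phi(t) = pi*R*t**2, R in Hz/s.
    Returns (times, accelerations); zero initial conditions assumed."""
    wn = 2.0 * math.pi * fn
    R = rate_hz_per_min / 60.0

    def rhs(t, x, v):
        # State derivatives: x' = v, v' = forcing - damping - stiffness.
        a = math.sin(math.pi * R * t * t) - 2.0 * zeta * wn * v - wn * wn * x
        return v, a

    x = v = t = 0.0
    times, accels = [], []
    for _ in range(int(round(t_max / dt)) + 1):
        k1x, k1v = rhs(t, x, v)
        times.append(t)
        accels.append(k1v)
        k2x, k2v = rhs(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = rhs(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = rhs(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
    return times, accels
```

With these defaults the sweep reaches the 5 Hz natural frequency at t = 2 s, and the peak of |acceleration| lands after that point, consistent with the behavior described in the article.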
The numerical solution was obtained by integrating equation (8) in the -domain with from out to some desired maximum value of (typically corresponding to some time beyond the time at which the sweep frequency hits the natural frequency of the system), with and , and then using equation (9) to transform the acceleration back to the -domain. Figure 4 shows the response time histories for a system with a circular natural frequency of 1/4 Hz and critical damping ratio of 0.01. The sweep frequency was started at 1/8 Hz and the sweep rate was 1/2 octaves/min. The orange dashed line is the closed-form solution and the blue dotted line is the direct numerical integration solution. Again, the differences are imperceptible. Table 2 provides the numerical values for both solutions for a randomly selected subset of the time points used in plotting Figure 4; for all practical purposes, the results are identical. Figure 4. Acceleration response time histories of a single-degree-of-freedom system, , excited by a harmonic force with an octave sweep rate frequency of 0.5 octaves/min. Table 2. Selected acceleration response values from time histories shown in Figure 4. 8. Construction of Peak Response Curves The construction of the peak response curves involved two steps. First, the times at which the peak of the absolute value of the acceleration occurred were obtained via numerical integration for the desired combinations of , and for linear sweep or for octave sweep. These times were then used as the starting points for a very fine-grained search of the exact analytical solutions in order to determine the peak acceleration in each case. Development of a generic algorithm to accomplish this was not trivial, as will be discussed. 
However, the effort was made easier by previously published results that indicate that the peak envelope values, which would contain the peak response values, would occur after the instantaneous frequency of excitation was equal to the natural frequency of the system. Hence, the search for the peaks was started at the point in the response time history where the instantaneous frequency of excitation was equal to the circular natural frequency of the system. For the linear sweep excitation, the time was computed as and for the octave sweep excitation, the value was computed as In the case of the numerical approach, we sorted the list of computed acceleration values generated via integration, starting at in order to find an initial approximation to the peak acceleration, and then did a more refined local search around this peak using standard local optimization techniques. In the case of the analytical approach, much smaller increments in were used in order to get a sharper picture of some unusual phenomena that emerge at low frequencies. Accordingly, interpolations of the times at which the numerically generated peak responses occurred were generated as a function of for combinations of and for linear sweep, or for octave sweep. Thus, for any value of , we could use this interpolated time value as the starting point for a refined numerical search that involved evaluating the exact analytical solution at very closely spaced time points in a neighborhood of this time. For this, we chose time points that were equally spaced in the phase of the forcing function, that is, at 0.25° phase increments, which provided precise, although computationally intensive, results. In addition, care was taken to search for the peak sufficiently past the start of the search, given by equation (14) or equation (15), to guarantee that the global maximum peak had been found. 
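The two start-of-search times can be written out explicitly. The sketch below assumes the sweep laws used in this article, f(t) = f₀ + Rt for linear sweep and f(t) = f₀·2^(Rt) for octave sweep, with the per-minute sweep rates converted to seconds; the function names are ours.

```python
import math

def t_star_linear(fn, f0, rate_hz_per_min):
    """Time (s) at which a linear sweep f(t) = f0 + R*t reaches fn."""
    return (fn - f0) / (rate_hz_per_min / 60.0)

def t_star_octave(fn, f0, rate_oct_per_min):
    """Time (s) at which an octave sweep f(t) = f0 * 2**(R*t) reaches fn."""
    return math.log2(fn / f0) / (rate_oct_per_min / 60.0)
```

For the Figure 3 parameters (5 Hz natural frequency, 150 Hz/min from 0 Hz) this gives 2 s, and for the Figure 4 parameters (1/4 Hz natural frequency, 1/2 octave/min from 1/8 Hz) it gives 120 s.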
Associated with the question of at which point in a time history to start the search for the peak value, that is, and , is the question of how far past this point the search should be conducted to guarantee that the global peak has been identified. Unfortunately, the only way we found to reliably accomplish this was through trial and error. For linear sweep, we found experimentally that it was very helpful to divide the range into two parts: and . For relatively low natural frequencies, that is , it was found experimentally that evaluating the function out to gives reliable results in most cases, with the peak response typically occurring about 20% of the way out to . At low values of , however, sometimes the peak response occurred about 45 to 50% of the way out to . Although with hindsight, we could have obtained the peak response without going out so far in time, we wanted to be sure that the peak response found was in fact the true global peak response. We observed that in some cases, what looked like a global peak value eventually got “dethroned” by a peak that occurred quite a few cycles later, due to the beating of the frequencies involved. Thus, all of the low-frequency responses, as well as a subset of the high-frequency responses, were visually monitored graphically, and if any peak responses were found at times more than 50% or so of the way out to , then the coefficient of for was increased accordingly. For higher natural frequencies, that is, , it was found that generally gave reliable results for high sweep rates (~150-200 Hz/min), while gave reliable results for lower sweep rates (~10-20 Hz/minute). In view of the oscillatory nature of the system, it was important to constrain the maximum integration step size to be at most a small fraction of a cycle. 
Based on prior experience with similar computations, we chose the maximum step size to be 1/40 of a cycle of the largest frequency of interest, which was the sweep frequency at the value described previously. For simplicity, we deliberately chose to constrain the maximum step size based on the largest frequency of interest, encountered at time , rather than attempt to change the maximum step size as the frequency changed. For the low sweep frequencies encountered in the early parts of a sweep, this step size was much smaller than 1/40 of a cycle, but this did not create any problems. The numerical integrator used () employs an adaptive algorithm that adjusts the step size as needed, subject to any user-prescribed constraints. In addition, we used fifth-order interpolation in the numerical integrator so that the acceleration would be a third-order interpolating function. Finally, in view of the progressive increase of sweep frequency with time, we found it useful to specify a maximum of 100,000,000 integration time steps (considerably more than the integrator’s default value), as in some cases a smaller maximum number of time steps (such as 10,000,000) did not allow the adaptive integrator to reach the global peak response. It was also required that the closed-form solution be evaluated at very closely spaced time increments in order to reliably find the peak acceleration. This strategy leveraged off of the previously computed numerical solutions, that is, the times at which the numerically obtained peak values occurred, in order to do a very fine-grained search (with the closed-form solution) in the neighborhood of the numerically computed peak value. Although a global list of search points could have been generated in other ways without the use of the numerical solution, using the points generated by the numerical integrator seemed like the most efficient approach. 
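The fine-grained phase-increment search described above can be sketched as follows. This is an illustrative stand-in, not the article's Mathematica code: accel is any callable (in the article it would be the exact closed-form acceleration), t_est is the numerically estimated peak time, and f_force approximates the local forcing frequency so that one grid step corresponds to 0.25° of forcing phase.

```python
import math

def refine_peak(accel, t_est, f_force, cycles=60, steps_per_cycle=1440):
    """Scan |accel(t)| on a grid of 0.25-degree phase increments of a
    forcing function with local frequency f_force (Hz), within +/- `cycles`
    cycles of the estimated peak time t_est.  Returns (t_peak, |a|_peak)."""
    dt = (1.0 / f_force) / steps_per_cycle      # one 0.25-deg phase step
    n = cycles * steps_per_cycle
    best_t, best_a = t_est, abs(accel(t_est))
    for k in range(-n, n + 1):
        t = t_est + k * dt
        if t < 0:
            continue                            # stay inside the sweep
        a = abs(accel(t))
        if a > best_a:
            best_t, best_a = t, a
    return best_t, best_a
```

The defaults mirror the search window reported in the article (±60 cycles, 1,440 increments per cycle); for a brute-force scan this is 172,801 evaluations per peak, which is why the closed-form evaluation cost matters.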
The strategy then was to use the numerically generated estimate of when the peak acceleration occurs and search within plus or minus some number of cycles of this time, at equally spaced increments in the forcing function phase. We found that searching within ±60 cycles with 1,440 phase increments per cycle (i.e. at 0.25° phase increments) yielded reliable results. 9. Peak Response Curves for Linear Sweep Figure 5 shows the peak response (from the exact solution) normalized by , the steady-state resonant response when the excitation frequency is equal to the undamped natural frequency of the system, plotted against the natural frequency of the system for three linear excitation sweep rates, , and Hz/min. The system has a critical damping ratio of and its natural frequency was varied from 0.25 Hz to 10 Hz in steps of 0.01 Hz. Each of the (almost 1,000) peak response values on each of these curves was computed via the process for computing peak acceleration (from the exact solution) described in Sections 7 and 8, that is, searching within ±60 cycles of the numerically generated estimate of when the peak acceleration occurs, with 1,440 phase increments per cycle. As can be seen, the attenuation of the peak response relative to the resonant steady-state response is significant for systems with low natural frequencies. As the natural frequency increases, which allows a greater number of response cycles during any given excitation frequency range, the attenuation decreases. These results are consistent with those published by others [6]. What is not consistent is the scalloped behavior of the peak curves at the lower frequencies. This behavior was obtained with both the numerically integrated results and the closed-form solution. Figure 6 shows an expanded close-up view of the lower-frequency range of Figure 5 and was generated by simply changing the horizontal plot range in Figure 5. The details visible in Figure 6 will be discussed in more detail later. Figure 5. 
Normalized peak response plotted against natural frequency for several linear excitation sweep rates. Left-to-right curves correspond to top to bottom in key. Figure 6. Close-up of the low-frequency range of Figure 5. Left-to-right curves correspond to top to bottom in key. Another observation is that the peak response during a frequency sweep can exceed the steady-state resonant response. This is shown in Figure 7, where the normalized peak responses are shown for two sweep rates (Figure 7 was obtained from Figure 5 by simply adjusting the vertical plot range to focus on the overshoot portion of the response). This might seem counterintuitive, since the frequency of excitation is sweeping through the natural frequency and therefore does not dwell. However, the sweep causes a response that is at the natural frequency of the system and that decays as a function of the system damping. Once the sweep frequency passes the natural frequency, the total response is the response due to the excitation plus the free-decay response of the system at its natural frequency. This is what causes the beating in the response once the sweep frequency passes the natural frequency. The decaying free response plus the transient response to the swept excitation can combine to produce higher peak responses than the resonant response caused by harmonic dwell at the natural frequency. The overshoot observed here is consistent with the overshoot observed by Cronin [3]. Figure 7. Close-up of overshoot phenomenon observed in Figure 5. Left-to-right curves correspond to top to bottom in key. Figure 8 shows the normalized peak response for various sweep rates plotted against the natural frequency squared divided by the linear sweep rate; this normalization allows comparison to results presented in the literature. The critical damping ratio for this system is . The data used in Figure 8 is the same as the data used in Figure 5, only plotted differently. 
Observe that the curves merge into one, as explained by Hawkes [5]. Figure 8. Normalized peak response for several linear sweep rates plotted against , where is the natural frequency and is the sweep rate. 9.1 Discontinuities in Derivative of Peak Response Curves at Low Frequencies In Figures 5 through 8, one observes periodic discontinuities in the derivative of the peak response curve. Moreover, the curve does not increase monotonically; sometimes it starts to dip down before hitting a discontinuity in slope and resuming its upward trend. One also observes that at very low frequencies the discontinuities in the derivative are not very regular, but as the natural frequency is gradually increased, they take on a much more regular nature. These discontinuities are best understood in terms of what we will call the competing peaks phenomenon, which can be most clearly explained by taking several observations into account: 1. The peak response always occurs some time after the sweep frequency reaches the natural frequency of the system. 2. As the natural frequency of the system is increased, the time at which the sweep frequency reaches the natural frequency occurs later and later, since for these problems the sweep frequency always started at 1/8 Hz. 3. Thus the time at which the peak acceleration occurs can be expected to increase as the natural frequency is increased. 4. In the array of plots shown in Figure 9, which show the response time histories for several very closely spaced values of , one observes that as the time at which the peak acceleration is reached increases, the dominant peak (i.e. the largest global peak) is eventually overtaken (from one value of to the next, i.e. from one plot to the next) by the secondary peak (i.e. the second-largest global peak), which has been increasing all along. 
So when this happens, the rate of change of the global peak suddenly changes, since it is now associated with a different peak, and thus there is a discontinuity in the slope of the peak response curve. These peak responses as a function of frequency are summarized in the plot insert at the lower-right corner of Figure 10. Figure 9. Evolution of peak acceleration as natural frequency is increased (left to right, then top to bottom). Figure 10. Evolution of peak acceleration as secondary peak overtakes the dominant peak. The first six points come from the preceding plots. In principle, there are actually three possible types of behavior that can lead to discontinuities in the derivative of the peak response curve, and all can be understood in terms of the preceding logic: 1. A decreasing peak is overtaken by an increasing peak (this is the case described in the preceding). 2. An increasing peak is overtaken by a more rapidly increasing peak. 3. A decreasing peak passes a more slowly decreasing peak so that the more slowly decreasing peak is now the dominant peak (possible in principle, but not observed in this example). The later the peak (i.e. the larger the natural frequency of the system), the longer the system has to build up to a steady-state-like response, so that successive peak accelerations (corresponding to successively higher natural frequencies) attain higher and higher values, hence the overall upward general trend of the peak response curve. For this same reason, at high sweep frequencies successive peaks in the response versus time curve all have very similar amplitudes, so that when the natural frequency is changed slightly and one peak overtakes another, the difference in the rates at which the dominant and secondary peaks are increasing is extremely small and barely noticeable. Thus the peak response curve appears to be smooth at high frequencies. 10. 
Peak Response Curves for Octave Sweep As described earlier, in the octave sweep case, it is extremely helpful to first make a change of independent variable by letting . The resulting differential equation for then has a constant-frequency forcing term in the domain (at the expense of coefficients in the equation that are at most quadratic in time). The resulting differential equation for , equation (8), was solved both analytically (equation (13)) and numerically, and then transformed back to the time domain. 10.1 Numerical Integration in the Domain The time at which the sweep frequency equals the oscillator's resonant frequency is given by equation (15). However, since the integration is being done in the domain, the corresponding expression for the value at which the system's resonant frequency is reached becomes Since in the domain the forcing function is a constant-frequency sine wave, we found experimentally that in most cases it was sufficient to integrate to a maximum value of 1.5 , although occasionally it was necessary to go up to 3 or 4 times . In some cases it is possible for the value of to become less than 1, and so we also imposed a lower bound of 1.05 on the maximum value of . We again used fifth-order interpolation for computing derivatives in and again allowed the integration to go for a maximum of 100,000,000 time steps: recall that for octave sweep we used the substitution , so increases exponentially with , and thus the number of steps in the domain can become much larger than the number of time steps in the domain. 10.2 Numerical Optimization to Identify Peak Response Once the differential equation (for a given set of , , and values) had been solved, the following procedure for finding the peak response was followed: 1. Create a list of the values generated via numerical integration. 2. Use equation (9) to evaluate (in the time domain) at each value and then from this list select the largest response. 3. 
Use the data from steps (1) and (2) to also create an interpolating function for as a function of . 4. Having found this initial estimate of the peak value, then use the interpolation function returned by to do a local optimization (via the function) around this initial peak, using this peak as a starting point. 10.3 Peak Response Curves for Octave Sweep Figure 11 shows the normalized peak response to various octave sweep rates. Each of the (almost 1,000) peak response values on each of these curves was computed via the process for computing peak acceleration (from the exact solution) described in Sections 7 and 8, that is, searching within ±60 cycles of the numerically generated estimate of when the peak acceleration occurs, with 1,440 phase increments per cycle. As expected, the slower the sweep rate, the lower the attenuation. In addition, the scalloped behavior in the peak response curves that was observed for the linear sweeps is also present here, although not as pronounced. This is because the octave sweep increases in frequency more rapidly than the linear sweep. Figure 11. Normalized peak responses with and several values of octave sweep rate (octaves/minute). At low natural frequencies, , the peak response was computed in increments of 0.002 Hz. Left-to-right curves correspond to top to bottom in key. Figure 12 shows an expanded view of Figure 11 corresponding to the lower frequency systems so that the scalloped behavior can be better seen. Figure 12 was obtained from Figure 11 by simply adjusting the vertical and horizontal plot ranges. Figure 12. Expanded view of peak response curves with at low natural frequencies for various octave sweep rates (octaves/minute). Left-to-right curves correspond to top to bottom in key. Figure 13 shows the results from Figure 11 normalized by the octave sweep rate, as suggested by Hawkes [5]. The data used in Figure 13 is the same as the data used in Figure 11, only plotted differently. 
As in the case with the linear sweep rate and its normalization factor, the octave sweep rate results also merge into a single curve for systems with the same critical damping ratio. Figure 13. Normalized peak response curves for and various octave sweep rates plotted against , where is in Hz and is in octaves/minute. Figure 14 shows comparable results to those in Figure 13 for systems with a critical damping ratio of . Figure 14. Normalized peak response curves for and various sweep rates with , plotted against ; is in Hz and is in octaves/minute. Figures 15 and 16 show the severe attenuation that occurs when the start frequency of the sweep is close to the natural frequency. In both figures, the sweeps were started at 1 Hz. As can be ascertained, the attenuation is significant for systems with natural frequencies close to or below 1 Hz, as would be expected. Hence, the attenuation is not only a function of the natural frequency, damping and sweep rate, but also of the proximity of the start frequency of the sweep to the natural frequency. As with Figure 11, each of the peak response values on each of the curves in Figures 15 and 16 was computed via the process for computing peak acceleration (from the exact solution) described in Sections 7 and 8. Figure 15. Normalized peak response curves for octave sweep with and (instead of 1/8 Hz). Left-to-right curves correspond to top to bottom in key. The derivation of closed-form solutions for the responses of single-degree-of-freedom systems subject to linear and octave swept-frequency harmonic excitation was presented. The closed-form solutions were compared to results obtained by direct numerical integration of the equations of motion with excellent agreement obtained. 
In addition, an in-depth discussion was presented on the numerical difficulties associated with the gamma and error functions of complex arguments that are part of the closed-form solutions, and how these difficulties were overcome by employing exact arithmetic with infinite-precision numbers, that is, rational and/or exact symbolic numbers. This included a study of precision requirements by performing computations with numerical precision exceeding what is available on today's computers. The closed-form solutions allowed the in-depth study of several interesting phenomena including: (a) computation of the peak response instead of the peak of the envelope function; (b) scalloped behavior of the peak response with frequent discontinuities in the derivative; (c) the significant attenuation of the peak response if the sweep frequency is started at frequencies near or above the natural frequency; and (d) the fact that the swept-excitation response could exceed the steady-state harmonic response when the system is excited at its natural frequency. We are grateful to Luke Titus of Wolfram Research for his valuable suggestions on exact numerical computation. This work was supported by contract # FA8802-14-C-0001. [1] F. M. Lewis, Vibration during Acceleration through a Critical Speed, Transactions of the American Society of Mechanical Engineers, 54(1), 1932 pp. 253–261. [2] R. L. Fearn and K. Millsaps, Constant Acceleration of an Undamped Simple Vibrator through Resonance, The Aeronautical Journal, 71(680), August 1967 pp. 567–569. [3] D. L. Cronin, Response of Linear, Viscous Damped Systems to Excitations Having Time-Varying Frequency, Ph.D. thesis, Dynamics Laboratory, California Institute of Technology, Pasadena, California, 1965. [4] R. Gasch, R. Markert and H. Pfutzner, Acceleration of Unbalanced Flexible Rotors through the Critical Speeds, Journal of Sound and Vibration, 63(3), 1979 pp. 393–409. [5] P. E. 
Hawkes, Response of a Single-Degree-of-Freedom System to Exponential Sweep Rates, Shock, Vibration and Associated Environments, Part II, Bulletin No. 33, February 1964 pp. 296–304. [6] J. A. Lollock, The Effect of Swept Sinusoidal Excitation on the Response of a Single-Degree-of-Freedom Oscillator, in 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, 2002, Denver, CO. [7] R. Markert and M. Seidler, Analytically Based Estimation of the Maximum Amplitude during Passage through Resonance, International Journal of Solids and Structures, 38(10–13), 2001 pp. 1975–1992. [8] L. Ahlfors, Complex Analysis: An Introduction to the Theory of Analytic Functions of One Complex Variable, New York: McGraw-Hill, 2000. C. C. Reed and A. M. Kabe, Peak Response of Single-Degree-of-Freedom Systems to Swept-Frequency Excitation, The Mathematica Journal, 2018. About the Authors Dr. Chris Reed is a Senior Engineering Specialist in the Structures Department at The Aerospace Corporation. As an applied mathematician, his work has encompassed mechanical vibrations, structural deformation, space-based sensor system performance, satellite system design optimization, flight termination system interference, fluid sloshing, electrostatic discharges, dielectric degradation on satellites and queueing systems. He has two patents and received a Wolfram Innovator award in 2017. His B.S. is from the California Institute of Technology and his M.S. and Ph.D. degrees are from Cornell University. Dr. Alvar M. Kabe is the Principal Director of the Structural Mechanics Subdivision of The Aerospace Corporation. He has made notable contributions to the state of the art of launch vehicle and spacecraft structural dynamics. He has published numerous papers, is an Associate Fellow of the AIAA, and has received The Aerospace Corporation's Trustees' Distinguished Achievement Award and the Aerospace President's Achievement Award. His B.S., M.S. and Ph.D. degrees are from UCLA. C. 
Christopher Reed Senior Engineering Specialist Structures Department The Aerospace Corporation P.O. Box 92957 Los Angeles, CA 90009-2957 Alvar M. Kabe Principal Director Structural Mechanics Subdivision The Aerospace Corporation P.O. Box 92957 Los Angeles, CA 90009-2957 Pseudo-Dynamic Approach to the Numerical Solution of Nonlinear Stationary Partial Differential Equations Mon, 08 Oct 2018 21:17:37 +0000 This article presents a numerical pseudo-dynamic approach to solve a nonlinear stationary partial differential equation (PDE) with bifurcations by passing from to a pseudo-time-dependent PDE . The equation is constructed so that the desired nontrivial solution of represents a fixed point of . The numeric solution of is then obtained as the solution of at a high enough value of the pseudo-time. 1. Introduction: Soft Bifurcation of a Stationary Nonlinear PDE The method described here can be applied to solve PDEs coming from different domains. However, it was initially developed to get the numerical solution of a stationary nonlinear PDE with a bifurcation. The method's application to a broader class of equations is briefly discussed at the end of the article. The term bifurcation describes a phenomenon that occurs in some nonlinear equations that depend on one or several parameters. These equations can be algebraic, differential, integral or integro-differential. At some values of a parameter, such an equation may exhibit a fixed number of solutions. However, as soon as the parameter exceeds a critical value (referred to as the bifurcation point), the number of solutions changes and either new solutions emerge or some old ones disappear. To be specific, we discuss the case of dependence on a single parameter . The new solutions can emerge continuously at the bifurcation point. The norm of the solution exhibits a continuous though nonsmooth dependence on the parameter at the bifurcation point (left, Figure 1). An explicit example is in Section 4.5. 
A bifurcation at which the solution is continuous at the bifurcation point is referred to as supercritical or soft. The behavior of the solution in the case of a subcritical or hard bifurcation is different: the norm of the solution is finite at the bifurcation point but has a jump discontinuity there (right, Figure 1).

Figure 1. Soft versus hard bifurcation. In the case of a soft bifurcation, the norm of the solution depends continuously on the control parameter, with a kink at the bifurcation point. In contrast, in the case of a hard bifurcation, the solution norm is discontinuous at the bifurcation point.

In this article, we focus only on the case of a nonlinear PDE with soft bifurcations; some peculiarities of hard bifurcations are briefly discussed in Section 5.3. In the most general form, a nonlinear PDE can be written as a system of nonlinear PDEs (1) for a vector-valued dependent variable u_s. The subscript s indicates that u_s is the solution of a stationary equation. Further, x is a vector of spatial coordinates, and the equation depends on a real numerical parameter. The system of equations (1) is analyzed in a domain Ω subject to zero Dirichlet boundary conditions (2). Also assume that zero satisfies the equation, and thus u_s = 0 represents a trivial solution of (1, 2). It is convenient to separate out the linear part of the operator in (1), which is often (though not always) representable in the form (4) of a sum of a linear differential operator (such as, for example, the Laplace operator) and a nonlinear part. The assumption that u_s = 0 solves equation (1) implies that the nonlinear part vanishes on the trivial solution. In its explicit form, we use the representation (4) only in Section 2.2, where we derive the critical slowing-down phenomenon. In all other cases, only the general form of the dependence of equation (4) on the parameter is needed. Nevertheless, we stick to the form (4) for simplicity, while the generalization is straightforward.
Let us also consider an auxiliary equation (5) formed by the linear part of the nonlinear equation (4). Equation (5) represents an eigenvalue problem, with eigenfunctions and eigenvalues indexed by a discrete variable n, provided the discrete spectrum of (5) exists. Let us assume that at least a part of the spectrum of (5) is discrete, and that the index starts from zero, n = 0. The state with n = 0 is referred to as the ground state.

Without proofs, we recall a few facts from bifurcation theory [1] valid for soft bifurcations of such equations. Assume that the trivial solution is stable for some values of the parameter. As soon as the parameter becomes equal to the smallest discrete eigenvalue of the auxiliary equation (5), this solution becomes unstable. As a result, a nontrivial solution branches off from the trivial one. In the close vicinity of the bifurcation point, this solution has the asymptotics (6), an expansion over the eigenfunctions of equation (5) belonging to the smallest eigenvalue, with a set of amplitudes as coefficients. The index enumerating these eigenfunctions runs over the subspace of the functional space where (5) has a nonzero solution. The exponent entering the asymptotics exceeds unity. There are a few methods available to determine it; listing them is beyond the scope of this article. However, the simplest of these methods can be applied if there exists a generating functional enabling one to obtain the system of equations (1) as its minimum condition (7), stated in terms of the variational derivative. We refer to this functional as the energy, in analogy with physics. Substituting the representation (6) into the energy functional and integrating out the spatial coordinates, one finds the energy as a function of the amplitudes and the parameter.
Minimizing the energy with respect to the amplitudes yields the system of equations for the amplitudes, referred to as the ramification equation (8). Its solution is only accurate close to the bifurcation point. Assuming that the bifurcation takes place with decreasing parameter (as is the case in the following example), one finds the typical solution (9) for the amplitudes, with real coefficients to be determined using the original equation. One of the methods to find these parameters analytically is discussed in Section 3; further analytical methods may be found in [1]. This article focuses on finding these parameters numerically (Section 4.5). All theorems and proofs for the preceding statements, along with more general methods of the derivation of the ramification equation, can be found in [1].

2. Numerical Description of a Soft Bifurcation: A Problem and a Workaround

The bifurcation theory formulated so far is quite general: equation (1) can be differential, integral or integro-differential [1]. In what follows, we focus only on the more specific class of nonlinear partial differential equations. The solution of the spectral problem (5) yields the bifurcation point; the solutions (6) and (9) are only valid very close to this point. As the parameter moves away from it, the solution soon deviates from the correct behavior quantitatively, and often fails to resemble (6) even qualitatively. For this reason, to get a solution at a finite distance from the bifurcation point that is correct both qualitatively and quantitatively, one needs to solve (1) numerically. In the case of a hard bifurcation, none of the machinery of the theory of soft bifurcations described so far works, and studying the bifurcation numerically often becomes the only possibility.
However, the direct numerical solution of nonlinear equations like (1) and (4) with some nonlinear solvers only returns the trivial solution, even at values of the parameter at which the trivial solution is unstable and a stable nontrivial solution already exists. A plausible reason may be as follows: the solver starts to construct the PDE solution from the boundary. Here, however, the boundary condition u_s|_{∂Ω} = 0 is already part of the trivial solution. Thus the solver appears to be placed at a true solution of the equation and is then unable to climb down from it. To find a nontrivial solution, one needs a method that starts from some initial approximation that, even if rough, is quite different from the trivial solution. Furthermore, this method should converge to the nontrivial solution by a chain of successive steps.

2.1. A Pseudo-Dynamic Equation

One can do this with the pseudo-dynamic approach formulated in the present article. Let us introduce a pseudo-time. The word "pseudo" indicates that this is not real time; it just represents a technical trick that helps with the simulation. Assume now that the dependent variable is a function of both the set of spatial coordinates x and the pseudo-time. Instead of the stationary equation (1), let us study the behavior of the pseudo-time-dependent equation (10), in which the pseudo-time derivative of the dependent variable is equated to the stationary operator applied to it. One solves equation (10) with a suitable nonzero initial condition. Let us stress that the solution of the time-dependent equation (10) is not the same as the solution of the stationary equation (1). One could also construct the pseudo-time-dependent equation with the opposite sign in front of the stationary operator. The idea of such an extension is that one of the two sign choices yields an equation whose solution approaches a fixed point as the pseudo-time grows, while with the other choice the solution diverges. By trial and error, one chooses the equation whose solution converges to the fixed point.
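The fixed-point idea can be illustrated with a deliberately minimal sketch (mine, not the article's code, which uses Mathematica): a scalar toy problem F(u) = λu − u³ = 0 relaxed by explicit Euler stepping of du/dt = F(u). Starting from zero reproduces the trap that catches the stationary solvers, while any nonzero start converges to a nontrivial root.

```python
# Toy illustration of the pseudo-dynamic approach (an illustrative
# sketch, not the article's PDE): relax du/dt = F(u) = lam*u - u**3
# toward a fixed point F(u) = 0 by explicit Euler time stepping.
def relax(u0, lam=0.25, dt=0.01, steps=20000):
    u = u0
    for _ in range(steps):
        u += dt * (lam * u - u**3)  # one pseudo-time step
    return u

print(relax(0.0))   # starting at the trivial solution stays there: 0.0
print(relax(0.01))  # a nonzero start converges to sqrt(lam) = 0.5
```

Exactly as in the article, the trivial fixed point traps an iteration that starts on it, which is why the initial condition must be nonzero.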
The operator has not yet been specified; for definiteness, let us assume that the fixed point arises for equation (10), that is, for the choice of the plus sign. The convergence of the solution of the dynamic equation to the fixed point enables one to apply the following strategy. Instead of the static equation (1), which is difficult to solve numerically, one simulates the pseudo-dynamic equation (10) using a suitable time-stepping algorithm. The advantage of this approach is the possibility of starting the simulation from an arbitrary distribution chosen as the initial condition, provided it agrees with the boundary conditions. From the very beginning, such a choice takes one away from the trivial solution. The time-stepping process takes the initial condition for each step from the previous solution. The solution starting from any function within the attraction basin of the fixed point gradually converges to it with time. After having obtained the solution of the pseudo-time-dependent equation, one approximates the stationary solution by the pseudo-time-dependent one taken at a large enough value of the pseudo-time. The meaning of the words "large enough" is clarified in Section 4.3.

The approach can be given a pictorial interpretation (Figure 2). In the infinite-dimensional functional space, choose an infinite set of basis functions; the solution can then be represented as an expansion over this basis.

Figure 2. Schematic view of the 3D projection of the infinite-dimensional functional space with a trajectory from the initial state (blue dot) to the fixed point (red dot).

The trajectory in this space goes from the initial state to the final state, as shown by the two dots. The time derivative represents the velocity of the motion of a point through this space, while the right-hand side of (10) can be regarded as a force driving this point. Thus equation (10) can be interpreted as describing the driven motion of a massless point particle with viscous friction through the functional space.
In these terms, the condition (1) means that the driving force is equal to zero at some point of the space, which is the location of the fixed point of the nonlinear equation (10). If the energy functional for equation (1) exists, one can take the interpretation one step further (Figure 3).

Figure 3. Schematic view of the energy functional as a function of the coordinate in the functional space (A) above and (B) below the bifurcation point. The cross section of the infinite-dimensional space along a single coordinate is shown. The points show initial positions of the particle, while the arrows indicate its motion to the nearest minimum of the potential well.

Indeed, according to the definition given, the solution of equation (1) delivers a minimum to the energy functional. In this case, one can regard the dynamic equation (10) as describing the viscous motion of a massless point particle along a hypersurface over the infinite-dimensional functional space, the surface forming a potential well. The motion goes from some initial position to the minimum of the potential well, as shown schematically in Figure 3. Above the bifurcation, this minimum corresponds only to the trivial solution (A), situated at the origin. Below the bifurcation, the energy hypersurface exhibits a new configuration with new minima, while the previous minimum vanishes. As a result, below the bifurcation, the point particle moves from its initial position (shown by dots in Figure 3) to one of the newly formed minima (as the red and green arrows show in B). The functional space has infinite dimension, and essential features of the numeric process may involve several dimensions. The one-dimensional cross section displayed in Figure 3 is therefore oversimplified and only partially represents the bifurcation phenomenon. Equation (10) can be rewritten in the form (12). Though lacking a stationary nonlinear PDE solver at present, Mathematica offers the "MethodOfLines" option, efficiently applicable to dynamic equations like (12). This method is applied everywhere in the rest of this article.
The evident penalty of this approach is that the computation time can become large, especially in the vicinity of the bifurcation point; this peculiarity is discussed next.

2.2. A Critical Slowing Down

Close to the critical point, the relaxation of the solution to the fixed point dramatically slows down. This is referred to as critical slowing down. Its origin is illustrated in Section 4. To simplify the argument, let us consider a single equation with a one-component dependent variable that still depends on the D-dimensional coordinate x. The generalization to a system of equations is straightforward, though a bit cumbersome. According to (6), close to the bifurcation point, one can look for the solution of equation (12) in the form (13), ignoring the higher-order terms on the assumption that the amplitude is small. Substitute (13) into equation (12) and linearize it. Here one should distinguish between the case above the bifurcation point, where the linearization should be done around the trivial solution, and that below it, where one linearizes around the nontrivial solution (the second line of equation (9)). In the former case, making use of (5), one obtains the dynamic equation (14) for the amplitude, implying exponential relaxation with a relaxation time inversely proportional to the distance of the parameter from its critical value. Below the critical point, analogous but somewhat lengthier arguments give a characteristic time half as large as that above the critical point. One comes to the relation (15). One can see that the relaxation time diverges as the parameter approaches the critical value from either side. From the practical point of view, this suggests increasing the simulation time according to (15) near the critical point. The result (15) is valid for equation (12), in which the parameter enters the linear part of the pseudo-dynamic equation only linearly, as a product with the dependent variable. In the general case, one still finds a diverging relaxation time, though the numerical factors above and below the bifurcation point may be different.
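The divergence of the relaxation time is easy to observe in the same toy model sketched earlier (my own illustration, with the critical parameter value at 0 by construction): the pseudo-time needed to reach the fixed point grows as the parameter approaches the critical value.

```python
# Critical slowing down in the toy model du/dt = lam*u - u**3, whose
# critical point is lam = 0: measure the pseudo-time needed to come
# within `tol` of the nontrivial fixed point sqrt(lam).
def time_to_converge(lam, u0=0.01, dt=0.001, tol=1e-3):
    u, t = u0, 0.0
    target = lam**0.5
    while abs(u - target) > tol:
        u += dt * (lam * u - u**3)
        t += dt
    return t

for lam in (0.4, 0.2, 0.1, 0.05):
    print(lam, round(time_to_converge(lam), 1))  # times grow as lam -> 0
```

This is the practical reason the article scales the simulation time with the relaxation time (15) near the critical point.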
The phenomenon of critical slowing down was first discussed in the framework of the kinetics of phase transitions [2].

3. Example: A 1D Ginzburg–Landau Equation

As an example, let us study the 1D PDE (16), in which the dependent variable is a function of the single coordinate x. This equation exhibits a cubic nonlinearity. A classical Ginzburg–Landau equation has only constant coefficients. In contrast, equation (16) possesses the inhomogeneous potential (17), shown by the solid line in Figure 4. It thus represents a nonhomogeneous version of the Ginzburg–Landau equation. One can see that (16) has the trivial solution.

Figure 4. The potential from equation (17) (solid, red) and the solution of the auxiliary equation (18) (dashed, blue).

Equations (16) and (17) play an important role in the theory of the transformation of types of domain walls into one another [3]. The auxiliary equation (5) in this case takes the form (18), where n enumerates the eigenvalues and eigenfunctions belonging to the discrete spectrum. One can see that equation (18) represents the Schrödinger equation [4] with the potential well (17). The exact solution of the auxiliary equation (18) is known [3, 4]. It has two discrete eigenvalues, with n = 0 and n = 1, and the ground-state (n = 0) solution has a closed form, which can be easily checked by direct substitution. The energy functional generating the Ginzburg–Landau equation (16, 17) has the form (20). Writing the solution according to equation (6), substituting it into equation (20) for the energy, eliminating the term with the derivative using equation (18) and applying the Gauss theorem, one finds the energy as a function of the amplitude ξ (21). The ramification equation then takes the form (22), with the solution (23) for the amplitude.

4. Numerical Solution of the Ginzburg–Landau Equation

4.1. Pseudo-Time-Dependent Equation

Let us now look for the numerical solution of equation (16). The problem to be solved is to find the point of bifurcation and the overcritical solution below it.
The pseudo-time-dependent equation corresponding to (16) can be written as (24). The choice of the initial condition is not critical, provided it is nonzero. The method of lines employed in the following is relatively insensitive to whether or not the initial condition precisely matches the boundary conditions. We demonstrate the solution with three initial conditions in the next section.

4.2. Solution within a Finite Domain

The method of lines is applied here since it can solve nonlinear PDEs, provided these equations are dynamic, which is exactly the case within the pseudo-time-dependent approach. To address the problem numerically, let us place the boundary conditions at a finite distance, rather than at infinity. The distance must be greater than the characteristic dimension of the equation, which is the distance over which the solution exhibits a considerable variation. For the Ginzburg–Landau equation (16), the characteristic dimension is defined by the width of the potential well (17), which is about 1. That is, let us start with the boundaries placed at a moderate finite distance; we check the quality of the result obtained with such a boundary later. To obtain a precise enough solution, one also needs a spatial discretization with a step smaller than the characteristic dimension of the equation, which we just saw is of order 1; a step several times smaller than that appears to be enough. The following code solves the equation with a discretization step chosen accordingly. To avoid conflicts with variables that may have been previously set, this notebook has the setting Evaluation ▸ Notebook's Default Context ▸ Unique to This Notebook. According to Section 2, the time-dependent solution obtained converges to the solution of the stationary problem as the pseudo-time grows. In practice, however, one can instead take some finite value of the pseudo-time, provided that it is large enough.
We solve the pseudo-dynamic equation (24) with each of the three initial conditions stated before. Further, in order to give a feel for the method, we visualize and animate the solution, varying the control parameter as well as the initial conditions. This requires a few comments. As discussed in Section 2.2, the maximum time of simulation strongly depends on the control parameter. This is accounted for by choosing the simulation time according to (15), with a prefactor chosen by trial so that the simulation does not last too long, but also so that the simulation time always ensures convergence for any combination of parameter value and initial condition. In the simulations, you can observe two essential features of the present method. First, near the fixed point, the solution converges more slowly and the curve gradually appears to stop changing. Second, near the critical point, the critical slowing down (see Section 2.2) takes place, which requires considerably longer to approach the fixed point. In the animation, the curve evolves much more slowly at parameter values close to the critical one, and the convergence, therefore, requires much more time. In the animation, choose one of the three initial conditions and a value of the control parameter. Click the button with the arrow to start the animation. The value of the current time is shown at the top-left corner. The distribution shown by the blue curve at the initial time corresponds to the initial condition, while afterward the animation shows its further evolution. For each of the three initial conditions, the solution converges to the same bell-shaped curve. One can make sure that for small values of the parameter, the solution is nonzero. However, for values greater than about 0.5, the solution is trivial.
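The article's Mathematica code is not reproduced above; purely for illustration, here is a method-of-lines sketch in Python of a pseudo-dynamic Ginzburg–Landau equation of the same type. The potential is an assumption of mine — a Pöschl–Teller well U(x) = −2/cosh²x, whose Schrödinger ground state 1/cosh x has energy −1, so this sketch bifurcates at a parameter value of 1, not at the article's value of about 0.5.

```python
import numpy as np

# Pseudo-dynamic relaxation of u_t = u_xx - (lam + U(x))*u - u**3 with
# zero Dirichlet boundaries and an *assumed* Poschl-Teller well;
# explicit Euler in pseudo-time, second-order finite differences in x.
def solve(lam, L=10.0, n=201, dt=1e-3, T=200.0):
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    U = -2.0 / np.cosh(x)**2
    u = np.exp(-x**2)              # nonzero initial condition
    for _ in range(int(T / dt)):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / h**2
        u = u + dt * (lap - (lam + U)*u - u**3)
        u[0] = u[-1] = 0.0         # zero Dirichlet boundary conditions
    return x, u

def norm2(x, u):                   # squared Hilbert norm of the solution
    return float(np.sum(u**2)) * (x[1] - x[0])

x, u_below = solve(0.5)  # below the sketch's bifurcation: bell-shaped
x, u_above = solve(1.5)  # above it: relaxes to the trivial solution
print(norm2(x, u_below), norm2(x, u_above))
```

As in the article's animation, any nonzero initial condition relaxes to a bell-shaped profile below the bifurcation and to the trivial solution above it.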
We show how the norm depends on the time limit at three fixed values of the control parameter, all below the bifurcation point. The following code makes a nested list containing three sublists corresponding to the three values. Each sublist consists of pairs of simulation time and norm, with the simulation time increasing from 10 to approximately 3000. The exponential rate of increase is chosen so as to make the plot on a semilogarithmic scale look equally spaced (Figure 5).

Figure 5. Semilogarithmic plots of the Hilbert norm of the solution for the three parameter values (disks, squares and diamonds) depending on the simulation time.

There is convergence for all three values of the parameter. However, the simulation time at which the convergence becomes satisfactory depends on the parameter. For the value farthest from the bifurcation point, the solution is already near convergence at a simulation time slightly exceeding 100, so with a somewhat larger time one can be sure that the solution is satisfactory. We use this in Section 4.4 to determine the expression for the simulation time accounting for the critical slowing down. In contrast, the solution for the value closest to the bifurcation point shows some evolution even at the largest simulation times.

4.4. The Critical Slowing Down in the Numeric Process

As we showed in Section 2.2, the simulation time that gives satisfactory convergence depends on the control parameter. To get an accurate solution, the simulation time must considerably exceed the relaxation time. For example, in the calculation of the result shown in Figure 4, substituting the parameters into (15) gives a relaxation time roughly eight times smaller than the simulation time at which the convergence becomes good enough. This implies that to find an accurate solution in the close vicinity of the bifurcation point, one has to define the simulation time as a multiple of the diverging relaxation time (26), with a regularization parameter keeping it finite at the bifurcation point itself.

4.5. In Search of the Bifurcation Point

The bifurcation point can be found by analyzing the same integral, now calculated with the simulation time defined by (26). This time we study the integral as a function of the control parameter. The transition from the nontrivial to the trivial solution occurs at the bifurcation point; accordingly, the integral changes there from a nonzero value to zero.
To find the critical point, note that bifurcation theory (23) predicts the norm below the bifurcation point to follow the power law (27) in the distance from that point, with constant parameters that we find by fitting. We now find the numerical solution of equation (16) as a function of the control parameter; the norm obtained from this solution depends on the parameter. We vary the parameter starting from 0.45 up to the vicinity of the bifurcation point to create a list of pairs of parameter values and norms. The most critical region for the dependence is close to the critical point, so the points there are taken to be about 10 times more dense. This list is fitted to the function (27). The list is plotted with the analytic function obtained by fitting (Figure 6).

Figure 6. Behavior of the Hilbert norm of the solution in the vicinity of the bifurcation point. Dots show the integrals (25), while the solid line indicates the result of fitting with the relation (27).

The values of the integrals at various parameter values are shown by the red dots in Figure 6, while the fitting curve is shown by the solid blue curve; the fit yields the value of the bifurcation point. We used equation (26) for the simulation time in the solution. However, this equation depends on the spectral value at the bifurcation. In the present case, this value was known, which considerably simplifies the task. In general, it is only established in the course of the fitting procedure, requiring an iterative approach. For the first simulation, we fix some large enough simulation time, independent of the parameter, and obtain a fit. This fit gives a first guess for the spectral value, which can then be used in the simulation with equation (26). This procedure can be repeated until a satisfactory value is achieved.

4.6. Varying the Boundary

To check how the choice of the boundary affects the results, we solve the problem while gradually increasing the boundary distance (Figure 7). (This takes some time.)

Figure 7. A double-logarithmic plot showing the convergence of the bifurcation point with increasing boundary distance.

Figure 7 displays the error in the spectral value obtained by the numerical process.
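The fitting step itself is simple enough to sketch in a few lines of Python (synthetic data stands in for the computed norms, and the square-root law with a critical value of 0.5 is assumed here for illustration): if the norm behaves as a·sqrt(λc − λ) below the bifurcation, its square is linear in the parameter, so an ordinary linear fit recovers the bifurcation point.

```python
import numpy as np

# Locating the bifurcation point from norm-vs-parameter data.  Assuming
# the square-root law N(lam) = a*sqrt(lam_c - lam) below the
# bifurcation, N**2 is linear in lam, and a linear least-squares fit
# recovers lam_c as the root of the fitted line.
lam_c_true, a = 0.5, 1.3                   # synthetic "truth" for the demo
lam = np.linspace(0.40, 0.49, 10)
N = a * np.sqrt(lam_c_true - lam)          # stand-in for computed norms

slope, intercept = np.polyfit(lam, N**2, 1)
lam_c_fit = -intercept / slope             # where the fitted N**2 vanishes
print(round(lam_c_fit, 6))
```

With real data, a general nonlinear fit of (27) (as the article does) also returns the exponent rather than assuming it.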
As one could have expected, the error decreases substantially with the increase of the boundary distance.

5. Discussion

The preceding example has shown the application of the pseudo-dynamic approach to solving a 1D nonlinear PDE with zero boundary conditions that exhibits a supercritical (soft) bifurcation. That simple problem was chosen to keep the processing time as short as possible. Now possible extensions are discussed.

5.1. Nonzero Boundary Conditions

Recall that zero boundary conditions often (if not always) represent a problem for a nonlinear solver. Starting from zero along the boundary, such a solver often returns only the trivial solution, since zero is, indeed, a solution of the equation considered here. For this reason, a problem like the one discussed in this article necessarily requires some specific approach that can converge to a nontrivial solution. It is for this type of equation that the approach presented here has been developed. One should, however, make two comments. First, there are numerous problems where the bifurcation takes place from a solution that is nonzero; the boundary condition in this case is nonzero as well. A trivial observation shows that one comes back to the original problem by shifting the dependent variable by the solution from which the bifurcation takes place. Second, the approach formulated here can be applied to nonlinear equations with no bifurcation. These equations can have boundary conditions that are either zero or nonzero. Indeed, such equations can often be solved by a nonlinear solver if one is available. Among other approaches, the present one can be applied; nonzero boundary conditions are not an obstacle to the transition to the pseudo-time-dependent equation. Though the present approach takes longer, in certain cases it is preferable; for example, when, due to a strong nonlinearity, the nonlinear solvers fail. The solver moves along the pseudo-time parameter in small steps, gradually passing from the initial condition to the final solution. Such slow ramping can be stable.

5.2.
Dimensionality

The space dimensionality does not limit the application of our approach (for 2D examples, see [5, 6]).

5.3. A Supercritical (Soft) versus a Subcritical (Hard) Bifurcation

In the case of a soft bifurcation, the energy can have only one type of minimum, as shown in Figure 3, describing the convergence either to the trivial or to the nontrivial solution. The trajectory always flows into the minimum along the steepest slope of the energy. The minimum is a fixed point. An essentially different situation occurs for a hard bifurcation, when the energy hypersurface may have multiple minima. Figure 8 (A) shows a schematic cross section of the infinite-dimensional functional space along a plane, leaving out all other dimensions. This cross section shows a situation with minima of different types, one of which is more pronounced than the others. The arrows schematically indicate the trajectories in the functional space. These start from the initial conditions displayed by the dots in Figure 8 (A, B) and converge to the minima (Figure 8 A). The green arrow shows the convergence of the process to the principal minimum, while the red one converges to a secondary minimum.

Figure 8. Schematic view of the energy functional along a direction of the functional space, where it exhibits a metastable minimum (A). The green point schematically indicates the initial condition starting from which the solution converges to the one corresponding to the principal energy minimum (green arrow), while the red dot shows the initial condition leading to convergence to the secondary minimum. (B) The trajectory ends at an inflection point.

As a result, depending on the choice of initial condition, some solution trajectories may end up at a fixed point that is a secondary minimum rather than at the main one. Also, keep in mind that the dimension of the functional space is infinite, and the energy can have many unobvious secondary minima.
There can also be inflection and saddle points of the energy hypersurface (Figure 8 B). The trajectory completely stops at such a point. It is a fundamental question whether or not such secondary fixed points, as well as the inflection points, belong to the problem under study. The answer is not straightforward; one should look for it based on the origin of the equation. Let us also mention possible gently sloping valleys in the energy relief. In this case, the motion along such a shallow slope may appear practically indistinguishable from asymptotic falling into a fixed point during the numerical process.

6. Summary

This article offers an approach to solving nonlinear stationary partial differential equations numerically. It is especially useful in the case of equations with zero boundary conditions that have both a trivial solution and nontrivial solutions. The approach is based on solving a pseudo-time-dependent equation instead of the stationary one, with an initial condition different from zero. The solver can then avoid sticking to the trivial solution and is able to converge to a nontrivial solution. However, the penalty is increased simulation time.

[1] M. M. Vainberg and V. A. Trenogin, Theory of Branching of Solutions of Non-linear Equations, Leyden, Netherlands: Noordhoff International Publishing, 1974.
[2] E. M. Lifshitz and L. P. Pitaevskii, Physical Kinetics: Course of Theoretical Physics, Vol. 10, Oxford, UK: Pergamon, 1981, Chapter 101.
[3] A. A. Bullbich and Yu. M. Gufan, Phase Transitions in Domain Walls, Ferroelectrics, 98(1), 1989 pp. 277–290. doi:10.1080/00150198908217589.
[4] L. D. Landau and E. M. Lifshitz, Quantum Mechanics: Course of Theoretical Physics, Vol. 3, 3rd ed., Oxford, UK: Butterworth-Heinemann, 2003.
[5] A. Boulbitch and A. L. Korzhenevskii, Field-Theoretical Description of the Formation of a Crack Tip Process Zone, European Physical Journal B, 89(261), 2016 pp. 1–18. doi:10.1140/epjb/e2016-70426-6.
[6] A.
Boulbitch, Yu. M. Gufan and A. L. Korzhenevskii, Crack-Tip Process Zone as a Bifurcation Problem, Physical Review E, 96(013005), 2017 pp. 1–19. doi:10.1103/PhysRevE.96.013005.

A. Boulbitch, Pseudo-Dynamic Approach to the Numerical Solution of Nonlinear Stationary Partial Differential Equations, The Mathematica Journal, 2018.

About the Author

Alexei Boulbitch graduated from Rostov University (USSR) in 1980 and obtained his Ph.D. in theoretical solid-state physics from the same university in 1988. In 1990 he moved to the University of Picardie (France) and later to the Technical University of Munich (Germany), which granted him his habilitation degree in theoretical biophysics in 2001. His areas of interest are bacteria, biomembranes, cells, defects in crystals, phase transitions, physics of fracture (currently active), polymers and sensors (currently active). He presently works in industrial physics with a focus on sensors and gives lectures at the University of Luxembourg.

Alexei Boulbitch
Zum Waldeskühl 12
54298 Igel
I approve the message of Aaronson's QM comic

She explains to her son – who no longer wants to be considered a child but clearly is one – that the usual statements about quantum mechanics, qubits, the clever trick of quantum computers, and the availability of quantum computers that are often described in the pop-science journals aren't really right. There is one unrealistic aspect of the comic. In the real world, young boys and men are almost never taught quantum mechanics and important facts about it by their mothers. But if we ignore this piece of the feminist fantasy – feminism was guaranteed to affect Aaronson's cartoon in one way or another – I must say that I agree with everything that the comic claims.

First, the son is told that it isn't true that quantum bits are 0 and 1 "at the same time". Instead, one of the options may be measured, but the way they're combined is a "new type of ontology", a complex generalization of the probability calculus. I think that I've used almost the same words many times in the past. The wave functions are closer to probabilities but they're not quite the usual probabilities. Instead, they're probability amplitudes, which are complex and also have the ability to constructively or destructively interfere. When one is observing anything, the amplitudes are converted to the usual probabilities only. But when no one is looking, the probability amplitudes evolve as a new entity according to new rules that have no counterparts in classical physics.

Another myth – pumped into her son – that the mother debunks is the idea that the quantum computer is fast because it's nothing else than a classical computer trying all the possible answers in parallel. As I explained e.g. through the mouth of a fake PM Trudeau, this ain't the case. There's no "splitting of the worlds" during a quantum computation.
On the contrary, the splitting of the worlds may only make sense after a measurement, which can only occur after decoherence – but the quantum computation depends on the absence of any decoherence (I will make the same observation again later). So a key necessary condition for the quantum computer to work – and to do some things that are practically impossible on classical computers – is that there's no decoherence and no splitting of the world during the calculation.

The son has also been brainwashed by the idea that his classmates are testing quantum interference with frogs (and probably also oil droplets). Well, they aren't, the mother points out. The quantum probability amplitudes are something totally different from any feature of these macroscopic classical objects and experiments we've been familiar with for centuries.

When the mother tries to summarize the wisdom, she says:

Relax. It's all just different consequences of one fact: classical events have probabilities, and quantum events have amplitudes. Remember that, and you'll do just fine.

This looks utterly reasonable to me, too. To understand quantum mechanics, you really need to understand that it's a generalization of the probabilistic thinking that's been around for a long time – and it's a generalization in which you have to work not just with the probabilities but also with some semi-baked product (in English, the approximate equivalent is an "intermediate good" but it doesn't quite capture the more-food-industry-focused "polotovar" in Czech), the probability amplitudes, which are complex and have to be manipulated in ways that are analogous (and give rise) to the usual probabilistic calculus but don't quite coincide with any pre-1925 calculation of probabilities.
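The slogan "classical events have probabilities, quantum events have amplitudes" fits in a few lines of Python (my illustrative sketch, not from the comic): for two indistinguishable paths of equal weight, the complex amplitudes add before squaring, so the detection probability depends on the relative phase, whereas mixing classical probabilities would always give 0.5.

```python
import cmath

# Amplitudes vs. probabilities: two paths with equal amplitude
# 1/sqrt(2) are recombined into a normalized superposition.  The
# complex amplitudes add *before* squaring, so the relative phase
# drives the outcome anywhere between 0 and 1, while averaging the
# classical probabilities would always give 0.5.
a = 1 / 2**0.5
probs = []
for phase in (0.0, cmath.pi):
    amp = (a + a * cmath.exp(1j * phase)) / 2**0.5  # recombined amplitude
    probs.append(abs(amp)**2)
print(probs)  # constructive ~1.0, destructive ~0.0
```

The destructive case – a probability driven to zero by adding a second way for the event to happen – is exactly what has no counterpart in pre-1925 probability calculus.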
The last frames of the comic also mock the popular articles (and there exist very similar ones in the real world) claiming that iPhone 8 will be a quantum computer, that someone will have a gigaqubit quantum computer soon, and that quantum computing is equivalent to consciousness because "both are weird". I would really endorse every frame of this comic – including the spirit and accents. Weinersmith is a good cartoonist but I guess that "most of the work" according to my counting was done by Aaronson – after all, the mother and her son are just talking to each other all the time. I find it irresistible to react to some comments by Aaronson's readers: Hwold quotes: "It’s not the size that matters. It’s the rotation through complex vector space." Other ideas in this comic are familiar to me, but not this one. Any reference explaining this? Well, it's unfortunate that readers of heavily quantum mechanical blogs have never heard about the rotations through a complex vector space. All transformations – including the evolution in time – are represented by unitary (linear) transformations of the Hilbert space. That's also why Paul Dirac basically liked to use the term "transformation theory" for the bulk of the mathematical apparatus of quantum mechanics. Imagine how many times some wrong claims about quantum mechanics are repeated in the popular media. Yet many people who often read this stuff never get to hear about complex rotations of the Hilbert space. OK, I said it. Every transformation of a physical object (rotation, time evolution etc.) in classical physics may be represented by some rearrangement (permutation) of the points in the classical phase space. In quantum mechanics, the same kind of operation – a transformation of the physical system (or the whole Universe) – is always represented by a unitary operator/matrix \(U\) mapping the Hilbert space onto itself and obeying \(U^\dagger = U^{-1}\).
The complex coordinates of the vectors in the Hilbert space are the probability amplitudes and they're just being rotated by unitary (=complexified orthogonal, with the dagger) transformations. So all the possible laws of evolution in time are quantum mechanically represented by different unitary "rotations" of the same infinite-dimensional space and by different ways to choose coordinate systems – labeled by various things that can be measured – on that infinite-dimensional space. Jacob Aron: As a journalist infecting young minds with filthy quantum articles in magazines, this got a laugh out of me. But the fact that the comic is so long means it can’t quite sustain the joke, which handily illustrates the point of coming up with these incorrect approximations – sometimes you just don’t have room to be right! Too bad that the comic got a laugh out of a filthy journalist like you, Jacob. A more rigorous reaction would be crying through your missing teeth because for your bastardization of the youth, you deserve a proper thrashing. The cartoon may be long – because they wanted to include lots of useful and essential information. But the full length isn't necessary. Individual frames of the comic could be used as valuable sources of wisdom separately from others. An average frame in this comic is more valuable than an average filthy article that the likes of Jacob Aron are printing in pop-science journals these days. Jon K.: Very funny and enlightening cartoon. I hope it makes its rounds in the popular presses. But I wonder what would have happened if the kid started asking other questions like, “But I heard complex numbers could just be represented by two ordinary numbers?” or “What do you mean by ‘isolated’?” …Or something else that might have given the mom a little more pause. (I’m not sure if those questions would actually do that, as this mom seems pretty smart and quick to respond with enlightening answers.)
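A minimal numerical check of this claim – that a unitary "rotation" of the complex vector space preserves the total probability \(\sum_i |\psi_i|^2\) – might look like this. The random matrix, dimension, and seed are of course my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Build a random unitary via QR decomposition of a complex Gaussian matrix
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(A)

# Unitarity: U^dagger equals U^{-1}, i.e. U^dagger U = identity
unitarity_error = np.max(np.abs(U.conj().T @ U - np.eye(n)))

# The "rotation" preserves the norm of any normalized state vector
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
norm_after = np.linalg.norm(U @ psi)

print(unitarity_error, norm_after)   # ~0 and 1.0
```

Any law of time evolution is one such unitary matrix (infinite-dimensional in general); the choice of basis corresponds to the choice of which observables label the coordinate axes.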
Scott, what are the hard questions that this kid could have asked his Mom where she would not have been able to give him an answer that he’d be satisfied with? Scott says that the mother could answer all these questions – the cartoon would get even longer, however. In particular, by an "isolated system", they (and we) mean that the description of that system is in terms of a factor in the tensor product of Hilbert spaces. Job: [Quantum computers are hard because the calculation is different than the classical computation. ...] In the quantum world it’s difficult to break things down into steps. [...] It's simply not true. A quantum computation – e.g. one following Shor's algorithm or any similar algorithm people normally talk about – is composed of individual steps just like a classical calculation. It's just the character of the individual steps that is different for classical and quantum computers. For quantum computers, a single step is a unitary transformation discussed previously. But the collection of possible operations envisioned by realistic quantum computers is just as finite – and similarly large – as the collection of operations that a classical computer may do in one step. In fact, close cousins of the classical operations are usually operations done by quantum computers, too. Quantum computers have some additional operations that mix the qubits in quantum logical ways that have no classical counterparts. Another difference is that we must be careful not to make any measurement during the calculation. Only at the very end do we perform a measurement on a quantum computer. This is needed to allow the probability amplitudes to evolve in their new, characteristically quantum way – which is needed for the relative superiority of the quantum computer. Scott later says the same and correctly points out that the only "new" thing about the steps in a quantum computer is that they need harder mathematics to be understood – but any mathematics may be hard.
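The point that a quantum computation is a sequence of discrete unitary steps, with a measurement only at the very end, can be sketched with two standard textbook gates (this is a generic two-qubit example of my own, not Shor's algorithm):

```python
import numpy as np

# Gates acting on two qubits; basis ordering |00>, |01>, |10>, |11>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard, a 1-qubit gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # controlled-NOT: flips the
                 [0, 1, 0, 0],                 # second qubit iff the first
                 [0, 0, 0, 1],                 # qubit is |1>
                 [0, 0, 1, 0]])

psi = np.array([1, 0, 0, 0], dtype=complex)    # start in |00>

# Step 1: Hadamard on the first qubit; Step 2: CNOT. Each step is unitary,
# and no measurement happens in between.
psi = np.kron(H, I) @ psi
psi = CNOT @ psi

# Only now do we "measure": probabilities are the squared amplitudes
probs = np.abs(psi)**2
print(probs)   # [0.5, 0, 0, 0.5] -- the Bell state (|00> + |11>)/sqrt(2)
```

The individual steps are just as discrete as classical gate operations; it is only their mathematical character (unitary matrices acting on amplitudes) that is new.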
There exist all conceivable confusions about these things. Job says that quantum mechanics doesn't allow "steps". It does allow them. Computation – both classical and quantum computation – envisions computers, objects whose evolution may be pretty much divided into steps (i.e. for which the time is effectively discrete). More general objects in Nature – whether they are described by classical or quantum mechanics – don't allow such steps, i.e. a quantization of time. Despite deluded cranks' suggestions that time has to be discrete in quantum mechanics (or quantum gravity), no implication like that holds. Time is normally continuous, is effectively discrete when we build computers, but the discreteness of time is completely independent from the quantumness of the laws of physics. Jahan: Scott: Is tunneling, Heisenberg uncertainty, and action at a distance really consequences of complex amplitudes? I would think to predict tunneling you’d need to know Schrodinger’s equation, to understand the uncertainty principle you need to know \([x,p]\), and to get action at a distance you need entanglement. Can’t all those things exist independently of complex amplitudes? The laymen mostly hate the foundations of quantum mechanics – and they seem to hate a part of mathematics, complex numbers, equally fanatically. They hope that they're not needed. Someone will ban them etc. But complex numbers are fundamental in physics, especially in quantum mechanics. Yes, complex numbers are absolutely needed for Schrödinger's equation, the uncertainty principle, and a meaningful \([x,p]\), too. There's no "action at a distance" so let me not discuss Jahan's confusion about the entanglement – I've discussed it many times in the past. Schrödinger's equation says\[ i\hbar \frac{\partial}{\partial t} \ket{\psi(t)} = H \ket{\psi(t)}.
\] The time derivative of the state vector or wave function \(\psi(t)\) is obtained by the action of the Hermitian Hamiltonian operator \(H\) on the same state vector – but divided by the purely imaginary constant \(i\hbar\). Because this constant is imaginary – i.e. complex, not real – an initial wave function, even if it is real at one moment, is pretty much guaranteed to be complex at the following moment: the time derivative contains some purely imaginary pieces. Does the coefficient in the Schrödinger equation have to be imaginary? You bet. For an initial state that is an energy eigenstate i.e. obeys\[ H \ket\psi = E \ket\psi, \] the solution to any Schrödinger-like equation with any coefficient \(C\) on the left hand side unavoidably has the form\[ \ket{\psi(t)} = \exp(Et / C) \ket\psi \] You simply need \(C\) to be pure imaginary for the total probability \(|\psi|^2\) not to exponentially grow or decrease in time. Exactly when \(C\) is pure imaginary, the transformations you get for any time evolution will be unitary, and therefore preserving the total probability. So complex numbers are absolutely needed for Schrödinger's equation to work. By using the usual proof of the equivalence of the Schrödinger and Heisenberg picture, you may also prove that the same \(i\) is absolutely needed in the Heisenberg equations of motion. And the proofs of the equivalence of Heisenberg-or-Schrödinger's pictures with the Feynman integrals similarly tell you that you need an \(i\) in the integrand \(\exp(iS/\hbar)\) of the path integral, too. The commutator \([x,p]=xp-px\) that is nonzero and mathematically underlies the uncertainty principle absolutely requires complex numbers, too. The reason is simple. The operators \(x\) and \(p\) have to be Hermitian for their (measurable) eigenvalues to be real. So we have \(x=x^\dagger\) and \(p=p^\dagger\). But that's enough to see that\[ (xp-px)^\dagger = p^\dagger x^\dagger - x^\dagger p^\dagger = px-xp = -(xp-px). 
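The claim that the coefficient \(C\) must be imaginary for the total probability to be conserved is easy to check numerically. The values of \(E\) and \(t\) below are arbitrary illustrative picks of mine, with \(\hbar = 1\):

```python
import numpy as np

E, t = 2.0, 1.5   # an energy eigenvalue and a time (arbitrary units, hbar = 1)

# For an energy eigenstate, a Schrodinger-like equation with coefficient C
# on the left-hand side has the solution psi(t) = exp(E t / C) psi(0),
# so the norm of the state gets multiplied by |exp(E t / C)|.
norm = lambda C: abs(np.exp(E * t / C))

growing = norm(1.0)     # real C: the norm explodes exponentially
decaying = norm(-1.0)   # real C of the opposite sign: the norm decays
unitary = norm(1j)      # imaginary C: the norm stays exactly 1

print(growing, decaying, unitary)
```

Only the purely imaginary coefficient keeps \(|\psi|^2\) constant in time, which is the numerical face of the statement that the evolution is unitary exactly when \(C\) is pure imaginary.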
\] The Hermitian conjugate of \([x,p]\) is minus itself. We got the minus sign because the two terms in the difference got permuted and this permutation has arisen because \((AB)^\dagger = B^\dagger A^\dagger\) i.e. because (just like the transposition of matrices) the Hermitian conjugation forces you to read the factors from the right to the left. In other words, \([x,p]\) is unavoidably anti-Hermitian i.e. \(i\) times a Hermitian operator. The commutator \([x,p]\) is \(i\hbar\) i.e. \(i\) times a Hermitian operator proportional to the identity matrix. So if you write \(x\) as a real matrix, then you are guaranteed that the operator \(p\) must contain some complex (often pure imaginary) entries and vice versa. If both \(x,p\) were real matrices, their commutator would be a real matrix as well but \(i\) times a nonzero real \(c\)-number can't be a real matrix because of the damn \(i\). One can list several other simple arguments like that which prove that virtually nothing could work in quantum mechanics if you demanded that all coefficients in the equations are real. Complex numbers are absolutely paramount in quantum mechanics. You may childishly imagine that a complex number is a pair of two real numbers – which is a wrong way to think about complex numbers because a single complex number is really "more elementary or fundamental" than a real one (good calculus and representation theory experts in mathematics would surely agree with this physicists' view) – but in that case, the proofs above show that the pairing is absolutely unavoidable whenever you deal with the probability amplitudes. The beef of the complexity of amplitudes is unavoidable. We may also ask whether the complexity of the amplitudes implies the principles of quantum mechanics such as the uncertainty principle (the opposite implication than discussed above, basically). Well, strictly speaking, no.
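The anti-Hermiticity argument holds for any pair of Hermitian matrices, which a quick numerical check confirms. The random matrices, dimension, and seed here are my illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

def random_hermitian(n):
    """A random Hermitian matrix: A = A^dagger by construction."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

X = random_hermitian(n)
P = random_hermitian(n)
C = X @ P - P @ X          # the commutator [X, P]

# [X,P]^dagger = -[X,P]: the commutator is anti-Hermitian...
anti_hermitian = np.allclose(C.conj().T, -C)
# ...i.e. it equals i times a Hermitian matrix: -i*C is Hermitian
i_times_hermitian = np.allclose((-1j * C).conj().T, -1j * C)

print(anti_hermitian, i_times_hermitian)   # True True
```

So a nonzero \([x,p] = i\hbar\) forces the damn \(i\) into the formalism: if \(X\) were a real matrix, \(P\) would have to carry complex entries, exactly as argued above.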
But if you morally want "something new and useful or interesting to be done with the complex numbers", then yes, quantum mechanics with all its principles is the only new interesting set of ideas that makes the complexity helpful for anything. Probabilities can't be complex so they should better be calculated as the squared absolute values of the complex amplitudes. If that's so, the complex numbers – if they are physically present at all – should better be placed as off-diagonal elements of some matrices, starting from the density matrix (the diagonal elements are the real probabilities themselves), and the whole representation of observables as matrices or operators morally follows. ppnl: I liked the last frame in the cartoon where it takes a swipe at the connection between quantum mechanics and consciousness. Lubos Motl seems – as best as I can figure out – to be saying that a conscious observer is needed for quantum wave collapse. I tried to point out that there can be no difference between a mindless robot observing a quantum particle and a conscious scientist observing it. There can be no experiment that differentiates between the two. [...] If this is the best way you're capable of reading the last frame of the comic, then you're sadly a mental cripple, Mr ppnl. The frame says nothing of the sort. Instead, the frame follows frames saying that "if you don't talk to your child about QM, someone else will" and mocks a pop-science journal that says: Quantum computing and consciousness are both weird and therefore equivalent. This statement has absolutely nothing to do with my obviously correct statements about consciousness that are too hard for Mr ppnl. The mocked journal title is a variation of what Roger Penrose likes to say (a collapse in the brain is what allows the brain to calculate, and the brain is therefore working as a quantum computer, and that's why consciousness and quantum computation are equivalent).
So the text in the mocked journal is pretty much equivalent to the kind of silliness that Roger Penrose has been saying for decades – a point observed by another reader of Aaronson's blog, Fazal Majid. (Penrose wasn't the only person who made vaguely similar, silly identifications.) Moreover, it's framed in a way that I know from Lee Smolin. Lee Smolin once invited himself to Harvard (by painting himself as a victim of a sort, abusing myself, and forcing me to ask our secretary to invite him) and he taught us: I can't believe that M-theory is hard. Three-dimensional Chern-Simons theory is also simple and therefore three-dimensional Chern-Simons theory and M-theory must be equivalent. I laughed for quite some time after I heard it – as did others in the Society of Fellows and elsewhere. By that time, I had known for some 5 years that Smolin was a crackpot but only then did I begin to appreciate how amazingly stupid he was. When two things share an adjective, it's very far from a convincing argument (let alone a proof) that they're equivalent. To make things worse, the attribution of the adjective was controversial for one object and almost certainly wrong for the other. So Smolin applied a logical fallacy in a way that was mostly wrong by itself. Concerning this particular equivalence, there is really no consciousness during a quantum computation. I have already explained the reason in this very essay and many others. An observer is only conscious about an outcome of an observation after he has made an observation but that requires some decoherence to take place in his mind before that. But a quantum computation depends on the absence of any decoherence, so there's simply no consciousness "taking place" during the quantum calculation. The mocked statement isn't just asserting something that can't be properly proven. It's claiming something that may be disproven. It's the opposite of the truth.
My claims about "robots and humans" that ppnl tries to disagree with are correct, of course. Quantum mechanics doesn't tell you to treat humans and robots differently – after all, the definition of a "human" and a "robot" is a very complex and mostly ill-defined task. So whatever holds for biological humans may hold for machines and vice versa. So far so good, ppnl would agree. But quantum mechanics does treat and has to treat observers differently than the observed objects. So whether someone or something is a human or a machine, it's a physical system that evolves to complex superpositions of states up to the very moment when it's observed by an external agent, an observer. On the other hand, from the viewpoint of an observer (whether he's biological or a machine or whatever), the observation – the act of changing his knowledge about the world (or objects in it) – is always accompanied by (or inseparable from) the collapse of the wave function as well. In general, the description of objects (including humans and robots) by quantum mechanics depends on who is describing them, from whose observational viewpoint the description takes place. That agent, the observer, plays a special role in the description. In rather generic situations (often caricatured as the Wigner's friend thought experiment), two observers may use very different wave functions in the same situation. A key point is that the collapse of the wave function is always a subjective event. Heisenberg considered this novelty of quantum mechanics – the dependence of the description on the choice of the observer and, in this sense, "subjectivity" – "sort of obvious", a generalization of the positivist lessons about the relativity (inertial frame-dependence) of some quantities introduced by Einstein's special theory of relativity. 
The need to describe the evolution relatively to an observer who (subjectively) knows – independently from any theory – what is an observation and what isn't is absolutely essential for quantum mechanics. Quantum mechanics simply can't be applied without observers (or something equivalent, something that pre-knows what are the relevant questions that are being asked, observables that are being measured). Everyone who talks about quantum mechanics without observers is clueless: it's an oxymoron. If ppnl were posting this kind of untrue crap on my blog, he would be banned rather soon. Well, I am not just bragging about these credentials. ppnl: To be fair I’m not really sure what his position is since he banned me for disagreeing without discussing it. Right, ppnl, you're a piece of lying šit (which is more relevant for your status than some disagreement) and I am afraid that I would immediately make the same conclusion after several seconds even if you tried to obscure your identity by wearing 1,000 condoms on your f*cking stupid head. Scott Aaronson reasonably answers some stupid comments. One of the answers is: Scott: Niraj #27: Not sure if I understand the error. Had the OR/XOR distinction been relevant given the context, the mom could’ve added, “superposition doesn’t mean AND, and it doesn’t mean OR, and it doesn’t mean XOR either.” The need for this trivial clarification shows how utterly and hopelessly naive many readers who expect "answers about quantum mechanics" are. In the comment #27, Niraj basically conjectures that when the mother says that the superposition of wave functions means neither the classical "AND" nor the classical "OR", it must simply mean "XOR", the exclusive OR whose truth values are 0,1,1,0 for the four combinations of qubits. Holy crap. If the superposition of wave functions could be represented by simple classical logic and XOR, people would simply say it and they would stop talking about hard things, wouldn't they? 
After all, XOR is pretty much exactly as easy as OR or AND. So Niraj, do you really believe that the superposition is just XOR? Do you really believe that all the difficulty that the laymen face may be overcome by learning the difference between OR and XOR, something that schoolkids should understand very quickly? I can't believe that someone doesn't see how utterly idiotic such an expectation is. Niraj and similar people can't even imagine that something about modern physics could require higher intelligence and more abstract and deeper thinking than the thinking needed to learn the table of four values of the XOR operator. It's amazing. It seems obvious to me that the people who want the intellectual requirements to be "bounded from above" in this way should be honestly told that they're just hopelessly stupid – closer to apes than to experts in quantum mechanics – and attempts to teach modern physics to them should be immediately stopped because they're a complete waste of time. But we still live in the era of incredible hypocrisy and political correctness so instead of hearing that they're hopeless idiots who should enjoy their practical lives and stop trying to become scientists, the likes of Niraj are used to compliments and they are often told that they're de facto scientists, too. I am so fed up with these omnipresent lies!
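The point that a superposition is not any classical logical combination of 0 and 1 can be made concrete with the standard Hadamard interference example (my illustration, not from the comic): applying the gate twice returns \(|0\rangle\) with certainty because the amplitudes for \(|1\rangle\) cancel, whereas any classical AND/OR/XOR-style mixing of definite bits, or a 50/50 coin, could never do that.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

psi = np.array([1, 0], dtype=complex)   # start in |0>
psi = H @ psi   # an equal superposition: neither 0 AND 1, nor 0 OR/XOR 1
psi = H @ psi   # apply H again: the two amplitudes for |1> cancel exactly

probs = np.abs(psi)**2
print(probs)    # [1, 0] -- certainty of |0>, unlike a classical 50/50 coin
```

A classical randomizer applied twice to a bit would leave it 50/50; the amplitudes instead interfere destructively and restore certainty, which no truth table of four 0/1 values can reproduce.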
Quantum Approaches to Consciousness First published Tue Nov 30, 2004; substantive revision Thu Apr 16, 2020 It is widely accepted that consciousness or, more generally, mental activity is in some way correlated to the behavior of the material brain. Since quantum theory is the most fundamental theory of matter that is currently available, it is a legitimate question to ask whether quantum theory can help us to understand consciousness. Several approaches answering this question affirmatively, proposed in recent decades, will be surveyed. There are three basic types of corresponding approaches: (1) consciousness is a manifestation of quantum processes in the brain, (2) quantum concepts are used to understand consciousness without referring to brain activity, and (3) matter and consciousness are regarded as dual aspects of one underlying reality. Major contemporary variants of these quantum-inspired approaches will be discussed. It will be pointed out that they make different epistemological assumptions and use quantum theory in different ways. For each of the approaches discussed, both problematic and promising features will be highlighted. 1. Introduction The problem of how mind and matter are related to each other has many facets, and it can be approached from many different starting points. The historically leading disciplines in this respect are philosophy and psychology, which were later joined by behavioral science, cognitive science and neuroscience. In addition, the physics of complex systems and quantum physics have played stimulating roles in the discussion from their beginnings. As regards the issue of complexity, this is evident: the brain is one of the most complex systems we know. The study of neural networks, their relation to the operation of single neurons and other important topics do and will profit a lot from complex systems approaches. 
As regards quantum physics, there can be no reasonable doubt that quantum events occur and are efficacious in the brain as elsewhere in the material world—including biological systems.[1] But it is controversial whether these events are efficacious and relevant for those aspects of brain activity that are correlated with mental activity. The original motivation in the early 20th century for relating quantum theory to consciousness was essentially philosophical. It is fairly plausible that conscious free decisions (“free will”) are problematic in a perfectly deterministic world,[2] so quantum randomness might indeed open up novel possibilities for free will. (On the other hand, randomness is problematic for goal-directed volition!) Quantum theory introduced an element of randomness standing out against the deterministic worldview preceding it, in which randomness expresses our ignorance of a more detailed description (as in statistical mechanics). In sharp contrast to such epistemic randomness, quantum randomness in processes such as the spontaneous emission of light, radioactive decay, or other examples has been considered a fundamental feature of nature, independent of our ignorance or knowledge. To be precise, this feature refers to individual quantum events, whereas the behavior of ensembles of such events is statistically determined. The indeterminism of individual quantum events is constrained by statistical laws. Other features of quantum theory, which became attractive in discussing issues of consciousness, were the concepts of complementarity and entanglement. Pioneers of quantum physics such as Planck, Bohr, Schrödinger, Pauli (and others) emphasized the various possible roles of quantum theory in reconsidering the old conflict between physical determinism and conscious free will. For informative overviews with different focal points see e.g., Squires (1990), Kane (1996), Butterfield (1998), Suarez and Adams (2013). 2.
Philosophical Background Assumptions Variants of the dichotomy between mind and matter range from their fundamental distinction at a primordial level of description to the emergence of mind (consciousness) from the brain as an extremely sophisticated and highly developed material system. Informative overviews can be found in Popper and Eccles (1977), Chalmers (1996), and Pauen (2001). One important aspect of all discussions about the relation between mind and matter is the distinction between descriptive and explanatory approaches. For instance, correlation is a descriptive term with empirical relevance, while causation is an explanatory term associated with theoretical attempts to understand correlations. Causation implies correlations between cause and effect, but this does not always apply the other way around: correlations between two systems can result from a common cause in their history rather than from a direct causal interaction. In the fundamental sciences, one typically speaks of causal relations in terms of interactions. In physics, for instance, there are four fundamental kinds of interactions (electromagnetic, weak, strong, gravitational) which serve to explain the correlations that are observed in physical systems. As regards the mind-matter problem, the situation is more difficult. Far from a theoretical understanding in this field, the existing body of knowledge essentially consists of empirical correlations between material and mental states. These correlations are descriptive, not explanatory; they are not causally conditioned. It is (for some purposes) interesting to know that particular brain areas are activated during particular mental activities; but this does, of course, not explain why they are. Thus, it would be premature to talk about mind-matter interactions in the sense of causal relations. For the sake of terminological clarity, the neutral notion of relations between mind and matter will be used in this article. 
In many discussions of material [ma] brain states and mental [me] states of consciousness, the relations between them are conceived in a direct way (A): \[ [\mathbf{ma}] \substack{\leftarrow \\ \rightarrow} [\mathbf{me}] \] This illustrates a minimal framework to study reduction, supervenience, or emergence relations (Kim 1998; Stephan 1999) which can yield both monistic and dualistic pictures. For instance, there is the influential stance of strong reduction, stating that all mental states and properties can be reduced to the material domain or even to physics (physicalism).[3] This point of view claims that it is both necessary and sufficient to explore and understand the material domain, e.g., the brain, in order to understand the mental domain, e.g., consciousness. It leads to a monistic picture, in which any need to discuss mental states is eliminated right away or at least considered as epiphenomenal. While mind-brain correlations are still legitimate though causally irrelevant from an epiphenomenalist point of view, eliminative materialism renders even correlations irrelevant. Much discussed counterarguments against the validity of such strong reductionist approaches are qualia arguments, which emphasize the impossibility for physicalist accounts to properly incorporate the quality of the subjective experience of a mental state, the “what it is like to be” (Nagel 1974) in that state. This leads to an explanatory gap between third-person and first-person accounts for which Chalmers (1995) has coined the notion of the “hard problem of consciousness”. Another, less discussed counterargument is that the physical domain itself is not causally closed. Any solution of fundamental equations of motion (be it experimentally, numerically, or analytically) requires fixing boundary and initial conditions which are not given by the fundamental laws of nature (Primas 2002).
This causal gap applies to classical physics as well as quantum physics, where a basic indeterminacy due to collapse makes it even more challenging. A third class of counterarguments refers to the difficulty of including notions of temporal present and nowness in a physical description (Franck 2004, 2008; Primas 2017). However, relations between mental and material states can also be conceived in a non-reductive fashion, e.g. in terms of emergence relations (Stephan 1999). Mental states and/or properties can be considered as emergent if the material brain is not necessary or not sufficient to explore and understand them.[4] This leads to a dualistic picture (less radical and more plausible than Cartesian dualism) in which residua remain if one attempts to reduce the mental to the material. Within a dualistic scheme of thinking, it becomes almost inevitable to discuss the question of causal influence between mental and material states. In particular, the causal efficacy of mental states upon brain states (“downward causation”) has recently attracted growing interest (Velmans, 2002; Ellis et al. 2011).[5] The most popular approaches along those lines as far as quantum behavior of the brain is concerned will be discussed in Section 3, “Quantum Brain”. It was an old idea of Bohr’s that central conceptual features of quantum theory, such as complementarity, are also of pivotal significance outside the domain of physics. In fact, Bohr became familiar with complementarity through the psychologist Edgar Rubin and, more indirectly, William James (Holton 1970) and immediately saw its potential for quantum physics. Although Bohr was also convinced of the extraphysical relevance of complementarity, he never elaborated this idea in concrete detail, and for a long time after him no one else did so either. This situation has changed: there are now a number of research programs generalizing key notions of quantum theory in a way that makes them applicable beyond physics.
Of particular interest for consciousness studies are approaches that have been developed in order to pick up Bohr’s proposal with respect to psychology and cognitive science. The first steps in this direction were made by the group of Aerts in the early 1990s (Aerts et al. 1993), using non-distributive propositional lattices to address quantum-like behavior in non-classical systems. Alternative approaches have been initiated by Khrennikov (1999), focusing on non-classical probabilities, and Atmanspacher et al. (2002), outlining an algebraic framework with non-commuting operations. The recent development of ideas within this framework of thinking is addressed in Section 4, “Quantum Mind”. Other lines of thinking are due to Primas (2007, 2017), addressing complementarity with partial Boolean algebras, and Filk and von Müller (2008), indicating links between basic conceptual categories in quantum physics and psychology. As an alternative to (A), it is possible to conceive mind-matter relations indirectly (B), via a third category: \[\begin{gather} [\mathbf{ma}] \quad [\mathbf{me}] \\ \searrow\nwarrow \swarrow\nearrow \\ [\mathbf{mame}] \end{gather}\] This third category, here denoted [mame], is often regarded as being neutral with respect to the distinction between [ma] and [me], i.e., psychophysically neutral. In scenario (B), issues of reduction and emergence concern the relation between the unseparated “background reality” [mame] and the distinguished aspects [ma] and [me]. Such “dual aspect” frameworks of thinking have received increasing attention in contemporary discussion, and they have a long tradition reaching back as far as to Spinoza. In the early days of psychophysics, Fechner (1861) and Wundt (1911) advocated related views. Whitehead, the modern pioneer of process philosophy, referred to mental and physical poles of “actual occasions”, which themselves transcend their bipolar appearances (Whitehead 1978).
Many approaches in the tradition of Feigl (1967) and Smart (1963), called “identity theories”, conceive mental and material states as essentially identical “central states”, yet considered from different perspectives. Other variants of this idea have been suggested by Jung and Pauli (1955) [see also Meier (2001)], involving Jung’s conception of a psychophysically neutral, archetypal order, or by Bohm and Hiley (Bohm 1990; Bohm and Hiley 1993; Hiley 2001), referring to an implicate order which unfolds into the different explicate domains of the mental and the material. They will be discussed in more detail in Section 5, “Brain and Mind as Dual Aspects”. Velmans (2002, 2009) has developed a similar approach, backed up with empirical material from psychology, and Strawson (2003) has proposed a “real materialism” which uses a closely related scheme. Another proponent of dual-aspect thinking is Chalmers (1996), who considers the possibility that the underlying, psychophysically neutral level of description could be best characterized in terms of information. Before proceeding further, it should be emphasized that many present-day approaches prefer to distinguish between first-person and third-person perspectives rather than mental and material states. This terminology serves to highlight the discrepancy between immediate conscious experiences (“qualia”) and their description, be it behavioral, neural, or biophysical. The notion of the “hard problem” of consciousness research refers to bridging the gap between first-person experience and third-person accounts of it. In the present contribution, mental conscious states are implicitly assumed to be related to first-person experience. This does not mean, however, that the problem of how to define consciousness precisely is considered as resolved. Ultimately, it will be (at least) as difficult to define a mental state in rigorous terms as it is to define a material state. 3. 
Quantum Brain

In this section, some popular approaches for applying quantum theory to brain states will be surveyed and compared, most of them speculative, with varying degrees of elaboration and viability. Section 3.1 addresses three different neurophysiological levels of description, to which particular quantum approaches refer. Subsequently, the individual approaches themselves will be discussed — Section 3.2: Stapp, Section 3.3: Vitiello and Freeman, Section 3.4: Beck and Eccles, Section 3.5: Penrose and Hameroff. In the following, (some of) the better known and partly worked out approaches that use concepts of quantum theory for inquiries into the nature of consciousness will be presented and discussed. For this purpose, the philosophical distinctions A/B (Section 2) and the neurophysiological distinctions addressed in Section 3.1 will serve as guidelines to classify the respective quantum approaches in a systematic way. However, some preliminary qualifications concerning different ways to use quantum theory are in order. There are quite a number of accounts discussing quantum theory in relation to consciousness that adopt basic ideas of quantum theory in a purely metaphorical manner. Quantum theoretical terms such as entanglement, superposition, collapse, complementarity, and others are used without specific reference to how they are defined precisely and how they are applicable to specific situations. For instance, conscious acts are just postulated to be interpretable somehow analogously to physical acts of measurement, or correlations in psychological systems are just postulated to be interpretable somehow analogously to physical entanglement. Such accounts may provide fascinating science fiction, and they may even be important to inspire nuclei of ideas to be worked out in detail. But unless such detailed work leads beyond vague metaphors and analogies, they do not yet represent scientific progress.
Approaches falling into this category will not be discussed in this contribution. A second category includes approaches that use the status quo of present-day quantum theory to describe neurophysiological and/or neuropsychological processes. Among these approaches, the one with the longest history was initiated by von Neumann in the 1930s, later taken up by Wigner, and currently championed by Stapp. It can be roughly characterized as the proposal to consider intentional conscious acts as intrinsically correlated with physical state reductions. Another fairly early idea dating back to Ricciardi and Umezawa in the 1960s is to treat mental states, particularly memory states, in terms of vacuum states of quantum fields. A prominent proponent of this approach at present is Vitiello. Finally, there is the idea suggested by Beck and Eccles in the 1990s, according to which quantum mechanical processes, relevant for the description of exocytosis at the synaptic cleft, can be influenced by mental intentions. The third category refers to further developments or generalizations of present-day quantum theory. An obvious candidate in this respect is the proposal by Penrose to relate elementary conscious acts to gravitation-induced reductions of quantum states. Ultimately, this requires the framework of a future theory of quantum gravity which is far from having been developed. Together with Penrose, Hameroff has argued that microtubuli might be the right place to look for such state reductions.

3.1 Neurophysiological Levels of Description

A mental system can be in many different conscious, intentional, phenomenal mental states. In a hypothetical state space, a sequence of such states forms a trajectory representing what is often called the stream of consciousness. Since different subsets of the state space are typically associated with different stability properties, a mental state can be assumed to be more or less stable, depending on its position in the state space.
Stable states are distinguished by a residence time at that position longer than that of metastable or unstable states. If a mental state is stable with respect to perturbations, it “activates” a mental representation encoding a content that is consciously perceived.

Neural Assemblies

Moving from this purely psychological, or cognitive, description to its neurophysiological counterpart leads us to the question: What is the neural correlate of a mental representation? According to standard accounts (cf. Noë and Thompson (2004) for discussion), mental representations are correlated with the activity of neuronal assemblies, i.e., ensembles of several thousands of coupled neurons. The neural correlate of a mental representation can be characterized by the fact that the connectivities, or couplings, among those neurons form an assembly confined with respect to its environment, to which connectivities are weaker than within the assembly. The neural correlate of a mental representation is activated if the neurons forming the assembly operate more actively (e.g., produce higher firing rates) than in their default mode.

Figure 1. Balance between inhibitory and excitatory connections among neurons.

In order to achieve a stable operation of an activated neuronal assembly, there must be a subtle balance between inhibitory and excitatory connections among neurons (cf. Figure 1). If the transfer function of individual neurons is strictly monotonic, i.e., increasing input leads to increasing output, assemblies are difficult to stabilize. For this reason, results establishing a non-monotonic transfer function with a maximal output at intermediate input are of high significance for the modeling of neuronal assemblies (Kuhn et al. 2004). For instance, network models using lattices of coupled maps with quadratic maximum (Kaneko and Tsuda 2000) are paradigmatic examples of such behavior.
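The kind of model referred to here can be sketched in a few lines of code. The following is a minimal coupled map lattice whose local map has a quadratic maximum, in the spirit of the models Kaneko and Tsuda discuss; the parameter values (map coefficient, coupling strength, lattice size) are illustrative choices, not values from the literature.

```python
# Minimal coupled map lattice with a quadratic-maximum transfer function.
# All parameter values are illustrative assumptions.
import random

def f(x, a=1.7):
    """Local transfer function with a quadratic maximum: non-monotonic,
    with maximal output at intermediate input."""
    return 1.0 - a * x * x

def step(lattice, eps=0.1):
    """One synchronous update with diffusive nearest-neighbour coupling
    on a ring of sites."""
    n = len(lattice)
    fx = [f(x) for x in lattice]
    return [(1.0 - eps) * fx[i] + 0.5 * eps * (fx[(i - 1) % n] + fx[(i + 1) % n])
            for i in range(n)]

def simulate(n=50, steps=200, eps=0.1, seed=1):
    rng = random.Random(seed)
    lattice = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    for _ in range(steps):
        lattice = step(lattice, eps)
    return lattice

final_state = simulate()
```

Because the local map sends the interval [-1, 1] into itself and the coupling is a convex combination of map values, the dynamics stays bounded while individual sites can behave chaotically, which is the qualitative feature such models are used to exhibit.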
These and other familiar models of neuronal assemblies (for an overview see Anderson and Rosenfeld 1988) are mostly formulated in a way not invoking well-defined elements of quantum theory. An explicit exception is the approach by Umezawa, Vitiello and others (see Section 3.3).

Single Neurons and Synapses

The fact that neuronal assemblies are mostly described in terms of classical behavior does not rule out that classically undescribable quantum effects may be significant if one focuses on individual constituents of assemblies, i.e., single neurons or interfaces between them. These interfaces, through which the signals between neurons propagate, are called synapses. There are electrical and chemical synapses, depending on whether they transmit a signal electrically or chemically. At electrical synapses, the current generated by the action potential at the presynaptic neuron flows directly into the postsynaptic cell, which is physically connected to the presynaptic terminal by a so-called gap junction. At chemical synapses, there is a cleft between pre- and postsynaptic cell. In order to propagate a signal, a chemical transmitter (glutamate) is released at the presynaptic terminal. This release process is called exocytosis. The transmitter diffuses across the synaptic cleft and binds to receptors at the postsynaptic membrane, thus opening an ion channel (Kandel et al. 2000, part III; see Fig. 2). Chemical transmission is slower than electric transmission.

Figure 2. Release of neurotransmitters at the synaptic cleft (exocytosis).

A model developed by Beck and Eccles applies concrete quantum mechanical features to describe details of the process of exocytosis. Their model proposes that quantum processes are relevant for exocytosis and, moreover, are tightly related to states of consciousness. This will be discussed in more detail in Section 3.4.
At this point, another approach developed by Flohr (2000) should be mentioned, for which chemical synapses with a specific type of receptors, so-called NMDA receptors,[6] are of paramount significance. Briefly, Flohr observes that the specific plasticity of NMDA receptors is a necessary condition for the formation of extended stable neuronal assemblies correlated to (higher-order) mental representations which he identifies with conscious states. Moreover, he indicates a number of mechanisms caused by anaesthetic agents, which block NMDA receptors and consequently lead to a loss of consciousness. Flohr’s approach is physicalistic and reductive, and it is entirely independent of any specific quantum ideas. The lowest neurophysiological level, at which quantum processes have been proposed as a correlate to consciousness, is the level at which the interior of single neurons is considered: their cytoskeleton. It consists of protein networks essentially made up of two kinds of structures, neurofilaments and microtubuli (Fig. 3, left), which are essential for various transport processes within neurons (as well as other cells). Microtubuli are long polymers usually constructed of 13 longitudinal α and β-tubulin dimers arranged in a tubular array with an outside diameter of about 25 nm (Fig. 3, right). For more details see Kandel et al. (2000), Chap. II.4.

Figure 3. (left) Microtubuli and neurofilaments; the width of the figure corresponds to approximately 700 nm. (right) Tubulin dimers, consisting of α- and β-monomers, constituting a microtubule.

The tubulins in microtubuli are the substrate which, in Hameroff’s proposal, is used to embed Penrose’s theoretical framework neurophysiologically. As will be discussed in more detail in Section 3.5, tubulin states are assumed to depend on quantum events, so that quantum coherence among different tubulins is possible.
Further, a crucial thesis in the scenario of Penrose and Hameroff is that the (gravitation-induced) collapse of such coherent tubulin states corresponds to elementary acts of consciousness.

3.2 Stapp: Quantum State Reductions and Conscious Acts

The act of measurement is a crucial aspect in the framework of quantum theory that has been the subject of controversy for more than eight decades now. In his monograph on the mathematical foundations of quantum mechanics, von Neumann (1955, Chap. V.1) introduced, in an ad hoc manner, the projection postulate as a mathematical tool for describing measurement in terms of a discontinuous, non-causal, instantaneous (irreversible) act given by (1) the transition of a quantum state to an eigenstate b_j of the measured observable B (with a certain probability). This transition is often called the collapse or reduction of the wavefunction, as opposed to (2) the continuous, unitary (reversible) evolution of a system according to the Schrödinger equation. In Chapter VI, von Neumann (1955) discussed the conceptual distinction between observed and observing system. In this context, he applied (1) and (2) to the general situation of a measured object system (I), a measuring instrument (II), and (the brain of) a human observer (III). His conclusion was that it makes no difference for the result of measurements on (I) whether the boundary between observed and observing system is posited between (I) and (II & III) or between (I & II) and (III). As a consequence, it is inessential whether a detector or the human brain is ultimately referred to as the “observer”.[7] By contrast to von Neumann’s fairly cautious stance, London and Bauer (1939) went further and proposed that it is indeed human consciousness which completes the quantum measurement process (see Jammer (1974, Sec. 11.3) or Shimony (1963) for a detailed account).
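Von Neumann's two kinds of dynamics, the projection (1) and the unitary evolution (2), can be made concrete in a small numerical sketch. This is a generic toy example, not anything specific to Stapp's model: the 2x2 observable, Hamiltonian and initial state are arbitrary choices.

```python
# Toy illustration of von Neumann's (1) projection postulate and
# (2) unitary Schrödinger evolution, for a two-dimensional system.
import numpy as np

# Arbitrary Hermitian observable B and normalized initial state psi.
B = np.array([[1.0, 0.5], [0.5, -1.0]])
psi = np.array([1.0, 1.0]) / np.sqrt(2)

# (1) Projection postulate: the state jumps to an eigenstate b_j of B
# with Born probability p_j = |<b_j|psi>|^2 (discontinuous, irreversible).
eigvals, eigvecs = np.linalg.eigh(B)
probs = np.abs(eigvecs.conj().T @ psi) ** 2
collapsed = eigvecs[:, np.argmax(probs)]   # e.g., the most probable outcome

# (2) Unitary evolution: U = exp(-iHt) is continuous and reversible,
# and preserves the norm of the state (here H = B as a toy Hamiltonian).
w, V = np.linalg.eigh(B)
t = 0.3
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
psi_t = U @ psi
```

The sketch makes the contrast explicit: the Born probabilities in (1) sum to one and the post-measurement state is an eigenstate of B, while the evolution in (2) is norm-preserving and invertible.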
In this way, they attributed a crucial role to consciousness in understanding quantum measurement in terms of an update of the observer’s knowledge. In the 1960s, Wigner (1967) radicalized this proposal[8] by suggesting an impact of consciousness on the physical state of the measured system, not only an impact on observer knowledge. In order to describe measurement as a real dynamical process generating irreversible facts, Wigner called for some nonlinear modification of (2) to replace von Neumann’s projection (1).[9] Since the 1980s, Stapp has developed his own point of view on the background of von Neumann and Wigner. In particular, he tries to understand specific features of consciousness in relation to quantum theory. Inspired by von Neumann, Stapp uses the freedom to place the interface between observed and observing system and locates it in the observer’s brain. He does not suggest any formal modifications to present-day quantum theory (in particular, he stays essentially within the “orthodox” Hilbert space representation), but adds major interpretational extensions, in particular with respect to a detailed ontological framework. In his earlier work, Stapp (1993) started with Heisenberg’s distinction between the potential and the actual (Heisenberg 1958), thereby taking a decisive step beyond the operational Copenhagen interpretation of quantum mechanics. While Heisenberg’s notion of the actual is related to a measured event in the sense of the Copenhagen interpretation, his notion of the potential, of a tendency, relates to the situation before measurement, which expresses the idea of a reality independent of measurement.[10] Immediately after its actualization, each event holds the tendency for the impending actualization of another, subsequent actual event. Therefore, events are by definition ambiguous, combining an actualized aspect with a tendency aspect. With respect to their actualized aspect, Stapp’s essential move is to “attach to each Heisenberg actual event an experiential aspect.
The latter is called the feel of this event, and it can be considered to be the aspect of the actual event that gives it its status as an intrinsic actuality” (Stapp 1993, p. 149). With respect to their tendency aspect, it is tempting to understand events in terms of scheme (B) of Section 2. This is related to Whitehead’s ontology, in which mental and physical poles of so-called “actual occasions” are considered as psychological and physical aspects of reality. The potential antecedents of actual occasions are psychophysically neutral and refer to a mode of existence at which mind and matter are unseparated. This is expressed, for instance, by Stapp’s notion of a “hybrid ontology” with “both idea-like and matter-like qualities” (Stapp 1999, 159). Similarities with a dual-aspect approach (B) (cf. Section 5) are evident. In an interview of 2006, Stapp (2006) specifies some ontological features of his approach with respect to Whitehead’s process thinking, where actual occasions rather than matter or mind are fundamental elements of reality. They are conceived as based on a processual rather than a substantial ontology (see the entry on process philosophy). Stapp relates the fundamentally processual nature of actual occasions to both the physical act of state reduction and the correlated psychological intentional act. Another significant aspect of his approach is the possibility that “conscious intentions of a human being can influence the activities of his brain” (Stapp 1999, p. 153). Different from the possibly misleading notion of a direct interaction, suggesting an interpretation in terms of scheme (A) of Section 2, he describes this feature in a more subtle manner. The requirement that the mental and material outcomes of an actual occasion must match, i.e. be correlated, acts as a constraint on the way in which these outcomes are formed within the actual occasion (cf. Stapp 2006). 
The notion of interaction is thus replaced by the notion of a constraint set by mind-matter correlations (see also Stapp 2007). At a level at which conscious mental states and material brain states are distinguished, each conscious experience, according to Stapp (1999, p. 153), has as its physical counterpart a quantum state reduction actualizing “the pattern of activity that is sometimes called the neural correlate of that conscious experience”. This pattern of activity may encode an intention and, thus, represent a “template for action”. An intentional decision for an action, preceding the action itself, is then the key for anything like free will in this picture. Stapp argues that the mental effort, i.e. attention devoted to such intentional acts, can protract the lifetime of the neuronal assemblies that represent the templates for action due to quantum Zeno-type effects. Concerning the neurophysiological implementation of this idea, intentional mental states are assumed to correspond to reductions of superposition states of neuronal assemblies. Additional commentary concerning the concepts of attention and intention in relation to James’ idea of a holistic stream of consciousness (James 1950 [1890]) was given by Stapp (1999). For further progress, it will be mandatory to develop a coherent formal framework for this approach and elaborate on concrete details. For instance, it is not yet worked out precisely how quantum superpositions and their collapses are supposed to occur in neural correlates of conscious events. Some indications are outlined by Schwartz et al. (2005). With these desiderata for future work, the overall conception is conservative insofar as the physical formalism remains unchanged. This is why Stapp insisted for years that his approach does not change what he calls “orthodox” quantum mechanics, which is essentially encoded in the statistical formulation by von Neumann (1955). 
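The quantum Zeno-type effect that Stapp invokes can be illustrated with a standard toy calculation for a two-level system: splitting a fixed total rotation away from the initial state into many intervals, each terminated by a projective measurement, keeps the survival probability of the initial state close to one. The rotation angle and measurement counts below are arbitrary illustrative choices, not parameters of Stapp's model.

```python
# Toy illustration of the quantum Zeno effect: frequent projective
# measurements inhibit the decay of a two-level system's initial state.
import math

def survival_probability(theta, n_measurements):
    """Probability of still finding the initial state when a total
    rotation by angle theta is interrupted by n projective measurements.
    Each interval contributes a factor cos^2(theta/n)."""
    per_step = math.cos(theta / n_measurements) ** 2
    return per_step ** n_measurements

theta = math.pi / 2                       # a rotation that would fully
p1 = survival_probability(theta, 1)       # depopulate the initial state
p100 = survival_probability(theta, 100)   # frequent measurements: p near 1
```

As the number of measurements grows, the survival probability approaches one, which is the sense in which sustained "measurement-like" attention could, on Stapp's reading, protract the lifetime of a template for action.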
From the point of view of standard present-day quantum physics, however, it is certainly unorthodox to include the mental state of observers in the theory. Although it is true that quantum measurement is not yet finally understood in terms of physical theory, introducing mental states as the essential missing link is highly speculative from a contemporary perspective. This link is a radical conceptual move. In what Stapp now denotes as a “semi-orthodox” approach (Stapp 2015), he proposes that the blind-chance kind of randomness of individual quantum events (“nature’s choices”) be reconceived as “not actually random but positively or negatively biased by the positive or negative values in the minds of the observers that are actualized by its (nature’s) choices” (p. 187). This hypothesis leads to mental influences on quantum physical processes, which are widely unknown territory at present.

3.3 Vitiello and Freeman: Quantum Field Theory of Brain States

In the 1960s, Ricciardi and Umezawa (1967) suggested utilizing the formalism of quantum field theory to describe brain states, with particular emphasis on memory. The basic idea is to conceive of memory states in terms of states of many-particle systems, as inequivalent representations of vacuum states of quantum fields.[11] This proposal has gone through several refinements (e.g., Stuart et al. 1978, 1979; Jibu and Yasue 1995). Major recent progress has been achieved by including effects of dissipation, chaos, fractals and quantum noise (Vitiello 1995; Pessa and Vitiello 2003; Vitiello 2012). For readable nontechnical accounts of the approach in its present form, embedded in quantum field theory as of today, see Vitiello (2001, 2002). Quantum field theory (see the entry on quantum field theory) deals with systems with infinitely many degrees of freedom.
For such systems, the algebra of observables that results from imposing canonical commutation relations admits of multiple Hilbert-space representations that are not unitarily equivalent to each other. This differs from the case of standard quantum mechanics, which deals with systems with finitely many degrees of freedom. For such systems, the corresponding algebra of observables admits of unitarily equivalent Hilbert-space representations. In the infinite-dimensional case, spontaneous symmetry breaking selects among the inequivalent representations and generates long-range correlations, mediated by massless Goldstone bosons. These correlations are responsible for the emergence of ordered patterns. Unlike in standard thermal systems, a large number of bosons can be condensed in an ordered state in a highly stable fashion. Roughly speaking, this provides a quantum field theoretical derivation of ordered states in many-body systems described in terms of statistical physics. In the proposal by Umezawa these dynamically ordered states represent coherent activity in neuronal assemblies. Umezawa’s proposal addresses the brain as a many-particle system as a whole, where the “particles” are more or less neurons. In the language of Section 3.1, this refers to the level of neuronal assemblies, which correlate directly with mental activity. Another merit of the quantum field theory approach is that it avoids the restrictions of standard quantum mechanics in a formally sound way. Conceptually speaking, many of the pioneering presentations of the proposal nevertheless confused mental and material states (and their properties). This has been clarified by Freeman and Vitiello (2008): the model “describes the brain, not mental states.” For a corresponding description of brain states, Freeman and Vitiello (2006, 2008, 2010) studied neurobiologically relevant observables such as electric and magnetic field amplitudes and neurotransmitter concentration.
They found evidence for non-equilibrium analogs of phase transitions (Vitiello 2015) and power-law distributions of spectral energy densities of electrocorticograms (Freeman and Vitiello 2010, Freeman and Quian Quiroga 2013). All these observables are classical, so that neurons, glia cells, “and other physiological units are not quantum objects in the many-body model of brain” (Freeman and Vitiello 2008). However, Vitiello (2012) also points out that the emergence of (self-similar, fractal) power-law distributions in general is intimately related to dissipative quantum coherent states (see also recent developments of the Penrose-Hameroff scenario, Section 3.5). The overall conclusion is that the application of quantum field theory describes why and how classical behavior emerges at the level of brain activity considered. The relevant brain states themselves are viewed as classical states. Similar to a classical thermodynamical description arising from quantum statistical mechanics, the idea is to identify different regimes of stable behavior (phases, attractors) and transitions between them. This way, quantum field theory provides formal elements from which a standard classical description of brain activity can be inferred, and this is its main role in large parts of the model. Only in their last joint paper did Freeman and Vitiello (2016) envision a way in which the mental can be explicitly included. For a recent review including technical background see Sabbadini and Vitiello (2019).

3.4 Beck and Eccles: Quantum Mechanics at the Synaptic Cleft

Probably the most concrete suggestion of how quantum mechanics in its present-day appearance can play a role in brain processes is due to Beck and Eccles (1992), later refined by Beck (2001). It refers to particular mechanisms of information transfer at the synaptic cleft.
However, ways in which these quantum processes might be relevant for mental activity, and in which their interactions with mental states are conceived, remain unclarified to the present day.[12] As presented in Section 3.1, the information flow between neurons in chemical synapses is initiated by the release of transmitters in the presynaptic terminal. This process is called exocytosis, and it is triggered by an arriving nerve impulse with some small probability. In order to describe the trigger mechanism in a statistical way, thermodynamics or quantum mechanics can be invoked. A look at the corresponding energy regimes shows (Beck and Eccles 1992) that quantum processes are distinguishable from thermal processes for energies higher than 10^-2 eV (at room temperature). Assuming a typical length scale for biological microsites of the order of several nanometers, an effective mass below 10 electron masses is sufficient to ensure that quantum processes prevail over thermal processes. The upper limit of the time scale of such processes in the quantum regime is of the order of 10^-12 sec. This is significantly shorter than the time scale of cellular processes, which is 10^-9 sec and longer. The appreciable difference between the two time scales makes it possible to treat the corresponding processes as decoupled from one another. The detailed trigger mechanism proposed by Beck and Eccles (1992) is based on the quantum concept of quasi-particles, reflecting the particle aspect of a collective mode. Skipping the details of the picture, the proposed trigger mechanism refers to tunneling processes of two-state quasi-particles, resulting in state collapses. It yields a probability of exocytosis in the range between 0 and 0.7, in agreement with empirical observations. Using a theoretical framework developed earlier (Marcus 1956; Jortner 1976), the quantum trigger can be concretely understood in terms of electron transfer between biomolecules.
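The orders of magnitude quoted above can be checked with a short back-of-the-envelope calculation. The criterion used below, that quantum behavior prevails when the thermal de Broglie wavelength is comparable to the size of the microsite, is a standard one but an assumption on my part; the detailed argument is in Beck and Eccles (1992). The constants are CODATA values; the mass and energy figures are those quoted in the text.

```python
# Order-of-magnitude sketch of the energy and time scales in the
# Beck-Eccles trigger discussion.  The de Broglie criterion is an
# illustrative assumption, not necessarily their exact derivation.
import math

H = 6.62607015e-34          # Planck constant, J*s
HBAR = 1.054571817e-34      # reduced Planck constant, J*s
K_B = 1.380649e-23          # Boltzmann constant, J/K
M_E = 9.1093837015e-31      # electron mass, kg
EV = 1.602176634e-19        # 1 eV in J

# Thermal energy at room temperature (~300 K), about 2.6e-2 eV:
kT_eV = K_B * 300.0 / EV

def thermal_de_broglie(mass_kg, T=300.0):
    """Thermal de Broglie wavelength h / sqrt(2*pi*m*k_B*T), in meters."""
    return H / math.sqrt(2.0 * math.pi * mass_kg * K_B * T)

# For an effective mass of 10 electron masses the wavelength is ~1.4 nm,
# comparable to nanometer-scale microsites; lighter masses give longer
# wavelengths, i.e., more pronounced quantum behavior.
wavelength = thermal_de_broglie(10 * M_E)

# Time scale hbar/E for the quoted quantum energy scale E = 10^-2 eV,
# which comes out well below the 10^-12 s upper limit cited in the text:
t_quantum = HBAR / (1e-2 * EV)
```

The numbers come out consistent with the text: kT at room temperature is a few times 10^-2 eV, the de Broglie wavelength for ~10 electron masses is on the nanometer scale, and hbar/E for 10^-2 eV lies well below 10^-12 s.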
However, the question remains how the trigger may be relevant for conscious mental states. There are two aspects to this question. The first one refers to Eccles’ intention to utilize quantum processes in the brain as an entry point for mental causation. The idea, as indicated in Section 1, is that the fundamentally indeterministic nature of individual quantum state collapses offers room for the influence of mental powers on brain states. In the present picture, this is conceived in such a way that “mental intention (volition) becomes neurally effective by momentarily increasing the probability of exocytosis” (Beck and Eccles 1992, 11360). Further justification of this assumption is not given. The second aspect refers to the problem that processes at single synapses cannot be simply correlated to mental activity, whose neural correlates are coherent assemblies of neurons. Most plausibly, prima facie uncorrelated random processes at individual synapses would result in a stochastic network of neurons (Hepp 1999). Although Beck (2001) has indicated possibilities (such as quantum stochastic resonance) for achieving ordered patterns at the level of assemblies from fundamentally random synaptic processes, this remains an unsolved problem. With the exception of Eccles’ idea of mental causation, the approach by Beck and Eccles essentially focuses on brain states and brain dynamics. In this respect, Beck (2001, 109f) states explicitly that “science cannot, by its very nature, present any answer to […] questions related to the mind”. Nevertheless, their biophysical approach may open the door to controlled speculation about mind-matter relations. A more recent proposal targeting exocytosis processes at the synaptic cleft is due to Fisher (2015, 2017). Similar to the quasi-particles of Beck and Eccles, Fisher refers to so-called Posner molecules, in particular to calcium phosphate, Ca\(_9\)(PO\(_4\))\(_6\).
The nuclear spins of phosphate ions serve as entangled qubits within the molecules, which protect their coherent states against fast decoherence (resulting in extreme decoherence times in the range of hours or even days). If the Posner molecules are transported into presynaptic glutamatergic neurons, they will stimulate further glutamate release and amplify postsynaptic activity. Due to nonlocal quantum correlations this activity may be enhanced over multiple neurons (which would respond to Hepp’s concern). This is a sophisticated mechanism that calls for empirical tests. One of them would be to modify the phosphorus spin dynamics within the Posner molecules. For instance, replacing Ca by different Li isotopes with different nuclear spins gives rise to different decoherence times, affecting postsynaptic activity. Corresponding evidence has been shown in animals (Sechzer et al. 1986, Krug et al. 2019). In fact, lithium is known to be efficacious in tempering manic phases in patients with bipolar disorder.

3.5 Penrose and Hameroff: Quantum Gravity and Microtubuli

In the scenario developed by Penrose and neurophysiologically augmented by Hameroff, quantum theory is claimed to be effective for consciousness, but the way this happens is quite sophisticated. It is argued that elementary acts of consciousness are non-algorithmic, i.e., non-computable, and they are neurophysiologically realized as gravitation-induced reductions of coherent superposition states in microtubuli. Unlike the approaches discussed so far, which are essentially based on (different features of) status quo quantum theory, the physical part of the scenario, proposed by Penrose, refers to future developments of quantum theory for a proper understanding of the physical process underlying quantum state reduction. The grander picture is that a full-blown theory of quantum gravity is required to ultimately understand quantum measurement (see the entry on quantum gravity).
This is a far-reaching assumption. Penrose’s rationale for invoking state reduction is not that the corresponding randomness offers room for mental causation to become efficacious (although this is not excluded). His conceptual starting point, at length developed in two books (Penrose 1989, 1994), is that elementary conscious acts cannot be described algorithmically, hence cannot be computed. His background in this respect has a lot to do with the nature of creativity, mathematical insight, Gödel’s incompleteness theorems, and the idea of a Platonic reality beyond mind and matter. Penrose argues that a valid formulation of quantum state reduction replacing von Neumann’s projection postulate must faithfully describe an objective physical process that he calls objective reduction. As such a physical process remains empirically unconfirmed so far, Penrose proposes that effects not currently covered by quantum theory could play a role in state reduction. Ideal candidates for him are gravitational effects since gravitation is the only fundamental interaction which is not integrated into quantum theory so far. Rather than modifying elements of the theory of gravitation (i.e., general relativity) to achieve such an integration, Penrose discusses the reverse: that novel features have to be incorporated in quantum theory for this purpose. In this way, he arrives at the proposal of gravitation-induced objective state reduction. Why is such a version of state reduction non-computable? Initially one might think of objective state reduction in terms of a stochastic process, as most current proposals for such mechanisms indeed do (see the entry on collapse theories). This would certainly be indeterministic, but probabilistic and stochastic processes can be standardly implemented on a computer, hence they are definitely computable. Penrose (1994, Secs 7.8 and 7.10) sketches some ideas concerning genuinely non-computable, not only random, features of quantum gravity. 
In order for them to become viable candidates for explaining the non-computability of gravitation-induced state reduction, there is still a long way to go. With respect to the neurophysiological implementation of Penrose’s proposal, his collaboration with Hameroff has been instrumental. With his background as an anaesthesiologist, Hameroff suggested considering microtubules as an option for where reductions of quantum states can take place in an effective way, see e.g., Hameroff and Penrose (1996). The respective quantum states are assumed to be coherent superpositions of tubulin states, ultimately extending over many neurons. Their simultaneous gravitation-induced collapse is interpreted as an individual elementary act of consciousness. The proposed mechanism by which such superpositions are established includes a number of involved details that remain to be confirmed or disproven. The idea of focusing on microtubuli is partly motivated by the argument that special locations are required to ensure that quantum states can live long enough to become reduced by gravitational influence rather than by interactions with the warm and wet environment within the brain. Speculative remarks about how the non-computable aspects of the expected new physics mentioned above could be significant in this scenario[13] are given in Penrose (1994, Sec. 7.7). Influential criticism of the possibility that quantum states can in fact survive long enough in the thermal environment of the brain has been raised by Tegmark (2000). He estimates the decoherence time of tubulin superpositions due to interactions in the brain to be less than 10⁻¹² sec. Compared to typical time scales of microtubular processes of the order of milliseconds and more, he concludes that the lifetime of tubulin superpositions is much too short to be significant for neurophysiological processes in the microtubuli. In a response to this criticism, Hagan et al.
(2002) showed that a corrected version of Tegmark’s model provides decoherence times up to 10 to 100 μsec, and it has been argued that this can be extended up to the neurophysiologically relevant range of 10 to 100 msec under particular assumptions of the scenario by Penrose and Hameroff. More recently, a novel idea has entered this debate. Theoretical studies of interacting spins have shown that entangled states can be maintained in noisy open quantum systems at high temperature and far from thermal equilibrium. In these studies the effect of decoherence is counterbalanced by a simple “recoherence” mechanism (Hartmann et al. 2006, Li and Paraoanu 2009). This indicates that, under particular circumstances, entanglement may persist even in hot and noisy environments such as the brain. However, decoherence is just one piece in the debate about the overall picture suggested by Penrose and Hameroff. From another perspective, their proposal of microtubules as quantum computing devices has recently received support from work of Bandyopadhyay’s lab in Japan, showing evidence for vibrational resonances and conductivity features in microtubules that should be expected if they are macroscopic quantum systems (Sahu et al. 2013). Bandyopadhyay’s results attracted considerable attention and commentary (see Hameroff and Penrose 2014). In a well-informed in-depth analysis, Pitkänen (2014) raised concerns to the effect that the reported results alone may not be sufficient to confirm the approach proposed by Hameroff and Penrose with all its ramifications. In a different vein, Craddock et al. (2015, 2017) discussed in detail how microtubular processes (rather than, or in addition to, synaptic processes, see Flohr 2000) may be affected by anesthetics, and may also be responsible for neurodegenerative memory disorders.
As the correlation between anesthetics and consciousness seems obvious at the phenomenological level, it is interesting to know the intricate mechanisms by which anesthetic drugs act on the cytoskeleton of neuronal cells,[14] and what role quantum mechanics plays in these mechanisms. Craddock et al. (2015, 2017) point out a number of possible quantum effects (including the power-law behavior addressed by Vitiello, cf. Section 3.3) which can be investigated using presently available technologies. Recent empirical results about quantum interactions of anesthetics are due to Li et al. (2018) and Burdick et al. (2019). From a philosophical perspective, the scenario of Penrose and Hameroff has occasionally received outspoken rejection, see e.g., Grush and Churchland (1995) and the reply by Penrose and Hameroff (1995). Indeed, their approach collects several top-level mysteries, among them the relation between mind and matter itself, the ultimate unification of all physical interactions, the origin of mathematical truth, and the understanding of brain dynamics across hierarchical levels. Combining such deep and fascinating issues certainly needs further work to be substantiated, and should neither be too quickly celebrated nor offhandedly dismissed. After more than two decades since its inception one thing can be safely asserted: the approach has fruitfully inspired important innovative research on quantum effects on consciousness, both theoretical and empirical.

4. Quantum Mind

4.1 Applying Quantum Concepts to Mental Systems

Today there is accumulating evidence in the study of consciousness that quantum concepts like complementarity, entanglement, dispersive states, and non-Boolean logic play significant roles in mental processes. Corresponding quantum-inspired approaches address purely mental (psychological) phenomena using formal features also employed in quantum physics, but without involving the full-fledged framework of quantum mechanics or quantum field theory.
The term “quantum cognition” has been coined to refer to this new area of research. Perhaps a more appropriate characterization would be non-commutative structures in cognition. On the surface, this seems to imply that the brain activity correlated with those mental processes is in fact governed by quantum physics. The quantum brain approaches discussed in Section 3 represent attempts that have been proposed along these lines. But is it necessarily true that quantum features in psychology imply quantum physics in the brain? A formal move to incorporate quantum behavior in mental systems, without referring to quantum brain activity, is based on a state space description of mental systems. If mental states are defined on the basis of cells of a neural state space partition, then this partition needs to be well tailored to lead to robustly defined states. Ad hoc chosen partitions will generally create incompatible descriptions (Atmanspacher and beim Graben 2007) and states may become entangled (beim Graben et al. 2013). This implies that quantum brain dynamics is not the only possible explanation of quantum features in mental systems. Assuming that mental states arise from partitions of neural states in such a way that statistical neural states are co-extensive with individual mental states, the nature of mental processes depends strongly on the kind of partition chosen. If the partition is not properly constructed, it is likely that mental states and observables show features that resemble quantum behavior although the correlated brain activity may be entirely classical: quantum mind without quantum brain. Intuitively, it is not difficult to understand why non-commuting operations or non-Boolean logic should be relevant, even inevitable, for mental systems that have nothing to do with quantum physics. Simply speaking, the non-commutativity of operations means nothing else than that the sequence, in which operations are applied, matters for the final result. 
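As a purely illustrative sketch (my own toy construction, not drawn from any of the models cited here), this order dependence can be made concrete by representing a mental state as a two-dimensional unit vector and two evaluations as projections onto non-orthogonal axes. The names and angles below are arbitrary assumptions chosen only to show that the two orders yield different probabilities:

```python
import math

def project(state, direction):
    """Project a 2-d state vector onto a unit direction vector."""
    dot = state[0] * direction[0] + state[1] * direction[1]
    return (dot * direction[0], dot * direction[1])

def norm_sq(v):
    """Squared length of a 2-d vector, read as a probability here."""
    return v[0] ** 2 + v[1] ** 2

# Evaluation A projects onto the x-axis, evaluation B onto a 45-degree axis.
a = (1.0, 0.0)
b = (math.cos(math.pi / 4), math.sin(math.pi / 4))

# Initial state at 60 degrees (an arbitrary choice).
psi = (math.cos(math.pi / 3), math.sin(math.pi / 3))

# Probability of passing A then B, versus B then A:
p_ab = norm_sq(project(project(psi, a), b))
p_ba = norm_sq(project(project(psi, b), a))

print(p_ab, p_ba)  # different values: the order of operations matters
```

Because the two projection axes do not commute, applying them in reverse order leaves the state in a different place and assigns a different probability; with orthogonal (commuting) axes the asymmetry would vanish.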
And non-Boolean logic refers to propositions that may have unsharp truth values beyond yes or no, shades of plausibility or credibility as it were. Both versions obviously abound in psychology and cognitive science (and in everyday life). Pylkkänen (2015) has even suggested using this intuitive accessibility of mental quantum features for a better conceptual grasp of quantum physics. The particular strength of the idea of generalizing quantum theory beyond quantum physics is that it provides a formal framework which both yields a transparent well-defined link to conventional quantum physics and has been used to describe a number of concrete psychological applications with surprisingly detailed theoretical and empirical results. Corresponding approaches fall under the third category mentioned in Section 3: further developments or generalizations of quantum theory. One rationale for the focus on psychological phenomena is that their detailed study is a necessary precondition for further questions as to their neural correlates. Therefore, the investigation of mental quantum features resists the temptation to reduce them (within scenario A) all too quickly to neural activity. There are several kinds of psychological phenomena which have been addressed in the spirit of mental quantum features so far: (i) decision processes, (ii) order effects, (iii) bistable perception, (iv) learning, (v) semantic networks, (vi) quantum agency, and (vii) super-quantum entanglement correlations. These topics will be outlined in some more detail in the following Section 4.2. It is a distinguishing aspect of these approaches that they have led to well-defined and specific theoretical models with empirical consequences and novel predictions. A second point worth mentioning is that by now there are a number of research groups worldwide (rather than solitary actors) studying quantum ideas in cognition, partly even in collaborative efforts.
For about a decade there have been regular international conferences with proceedings for the exchange of new results and ideas, and target articles, special issues, and monographs have been devoted to basic frameworks and new developments (Khrennikov 1999, Atmanspacher et al. 2002, Busemeyer and Bruza 2012, Haven and Khrennikov 2013, Wendt 2015).

4.2 Concrete Applications

Decision Processes

An early precursor of work on decision processes is due to Aerts and Aerts (1994). However, the first detailed account appeared in a comprehensive publication by Busemeyer et al. (2006). The key idea is to define probabilities for decision outcomes and decision times in terms of quantum probability amplitudes. Busemeyer et al. found agreement of a suitable Hilbert space model (and disagreement of a classical alternative) with empirical data. Moreover, they were able to clarify the long-standing riddle of the so-called conjunction and disjunction effects (Tversky and Shafir 1992) in decision making (Pothos and Busemeyer 2009). Another application refers to the asymmetry of similarity judgments (Tversky 1977), which can be adequately understood by quantum approaches (see Aerts et al. 2011, Pothos et al. 2013).

Order Effects

Order effects in polls, surveys, and questionnaires, recognized for a long time (Schwarz and Sudman 1992), are still insufficiently understood today. Their study as contextual quantum features (Aerts and Aerts 1994, Busemeyer et al. 2011) offers the potential to unveil a lot more about such effects than the well-known fact that responses can drastically alter if questions are swapped. Atmanspacher and Römer (2012) proposed a complete classification of possible order effects (including uncertainty relations, and independent of Hilbert space representations), and Wang et al. (2014) discovered a fundamental covariance condition (called the QQ equation) for a wide class of order effects.
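The pattern the QQ equation captures can be illustrated with a deliberately simple two-dimensional projection model (again my own toy sketch, not the general derivation of Wang et al. 2014): in a projective model of sequential answers, the probability of giving the *same* answer to both questions is identical in either question order, even though each individual sequence probability shifts with order.

```python
import math

def unit(angle):
    """Unit vector at a given angle (radians)."""
    return (math.cos(angle), math.sin(angle))

def project(state, d):
    dot = state[0] * d[0] + state[1] * d[1]
    return (dot * d[0], dot * d[1])

def norm_sq(v):
    return v[0] ** 2 + v[1] ** 2

def seq_prob(state, first, second):
    """Probability of 'yes' to first, then 'yes' to second (projective collapse)."""
    return norm_sq(project(project(state, first), second))

psi = unit(math.pi / 3)                                  # respondent's initial state
a_yes, a_no = unit(0.0), unit(math.pi / 2)               # question A: yes/no axes
b_yes, b_no = unit(math.pi / 4), unit(3 * math.pi / 4)   # question B: yes/no axes

# Probability of agreeing answers (yes-yes plus no-no) in both question orders:
ab = seq_prob(psi, a_yes, b_yes) + seq_prob(psi, a_no, b_no)
ba = seq_prob(psi, b_yes, a_yes) + seq_prob(psi, b_no, a_no)

print(ab, ba)  # equal, despite order effects in the individual terms
```

The individual sequence probabilities here differ strongly between orders, yet the agreement totals coincide; an order-invariant quantity of this kind is the sort of parameter-free prediction that distinguishes the quantum account from classical alternatives.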
An important issue for quantum mind approaches is the complexity or parsimony of Hilbert space models as compared to classical (Bayesian, Markov, etc.) models. Atmanspacher and Römer (2012) as well as Busemeyer and Wang (2018) addressed this issue for order effects, with the result that quantum approaches generally require fewer free variables than competing classical models and are, thus, more parsimonious and more stringent than those. Busemeyer and Wang (2017) studied how measuring incompatible observables sequentially induces uncertainties on the second measurement outcome.

Bistable Perception

The perception of a stimulus is bistable if the stimulus is ambiguous, such as the Necker cube. This bistable behavior has been modeled analogously to the physical quantum Zeno effect. (Note that this differs from the quantum Zeno effect as used in Section 3.2.) The resulting Necker-Zeno model predicts a quantitative relation between basic psychophysical time scales in bistable perception that has been confirmed experimentally (see Atmanspacher and Filk 2013 for review). Moreover, Atmanspacher and Filk (2010) showed that the Necker-Zeno model violates temporal Bell inequalities for particular distinguished states in bistable perception.[15] This theoretical prediction is yet to be tested experimentally and would be a litmus test for quantum behavior in mental systems. Such states have been denoted as temporally nonlocal in the sense that they are not sharply (pointwise) localized along the time axis but appear to be stretched over an extended time interval (an extended present). Within this interval, relations such as “earlier” or “later” are illegitimate designators and, accordingly, causal connections are ill-defined.

Learning Processes

Another quite obvious arena for non-commutative behavior is learning behavior.
In theoretical studies, Atmanspacher and Filk (2006) showed that in simple supervised learning tasks small recurrent networks not only learn the prescribed input-output relation but also the sequence in which inputs have been presented. This entails that the recognition of inputs is impaired if the sequence of presentation is changed. In very few exceptional cases, with special characteristics that remain to be explored, this impairment is avoided.

Semantic Networks

The difficult issue of meaning in natural languages is often explored in terms of semantic networks. Gabora and Aerts (2002) described the way in which concepts are evoked, used, and combined to generate meaning depending on contexts. Their ideas about concept association in evolution were further developed by Gabora and Aerts (2009). A particularly thrilling application is due to Bruza et al. (2015), who challenged a long-standing dogma in linguistics by proposing that the meaning of concept combinations (such as “apple chip”) is not uniquely separable into the meanings of the combined concepts (“apple” and “chip”). Bruza et al. (2015) refer to meaning relations in terms of entanglement-style features in quantum representations of concepts and reported first empirical results in this direction.

Quantum Agency

A quantum approach for understanding issues related to agency, intention, and other controversial topics in the philosophy of mind has been proposed by Briegel and Müller (2015), see also Müller and Briegel (2018). This proposal is based on work on quantum algorithms for reinforcement learning in neural networks (“projective simulation”, Paparo et al. 2012), which can be regarded as a variant of quantum machine learning (Wittek 2014). The gist of the idea is how agents can develop agency as a kind of independence from their environment and the deterministic laws governing it (Briegel 2012).
The behavior of the agent itself is simulated as a non-deterministic quantum random walk in its memory space.

Super-Quantum Correlations

Quantum entanglement implies correlations exceeding standard classical correlations (by violating Bell-type inequalities) but obeying the so-called Tsirelson bound. However, this bound does not exhaust the range by which Bell-type correlations can be violated in principle. Popescu and Rohrlich (1994) found such correlations for particular quantum measurements, and the study of such super-quantum correlations has become a lively field of contemporary research, as the review by Popescu (2014) shows. One problem in assessing super-quantum correlations in mental systems is to delineate genuine (non-causal) quantum-type correlations from (causal) classical correlations that can be used for signaling. Dzhafarov and Kujala (2013) derived a compact way to do so and subtract classical context effects such as priming in mental systems so that true quantum correlations remain. See Cervantes and Dzhafarov (2018) for empirical applications, and Atmanspacher and Filk (2019) for further subtleties.

5. Mind and Matter as Dual Aspects

5.1 Compositional and Decompositional Approaches

Dual-aspect approaches consider mental and material domains of reality as aspects, or manifestations, of one underlying reality in which mind and matter are unseparated. In such a framework, the distinction between mind and matter results from the application of a basic tool for achieving epistemic access to, i.e., gathering knowledge about, both the separated domains and the underlying reality.[16] Consequently, the status of the underlying, psychophysically neutral domain is considered as ontic relative to the mind-matter distinction. As mentioned in Section 2, dual-aspect approaches have a long history, essentially starting with Spinoza as a most outspoken protagonist.
Major directions in the 20th century have been described and compared in some detail by Atmanspacher (2014). An important distinction between two basic classes of dual-aspect thinking is the way in which the psychophysically neutral domain is related to the mental and the physical. For Russell and the neo-Russellians the compositional arrangements of psychophysically neutral elements decide how they differ with respect to mental or physical properties. As a consequence, the mental and the physical are reducible to the neutral domain. Chalmers’ (1996, Chap. 8) ideas on “consciousness and information” fall into this class. Tononi’s theoretical framework of “integrated information theory” (see Oizumi et al. 2014, Tononi and Koch 2015) can be seen as a concrete implementation of a number of features of Chalmers’ proposal. No quantum structures are involved in this work. The other class of dual-aspect thinking is decompositional rather than compositional. Here the basic metaphysics of the psychophysically neutral domain is holistic, and the mental and the physical (neither reducible to one another nor to the neutral) emerge by breaking the holistic symmetry or, in other words, by making distinctions. This framework is guided by the analogy to quantum holism, and the predominant versions of this picture are quantum theoretically inspired as, for instance, proposed by Pauli and Jung (Jung and Pauli 1955; Meier 2001) and by Bohm and Hiley (Bohm 1990; Bohm and Hiley 1993; Hiley 2001). They are based on speculations that clearly exceed the scope of contemporary quantum theory. In Bohm’s and Hiley’s approach, the notions of implicate and explicate order mirror the distinction between ontic and epistemic domains. Mental and physical states emerge by explication, or unfoldment, from an ultimately undivided and psychophysically neutral implicate, enfolded order. This order is called holomovement because it is not static but rather dynamic, as in Whitehead’s process philosophy.
De Gosson and Hiley (2013) give a good introduction to how the holomovement can be addressed from a formal (algebraic) point of view. At the level of the implicate order, the term active information expresses that this level is capable of “informing” the epistemically distinguished, explicate domains of mind and matter. It should be emphasized that the usual notion of information is clearly an epistemic term. Nevertheless, there are quite a number of dual-aspect approaches addressing something like information at the ontic, psychophysically neutral level.[17] Using an information-like concept in a non-epistemic manner appears inconsistent if the common (syntactic) significance of Shannon-type information is intended, which requires distinctions in order to construct partitions, providing alternatives in the set of given events. Most information-based dual-aspect approaches do not sufficiently clarify their notion of information, so that misunderstandings easily arise.

5.2 Mind-Matter Correlations

While the proposal by Bohm and Hiley essentially sketches a conceptual framework without further concrete details, particularly concerning the mental domain, the Pauli-Jung conjecture (Atmanspacher and Fuchs 2014) concerning dual-aspect monism offers some more material to discuss. An intuitively appealing way to represent their approach considers the distinction between epistemic and ontic domains of material reality due to quantum theory in parallel with the distinction between epistemic and ontic mental domains. On the physical side, the epistemic/ontic distinction refers to the distinction between a “local realism” of empirical facts obtained from classical measuring instruments and a “holistic realism” of entangled systems (Atmanspacher and Primas 2003). Essentially, these domains are connected by the process of measurement, thus far conceived as independent of conscious observers.
The corresponding picture on the mental side refers to a distinction between conscious and unconscious domains.[18] In Jung’s depth psychological conceptions, these two domains are connected by the emergence of conscious mental states from the unconscious, analogous to physical measurement. In Jung’s depth psychology it is crucial that the unconscious has a collective component, unseparated between individuals and populated by so-called archetypes. They are regarded as constituting the psychophysically neutral level comprising both the collective unconscious and the holistic reality of quantum theory. At the same time they operate as “ordering factors”, being responsible for the arrangement of their psychical and physical manifestations in the epistemically distinguished domains of mind and matter. More details of this picture can be found in Jung and Pauli (1955), Meier (2001), Atmanspacher and Primas (2009), Atmanspacher and Fach (2013), and Atmanspacher and Fuchs (2014). This scheme is clearly related to scenario (B) of Sec. 2, combining an epistemically dualistic with an ontically monistic approach. Correlations between the mental and the physical are conceived as non-causal, thus respecting the causal closure of the physical against the mental. However, there is a causal relationship (in the sense of formal rather than efficient causation) between the psychophysically neutral, monistic level and the epistemically distinguished mental and material domains. In Pauli’s and Jung’s terms this kind of causation is expressed by the ordering operation of archetypes in the collective unconscious. In other words, this scenario offers the possibility that the mental and material manifestations may inherit mutual correlations due to the fact that they are jointly caused by the psychophysically neutral level. One might say that such correlations are remnants reflecting the lost holism of the underlying reality. 
They are not the result of any direct causal interaction between mental and material domains. Thus, they are not suitable for an explanation of direct efficient mental causation. Their existence would require some psychophysically neutral activity entailing correlation effects that would be misinterpreted as mental causation of physical events. Independently of quantum theory, a related move was suggested by Velmans (2002, 2009). But even without mental causation, scenario (B) is relevant to ubiquitous correlations between conscious mental states and physical brain states.

5.3 Further Developments

In the Pauli-Jung conjecture, these correlations are called synchronistic and have been extended to psychosomatic relations (Meier 1975). A comprehensive typology of mind-matter correlations following from Pauli’s and Jung’s dual-aspect monism was proposed by Atmanspacher and Fach (2013). They found that a large body of empirical material concerning more than 2000 cases of so-called “exceptional experiences” can be classified according to their deviation from the conventional reality model of a subject and from the conventional relations between its components (see Atmanspacher and Fach 2019 for more details). Synchronistic events in the sense of Pauli and Jung appear as a special case of such relational deviations. An essential condition required for synchronistic correlations is that they are meaningful for those who experience them. It is tempting to interpret the use of meaning as an attempt to introduce semantic information as an alternative to syntactic information as addressed above. (Note the parallel to active information as in the approach by Bohm and Hiley.) Although this entails difficult problems concerning a clear-cut definition and operationalization, something akin to meaning, both explicitly and implicitly, might be a relevant informational currency for mind-matter relations within the framework of decompositional dual-aspect thinking (Atmanspacher 2014).
Primas (2003, 2009, 2017) proposed a dual-aspect approach where the distinction of mental and material domains originates from the distinction between two different modes of time: tensed (mental) time, including nowness, on the one hand and tenseless (physical) time, viewed as an external parameter, on the other (see the entries on time and on being and becoming in modern physics). Regarding these two concepts of time as implied by a symmetry breaking of a timeless level of reality that is psychophysically neutral, Primas conceives the tensed time of the mental domain as quantum-correlated with the parameter time of physics via “time-entanglement”. This scenario has been formulated in a Hilbert space framework with appropriate time operators (Primas 2009, 2017), so it offers a formally elaborated dual-aspect quantum framework for basic aspects of the mind-matter problem. It shows some convergence with the idea of temporally nonlocal mental states as addressed in Section 4.2. As indicated in Section 3.2, the approach by Stapp contains elements of dual-aspect thinking as well, although this is not much emphasized by its author. The dual-aspect quantum approaches discussed in the present section tend to focus on the issue of a generalized mind-matter “entanglement” more than on state reduction. The primary purpose here is to understand correlations between mental and material domains rather than direct causally efficacious interactions between them. A final issue of dual-aspect approaches in general refers to the problem of panpsychism or panexperientialism, respectively (see the review by Skrbina 2003, and the entry on panpsychism). In the limit of a universal symmetry breaking at the psychophysically neutral level, every system has both a mental and a material aspect. In such a situation it is important to understand “mentality” much more generally than “consciousness”.
Unconscious or proto-mental acts as opposed to conscious mental acts are notions sometimes used to underline this difference. The special case of human consciousness within the mental domain might be regarded as special as its material correlate, the brain, within the material domain.

6. Conclusions

The historical motivation for exploring quantum theory in trying to understand consciousness derived from the realization that collapse-type quantum events introduce an element of randomness, which is primary (ontic) rather than due to ignorance or missing information (epistemic). Approaches such as those of Stapp and of Beck and Eccles emphasize this (in different ways), insofar as the ontic randomness of quantum events is regarded to provide room for mental causation, i.e., the possibility that conscious mental acts can influence brain behavior. The approach by Penrose and Hameroff also focuses on state collapse, but with a significant move from mental causation to the non-computability of (particular) conscious acts. Any discussion of state collapse or state reduction (e.g. by measurement) refers, at least implicitly, to superposition states since those are the states that are reduced. Insofar as entangled systems remain in a quantum superposition as long as no measurement has occurred, entanglement is always co-addressed when state reduction is discussed. By contrast, some of the dual-aspect quantum approaches utilize the topic of entanglement differently, and independently of state reduction in the first place. Inspired by and analogous to entanglement-induced nonlocal correlations in quantum physics, mind-matter entanglement is conceived as the hypothetical origin of mind-matter correlations. This exhibits the highly speculative picture of a fundamentally holistic, psychophysically neutral level of reality from which correlated mental and material domains emerge. Each of the examples discussed in this overview has both promising and problematic aspects.
The approach by Beck and Eccles is most detailed and concrete with respect to the application of standard quantum mechanics to the process of exocytosis. However, it does not solve the problem of how the activity of single synapses enters the dynamics of neural assemblies, and it leaves the mental causation of quantum processes as a mere claim. Stapp’s approach suggests a radically expanded ontological basis for both the mental domain and status-quo quantum theory as a theory of matter without essentially changing the formalism of quantum theory. Although related to inspiring philosophical and some psychological background, it still lacks empirical confirmation. The proposal by Penrose and Hameroff exceeds the domain of present-day quantum theory by far and is the most speculative example among those discussed. It is not easy to see how the picture as a whole can be formally worked out and put to empirical test. The approach initiated by Umezawa is embedded in the framework of quantum field theory, more broadly applicable and formally more sophisticated than standard quantum mechanics. It is used to describe the emergence of classical activity in neuronal assemblies on the basis of symmetry breakings in a quantum field theoretical framework. A clear conceptual distinction between brain states and mental states has often been missing. Their relation to mental states has recently been indicated in the framework of a dual-aspect approach. The dual-aspect approaches of Pauli and Jung and of Bohm and Hiley are conceptually more transparent and more promising. Although there is now a huge body of empirically documented mind-matter correlations that supports the Pauli-Jung conjecture, it lacks a detailed formal basis so far. Hiley’s work offers an algebraic framework which may lead to theoretical progress.
A novel dual-aspect quantum proposal by Primas, based on the distinction between tensed mental time and tenseless physical time, marks a significant step forward, particularly as concerns a consistent formal framework. Maybe the best prognosis for future success among the examples described in this overview, at least on foreseeable time scales, goes to the investigation of mental quantum features without focusing on associated brain activity to begin with. A number of corresponding approaches have been developed which include concrete models for concrete situations and have led to successful empirical tests and further predictions. On the other hand, a coherent theory behind individual models and relating the different types of approaches is still to be settled in detail. With respect to scientific practice, a particularly promising aspect is the visible formation of a scientific community with conferences, mutual collaborations, and some perspicuous attraction for young scientists to join the field.

• Aerts, D., Durt, T., Grib, A., Van Bogaert, B., and Zapatrin, A., 1993, “Quantum structures in macroscopical reality,” International Journal of Theoretical Physics, 32: 489–498.
• Aerts, D., and Aerts, S., 1994, “Applications of quantum statistics in psychological studies of decision processes,” Foundations of Science, 1: 85–97.
• Aerts, S., Kitto, K., and Sitbon, L., 2011, “Similarity metrics within a point of view,” in Quantum Interaction. 5th International Conference, D. Song, et al. (eds.), Berlin: Springer, pp. 13–24.
• Alfinito, E., and Vitiello, G., 2000, “Formation and life-time of memory domains in the dissipative quantum model of brain,” International Journal of Modern Physics B, 14: 853–868.
• Alfinito, E., Viglione, R.G., and Vitiello, G., 2001, “The decoherence criterion,” Modern Physics Letters B, 15: 127–135.
• Anderson, J.A., and Rosenfeld, E. (eds.), 1988, Neurocomputing: Foundations of Research, Cambridge, MA: MIT Press.
• Atmanspacher, H., 2014, “20th century variants of dual-aspect thinking (with commentaries and replies),” Mind and Matter, 12: 245–288. • Atmanspacher, H., and Fach, W., 2013, “A structural-phenomenological typology of mind-matter correlations,” Journal of Analytical Psychology, 58: 218–243. • –––, 2019, “Exceptional experiences of stable and unstable mental states, understood from a dual-aspect point of view,” Philosophies, 4: 7. • Atmanspacher, H., and Filk, T., 2006, “Complexity and non-commutativity of learning operations on graphs,” BioSystems, 85: 84–93. • –––, 2010, “A proposed test of temporal nonlocality in bistable perception,” Journal of Mathematical Psychology, 54: 314–321. • –––, 2013, “The Necker-Zeno model for bistable perception,” Topics in Cognitive Science, 5: 800–817. • –––, 2019, “Contextuality revisited – Signaling may differ from communicating,” in Quanta and Mind, A. de Barros and C. Montemayor (eds.), Berlin: Springer. • Atmanspacher, H., and Fuchs, C. (eds.), 2014, The Pauli-Jung Conjecture and Its Impact Today, Exeter: Imprint Academic. • Atmanspacher, H., and beim Graben, P., 2007, “Contextual emergence of mental states from neurodynamics,” Chaos and Complexity Letters, 2: 151–168. • Atmanspacher, H., and Primas, H. (eds.), 2009, Recasting Reality. Wolfgang Pauli’s Philosophical Ideas and Contemporary Science, Berlin: Springer. • Atmanspacher, H., and Römer, H., 2012, “Order effects in sequential measurements of non-commuting psychological observables,” Journal of Mathematical Psychology, 56: 274–280. • Atmanspacher, H., Römer, H., and Walach, H., 2002, “Weak quantum theory: Complementarity and entanglement in physics and beyond,” Foundations of Physics, 32: 379–406. • Beck, F., and Eccles, J., 1992, “Quantum aspects of brain activity and the role of consciousness,” Proceedings of the National Academy of Sciences of the USA, 89: 11357–11361. 
• Beck, F., 2001, “Quantum brain dynamics and consciousness,” in The Physical Nature of Consciousness, P. van Loocke (ed.), Amsterdam: Benjamins, pp. 83–116. • beim Graben, P., Filk, T., and Atmanspacher, H., 2013, “Epistemic entanglement due to non-generating partitions of classical dynamical systems,” International Journal of Theoretical Physics, 52: 723–734. • Bohm, D., 1990, “A new theory of the relationship of mind and matter,” Philosophical Psychology, 3: 271–286. • Bohm, D., and Hiley, B.J., 1993, The Undivided Universe, London: Routledge. See Chap. 15. • Briegel, H.-J., 2012, “On creative machines and the physical origins of freedom,” Scientific Reports, 2: 522. • Briegel, H.-J., and Müller, T., 2015, “A chance for attributable agency,” Minds and Machines, 25: 261–279. • Brukner, C., and Zeilinger, A., 2003, “Information and fundamental elements of the structure of quantum theory,” in Time, Quantum and Information, L. Castell and O. Ischebeck (eds.), Berlin: Springer, pp. 323–355. • Bruza, P.D., Kitto, K., Ramm, B.R., and Sitbon, L., 2015, “A probabilistic framework for analysing the compositionality of conceptual combinations,” Journal of Mathematical Psychology, 67: 26–38. • Burdick, R.K., Villabona-Monsalve, J.P., Mashour, G.A., and Goodson, T. III, 2019, “Modern anesthetic ethers demonstrate quantum interactions with entangled photons,” Scientific Reports, 9: 11351. • Busemeyer, J.R., and Bruza, P.D., 2012, Quantum Models of Cognition and Decision, Cambridge: Cambridge University Press. • Busemeyer, J.R., and Wang, Z., 2017, “Is there a problem with quantum models of psychological measurements?,” PLoS ONE, 12(11): e0187733. • –––, 2018, “Hilbert space multidimensional theory,” Psychological Review, 125: 572–591. • Busemeyer, J.R., Wang, Z., and Townsend, J.T., 2006, “Quantum dynamics of human decision making,” Journal of Mathematical Psychology, 50: 220–241. 
• Busemeyer, J.R., Pothos, E., Franco, R., and Trueblood, J.S., 2011, “A quantum theoretical explanation for probability judgment errors,” Psychological Review, 118: 193–218. • Butterfield, J., 1998, “Quantum curiosities of psychophysics,” in Consciousness and Human Identity, J. Cornwell (ed.), Oxford: Oxford University Press, pp. 122–157. • Cervantes, V.H., and Dzhafarov, E.N., 2018, “Snow Queen is evil and beautiful: Experimental evidence for probabilistic contextuality in human choices,” Decision, 5: 193–204. • Chalmers, D., 1995, “Facing up to the problem of consciousness,” Journal of Consciousness Studies, 2(3): 200–219. • –––, 1996, The Conscious Mind, Oxford: Oxford University Press. • Clifton, R., Bub, J., and Halvorson, H., 2003, “Characterizing quantum theory in terms of information-theoretic constraints,” Foundations of Physics, 33: 1561–1591. • Craddock, T.J.A., Hameroff, S.R., Ayoub, A.T., Klobukowski, M., and Tuszynski, J.A., 2015, “Anesthetics act in quantum channels in brain microtubules to prevent consciousness,” Current Topics in Medicinal Chemistry, 15: 523–533. • Craddock, T.J.A., Kurian, P., Preto, J., Sahu, K., Hameroff, S.R., Klobukowski, M., and Tuszynski, J.A., 2017, “Anesthetic alterations of collective terahertz oscillations in tubulin correlate with clinical potency: Implications for anesthetic action and post-operative cognitive dysfunction,” Scientific Reports, 7: 9877. • Cucu, A.C., and Pitts, J.B., 2019, “How dualists should (not) respond to the objection from energy conservation,” Mind and Matter, 17: 95–121. • de Gosson, M.A., and Hiley, B., 2013, “Hamiltonian flows and the holomovement,” Mind and Matter, 11: 205–221. • d’Espagnat, B., 1999, “Concepts of reality,” in On Quanta, Mind, and Matter, H. Atmanspacher, U. Müller-Herold, and A. Amann (eds.), Dordrecht: Kluwer, pp. 249–270. • Dzhafarov, E.N., and Kujala, J.V., 2013, “All-possible-couplings approach to measuring probabilistic context,” PLoS One, 8(5): e61712. 
• Ellis, G.F.R., Noble, D., and O’Connor, T. (eds.), 2011, Top-Down Causation: An Integrating Theme Within and Across the Sciences?, Special Issue of Interface Focus 2(1). • Esfeld, M., 1999, “Wigner’s view of physical reality,” Studies in History and Philosophy of Modern Physics, 30B: 145–154. • Fechner, G., 1861, Über die Seelenfrage. Ein Gang durch die sichtbare Welt, um die unsichtbare zu finden, Leipzig: Amelang. Second edition: Leopold Voss, Hamburg, 1907. Reprinted Eschborn: Klotz, 1992. • Feigl, H., 1967, The ‘Mental’ and the ‘Physical’, Minneapolis: University of Minnesota Press. • Filk, T., and von Müller, A., 2009, “Quantum physics and consciousness: The quest for a common conceptual foundation,” Mind and Matter, 7(1): 59–79. • Fisher, M.P.A., 2015, “Quantum cognition: The possibility of processing with nuclear spins in the brain,” Annals of Physics, 362: 593–602. • –––, 2017, “Are we quantum computers, or merely clever robots?” Asia Pacific Physics Newsletter, 6(1): 39–45. • Flohr, H., 2000, “NMDA receptor-mediated computational processes and phenomenal consciousness,” in Neural Correlates of Consciousness. Empirical and Conceptual Questions, T. Metzinger (ed.), Cambridge: MIT Press, pp. 245–258. • Franck, G., 2004, “Mental presence and the temporal present,” in Brain and Being, G.G. Globus, K.H. Pribram, and G. Vitiello (eds.), Amsterdam: Benjamins, pp. 47–68. • –––, 2008, “Presence and reality: An option to specify panpsychism?” Mind and Matter, 6(1): 123–140. • Freeman, W.J., and Quian Quiroga, R., 2012, Imaging Brain Function with EEG, Berlin: Springer. • Freeman, W.J., and Vitiello, G., 2006, “Nonlinear brain dynamics as macroscopic manifestation of underlying many-body field dynamics,” Physics of Life Reviews, 3(2): 93–118. • –––, 2008, “Dissipation and spontaneous symmetry breaking in brain dynamics,” Journal of Physics A, 41: 304042. • –––, 2010, “Vortices in brain waves,” International Journal of Modern Physics B, 24: 3269–3295. 
• –––, 2016, “Matter and mind are entangled in two streams of images guiding behavior and informing the subject through awareness,” Mind and Matter, 14: 7–25. • Fröhlich, H., 1968, “Long range coherence and energy storage in biological systems,” International Journal of Quantum Chemistry, 2: 641–649. • Fuchs, C.A., 2002, “Quantum mechanics as quantum information (and only a little more),” in Quantum Theory: Reconsideration of Foundations, A. Yu. Khrennikov (ed.), Växjö: Växjö University Press, pp. 463–543. • Gabora, L., and Aerts, D., 2002, “Contextualizing concepts using a mathematical generalization of the quantum formalism,” Journal of Experimental and Theoretical Artificial Intelligence, 14: 327–358. • –––, 2009, “A model of the emergence and evolution of integrated worldviews,” Journal of Mathematical Psychology, 53: 434–451. • Grush, R., and Churchland, P.S., 1995, “Gaps in Penrose’s toilings,” Journal of Consciousness Studies, 2(1): 10–29. (See also the response by R. Penrose and S. Hameroff in Journal of Consciousness Studies, 2(2) (1995): 98–111.) • Hagan, S., Hameroff, S.R., and Tuszynski, J.A., 2002, “Quantum computation in brain microtubules: Decoherence and biological feasibility,” Physical Review E, 65: 061901-1 to -11. • Hameroff, S.R., and Penrose, R., 1996, “Conscious events as orchestrated spacetime selections,” Journal of Consciousness Studies, 3(1): 36–53. • –––, 2014, “Consciousness in the universe: A review of the Orch OR theory (with commentaries and replies),” Physics of Life Reviews, 11: 39–112. • Hartmann, L., Dür, W., and Briegel, H.-J., 2006, “Steady state entanglement in open and noisy quantum systems at high temperature,” Physical Review A, 74: 052304. • Haven, E., and Khrennikov, A.Yu., 2013, Quantum Social Science, Cambridge: Cambridge University Press. • Heisenberg, W., 1958, Physics and Philosophy, New York: Harper and Row. • Hepp, K., 1999, “Toward the demolition of a computational quantum brain,” in Quantum Future, P. 
Blanchard and A. Jadczyk (eds.), Berlin: Springer, pp. 92–104. • Hiley, B.J., 2001, “Non-commutative geometry, the Bohm interpretation and the mind-matter relationship,” in Computing Anticipatory Systems—CASYS 2000, D. Dubois (ed.), Berlin: Springer, pp. 77–88. • Holton, G., 1970, “The roots of complementarity,” Daedalus, 99: 1015–1055. • Huelga, S.H., and Plenio, M.B., 2013, “Vibrations, quanta, and biology,” Contemporary Physics, 54: 181–207. • James, W., 1950 [1890], The Principles of Psychology (Volume 1), New York: Dover; originally published in 1890. • Jibu, M., and Yasue, K., 1995, Quantum Brain Dynamics and Consciousness, Amsterdam: Benjamins. • Jortner, J., 1976, “Temperature dependent activation energy for electron transfer between biological molecules,” Journal of Chemical Physics, 64: 4860–4867. • Jung, C.G., and Pauli, W., 1955, The Interpretation of Nature and the Psyche, Pantheon, New York. Translated by P. Silz. German original Naturerklärung und Psyche, Zürich: Rascher, 1952. • Kandel, E.R., Schwartz, J.H., and Jessell, T.M., 2000, Principles of Neural Science, New York: McGraw Hill. • Kane, R., 1996, The Significance of Free Will, Oxford: Oxford University Press. • Kaneko, K., and Tsuda, I., 2000, Chaos and Beyond, Berlin: Springer. • Khrennikov, A.Yu., 1999, “Classical and quantum mechanics on information spaces with applications to cognitive, psychological, social and anomalous phenomena,” Foundations of Physics, 29: 1065–1098. • Kim, J., 1998, Mind in a Physical World, Cambridge, MA: MIT Press. • Krug, J.T., A.K. Klein, E.M. Purvis, K. Ayala, M.S. Mayes, L. Collins, M.P.A. Fisher, and A. Ettenberg, 2019, “Effects of chronic lithium exposure in a modified rodent ketamine-induced hyperactivity model of mania,” Pharmacology, Biochemistry and Behavior, 179: 150–156. • Kuhn, A., Aertsen, A., and Rotter, S., 2004, “Neuronal integration of synaptic input in the fluctuation-driven regime,” Journal of Neuroscience, 24: 2345–2356. 
• Li, J., and Paraoanu, G.S., 2009, “Generation and propagation of entanglement in driven coupled-qubit systems,” New Journal of Physics, 11: 113020. • Li, N., Lu, D., Yang, L., Tao, H., Xu, Y., Wang, C., Fu, L., Liu, H., Chummum, Y., and Zhang, S., 2018, “Nuclear spin attenuates the anesthetic potency of xenon isotopes in mice: Implications for the mechanisms of anesthesia and consciousness,” Anesthesiology, 129: 271–277. • London, F., and Bauer, E., 1939, La théorie de l’observation en mécanique quantique, Paris: Hermann; English translation, “The theory of observation in quantum mechanics,” in Quantum Theory and Measurement, J.A. Wheeler and W.H. Zurek (eds.), Princeton: Princeton University Press, 1983, pp. 217–259. • Mahler, G., 2015, “Temporal non-locality: Fact or fiction?,” in Quantum Interaction. 8th International Conference, H. Atmanspacher, et al. (eds.), Berlin: Springer, pp. 243–254. • Marcus, R.A., 1956, “On the theory of oxidation-reduction reactions involving electron transfer,” Journal of Chemical Physics, 24: 966–978. • Margenau, H., 1950, The Nature of Physical Reality, New York: McGraw Hill. • Meier, C.A., 1975, “Psychosomatik in Jungscher Sicht,” in Experiment und Symbol, C.A. Meier (ed.), Olten: Walter Verlag, pp. 138–156. • ––– (ed.), 2001, Atom and Archetype: The Pauli/Jung Letters 1932–1958, Princeton: Princeton University Press. Translated by D. Roscoe. German original Wolfgang Pauli und C.G. Jung: ein Briefwechsel, Berlin: Springer, 1992. • Müller, T., and Briegel, H.-J., 2018, “A stochastic process model for free agency under indeterminism,” Dialectica, 72: 219–252. • Nagel, T., 1974, “What is it like to be a bat?,” The Philosophical Review, LXXXIII: 435–450. • Neumann, J. von, 1955, Mathematical Foundations of Quantum Mechanics, Princeton: Princeton University Press. German original Die mathematischen Grundlagen der Quantenmechanik, Berlin: Springer, 1932. 
• Noë, A., and Thompson, E., 2004, “Are there neural correlates of consciousness? (with commentaries and replies),” Journal of Consciousness Studies, 11: 3–98. • Oizumi, M., Albantakis, L., and Tononi, G., 2014, “From the phenomenology to the mechanisms of consciousness: Integrated information theory 3.0,” PLoS Computational Biology, 10(5): e1003588. • Paparo, G.D., Dunjko, V., Makmal, A., Martin-Delgado, M.A., and Briegel, H.-J., 2012, “Quantum speedup for active learning agents,” Physical Review X, 4: 031002. • Papaseit, C., Pochon, N., and Tabony, J., 2000, “Microtubule self-organization is gravity-dependent,” Proceedings of the National Academy of Sciences of the USA, 97: 8364–8368. • Pauen, M., 2001, Grundprobleme der Philosophie des Geistes, Frankfurt: Fischer. • Penrose, R., 1989, The Emperor’s New Mind, Oxford: Oxford University Press. • –––, 1994, Shadows of the Mind, Oxford: Oxford University Press. • Penrose, R., and Hameroff, S., 1995, “What gaps? Reply to Grush and Churchland,” Journal of Consciousness Studies, 2(2): 98–111. • Pessa, E., and Vitiello, G., 2003, “Quantum noise, entanglement and chaos in the quantum field theory of mind/brain states,” Mind and Matter, 1: 59–79. • Pitkänen, M., 2014, “New results about microtubuli as quantum systems,” Journal of Nonlocality, 3(1): available online. • Popescu, S., 2014, “Nonlocality beyond quantum mechanics,” Nature Physics, 10 (April): 264–270. • Popescu, S., and Rohrlich, D., 1994, “Nonlocality as an axiom,” Foundations of Physics, 24: 379–385. • Popper, K.R., and Eccles, J.C., 1977, The Self and Its Brain, Berlin: Springer. • Pothos, E.M., and Busemeyer, J.R., 2009, “A quantum probability model explanation for violations of rational decision theory,” Proceedings of the Royal Society B, 276: 2171–2178. • –––, 2013, “Can quantum probability provide a new direction for cognitive modeling?” Behavioral and Brain Sciences, 36: 255–274. 
• Pothos, E.M., Busemeyer, J.R., and Trueblood, J.S., 2013, “A quantum geometric model of similarity,” Psychological Review, 120: 679–696. • Pribram, K., 1971, Languages of the Brain, Englewood Cliffs: Prentice-Hall. • Primas, H., 2002, “Hidden determinism, probability, and time’s arrow,” in Between Chance and Choice, H. Atmanspacher and R.C. Bishop (eds.), Exeter: Imprint Academic, pp. 89–113. • –––, 2003, “Time-entanglement between mind and matter,” Mind and Matter, 1: 81–119. • –––, 2007, “Non-Boolean descriptions for mind-matter systems,” Mind and Matter, 5(1): 7–44. • –––, 2009, “Complementarity of mind and matter,” in Recasting Reality, H. Atmanspacher and H. Primas (eds.), Berlin: Springer, pp. 171–209. • –––, 2017, Knowledge and Time, Berlin: Springer. • Pylkkänen, P., 2015, “Fundamental physics and the mind – Is there a connection?,” in Quantum Interaction. 8th International Conference, H. Atmanspacher, et al. (eds.), Berlin: Springer, pp. 3–11. • Ricciardi, L.M., and Umezawa, H., 1967, “Brain and physics of many-body problems,” Kybernetik, 4: 44–48. • Sabbadini, S.A., and Vitiello, G., 2019, “Entanglement and phase-mediated correlations in quantum field theory. Application to brain-mind states,” Applied Sciences, 9: 3203. • Sahu, S., Ghosh, S., Hirata, K., Fujita, D., and Bandyopadhyay, A., 2013, “Multi-level memory-switching properties of a single brain microtubule,” Applied Physics Letters, 102: 123701. • Schwartz, J.M., Stapp, H.P., and Beauregard, M., 2005, “Quantum theory in neuroscience and psychology: a neurophysical model of mind/brain interaction,” Philosophical Transactions of the Royal Society B, 360: 1309–1327. • Schwarz, N., and Sudman, S. (eds.), 1992, Context Effects in Social and Psychological Research, Berlin: Springer. • Sechzer, J. A., K. W. Lieberman, G. J. Alexander, D. Weidman, and P. E. Stokes, 1986, “Aberrant parenting and delayed offspring development in rats exposed to lithium,” Biological Psychiatry, 21: 1258–1266. 
• Shimony, A., 1963, “Role of the observer in quantum theory,” American Journal of Physics, 31: 755–773. • Skrbina, D., 2003, “Panpsychism in Western philosophy,” Journal of Consciousness Studies, 10(3): 4–46. • Smart, J.J.C., 1963, Philosophy and Scientific Realism, London: Routledge & Kegan Paul. • Spencer-Brown, G., 1969, Laws of Form, London: George Allen and Unwin. • Squires, E., 1990, Conscious Mind in the Physical World, Bristol: Adam Hilger. • Stapp, H.P., 1993, “A quantum theory of the mind-brain interface,” in Mind, Matter, and Quantum Mechanics, Berlin: Springer, pp. 145–172. • –––, 1999, “Attention, intention, and will in quantum physics,” Journal of Consciousness Studies, 6(8/9): 143–164. • –––, 2006, “Clarifications and specifications. Conversation with Harald Atmanspacher,” Journal of Consciousness Studies, 13(9): 67–85. • –––, 2007, Mindful Universe, Berlin: Springer. • –––, 2015, “A quantum-mechanical theory of the mind-brain connection,” in Beyond Physicalism, E.F. Kelly et al. (eds.), Lanham: Rowman and Littlefield, pp. 157–193. • Stephan, A., 1999, Emergenz, Dresden: Dresden University Press. • Strawson, G., 2003, “Real materialism,” in Chomsky and His Critics, L. Anthony and N. Hornstein (eds.), Oxford: Blackwell, pp. 49–88. • Stuart, C.I.J., Takahashi, Y., and Umezawa, H., 1978, “On the stability and non-local properties of memory,” Journal of Theoretical Biology, 71: 605–618. • –––, 1979, “Mixed system brain dynamics: neural memory as a macroscopic ordered state,” Foundations of Physics, 9: 301–327. • Suarez, A., and Adams, P. (eds.), 2013, Is Science Compatible with Free Will?, Berlin: Springer. • Tegmark, M., 2000, “Importance of quantum decoherence in brain processes,” Physical Review E, 61: 4194–4206. • Tononi, G., and Koch, C., 2015, “Consciousness: Here, there and everywhere?” Philosophical Transactions of the Royal Society B, 370: 20140167. • Tversky, A., 1977, “Features of similarity,” Psychological Review, 84: 327–352. 
• Tversky, A., and Shafir, E., 1992, “The disjunction effect in choice under uncertainty,” Psychological Science, 3: 305–309. • Velmans, M., 2002, “How could conscious experiences affect brains?” Journal of Consciousness Studies, 9(11): 3–29. Commentaries to this article by various authors and Velmans’s response in the same issue, pp. 30–95. See also Journal of Consciousness Studies, 10(12): 24–61 (2003), for the continuation of the debate. • –––, 2009, Understanding Consciousness, London: Routledge. • Vitiello, G., 1995, “Dissipation and memory capacity in the quantum brain model,” International Journal of Modern Physics B, 9: 973–989. • –––, 2001, My Double Unveiled, Amsterdam: Benjamins. • –––, 2002, “Dissipative quantum brain dynamics,” in No Matter, Never Mind, K. Yasue, M. Jibu, and T. Della Senta (eds.), Amsterdam: Benjamins, pp. 43–61. • –––, 2012, “Fractals as macroscopic manifestation of squeezed coherent states and brain dynamics,” Journal of Physics, 380: 012021. • –––, 2015, “The use of many-body physics and thermodynamics to describe the dynamics of rhythmic generators in sensory cortices engaged in memory and learning,” Current Opinion in Neurobiology, 31: 7–12. • Wang, Z., Busemeyer, J., Atmanspacher, H., and Pothos, E., 2013, “The potential of quantum theory to build models of cognition,” Topics in Cognitive Science, 5: 672–688. • Wang, Z., Solloway, T., Shiffrin, R.M., and Busemeyer, J.R., 2014, “Context effects produced by question orders reveal quantum nature of human judgments,” Proceedings of the National Academy of Sciences of the USA, 111: 9431–9436. • Weizsäcker, C.F. von, 1985, Aufbau der Physik, München: Hanser. • Wendt, A., 2015, Quantum Mind and Social Science, Cambridge: Cambridge University Press. • Wheeler, J.A., 1994, “It from bit,” in At Home in the Universe, Woodbury: American Institute of Physics, pp. 295–311, references pp. 127–133. • Whitehead, A.N., 1978, Process and Reality, New York: Free Press. 
• Wigner, E.P., 1967, “Remarks on the mind-body question,” in Symmetries and Reflections, Bloomington: Indiana University Press, pp. 171–184. • –––, 1977, “Physics and its relation to human knowledge,” Hellenike Anthropostike Heaireia, Athens, pp. 283–294. Reprinted in Wigner’s Collected Works Vol. VI, edited by J. Mehra, Berlin: Springer, 1995, pp. 584–593. • Wittek, P., 2014, Quantum Machine Learning: What Quantum Computing Means to Data Mining, New York: Academic Press. • Wundt, W., 1911, Grundzüge der physiologischen Psychologie, dritter Band, Leipzig: Wilhelm Engelmann. Inspiring discussions on numerous topics treated in this paper with Guido Bacciagaluppi, Thomas Filk, Hans Flohr, Stuart Hameroff, Hans Primas, Stefan Rotter, Henry Stapp, Giuseppe Vitiello, and Max Velmans are gratefully acknowledged. Copyright © 2020 by Harald Atmanspacher <atmanspacher@collegium.ethz.ch>
Wednesday, March 27, 2019 Nonsense arguments for building a bigger particle collider that I am tired of hearing (The Ultimate Collection) I know you’re all sick of hearing me repeat why a larger particle collider is currently not a good investment. Trust me, I am sick of it too. To save myself some effort, I decided to collect the most frequent arguments from particle physicists together with my responses. You’ve heard it all before, so feel free to ignore. 1. The “Just look” argument. This argument goes: “We don’t know that we will find something new, but we have to look!” or “We cannot afford to not try.” Sometimes this argument is delivered with a poetic attitude, like: “Probing the unknown is the spirit of science” and similar slogans that would do well on motivational posters. Science is exploratory, and to make progress we should study what has not been studied before, true. But any new experiment in the foundations of physics does that. You can probe new regimes not only by reaching higher energies, but also by reaching higher resolution, better precision, bigger systems, lower temperatures, less noise, more data, and so on. No one is saying we should stop explorative research in the foundations of physics. But since resources are limited, we should invest in experiments that bring the biggest benefit for the projected cost. This means the higher the expenses for an experiment, the better the reasons for building it should be. And since a bigger particle collider is presently the most expensive proposal on the table, particle physicists should have the best reasons. “Just look” certainly does not deliver any such reason. We can look elsewhere for lower cost and more promise, for example by studying the dark ages or heavy quantum oscillators. (See also point 18.) 2. The “No Zero Sum” argument. “It’s not a zero sum game,” they will say. 
This point is usually raised by particle physicists to claim that if they do not get money for a larger particle collider, this does not imply a similar amount of money will go to some other area in the foundations of physics. This argument is a badly veiled attempt to get me to stop criticizing them. It does nothing to explain why a particle collider is a good investment. 3. Everyone gets to do their experiment! This usually comes up right after the No-Zero-Sum argument. When I point out that we have to decide what is the best investment into progress in the foundations of physics, particle physicists claim that everyone’s proposal will get funded. This is just untrue. Take the Square Kilometer Array as an example. Its full plan is lacking about $1 billion in funding, and the scientific mission is therefore seriously compromised. The FAIR project in Germany likewise had to slim down its aspirations because one of its planned detectors could not be accommodated in the budget. The James Webb Space Telescope just narrowly escaped a funding limitation that would have threatened its potential. And that leaves aside those communities which do not have sufficient funding to even formulate proposals for large-scale experiments. (See also point 19.) Decisions have to be made. Every “yes” to something implies a “no” to something else. I suspect particle physicists do not want to discuss the benefit of their research compared to that of other parts of the foundations of physics because they know they would not come out ahead. But that is exactly the conversation we need to have. 4. Remember the Superconducting Super Collider! Yes, the Superconducting Super Collider (SSC). I remember. The SSC was planned in the United States in the 1980s. It would have reached energies somewhat exceeding those of the Large Hadron Collider, and somewhat below those of the now planned Future Circular Collider. Whatever happened to the SSC? 
What happened is that the estimated cost ballooned from $5.3 billion in 1987 to $10 billion in 1993, and when the US Congress finally refused to foot the bill, particle physicists collectively blamed Philip Anderson. Anderson is a Nobel Prize winning condensed matter physicist who testified before the US Congress in opposition to the project, pointing out that society doesn’t stand to benefit much from a big collider. While Anderson’s testimony certainly did not help, particle physicists clearly use him as a scapegoat. Anderson-blaming has become a collective myth in their community. But historians largely agree the main reasons for the cancellation were: (a) the crudely wrong cost estimate, (b) the end of the cold war, (c) the lack of international financial contributions, and (d) the failure of particle physicists to explain why their mega-collider was worth building. Voss and Koshland, in a 1993 editorial for Science, summed the latter point up as follows: “That particle physics asks questions about the fundamental structure of matter does not give it any greater claim on taxpayer dollars than solid-state physics or molecular biology. Proponents of any project must justify the costs in relation to the scientific and social return. The scientific community needs to debate vigorously the best use of resources, and not just within specialized subdisciplines. There is a limited research budget and, although zero-sum arguments are tricky, researchers need to set their own priorities or others will do it for them.” Remember that? 5. It is not a waste of money This usually refers to this attempted estimate to demonstrate that the LHC has a positive return on investment. That may be true (I don’t trust this estimate), but just because the LHC does not have a negative return on investment does not mean it’s a good investment. For this you would have to demonstrate that it would be difficult to invest the money in a better way. 
Are you sure you cannot think of a better way to invest $20 billion to benefit mankind? 6. The “Money is wasted elsewhere too” argument. The typical example I hear is the US military budget, but people have brought up pretty much anything else they don’t approve of, be that energy subsidies, MP salaries, or – as Lisa Randall recently did – the US government shutdown. This argument simply demonstrates moral corruption: The ones making it want permission to waste money because waste of money has happened before. But the existence of stupidity does not justify more stupidity. Besides that, no one in the history of science funding ever got funding for complaining that they don’t like how their government spends taxes. The most interesting aspect of this argument is that particle physicists make it, even make it in public, though it means they basically admit their collider is a waste of money. 7. But particle physicists will leave if we don’t build this collider. Too bad. Seriously, who cares? This is a profession almost exclusively funded by taxes. We don’t pay particle physicists just so they are not unemployed. We pay them because we hope they will generate knowledge that benefits society, if not now, then some time in the future. Please provide any reason that continuing to pay them is a good use of tax money. And if you can’t deliver a reason, then I think we can very well let them go, thank you. 8. But we have unsolved problems in the foundations of physics. This argument usually refers to the hierarchy problem, dark matter, dark energy, the baryon asymmetry, quantum gravity, and/or the nature of neutrino masses. The hierarchy problem is not a problem, it is an aesthetic misgiving. For the other problems, there is no reason to think a larger collider would help solve them. I have explained this extensively elsewhere and don’t want to go into the question of what problems make for promising research directions here. If you want more details, read, e.g., this or this, or my book. 9. 
So-and-so many billions is only such-and-such a tiny amount per person per day. I have no idea what this is supposed to show. You can do the same exercise with literally any other expense. Did you know that for as little as a tenth of a cent per year per person I could pay my grad student? 10. Tim Berners-Lee invented the WWW while employed at CERN. By the same logic we should build patent offices to develop new theories of gravitation. 11. It may lead to spin-offs. The example they often bring up is contributions to WiFi technology that originated in some astrophysicists’ attempt to detect primordial black holes. In response, allow me to rephrase the spin-off argument: Physicists sometimes don’t waste all the money invested into foundational research, because they accidentally come across something that’s actually useful. That wasn’t what you meant? Well, but that’s what this argument says. If these spin-offs are what you are really after, then you should invest more into data analysis or technology R&D, or at least try to find out which research environments are likely to give rise to spin-offs. (It is presently unclear how relevant serendipity is to scientific progress.) Even in the best case this may be an argument for basic research in general, but not for building a particle collider in particular. 12. A big particle collider would benefit many tech industries and scientific networks. Same with any other big investment into experimental science. It is not a good argument for a particle collider in particular. 13. It will be great for education, too! If you want to invest into education, why dig a tunnel along with it? 14. Knowledge about particle physics will get lost if we do not continue. We have scientific publications to avoid that. If particle physicists worry this may not work, they should learn to write comprehensible papers. Besides, it’s not like particle physicists would have no place to work if we do not build the next mega-collider. 
There are more than a hundred particle accelerators in the world; the LHC is merely the largest one. Also note that the LHC is not the only experiment at CERN. So, even if we do not build a larger collider, CERN would not just close down.

15. Highly energetic particle collisions are the cleanest way to measure the physics of short distances. I tend to agree. This is what originally sparked my interest in high energy particle physics. But there is currently no reason to think that the next breakthroughs wait at shorter distances. Times change. The year is 2019, not 1999.

16. Lord Kelvin also said that physics was over, and he was wrong. Yeah, except that I am the one saying we could do better things with $20 billion than measuring the next digits of some constants.

17. Particle accelerators are good for other things. The typical example is that beams of ions can treat certain types of cancer better than the more common radiation therapies. That’s great of course, and I am all in favor of further developing this technology to enable the treatment of more patients, but this is an entirely different research avenue than building a larger collider.

18. You do not know what else we should do. Sure I do. I wrote a whole book on this: In the foundations of physics, we should focus on those areas where we have inconsistencies, either between experiment and theory, or internal inconsistencies in the theories. Examining such inconsistencies is what has historically led to breakthroughs. We currently have such situations in the following areas:

(a) Astrophysical and cosmological observations attributed to dark matter. These are discrepancies between theory and data which should be studied more closely, until we have pinned down the theory. Some people have mistakenly claimed I am advocating more direct detection experiments for certain types of dark matter particles. This is not so. I am saying we need better observations of the already known discrepancies.
Better sky coverage, better resolution, better stats. If we have a good idea what dark matter is, we can think of building a collider to test it, if that turns out to be useful.

(b) Quantum Gravity. The lack of a theory for quantized gravity is an internal theoretical inconsistency. We know it requires a solution. A lot of physicists are not interested in experimentally testing this because they think it is not possible. I have previously explained here and here why that is wrong.

(c) The foundations of quantum mechanics: The measurement postulate is inconsistent with reductionism. There is basically no phenomenological or experimental exploration of this.

Needless to say, I think my argument for how to break the current impasse is a good one, but I do not really expect everyone to just agree with it. I am primarily putting this forward because it’s the kind of discussion we should have: We have not made progress in the foundations of physics for 40 years. What can we do about it? At least I have an argument. Particle physicists do not.

19. But you do not have any other worked-out proposals. The proposal for the FCC was worked out by a study group over 5 years, supported by 11 million Euro. Needless to say, I cannot, as a single person and in a few weeks of time, produce comparable proposals for large-scale experiments. Expecting me to do so is unreasonable.

20. But it will do all these things. Particle physicists like to point towards their 716-page report that summarizes what they could do with the FCC. But, look, no one doubts that you can do something with $20 billion. The question is whether what you can do is worth the investment. The report does not address this point at all.

1. Final sentence of 16.b) "I have previously written explained here and here why that is wrong." - I guess links are missing.

2. Hi Sabine, I totally appreciate what you're doing, and it even opened my eyes to the systematic errors that scientists make.
Please don't let the cargo cult followers silence you :) But, as a reader of your blog, I kinda miss the variety of the content that you published some time ago. Like reviews of new (or old) papers, introductions to new (and old) theories, etc. I hope at some point you will get back to digging out such papers and theories, and presenting them... Best regards

1. Michael, Yes, I am aware of this :( I hope to get back to "normal" soon. I have several interesting papers I want to write about, but I am severely behind.

3. "The foundations of quantum mechanics: The measurement postulate is inconsistent with reductionism. There is basically no phenomenological or experimental exploration of this." At the start of the Quantum Information/Computing/Communication industry, it was very much felt that such things were experimental foundations of physics, and I think they were instrumental in making QM seem much more familiar than it felt before, say, 2000, whether we can say we now better understand measurement or not. By now many people working on such things would hate to be thought so impractical, and therefore probably wasting $billions, but the early runners went to Foundations of QM conferences and did care about such things. If quantum computation doesn't pan out quickly, perhaps we'll be treated to stories of how the many billions spent led to better understanding of the foundations of QM.

16. (d) The foundations of interacting QFT. We don't understand interacting QFT. [But you may remember that I'm as much a broken record on this as other people are about their enthusiasms.]

1. Peter, Yes, you are right. I should have included QFT in that. I usually do, but somehow I forgot. My bad.

4. "By the same logic we should build patent offices to develop new theories of gravitation." Not the worst logic of the arguments considered.

5. Before you get annoyed about humanity spending money to advance fundamental knowledge, consider this...
Google make $4 billion/month from people clicking on their silly little ads. Give science a break. Give them the money. Let's look inside the proton. Unless of course you prefer to click on ads.

1. @Richard "The Hossenfelder Scale" for measuring crackpots is way better than Baez's Crackpot Index ...

6. For all who are discouraged about building the FCC (or CLIC) after reading the arguments above, I recommend reading the interview with Nima Arkani-Hamed; it will cheer you up again! Where there is hope, there is life!

7. If anyone can flesh out just a little what Sabine means by "the measurement postulate is inconsistent with reductionism," I'd be grateful. I assume this is a problem I've heard stated in other terms, and I'm just failing to translate it into this phrasing. My failure, not Sabine's.

1. Dave M, The point is that we would like our measurement instruments to be describable, in principle, by quantum mechanics. In that case, the measurement process should not require an additional assumption: all the details of the measurement process should be explained by QM without an additional measurement postulate. If that is not so -- i.e., if the action of measurement instruments cannot be explained by QM alone -- then we are entitled to ask what novel physical process is going on in the measurement process that is not explained by quantum mechanics. Weinberg explained this quite clearly in Sabine's interview in her book. See also his discussion in the second edition of his Lectures on Quantum Mechanics: "If quantum mechanics applies to everything, then it must apply to a physicist’s measurement apparatus, and to physicists themselves. On the other hand, if quantum mechanics does not apply to everything, then we need to know where to draw the boundary of its area of validity. Does it apply only to systems that are not too large? Does it apply if a measurement is made by some automatic apparatus, and no human reads the result?"
The ultimate issue is whether (human?) consciousness somehow is needed to bring about a true measurement. Wigner suggested just that in his famous essay in The Scientist Speculates. Of course, if it were ever shown that consciousness is integral to the measurement process, then we would be obligated to turn our attention to understanding consciousness, which would certainly be a change of direction for physics! It seems reasonable that physicists should at least try to give a fully complete physical exposition of QM without invoking consciousness. Weinberg sums up by alluding to perhaps the oddest aspect of this whole matter: "Indeed, many physicists are satisfied with their own interpretation of quantum mechanics. But different physicists are satisfied with different interpretations." So, if you think you know the "obvious" answer to Weinberg's questions, be aware that many physicists agree that there is an "obvious" answer, but they disagree as to what that "obvious" answer is. Dave Miller

2. PhysicistDave, I totally endorse what you have written above. I guess quite a lot of non-HEP scientists feel that there is unfinished business at the level of ordinary QM, and indeed that that may be truly fundamental. As you point out, Schrödinger's equation properly applies to every part of life - not just a few particles that happen to be under study. Superficially those equations would imply a reality consisting of an ever more entangled wave function encompassing different possible situations superimposed. The possible relationship between QM and consciousness clearly interests Roger Penrose, so it isn't as though this idea has been 'settled'; it has just been put to one side because it is embarrassing!

3. Physicist Dave, QM is a mathematical method for describing the statistical outcomes of otherwise unobservable physical processes. The math neither describes nor explains those processes.
Why then, should we expect a complete physical exposition of QM (with or without consciousness)?

4. Bud Rap wrote, "QM is a mathematical method for describing the statistical outcomes of otherwise unobservable physical processes." That makes QM sound like classical statistical mechanics, which I think isn't fair. First of all, QM computes the wave function, which is *not* in itself a probability distribution - not least because it can take on negative or complex values. QM isn't creating a statistical outcome of a deeper theory (although OK it is an approximation to QFT). You only get probabilities when you evaluate Ψ Ψ*. Surely physics should be more than obtaining some equations that seem to describe reality, shouldn't it also provide an explanation of what it is that the maths relates to?

5. "QM isn't creating a statistical outcome of a deeper theory (although OK it is an approximation to QFT)." Actually, it might as well be; it's just that we don't know that deeper theory yet. And I think even QFT doesn't fix that - you get a distribution over configurations of classical fields instead of over configurations of classical point-like particles, but the 'statistical distribution' effect remains.

6. David Bailey, At the interface between QM and observation, statistics is all you get. That QM arrives there via a different set of formalisms, necessitated by the peculiar circumstances of the quantum scale, doesn't alter the analogous nature of the outcome. It certainly should! My point was only that you cannot expect to obtain reasonable physical explanations from mathematical formalisms that aren't constructed on reasonable qualitative foundations.

7. Simone said, "Actually, it might as well be; it's just that we don't know that deeper theory yet." Well, unless there are an infinite number of theories, each depending on the one below, the process has to stop somewhere.
My gut feeling is that QM is special - it says that fundamentally we have different possibilities (realities if you like) that evolve and interfere with each other. This feels more fundamental than particles. So I would rate QM as fundamental, and since QM cannot coexist with GR, I'd bet that GR has to change.

8. Bee, has Moriond 2019 found any BSM physics signals? I understand possible lepton flavor violations.

1. Moriond is really only the occasion on which rumors become official. If there were any BSM breakthroughs in the data analysis done so far, we'd have heard of it by now.

2. The most interesting physics is the measurement of CP violations in decays of D0 vs bar-D0.

3. Yes, that's in the popular news. Has Moriond released new bounds on SUSY such as gluinos and squarks? Given Moriond hasn't seen SUSY in the full data set, it seems the likelihood of a 5-sigma discovery of SUSY is low.

4. So far evidence for s-tau or s-top etc. is at best around 2-sigma. It has not risen to the eyebrow-raising level of 3-sigma. The most recent thing I have seen is

9. Doctor Hossenfelder, In response to 17, having pointed out that you are only one person, the criticism is not relevant because there already exists a wealth of readily available alternatives. To suggest a few (sorry, just my personal interests): Fusion Energy; Carbon Removal from the Atmosphere; Efficient Storage of Renewable energy sources during times of over-production; higher temperature superconductivity; Neurobiological Research; Cognitive and neurological health; Structures encouraging responsibility and objectivity in leadership.

1. I don't think diverting (even more) funds from foundations of physics research into engineering research (and a bit of biology and medical sciences) is the right way to go (and I don't think that's what Sabine proposes, I trust she'll correct me if I misinterpreted her).
Those $20 billion should stay in the same field of research, but funding 5-100 promising experiments instead of one mega-project with few to no chances of getting a breakthrough. Or even a different huge project if you have the justification. Biology, biomedicine and engineering are already attractive research fields for which funding, private and public, is *relatively* easy to come by. Physics (especially foundations) is extremely hard to sell to the public and the chances of private funding are close to nil. Please, do not advocate for moving funds away from physics, we *need* physics research.

2. Javier, Sabine has never seemed to me to suggest diverting funds from physics research. She presents arguments that, in upgrading the LHC, these funds are not being allocated for convincing objectives. Intelligent probing of the unknown, including in the field of theoretical physics, should always be supported. So should building on existing knowledge to directly address massive known problems. Tax-supported funding is not unlimited; worthy ideas in all fields die daily for their lack. No single individual can be expected to develop programs which solve all the associated problems. (17.) In supporting arguments for upgrading the LHC by related applied science, e.g. in superconducting magnets, the question simply arises whether the known value of advancing applied science should be more directly supported until physics offers programs with a higher probability of definitive results than the LHC. jmo. Bert Kortegaard

3. Yes, I'm aware Sabine wasn't suggesting that; you were, though. In my experience, Applied Science is just a fancy way of saying engineering research and, as I said, I don't think we should transfer money from the much-in-need-of-funding foundations of physics into the bad-but-still-not-nearly-as-bad field of engineering research.
Superconducting magnets are being actively researched by public and private interests (plenty of direct applications) and although you can always use more funding, they have plenty of opportunities to get it (same with your other proposals). Foundations of physics (QFT, Cosmology, Quantum Gravity, etc.) get nearly 0 funding from the private sector because of their lack of immediate applicability and, because of the obscurity of the topics, it's also a hard sell to the public (at the risk of being wrong, I'm guessing they are the worst funded field within the natural sciences; probably only social scientists envy them). That's why, while I agree that we should fund something else, I believe the funds should stay in the field. And for full disclosure, I say this precisely from the point of view of someone who does engineering research for a living... in the private sector. Find a theoretical physicist who can say the same (and is still doing fundamental research).

4. Javier, thanks for your comments. I thought what I was suggesting was obvious from what I wrote, but I apologize to anyone who misinterpreted it as you have. Applied Science starts where science is understood well enough to build on it to produce useful things. At its most interesting it includes developing new techniques and tools, but those of us who practice it do not ordinarily describe that as research. My blog includes a link to some of my own work in this field. Lest this should become off-topic, my blog also contains my email.

10. "...Google make $4 Billion/month from people clicking on their silly little ads. Give science a break......" God, I hope that asinine comment is an attempt at humour... but I have a feeling it's not.

11. On "what novel physical process is going on in the measurement process": I've always assumed it was some sort of Darwinian-like selection-of-fittest-history (in a sum-over-histories formulation of QM). But this process is apparently an additional "postulate" to QM.

12.
I love your blog and totally agree that a "wrapping up" of this discussion was due. For that reason, I would suggest a change in argument 6: "With it, THESE particle physicists..." → "That THE particle physicists MAKING it..." Only the ones making the argument suffer from moral corruption. Many others just think it isn't a waste of money, they just have a different opinion (generalization). It may help avoiding unwanted 'rants'.

1. Ward, I think this is clear from the context, but I nevertheless changed that sentence along the line you suggest.

13. As a taxpayer, I think we should not spend billions of € on an even bigger collider - instead we should invest the money in exploring and pondering where we failed in our beautiful Taka-Tuka theories during the last half century, and consider new ways of thinking about the fundamental laws of physics!

14. As to point 7, maybe NASA's Space Launch System could use the extra physicists if no new collider is built. They could move from one project with no results to another that is building a rocket that will never launch, because the important thing is to have jobs in all fifty states, not actually get anything done. As Rep. Aderholt said about SLS: "The SLS and Orion programs are, of course, key to the health of our national aerospace supplier base, and it's really helped to really put a new boost of energy into the suppliers in all the 50 states following the retirement of the space shuttle."

15. Bee, do the arguments in this post apply to the HE-LHC with 16 tesla magnets, an estimated ~$7 billion upgrade to the LHC's 8.33 tesla magnets in its 27 km tunnel? I would argue that for the price tag, exploring between 14 TeV and 27 TeV for new physics is certainly a justified upgrade. I wonder whether it'd be better to simply forget about the HL-LHC and instead invest that money into the HE-LHC. And by the time 16 tesla magnets are ready, perhaps 24 tesla or even 32 tesla magnets will be in development.
So no new tunnel will be built, the 27 km is reused, but superconducting magnet technology is improved over decades.

16. "IF" dark matter is made of particles that only interact through gravity, how can you study it if not by the missing energy-momentum of high energy collisions?

1. @Daniel de França MTd2; Dark matter necessarily gravitates with other matter; it can be studied astronomically, through gravitational lensing and perhaps by studying galaxy dynamics in a wide range of galaxy sizes, or a range of galaxy proximities. What's happening with the dark matter in galactic collisions? Let's build a $20B super high resolution space telescope, or 20 $1B telescopes we can gang together in an array. Let's study it.

2. Dr Castaldo, Yes but - there's always a but! The recent paper "Probing dark matter particles at CEPC" by Zuowei Liu and colleagues illustrates the possibility of using high energy colliders to investigate various dark matter models. The point being that thorough investigation of a phenomenon requires multiple lines of attack. This means making the best of the available options - which are often not mutually exclusive. Will collider funding be diverted to astronomy? There's currently no reason to suggest this would be the case.

3. Dr Castaldo, This is like studying electrons with circuits. You won't be able to infer what dark matter is, but just its collective properties. That is, you will just know what a current looks like. You won't get insight into what dark matter is.

17. Hi Sabine, you state: The hierarchy problem is not a problem. Maybe, but if you find a solution you will surely have surprises - surprises that the current foundations may not survive.

18. "The measurement postulate" I feel like the justification to question the "Copenhagen interpretation" (you know, the one they still teach undergrads) has been around and readily accessible for at least 8 years ( The problem seems to be that none of the alternative hypotheses (can we call them that?)
have been able to gather the doubters together and gain traction. This business of questioning whether "consciousness" is required for things to be "measured" always seemed daft to me. Isn't "superposition" a statement about the correlation or non-correlation of two quantum systems, not a statement about a single quantum system? I.e., until I correlate my detector with the superimposed system (by shooting lasers between them, I guess, would be typical), the detector isn't 'touching' / hasn't 'touched' the other system and just doesn't contain information about the superimposed system yet? So there is never a funny magic state; there is just a situation where two systems don't currently share any information, so querying either of them about the other is nonsensical till you 'connect' the two systems (fire the lasers, take the measurement, open the box, throw the detector at the test article... etc.). Obviously I'm out of my depth, please correct my childish simplifications, you smart physics folk! Thank you for the help...

19. On 17, "you do not know what else to do": I understand you DO know, but --- since when is knowing the solution to a problem necessary to know that there IS a problem? If I go to the vet because my dog is limping, I don't go there knowing what should be done about it. Making it known that a problem exists is the first step; getting agreement on that, and detailing the nature of the problem, come next. Developing a plan of attack is well down the list.

1. It is interesting to me (re the video link above: The Quantum Conspiracy, 1,571,119 views, GoogleTechTalks) that some physicists like an "interpretation" that says "you don't really exist". It seems to me to be a part of the curious antimaterialist turn (we are all just "information" or something like that) among physicists, at least as indicated by the current articles published for the general reader.

2.
@Philip Thrift, voices that advocate for "antimaterialism" are perhaps more shrill, but for the general reader you could try Philip Ball's "Beyond Weird", which deflates the weirdness of QM in a way that IMO fairly accurately reflects the practical "let's use QM" perspective of working quantum computing/information, condensed matter, and most working physicists. His Royal Institution lecture gives a fairly good sense of the position he suggests in that book. You may already know that in philosophy anti-realism is as often or more often anti-realism about theories than it is an anti-materialism or anti-realism about the world and our experience of it. There will be some continuity between our current theories and new theories, so that electrons will exist in *some* form in future theories (with careful discussions of how the electron is both equivalent and not quite equivalent to new concepts), but they or other concepts may be deprecated, so to speak, because other theoretical tools and concepts will be devised that are just more effective. An absolute commitment even to such an apparently robust theoretical concept as the electron may, or may not, turn out to be ill-advised, but an appropriate slight hesitancy to say of every part of the standard model of particle physics that it is "emphatically, finally real" does not demand any hesitancy in our belief in and engagement with the world as a whole.

3. I have read articles about Philip Ball's book (e.g. Peter Woit's), but not the book, I admit. My own view has been some combination of the Path Integral (or Sum-Over-Histories) and (some version of) Quantum Darwinism: PI+QD. But that's as "real" as I get. :)

4. FWIW, the (very popular) idea that the Path Integral (a generating function for time-ordered vacuum expectation functionals) somehow makes quantum theory classical (paths!)
is IMO problematic, because it uses time ordering to sweep the noncommutative algebraic structure under the table, whereas noncommutative measurements are essential for the empirical success of QM/QFT. If you say "(some version of) QD", I take you to be invoking decoherence in some way, which one has to have formal worries about, but, as you know, it works more-or-less, and certainly for all practical purposes. My own view has become that QM and QFT are (stochastic) signal analysis formalisms, for which we can say, loosely, that incompatible measurements are mathematical consequences of using classical representations of the Heisenberg algebra, which is closely connected with Fourier analysis.

5. On the PI, I just follow Fay Dowker (@DowkerFay, Mar 26): "This was an enjoyable discussion. I argued that there is one world, not many, in quantum theory based on the Path integral or Feynman sum-over-histories." On "Darwinian" selection: Only one history survives. The others die. Poor things.

20. Hi, Sabine. Nice discussion. I agree with you as to a larger collider. I just find it interesting, the references to 'Tesla', apparently without knowing what it was (is). Anyway, keep up the good work - it is good. All Love,

21. re: "Nonsense arguments for building a bigger particle collider that I am tired of hearing (The Ultimate Collection)" Bee, the question I have about your arguments in this post is this: CERN has earmarked a several-billion-dollar upgrade for the LHC to the HL-LHC, to increase its luminosity. Are the billions of dollars spent to upgrade luminosity by a factor of 2 to 10 a worthwhile use of money? What about $7 billion more to upgrade the LHC to the HE-LHC? The HL-LHC and HE-LHC upgrades cost billions, but reuse the same 27 km tunnel. It seems to me that if we apply your arguments, we shouldn't bother upgrading the luminosity of the LHC; after all, it is still going to a CM energy of 14 TeV, and it seems a 5-sigma discovery at this point is moot.

22.
This was a great thing to read right after opening my bottle of wine :)

23. RE "What should we do?" Martin Harwit wrote a very interesting book in 1981 called "Cosmic Discovery". In it, he shows the amazing role played by serendipity in fundamental discoveries, and tries to get some understanding of how to go forward based on what has led to the current state of knowledge. I think you would enjoy it. This post reminded me of it.

24. What do you think of the latest version of string theory called F-theory? I think it's a four-letter word they can't say in public.

25. The intense discussion suggests the collider culture has yet to be buried and given up. I have pretty good reasons to believe that we require new ideas about such experimental research, particularly in relation to the ultimate nature of existence and of our realities. It cannot be argued that we have reached the end of all possibilities. However, what I have in mind concerns the ultimate nature of forces and particles, which if known would open up a new world of physics.

26. Sabine, It seems to me that several of your arguments boil down to "Cost matters!", contrary to your opponents who are, in effect, arguing "No, cost does not matter!" I came close to majoring in economics instead of physics, and I have trouble grasping the mind-set of anyone who truly believes that cost does not matter, but this does seem to be their perspective. Frankly, I think the subtext of your opponents' arguments is, in essence, "We high-energy physicists are just more important than other people, and doing high-energy physics is just more important than what other people do!" No one will say this quite so bluntly, but I am not sure any of us HEP physicists are completely immune to such hubris. After all, we chose to go into HEP because we really did think it was important. Of course, scientists should strive for rationality and objectivity, but, obviously, we too are all-too-human! All the best,

1.
Dave, I am not sure if they actually believe that cost does not matter or whether they just argue this way because they know it's their only chance. Either way, though, what surprises me is that they would even make such an argument, if not explicitly, then implicitly by refusing to explain why the expenses are justified. Well, yes, everyone thinks that their occupation is the most important. I don't blame anyone for that. But most people understand at least that others might not share that impression.

2. I find it extraordinary that fundamental physics is now utterly divorced from the rest of science, or anything that matters more widely. HEP doesn't seem at all likely to discover a foundational truth - but it is always possible to throw yet more money at it to achieve higher energy collisions, and maybe some more 'particles'. That process will only stop when more people like Sabine put their feet down!

27. Hi Sabine. Some of the latest tests give credence to your argument (high intensity laser / mirror trap)... (nanoparticles)... money can be better spent, on smaller scales. All Love,

28. Every ten years the space astronomers get together with NASA and create a new list of prioritized space missions. There is never enough money to fund everything, and as science changes priorities change, and as technology changes capabilities change. It's sort of what Erdős used to do with mathematical problems: he'd assign a cash bounty, higher for the problems he thought would be most fruitful. The problem with particle physics is that the price is getting so high, even in comparison with the costs of space missions, that funding even one item is just too expensive. No one has been thinking about a Plan B, C or D. My guess is that we'll start seeing the real spinoffs from the LHC when physicists start leaving the field.

29. Ms Hossenfelder, I personally think your position against the larger particle collider is very relevant.
But I don't think that your arguments can change anything, and here is why: the larger collider has become a collective narrative of the particle physics community. Specialists call these "intersubjective narratives"; they are the root of our human society, and once they have got some traction there is no way to kill them by questioning their soundness. By the way, most of them are not built on RATIONAL arguments. Think for example of the moon race in the 1960s. There was no rationale for making such a costly program without any other purpose than self-pride, but it became an intersubjective narrative of the American people and as such impossible to cancel... until the mission succeeded and we could see there wasn't anything useful to get from it. If you do want to prevent that project, there are in my view only two ways: 1/ leave the scientists and go to the politicians who will ultimately give the money. They most probably are not in the narrative of the particle physics community and could listen to the voice of reason. But don't expect that the money not spent on the super collider will go in any massive way elsewhere in physics; 2/ build another narrative on another subject and try to give it traction. To do that you have to get massive support within the physics community, not just on criticizing the new collider idea but, more importantly, on one and only one other project which could get most of the money that could go to the collider. That does not seem fair to all the other good ideas which could benefit from funding? Yes, but life is not fair.

1. Franck, I think what you mean by "rational" is really "scientific". I agree that there are reasons besides the scientific ones that make people spend money on large science projects. I have nothing to say about those, so I don't. But I wouldn't call them irrational. You seem to be misunderstanding my intention though. I am not writing to prevent something from happening. I hope to make something happen.
I hope that physicists who work in the foundations think about what has gone wrong and how to make progress. Blindly throwing money at the problem will not solve it. You seem to expect me personally to come up with a solution and then convince people to support me. This does not make any sense. Of course I have my own convictions about what is the right thing to do, but I don't think I should be the one making decisions. I merely want physicists to use their brain rather than blindly continuing down dead end streets. It's not about fairness, it's about progress. 2. Franck; Sputnik was launched in October, 1957. To Americans, it was widely considered a dire threat. Russia then put the first man in space four years later. Kennedy needed a response to a potential militarization of space; there was a perceived necessity to not let Russia seize "the high ground". Kennedy considered a number of potential operations, but "putting a man on the moon" before Russia did seemed the most likely to succeed, with the most inspirational content to get public backing. There were very rational ideas behind this program, even if the ultimate goal was just a symbolic finish line. The point was to develop the science and technology and capabilities of the space age, to match the same being developed by a hostile power (the Cold War was 14 years old at this time), and this is what was accomplished. There were many entirely rational reasons to "go to the moon", including the rational decision to appeal to emotions in building public support. Because, as we Americans are currently proving, and other countries have proven time and again throughout recorded history, rationality is definitely not the primary decision making tool of our citizens. 3. @Franck: That "life is not fair" is not an excuse for taking action to make life more unfair; the primary value of human intelligence has surely been to make us far less victims of the random cruelties of life and nature, not to exacerbate them. 
The solution to one swindle is not another swindle, it is getting people to recognize when they are being swindled. 30. With respect to the discussions on the foundations of quantum mechanics and measurement I write this below. Probability theory for statistically independent events is L^1 in that probabilities add linearly and there are no correlations between probabilities. Quantum mechanics is L^2 in that amplitudes add linearly, but the "distance," or really most importantly the distance squared as probabilities, is the sum of the modulus square of amplitudes. This makes statistical mechanics, or a theory based on pure classical probability, fundamentally different from quantum mechanics. The theory of convex sets is such that for a set with measure L^p, with elements x, and another with L^q, with elements y, Hölder's inequality gives ||x||_p × ||y||_q ≥ sum_i |x_i y_i| for 1/p + 1/q = 1. This means there is a duality between convex sets with these values of p and q defining these norms. For p = 1 this means q → ∞, and for p = 2 the dual is also q = 2. This is a part of how quantum mechanics and spacetime, with its Gaussian metric distance, are dual to each other. The dual to pure statistical systems with q → ∞ means there are no probabilities at all, and this is a completely deterministic system such as Newtonian mechanics. A measurement occurs where there is a decoherence of a quantum wave, and the trace elements of the density matrix define a classical probability distribution. The theory of decoherence permits us to understand how a wave function is reduced, because the superposition or entanglement phase of that system is transferred to a reservoir of states, say the needle state of a measuring apparatus, and the system is reduced to pure probabilities. We can't really know which of these outcomes happens in some deterministic manner according to quantum mechanics.
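The Hölder bound invoked above is easy to verify numerically. A minimal sketch in Python (assuming NumPy; the test vectors are arbitrary), checking ||x||_p × ||y||_q ≥ sum_i |x_i y_i| for a few pairs of conjugate exponents, including the self-dual p = q = 2 case identified with quantum mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)
y = rng.normal(size=8)

def p_norm(v, p):
    # ||v||_p = (sum_i |v_i|^p)^(1/p)
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

# Conjugate exponent pairs with 1/p + 1/q = 1; (2, 2) is the self-dual case
for p, q in [(2.0, 2.0), (1.5, 3.0), (4.0, 4.0 / 3.0)]:
    lhs = p_norm(x, p) * p_norm(y, q)
    rhs = np.sum(np.abs(x * y))
    # Hölder's inequality: ||x||_p * ||y||_q >= sum_i |x_i y_i|
    assert lhs >= rhs
```

The q → ∞ limit discussed above corresponds to the sup-norm, max_i |v_i|, which is why the L^1 "pure probability" side pairs off with the deterministic L^∞ description.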
The wave function reduction is a p = 2 → 1 process, and the p = 1 system has as its dual the q → ∞ convex set or hull description. Does this then mean we can use this to understand some underlying classical type of structure to quantum measurement? We might want to be a bit conservative here. The problem is that we have convex sets that we propose are computing quantum numbers, and in the case with a p ↔ q duality we have this idea of quantum numbers, say as the Gödel number for an integer computed by a Diophantine equation or the computed outcome of a deterministic system, as having a single axiomatic process. Hilbert's 10th problem proposed there should be a single algorithmic or axiomatic process for solving Diophantine equations. Matiyasevich found the final conclusion to a series of lemmas and theorems worked out by Davis, Putnam and Robinson, called the MRDP theorem. This is a form of Gödel's theorem, and the conclusion is that there is no comprehensive axiomatic system for Diophantine equations. Quantum numbers as Gödel numbers for integer solutions to Diophantine equations are then not entirely computable, and there can't exist a Turing machine (in the classical sense a q → ∞ convex set) that computes quantum outcomes. I then maintain the solution to the quantum measurement problem is that there can't exist such a solution. It is an unsolvable problem. Quantum measurement has some features similar to self-reference in that a quantum system is encoded by another system ultimately made of quantum states. It also has features similar to the problem of Euclid's 5th postulate. One can assume the postulate holds and stick with Euclidean flat space, or one can abandon it and work with a plethora of geometries. In QM this would be to stay with Mermin's shut-up-and-calculate dictum, or to adopt any of the quantum interpretations out there, which contradict each other, to augment QM in some extended way.
This has features remarkably similar to the dichotomy between consistency and completeness. 1. @LawrenceCrowell, this is fine, but I suggest there is a question as to what Classical Mechanics is. Specifically, Koopman in 1931 introduced a Hilbert space formalism for CM, which can be thought of as offering a unification of CM with QM, just as the Schrödinger equation and Heisenberg's matrices were unified as Hilbert space formalisms. In these terms, the difference between CM and QM is mostly "just" that CM has a purely commutative algebra of measurements. Mutually noncommutative measurements do make sense for CM, however, as is well known in signal analysis, where Wigner functions are frequently used: one can introduce the Heisenberg group as differential operators, [j∂/∂q, q] = j, instead of as in QM as [q, p] = iħ. Call an extension to include all such operators CM+. I lay out an argument that if we have a solution of the measurement problem for CM+ (using a Gibbs state over the CM algebra extended to the CM+ algebra), we also have a solution for QM, in my paper (currently submitted to Physica Scripta): I find that a solution for CM+ is less elusive. In particular, I suggest that the specific difficulty you outline above is eliminated by comparing CM+ with QM instead of comparing CM with QM. We don't obtain a complete unification, but it's closer than we've had. 2. I looked over your paper and downloaded it. I will have to reserve judgment until I read it sometime later, though I hope not too long into the future. It looks a bit like the noncommutative geometry of Connes et al. The connection between quantum and classical mechanics is often stated as {q, p} = 1 → [q, p] = iħ, for large action S = nħ with n → ∞. I think the most important aspect of this is that classical mechanics is real valued and quantum mechanics is complex valued.
The extension of the reals into complex numbers means probabilities are the modulus square: for |ψ⟩ = sum_n c_n|n⟩ we have ⟨ψ|ψ⟩ = sum_{mn} c^*_m c_n ⟨m|n⟩ = sum_n |c_n|^2 = sum_n P_n. Classical mechanics has none of this construction, and instead determines the value of classical variables. The correspondence between an observable Ô|n⟩ = O_n|n⟩ in quantum mechanics and probabilities is then ⟨ψ|Ô|ψ⟩ = sum_{mn} c^*_m c_n ⟨m|Ô|n⟩ = sum_n |c_n|^2 O_n = sum_n P_n O_n. This is Born's rule, where curiously a general proof of this is not at hand. Anyway, the observable occurs as eigenvalues in a distribution with probabilities. We can think of both classical and quantum mechanics as a measure theory O_{obs} = ∫dμ O, but where for classical mechanics the measure is zero everywhere except the contact manifold, while with quantum mechanics there is this quadratic set of modulus squares of amplitudes = probabilities in a summation that weights eigenvalues. There is Gleason's theorem that tells us the linear span of a Hilbert space defines a trace that uniquely defines probabilities. Hence any measure μ(X) = Tr(WP_X) for W a positive trace-class operator. So this appears halfway to a complete proof of Born's rule; all we need is to slip operators into this. The problem is that operators come in sets of commuting operators. In particular the density matrix evolves by ρ(t' - t) = Uρ(t)U^† for U = exp{-iH(t' - t)/ħ}. For t' - t = δt very small, U ≈ 1 - iH(t' - t)/ħ, and it is not hard to see that the time evolution of the density matrix involves a nonzero commutator of the density matrix with the Hamiltonian. This means the Hamiltonian rotates or evolves the density matrix out of the basis one might consider for Gleason's theorem. I think this is the reason that Gleason's theorem, as profound as it may be, does not reach the generalization of a proof of Born's rule. However, observables in classical and quantum mechanics have different measure theories or distributions.
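The Born-rule bookkeeping above can be checked numerically. A minimal sketch (assuming NumPy; the 3-level state, observable, and Hamiltonian are made-up examples, not from the discussion): it verifies ⟨ψ|Ô|ψ⟩ = sum_n P_n O_n, the equivalent trace form Tr(ρÔ), and that a Hamiltonian not commuting with ρ rotates it out of the chosen basis:

```python
import numpy as np

# A normalized state |psi> = sum_n c_n |n> in a 3-dimensional Hilbert space
c = np.array([0.6, 0.0 + 0.48j, 0.64])
c = c / np.linalg.norm(c)

# An observable, diagonal in the |n> basis with eigenvalues O_n
O_n = np.array([1.0, 2.0, 5.0])
O = np.diag(O_n)

# Born's rule: <psi|O|psi> = sum_n |c_n|^2 O_n = sum_n P_n O_n
P = np.abs(c) ** 2
expval = np.real(np.conj(c) @ O @ c)
assert np.isclose(expval, np.sum(P * O_n))

# Density matrix rho = |psi><psi| gives the same expectation as Tr(rho O)
rho = np.outer(c, np.conj(c))
assert np.isclose(np.real(np.trace(rho @ O)), expval)

# A Hamiltonian that does not commute with rho evolves it out of this basis
H = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
comm = H @ rho - rho @ H
assert not np.allclose(comm, 0)  # [H, rho] != 0, so the basis is not preserved
```

The last assertion is the point made above about Gleason's theorem: the trace formula fixes probabilities for a given resolution of the identity, but unitary evolution keeps rotating the density matrix relative to any such fixed basis.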
Classical mechanics is "sharp," which means it is L^∞ --- say like a delta function. Quantum mechanics is L^2, and the metric structure of spacetime is L^2 as well; with conformal spacetimes and R_{ab} = κg_{ab} it is also L^2. Without going further, this is a duality connected with building spacetimes with entanglements. Now with 1/p + 1/q = 1 for convex sets, L^∞ is dual to L^1, which is a measure of pure classical probabilities. So what is this system? It is about complete stochasticity, of which the outcomes of measurements are an example. The question is whether the eigenvalues of the QM L^2 system can be coded as integer solutions to Diophantine equations, something proven to be possible by Matiyasevich, as any function has a corresponding Diophantine equation (even transcendentals like e^{ix} etc.). 3. Not so much Connes as an algebraic QM approach, with the intention to bring it down to a mortal (my) mathematical level (I'm just reading Valter Moretti, "Spectral Theory and Quantum Mechanics", Springer, 2017, for example, where his Chapter 14, "Introduction to the Algebraic Formulation of Quantum Theories", is nicely done). The starting point for both classical (as usually understood, a commutative *-algebra) and quantum (a noncommutative *-algebra), as I take it, is that a state over a *-algebra is a normalized, positive map to average measurement results. The GNS construction gives us a Hilbert space in both cases. Normal states are given by Trace[Aρ] in both cases, and the Born rule is "just" a measurement |ψ⟩⟨ψ| in a pure state with density matrix ρ = |φ⟩⟨φ|. Note that everything is linear until we insist on discussing pure states. The key question is to ask whether classical physicists can reasonably ascribe a meaning to all operators that act on the classical Hilbert space, to which I argue that they can. Transformations to a different basis, with the Fourier transform as case in point, more than just making sense, are *used* in classical signal analysis.
I'm doing very little that's especially new in this QM context. As I said, Koopman suggested such an approach in 1931; von Neumann wrote a long paper in German that has *not* been translated, so of course it's called the Koopman-von Neumann approach, but the approach mostly languished until about 2000, when a PhD thesis appeared, since when there has been a slow stream of papers, and for the last few years there has been a Wikipedia page that's not bad. Recently a connection has been made with Quantum Non-Demolition measurements, which seems to have led to slightly more interest. I believe that understanding how things look in this kind of approach deserves to be at least as much in physicists' consciousness as deBB approaches. One final comment: *I* take the view that the complex structure *can* be understood rather nicely as associated with the Fourier sine and cosine transforms of probability densities, which, as any engineer can tell you, introduces a naturally useful imaginary, j. I'm not committed to that approach, but so far I haven't seen a more natural one. I ought to let the paper do its own talking, given that you've been kind enough to say that you have at least downloaded it, but I'm quite keen to see in what ways it might or might not be attractive to other people. 4. The GNS construction is an aspect of noncommutative geometry. The spectrum with Tr(Aρ) is also used in Gleason's theorem. I will try to get to your paper as soon as possible. I have a large backlog of things to read, including finishing Sabine's book. I started reading a library copy last year and have since bought my own copy, and that is on my stack as well. 31. @Dave M and all who responded Thank you for the question and the replies. It has given me a little more to guide a short Internet search. I found a retired SEP entry. It contains a significant non-technical discussion of the issues.
The disagreement between Bohr and Heisenberg over the Copenhagen interpretation is very much like the contrast between Skolem and Zermelo with regard to set theory. It would seem that the measurement problem in physics is very similar to some debates in the foundations of mathematics. 32. This comment has been removed by the author. 33. In the EU, Canada and 'developed Asia', the budgets for science and technology do not seem to be at risk... It is the USA that prioritizes its budgets in military applications; those who know they have to make their research agendas fit into geopolitical military conflicts to get the money... (ROFL)... Very likely, the EU's headquarters are waiting for China's parliament to approve its budget for HEP projects... after that, they will decide... No problem, some CERN physicists will be invited to participate in China's toys... and CERN will receive its 'upgrading' budget... There is not an eternal HEP vacuum in your future... Don't cry in advance for things that are not happening... 34. @Sabine, I do not quite see what this "measurement problem" is, although apparently some people lose sleep over it. The view of standard QM + decoherence is perfectly reasonable: Schroedinger's equation (SE) describes a closed quantum system. But when the system is measured, it cannot be considered closed anymore, so it is no surprise that it's not described by the SE. The collapse of the wave function is just an effective prescription that describes this coupling to the external environment induced by the measurement. Decoherence theory showed how this process can be explained in detail in terms of standard QM. So really, I do not see where the problem is. From the experimental point of view, the experiments of Serge Haroche, for instance, have clearly shown that when the "environment" is sufficiently simple, the decoherence can be well controlled or even reversed. Again, no mystery there.
I would not spend gigadollars, not even megadollars, on this pseudo-problem. For kilodollars, I'm OK. 1. Opamanfred, Decoherence does not solve the measurement problem. Please do some reading. Don't worry, I do not want your "giga-dollars". 2. The following video simulates the collapse of the wave function. It gives a pretty good idea of how probability plays a role in collapse and what a collapsed wave function visually looks like. Of course a caveat is in order, for the ontology of a quantum wave is highly uncertain and it does not exactly "appear." However, this tells us about the mathematical representation. This video also makes the point that this sudden transition is not something the Schrödinger equation predicts. As I wrote above, dated 3/31, I think very strongly this problem is not solvable. Of course I might be wrong, but the issue of quantum measurement appears remarkably similar to the concept of self-reference. Instead of a predicate acting on Gödel numbers for predicates including itself, a measurement is quantum information encoding quantum information. Decoherence does address aspects of measurement. However, it does not tell us how a particular outcome occurs, but rather how probability amplitudes transform into classical-like probabilities as the quantum phase of superposition or entanglement is transferred to a reservoir of states. Decoherence takes us right to the doorstep of the measurement dragon, but no further. 3. "Decoherence does not solve the measurement problem" Please elaborate. I would also like to hear how exactly you define the problem. I consider what I sketched a perfectly acceptable solution. On what aspect do you disagree? 4. Opamanfred, This is really off-topic. I am one person and not a forum. I do not have time to respond to random questions. Really, this is common knowledge, and in any case, I explained this in my book, and also Lawrence explained it correctly when he writes: 5.
Lawrence, Re the measurement problem: ...I think very strongly this problem is not solvable. Well, it is not solvable mathematically speaking because it is not a question of mathematics, but of physics. The question involves the nature of the physical processes underlying the maths of QM. The difficulty, of course, is that those processes are not directly observable, and the standard formalism does not resolve logically to a realistic picture of the quantum subsystems - a wavefunction is not a physical thing. The resulting ontological speculations (MW, PI, superposition) based on the maths are muddied, metaphysical, and lacking in scientific significance, to say the least. The Copenhagen approach, OTOH, is simply to ignore the ontological problem, which consequently induces the measurement problem. Only Bohmian mechanics approaches the ontological problem from a physics (rather than strictly maths) perspective, by assuming that quantum subsystems are ontologically continuous with classical mechanics. That this physically realistic reformulation (of QM) is currently disfavored relative to all the logically strained, metaphysical interpretations (of QM) says nothing good about the state of modern theoretical physics. BM is mathematically equivalent (but not qualitatively identical) to QM. In Bohmian mechanics there is no measurement problem. So, problem solved, no? 6. I have certain proclivities for the Bohm interpretation. I suppose this is just as I have the same for other interpretations. In fact I derived a form of path integral with Bohm's quantum mechanics. I found that the mention of Bohm was a form of toxin in getting this published. Bohm's QM is also potentially interesting for solving problems in chaos or quantum chaos. Bohm's QM is, though, not identical to QM in general, but only so for wave functions of a certain form. Bohm's QM has some other deeper problems as well.
The Klein-Gordon equation is a scalar wave form of the invariant momentum-energy interval of special relativity. If you follow the Bohmian prescription with a polar wave function, you find the KG equation has the quantum potential. The odd implication is that a massless particle is off the light cone and in fact moving faster than light. This does not give reason to think there is exploitable nonlocal physics here, for that would violate no-signaling and other things. This is why it is often said that Bohm's QM is not relativistic. Bohm's QM, lacking a Hilbert space, also does not derive things such as the generation or absorption of photons by atoms in a concise way, and things get worse with higher energy creation and annihilation of particles. There are quantum interpretations that are ψ-epistemic and others that are ψ-ontic. The many worlds interpretation (MWI) and Bohm interpretation (BI) are ψ-ontic. Bohr's Copenhagen interpretation (CI) and now the latest QBism by Fuchs are ψ-epistemic. These are some of the popular interpretations, and there are others such as consistent histories, the Montevideo interpretation and the related one by Penrose, and more. In fact quantum interpretations are multiplying like bunnies, maybe cockroaches to put it in a negative light, and none of them seems to really solve everything. The CI is interesting in that M-theory of D-branes works well with it. Quantum information theory is often worked in MWI. QBism is now the beautiful child of those into Bayesianism --- which I can tip my hat towards. Pullin and Penrose have interesting ideas on how gravitation plays a role, and quantum gravitation built up from quantum entanglements probably does have a correspondence with quantum wave decoherence and maybe even measurements. However, all of these have big holes you can run an optics bench through, maybe even a collider. I wrote a math-physics result on how quantum mechanics is neither ψ-epistemic nor ψ-ontic with any certainty.
It does not work for two-state systems, which is unfortunate. I should revisit this to make it work. The result is that quantum interpretations that are either ψ-epistemic or ψ-ontic are not determined by a measure theory of QM. I like the prospect of this: QM has this sort of "Man proposes and QM disposes" flavor to it. 7. @Lawrence Crowell Last spring I submitted a short essay to the Gravity Research Foundation (GRF) in Wellesley, Massachusetts, that effectively is another interpretation of QM; albeit a very amateur one. The concept is largely heuristic with a minimum of mathematical modeling. Currently I'm expanding on the original paper, submitted to the GRF, to include ideas for which the essay word limit (1500 words) would not allow. In the abstract of the paper submitted to GRF last year, a tie-in to de Broglie-Bohm Pilot Wave Theory (PWT) is mentioned. This might have been a mistake, seeing that PWT is anathema to much of the physics community, as illustrated by your choice of the word "toxin" to describe the reaction of publishers to that particular QM interpretation. While I didn't mention it directly in the essay submitted to GRF, the model provides a mechanism for reported anomalous acceleration signals observed in certain superconductor experiments that are orders of magnitude larger than allowed by standard physics (Tajmar et al. 2003-2006, and others). This connection provided the rationale for submitting the essay to GRF, as the organization's stated mission involves understanding gravity, and presumably artificially generated gravity-like forces. To wind this up, I hope to complete the expanded version of the originally submitted GRF essay in a few weeks and upload it to 8. The particle in the pilot wave interpretation of Bohm, taken from de Broglie, is not highly regarded in part because of Bohm's intention with local hidden variables. The idea is workable in a nonrelativistic framework and, I think, a way of working quantum chaos.
There is a fascinating way of doing quantum mechanics that Pascual Jordan worked out with Wigner. It is a way of doing QM with traces and determinants that is useful with the Freudenthal determinant over exceptional algebras. In fact I think it is useful with permanents as well, which find their way into algebraic geometric complexity and P vs NP. So why is this not widely used? Jordan and Wigner published on this in 1935, and Jordan became fanatically committed to the Nazi cause. He worked on the rocket programs at Peenemünde and was committed to the Nazi program. It is amazing how this sort of crap can infect brains, much like MAGA promoted in the US these days. Anyway this approach to QM fell into disrepute. History and affiliation have big impacts on the course of development in physics. 9. This comment has been removed by the author. Well yes, but the Bohmian advantage over all those proliferating bunnies is twofold. First, it eliminates the self-induced measurement problem of CI. More importantly, it provides a qualitative account of unobservable quantum processes that is continuous with classical mechanics and therefore provides a sound (and realistic) basis for further qualitative and quantitative elaboration. The continuity with CM is achieved by introducing a scale factor, the guiding equation. This guiding equation, in turn, is suggestive of an underlying physical component that induces quantum behavior in sufficiently low-mass classical particles. This avenue would seem to offer, at least the possibility of, a qualitative and quantitative approach with the potential to converge on a plausibly realistic account of quantum phenomena. I don't think the same can be said for any of the other cockroaches. 35. @Lawrence Crowell I do personal research in foundations because of the continuum hypothesis. Should you ever wish to be put on a crank list, become interested in just such a problem.
One morning, thirty years ago, I simply woke up with the conviction of its truth. I now know why. The result from core mathematics lies in dimension theory. There is no transfinite dimension beyond the first uncountable cardinal. And, there is nothing in the usual account of set theory or its model theory that reproduces a collapse of the cardinal hierarchy to just two infinities. Suppose, for the moment, that this has bearing on the mathematics of physics. In Chapter 11 of Birkhoff's "Lattice Theory" there is a theorem showing that the truth of the continuum hypothesis affects measures. If I recall correctly, there can be no non-trivial countably additive measure in which every point has measure zero. I need to trace through the mathematics of this more carefully, but I suspect that it is similar to the effect of the axiom of choice in some ways. With regard to dimension, Coxeter gave a group-theoretic account of regular polytopes. Because of the method, some stellated and truncated forms are admissible as being regular. There are only three forms common to all dimensions, and there is only one dimension with an infinite number of regular forms -- that would be the plane. If you look at Freiling's axiom of symmetry on Wikipedia, it will mention the relationship between graph theory and the continuum hypothesis. Now one of the forms occurring in every dimension by Coxeter's account is the simplex. And complete graphs are the projection of simplexes into the plane. What you say about the multiplicity of interpretations for quantum mechanics is not unlike the diversity of opinion that has resulted in the current state of affairs for the foundations of mathematics. The independence of the continuum hypothesis, as one hears about it, only applies with respect to a paradigm. My own experience is that one can use finite geometries to relate truth tables to mathematical elements associated with Lorentz metrics.
Physicists use symmetry in relation to higher order mathematics. But there is a famous criticism of mathematical logicians in Black's paper on the identity of indiscernibles. And, it is not unreasonable to approach foundations with symmetry as a guiding principle. Your comparison with results from the foundations of mathematics appears quite reasonable to me (but, then, John Baez undoubtedly has a list waiting for me :-) ). 1. This conjecture on my part is not something I have actually bent metal on or done any calculations for. This is pretty removed from my day job work, which is more applied or engineering. The MRDP theorem is similar to the Bernays-Cohen result that the continuum hypothesis is a case of Gödel's theorem. Polytopes also enter into the algebraic geometry complexity of P vs NP. The role of symmetry is of course important for gauge fields. Also for quantum entanglements, quotient spaces or groups occur when some set of quantum numbers is replaced by other degrees of freedom. A bipartite entanglement replaces the spin of two fermions with the Bell state. This is a quotient system. The exact sequence for the moduli space of gauge connections is similar. In fact I think it is dual to entanglement geometry. 36. Okay, okay. But what if we find more Odderons? ;) 37. I often wonder what a theory will look like that explains QM and GRT as special cases. As far as I can see, most scientists are trying to bridge the gap from QM. This seems logical, since most physicists probably regard QM as the most fundamental theory. However, the classic cases of really new theories have developed differently. There was no direct path from classical physics to quantum mechanics, nor to GRT. So QM and GRT were really new. Therefore, the question is whether the current approaches to unifying the two basic theories can really be promising enough. I myself am a mathematician with a solid background in Artificial Intelligence.
When developing an algorithm for decision-making, I came across interesting relationships rather playfully. The chaotic decision process (I call it the "GenI process") is a chaotic random process based on very simple rules. Except for basic arithmetic in complex number space, this does not require any difficult mathematics. (Simple maths do not necessarily produce simple results: think of the fractal sets of Mandelbrot.) Significantly more difficult is the statistical analysis of chaotic state changes. On the one hand, I can show that the process, starting from an initial state, certainly selects one of several decisions, and thereby exactly fulfills the statistics known from quantum mechanical measurements. On the other hand, I can derive a relativistic metric such that averaged state changes follow time-like geodesic paths in a four-dimensional Riemann space. Should not such or similar approaches, which are not derived directly from QM or GRT, offer a fresh start? In principle, this is only about a change of perspective. 38. @WSG There is an upcoming version of QM that uses complex numbers and four-dimensional Riemann space. It's used to handle open systems. It is called PT-symmetric quantum mechanics. PT-symmetric quantum mechanics is an extension of conventional quantum mechanics into the complex domain. (PT symmetry is not in conflict with conventional quantum theory but is merely a complex generalization of it.) PT-symmetric quantum mechanics was originally considered to be an interesting mathematical discovery with little or no hope of practical application, but beginning in 2007 it became a hot area of experimental physics. 39. This is not the point I wanted to make. That is obviously just another extension of a proven theory. Such things did not lead to anything really new. I am well aware of other approaches, such as loop quantum gravity or string theory, which, despite all efforts, have yet to resolve the open questions.
The question of what a theory must look like so that QM and GRT can be deduced from it has already been asked. Maybe it will look somewhat crazy from today's perspective, as QM did to classical physicists. My point is to take a fundamentally different perspective on the role of gravity in QM. A model like the one mentioned above indeed requires a rethink. In it, our universe, as we perceive it, evolves according to a collapse of its wave function. This clearly contradicts the not explicitly justified assumption of leading physicists that it develops along a Schrödinger equation. But why is it like that? Is there a clear justification either way? What, in essence, is against assuming a collapse? I have not even seen a discussion among physicists about this aspect. Even with well-known authors like Penrose, Greene, Hawking, who otherwise like to entertain the wildest speculations, nowhere is there any hint that the collapse of the wave function is the source of reality in our universe. Can anyone help me here? Are there any works that consider this perspective? At least in a nutshell, I can show that such an approach can be quite effective. I can perform concrete calculations of a space-time metric for a spin-1/2 particle and actually prove that the dynamics during the measurement satisfy Einstein's field equations. That should justify at least a discussion of this view. 40. @Lawrence Crowell Thanks for the reply. I found papers specific to Bell states and two-qubit geometries. In many ways this relates to what I have been doing. There is, for example, a diagram which occurs in several contexts that I use to decide the well ordering of my 16-set of logical constants. It is a tetrahedron inscribed in a cube. Similarly, some of the papers start looking at block designs. This is another aspect of what I have been doing.
The fact that the truth tables relate to one another as points in a finite affine geometry is foundationally significant, although philosophers and logicians will simply deny or not understand the matter. Incompleteness is generalized with respect to theories whose axiom sets are recursively enumerable. Finite group theory is not such a theory. Thanks again. 41. Thank you for this exceptionally thoughtful post. I do think that a good question to ask people on both sides of the argument is: What is your cutoff? That is: For supporters of the collider, I'd like to ask "How expensive would this thing have to be before you stopped supporting it? 30 billion? 50 billion? 100 billion?" And for opponents: "How inexpensive would this thing have to be before you stopped opposing it? 15 billion? 10 billion? 1 billion?" As a general rule, I think people who are able to answer these questions --- and to defend their answers --- are likely to have thought a lot harder about the tradeoffs than those who reflexively just support or oppose. 1. Steven, Yes, a good question. I'll have a go at it and say about $2 billion. A larger collider currently has less scientific promise than LIGO had, which came in at a cost somewhat below $1 billion. It also has less scientific promise than the SKA, whose full proposal would come in at $2 billion. So that would seem a reasonable amount.
Publications by Dr. Daniel Seipt All publications of HI Jena D. Seipt, and B. King Spin- and polarization-dependent locally-constant-field-approximation rates for nonlinear Compton and Breit-Wheeler processes Physical Review A 102, 052805 (2020) Abstract: In this paper we derive and discuss the completely spin- and photon-polarization-dependent probability rates for nonlinear Compton scattering and nonlinear Breit-Wheeler pair production. The locally constant field approximation, which is essential for applications in plasma-QED simulation codes, is rigorously derived from the strong-field QED matrix elements in the Furry picture for a general plane-wave background field. We discuss important polarization correlation effects in the spectra of both processes. Asymptotic limits for both small and large values of $\chi$ are derived and their spin and polarization dependence is discussed. P. Zhang, S. S. Bulanov, D. Seipt, A. V. Arefiev, and A. G. R. Thomas Relativistic plasma physics in supercritical fields Physics of Plasmas 27, 050601 (2020) Abstract: Since the invention of chirped pulse amplification, which was recognized by a Nobel prize in physics in 2018, there has been a continuing increase in available laser intensity. Combined with advances in our understanding of the kinetics of relativistic plasma, studies of laser-plasma interactions are entering a new regime where the physics of relativistic plasmas is strongly affected by strong-field quantum electrodynamics (QED) processes, including hard photon emission and electron-positron (e^+e^-) pair production. This coupling of quantum emission processes and relativistic collective particle dynamics can result in dramatically new plasma physics phenomena, such as the generation of dense e^+e^- pair plasma from near vacuum, complete laser energy absorption by QED processes or the stopping of an ultrarelativistic electron beam, which could penetrate a cm of lead, by a hair's breadth of laser light.
In addition to being of fundamental interest, it is crucial to study this new regime to understand the next generation of ultra-high intensity laser-matter experiments and their resulting applications, such as high energy ion, electron, positron, and photon sources for fundamental physics studies, medical radiotherapy, and next generation radiography for homeland security and industry. Y. Ma, D. Seipt, A. E. Hussein, S. Hakimi, N. F. Beier, S. B. Hansen, J. Hinojosa, A. Maksimchuk, J. Nees, K. Krushelnick, A. G. R. Thomas, and F. Dollar Polarization-Dependent Self-Injection by Above Threshold Ionization Heating in a Laser Wakefield Accelerator Physical Review Letters 124, 114801 (2020) Abstract: We report on the experimental observation of a decreased self-injection threshold by using laser pulses with circular polarization in laser wakefield acceleration experiments in a nonpreformed plasma, compared to the usually employed linear polarization. A significantly higher electron beam charge was also observed for circular polarization compared to linear polarization over a wide range of parameters. Theoretical analysis and quasi-3D particle-in-cell simulations reveal that the self-injection and hence the laser wakefield acceleration is polarization dependent and indicate a different injection mechanism for circularly polarized laser pulses, originating from larger momentum gain by electrons during above threshold ionization. This enables electrons to meet the trapping condition more easily, and the resulting higher plasma temperature was confirmed via spectroscopy of the XUV plasma emission. D. Seipt, V. Kharin, and S. 
Rykovanov Optimizing Laser Pulses for Narrow-Band Inverse Compton Sources in the High-Intensity Regime Physical Review Letters 122, 204802 (2019) Abstract: Scattering of ultraintense short laser pulses off relativistic electrons allows one to generate a large number of X- or gamma-ray photons at the expense of the spectral width---temporal pulsing of the laser inevitably leads to considerable spectral broadening. In this Letter, we describe a simple method to generate optimized laser pulses that compensate the nonlinear spectral broadening and can be thought of as a superposition of two oppositely linearly chirped pulses delayed with respect to each other. We develop a simple analytical model that allows us to predict the optimal parameters of such a two-pulse configuration---the delay, amount of chirp, and relative phase---for generation of a narrow-band $\gamma$-ray spectrum. Our predictions are confirmed by numerical optimization and simulations including three-dimensional effects. V. Kharin, D. Seipt, and S. Rykovanov Higher-Dimensional Caustics in Nonlinear Compton Scattering Physical Review Letters 120, 044802 (2018) Abstract: A description of the spectral and angular distributions of Compton scattered light in collisions of intense laser pulses with high-energy electrons is unwieldy and usually requires numerical simulations. However, due to the large number of parameters affecting the spectra, such numerical investigations can become computationally expensive. Using methods of catastrophe theory, we predict higher-dimensional caustics in the spectra of the Compton scattered light, which are associated with bright narrow-band spectral lines, and in the simplest case can be controlled by the value of the linear chirp of the pulse. These findings require no full-scale calculations and have direct consequences for the photon yield enhancement of future nonlinear Compton scattering x-ray or gamma-ray sources. D. Würzler, N. Eicke, M. Möller, D. Seipt, A. M. Sayler, S.
Fritzsche, M. Lein, and G. G. Paulus Velocity map imaging of scattering dynamics in orthogonal two-color fields Journal of Physics B: Atomic, Molecular and Optical Physics 51, 015001 (2017) Abstract: In strong-field ionization processes, two-color laser fields are frequently used for controlling sub-cycle electron dynamics via the relative phase of the laser fields. Here we apply this technique to velocity map imaging spectroscopy using an unconventional orientation with the polarization of the ionizing laser field perpendicular to the detector surface and the steering field parallel to it. This geometry makes it possible not only to image the phase-dependent photoelectron momentum distribution (PMD) of low-energy electrons that interact only weakly with the ion (direct electrons), but also to investigate the low yield of higher-energy rescattered electrons. Phase-dependent measurements of the PMD of neon and xenon demonstrate control over direct and rescattered electrons. The results are compared with semi-classical calculations in three dimensions including elastic scattering at different orders of return and with solutions of the three-dimensional time-dependent Schrödinger equation. A. A. Peshkov, D. Seipt, A. Surzhykov, and S. Fritzsche Photoexcitation of atoms by Laguerre-Gaussian beams Physical Review A 96, 023407 (2017) Abstract: In a recent experiment, Schmiegelow et al. [Nat. Commun. 7, 12998 (2016)] investigated the magnetic sublevel population of Ca^+ ions in a Laguerre-Gaussian light beam when the target atoms were centered on the beam axis. They demonstrated in this experiment that the sublevel population of the excited atoms is uniquely defined by the projection of the orbital angular momentum of the incident light. However, little attention has been paid so far to the question of how the magnetic sublevels are populated when atoms are displaced from the beam axis by some impact parameter b.
Here, we analyze this sublevel population for different atomic impact parameters in first-order perturbation theory and by making use of the density-matrix formalism. Detailed calculations are performed especially for the 4s ^2S_1/2 -> 3d ^2D_5/2 transition in Ca^+ ions and for the vector potential of a Laguerre-Gaussian beam in Coulomb gauge. It is shown that the magnetic sublevel population of the excited ^2D_5/2 level varies significantly with the impact parameter and is sensitive to the polarization, the radial index, as well as the orbital angular momentum of the incident light beam. D. Zille, D. Seipt, M. Möller, S. Fritzsche, G. G. Paulus, and D. B. Milošević Spin-dependent quantum theory of high-order above-threshold ionization Physical Review A 95, 063408 (2017) Abstract: The strong-field-approximation theory of high-order above-threshold ionization of atoms is generalized to include the electron spin. The obtained rescattering amplitude consists of a direct and exchange part. On the examples of excited He atoms as well as Li^+ and Be^2+ ions, it is shown that the interference of these two amplitudes leads to an observable difference between the photoelectron momentum distributions corresponding to different initial spin states: Pronounced minima appear for singlet states, which are absent for triplet states. D. Seipt, T. Heinzl, M. Marklund, and S. S. Bulanov Depletion of Intense Fields Physical Review Letters 118, 154803 (2017) Abstract: The interaction of charged particles and photons with intense electromagnetic fields gives rise to multiphoton Compton and Breit-Wheeler processes. These are usually described in the framework of the external field approximation, where the electromagnetic field is assumed to have infinite energy. However, the multiphoton nature of these processes implies the absorption of a significant number of photons, which scales as the external field amplitude cubed. 
As a result, the interaction of a highly charged electron bunch with an intense laser pulse can lead to significant depletion of the laser pulse energy, thus rendering the external field approximation invalid. We provide relevant estimates for this depletion and find it to become important in the interaction between fields of amplitude a_0 ∼ 10^3 and electron bunches with charges of the order of 10 nC. S. S. Bulanov, D. Seipt, T. Heinzl, and M. Marklund Depletion of intense fields AIP Conference Proceedings 1812, 100006 (2017) D. Zille, D. Seipt, M. Möller, S. Fritzsche, S. Gräfe, C. Müller, and G. G. Paulus Spin-dependent rescattering in strong-field ionization of helium Journal of Physics B: Atomic, Molecular and Optical Physics 50, 065001 (2017) Abstract: We investigate the influence of singlet and triplet spin states on rescattered photoelectrons in strong-field ionization of excited helium. Choosing either a symmetric or antisymmetric spatial wave function as the initial state results in different scattering cross sections for the 1s2s ¹S and ³S states. These cross sections are used in the semi-classical model of strong-field ionization. Our investigations show that the photoelectron momentum distributions of rescattered electrons exhibit a significant dependence on the relative spin state of the projectile and the bound electron, which should be observable in experiments. The proposed experimental approach can be understood as a testbed for probing the spin dynamics of electrons during strong-field ionization and the presented results as a baseline for their identification. K.-H. Blumenhagen, S. Fritzsche, T. Gassner, A. Gumberidze, R. Märtin, N. Schell, D. Seipt, U. Spillmann, A. Surzhykov, S. Trotsenko, G. Weber, V. A. Yerokhin, and Th.
Stöhlker Polarization transfer in Rayleigh scattering of hard x-rays New Journal of Physics 18, 103034 (2016) Abstract: We report on the first elastic hard x-ray scattering experiment where the linear polarization characteristics of both the incident and the scattered radiation were observed. Rayleigh scattering was investigated in a relativistic regime by using a high-Z target material, namely gold, and a photon energy of 175 keV. Although the incident synchrotron radiation was nearly 100% linearly polarized, at a scattering angle of θ = 90° we observed a strong depolarization for the scattered photons, with a degree of linear polarization of only +27% ± 12%. This finding agrees with second-order quantum electrodynamics calculations of Rayleigh scattering when taking into account a small polarization impurity of the incident photon beam, which was determined to be close to 98%. The latter value was obtained independently from the elastic scattering by analyzing photons that were Compton-scattered in the target. Moreover, our results indicate that, when relying on state-of-the-art theory, Rayleigh scattering could provide a very accurate method to diagnose polarization impurities in a broad region of hard x-ray energies. D. Seipt, R. A. Müller, A. Surzhykov, and S. Fritzsche Two-color above-threshold ionization of atoms and ions in XUV Bessel beams and intense laser light Physical Review A 94, 053420 (2016) Abstract: The two-color above-threshold ionization (ATI) of atoms and ions is investigated for a vortex Bessel beam in the presence of a strong near-infrared (NIR) light field. While the photoionization is caused by the photons from the weak but extreme ultraviolet (XUV) vortex Bessel beam, the energy and angular distribution of the photoelectrons and their sideband structure are affected by the plane-wave NIR field.
Here we explore the energy spectra and angular emission of the photoelectrons in such two-color fields as a function of the size and location of the target atoms with regard to the beam axis. In addition, analogous to the circular dichroism in typical two-color ATI experiments with circularly polarized light, we define and discuss seven different dichroism signals for such vortex Bessel beams that arise from the various combinations of the orbital and spin angular momenta of the two light fields. For localized targets, it is found that these dichroism signals strongly depend on the size and position of the atoms relative to the beam. For macroscopically extended targets, in contrast, three of these dichroism signals tend to zero, while the other four just coincide with the standard circular dichroism, similar to the case of Bessel beams with a small opening angle. Detailed computations of the dichroism are performed and discussed for the 4s valence-shell photoionization of Ca+ ions. R. Müller, D. Seipt, R. Beerwerth, M. Ornigotti, A. Szameit, S. Fritzsche, and A. Surzhykov Photoionization of neutral atoms by X waves carrying orbital angular momentum Physical Review A 94, 041402 (2016) Abstract: In contrast to plane waves, twisted or vortex beams have a complex spatial structure. Both their intensity and energy flow vary within the wave front. Beyond that, polychromatic vortex beams, such as X waves, have a spatially dependent energy distribution. We propose a method to measure this (local) energy spectrum. The method is based on the measurement of the energy distribution of photoelectrons from alkali-metal atoms. On the basis of our fully relativistic calculations, we argue that even ensembles of atoms can be used to probe the local energy spectrum of short twisted pulses. I. P. Ivanov, D. Seipt, A. Surzhykov, and S.
Fritzsche Elastic scattering of vortex electrons provides direct access to the Coulomb phase Physical Review D 94, 076001 (2016) Abstract: Vortex electron beams are freely propagating electron waves carrying adjustable orbital angular momentum with respect to the propagation direction. Such beams were experimentally realized just a few years ago and are now used to probe various electromagnetic processes. So far, these experiments used single vortex electron beams, either propagating in external fields or impacting a target. Here, we investigate the elastic scattering of two such aligned vortex electron beams and demonstrate that this process allows one to experimentally measure features which are impossible to detect in the usual plane-wave scattering. The scattering amplitude of this process is well approximated by two plane-wave scattering amplitudes with different momentum transfers, which interfere and give direct experimental access to the Coulomb phase. This phase (shift) affects the scattering of all charged particles and has thus received significant theoretical attention but was never probed experimentally. We show that a properly defined azimuthal asymmetry, which has no counterpart in plane-wave scattering, allows one to directly measure the Coulomb phase as a function of the scattering angle. Double-slit experiment in momentum space Europhysics Letters 115, 41001 (2016) Abstract: Young's classic double-slit experiment demonstrates the reality of interference when waves and particles travel simultaneously along two different spatial paths. Here, we propose a double-slit experiment in momentum space, realized in the free-space elastic scattering of vortex electrons. We show that this process proceeds along two paths in momentum space, which are well localized and well separated from each other.
For such vortex beams, the (plane-wave) amplitudes along the two paths acquire adjustable phase shifts and produce interference fringes in the final angular distribution. We argue that this experiment can be realized with the present-day technology. We show that it gives experimental access to the Coulomb phase, a quantity which plays an important role in all charged particle scattering but which usual scattering experiments are insensitive to. A. Surzhykov, D. Seipt, and S. Fritzsche Probing the energy flow in Bessel light beams using atomic photoionization Physical Review A 94, 033420 (2016) Abstract: The growing interest in twisted light beams also requires a better understanding of their complex internal structure. Particular attention is currently being given to the energy circulation in these beams as usually described by the Poynting vector field. In the present study we propose to use the photoionization of alkali-metal atoms as a probe process to measure (and visualize) the energy flow in twisted light fields. Such measurements are possible since the angular distribution of photoelectrons, emitted from a small atomic target, appears sensitive to and is determined by the local direction of the Poynting vector. To illustrate the feasibility of the proposed method, detailed calculations were performed for the ionization of sodium atoms by nondiffractive Bessel beams. V. Yu. Kharin, D. Seipt, and S. G. Rykovanov Temporal laser-pulse-shape effects in nonlinear Thomson scattering Physical Review A 93, 063801 (2016) Abstract: The influence of the laser-pulse temporal shape on the nonlinear Thomson scattering on-axis photon spectrum is analyzed in detail. Using the classical description, analytical expressions for the temporal and spectral structure of the scattered radiation are obtained for the case of symmetric laser-pulse shapes. 
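Several of the entries above (the two-color ATI, X-wave, and Poynting-vector papers) work with Bessel beams, whose transverse intensity profile goes as J_m(k_⊥ρ)². As a rough, stdlib-only illustration (not code from any of these papers), J_m can be evaluated from its standard integral representation J_m(x) = (1/π)∫₀^π cos(mτ − x sin τ) dτ:

```python
import math

def bessel_j(m: int, x: float, steps: int = 2000) -> float:
    """J_m(x) via its integral representation, midpoint rule."""
    h = math.pi / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h
        total += math.cos(m * t - x * math.sin(t))
    return total * h / math.pi

def bessel_intensity(m: int, k_perp: float, rho: float) -> float:
    """Unnormalized transverse intensity of an order-m Bessel beam,
    proportional to J_m(k_perp * rho)^2."""
    return bessel_j(m, k_perp * rho) ** 2

# Vortex beams (m != 0) have a dark core on the beam axis, which is
# why the atom's position relative to the axis matters in these papers.
bright_core = bessel_intensity(0, 1.0, 0.0)  # J_0(0)^2 = 1
dark_core = bessel_intensity(1, 1.0, 0.0)    # J_1(0)^2 = 0
```

The midpoint rule is very accurate here because the integrand has vanishing derivative at both endpoints.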
The possibility of reconstructing the incident laser pulse from the scattered spectrum averaged over interference fringes in the case of high peak intensity and symmetric laser-pulse shape is discussed. A. Otto, T. Nousch, D. Seipt, B. Kämpfer, D. Blaschke, A. D. Panferov, S. A. Smolyansky, and A. I. Titov Pair production by Schwinger and Breit–Wheeler processes in bi-frequent fields Journal of Plasma Physics 82, 65582030 (2016) Abstract: Counter-propagating and suitably polarized light (laser) beams can provide conditions for pair production. Here, we consider in more detail the following two situations: (i) in the homogeneity regions of anti-nodes of linearly polarized ultra-high intensity laser beams, the Schwinger process is dynamically assisted by a second high-frequency field, e.g. by an XFEL beam; and (ii) a high-energy probe photon beam colliding with a superposition of co-propagating intense laser and XFEL beams gives rise to the laser-assisted Breit–Wheeler process. The prospects of such bi-frequent field constellations with respect to the feasibility of conversion of light into matter are discussed. D. Seipt, V. Kharin, S. Rykovanov, A. Surzhykov, and S. Fritzsche Analytical results for nonlinear Compton scattering in short intense laser pulses Journal of Plasma Physics 82, 655820203 (2016) Abstract: We study in detail the strong-field QED process of nonlinear Compton scattering in short intense plane wave laser pulses of circular polarization. Our main focus is placed on how the spectrum of the backscattered laser light depends on the shape and duration of the initial short intense pulse. Although this pulse shape dependence is very complicated and highly nonlinear, and has never been addressed explicitly, our analysis reveals that all the dependence on the laser pulse shape is contained in a class of three-parameter master integrals. Here we present completely analytical expressions for the nonlinear Compton spectrum in terms of these master integrals. 
Moreover, we analyse the universal behaviour of the shape of the spectrum for very high harmonic lines. D. Seipt, A. Surzhykov, S. Fritzsche, and B. Kämpfer Caustic structures in x-ray Compton scattering off electrons driven by a short intense laser pulse New Journal of Physics 18, 023044 (2016) Abstract: We study the Compton scattering of x-rays off electrons that are driven by a relativistically intense short optical laser pulse. The frequency spectrum of the laser-assisted Compton radiation shows a broad plateau in the vicinity of the laser-free Compton line due to a nonlinear mixing between x-ray and laser photons. Special emphasis is placed on how the shape of the short assisting laser pulse affects the spectrum of the scattered x-rays. In particular, we observe sharp peak structures in the plateau region, whose number and locations are highly sensitive to the laser pulse shape. These structures are interpreted as spectral caustics by using a semiclassical analysis of the laser-assisted QED matrix element, relating the caustic peak locations to the laser-driven electron motion. A. Titov, B. Kämpfer, A. Hosaka, T. Nousch, and D. Seipt Determination of the carrier envelope phase for short, circularly polarized laser pulses Physical Review D 93, 045010 (2016) Abstract: We analyze the impact of the carrier envelope phase on the differential cross sections of the Breit-Wheeler and the generalized Compton scattering in the interaction of a charged electron (positron) with an intense ultrashort electromagnetic (laser) pulse. The differential cross sections as a function of the azimuthal angle of the outgoing electron have a clear bump structure, where the bump position coincides with the value of the carrier phase. This effect can be used for determining the carrier envelope phase. T. Nousch, D. Seipt, B. Kämpfer, and A. I.
Titov Spectral caustics in laser assisted Breit–Wheeler process Physics Letters B 755, 162 (2016) Abstract: Electron–positron pair production by the Breit–Wheeler process embedded in a strong laser pulse is analyzed. The transverse momentum spectrum displays prominent peaks which are interpreted as caustics, the positions of which are accessible by the stationary phases. Examples are given for the superposition of an XFEL beam with an optical high-intensity laser beam. Such a configuration is available, e.g., at LCLS at present and at the European XFEL in the near future. It requires a counter-propagating probe photon beam with high energy, which can be generated by synchronized inverse Compton backscattering. T. Nousch, A. Otto, D. Seipt, B. Kämpfer, A. I. Titov, D. Blaschke, A. D. Panferov, and S. A. Smolyansky Laser Assisted Breit-Wheeler and Schwinger Processes in: S. Schramm and M. Schäfer (ed.): New Horizons in Fundamental Physics (Springer International Publishing) (2016) Abstract: The assistance of an intense optical laser pulse on electron-positron pair production by the Breit-Wheeler and Schwinger processes in XFEL fields is analyzed. The impact of a laser beam on high-energy photon collisions with XFEL photons consists in a phase space redistribution of the pairs emerging in the Breit-Wheeler sub-process. We provide numerical examples of the differential cross section for parameters related to the European XFEL. Analogously, the Schwinger type pair production in pulsed fields with oscillating components referring to a superposition of optical laser and XFEL frequencies is evaluated. The residual phase space distribution of created pairs is sensitive to the pulse shape and may differ significantly from transiently achieved mode occupations. R. Müller, D. Seipt, S. Fritzsche, and A.
Surzhykov Effect of bound-state dressing in laser-assisted radiative recombination Physical Review A 92, 053426 (2015) Abstract: We present a theoretical study on the recombination of a free electron into the ground state of a hydrogenlike ion in the presence of an external laser field. Emphasis is placed on the effects caused by the laser dressing of the residual ionic bound state. To investigate how this dressing affects the total and angle-differential cross section of laser-assisted radiative recombination (LARR), we apply first-order perturbation theory and the separable Coulomb-Volkov continuum ansatz. Using this approach, detailed calculations are performed for low-Z hydrogenlike ions and laser intensities in the range from I_L = 10^12 to 10^13 W/cm^2. It is seen that the total cross section as a function of the laser intensity is remarkably affected by the bound-state dressing. Moreover, the laser dressing becomes manifest as asymmetries in the angular distribution and the (energy) spectrum of the emitted recombination photons. S. Stock, A. Surzhykov, S. Fritzsche, and D. Seipt Compton scattering of twisted light: Angular distribution and polarization of scattered photons Physical Review A 92, 013401 (2015) Abstract: Compton scattering of twisted photons is investigated within a nonrelativistic framework using first-order perturbation theory. We formulate the problem in the density-matrix theory, which enables one to gain new insights into scattering processes of twisted particles by exploiting the symmetries of the system. In particular, we analyze how the angular distribution and polarization of the scattered photons are affected by the parameters of the initial beam, such as the opening angle and the projection of orbital angular momentum. We present analytical and numerical results for the angular distribution and the polarization of Compton scattered photons for initially twisted light and compare them with the standard case of plane-wave light. V. Serbo, I.
P. Ivanov, S. Fritzsche, D. Seipt, and A. Surzhykov Scattering of twisted relativistic electrons by atoms Physical Review A 92, 012705 (2015) A. Otto, D. Seipt, D. Blaschke, S. A. Smolyansky, and B. Kämpfer Dynamical Schwinger process in a bifrequent electric field of finite duration: Survey on amplification Physical Review D 91, 105018 (2015) Abstract: The electron-positron pair production due to the dynamical Schwinger process in a slowly oscillating strong electric field is enhanced by the superposition of a rapidly oscillating weaker electric field. A systematic account of the enhancement by the resulting bifrequent field is provided for the residual phase space distribution. The enhancement is explained by a severe reduction of the suppression in both the tunneling and multiphoton regimes. D. Seipt, S. G. Rykovanov, A. Surzhykov, and S. Fritzsche Narrowband inverse Compton scattering x-ray sources at high laser intensities Physical Review A 91, 033402 (2015) Abstract: Narrowband x- and γ-ray sources based on the inverse Compton scattering of laser pulses suffer from a limitation of the allowed laser intensity due to the onset of nonlinear effects that increase their bandwidth. It has been suggested that laser pulses with a suitable frequency modulation could compensate this ponderomotive broadening and reduce the bandwidth of the spectral lines, which would allow one to operate narrowband Compton sources in the high-intensity regime. In this paper we therefore present the theory of nonlinear Compton scattering in a frequency-modulated intense laser pulse. We systematically derive the optimal frequency modulation of the laser pulse from the scattering matrix element of nonlinear Compton scattering, taking into account the electron spin and recoil. We show that, for some particular scattering angle, an optimized frequency modulation completely cancels the ponderomotive broadening for all harmonics of the backscattered light. 
We also explore how sensitively this compensation depends on the electron-beam energy spread and emittance, as well as the laser focusing. A. Surzhykov, D. Seipt, V. G. Serbo, and S. Fritzsche Interaction of twisted light with many-electron atoms and ions Physical Review A 91, 013403 (2015) Abstract: The excitation of many-electron atoms and ions by twisted light has been studied within the framework of the density-matrix theory and Dirac's relativistic equation. Special attention is paid to the magnetic sublevel population of excited atomic states as described by means of the alignment parameters. General expressions for the alignment of the excited states are obtained under the assumption that the photon beam, prepared as a coherent superposition of two twisted Bessel states, irradiates a macroscopic target. We demonstrate that for this case the population of excited atoms can be sensitive to both the transverse momentum and the (projection of the) total angular momentum of the incident radiation. While the expressions are general and can be employed to describe the photoexcitation of any atom, independent of its shell structure and number of electrons, we performed calculations for the 3s → 3p transition in sodium. These calculations indicate that the “twistedness” of incoming radiation can lead to a measurable change in the alignment of the excited ^2P_3/2 state as well as the angular distribution of the subsequent fluorescence emission. A. Otto, D. Seipt, D. Blaschke, B. Kämpfer, and S. Smolyansky Lifting shell structures in the dynamically assisted Schwinger effect in periodic fields Physics Letters B 740, 335 (2015) Abstract: The dynamically assisted pair creation (Schwinger effect) is considered for the superposition of two periodic electric fields acting in a finite time interval. We find a strong enhancement by orders of magnitude caused by a weak field with a frequency that is a multiple of the strong-field frequency.
The strong low-frequency field leads to shell structures which are lifted by the weaker high-frequency field. The resonance-type amplification refers to a new, monotonically increasing mode, often hidden in a strongly oscillatory transient background, which disappears when the background fields are smoothly switched off, thus leaving a pronounced residual shell structure in phase space. D. Seipt, A. Surzhykov, and S. Fritzsche Structured x-ray beams from twisted electrons by inverse Compton scattering of laser light Physical Review A 90, 012118 (2014) Abstract: The inverse Compton scattering of laser light on high-energy twisted electrons is investigated with the aim to construct spatially structured x-ray beams. In particular, we analyze how the properties of the twisted electrons, such as the topological charge and aperture angle of the electron Bessel beam, affect the energy and angular distribution of scattered x rays. We show that with suitably chosen initial twisted electron states one can synthesize tailor-made x-ray beam profiles with a well-defined spatial structure, in a way not possible with ordinary plane-wave electron beams. D. Seipt, and B. Kämpfer Laser-assisted Compton scattering of x-ray photons Physical Review A 89, 023433 (2014) Abstract: The Compton scattering of x-ray photons, assisted by a short intense optical laser pulse, is discussed. The differential scattering cross section reveals the interesting feature that the main Klein-Nishina line is accompanied by a series of side lines forming a broad plateau where up to O(10^3) laser photons participate simultaneously in a single scattering event. An analytic formula for the width of the plateau is given. Due to the nonlinear mixing of x-ray and laser photons, a frequency-dependent rotation of the polarization of the final-state x-ray photons relative to the scattering plane emerges.
A consistent description of the scattering process with short laser pulses requires working with x-ray pulses. An experimental investigation can be accomplished, e.g., at LCLS or the European XFEL, in the near future.

A. Jochmann, A. Irman, M. Bussmann, J. P. Couperus, T. E. Cowan, A. D. Debus, M. Kuntzsch, K. W. D. Ledingham, U. Lehnert, R. Sauerbrey, H. P. Schlenvoigt, D. Seipt, Th. Stöhlker, D. B. Thorn, S. Trotsenko, A. Wagner, and U. Schramm
High Resolution Energy-Angle Correlation Measurement of Hard X Rays from Laser-Thomson Backscattering
Physical Review Letters 111, 114803 (2013)
Abstract: Thomson backscattering of intense laser pulses from relativistic electrons not only allows for the generation of bright x-ray pulses but also for the investigation of the complex particle dynamics at the interaction point. For this purpose, a complete spectral characterization of a Thomson source powered by a compact linear electron accelerator is performed with unprecedented angular and energy resolution. A rigorous statistical analysis comparing experimental data to 3D simulations enables, e.g., the extraction of the angular distribution of electrons with 1.5% accuracy and, in total, provides predictive capability for the future high-brightness hard x-ray source PHOENIX (photon electron collider for narrow bandwidth intense x rays) and potential gamma-ray sources.

D. Seipt, and B. Kämpfer
Asymmetries of azimuthal photon distributions in nonlinear Compton scattering in ultrashort intense laser pulses
Physical Review A 88, 012127 (2013)
Abstract: Nonlinear Compton scattering in ultrashort intense laser pulses is discussed with the focus on angular distributions of the emitted photon energy. This is an observable which is easily accessible experimentally. Asymmetries of the azimuthal distributions are predicted for both linear and circular polarization.
We present a systematic survey of the influence of the laser intensity, the carrier-envelope phase, and the laser polarization on the emission spectra for single-cycle and few-cycle laser pulses. For linear polarization, the dominant direction of the emission changes from a pattern perpendicular to the laser polarization at low intensity to dominantly parallel emission for high-intensity laser pulses.
Physics LibreTexts

3.7 Path Integrals

Huygens' Picture of Wave Propagation

If a point source of light is switched on, the wavefront is an expanding sphere centered at the source. Huygens suggested that this could be understood if at any instant in time each point on the wavefront was regarded as a source of secondary wavelets, and the new wavefront a moment later was to be regarded as built up from the sum of these wavelets. For a light shining continuously, this process just keeps repeating.

What use is this idea? For one thing, it explains refraction—the change in direction of a wavefront on entering a different medium, such as a ray of light going from air into glass. If the light moves more slowly in the glass, velocity \(v\) instead of \(c\), with \(v<c\), then Huygens' picture explains Snell's Law, that the ratio of the sines of the angles to the normal of incident and transmitted beams is constant, and in fact is the ratio \(c/v\). This is evident from the diagram below: in the time the wavelet centered at \(A\) has propagated to \(C\), that from \(B\) has reached \(D\), the ratio of lengths \(AC/BD\) being \(c/v\). But the angles in Snell's Law are in fact the angles \(ABC\), \(BCD\), and those right-angled triangles have a common hypotenuse \(BC\), from which the Law follows.

Fermat's Principle of Least Time

We will now temporarily forget about the wave nature of light, and consider a narrow ray or beam of light shining from point \(A\) to point \(B\), where we suppose \(A\) to be in air, \(B\) in glass. Fermat showed that the path of such a beam is given by the Principle of Least Time: a ray of light going from \(A\) to \(B\) by any other path would take longer. How can we see that? It's obvious that any deviation from a straight line path in air or in the glass is going to add to the time taken, but what about moving slightly the point at which the beam enters the glass?
Where the air meets the glass, the two rays, separated by a small distance \(CD = d\) along that interface, will look parallel. (Feynman gives a nice illustration: a lifeguard on a beach spots a swimmer in trouble some distance away, in a diagonal direction. He can run three times faster than he can swim. What is the quickest path to the swimmer?)

Moving the point of entry up a small distance \(d\), the light has to travel an extra \(d\sin\theta_1\) in air, but a distance less by \(d\sin\theta_2\) in the glass, giving an extra travel time \(\Delta t=d\sin\theta_1/c-d\sin\theta_2/v\). For the classical path, Snell's Law gives \(\sin\theta_1/\sin\theta_2=n=c/v\), so \(\Delta t=0\) to first order. But if we look at a series of possible paths, each a small distance \(d\) away from the next at the point of crossing from air into glass, \(\Delta t\) becomes of order \(d/c\) away from the classical path.

Suppose now we imagine that the light actually travels along all these paths with about equal amplitude. What will be the total contribution of all the paths at \(B\)? Since the times along the paths are different, the signals along the different paths will arrive at \(B\) with different phases, and to get the total wave amplitude we must add a series of unit \(2D\) vectors, one from each path. (Representing the amplitude and phase of the wave by a complex number for convenience—for a real wave, we can take the real part at the end.) When we map out these unit \(2D\) vectors, we find that in the neighborhood of the classical path, the phase varies little, but as we go away from it the phase spirals more and more rapidly, so those paths interfere amongst themselves destructively.
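The least-time condition derived above (\(\Delta t=0\) at the classical entry point) is easy to check numerically. A minimal sketch, with hypothetical geometry and a refractive index of 1.5, that brute-force minimizes the travel time over the entry point and recovers Snell's law:

```python
import numpy as np

# Hypothetical geometry: A = (0, 1) in air, B = (2, -1) in glass, interface along y = 0.
c = 1.0          # speed in air (arbitrary units)
v = c / 1.5      # speed in glass, so n = c/v = 1.5

x = np.linspace(0.0, 2.0, 200_001)   # candidate entry points on the interface
t = np.sqrt(x**2 + 1.0) / c + np.sqrt((2.0 - x)**2 + 1.0) / v  # travel time A -> (x, 0) -> B

x_star = x[np.argmin(t)]                              # least-time entry point
sin1 = x_star / np.hypot(x_star, 1.0)                 # sine of the angle of incidence
sin2 = (2.0 - x_star) / np.hypot(2.0 - x_star, 1.0)   # sine of the angle of refraction

print(sin1 / sin2)   # ≈ 1.5 = c/v, i.e. Snell's law
```

The grid minimum reproduces the stationary-time entry point to within the grid spacing; the ratio of sines comes out equal to \(c/v\) without Snell's law ever being imposed.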
To formulate this a little more precisely, let us assume that some close by path has a phase difference \(\varphi\) from the least time path, and goes from air to glass a distance \(x\) away from the least time path: then for these close by paths, \(\varphi=ax^2\), where \(a\) depends on the geometric arrangement and the wavelength. From this, the sum over the close by paths is an integral of the form \(\int e^{iax^2}dx\). (We are assuming the wavelength of light is far less than the size of the equipment.) This is a standard integral, its value is \(\sqrt{\pi/(-ia)}\), all its weight is concentrated in a central area of width \(1/\sqrt{a}\), exactly as for the real function \(e^{-ax^2}\).

This is the explanation of Fermat's Principle—only near the path of least time do paths stay approximately in phase with each other and add constructively. So this classical path rule has an underlying wave-phase explanation. In fact, the central role of phase in this analysis is sometimes emphasized by saying the light beam follows the path of stationary phase. Of course, we're not summing over all paths here—we assume that the path in air from the source to the point of entry into the glass is a straight line, clearly the subpath of stationary phase.

Classical Mechanics: The Principle of Least Action

Confining our attention for the moment to the mechanics of a single nonrelativistic particle in a potential, with Lagrangian \(L=T-V\), the action \(S\) is defined by \[ S=\int_{t_1}^{t_2}L(x,\dot{x})dt. \tag{3.7.1}\] Newton's Laws of Motion can be shown to be equivalent to the statement that a particle moving in the potential from \(A\) at \(t_1\) to \(B\) at \(t_2\) travels along the path that minimizes the action.
This is called the Principle of Least Action: for example, the parabolic path followed by a ball thrown through the air minimizes the integral along the path of the action \(T-V\) where \(T\) is the ball's kinetic energy, \(V\) its gravitational potential energy (neglecting air resistance, of course). Note here that the initial and final times are fixed, so since we'll be summing over paths with different lengths, necessarily the particle's speed will be different along the different paths. In other words, it will have different energies along the different paths.

With the advent of quantum mechanics, and the realization that any particle, including a thrown ball, has wavelike properties, the rather mysterious Principle of Least Action looks a lot like Fermat's Principle of Least Time. Recall that Fermat's Principle works because the total phase along a path is the integrated time elapsed along the path, and for a path where that integral is stationary for small path variations, neighboring paths add constructively, and no other sets of paths do. If the Principle of Least Action has a similar explanation, then the wave amplitude for a particle going along a path from \(A\) to \(B\) must have a phase equal to some constant times the action along that path. If this is the case, then the observed path followed will be just that of least action, or, more generally, of stationary action, for only near that path will the amplitudes add constructively, just as in Fermat's analysis of light rays.

Going from Classical Mechanics to Quantum Mechanics

Of course, if we write a phase factor for a path \(e^{icS}\) where \(S\) is the action for the path and \(c\) is some constant, \(c\) must necessarily have the dimensions of inverse action. Fortunately, there is a natural candidate for the constant \(c\). The wave nature of matter arises from quantum mechanics, and the fundamental constant of quantum mechanics, Planck's constant, is in fact a unit of action.
(Recall action has the same dimensions as \(Et\), and therefore the same as \(px\), manifestly the same as angular momentum.) It turns out that the appropriate path phase factor is \(e^{iS/\hbar}\).

That the phase factor is \(e^{iS/\hbar}\), rather than \(e^{iS/h}\), say, can be established by considering the double slit experiment for electrons (Peskin page 277). This is analogous to the light waves going from a source in air to a point in glass, except now we have vacuum throughout (electrons don't get far in glass), and we close down all but two of the paths.

Suppose electrons from the top slit, Path I, go a distance \(D\) to the detector, those from the bottom slit, Path II, go \(D+d\), with \(d\ll D\). Then if the electrons have wavelength \(\lambda\) we know the phase difference at the detector is \(2\pi d/\lambda\). To see this from our formula for summing over paths, on Path I the action \(S=Et=\frac{1}{2}mv^2_1t\), and \(v_1=D/t\), so \[S_1=\frac{1}{2}mD^2/t. \tag{3.7.2}\] For Path II, we must take \(v_2=(D+d)/t\). Keeping only terms of leading order in \(d/D\), the action difference between the two paths is \[ S_2-S_1=mDd/t \tag{3.7.3}\] so the phase difference is \[ \frac{S_2-S_1}{\hbar} =\frac{mvd}{\hbar}=\frac{2\pi pd}{h}=\frac{2\pi d}{\lambda}. \tag{3.7.4}\] This is the known correct result, and this fixes the constant multiplying the action in the expression for the path phase.

In quantum mechanics, such as the motion of an electron in an atom, we know that the particle does not follow a well-defined path, in contrast to classical mechanics. Where does the crossover to a well-defined path take place? Taking the simplest possible case of a free particle (no potential) of mass \(m\) moving at speed \(v\), the action along a straight line path taking time \(t\) from \(A\) to \(B\) is \(\frac{1}{2}mv^2t\).
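The two-slit phase result above can be cross-checked directly. A short sketch, with hypothetical slit numbers, comparing \((S_2-S_1)/\hbar\) against the wave-optics answer \(2\pi d/\lambda\):

```python
import numpy as np

hbar = 1.054_571_817e-34        # J s
h = 2.0 * np.pi * hbar
m = 9.109_383_7e-31             # electron mass, kg

# Hypothetical numbers for the two-slit geometry
v = 1.0e6                        # electron speed, m/s
D = 1.0                          # path length of Path I, m
d = 5.0e-9                       # extra path length of Path II, m
t = D / v                        # common flight time

S1 = 0.5 * m * (D / t)**2 * t          # action along Path I
S2 = 0.5 * m * ((D + d) / t)**2 * t    # action along Path II
dphi_action = (S2 - S1) / hbar         # phase difference from the path phase factor

lam = h / (m * v)                      # de Broglie wavelength
dphi_wave = 2.0 * np.pi * d / lam      # standard two-slit phase difference

print(dphi_action, dphi_wave)          # agree to leading order in d/D
```

The two numbers differ only by the neglected \(O(d/D)\) correction, confirming that \(e^{iS/\hbar}\) reproduces the known interference pattern.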
If this action is of order Planck's constant \(h\), then the phase factor will not oscillate violently on moving to different paths, and a range of paths will contribute. In other words, quantum rather than classical behavior dominates when \(\frac{1}{2}mv^2t\) is of order \(h\). But \(vt\) is the path length \(L\), and \(h/mv\) is the wavelength \(\lambda\), so we conclude that we must use quantum mechanics when the wavelength \(h/p\) is significant compared with the path length. Interference sets in when the difference in path actions is of order \(h\), so in the atomic regime many paths must be included.

Feynman (in Feynman and Hibbs) gives a nice picture to help think about summing over paths. He begins with the double slit experiment for an electron. We suppose the electron is emitted from some source \(A\) on the left, and we look for it at a point \(B\) on a screen to the right. In the middle is a thin opaque barrier with the familiar two slits. Evidently, to find the amplitude for the electron to reach \(B\) we sum over two paths. Now suppose we add another two-slit barrier. We have to sum over four paths. Now add another. Next, replace the two slits in each barrier by several slits. We must sum over a multitude of paths! Finally, increase the number of barriers to some large number \(N\), and at the same time increase the number of slits to the point that there are no barriers left. We are left with a sum over all possible paths through space from \(A\) to \(B\), multiplying each path by the appropriate action phase factor. This is reminiscent of the original wave propagation picture of Huygens: if one pictures it at successive time intervals of picoseconds, say, from each point on the wavefront waves go out 3 mm in all directions, then in the next time interval each of those sprouts more waves in all directions. One could write this as a sum over all zigzag paths with random 3 mm steps.
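The crossover criterion \(S\sim h\) is easy to make concrete. A rough order-of-magnitude sketch with made-up but representative numbers for a thrown ball and for an electron on atomic scales:

```python
hbar = 1.054_571_817e-34   # J s

# A thrown ball (hypothetical numbers): classical through and through
m_ball, v_ball, t_ball = 0.2, 10.0, 1.0        # kg, m/s, s
S_ball = 0.5 * m_ball * v_ball**2 * t_ball     # action ~ 10 J s

# An electron over atomic scales: rough speed and orbital period in hydrogen
m_e, v_e, t_e = 9.11e-31, 2.2e6, 1.5e-16       # kg, m/s, s
S_e = 0.5 * m_e * v_e**2 * t_e

print(S_ball / hbar)   # enormous: only the stationary path survives
print(S_e / hbar)      # of order unity: many paths contribute
```

For the ball, \(S/\hbar\) is of order \(10^{35}\), so the phase spirals destructively for any deviation from the classical path; for the atomic electron it is of order a few, which is why no well-defined trajectory exists there.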
In fact, the sum over paths is even more daunting than Feynman's picture suggests. All the paths going through these many slitted barriers are progressing in a forward direction, from \(A\) towards \(B\). Actually, if we're summing over all paths, we should be including the possibility of paths zigzagging backwards and forwards as well, eventually arriving at \(B\). We shall soon see how to deal systematically with all possible paths.

Review: Standard Definition of the Free Electron Propagator

As a warm up exercise, consider an electron confined to one dimension, with no potential present, moving from \(x'\) at time 0 to \(x\) at time \(T\). We'll follow Feynman in using \(T\) for the final time, so we can keep \(t\) for the continuous (albeit sometimes discretized) time variable over the interval 0 to \(T\). (As explained previously, when we write that the electron is initially at \(x'\), we mean its wave function is a normalizable state, such as a very narrow Gaussian, centered at \(x'\). The propagator then represents the probability amplitude, that is, the wave function, at point \(x\) after the given time \(T\).) The propagator is given by \[ |\psi(t=T)\rangle =U(T)|\psi(t=0)\rangle ,\tag{3.7.5}\] or, in Schrödinger wave function notation, \[ \psi(x,T)=\int U(x,T; x',0)\psi(x′,0) dx′. \tag{3.7.6}\] It is clear that for this to make sense, as \(T\to0\), \(U(x,T;x',0)\to\delta(x-x′).\)

In the lecture on propagators, we found \[ \langle x|U(T,0)|x′\rangle  =\int_{-\infty}^{\infty}e^{-i\hbar k^2T/2m}\frac{dk}{2\pi}\langle x|k\rangle \langle k|x′\rangle =\int_{-\infty}^{\infty}e^{-i\hbar k^2T/2m}\frac{dk}{2\pi}e^{ik(x-x′)}=\sqrt{\frac{m}{2\pi\hbar iT}}e^{im(x-x′)^2/2\hbar T}. \tag{3.7.7}\]

Summing over Paths

Let us formulate the sum over paths for this simplest one-dimensional case, the free electron, more precisely. Each path is a continuous function of time \(x(t)\) in the time interval \(0\le t\le T\), with boundary conditions \(x(0)=x′, x(T)=x\).
Each path contributes a term \(e^{iS/\hbar}\), where \[ S[x(t)]=\int_0^T L(x(t),\dot{x}(t))dt=\int_0^T \frac{1}{2}m\dot{x}^2(t)dt \tag{3.7.8}\] (for the free electron case) evaluated along that path. The integral over all paths is written: \[ \langle x|U(T,0)|x′\rangle =\int D[x(t)] e^{iS[x(t)]/\hbar} \tag{3.7.9}\] This rather formal statement begs the question of how, exactly, we perform the sum over paths: what is the appropriate measure in the space of paths?

A natural approach is to measure the paths in terms of their deviation from the classical path, since we know that path dominates in the classical limit. The classical path for the free electron is just the straight line from \(x'\) to \(x\), traversed at constant velocity, since there are no forces acting on the electron. We write \[x(t)=x_{cl}(t)+y(t) \tag{3.7.10}\] where \[ x_{cl}(0)=x′, \quad x_{cl}(T)=x \tag{3.7.11}\] and therefore \[ y(0)=0, \quad y(T)=0. \tag{3.7.12}\] Then \[ \begin{matrix}\langle x|U(T,0)|x′\rangle =\int D[y(t)] e^{iS[x_{cl}(t)+y(t)]/\hbar} ,\\ S[x_{cl}(t)+y(t)]=\int_0^T \frac{1}{2}m(\dot{x}_{cl}(t)+\dot{y}(t))^2dt \\ =S[x_{cl}(t)]+\int_0^T m\dot{x}_{cl}(t)\dot{y}(t)dt+\int_0^T \frac{1}{2}m\dot{y}^2(t)dt. \end{matrix} \tag{3.7.13}\]

The middle term on the bottom line is zero, as it has to be since it is a linear term in the deviation from the minimum path. To see this explicitly, one can integrate by parts: the end terms are zero, from the boundary condition on \(y\), and the other term is the acceleration of the particle along the classical path, which is zero. Therefore \[ \langle x|U(T,0)|x′\rangle =e^{iS[x_{cl}(t)]/\hbar} \int D[y(t)] e^{iS[y(t)]/\hbar} \tag{3.7.14}\] The \(y\)-paths, being the deviation from the classical path from \(x'\) to \(x\), necessarily begin and end at the \(y\)-origin, since all paths summed over go from \(x'\) to \(x\).
The classical path, motion from \(x'\) to \(x\) at a constant speed \(v=(x-x′)/T\), has action \(Et\), with \(E\) the classical energy \(\frac{1}{2}mv^2\), so \[ U(x,T;x',0)=A(T)e^{im(x-x′)^2/2\hbar T}. \tag{3.7.15}\] This gives the correct exponential term. The prefactor \(A\), representing the sum over the deviation paths \(y(t)\), cannot depend on \(x\) or \(x'\), and is fixed by the requirement that as \(T\) goes to zero, \(U\) must approach a \(\delta\)-function, giving the prefactor found previously.

Proving that the Sum-Over-Paths Definition of the Propagator is Equivalent to the Sum-Over-Eigenfunctions Definition

The first step is to construct a practical method of summing over paths. Let us begin with a particle in one dimension going from \(x'\) at time 0 to \(x\) at time \(T\). The paths can be enumerated in a crude way, reminiscent of Riemann integration: divide the time interval 0 to \(T\) into \(N\) equal intervals each of duration \(\varepsilon\), so \(t_0=0, t_1=t_0+\varepsilon, t_2=t_0+2\varepsilon,…, t_N=T\). Next, define a particular path from \(x'\) to \(x\) by specifying the position of the particle at each of the intermediate times, that is to say, it is at \(x_1\) at time \(t_1\), \(x_2\) at time \(t_2\) and so on. Then, simplify the path by putting in straight line bits connecting \(x_0\) to \(x_1\), \(x_1\) to \(x_2\), etc. The justification is that in the limit of \(\varepsilon\) going to zero, taken at the end, this becomes a true representation of the path.

The next step is to sum over all possible paths with a factor \(e^{iS/\hbar}\) for each one. The sum is accomplished by integrating over all possible values of the intermediate positions \(x_1,x_2,…,x_{N-1}\), and then taking \(N\) to infinity.
The action on the zigzag path is \[ S=\int_0^T dt\left(\frac{1}{2}m\dot{x}^2-V(x)\right)\to\sum_i \left[ \frac{m(x_{i+1}-x_i)^2}{2\varepsilon}-\varepsilon V\left(\frac{x_{i+1}+x_i}{2}\right) \right] \tag{3.7.16}\] We define the “integral over paths” written \(\int D[x(t)]\) by \[ \lim_{\begin{matrix} \varepsilon\to0 \\ N\to\infty \end{matrix}}\frac{1}{B(\varepsilon)}\int_{-\infty}^{\infty}\int \dots \int \frac{dx_1}{B(\varepsilon)} \dots  \frac{dx_{N-1}}{B(\varepsilon)} \tag{3.7.17}\] where we haven't yet figured out what the overall weighting factor \(B(\varepsilon)\) is going to be. (It is standard convention to have that extra \(B(\varepsilon)\) outside.)

To summarize: the propagator \(U(x,T;x',0)\) is the contribution to the wave function at \(x\) at time \(t=T\) from that at \(x'\) at the earlier time \(t=0\). Consequently, \(U(x,T;x',0)\) regarded as a function of \(x\), \(T\) is, in fact, nothing but the Schrödinger wave function \(\psi(x,T)\), and therefore must satisfy Schrödinger's equation \[ i\hbar \frac{\partial}{\partial T}U(x,T;x',0)=\left( -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}+V(x) \right) U(x,T;x',0).\tag{3.7.18}\]

We shall now show that defining \(U(x,T;x',0)\) as a sum over paths, it does in fact satisfy Schrödinger's equation, and furthermore goes to a \(\delta\)-function as time goes to zero. \[ U(x,T;x',0)=\int D[x(t)]e^{iS[x(t)]/\hbar} =\lim_{\begin{matrix} \varepsilon\to0 \\ N\to\infty \end{matrix}}\frac{1}{B(\varepsilon)}\int_{-\infty}^{\infty}\int \dots \int \frac{dx_1}{B(\varepsilon)} \dots  \frac{dx_{N-1}}{B(\varepsilon)}e^{iS(x_1,  \dots   ,x_{N-1})/\hbar} . \tag{3.7.19}\] We shall establish this equivalence by proving that it satisfies the same differential equation. It clearly has the same initial value—as \(t′\) and \(t\) coincide, it goes to \(\delta(x-x′)\) in both representations.
To differentiate \(U(x,T;x',0)\) with respect to \(T\), we isolate the integral over the last path variable, \(x_{N-1}\): \[ U(x,T;x',0)=\int \frac{dx_{N-1}}{B(\varepsilon)}e^{\left[ \frac{im(x-x_{N-1})^2}{2\hbar \varepsilon}-\frac{i}{\hbar} \varepsilon V(\frac{x+x_{N-1}}{2})\right] }U(x_{N-1},T-\varepsilon;x',0) \tag{3.7.20}\] Now in the limit \(\varepsilon\) going to zero, almost all the contribution to this integral must come from close to the point of stationary phase, that is, \(x_{N-1}=x\). In that limit, we can take \(U(x_{N-1},T-\varepsilon;x',0)\) to be a slowly varying function of \(x_{N-1}\), and replace it by the leading terms in a Taylor expansion about \(x\), so \[ U(x,T;x',0)=\int \frac{dx_{N-1}}{B(\varepsilon)}e^{\frac{im(x-x_{N-1})^2}{2\hbar \varepsilon}} \left(1-\frac{i}{\hbar} \varepsilon V\left( \frac{x+x_{N-1}}{2}\right) \right) \left( U(x,T-\varepsilon)+(x_{N-1}-x)\frac{\partial U}{\partial x}+\frac{(x_{N-1}-x)^2}{2}\frac{\partial^2U}{\partial x^2}\right) \tag{3.7.21}\] The \(x_{N-1}\) dependence in the potential \(V\) can be neglected in leading order—that leaves standard Gaussian integrals, and \[ U(x,T;x',0)=\frac{1}{B(\varepsilon)} \sqrt{\frac{2\pi\hbar \varepsilon}{-im}} \left( 1-\frac{i\varepsilon}{\hbar} V(x)+\frac{i\varepsilon\hbar}{2m}\frac{\partial^2}{\partial x^2}\right) U(x,T-\varepsilon;x',0). \tag{3.7.22}\] Taking the limit of \(\varepsilon\) going to zero fixes our unknown normalizing factor, \[ B(\varepsilon)=\sqrt{\frac{2\pi\hbar \varepsilon}{-im}} \tag{3.7.23}\] thus establishing that the propagator derived from the sum over paths obeys Schrödinger's equation, and consequently gives the same physics as the conventional approach.

Explicit Evaluation of the Path Integral for the Free Particle Case

The required correspondence to the Schrödinger equation result fixes the unknown normalizing factor, as we've just established.
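The closed-form free propagator quoted earlier can itself be checked against the free Schrödinger equation numerically. A small sketch in natural units (\(m=\hbar=1\)), using finite differences for both derivatives:

```python
import numpy as np

m = 1.0
hbar = 1.0

def U(x, T, xp=0.0):
    """Closed-form free-particle propagator (principal branch of the square root)."""
    return np.sqrt(m / (2j * np.pi * hbar * T)) * np.exp(1j * m * (x - xp)**2 / (2.0 * hbar * T))

x, T = 0.7, 1.3          # an arbitrary interior point
hx, hT = 1e-4, 1e-6      # finite-difference steps

lhs = 1j * hbar * (U(x, T + hT) - U(x, T - hT)) / (2.0 * hT)    # i*hbar dU/dT
d2U = (U(x + hx, T) - 2.0 * U(x, T) + U(x - hx, T)) / hx**2     # d^2 U / dx^2
rhs = -(hbar**2) / (2.0 * m) * d2U                              # free Hamiltonian acting on U

print(abs(lhs - rhs) / abs(rhs))   # tiny: U solves the free Schrödinger equation
```

The two sides agree to finite-difference accuracy, which is the numerical counterpart of the statement that the path-integral propagator and the eigenfunction-sum propagator obey the same differential equation with the same initial value.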
This means we are now in a position to evaluate the sum over paths explicitly, at least in the free particle case, and confirm the somewhat hand-waving result given above. The sum over paths is \[ U(x,T;x',0)=\int D[x(t)]e^{iS[x(t)]/\hbar} =\lim_{\begin{matrix} \varepsilon\to0 \\ N\to\infty \end{matrix}}\frac{1}{B(\varepsilon)}\int_{-\infty}^{\infty}\int \dots \int \frac{dx_1}{B(\varepsilon)} \dots \frac{dx_{N-1}}{B(\varepsilon)}e^{\sum_i \frac{im(x_{i+1}-x_i)^2}{2\hbar \varepsilon}}. \tag{3.7.25}\]

Let us consider the sum for small but finite \(\varepsilon\). In particular, we'll divide up the interval first into halves, then quarters, and so on, into \(2^n\) small intervals. The reason for this choice will become clear. Now, we'll integrate over half the paths: those for \(i\) odd, leaving the even \(x_i\) values fixed for the moment. The integrals are of the form \[ \begin{matrix} \int_{-\infty}^{\infty}dye^{(ia/2)[(x-y)^2+(y-z)^2]}=e^{(ia/2)(x^2+z^2)}\int_{-\infty}^{\infty} dye^{iay^2-iay(x+z)} \\ =e^{(ia/2)(x^2+z^2)}\sqrt{\frac{\pi}{-ia}}e^{(-ia/4)(x+z)^2}=\sqrt{\frac{\pi}{-ia}}e^{(ia/4)(x-z)^2} \end{matrix} \tag{3.7.26}\] using the standard result \(\int_{-\infty}^{\infty} dxe^{-ax^2+bx}=\sqrt{\frac{\pi}{a}}e^{b^2/4a}\).

Now put in the value \(a=m/\hbar \varepsilon\): the factor \(\sqrt{\frac{\pi}{-ia}}=\sqrt{\frac{\pi\hbar \varepsilon}{-im}}\) cancels the normalization factor \(B(\varepsilon)=\sqrt{\frac{2\pi\hbar \varepsilon}{-im}}\) except for the factor of 2 inside the square root. But we need that factor of 2, because we're left with an integral—over the remaining even numbered paths—exactly like the one before except that the time interval has doubled, both in the normalization factor and in the exponent, \(\varepsilon\to2\varepsilon\). So we're back where we started. We can now repeat the process, halving the number of paths again, then again, until finally we have the same expression but with only the fixed endpoints appearing.
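The Gaussian "halving" identity of Eq. (3.7.26) can be verified numerically. A sketch that gives \(a\) a small positive imaginary part so the oscillatory integral converges absolutely; the identity itself is unchanged by this regularization:

```python
import numpy as np

a = 1.0 + 0.05j          # small positive imaginary part: exp(i*a*y^2) now decays at large |y|
x, z = 0.4, -0.9         # arbitrary fixed even-numbered neighbors

y = np.linspace(-60.0, 60.0, 1_200_001)
integrand = np.exp(0.5j * a * ((x - y)**2 + (y - z)**2))
lhs = integrand.sum() * (y[1] - y[0])              # Riemann sum for the y-integral

# right-hand side of Eq. (3.7.26), principal branch of the square root
rhs = np.sqrt(np.pi / (-1j * a)) * np.exp(0.25j * a * (x - z)**2)

print(abs(lhs - rhs))    # ~0: integrating out y halves the number of path variables
```

The match confirms the key step of the argument: integrating out one intermediate point reproduces the same Gaussian kernel with the time step doubled.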
correct Bohr model of Helium

Quantum mechanics is wrong.

Youhei Tsubono, Japan
New Summary 2017
Criticize the present physics. (17/ 10/20 )

Table of contents

Quantum mechanics is nonsense. [ A single electron can pass both slits at the same time !? ] (Fig.1) Quantum mechanics = Many worlds = Fantasy.

The present physics is filled with unreal concepts such as parallel worlds, even in academic organizations. Why did we fall into such a miserable situation ? When did we stray from the right path ? In quantum mechanics, the Schrodinger wavefunction only gives a vague probability density of each electron. Transistors use only classical mechanics. This dubious wavefunction is the origin of strange ideas such as many-worlds, where an electron can be in all possible states at the same time. And it misleads people and entangles them in an endless debate about "consciousness", which is philosophy rather than science.

The theory of everything = extra-dimensions ! [ 10-dimensional string theory is the only unified theory. ] (Fig.2) Quantum mechanics + Einstein relativity = string theory.

A theory of everything is a unified theory of quantum mechanics and Einstein's general relativity. The only theory of everything accepted now is string ( M ) theory, which relies on unreal extra-dimensions. They believe the quixotic idea that our universe is made of 10 ( or 11 ) dimensional spacetime instead of 4 (= x,y,z + time ) ! Unfortunately, this fanciful string theory is the only mainstream unified theory, so it monopolizes all important academic positions. It means unless you believe this string theory, you'll surely be kicked out of academia and cannot be a professor, let alone famous. Surprisingly, this only theory of everything depends on wrong math ( 1 + 2 + 3 + … = ∞ = -1/12 ) and 10^500 different worlds, which cannot predict anything.

The present physics believes parallel universes ! [ Big Bang → multi-universes were born ? ] (Fig.3) Multiverse = Parallel universes are rampant.
Surprisingly, the present mainstream physicists believe in a fantasy multiverse where many universes exist parallel to each other. Even first-rate physicists are no exception. The present cosmology is based on fanciful faster-than-light expansion of the universe. So they claim the Big Bang 13.8 billion years ago spawned many bubble universes. One of them is the universe where we live ? Of course, these quixotic ideas are all speculation, lacking physical evidence. In spite of it, many scientists all over the world waste their time on this fiction ! Why has the present science become so miserable ?

Angular momentum zero is impossible. [ Quantum mechanics includes angular momentum zero. ] (Fig.4) Electrons in "s" orbital always crash into nucleus ?

When you solve the Schrödinger equation of the hydrogen atom, it always includes orbitals of angular momentum zero (= s orbital ). It means electrons in "s" orbital always crash into and penetrate the nucleus ? Hydrogen, helium and sodium are all s orbital with zero angular momentum. Wait, wait. The outer electron of sodium (= Na ) is 3s orbital. This outer electron always penetrates inner electrons ( n = 1,2 ), too ? Thinking commonsensically, strong Coulomb repulsions by inner electrons prevent the 3s electron from penetrating them ! → angular momentum is not zero ? Different from this absurd quantum mechanics, an electron in the Bohr model is revolving around the nucleus (← not crashing ). It's far more realistic.

Reason why Schrödinger's hydrogen is wrong. [ "Negative" kinetic energy ( Tr < 0 ) at both ends is unreal. ] (Fig.5) Schrodinger's 2p radial wavefunction, negative kinetic energy.

Schrodinger's hydrogen contains two classically forbidden areas with negative kinetic energy ( this p.2, this ). Why does such a stupid thing happen ? On the right of a2, the potential energy is higher than the total energy ( V > E ). So the kinetic energy must be negative to keep the desirable total energy.
To begin with, the idea that a hydrogen bound electron can reach r = infinity is unreasonable. On the left of a1, to cancel the increasing tangential kinetic energy, the radial kinetic energy must be negative. Because tangential (= angular ) kinetic energy is inversely proportional to the square of the radius r. See the 3rd term of this (3). In this region, the potential energy is lower than the total energy ( V < E ), so the tunnel effect doesn't apply. The constant angular momentum keeps tangential kinetic energy always positive, but the radial kinetic energy can be negative. So Schrodinger's hydrogen is contradictory.

Electron spinning far exceeds light speed ! [ Spinning speed of "point"-like electron is much faster than light ! ] (Fig.6) Point-like electron ( radius r → 0 ), rotation v → ∞

Angular momentum is given by mv × r ( v = velocity, r = radius ). Electron spin also has angular momentum 1/2ħ, they claim. The problem is that an electron is very tiny, point-like. A point-like particle means its radius r is almost zero. So to get the angular momentum 1/2ħ, the electron spinning must far exceed light speed ( this p.5, this ). So the electron spin lacks reality. Even Pauli ridiculed the idea of a "spinning electron". But in the "s" orbital of Schrodinger's hydrogen, this electron spin is the only generator of magnetic moment. So they had no choice but to accept this strange spin ( Not as real spinning and speed ).

Spin has the same Bohr magneton. [ Electron spin has the same magnetic moment as Bohr's orbit ! ] (Fig.7) ↓ Lucky coincidence ? Same magnetic moment.

It's known that the hydrogen atom has magnetism equal to the Bohr magneton, which can be explained by Bohr's classical orbit and de Broglie theory. After quantum mechanics was born, its Schrodinger wavefunction has No orbital angular momentum to explain this magnetism. So the physicists at the time invented strange spin, and they artificially defined the spin's magnetic moment as the same Bohr magneton !
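An aside on the spinning-speed claim above: this is the standard textbook back-of-the-envelope estimate, and it is easy to reproduce. A sketch that, as an assumption, models the electron as a rigid rotor of the classical electron radius (the experimentally bounded size is far smaller, which only makes the required speed larger):

```python
hbar = 1.054_571_817e-34    # J s
m_e = 9.109_383_7e-31       # electron mass, kg
c = 2.997_924_58e8          # speed of light, m/s
r_cl = 2.817_940_3e-15      # classical electron radius, m (assumed size)

# Naive rigid-rotor estimate: L = m v r  =>  v = L / (m r) with L = hbar / 2
v = (hbar / 2.0) / (m_e * r_cl)
print(v / c)   # ≈ 68: equatorial speed tens of times the speed of light
```

This is why spin is taught as an intrinsic angular momentum with no literal rotation attached to it.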
This is a very far-fetched interpretation. Spin's angular momentum is 1/2ħ, which is half of Bohr's ħ angular momentum. So they decided that the spin g-factor is twice (= 2 ) that of Bohr's orbit (= 1 ). "g-factor" means the ratio of magnetic moment to angular momentum. As a result, they claim spin can also have the same Bohr magneton. We can only measure the magnetism, neither the angular momentum nor the g-factor. The problem is there is No physical reason why "spin" cannot stop and why it has the same Bohr magneton.

Anomalous Zeeman effect is not "spin". [ Anomalous Zeeman effect is due to inner electrons, not spin. ] (Fig.8) ↓ Only sodium shows typical anomalous Zeeman effect.

Most textbooks say anomalous Zeeman spectrum patterns under a magnetic field proved the existence of "spin". But it's a very far-fetched interpretation. In fact this anomalous Zeeman pattern was seen only in large atoms. Even if you try to find the cases, you can find only the sodium case ( this p.3 ). I bet you can never find a similar anomalous Zeeman effect in small hydrogen and lithium atoms, which all show the normal Zeeman triplet without spin. Electron spin lacks reality; its spinning far exceeds light speed c. Sodium (= Na ) has many inner electrons ( n=1, 2 ), different from one-electron hydrogen. So it's more natural to think the complicated anomalous Zeeman pattern is caused by inner electrons instead of unreal spin. Furthermore, there is No direct quantitative proof of the orbital Lande g factor, which, they claim, is the proof of spin 1/2.

Hydrogen is normal Zeeman effect without spin ! [ H, Li atoms show normal Zeeman triplet (= Paschen-Back ? ) ] (Fig.9) One electron H shows "normal" ← Not spin.

Despite textbooks' exaggeration of "anomalous Zeeman = spin", one-electron hydrogen shows the normal Zeeman effect, which doesn't need spin. Lithium also shows the normal Zeeman triplet pattern. It is called the Paschen-Back effect, which substantially means normal Zeeman. Even hydrogen includes a small splitting called fine structure.
It distorts the typical normal Zeeman pattern a little ( this p.22 ). In this book p.659, they say, "for weak magnetic field, each component of the hydrogen Hα doublet was separated into the normal Zeeman triplet." This doublet fine structure does not need the unreal "spin". In conclusion, the anomalous Zeeman effect in large atoms is not a proof of spin.

Pauli exclusion principle disproves spin.

[ Spin magnetic energy is too small to cause the Pauli exclusion force. ]

(Fig.10)  Spin-spin magnetic energy (= 0.0001 eV ) is too small !

The Pauli exclusion principle claims that each electron must occupy a different spin-orbital state.  The two 1s electrons of helium must have different spins, up and down. So the 3rd electron of lithium cannot enter the same 1s orbital, because spin states have only two versions, up and down. As a result, this 3rd electron of Li must enter the far outer 2s orbital, resisting the Coulomb attraction from the nucleus. How strong is this Pauli exclusion force overcoming the Coulomb attraction ? If all three electrons could enter the inner 1s orbital, the total energy would be about 30 eV lower (= more stable ) than in actual lithium. It means the "Pauli exclusion force" is as strong as about 30 eV ! But the spin-spin magnetic energy is far smaller, only about 0.0001 eV ( this p.6 ). So the strong Pauli exclusion principle has nothing to do with spin ( this p.7 ). Triplet states, ferromagnets, and molecular bonds are not spin, either.

Schrödinger equation cannot handle helium.

[ Quantum mechanics is useless in multi-electron atoms. ]

(Fig.11)  No solution → just "choose" trial functions ! = useless

The Schrodinger equation of two-electron helium contains the interelectronic Coulomb energy.  So it has no solution for helium. All other multi-electron atoms, including the H2+ molecule ion, have no exact solution.  Then how does quantum mechanics deal with multi-electron atoms ? Surprisingly, they just choose an artificial trial function as an "imaginary" solution.
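The ~0.0001 eV spin-spin magnetic energy quoted above can be estimated from the magnetic dipole-dipole formula. A rough sketch, assuming two Bohr-magneton dipoles about one angstrom apart (a typical interatomic distance):

```python
# Order-of-magnitude estimate of the spin-spin magnetic (dipole-dipole)
# energy: U ~ (mu0 / 4*pi) * mu^2 / r^3, with mu = one Bohr magneton
# and r ~ 1 angstrom (illustrative assumption).
mu0_over_4pi = 1e-7          # T*m/A
mu_B = 9.2740100783e-24      # J/T, Bohr magneton
r    = 1e-10                 # m, ~1 angstrom
eV   = 1.602176634e-19       # J per eV

U = mu0_over_4pi * mu_B**2 / r**3
U_eV = U / eV
print(U_eV)  # ~1e-4 eV, vs the ~30 eV scale attributed to Pauli exclusion
```

The five-orders-of-magnitude gap between this number and the ~30 eV estimate is the comparison the text is making.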
"Choosing" convenient hypothetical solution out of infinite choices means Schrödinger equation has no ability to predict multi-electron atoms. And it's impossible to try infinite kinds of trial wavefunctions and find the one giving the lowest energy in them. Chosen wavefunction is Not a true orbital. [ Cannot solve → choose virtual function = not true energy ! ] (Fig.12)  "Choose" trial functions → integral over all space. Here we explain why these "chosen" wavefunction cannot give true ground state energy of helium. After choosing some trial wavefunction of unsolvable atoms, they integrate them over all space, and get, what they call, approximate total energy E'. The point is this approximate energy E' is just an average energy in a collection of different energies depending on different electrons' position. Originally, the sum of kinetic and potential energy in any electrons' positions must be equal to the single common ground state energy E. But "unsolvable" multi-electron wavefunctions don't satisfy this basic condition. So, this "average" energy E' does Not mean the single common ground state energy in any positions of helium.  It causes useless quantum mechanics. Total energy of helium is Not conserved ! [ There is NO "single" ground state energy fitting all states ! ] (Fig.13)  ↓All three states have the same total energy ? Getting exact true ground state energy means finding the single common energy in all electrons' position in helium atom. Because the total energy E must be conserved inside the same system. So it's natural that there is a single ground state energy governing all states. The problem is Schrodinger solution always spreads in all 3D space. So it's much harder to satisfy this single common energy than Bohr's planetary orbit. They often choose two hydrogen solutions (= ψH ) as approximate helium wavefunction.  All three above states ( ① - ③ ) must have the same total energy. 
But it's impossible that all these states give the same common ground state energy, because the interelectronic repulsions are different in them. So choosing some approximate function (= basis set ) cannot give the true "common" energy, but just a "fake" energy. Two electrons have to classically avoid each other to obey a single total energy at any electron positions and the two axioms !

Density functional theory ( DFT ) is useless.

[ DFT can freely choose a functional fitting the experimental result. ]

(Fig.14)  Electron interaction term is freely chosen. ← useless

For larger atoms in condensed matter physics, density functional theory (= DFT ) is the only computing method. It is often said that this DFT is a successful "ab-initio" method ( this p.3 ). "Ab-initio" means first-principles, which can predict values without empirical parameters ? Unfortunately this DFT has no ability to predict any values, so it is useless. Like this, DFT just chooses some convenient functional out of infinite choices. In DFT, the "exchange-correlation functional" means the interelectronic repulsions.  This functional is unknown and can be freely determined ( this p.2 ). So DFT can be considered a semi-empirical method, different from the media hype ( this p.23 ), and our basic science stops !

Electron correlation in DFT is meaningless.

[ Electron correlation is artificially determined in DFT. ]

(Fig.15)  Exchange, correlation functionals can be "freely" chosen.

The calculated results depend on the correlation functionals we choose in DFT. There is no restriction in choosing these functionals. No functional is accurate for all properties of interest ( this p.17 ). No matter what functional is invented, someone will always find a case where it fails. As you see, quantum mechanics has no ability to predict any energy values due to its unsolvable property.  Molecular mechanics is useless, too. This useless quantum mechanics is the root of all evils, and destroys all students' careers in all science fields !
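The trial-function procedure criticized above can be made concrete. A sketch of the standard textbook one-parameter variational treatment of helium: the trial function is a product of hydrogen-like 1s orbitals with an adjustable effective charge Z, the space-integrated average energy works out (in hartree units) to E(Z) = Z² − 4Z + (5/8)Z, and one simply picks the Z that minimizes it.

```python
# One-parameter variational helium (standard textbook calculation):
# averaged energy E(Z) = Z^2 - 4Z + (5/8)Z in hartrees, minimized over Z.
hartree_eV = 27.2114

def E_trial(Z):
    # kinetic + nuclear attraction + averaged electron-electron repulsion
    return Z**2 - 4*Z + (5/8)*Z

# crude scan instead of calculus; dE/dZ = 0 gives Z = 27/16 = 1.6875
Z_best = min((z / 1000 for z in range(1000, 2500)), key=E_trial)
E_best = E_trial(Z_best) * hartree_eV

print(Z_best, E_best)  # Z ~ 1.69, E ~ -77.5 eV vs the measured -79.0 eV
```

The minimized "average" energy still sits about 1.5 eV above the measured −79.005 eV, which is the text's point: the chosen function yields an averaged E', not the true common ground-state energy.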
Fine structure originates from the Bohr-Sommerfeld model.

[ Bohr-Sommerfeld model agreed with the fine structure by Dirac. ]

(Fig.16)  ↓ This was really a lucky coincidence ?

The important point is that the fine structure (= small energy splitting ) of hydrogen was first obtained by Sommerfeld using Bohr's orbit. Later, the Dirac equation using the spin-orbit interaction got exactly the same solutions as the Bohr-Sommerfeld model !  A lucky coincidence ? It's regrettable that almost no textbooks mention this important coincidence.  See historical magic and this last. It's surprising that the Bohr-Sommerfeld model with no spin gives the same fine structure solution as quantum theory. Clearly, one of them (= the later Dirac hydrogen ) tried to aim at the same solution as the earlier Bohr-Sommerfeld model, using some trick. Compare this p.12 and this p.9.

Lucky coincidences in the spin model.

[ Spin-orbit model contains many lucky energy coincidences. ]

(Fig.17)  ↓ This was really a lucky coincidence ?

The quantum mechanical spin-orbit model should naturally contain many more split energy levels, due to its spin, than the Bohr-Sommerfeld model. But hydrogen's energy levels are far fewer than the spin-orbit model expected. The detailed derivation is this and this. Because the Dirac hydrogen model contains many lucky coincidences in energy levels. For example, the 2s1/2 and 2p1/2 orbitals have the same total energy in Dirac hydrogen, though their figures are completely different. In the same way, 3s1/2 = 3p1/2,  3p3/2 = 3d3/2,  4s1/2 = 4p1/2 ... As you see, the present spin-orbit model relies on very unnatural coincidences. They claim they have the same total angular momentum ( J = L + S ) despite different orbital angular momentum (= L ).  But nothing more is mentioned.

Relativistic Dirac equation with spin ?

[ Einstein mass relation → Dirac equation with spin σ ? ]

(Fig.18)  ↓ Linear Dirac equation contains spin ?
In fact, quantum field theory lacks reality; the Dirac equation was obtained by dividing Einstein's quadratic relation into linear functions. In compensation for the linear function, the Dirac equation must contain 4 × 4 gamma (= γ ) matrices, which consist of spin Pauli matrices (= σ ). This is the reason they claim the Dirac equation succeeded in combining "spin" and relativity.  But "σ = spin ?" is just an artificial definition with no grounds ! To begin with, these Pauli σ matrices are just the result of changing quadratic → linear functions. They have nothing to do with the unreal spin. The problem is that the momentum (= p ) in this Dirac equation is always tied to the spin (= σ ) operator, which causes serious flaws.

Spin-orbit coupling is Einstein relativity ?

[ Fine structure = relativistic spin-orbit interaction ? ]

(Fig.19)  ↓ H atom fine structure is a relativistic effect ?

The hydrogen atom has a small energy splitting (= fine structure ). They say this splitting shows the difference between electron spin up and down. This spin-orbit interaction is said to be an Einstein relativistic effect. In the H atom, an electron is moving around a proton (= nucleus ). From the electron's frame of reference, the proton appears to be moving. Einstein relativity is based on purely relative ( not absolute ) motion. So even if the proton is actually stationary, the electron feels the pseudo-magnetic field created by the moving proton, which causes a small energy splitting depending on the spin direction ? The point is that these relativistic electromagnetic fields cause a fatal paradox of the Lorentz force !  So the spin-orbit coupling model is completely false.

Na fine structure is too big for spin-orbit.

[ Large Na fine structure splitting is impossible by the Na+ charge. ]

(Fig.20) The Na+ ion must have a 3.5 positive charge, if spin-orbit is true.

They say the large Na fine structure in the D lines is also due to the spin-orbit interaction.  The problem is that this Na fine structure splitting is too big.
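Both claims here can be checked numerically with the α²-expanded Dirac/Sommerfeld energy formula (a standard textbook form): E(n, j) = −(Ry·Z²/n²)·[1 + (Zα)²/n² · (n/(j + 1/2) − 3/4)]. It depends only on n and j, so 2s1/2 and 2p1/2 coincide exactly, and the splitting scales as Z⁴/n³, which can be inverted to find the effective Z that sodium's D-line splitting (quoted in the text as ~0.0021 eV) would demand:

```python
# Dirac / Sommerfeld fine-structure formula, expanded to order alpha^2.
# Levels depend only on (n, j): 2s1/2 and 2p1/2 share (2, 1/2).
alpha = 1 / 137.035999
Ry    = 13.605693  # eV

def E(n, j, Z=1):
    return -(Ry * Z**2 / n**2) * (1 + (Z * alpha)**2 / n**2 * (n / (j + 0.5) - 0.75))

# hydrogen n = 2: 2p3/2 - 2p1/2 splitting, ~4.5e-5 eV
dE_H = E(2, 1.5) - E(2, 0.5)

# splitting scales as Z^4 / n^3; effective Z for Na's ~0.0021 eV (n = 3)
Z_na = (0.0021 / dE_H * (3**3 / 2**3)) ** 0.25

print(dE_H, Z_na)  # ~4.5e-5 eV and Z ~ 3.5
```

The Z ≈ 3.5 result is the number the text cites as unrealistically large for a singly charged Na+ core.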
Compare the fine structure splitting in the H (= 0.000045 eV ) and Na (= 0.0021 eV ) atoms. It is known that this fine structure splitting is proportional to Z⁴/n³, where n is the principal quantum number.  See this last and this p.4. Z is the effective positive charge (= H+, Na+ ion ), whose movement causes the magnetic field at the electron's spin ? To get the large Na fine structure, this central charge (= Na nucleus + all inner electrons ) must be unrealistically big ( Z is +3.5 ). In other alkali atoms, the situation becomes much worse. So the relativistic "spin-orbit" interaction is too weak to cause the alkali fine structure. Students ( including dropouts ) suffering from debt should sue universities for destroying all students' careers with exorbitant tuition and wrong theories.

de Broglie wave was experimentally confirmed.

[ de Broglie relation was confirmed in various experiments. ]

(Fig.21) Davisson-Germer experiment showed an electron is a de Broglie wave.

In the de Broglie relation, the electron's wavelength λ is given by λ = h/mv, where m and v are the electron's mass and velocity. This important matter-wave relation was confirmed in various experiments such as Davisson-Germer and this. So there is no room for doubt that this de Broglie wave is true.

Schrödinger "distorts" the de Broglie relation.

[ Quantum theory uses the de Broglie relation, but "distorts" it ! ]

(Fig.22)  Quantum mechanical wavefunction is unreal.

So the Schrodinger equation adopted this de Broglie relation in "derivative" form. The momentum operator (= derivative of the wavefunction ) links p and λ. Of course, when the momentum p is zero, its square p² must be zero, too. But only when a wavefunction has the basic "cos" or "sin" form does that hold true. The point is that quantum mechanical wavefunctions distort the original de Broglie relation.  Fig.22 is the hydrogen 2p radial wavefunction ( this, this last ). This site (3) shows the de Broglie derivative is valid in the radial direction. The "2p" wavefunction has unreal negative kinetic energy on both sides.
On these boundaries, the second derivative is zero ( p² = 0 ), but the first derivative is not zero ( p is not zero ) !  This is ridiculous. It's quite natural that when p is zero, its square p² is zero, too ! So quantum mechanics distorts the original de Broglie relation, and uses wrong math !

Schrödinger and de Broglie wave

[ Schrödinger hydrogen obeys an integer times de Broglie wavelength ! ]

(Fig.23) Schrodinger's orbital is n × de Broglie wavelength.

Historical magic shows the Bohr model agreed with experimental results and with Schrodinger's hydrogen using de Broglie theory. The Bohr model's orbit must be an integer times the de Broglie wavelength. Then, does Schrodinger's hydrogen also obey "an integer times de Broglie wavelength" ? In fact, Schrodinger orbitals also meet an integer times de Broglie wavelength, like the classical quantum theory !  See Fig.1,  this last. The "boundary" condition at both ends ( r = 0, ∞ ) in Schrodinger hydrogen corresponds to the de Broglie condition ( see this p.11, 12 ). When we use u = rR as the radial wavefunction, the de Broglie relation is clearer ( this (3) and this p.3 ). As you see, Schrodinger hydrogen clearly obeys "n × de Broglie wavelength" ! But its zero angular momentum is contradictory, so it is useless as a precondition.

Old Bohr model failed in three-body helium.

[ It was impossible to obtain a correct three-body helium model in the 1920s without computers. ]

(Fig.24)   Simple circular old Bohr's helium gave the wrong energy (= -83.33 eV ).

The most decisive reason for dismissing the Bohr model is its failure in explaining the helium atom. As shown in this section, the simple helium model of Fig.24 right gives the wrong ground state energy (= -83.33 eV ). The helium experimental value is -79.005147 eV (= 1st + 2nd ionization energies, Nist, CRC ). Of course, there were no convenient computers in the 1920s to simulate three-body motions (= two electrons + one nucleus ) like helium.
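The "n × de Broglie wavelength" condition above can be checked numerically for the Bohr orbit: combining λ = h/mv with the Coulomb force balance mv²/r = ke²/r² and requiring 2πr = nλ reproduces the Bohr radii and the −13.6 eV ground-state energy.

```python
# Bohr orbit from the de Broglie condition 2*pi*r = n*lambda plus
# the Coulomb force balance m*v^2 / r = k*e^2 / r^2.
import math

h    = 6.62607015e-34
hbar = h / (2 * math.pi)
m_e  = 9.1093837015e-31
e    = 1.602176634e-19
k    = 8.9875517923e9   # Coulomb constant

def bohr_orbit(n):
    r = n**2 * hbar**2 / (m_e * k * e**2)       # radius from the two conditions
    v = n * hbar / (m_e * r)                    # orbital speed
    E = (0.5 * m_e * v**2 - k * e**2 / r) / e   # total energy in eV
    n_waves = 2 * math.pi * r / (h / (m_e * v)) # circumference / wavelength
    return r, E, n_waves

r1, E1, n_waves = bohr_orbit(1)
print(r1, E1, n_waves)  # ~5.29e-11 m, ~-13.6 eV, exactly 1 wavelength
```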
On the other hand, quantum mechanical variational methods can get an approximate helium energy, though that does not mean the truth. Even a wrong approximate solution was far better than the dire situation where physicists had nothing to study in classical orbits without computers in the 1920s. So this lack of computers dealing with three-body atoms is the main reason we have gone the wrong way.

A single electron interference = parallel worlds ?

[ Real electron de Broglie wave explains interference. ]

(Fig.25)  Real de Broglie wave in a "medium".

It's known that even a single electron can interfere with itself in the two-slit experiment.  Using a real medium of the de Broglie wave, we can easily explain it. But Einstein relativity denied any medium ! So they use "many-path worlds" where a single electron passes through both slits at the same time. This quixotic idea is called the "Feynman path integral", where a single electron can enter infinite different paths at the same time ! The problem is physicists jumped to the conclusion that atom interferometry using de Broglie wave interference showed "superposition = parallel worlds". Though if we suppose some real medium, we can solve all fatal paradoxes such as the two-slit and magnetic force ones. The law of action and reaction forbids a single electron from being kicked out by destructive interference.  Interference needs some external things.

Photon is just an electromagnetic wave.

[ Electromagnetic wave of 1 km wavelength = a photon ? ]

(Fig.26)  ↓ A single photon is bigger than 1000 meters !?

You may often see the "photon", a quantum particle of the electromagnetic wave ?, in various academic sites and news. OK. Then how big is a single photon ? In fact, the present physics cannot answer even this basic question ! For example, a radio wave is one of the electromagnetic waves, which can have a very long 1000 meter wavelength.  Is a single photon so big ? We have never confirmed so big a photon.  So a photon is just a fictional particle.
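For scale, the energy that the photon picture would assign to the 1 km radio wave mentioned above, via E = hf = hc/λ:

```python
# Energy per "photon" of a 1 km radio wave: E = h*c / lambda.
h  = 6.62607015e-34   # J*s
c  = 2.99792458e8     # m/s
eV = 1.602176634e-19  # J per eV

lam = 1000.0          # m, long radio wavelength
E_photon_eV = h * c / lam / eV
print(E_photon_eV)    # ~1.2e-9 eV per quantum
```

So a single quantum of such a wave would carry only about a billionth of an electron volt.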
Even the Nobel laureate Lamb (= a photon experimentalist ) did not believe in the photon.

Photoelectric effect is NOT a proof of the photon !

[ Threshold light frequency (= f ) means light is a "wave". ]

(Fig.27)  An electron is ejected above some light frequency.

Quantum mechanics claims Einstein's photoelectric effect proved the photon. But I bet you won't be able to find any clear photon images in it. The point is that in the photoelectric effect, all you can detect is the electrons' current ejected by light (= photon ? ).  No photons can be seen directly. When you shine light above the threshold frequency (= f ) onto a metal, electrons are emitted from the metal, which is detected.  That's all. As you see, there is no proof of a photon particle here. The light frequency is equal to c / wavelength, which means "wave" ! So this famous photoelectric effect just showed the incident light is a "wave having frequency", not a particle. Besides, a photon interacting with an electron must be "virtual", not real, when the total energy and momentum are conserved.  So a photon is fiction.

Photodetector detects "electrons", not photons.

[ A single photon detector measures ejected electrons, not a photon ! ]

(Fig.28)  Increased ejected photoelectrons = a photon ?

Then what is the "photon" in universities and the media based on ? They claim a single photon detector can detect each photon particle. Again, this explanation is misleading, because a single photodetector detects not a photon, but the electron current excited by the incident light ( this Fig.1 ). Only when the frequency and intensity of an incident light exceed some threshold (= which can be adjusted ), do they call its electric signal a "photon". So there is no experimental proof of a photon. The fictional photon is needed for the useless quantum field theory. This bogus science has no benefits for you, except imposing exorbitant tuition.  So students, including dropouts, should sue universities for deceiving them !

Superposition = parallel worlds is illusion.
[ The original light just splits into two ← parallel worlds ? ]

(Fig.29)  ↓ Light splits into 1 and 2 at the beam splitter.

This nature claims even a large object can be in two states at the same time.  In this "superposition", a grotesque cat can be dead and alive at the same time ? This many-worlds-like idea is just illusion. The quantum computer and quantum information using "parallel worlds" are just a scam to extort money from people. Even when the original light just splits into paths 1 and 2 at the beam splitter, the present physics calls it "superposition = parallel worlds 1 and 2". This far-fetched idea is caused by defining a fictional light particle (= photon ).  But this photon has no direct evidence.  Even "how big" it is remains unknown. So the strange quantum physics misinterprets "just classically split light" as parallel worlds.  That's all.

Accelerating electron radiates energy ?

[ An electron cannot emit a real photon ! ]

(Fig.30)  An electron emits and loses energy ?

You may often see the boring cliche "an accelerating electron radiates energy in a classical orbit" in textbooks. But in fact, these explanations are false and physically impossible. They consider an electron as a spherical conductor storing repulsive charges. This "stored energy" electron model is inconsistent with the smallest stable-charge electron.  So the textbooks are completely wrong. Even the Schrodinger equation has to rely on a "stable wavefunction", where the de Broglie wave's phases agree with each other at the ends. The truth is that it's physically impossible for only a single electron to emit or absorb a photon quantum mechanically.

Bohr's electron does NOT radiate energy !

[ When an electron is a spherical conductor, it loses energy. ]

(Fig.31) The Bohr model electron is not falling into the nucleus.

They use the Poynting vector (= E × H ) as the energy flow ( this, this ), which is equal to the change of the electric and magnetic energy densities stored in the vacuum.
This stored energy (= 1/2εE² ) means the potential energies needed to gather infinitesimal charges into the spherical conductor (= an electron ? ). But a single electron is not made from smaller charges. ( A single electron is the smallest charge. ) There is no concept such as "electric energy density" around a single electron. It means the vacuum electric energy (= 1/2εE² ) of a single electron is not energy; as a result, the Poynting vector itself is meaningless in this single electron's case. So this wrong explanation is a kind of brainwashing about the Bohr model. Only when more than one charge is involved can they radiate energy.

Energy and momentum are NOT conserved !

[ Photon cannot conserve both total energy and momentum. ]

(Fig.32)  ↓ An electron radiates a photon (= energy ) ?

It's known that light (= photon ? ) has an energy proportional to its frequency as E = hf, and its momentum is p = E/c. An electron has its momentum p = mv and its kinetic energy E = 1/2mv². Suppose this electron emits a photon and loses its kinetic energy. Of course, the total energy and momentum must be conserved. But it's impossible to satisfy both energy and momentum conservation ! In general, when a particle emits ( or absorbs ) another particle of different mass, one of energy or momentum is not conserved. Because in a photon with no mass, its momentum is much smaller compared to its great energy. So only Compton scattering, without emitting or absorbing, is allowed. Or the whole orbit must emit a "transverse" electromagnetic wave.

Emitted photon must be "virtual", not real.

[ A photon emitted from an electron is a virtual photon with negative mass ! ]

(Fig.33)  ↓ Virtual photon has negative mass m² < 0.

As I said, an emitted ( or absorbed ) photon cannot conserve both energy and momentum at the same time. Then what the heck is the photon which quantum mechanics talks about ? In fact, these photons are called "virtual", and they disobey the Einstein relation.
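The non-relativistic version of the conservation argument above can be worked out explicitly. With electron kinetic energy E = mv²/2 and momentum p = mv, and a massless photon with E = pc, energy conservation gives m(v² − v'²)/2 = p_photon·c and momentum conservation gives m(v − v') = p_photon; dividing one by the other forces (v + v')/2 = c, i.e. the electron's average speed would have to equal the speed of light. A sketch:

```python
# If a slow electron (v, then v') emitted a massless photon (E = p*c),
# combining both conservation laws would require (v + v')/2 = c,
# i.e. v' = 2c - v, which always exceeds c for any v < c.
c = 2.99792458e8  # m/s

def final_speed_required(v):
    # v' forced by simultaneous energy and momentum conservation
    return 2 * c - v

v_final = final_speed_required(1e6)  # electron initially at 1e6 m/s
print(v_final, v_final > c)          # required final speed is superluminal
```

This is the sense in which the text says the emitted photon must be "virtual" in such a process.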
Surprisingly, these virtual photons have negative mass ( m² < 0 ). All forces such as the Coulomb, weak, and strong forces are virtual, not real. This is the reason why the experimental results in the LHC lack reality, just wasting our tax. The problem is that all the media, universities and bloggers are hiding these virtual particles from people !  So probably you don't know about them. They are one of the main factors destroying your precious careers. Academic frauds are rampant openly now !

The present physics relies on unreal particles.

[ Particle physics cannot avoid "unreal" virtual particles. ]

(Fig.34)  ↓ Coulomb, Higgs depend on fictional virtual particles.

In fact, the present physics completely depends on unreal virtual particles as the four fundamental forces. All particle physics reactions need virtual particles, too. Even the Coulomb force depends on the unreal virtual photon. What is a "virtual particle" ?  Does it move faster than light ? In beta and Higgs decay, the W boson is virtual and disobeys Einstein's mass formula. The problem is that ordinary people don't know about these virtual particles, which are indispensable for the present physics !  Why ? Because the media and universities shut off all true information. The square of the mass of a virtual photon is always negative ( m² < 0 ) !  Impossible.

Ferromagnet has nothing to do with spin.

[ Spin magnetic moment is too weak to explain ferromagnetism. ]

(Fig.35)  Spin magnet is too weak to explain ferromagnets.

You may think spintronics and excitonics are useful (← ? ) for your career. But almost nobody knows electron spin lacks reality ! Its spinning far exceeds light speed ( see this p.2 ). You may hear spin is a tiny magnet with the magnitude of the Bohr magneton. But this is not true, and it disagrees with experiment. The spin-spin magnetic interaction is too weak to explain actual ferromagnets.  See this p.6, this p.7.  Spin can be replaced by a more realistic model. Then, what the heck does this spin model mean ? It uses the "Heisenberg" spin model ( this p.3 ).
But this Heisenberg spin model is too old; it was introduced in the 1920s, and it's too abstract to describe actual phenomena ( this p.2 ). This spin model just puts nonphysical symbols side by side.  So it is useless. The parameter J is arbitrarily chosen:  J > 0 = antiferromagnet, J < 0 = ferromagnet.

Quantum mechanics is useless.

[ Useless quantum mechanics spawned "imaginary" targets. ]

(Fig.36)   Quantum equation cannot handle multi-electrons.

As I said, the Schrodinger equation of quantum mechanics cannot solve multi-electron atoms.  They just choose a trial function. Even the Pauli exclusion force cannot be explained by the too-weak spin magnetic moment.  So all they can rely on is an abstract determinant as the multi-electron function ( this p.3 ). Choosing an "imaginary" wavefunction means quantum mechanics has no ability to predict any physical values.  So it is useless. To conceal this inconvenient fact, they introduced the fake "ab-initio" DFT.  Under this useless quantum mechanics, physicists had nothing to do. This is the reason they created "imaginary" targets such as the quantum computer and quasiparticles. Unfortunately, the Nobel prize and top journals are exploited as "virtual" rewards for unreal physics !

Physics is filled with unreal quasiparticles.

[ An electron can split into 3 components of spin, charge, orbit !? ]

(Fig.37)   Quasiparticles will remain "fake" even 1000 years from now !

According to Nature, a fundamental electron can split into three components such as the "spinon" (= spin ! ), holon (= charge ! ) and orbiton (= orbital motion ! ). Of course, these are fictitious (= unreal ) quasiparticles. So these quasiparticles cannot exist independently outside the material. As you see, the current condensed matter physics just concentrates on creating fictitious quasiparticles instead of investigating the true underlying mechanisms. See also list and quasi-electron.
Taking the trouble to create fictitious quasiparticles means physicists have no intention of clarifying their true underlying mechanism using "real" particles, from now on. So science stops due to quantum mechanics.

Why is quantum mechanics "artificial", unreal ?

[ Massless Dirac fermion = unreal quasiparticle is "made" by force ! ]

(Fig.38)  Measuring a photoelectron → "massless" electron ?

The boring cliche "quantum mechanics is the most successful theory" is a big lie, fabricated by universities to justify their exorbitant tuition. Though quantum mechanics is unrealistic, why do they pretend that it's successful ?  Because physicists fabricated "artificial phenomena" by force. The quasiparticle in condensed matter is a typical example of it. The "Dirac node" in this means the massless Dirac fermion ( this ). Massless Dirac and Weyl fermions are unreal quasiparticles, which don't really exist !  "Artificial" objects just to fit the present theory ( this p.9 ). This massless Dirac fermion is slower than the light speed, which violates relativity.   They claim this massless fermion was observed by photoemission using this. "Photoemission" means observing the ejected "electron" with mass, but they misinterpret it as "massless".  So this "massless fermion" is self-contradictory. It's based on the wrong assumption that the parallel momentum of the ejected electron is invariant.  The topological insulator = Nobel prize candidate uses this unreal particle.

Condensed matter physics.

[ Quasiparticle is just a "trick" using meaningless symbols. ]

(Fig.39)  Quasiparticle = trick with nonphysical symbols.

A recent top journal still deals with the unreal quasiparticle, the exciton. So the present science stops in condensed matter physics. The quasiparticle "exciton" is just a pair of an electron (= c ) and its hole (= β ). There is no physical shape in this nonphysical symbol ( this p.3 ). The polariton is also an unreal quasiparticle.  A polariton consists of a pair of an exciton and a photon ( this (15), this ).  That's all.
It's not modern physics ! In this way, all the present physics can do is make artificial quasiparticles ( this, this ).  So it is useless.  People are deceived, because universities hide this truth ! As a result, almost no ordinary people know the present solid-state physics is filled with unreal "quasiparticles" and does harm to all applied science.

Superconductor by quantum mechanics.

[ Superconductor model relies on unreal quasiparticles, so useless. ]

(Fig.40)  ↓ Bogoliubov quasiparticle lacks reality.

The present superconductor model relies on the unreal phonon quasiparticle.  This paper (p.2) mentions another Bogoliubov quasiparticle, too. Relying on "not actual" quasiparticles means the present science stops pursuing the truth !   It hampers all applied science. Furthermore, this Bogoliubov quasiparticle contradicts a normal particle. This quasiparticle consists of creation and annihilation of electrons ( this p.4 ). So "create + annihilate = zero" is what this quasiparticle is ! The problem is that the current superconductor model stops at the old BCS model, forever. Why did we fall into so serious a situation also in semiconductors ? It all comes down to the unreal (= useless ) wavefunction and faster-than-light spin.

Quantum computer = parallel worlds.

[ Useless quantum → parallel worlds → computer ! ]

(Fig.41)  Physicists need an imaginary target = quantum computer.

If basic quantum theory remains useless, physicists have nothing to do. So they needed to create an "imaginary" target = the quantum computer. They paid attention to the doubtful probabilistic nature in quantum superposition, where a cat can be dead and alive at the same time. Of course, we cannot see a grotesque "dead and alive" cat directly. But they abused this absurd logic in entanglement and the quantum computer. They misinterpret "1 or 2 unknown" as "1 and 2 states coexist" ! So the moment they see the "1 state", the entangled "2 state" is confirmed (= spooky ) ? The quantum computer calculates "different numbers" using parallel worlds ?
Of course we cannot confirm these "fantasy" parallel worlds, so it is a waste of money.

Quantum computer = "camouflage" target.

[ Physicists try to connect all "unreal" concepts with the fictitious quantum computer ! ]

(Fig.42)   Spin Hall, Berry, topological insulator → quantum computer ?

Though we often see the words "quantum computers move a step forward" in various news, its research has substantially made no progress at all. As of 2013, the quantum computer consists only of two unstable trapped ions ( independently controllable ) or superconducting qubits with no computer's shape. Their average working (= coherence ) time is only microseconds ( this ). So this easily broken computer is useless. The point is the present quantum mechanics abuses this impractical ( ← forever ) quantum computer as a "camouflage" target ! So all roads (= spin Hall, quasiparticle Majorana, topological insulator, Berry phase ) lead to the illusory quantum computer !? The very weak spin Hall effect is useless ( this p.10 ). They adopted the fictitious monopole to explain the spin Hall effect. Physicists don't say what the Berry phase really is, which means it's just an artificial mathematical (= unreal ) phase.

Particle is just an abstract symbol ?

[ The present physics can only create or annihilate particles. ]

(Fig.43)  ↓ Electron, photon are just meaningless math symbols.

What figure does each electron and photon have ? Unfortunately the present physics has no ability to describe it. Electron and photon are just abstract math symbols with no shape. Quantum field theory is based on quantum mechanics and special relativity. In this theory, all physicists can do is two simple actions: create or annihilate each particle.  That's all. So this useless physics clearly prevents all applied science from developing, and is harmful to all science students.

"Anticommute" means Pauli exclusion ?

[ The present physics can give no detailed Pauli mechanism. ]

(Fig.44)  Pauli exclusion principle is just anticommutation ?
Though the media likes the title "Einstein's dream", that is about general relativity.  Special relativity forms the basis of the present physics. They argue the spin-orbit interaction and fine structure in all atoms are Einstein's relativistic effect.  But the electron's spinning far exceeds light speed ! In fact, the fine structure does not need the unreal spin. Actually, the relativistic spin-orbit model disagrees with experiment, which they hide. They argue even the Pauli exclusion principle is Einstein's relativistic effect. Dirac combined "spin" and special relativity to derive the Pauli exclusion. The problem is that in his relativistic theory, each electron is just a nonphysical symbol (= a ) with no shape, and it has flaws. So this theory just says the Pauli exclusion is due to the abstract anticommutation of fermion operators.  No detailed mechanism is mentioned. This abstract relativistic field theory made the present solid-state physics useless, filled with unreal quasiparticles.

How does an electron emit a photon ?

[ An electron is annihilated, a photon is created ?  ← NOT physics ! ]

(Fig.45)  Only particle creation and annihilation in photon emission ?

All quantum mechanics can do is two simple actions: create or annihilate each particle.  The electron is expressed by Dirac fields (= ψ ), and the photon by the Maxwell equation (= A ). When an electron emits a ( virtual ) photon, the incident electron is annihilated, a photon is created, and the outgoing electron is created.  That's all they can express. So it is useless ! This process is one simple interaction term ( this, this ).  The Dirac and Maxwell fields in this term include creation and annihilation operators of the electron and photon. Universities and the media must tell people honestly that the present physics can do nothing and is useless.  So they must reduce the exorbitant tuition drastically now !

Particle physics relies on Einstein mc².

[ Einstein mc² produced Higgs, quarks .. through the Dirac equation. ]

(Fig.46)  ↓ Relativistic Dirac equation governs all particles.
Though the media likes to mislead people using showy images about particle physics, these images are all fake. Ordinary people don't know the fact that the abstract Dirac equation governs all particles such as the electron, quark and neutrino .. This Dirac equation is so abstract (= out of touch with reality ) that the media and universities seem to desperately hide it. The Einstein relation mc² is applied to this relativistic Dirac equation, which is the basis of unreal QED, Higgs and forces. In fact, they hide the true paradoxes of Einstein relativity, which destroy all students' careers now !  Black hole cannot form.

Light speed c is constant without a medium ? [ Einstein "distorted" spacetime for light speed c. ]
(Fig.47)  ↓ Einstein denied a real light "medium".
It's a famous story that Einstein relativity denied the "ether" (= light medium ). Instead, he introduced the strange idea that "spacetime" is distorted by the observer. In fact, the Michelson-Morley experiment didn't deny the aether. Relativity without absolute space is based on relative motion. We and the light are approaching each other at speeds "v" and "c" in Fig.47. This "observed" light speed must be "c+v" in this case. But Einstein used a tricky idea to make "c+v" remain the original "c" ! Light interference and refraction clearly prove the light is a "wave" traveling through some "medium". This "light medium" moving with the earth agrees with the "constant" light speed c irrespective of its energy in the Michelson-Morley experiment.

True paradoxes in Einstein relativity. [ An observer can bend a rigid rod just by moving !? ]
(Fig.48)  Different clock times in different positions.
Einstein relativity is NOT a successful theory at all. People are brainwashed by universities and the media. In the Lorentz transformation, clock times (= t' ) seen from a moving observer are different at different positions ( x = 0, 1 ) under the same t (= time for the rest observer ). A straight rigid rod is moving along a square frame as shown in Fig.48 left.
But special relativity claims this rigid rod is bent when the observer is moving ! For this time t' to be the same, we have to adopt a smaller t (= past ) at the position x = 1.  So the moving observer sees a past event at the right position ( Fig.48 right ). When the rod is moving in only one (= horizontal ) direction, this rod is Lorentz contracted, related to the different clocks. When the rod is moving in two directions (= first vertical, later horizontal ), it is bent for the moving observer !  So Einstein can bend a rigid rod just by moving !

Einstein can change the "future" of the rod ? [ A "block" can change the rod's shape without touching it ! ]
(Fig.49)  A "block" changes the rod's "future" direction.
The problem is that only the right part of the rod has NOT arrived at the turning point.  So this rod doesn't know whether there is some obstacle at the turning point. If we insert some "block" at the turning point before the rod (= right part ) has arrived there, the whole rod cannot turn to the left, because the rod is rigid. This means that the instant we insert the block, even the left horizontal part of the rod turns upward, though the block doesn't touch the rod !  See the detail. This is clearly a fatal paradox.  It's caused by the strange relativistic "time change" for a moving observer. In special relativity, the Lorentz transformation is everything, and it can bend any rigid rod without touching it, when the observer is moving ! When the rod is moving in two different directions, the fatal paradox is made clear.  So universities must reduce exorbitant tuition, when they teach "unreality".

Einstein  'electromagnetic'  paradox. [ Magnetic force contradicts Einstein relativity. ]
(Fig.50)  Neutral current → "positive" by the observer's movement !
In fact Einstein relativity includes a fatal paradox also in the electromagnetic force.  Relativity is the basis of spin-orbit and fine structure. So while universities accept Einstein, all fields ( ex.
biology ) are useless, and students must pay exorbitant tuition for a worthless degree ! A magnetic field B is generated around a neutral electric current in Fig.50. An external charge (+) is at rest, so it feels neither magnetic nor electric force from the current. But when an observer moves, he sees the charge "moving" in the opposite direction. So this "moving" charge feels a Lorentz magnetic force, only when the observer moves ! To cancel this magnetic force, the neutral current changes into a positive one when the observer moves !  = a new electric force cancels the magnetic force ( this p.2 ). This relativistic world is ridiculous.

Observer moves  →  a charge is pulled ! [ A negative charge is attracted, when the observer moves !? ]
(Fig.51)  ↓ Einstein relativity shows a "fatal" paradox !
The point is the case when an external charge (-) is at the side of the electric wire. This negative charge is attracted toward the positively charged wire, only when the observer moves ! This is clearly a fatal paradox.  The electric force acting on this negative charge cannot be cancelled by the Lorentz magnetic force. How does special relativity handle this phenomenon ?  Charge (= ρ ) and current (= J ) are Lorentz-transformed ( this p.3 ), which causes a positive ρ' from the neutral current ( ρ = 0,  J isn't zero ). But the Lorentz transformation of the electric (= E ) and magnetic (= B ) fields contradicts it.  The parallel electric field (= E|| ) remains zero, even when the current turns positive ! This contradiction originates in the magnetic force disobeying Einstein relativity.  So universities must honestly tell students about these paradoxes, before destroying their careers.

How was Einstein's famous mc² born ? [ The energy-momentum relation is invariant for any observer. ]
(Fig.52)  Relativistic momentum (= p ), energy (= E ).
How was this famous mc² relation made ? Einstein's relativity relies purely on "relative" ( not absolute ) motion.
When an observer and an electron are both at rest, the stationary electron has zero momentum ( p = 0 ) and only rest mass energy ( E = mc² ). When the observer starts to move at speed v, from his viewpoint, the electron is moving in the opposite direction at v (= Fig.52 right ). So from his viewpoint (= frame ), the electron's momentum and energy become this. All these relativistic energies (= E ) and momenta (= p ) satisfy the Einstein relation at any observer speed v.

Einstein relation disobeys de Broglie wave ! [ de Broglie wavelength (= λ ) contradicts Lorentz contraction. ]
(Fig.53)  ↓ Electron's de Broglie wave vanishes !?
The serious problem is that this Einstein momentum contradicts the de Broglie relation !  The relativistic version is this. The de Broglie wavelength was confirmed in the two-slit and various other experiments.  So if the Einstein relation disagrees with the de Broglie relation, his theory is wrong. In Fig.53 left, an electron is moving at v, causing its de Broglie wave, and a double-slit interference pattern is seen on the screen. But from the viewpoint of the moving observer, the electron appears to stop in Fig.53 right.  So he sees NO interference due to the vanished de Broglie wave ! This is clearly one of the true paradoxes.  So if students ( including dropouts ) suffering from debt sue universities for destroying their careers, they could win !

How can we solve Einstein's true paradoxes ? [ A "medium" is indispensable to avoid serious paradoxes. ]
(Fig.54)  Electron moves relative to the "medium" → de Broglie wave !
The Lorentz magnetic force is perpendicular to the particle ( or observer ) velocity.  It causes a serious paradox in a different direction. The electron's de Broglie wave disobeys Lorentz contraction (= it is independent of the observer's motion ).  How can we fix this serious situation ? The only way to fix it is to admit some real "medium", which relativity rejected.  A medium moving with the earth agrees with the Michelson-Morley experiment.
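The Fig.52 invariance and the observer-dependent de Broglie wavelength of Fig.53 can both be checked numerically. A minimal sketch in Python (standard SI constants; the function name `energy_momentum` is mine, not from the programs on this site):

```python
import math

c  = 2.99792458e8       # speed of light (m/s)
me = 9.1093837015e-31   # electron rest mass (kg)
h  = 6.62607015e-34     # Planck constant (J*s)

def energy_momentum(v):
    """Relativistic energy E and momentum p of an electron seen at speed v."""
    g = 1.0 / math.sqrt(1.0 - (v/c)**2)   # Lorentz factor
    return g*me*c*c, g*me*v

rest_energy = me*c*c
for v in (0.0, 0.1*c, 0.5*c, 0.9*c):
    E, p = energy_momentum(v)
    invariant = math.sqrt(E*E - (p*c)**2)   # equals mc^2 at every observer speed
    assert abs(invariant - rest_energy) / rest_energy < 1e-9
    if v > 0.0:
        wavelength = h / p   # de Broglie wavelength shrinks as v grows
```

E² − (pc)² stays equal to (mc²)² for every v, while λ = h/p changes with the observer's speed; this is exactly the tension between the two relations that the text points out.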
If we admit that an electron moving with respect to this medium causes the de Broglie wave, we can solve all the serious paradoxes above. Furthermore, this real medium can explain the electron's double-slit interference without fantasy parallel worlds. In fact, light speed c is affected by various different mediums ( ex. water ). The uniform and isotropic cosmic microwave background just fits this medium. And we don't need artificial dark matter, if we admit some medium in space from the beginning.

Why is Einstein harmful to all science ? [ Special relativity is already used in the current physics. ]
(Fig.55)  "Relativistic" QED disobeys Einstein !
Though the media repeats "Einstein's dream", his theory of special relativity already forms the basis of the current physics through the Dirac equation. It is called quantum electrodynamics (= QED ).  The problem is that in this QED, all fundamental forces such as Coulomb are unreal virtual particles ! Then why is this unrealistic theory called "most successful" ? The point is that the QED calculation always diverges to infinity (= ∞ ). So we have to artificially eliminate this infinity by renormalization. This is an absurd math trick of dividing it like ∞ = ∞ + finite value. And only the infinite (= ∞ ) part is artificially removed (= renormalization ), leaving arbitrary finite values, as they like. In this way, QED can get any experimental ( finite ) value, as we like. So this theory is useless.  Even the founders Dirac and Feynman hated it. In the Lamb shift, they just manipulate "non-analytical" Bethe values ( this p.6 ).  The Lamb shift can be explained by the Sommerfeld fine structure realistically.

Why is the gigantic collider a waste of money ? [ "Symmetry" in particle physics is nonsense. ]
(Fig.56)  "Symmetry" has NO physical meaning, so Higgs is unreal.
In fact, the god-particle Higgs has NO physical meaning. They say it's based on "symmetry".  What the heck is symmetry ? This says that when the equation of motion is invariant under some ( ex.
gauge ) transformation, it's called gauge symmetry, which is the basis of Higgs. This gauge is NOT physics but an artificial concept ( see this p.13 ). Under a useless theory, they needed to create an imaginary target (= symmetry ). They extended it to "matrix" form ( this p.2 = SU(2) symmetry ). SU(2) means the weak force, and SU(3) means fractional-charge quarks, which cannot be isolated, so they are unreal. When these particles have a "mass" ( term ), this artificial symmetry is broken. So they transfer the "mass" term to other Higgs equations ( this p.6 ).  This is the reason Higgs is necessary.  Nonsense. Let me remind you that this symmetry has NO physical ground, so Higgs and quarks are just artificial math concepts with NO reality ! Even after Higgs and quarks were discovered (← ? ), our daily lives have NOT changed at all.  It's safe to say these doubtful particles don't exist, except in the media ( this ).

Beta decay by the weak force is unreal. [ A neutron decays into a massive W boson of 80 times the proton mass ? ]
(Fig.57)  ↓ This heavy W boson violates energy conservation.
The neutron is unstable; it decays into a proton and an electron within 10 minutes.  This process is called "beta decay". The problem is that the W boson, which they claim mediates this beta decay, is 80 times heavier than a proton ! This is very strange.  Because they claim an initial neutron decays into a proton (+) and a very heavy W boson (-). The mass difference between a neutron and a proton is very small, and can never reach "80 times the proton" mass ! So they start to claim this very heavy W boson is virtual (= not real ), appearing for only such a short time that we cannot detect it. This "far-fetched" interpretation is the present particle physics. Even in Higgs decay, this W boson is virtual, lacking reality. So it's impossible to say the standard model "giving up reality" is the most successful theory.  In fact, the LHC cannot detect correct energies.

Why is "antimatter" research useless ?
[ Positron emission = electron capture !? ]
(Fig.58)  ↓ Positron emission is impossible.
Why did only antimatter disappear in our world ?  Is research on antimatter really worth exorbitant tuition and education ? They say antimatter is useful (← ? ) as PET in hospitals. The point is that they do not detect antimatter but the emitted electromagnetic wave. In fact, "positron emission" in PET can be replaced by real "electron capture", because they both have the same effect on a nucleus ( this p.3 ) ! In the Na nuclide, they argue both positron ( β+ ) emission and electron capture produce the same Ne.  ← The emitted "light" energy is the same, indistinguishable. Positron emission is unrealistic, because a proton would decay into a heavier neutron, causing a "fantasy" perpetual-motion machine ! So real "electron capture" is what actually happens in PET instead of unreal "positron emission".  = antimatter is useless.

Antimatter production = "momentum" is gone ? [ When the light changes into (anti)matter, its momentum is gone. ]
(Fig.59)  Light → positron + electron at rest ?
It is said antimatter can be produced from high-energy light (= γ rays ).  But we cannot generate it from light alone ! They claim a collision between accelerated electrons and nuclei is needed to generate antimatter.  The light involved in antimatter is a virtual photon. So "antimatter is produced from high-energy (real) γ rays" is misleading. Furthermore, antimatter disobeys energy and momentum conservation ! When a light (= energy 2mc² ) produces a pair of positron and electron at rest, the initial light momentum is gone, because the resultant pair is stationary. The incident light always has momentum (= p ).  But after the light spends all its energy in producing a pair of particles at rest, the initial momentum is missing ! Collisions among electrons and nuclei generate a large number of unrelated particles.  We cannot measure the trajectory of each particle independently in magnetic fields.
So, random Coulomb scattering among countless unrelated particles is one of the main reasons they misinterpret unrelated ones as imaginary antimatter.

Why is Einstein relativity worthless ? [ General relativity is too faint, doubtful and useless. ]
(Fig.60)  Only 0.01° per century is relativity ?
Though Einstein is still celebrated even after 100 years have passed, his theory is one of the main factors behind worthless university degrees. His general relativity is too faint, so it's useless and doubtful. For example, is the advance of Mercury's perihelion useful for our daily life ? They say general relativity can explain a slight change (= 0.01° per 100 years ! )  But this change is too small to believe, so it is useless, despite the media hype. It's impossible to know the correct mass ( distribution ) and shape of each star, which influence this slight change over 100 years ! Also in the pulsar, which is 21000 light years away, its orbital change is too small to believe ( only 0.000076 seconds per year ! ). Various other factors and artificial parameters can affect these interpretations of very faint general relativity.  So it is doubtful.

Einstein relativity is useless in GPS. [ The atmosphere has a greater influence, so relativity is meaningless. ]
(Fig.61)  Without Einstein,  GPS works "correctly".
So all general relativistic effects made no contribution to us, and they are worthless to teach.  Gravitational deflection of light is natural due to attracted dust around stars. Is GPS really unavailable without Einstein relativity ? They argue GPS needs a relativistic correction of only 38 microseconds per day ! This effect is too small to believe. And relativistic clock time cannot avoid the fatal twin paradox.  The point is that Einstein relativity is useless also in GPS. Because the variable atmosphere (= medium ) around the earth has a great influence on the GPS electromagnetic signal and its velocity change. So even if relativity is right, we cannot predict the GPS time correction !
We have to rely on an empirical model for correction of the variable atmospheric effect. Or we can use atmospheric time correction at some known location. The discrepancy between clocks on the earth and the satellite can be corrected by the 4th satellite ( not general relativity ! ). The atomic clock is based on the frequency (/s) of the electromagnetic wave emitted by the Cs transition between very small split energy levels. It's no wonder that the density difference in the atmospheric medium ( on the earth and on the satellite ) affects these energy levels or the light's oscillating frequency slightly.

Black hole cannot be formed ! [ Infinite time is needed to form a black hole, so it is impossible. ]
(Fig.62)  Time stopping on the black hole prevents its formation.
What does a black hole look like ?  Like this or this ? Unfortunately these images are all just fiction, existing only inside the media. The reason for the name "black hole" is that no light can get out, so a black hole cannot be seen directly.  There is NO experimental proof. Furthermore, the clock time stops on the surface of a black hole from the viewpoint of a distant observer ( on earth ). So it needs infinite time to form a black hole from a collapsed star. It means black holes do NOT exist now, different from these claims. The black hole is one of the largest scams in science history. In fact, this bogus black hole just reflects the basic flaws of the current physics, which does harm to all students' careers !

A star becoming a black hole needs infinite time ! [ A star collapses to a denser black hole ? ← It takes ∞ time. ]
(Fig.63)  Impossible to form a black hole within the age of the universe.
Einstein's general relativity gives the famous relation indicating that stronger gravity slows clock time.  M is the mass of the black hole. For a star with mass M to become a black hole, it needs to contract to some radius r.  But as the star becomes denser, the time on its surface runs slower. To be a black hole, it needs to be dense enough to stop the clock time ! It takes infinite time, so the black hole cannot be formed ( this #2 ).
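The famous relation invoked just above is, in standard notation, the Schwarzschild time-dilation factor; a sketch in the usual symbols (G is Newton's constant):

```latex
d\tau \;=\; dt\,\sqrt{1-\frac{2GM}{rc^{2}}}\,, \qquad r_{s}=\frac{2GM}{c^{2}}
```

Here dτ is the clock time on the star's surface at radius r and dt is the distant observer's time. As the collapsing star's radius r approaches r_s, the square root goes to zero; this is the "time stops on the surface, so formation takes infinite distant-observer time" behavior the argument above relies on.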
So if you accept this black hole, you must give up the Big Bang theory, which claims the universe is 13.8 billion years (= finite ) old. The present physics claims the oldest black hole formed 900 million years after the Big Bang.  But that's impossible, as I said. Of course, if many black holes had existed from the beginning of the Big Bang, our earth would already have been swallowed into one of them. And it contradicts the nucleosynthesis of the present Big Bang theory. So the black hole is just a scam, destroying new students instead of stars !

Big Bang, expanding universe is fiction. [ Faster-than-light expansion = Big Bang is nonsense. ]
(Fig.64)  Uniform microwave cannot be a "remnant" of the early universe !
Though the present cosmology claims our universe is expanding, the earth and the sun are not expanding.  So the Big Bang theory is too good to be true. Surprisingly, it claims our universe is expanding faster than light ! Strangely, this expansion energy is not diluted (= minus pressure ? ). All these researches rely on the unrealistic assumption that the microwave (= CMB ) filling the universe is the remnant of the early universe. The point is that this cosmic microwave is too uniform and isotropic, with extremely small variation ( ± 0.00003 Kelvin ), indicating a uniform medium. Thinking commonsensically, it's impossible that all this very weak microwave remains intact for 13.8 billion years from the early universe ! Not only microwaves but also high-energy gamma rays fill all space. The fact that the earth is moving through the cosmic microwave at 370 km/s indicates a medium moving with the earth. The red-shift (= longer light wavelength ) from distant stars means NOT expansion but the light losing energy.

Gravitational waves do NOT exist. [ The gravitational wave pseudotensor contradicts Einstein. ]
(Fig.65)  We can choose a "convenient" gravitational pseudotensor !
It is often said that the gravitational wave, ripples in spacetime, is the result of Einstein's general relativity.  But it's a big lie.
In fact, general relativity has NO concept of energy conservation.  So they created the gravitational wave as a fake "pseudotensor". This pseudotensor has nothing to do with Einstein relativity. So many artificial gravitational waves were invented ( this p.2 ). Surprisingly, this gravitational wave energy (= pseudotensor ) vanishes depending on the observer's motion (= coordinate ).  See this p.1, this p.1.  So gravitational waves do NOT exist. Read this website !  Physicists just choose a convenient coordinate (= observer's motion ) for experimental results.  So Einstein himself said "gravitational waves do NOT exist !" In conclusion, the many different choices of gravitational pseudotensors make them useless ( this p.2, this p.2 ). Different artificial gravitational wave pseudotensors give different energy values depending on different space conditions (= coordinate, this p.3, this p.17 ). So the media should stop misleading expressions like "Einstein's greatest prediction" about gravitational waves.

Unreal physics hampers all science. [ All students pay exorbitant tuition for useless science ! ]
(Fig.66)  Many-worlds and quasiparticles are a basis of science ?
Rising university fees are the biggest problem all over the world. Even if you study biology or physics, the degrees are useless, causing skill mismatch. I wonder why almost NO governments question whether universities really teach useful (?) things.  Is getting the Nobel prize everything ? Without the Nobel prize, is science in university of no use ? The media hiding the true Einstein paradoxes is also destroying students' careers. The main reason is that the basic physics lacks reality ( see many worlds ). The Schrodinger equation cannot solve multi-electron atoms, so it is useless. Electron spin lacks reality; its spinning speed exceeds light speed ( this p.2 ). So quantum mechanics relies on unreal quasi and virtual particles ! If the basic physics lacks reality, all applied science is hampered !
So all students, including lawyers, are forced to pay exorbitant tuition for nothing.

Everyday quantum mechanics ? [ The only hope, tunneling, doesn't need quantum mechanics. ]
(Fig.67)  Tunneling happens only in a very short ( ~nm ) barrier.
Quantum mechanics is used every day in smartphones ? But the present physics lacks reality.  They mention the quantum tunnel. Though quantum tunneling argues an electron can tunnel through some barrier (= insulator ), the definition of this "barrier" is very vague. The point is that quantum tunneling doesn't mean a ball penetrating a wall ! It's just like a point-like electron passing through some empty vacuum (= insulator ? ). In fact, the length of this barrier (= insulator ) needs to be very short (= nanometers ! ) to cause tunneling. See the scanning microscope ( ~1 nm ) and the transistor ( 12 nm ). It's natural that some electrons pass a very short "insulator" ( including large empty space ) under some voltage. So "everyday Einstein and quantum mechanics !" is false advertisement for universities to justify exorbitant tuition !

Entangle = spooky action is a big lie. [ The moment A is known, B's state is determined = spooky ? ]
(Fig.68)  Entanglement is a far-fetched interpretation.
Quantum teleportation and computers use parallel worlds. This parallel-world idea is just fantasy, so the quantum computer is reduced to just a tool for universities to collect money from taxes and people. Entanglement is just a "classical" phenomenon, different from this claim. They make two spins the same by illuminating them with light in Fig.68. So when spin A is "up (down)", spin B is always "up (down)". These spin up and down states just mean two energy levels ( no spin is actually seen ). In this state, when we measure A and know its state is "up",  B's state is determined as "up" instantly (= faster than light ? ). Unfortunately, this is not a "superluminal" process at all. These A and B states are just "up"-"up" before measurement. There is NO spooky superluminal action (= nonlocal ) between the two spins.
These states are just classically manipulated by illuminating them.

Entanglement relies on fantasy parallel worlds ! [ Misinterpreting "unknown" as "parallel worlds". ]
(Fig.69)  Entanglement is just a classically "unknown" state.
In Fig.69, we don't know whether "up-up" or "down-down" in the Be+ and Mg+ energy levels.  This is just a classically unknown state. Surprisingly, the present physics intentionally misinterprets these "unknown" states as "parallel worlds = superposition". "Superposition" means a grotesque cat can be "dead" and "alive" at the same time, where the moment we know Be+ is "up", Mg+ is determined as "up". Though they claim this determination process is superluminal (= spooky ? ), these ions are just "up-up" just before the measurement, classically. So "entanglement" and "quantum computer" rely on the far-fetched idea of unreal parallel worlds to claim they are non-classical (← ? ) phenomena. There is NO mystery here.  They are just "classical" phenomena.

Old Bohr's helium failed = "destructive" interference. [ A 1-de-Broglie-wavelength orbit cannot have two electrons. ]
(Fig.70) Old Bohr's helium.   Two de Broglie waves cancel each other.
In old Bohr's helium, two electrons are moving on opposite sides of the nucleus in the same circular orbit (= one de Broglie wavelength ). Considering the Davisson-Germer interference experiment, the two electrons of old Bohr's helium are clearly unstable. A 1-de-Broglie-wavelength orbit consists of a pair of opposite wave phases (= ±ψ ), which cancel each other by destructive interference. Due to the Coulomb repulsion between the two electrons, one is always on the opposite side of the other, where the opposite de Broglie wave phases cancel each other. Actually, the old Bohr's helium of Fig.70 gives the wrong ground state energy of helium, when you calculate it. Old helium gives a total energy of -83.33 eV, which is a little lower than the actual value of -79.005 eV (= 1st + 2nd ionization energies, this ).

All old helium models failed.
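The -83.33 eV figure quoted for the opposite-sides (old Bohr) helium can be checked in a few lines. In that geometry the partner electron sits at distance 2r, so its repulsion e²/(2r)² cancels a quarter of one nuclear charge unit, and the one-wavelength quantization then gives a hydrogen-like energy with an effective charge. A minimal sketch (the variable names and the Rydberg value are mine):

```python
RYDBERG_EV = 13.6057   # hydrogen ground-state binding energy scale (eV)

# Opposite-sides Bohr helium: the partner electron at distance 2r
# cancels e^2/(2r)^2 = (1/4) e^2/r^2 of the 2e+ nuclear attraction.
z_eff = 2.0 - 0.25

# Two electrons, each hydrogen-like with effective charge z_eff
total_energy = -2.0 * RYDBERG_EV * z_eff**2
print(round(total_energy, 2))   # -83.33, below the actual -79.005 eV
```

This reproduces the stated -83.33 eV, slightly lower (more bound) than the measured -79.005 eV, which is the discrepancy the text uses against the old model.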
(Fig.71) Various old Bohr's helium atoms.
In the 1910s - 1920s, Lande (= outer and inner orbits, Fig.71A ) and Langmuir (= two parallel orbits, Fig.71B; two linear oscillating orbits, Fig.71C ) failed in finding the true helium model. Kramers (= 120-degree crossed orbits, Fig.71D ) and Heisenberg (= coplanar and inclined orbits, Fig.71E,F ) failed in Bohr's helium, too. No old helium model could explain the correct ground state energy, stability, and closed-shell property of the helium atom. Because at that time, they did not have computers to calculate the realistic three-body helium atom.

Calculation of the correct and new Bohr model helium. [ Why helium is stable and doesn't form compounds ]
(Fig.72) Two de Broglie waves cross perpendicularly = stable.
To avoid the problem of the vanishing de Broglie wave in the section above, we suppose another model as shown in Fig.72. This new Bohr's helium consists of two electron orbits which are perpendicular to each other.  Each orbit is one de Broglie wavelength. If the two orbits are perpendicular to each other, their wave phases are independent of each other and can be stable, not canceling each other. If the electron tried to obey the repulsive Coulomb force completely and lay its orbit flat, the destructive interference of their de Broglie waves would expel the electron. So as shown in the Davisson-Germer experiment, the interference of the two de Broglie waves forces them to cross perpendicularly to each other.

Stable de Broglie waves determine the helium structure.
(Fig.73) Old Bohr's helium = electrons are expelled.   New Bohr helium = stable.
In a 1 × de Broglie wavelength orbit, the opposite sides of the nucleus contain the opposite wave phases, which cancel each other. When two de Broglie waves are just perpendicular to each other, they can avoid destructive interference between these opposite phases. There is NO more space for a third electron to enter this helium (= the Pauli exclusion principle and NO magnetic field can be explained ).
We succeeded in expressing the Pauli exclusion principle in all atoms using this de Broglie wavelength.

New Bohr's helium can explain the stability due to its "neutral" distribution.
(Fig.74) New Bohr's helium (= A. ) is not electrically polarized.
As you know, the helium atom does NOT form any compounds with other atoms, and has the lowest boiling point of all atoms. Unfortunately, the quantum mechanical electron spin has NO power to stop compound formation, because the magnetic moment of spin is very weak in comparison with the Coulomb force. The spin interaction is as small as the fine structure level ( < 0.0001 eV ). So ONLY de Broglie waves are left to explain this important stability and independence of helium. As shown in Fig.74 left, when the two electron orbits are perpendicular to each other, the space around the 2e+ nucleus becomes just neutral. In this case, the two negative electrons are equally distributed around the 2e+ nucleus both in the vertical and horizontal directions. In other helium models, the space is electrically polarized, and their wave phases easily become chaotic when other atoms come close to them.

New Bohr's helium can explain the Pauli exclusion principle.
(Fig.75)  Pauli exclusion principle by de Broglie wave interference.
Of course, there is NO space for a third electron to enter the Fig.72 model (= Pauli exclusion principle ). Because, if the third electron entered the orbit of 1 × de Broglie wavelength in this new Bohr's helium, it could not be perpendicular to both of the two other waves. On the other hand, in old Bohr helium, the third electron of Li can enter this orbit, because it does NOT depend on cancellation between de Broglie waves. Spin-spin magnetic dipole interactions are too weak to explain the strong Pauli exclusion principle. For example, the fine structure of hydrogen is ONLY 0.000045 eV. Spin-spin coupling is weaker than that. As a result, only de Broglie wave interference is left for describing the strong Pauli exclusion principle, also in the bonding number.
Two orbits are perpendicular to each other, avoiding "destructive" interference.
(Fig.76) Two same-shaped orbital planes are perpendicular to each other.
Next we calculate the new helium using a simple computer program. Fig.76 shows one quarter of the whole orbits. We suppose electron 1 starts at ( r1, 0, 0 ), while electron 2 starts at ( -r1, 0, 0 ).
(Fig.77) The two electrons have moved one quarter of their orbits.
In Fig.77, electron 1 is crossing the y axis perpendicularly, while electron 2 is crossing the z axis. When the two orbits cross perpendicularly, the motion pattern shown in Fig.76 and Fig.77 is the most stable one (= the potential energy is the lowest ). I thank Tao greatly for making a YouTube video of this helium !

Computational methods.
Here we investigate how the electrons of helium are moving by calculating the Coulomb force among the two electrons and the nucleus at short time intervals. The computer programs in JAVA ( version 1.5.0 ), simple C, and Python ( 2.7 ) to compute the electron orbit of helium are shown in the links below. The program to calculate the electronic orbital of the helium: Sample JAVA program, C language program, Python program. As shown in Fig.76 and Fig.77, the helium nucleus is at the origin. Electron 1, initially at ( r1, 0, 0 ), moves one quarter of its orbit to ( 0, r2, 0 ), while electron 2, initially at ( -r1, 0, 0 ), moves to ( 0, 0, r2 ). As the meter and second are rather large units for the measurement of atomic behavior, here we use new convenient units.
(Fig.78) New units of time and length.
From Fig.78, the acceleration is converted into these units. If you copy and paste the above program source code into a text editor, you can easily compile and run it. When you run this program ( for example, JAVA ) in a command prompt, the following sentences are displayed on the screen. First we input the initial x-coordinate r1 = r (in MM) of electron 1 ( see Fig.79 ), and press the "enter" key.
In Fig.79, we input "3060", which means the initial x coordinate of electron 1 is 3060 MM = 3060 × 10^-14 meter. The initial x coordinate of electron 2 automatically becomes -3060 MM. Next we input the absolute value of the total energy |E| (in eV) of helium. In Fig.80, when we input "79.0" and press the enter key, it means the total energy of this helium is -79.0 eV.
(Fig.81) Initial states. "r" is the initial x coordinate of electron 1.
From the inputted values, this program automatically calculates the initial velocity of electron 1 ( = 2 ) in the y ( z ) direction. The total potential energy (= V ) of the initial state of Fig.81 becomes
(Fig.82) Initial total potential energy V.
The first term on the right side of Fig.82 is the potential energy between the two electrons and the 2e+ helium nucleus. The second term is the repulsive potential energy between the two electrons.
(Fig.83) Initial velocity "v".
The total kinetic energy of the two electrons is given by the total energy ( ex. -79.0 eV ) minus the potential energy (= V ). So from the inputted values of Fig.80, we can get the initial velocity of each electron. The initial velocity of electron 1 ( 2 ) is in the y ( z ) direction.
(Fig.84) Change of velocity unit.
Using the new units of Fig.78, this program changes "m/s" into "MM/SS" for the initial velocity, because that is convenient when calculating each acceleration and de Broglie wave at intervals of 1 SS (= 10^-23 seconds ).

Computing the Coulomb force at short time intervals.
(Fig.85) Positions of the two electrons (= perpendicular and symmetric ).
At intervals of 1 SS, we compute the Coulomb force among the two electrons and the nucleus. When electron 1 is at ( x, y, 0 ), electron 2 is at ( -x, 0, y ) due to their symmetric positions ( see Fig.76 and Fig.77 ). So the x component of the acceleration ( m/sec² ) of electron 1 is
(Fig.86) x component of the acceleration.
where the first term is the Coulomb force between the nucleus and electron 1, and the second term is the force between the two electrons.
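The initial-state quantities of Fig.82 and Fig.83 can be sketched directly in Python (SI constants; energies handled in eV; the MM/SS unit change of Fig.84 is omitted here, and the function name `initial_velocity` is mine, not from the linked programs):

```python
import math

K = 8.9875517873681764e9    # Coulomb constant (N*m^2/C^2)
Q = 1.602176634e-19         # elementary charge (C); also joules per eV
M_E = 9.1093837015e-31      # electron mass (kg)

def initial_velocity(r1, total_energy_ev):
    """Initial speed (m/s) of each electron in the Fig.81 start state.

    Electrons at (r1,0,0) and (-r1,0,0) around the 2e+ nucleus:
    V = two electron-nucleus terms plus one electron-electron term (Fig.82).
    """
    v_pot = K * Q * (-4.0/r1 + 1.0/(2.0*r1))    # potential energy in eV
    ke_each = (total_energy_ev - v_pot) / 2.0   # kinetic energy per electron (eV)
    return math.sqrt(2.0 * ke_each * Q / M_E)

v0 = initial_velocity(3060e-14, -79.0)   # r1 = 3060 MM, |E| = 79.0 eV
```

For r1 = 3060 MM this gives roughly 3.9 × 10^6 m/s, directed along +y for electron 1 and +z for electron 2.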
(Fig.87) Distances among two electrons and nucleus. Due to the symmetric positions of the two electrons, when electron 1 is at ( x, y, 0 ), electron 2 is at ( -x, 0, z ), in which z = y. As a result, the distance between electron 1 and the nucleus is given by the first relation of Fig.87. The second relation is the distance between the two electrons. Calculation of acceleration in each direction. Considering the helium nuclear mass (= alpha particle), we use here the reduced mass (= rm ) except when the center of mass is at the origin. (Fig.88)  Reduced mass of one electron. See also reduced mass of three-body helium. In the same way, the y component of the acceleration (m/sec^2) is (Fig.89) y component of the acceleration. Based on the calculated values, we change the velocity vector and the position of the electrons. We suppose electron 1 moves only on the XY-plane, so the z component of the acceleration of electron 1 is not considered. If we consider all components of the Coulomb force on the electrons, the electron's motion becomes as shown in Fig.70. But in this state, the two electrons are packed into one orbit of one de Broglie wavelength, where the opposite phases (= ±ψ) of the de Broglie waves cancel each other (= destructive interference ). Number of de Broglie waves contained in each short segment. (Fig.90) De Broglie waves in each segment. We also calculate the de Broglie wavelength of the electron from its velocity ( λ = h/mv ) at intervals of 1 SS. The number of such waves ( each λ in length ) contained in that short movement segment is (Fig.91)  Number of de Broglie wavelengths in the short segment, where (VX, VY) is the velocity of electron 1 (in MM/SS ), the numerator is the distance moved (in meters) in 1 SS, and the denominator is the de Broglie wavelength (in meters). Here we use 1 MM = 10^-14 meter. The estimated electron's orbit is divided into more than one million short segments for the calculation.
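Fig.91's per-segment wave count reduces to m·v²·dt/h, since the distance moved in one 1-SS step is v·dt and λ = h/(m·v). A minimal sketch, assuming the MM/SS units defined above:

```python
import math

H_PLANCK = 6.62607015e-34    # Planck constant, J*s
M_E      = 9.1093837015e-31  # electron mass, kg
MM = 1e-14                   # the page's length unit, m
SS = 1e-23                   # the page's time unit, s

def debroglie_count(vx, vy):
    """Number of de Broglie wavelengths covered in one 1-SS step (Fig.91).
    (vx, vy) is the electron velocity in MM/SS; the count equals m*v^2*dt/h."""
    v = math.hypot(vx, vy) * MM / SS        # speed in m/s
    step = v * SS                           # distance moved in 1 SS, m (numerator)
    lam  = H_PLANCK / (M_E * v)             # de Broglie wavelength, m (denominator)
    return step / lam
```

At the few-10^6 m/s speeds involved, each step contributes only ~10^-7 of a wavelength, consistent with the orbit being divided into more than a million segments.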
When electron 1 has moved one quarter of its orbit and its x coordinate is zero (Fig.92), this program checks the y component of electron 1's velocity (= last VY), because "the last VY is zero" means the two electrons are periodically moving around the nucleus in the same orbits as shown in Fig.76 and Fig.77. (Fig.92) Computing results   ( input: 79.00 eV, r1 = 3060 MM ). After moving a quarter of the orbit, the program displays the above values on the screen. The initial r1 automatically increases with each 1/4-orbit calculation. VX and VY are the last velocity components of electron 1 ( MM/SS ). preVY is the y velocity 1 SS before the last VY. We pick up the values when this last VY is the closest to zero. (mid)WN means the total number of de Broglie wavelengths in one quarter of the orbit. (Fig.93) When the total energy is just -79.00 eV, the 1/4-orbit de Broglie wave count is 0.250006. This program gives results as r1 increases from the inputted value (ex. 3060) to r1+100 (= 3160). As shown in Fig.92, when r1 is 3074 MM, the last VY velocity of electron 1 becomes the smallest ( VY = 0.000000 ). This means that when r1 ( initial x coordinate ) = 3074 × 10^-14 meter, the electron orbits become just symmetric and the electrons move stably in the same orbits. In this case, the number of de Broglie wavelengths contained in a quarter of the orbit becomes 0.250006. So one orbit is 0.250006 × 4 = 1.000024 de Broglie wavelengths. ( ← NOT 1.000000 ) As shown in Table 1, when the inputted energy is -79.0037 eV, the de Broglie wave count becomes just 1.000000. Computing results agree with the experimental value. Table 1 shows the results in which the last VY is the closest to zero for different inputted total energies E. This result shows that when the total energy of the new Bohr's helium is -79.0037 eV, each orbital length is just one de Broglie wavelength. Table 1. Results.
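The whole quarter-orbit calculation can be sketched as follows (a simplified Python rendering, not the site's JAVA/C program: symplectic Euler stepping, the reduced mass applied uniformly, and dt = 1 SS = 1 × 10^-22 s as in the old fast program):

```python
import math

# Constants (SI)
E_CH = 1.602176634e-19
EPS0 = 8.8541878128e-12
M_E  = 9.1093837015e-31
M_A  = 6.6446573357e-27
H    = 6.62607015e-34
K    = E_CH**2 / (4 * math.pi * EPS0)  # e^2 / (4*pi*eps0)
RM   = M_E * M_A / (M_E + M_A)         # reduced mass, applied uniformly here

def quarter_orbit(r1, e_total_ev, dt=1e-22, max_steps=400_000):
    """Step electron 1 from (r1, 0) until x crosses zero (a quarter orbit),
    keeping electron 2 mirrored at (-x, 0, y). Returns (last vy, wave count)."""
    v_pot = -4*K/r1 + K/(2*r1)                    # initial potential energy (Fig.82)
    ke_each = (e_total_ev * E_CH - v_pot) / 2     # kinetic energy per electron
    x, y = r1, 0.0
    vx, vy = 0.0, math.sqrt(2 * ke_each / RM)     # start moving in +y
    wn = 0.0                                      # accumulated de Broglie waves
    for _ in range(max_steps):
        if x <= 0.0:
            break
        ra  = math.hypot(x, y)                    # electron-nucleus distance
        reb = math.sqrt(4*x*x + 2*y*y)            # electron-electron distance
        ax = (K/RM) * (-2*x/ra**3 + 2*x/reb**3)   # Fig.86
        ay = (K/RM) * (-2*y/ra**3 +   y/reb**3)   # Fig.89
        vx += ax*dt; vy += ay*dt                  # symplectic Euler step
        x  += vx*dt; y  += vy*dt
        wn += M_E * (vx*vx + vy*vy) * dt / H      # waves covered this step (Fig.91)
    return vy, wn

vy_last, wn = quarter_orbit(3074e-14, -79.0037)
```

The returned wave count should land near the quoted ~0.25 per quarter orbit and the final VY near zero; exact agreement depends on the stepping scheme and reduced-mass details, which the page does not fully specify.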
E (eV)     r1 (MM)   WN         WN × 4
-78.80     3082.0    0.250323   1.001292
-79.00     3074.0    0.250006   1.000024
-79.003    3074.0    0.250001   1.000004
-79.0037   3074.0    0.250000   1.000000
-79.005    3074.0    0.249998   0.999992
-79.01     3074.0    0.249990   0.999960
-79.20     3067.0    0.249690   0.998760

WN × 4 is the total number of de Broglie wavelengths contained in one round of the orbit. The computed ground state energy is -79.0037 eV. The experimental value of the helium ground state energy is -79.005147 eV (= 1st + 2nd ionization energies, Nist, CRC ). This result shows the relativistic correction (= resistance when closer to c ) to the energy = -79.005147 - (-79.0037) = -0.001447 eV. The theoretical ground state energy of the helium ion (He+) can be obtained from the usual Bohr model or the Schrodinger equation using the reduced mass. This value is -54.41531 eV. And the experimental value of the He+ ground state energy is -54.41776 eV (Nist). So the relativistic correction to the energy in the He+ ion is -54.41776 - (-54.41531) = -0.00245 eV. The theoretical ground state energy of the hydrogen atom (H) can likewise be obtained from the usual Bohr model or the Schrodinger equation using the reduced mass. This value is -13.5983 eV. And the experimental value of the H ground state energy is -13.59844 eV (Nist). So the relativistic correction to the energy in the hydrogen atom is -13.59844 - (-13.5983) = -0.00014 eV. New Bohr helium agrees with experimental values. The electron's velocity in the neutral helium atom is slower than in the helium ion, but faster than in the hydrogen atom. So the relativistic correction in the neutral helium atom should be between -0.00245 eV and -0.00014 eV. The above calculated value of -0.001447 eV is just between them ! As a control, we show programs for hydrogen-like atoms ( H and He+ ) using the same computing method as above. Try these, too. JAVA program ( H or He+ ) C language ( H or He+ ) Here we use the new unit ( 1 SS = 1 × 10^-23 second ) and compute each value at intervals of 1 SS.
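The three subtractions above can be checked directly (values copied from the text):

```python
# Relativistic corrections quoted in the text: experiment minus simple theory (eV)
corr_he  = -79.005147 - (-79.0037)   # neutral helium  -> -0.001447
corr_hep = -54.41776  - (-54.41531)  # He+ ion         -> -0.00245
corr_h   = -13.59844  - (-13.5983)   # hydrogen atom   -> -0.00014
```

The ordering corr_hep < corr_he < corr_h confirms the "just between them" claim: helium's correction lies between the He+ and H values.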
If we change this definition of 1 SS, the calculated total energy (E) at which the orbital length is just one de Broglie wavelength changes as follows.

Table 2.
1 SS = ? sec    Result E (eV)
1 × 10^-22      -79.00540
1 × 10^-23      -79.00370
1 × 10^-24      -79.00355
1 × 10^-25      -79.00350

This means that as the orbit becomes smoother, the calculated values converge to -79.00350 eV. The programs based on other definitions of 1 SS are as follows: Sample JAVA program ( 1 SS = 1 × 10^-25 sec; the calculation takes much more time ). Old sample JAVA program ( 1 SS = 1 × 10^-22 sec; fast, but the result and Eq. numbers are a little different ). New Bohr's helium satisfies 1 × de Broglie wavelength. (Fig.94) Hydrogen and Helium atoms. These orbits are all just one de Broglie wavelength. In this new helium, the two symmetric orbits crossing perpendicularly wrap the whole helium atom completely. The Bohr model hydrogen, which has only one orbit, cannot wrap the direction of the magnetic moment completely. This is consistent with the strong stability and the closed-shell property of helium. In helium, the opposite ( same ) phases of the two orbits move in the same ( opposite ) direction, which cancels the de Broglie wave effect (= magnetic field ) at a distance. New Bohr model holds good in all two- and three-electron atoms. Surprisingly, this new atomic structure of Bohr's helium is applicable to all other two- and three-electron atoms ( ions ). (Table 3) Calculation results of two-electron atoms (ions).
Atoms   r1 (MM)   WN × 4     Circular orbit   Result (eV)   Experiment   Error (eV)
He      3074.0    1.000000   -83.335          -79.0037      -79.0051     0.001
Li+     1944.5    1.000000   -205.78          -198.984      -198.093     -0.89
Be2+    1422.0    1.000000   -382.66          -373.470      -371.615     -1.85
B3+     1121.0    1.000000   -613.96          -602.32       -599.60      -2.72
C4+     925.0     1.000000   -899.67          -885.6        -882.1       -3.50
N5+     788.0     1.000000   -1239.8          -1223.3       -1219.1      -4.20
O6+     685.3     1.000000   -1634.38         -1615.44      -1610.70     -4.74
F7+     607.3     1.000000   -2083.3          -2062.0       -2057.0      -5.00
Ne8+    544.5     1.000000   -2586.7          -2563.0       -2558.0      -5.00

Table 4 shows three-electron atoms such as lithium.

(Table 4) Calculation results of three-electron atoms (ions).
Atoms   r1 (MM)   WN × 4     Result (eV)   Experiment   Error (eV)
Li      1949.0    1.000000   -203.033      -203.480     0.47
Be+     1427.0    1.000000   -388.785      -389.826     1.04
B2+     1125.0    1.000000   -635.965      -637.531     1.56
C3+     928.0     1.000000   -944.46       -946.57      2.11
N4+     790.5     1.000000   -1314.25      -1317.01     2.76
O5+     688.0     1.000000   -1745.70      -1748.82     3.12
F6+     609.4     1.000000   -2237.60      -2242.21     4.61
Ne7+    546.0     1.000000   -2791.15      -2797.12     5.97

About the calculation method, see this page. This excellent agreement with experimental results shows this new helium and molecular model is true. Bohr model Neon and molecular bonds. [ Neon has 8 valence electrons in 2 × de Broglie wavelength ( n = 2 ) orbits. ] (Fig.95)  Eight valence electrons = regular hexahedron. What determines the number of atomic valence electrons? Neon is a stable noble gas, and has eight valence electrons in n = 2 orbitals. Considering the symmetric distribution due to repulsive Coulomb forces, a regular hexahedron (cube) is natural.     ♦ New Bohr's Neon,   Carbon bonds.,   Biot-Savart.     ♦ de Broglie waves determine all atomic structures.     ♦ Truth of electromagnetic waves.     ♦ Four fundamental forces. New Bohr model Neon. (Fig.96)  Each electron is harmonizing with the other de Broglie waves. Fig.96 shows the periodic movements of all eight electrons in Bohr model Neon. The 8 electrons of neon can move smoothly, NOT crashing into each other.
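The "Circular orbit" column of Table 3 appears to match the elementary Bohr two-electron circular model, E = -2·Ry·(Z - 1/4)², in which both electrons sit on opposite ends of one circle and each screens the other by 1/4 of a charge. This is my inference from the numbers, not something the page states explicitly:

```python
RY = 13.6057  # Rydberg energy in eV (reduced-mass corrections ignored)

def circular_two_electron(z):
    """Naive Bohr circular model for a two-electron ion of nuclear charge z:
    both electrons share one circle, each screening the other by 1/4."""
    return -2.0 * RY * (z - 0.25)**2
```

For He (Z = 2), Li+ (Z = 3), and Be2+ (Z = 4) this reproduces the -83.335, -205.78, and -382.66 eV entries of the column.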
And all four de Broglie waves can cross each other perpendicularly, avoiding destructive interference ! Generalized rules in atomic orbitals and de Broglie wavelength. (Fig.97)  Maximum orbits = midpoint lines + 2 (= two perpendicular orbits ). When each orbit crosses another orbit perpendicularly, they can avoid destructive interference. When atoms contain more than two orbits, the other orbits must be on the midpoint lines (= zero phase ) so as NOT to be disturbed. So the maximum number of orbits in Ne becomes "4" (= 2 × perpendicular + 2 × midlines ). A 4 × de Broglie wavelength orbit contains 4 midlines, so the total orbital number of Kr becomes "6". Odd numbers of orbits ( "3", "5", "7" ) are asymmetric and unstable. So the orbital numbers of "Ar" (= 3 × wavelength ) and "Xe" (= 5 × wavelength ) remain the same as "Ne" and "Kr". So we can get the generalized common rules, "perpendicular orbits" and "avoiding destructive interference", in all atoms based on de Broglie wavelength.   See also this page. Japanese version 2016/1/10 updated. Feel free to link to this site.
Eigenfunction, Eigenvalue, Wave Function and collapse 1. Dec 29, 2007 #1 Reading Sam Treiman's http://books.google.de/books?id=e7fmufgvE-kC he nicely explains the dependencies between the Schrödinger wave equation, eigenvalues and eigenfunctions (page 86 onwards). In his notation, eigenfunctions are [itex]u:R^3\to R[/itex] and the wavefunction is [itex]\Psi:R^4\to R[/itex], i.e. in contrast to the eigenfunctions it depends on time. Then on page 94 he says: With "state of the system" he refers of course to [itex]\Psi[/itex], so during the measurement, the jump or collapse is from [itex]\Psi[/itex] to [itex]u[/itex]. The one thing I don't understand here is: [itex]u[/itex] does not depend on time, so how is the development of the new [itex]\Psi[/itex] over time governed? Is it that every solution of the Schrödinger equation is uniquely determined as soon as the value at just one point in time is known?
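For what it's worth, a sketch of the standard resolution (my summary, not quoted from Treiman's text): the Schrödinger equation is first order in time, so specifying [itex]\Psi[/itex] at one instant does fix it at all later times, and a measured eigenfunction simply evolves by a phase factor:

```latex
% Post-measurement state: the eigenfunction only picks up a phase
\Psi(\mathbf{r},t) = u_n(\mathbf{r})\, e^{-iE_n t/\hbar}
% General case: expand the state at t_0 in the (complete) set of eigenfunctions;
% the first-order-in-time Schrodinger equation then fixes all later times
\Psi(\mathbf{r},t_0) = \sum_n c_n\, u_n(\mathbf{r})
\;\Longrightarrow\;
\Psi(\mathbf{r},t) = \sum_n c_n\, u_n(\mathbf{r})\, e^{-iE_n (t-t_0)/\hbar}
```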
General relativity General relativity or the general theory of relativity is the geometric theory of gravitation published by Albert Einstein in 1915. It is the current description of gravitation in modern physics. General relativity generalises special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or spacetime. In particular, the curvature of spacetime is directly related to the four-momentum (mass-energy and linear momentum) of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of partial differential equations. Many predictions of general relativity differ significantly from those of classical physics, especially concerning the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light. Examples of such differences include gravitational time dilation, the gravitational redshift of light, and the gravitational time delay. General relativity's predictions have been confirmed in all observations and experiments to date. Although general relativity is not the only relativistic theory of gravity, it is the simplest theory that is consistent with experimental data. However, unanswered questions remain, the most fundamental being how general relativity can be reconciled with the laws of quantum physics to produce a complete and self-consistent theory of quantum gravity. Einstein's theory has important astrophysical implications. For example, it implies the existence of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape—as an end-state for massive stars.
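The Einstein field equations mentioned above take the compact form (with Λ the cosmological constant; sign conventions vary between textbooks):

```latex
% Einstein field equations: spacetime curvature (left) sourced by
% the energy-momentum of matter and radiation (right)
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
```

Here G_{\mu\nu} is the Einstein curvature tensor, g_{\mu\nu} the metric, and T_{\mu\nu} the stress-energy tensor of matter and radiation.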
There is evidence that such stellar black holes as well as more massive varieties of black hole are responsible for the intense radiation emitted by certain types of astronomical objects such as active galactic nuclei or microquasars. The bending of light by gravity can lead to the phenomenon of gravitational lensing, where multiple images of the same distant astronomical object are visible in the sky. General relativity also predicts the existence of gravitational waves, which have since been measured indirectly; a direct measurement is the aim of projects such as LIGO and NASA/ESA Laser Interferometer Space Antenna. In addition, general relativity is the basis of current cosmological models of a consistently expanding universe.
Saturday, May 10, 2014 What Is Time? Defining time seems tricky compared to matter and action, but in many ways, it is overly simplistic views of matter and action that make time seem rather complex in comparison. Time is just a property of a source, just like color or size or distance; all of these emerge from matter and action, and therefore all sources tell time. While it is clear that clocks tell time, it is perhaps not as clear that all other sources also tell time, and that time, just like color, is a property of all sources, just as red is a property of an apple. Among the properties of the apple are its color and its ripeness and, of course, ripeness tells time for the apple. Just like the amplitude and phase coherence that are the two dimensions of matter, time likewise has two dimensions of amplitude and phase coherence. There is a long history from ancient Greece of defining two different kinds of time: Kronos as a kind of absolute time and Kairos as a kind of relative feeling of time. These two dimensions of time along with the two dimensions of matter provide realities for quantum charge and gravity that relate time and matter with the Planck constant, h. An objective atomic Kronos time is an interval time dimension while a subjective decoherence Kairos action time is the second time dimension. While we think of sources as existing without change in one place in space until something happens, sources are always oscillating, including even the universe itself. Sources therefore change and move and evolve, albeit sometimes very slowly at the limit of 0.26 ppb/yr, and time emerges from the action of that change. While we imagine inaction as the complement to action, action exists only as less or more and an evolving universe is always in action. In a quantum universe, there is never really complete inaction and inaction simply means that a source matches our action or motion.
Inaction means that a source evolves just like time and just like matter and so there is never really inaction, just less or more action. Each of the axioms of matter and action really have the same kinds of trickiness and the definitions of matter and action use words that simply mean the axioms as identities. For example, saying matter is a static dimension then defines matter in terms of time since dynamic is another word for changes in time. That circularity is even more confusing than time was in the first place. Saying action means a sequence of events is likewise circular since sequence is another word that means time. Figure 1 shows Cartesian interval time as a line of either infinitely divisible moments or finite moments running from past to present to future. Time in this sense is just like a Cartesian displacement in space and this Cartesian view of time is part of general relativity where there is a continuous time with a determinate future. Block time is very similar except time is now made up of finite moments or intervals that run like the frames of a movie camera from past to future. The future for block time can be determinate and just waiting for the present to catch up or there can be many futures. Relational source action time is an alternate view that tells time by the way that a source is put together as Fig. 2, which is a more primitive relational observer action time. This fossil view of time means that moments of matter come together with past actions and form a source of the present moment from any number of paths. By sensing the source with a matter spectrometer like our consciousness and knowing about the source’s fossil past, an observer can then tell time with any source. There are then a large number of possible futures associated with that present source, but there is no determinate future since all bonding is subject to quantum uncertainty. Sources with very highly structured and periodic actions tell time as clocks in Fig. 
3 as relational source action time. Clocks show very regular action and therefore keep a very precise interval time with moments of matter as long as the moments are very short and periodic. However, source actions are reversible since they are built by quantum bonds and it is therefore necessary to impose an overall decoherence action time in order to point the arrow of interval time. This decoherence or action time can be thermal as in a clock power source running down or a person aging, or indeed the decoherence can be intrinsic and the whole clock shrinks. A universal decoherence points an action time direction as well as a universal quantum force from both gravity and charge. Axioms really defy further definition by any single term and so axioms are self-evident characteristics of the universe. Time emerges from the two axioms of matter and action and the trimal of matter, time, and action closes our universe. Matter is then a naturally more static dimension while action is a naturally more dynamic dimension and time emerges as the differential of action with matter, dS/dm. On the one hand, we think of interval time as a single static dimension of the past, since the past is like the frozen hands of a clock and does not seem to change except in the interval of a present moment. On the other hand, we also think of action time as a dynamic dimension that is all about the present moment, which changes and evolves into any number of possible futures. Just as we watch the second hand intervals of a clock evolve into a seemingly determinate future, we also imagine time in our experience of action that involves many possible futures. Our past is a series of moments or intervals like the frames of a video camera, but the present is an action without a determinate future or fate awaiting us in predetermined future frames. But is a moment of interval time accumulating as a past memory or a moment of action time counting down into a possible future?
After all, we know time as both the predictable frames of a DVD movie and the unpredictable moments of a life stage play. A moment of time is like the tick of a clock or the recursive neural cycle of our brain or a heartbeat. Unlike the past events of interval time, a moment of action time is a dimension of the present. Action time allows any number of possible futures and the future is not therefore predetermined. Action time as a dimension is a very intuitive and understandable concept of a slowly changing universe and interval time is likewise rather clear in defining the present moment with the short period of atomic time. While action time is very slow, the intervals of atomic time are very fast and that is a little confusing since we really only seem to know time as a single fast atomic time dimension that is a past experience of action. In other words, time is in some sense two dimensional, but our memory of time is only of a norm or of a proper time. Yet there is both an action time and an interval time for all source change as orthogonal dimensions.  Roughly speaking, action time represents the aether decoherence of past, present, and future while interval time represents the dynamic and immediate atomic time for an action in the present. Although we think of matter as largely static, just like time, all matter has both a slowly changing as well as a rapidly changing dimension. Our concept of matter as a single dimension of mass comes from the measurement of the gravity mass for a source, but the mass of a source is also in constant evolution as it exchanges matter with other sources. This second dimension for matter is a more dynamic dimension that is how sources exchange matter with each other.  Matter as an axiom, you see, is ultimately defined only by both time and action.  Dipolar light is an oscillation of charge and light's color and polarization oscillate orthogonal to its propagation direction. 
Light is therefore a matter wave spectrum that is the dynamic exchange that bonds sources to observers in the universe. When light propagates, there is a complementary biphoton exchange that bonds the matter left behind. Propagating light always has an entangled complementary photon and that biphoton quadrupole is in exchange with the boson matter from which space emerges. When an observer absorbs light from a source, that exchange bonds observer to source for some period of time. Pairs of light photons called biphotons  also represent a coherent quadrupole of neutral oscillation. The quadrupoles of biphoton light represent the propagation of matter amplitude as neutral gravity force. Each source also exists as a matter wave, both as a propagation of matter amplitude and an oscillation of that amplitude in time orthogonal to its propagation. The oscillations of matter waves from sources are extremely high frequency and therefore do not often impact our prediction of action. We like to think of a source that is not moving as stationary, but even stationary sources are comoving with an inertial frame of reference and undergo constant exchange and action with the boson matter of the universe. Primal axioms or beliefs are a necessary and sufficient basis for closing the laws of the universe and anchoring the spectrometer of consciousness. We each need such primal beliefs to anchor and calibrate the spectrometers of our consciousness. Sometimes people feel as though they have no primal beliefs, but that is simply not true. The matter spectrometer of consciousness measures certain properties of sources, but first of all there must be primal beliefs to anchor the qualia of conscious thought. Qualia are the measured properties of sources, a red color for example, and our memory of sources relates them to other sources according to their common qualia. 
Consciousness only begins when we calibrate our matter spectrometer with beliefs or axioms, because it is those beliefs that allow us to make sense out of the world. Neural recursion is the basic mechanism of thought, but without a set of primal beliefs, we cannot make sense out of the world. In order for people to engage in a useful discussion about the universe, they must have an understanding and agreement about their primal beliefs. Without some understanding of each other's primal beliefs along with a common language and how their matter spectrometers are calibrated, people usually end up arguing about their primal beliefs even though the discourse was ostensibly about some other attribute of reality. For example, a discourse about a philosophy of time will not be very useful unless there are complementary and compatible philosophies of the other axioms of matter and action, the other primal beliefs of the universe. Before discussing time, we need some manner of defining the lonely nothing that we call empty space and so there would need to be a philosophy of space as well. Two primal beliefs are the fundamental dimensions or axioms of reality from which a third emerges and it is only possible to define each primal with the other two primals. Since time is primal, time is defined as a combination of the other two primals, matter and action. We often use an action, such as the tick period of a clock, to define interval time, but time is also the decoherence of that tick interval over action time and an action of those tick intervals recorded by the hands of a clock. Time as a primal axiom is not really like any single thing and the definition of time is only in terms of the other two primal axioms: matter and action. The axiom of time therefore includes both a matter moment such as a tick interval and an action that accumulates those ticks such as on the hands or display of a clock.
We can describe the action of a tick interval as a moment of matter that decoheres as an action time, which is the amount of matter that defines the interval of a tick along with a decoherence rate. For an hourglass, a matter moment would quite naturally be the mass of a grain of sand. For a ticking clock, it would be the matter equivalent energy of the balance wheel resonance of the clock's mechanism. Thus an increment of matter defines a metric for a moment or interval time and it is the integration of those matter moments that becomes action time. The second is our fundamental unit of time and is formally set as 9,192,631,770 (about nine billion) cycles of the cesium 133 atom hyperfine resonance. There are then 86,400 seconds in every solar day and each tick of the atomic clock then also represents a very small matter equivalent energy of 1.1e-41 kg as a matter moment. The accumulation of these tiny moments over one year amounts to the action of about three hundred hydrogen atoms. The matter spectrometer that we call consciousness samples reality as matter spectra of single moments that we call the present. We remember matter spectra of present moments that we call the past and use those memories to predict the many possible actions that we call the future. The prediction of many possible futures is a dynamic notion of time called A time while the memory of present moments is a static notion called B time (the B theory of time). Our notion of static time makes it seem like the future is also frozen into moments that just wait to be played like a movie already recorded in Kronos time. This is the karma or fate of a determinate universe. Our notion of dynamic time, however, makes it seem like there exist an infinity of infinitely divisible present moments from which emerges an infinity of possible futures.
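As a sanity check on the figures in this paragraph: the quoted 1.1e-41 kg per tick matches ħν/c² for the cesium hyperfine frequency, whereas the conventional photon mass-equivalent hν/c² would give about 6.8e-41 kg. Which convention the author intends is an assumption here:

```python
import math

H_PL = 6.62607015e-34        # Planck constant, J*s
HBAR = H_PL / (2 * math.pi)  # reduced Planck constant, J*s
C    = 2.99792458e8          # speed of light, m/s
F_CS = 9_192_631_770         # cesium-133 hyperfine frequency, Hz

m_hbar = HBAR * F_CS / C**2  # ~1.08e-41 kg: matches the quoted 1.1e-41 kg
m_h    = H_PL * F_CS / C**2  # ~6.8e-41 kg: the conventional h*nu/c^2
```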
The universe of discrete aether has two dimensions of time that emerge from memories of past actions along with the emergence from the present moment of a large but finite number of possible futures. A two dimensional A-B time emerges from the discrete action of discrete aether as just a possibility from each moment. An A-B time avoids the knife edge of a present moment that is squeezed by the A time past and future, and A-B also avoids the messy infinity of B-time moments. Time is therefore not just a moment of matter, as A time, and time is not just an integration of matter moments, as B time; A-B time is really both matter moments and their action. The matter moment defines an interval and a relationship among the actions that we remember as past experience. This interval might be the discrete ticks of a clock, the discrete sand grains of an hourglass, the discrete pulses of an atomic clock, the passage of discrete days, or the discrete neural recursion of human thought. The discrete memory of action can be in the positions of clock hands, the sand in the hourglass, the count of an atomic clock, the calendar of days, in the memory that we have of events, or in the possible futures that we imagine. But time itself is inextricably both discrete matter moments and the integration of those moments as discrete memories of discrete actions. An hourglass keeps time with the passage of grains of sand as hourglass ticks as well as with an accumulation of those grains as the action of the lower hourglass along with a loss of grains in the upper hourglass. Time is neither action alone nor matter alone, but time has the two dimensions of both action and matter just as an hourglass is the relationship between an amount of sand and the matter of a single grain of sand. Likewise any definition of time necessarily includes the two dimensions of both matter and action.
The sand in the hourglass bottom is a memory of the integrated gain or loss, the sand in the top is one of many possible futures, and each grain of sand through the neck defines the matter moment of that clock, its tick. Neural time is how we tell the difference between the sources that we remember as our past and the actions that we imagine as possible futures. Our memories of a past action exist as matter in our minds as does how we imagine the future, and the neural packets of consciousness differentiate those memories of action from the actions that we imagine for our possible futures. Just like time, we are conscious of both the matter of our memory and the neural packets of our thought. Once anchored, a time-like consciousness is why we are self-aware and why we believe that we exist with a purpose. The matter of our memory is the action of our past while the action of our neural recursion defines the matter of our neural moment. • Recursion of time: Because we see other people act just like we act, we believe we are conscious, and since we are conscious, we imagine and choose desirable futures by acting just like other people act. The definition of time as a series of moments from past to present to future is quite natural and intuitive. However, similar to the definition of space as a mostly empty void with only the volume of an occasional source, time might then also be mostly a timeless void except for occasional moments. But timeless, arbitrary eternities do not emerge to separate time moments and it is therefore curious that space emerges as an infinitely divisible empty void to separate sources. In contrast to an empty space with occasional sources, moments of time are what connect actions to each other with a common matter moment which gives each moment a composite of past and present as well as possible futures.
However, defining time as only a series of time moments generates paradoxes and the philosophy of time has a long history of discourse about exactly what a moment of time means. Is time a forward stream of events with only a present moment, the dynamic A time, or is time a patchwork of separate moments, the static B time? Is there a future action as a moment waiting for us to arrive, the karma or fate of a B time movie, or are there many possible future actions and we choose the futures we like from the moment that we are in, the quantum free will of an A time live play? Although the script of an A time play is determinate, the execution of a live A time play has many possible futures. In aethertime, time is a primal axiom and time is not like any single thing except the other two axioms of matter and action. Time is not just the action of a moment in a live A time play nor is time just a series of frozen moments in a B time movie; rather time is like a series of matter moments within an action. Time is both a moment and an accumulation or loss of those moments and we project moments and remember sources and actions much like the stop action of the freeze frame of a video camera. But unlike a movie, what we play back in our mind is a highly selective and relational memory of an event that also incorporates the fading memory of a lifetime of related experiences into the action of thought. We tie every moment of time to a large number of related memories and possible futures and our experience of time is as much in those related but fading memories and possible futures as it is in the immediate sensations-feeling-action recursion of thought. We play back memories not as a DVD but with a selective focus on making predictions and choosing actions that help us survive and achieve our purpose. The neural recursion of sensation-feeling-action in our minds generates neural packets that become the matter of our memory of an event.
Memories of related experiences are an active part of neural recursion, and so we relate the immediate neural recursion of the present to many past remembered events and form a relational memory of that new experience. With the power of our mind and memory, we project reality as a series of static moments and interpolate when our sensations cannot resolve an action or there is missing information. Time moments are just a projection of our decohering memory of events, and so moments are what we think of as time, but time is more than just the memory of moments and prediction of possible futures. Time is actually both the memory of moments as action and the decoherence of those memories as a matter moment that is the tick of our consciousness clock. The function of consciousness is time-like, as is the space around us, but it is quite difficult to think about our homuncular recursion of time. The homunculus is a little person inside of our minds that is looking at what we are looking at, and so the homunculus is simply a restatement of the fundamental recursion of consciousness. A homunculus, though, also has a homunculus that is also looking at the same thing, and so on. This neural recursion represents the feedback of our brain and is a basic property of thought. When we look at our own homunculus, though, we engage in an eternal recursion of sensation-feeling-action, since we look at our homunculus, the homunculus looks at itself and its homunculus, and so on. If our homuncular recursion does not converge, just like any neural recursion in our brain that does not converge, the homuncular notion of self will make no sense and we simply will not understand and therefore will not learn the homuncular recursion of self as a truth. We will only recognize our self as different from the world if the homuncular recursion makes sense.
We project experience from neural action and memory into a series of moments that we naturally interpret as time, but time is more than just memories. This natural view of time as memories of experience is one where we can overlook the many conundrums and paradoxes of that projection as long as we can adequately predict future action. Prediction of action is, after all, what is really important and a key to our survival and the discovery of the meaning of our lives. Evolution therefore favors any mental devices that permit us to better predict action and therefore imagine the many possible futures. Consciousness correspondingly overlooks a large number of illusions and mistakes in perception of sources as long as consciousness achieves the primary goals of survival and purpose. All sources in the universe are a certain time distance or delay away from their observers and every source relates to the many possible futures of all other sources. Although we remember sources from our past and imagine many of those same sources in our possible futures, we can only ever journey to a future source. Sources in our past are only memories and there is no action that journeys to a memory. We can imagine a Cartesian journey that returns to a source that we visited in the past, but such a return will not be to the same source nor along the same path. A journey to return to a source in our past is a future action with a different path to an evolved and therefore different source. All time paths journey to future sources but there is no time path to a past memory. It is rather the projection of a Cartesian return to a spatial source that misleads us to imagine that we might return to a past time. Although we can imagine that Cartesian space does not evolve and change over time, the reality is that space does continually evolve and change. 
Any space to which we return at some later time, t, on the surface of our planet is a much different space than when our journey began at time zero. Even if we somehow remained fixed in a place within the cosmic microwave background, our most absolute cosmic reference frame, the very nature of space still evolves because of the universal decoherence of matter. The Cartesian separation is really a time separation, and what we imagine as the lonely nothing of empty space is simply a time-like projection of our minds. Similar to the hands and ticks of a clock, a walk through a park is a journey in time that involves exchange of matter, and each exchange of a matter particle provides a tick of the integrated action that separates observer from source. Complementary with time, matter and action are what make up the universe; matter and action bond sources together, and matter and action are also what we project as the lonely nothing of empty space. Matter and action along with time are the three irreducible properties (or axioms or qualia) of our universe, and matter and its action in time are what make up the universe. Time is not like anything and matter is also not like anything, and definitions of matter are circular unless they incorporate the other two primal axioms. If we say that matter is a substance, for example, a substance is just another word for matter. Matter is an axiom, and therefore only the differential of action and time defines matter. Action completes the trimal of matter, time, and action that together describes our world as a matter pulse in time. An action necessarily involves integration of matter over time, and even when we imagine sources as stationary and not moving, those sources still evolve and still change. Just like any source, though, a thought represents a highly relational neural spectrum within the set of 100 billion neurons in our mind, and so our thoughts are also sources that are co-moving and evolving through space.
Therefore, the trimal of matter, time, and action completely describes the evolution of our reality along with the evolution of our universe. Just as time inexorably advances in matter, matter likewise inexorably decoheres over time, and matter's decoherence complements an increase in the atomic clock tick rate. Although there are many sources that are co-moving with us that we call stationary, no source is ever really static and unchanging. Change and evolution are a part of all existence and part of the nature of the universe. Even though we can imagine an unchanging and static source remaining perfectly still on the surface of earth, that source is nevertheless comoving with the earth's surface, rotating about earth's axis, in orbit about the sun and galaxy center, and moving through the universe. And the source's matter exchanges and decoheres, and that evolution occurs along with the ever-increasing forces that hold sources together. Most words that we use to define time are in fact just synonyms for time, and defining a word with a synonym is circular or recursive. For example, among the fourteen definitions of time in Merriam-Webster are these two:
      a. the measured or measurable period during which an action, process, or condition exists or continues: duration.
      b. a non-spatial continuum of events that succeed one another from past through present to future.
Although it is often useful to define a word with its synonyms, more typically a definition is a short description or story about what the source is like. However, primal axioms are not like anything other than combinations of the other primal axioms. The primal definition is that time is the differential of action with matter, matter is the differential of action with time, and action is the integration or product of matter in time.
As with any integration, action necessarily has a constant or offset, and that simply means that action can be either bound to a rest frame or free in a moving frame. The Merriam-Webster definitions of time incorporate actions, but only implicitly include matter, and a definition of time as an axiom must have both action and matter. The words period or duration or continuum or progression of events or the past through present to future are all pretty much synonyms for time, and so definitions with these words are equivalent to defining time as time, which is an identity. The accumulation of those actions as matter is an implicit part of these definitions. When we say that time is a sequence or series of matter actions, then unlike backing up in space, we cannot back up or go back in time and travel to the past. When we define time as both matter and action, then it is clear that we can only ever choose a future and never a past action. It is typical to describe a word such as time by the sources or ideas that it resembles, and here time is action divided by matter: an integration of matter, the action, divided by the tick, the matter moment. To complete a definition of time as a series of moments, we need the action of consciousness. The neural recursion of sensation-feeling-action is a homuncular recursion that is the action of consciousness. Independent of any mind, time is the action of sources along with the accumulation of those actions as matter, a duration. Every action of time is tied to a large number of related moments, just as each tick of a clock is related to the accumulation of those ticks in the action of the hands of the clock or in its display. The natural moment of earth is in the length of the day and the action of a year, properties that are tied to the solar system. The natural tick of matter is the frequency of the atomic clock, some nine billion cycles per second.
The natural decoherence of that tick, though, is with classical universal decoherence of 0.26 ppb/yr, and so that means the atomic clock gains about one second every 124 years. The neural recursion of our brain runs at about the rate of our heartbeat, 1.6 Hz, and our lives decohere at about 1.3%/yr for an 80 year lifetime. We naturally project the future and past into opposing Cartesian dimensions, and this is where our projection of Cartesian space misleads us about time. Matter time shows that we project a three dimensional Cartesian space from time and not the other way around. We project journeys in opposite directions in each of three Cartesian dimensions as forward and backward, up and down, and left and right. However, our journeys in Cartesian space are first of all actions that involve the exchange of matter over time, and the integral of that change is the action that separates us from other sources. So a journey from one source to another is an evolution of our matter spectrum, and our relations and interactions with other sources are what separate us from those sources. The empty space that we imagine separates sources is just a projection of time as action divided by matter. Journeys on the surface of the earth have a beginning and duration, and when we return to a journey's starting place on earth's surface, we naturally imagine that we might go back in time to a past memory of that source and of that place as well. However, when we return to the beginning of a journey, the earth and its surface are actually at very different places about its axis, about the sun, about the galaxy, and through the universe. It is much simpler for us to project that we have returned to the same relative place in a comoving space, but that is clearly not really so. The place to which we return is both a different space as well as a future time with evolved sources.
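The clock-gain figure quoted here can be checked with a line of arithmetic. A minimal sketch in Python, taking the 0.26 ppb/yr rate from the text and the standard Julian-year length of 365.25 days; the result lands close to the rounded ~124-year figure:

```python
# Hypothetical check of the stated claim: a clock whose rate grows by
# 0.26 parts per billion per year accumulates roughly one extra second
# every ~120-125 years.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # Julian year: 31,557,600 s
RATE_CHANGE_PER_YEAR = 0.26e-9          # 0.26 ppb/yr, from the text

gain_per_year = RATE_CHANGE_PER_YEAR * SECONDS_PER_YEAR  # seconds gained/yr
years_per_extra_second = 1.0 / gain_per_year

print(f"gain per year: {gain_per_year * 1e3:.2f} ms")
print(f"years to gain one second: {years_per_extra_second:.0f}")
```

With these inputs the gain is about 8 ms per year, that is, one second in roughly 122 years, consistent with the text's rounded figure.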
In astronomy and cosmology, a light year is the distance that light journeys in a year and is a very common measure of distance in the cosmos. In fact, all distance is equivalent to time because the speed of light does not depend on the relative velocity between two sources. The ticks of an atomic clock are therefore very precise and provide very accurate measures of spatial distance even for quite small distances. In fact, with the much lower velocities of human experience, it is quite common for people to describe distance as the time that a journey takes, like a twenty-minute commute to work or a ten-minute drive to the store. Einstein described time as a fourth spatial dimension in order to explain an odd characteristic of light in space. Einstein showed why the velocity of light for a stationary observer does not depend on the velocity of the source of that light. In fact, Lorentz first derived the equation that showed the contraction of space by time that Einstein used in relativity. It was Michelson and Morley who first measured a constant speed of light that was independent of relative velocity, and the Lorentz contraction of space was consistent with this observation. Einstein used Lorentz's projection and then added time as a fourth dimension to our three dimensional space, thereby deriving a four-dimensional space time that has had many far-reaching consequences. However, to explain the constant velocity of light, we could instead presume that light is in some sense stationary and that it is us and our comoving sources that are in motion at the speed of light. In such a reinterpretation of reality, distance and separation would be necessarily time-like, and matter exchange would describe all relations among sources. Time would not be just one of four spatial dimensions as it is in general relativity; time would instead describe all distance, and the action of matter would be what we call Cartesian space and would provide Lorentz invariance.
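The equivalence of distance and time rests on the defined speed of light. A small sketch in Python makes the conversion explicit; the light-year value follows directly from the definitions, while the Earth-Moon distance is an added illustrative number, not from the text:

```python
# Distance as light travel time: one light year in meters, and a
# familiar solar-system distance expressed in seconds of light travel.
C = 299_792_458.0                     # speed of light, m/s (defined value)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

light_year_m = C * SECONDS_PER_YEAR   # roughly 9.46e15 m
earth_moon_m = 384_400_000.0          # mean Earth-Moon distance (illustrative)

print(f"one light year: {light_year_m:.3e} m")
print(f"Earth-Moon light time: {earth_moon_m / C:.2f} s")
```

The same arithmetic underlies the everyday habit the text mentions of quoting a distance as a travel time; with light, the conversion factor is simply a universal constant.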
The projection of a three-dimensional Cartesian space is a very useful device of our imagination, but space would not therefore be necessary to predict action. We both remember and imagine time by counting and recording actions such as heartbeats or footsteps, which are both about one per second, or we count the ticks of a clock at about two per second.  The sand grains from an hourglass fall at several per second and if we count the resonances of an atomic clock, they number about nine billion per second. By counting and remembering past heartbeats we imagine that our future heartbeats will add to a past count, but we know that we decohere at 1.3%/yr on average. Thus a clock as time always includes two fundamental qualia for definition: a matter moment, such as a tick, and the accumulation or loss of those moments as action, such as the hands of a clock face. In a similar manner as a clock, a calendar counts, records, and anticipates the number of days, weeks, months, and years of our lives. Time is a reflection of consciousness and provides order for our past as well as order for the possibilities of our future. We periodically adjust our clocks in order to keep them aligned to the natural cycle of the solar year, but we naturally presume that the tick rate of our atomic clock is otherwise constant. In matter time, our clocks tick 0.26 ppb/yr faster each year. We further interpret the past relics of ancient civilizations and the fossils of past earth as the cycles of eons and epochs of that same constant of atomic time. Our memory or record of past heartbeats and our imagining of the possibility of future heartbeats is what we call time and time is therefore a dimension that we think of in the same way that we think of space. In reality, we should really think of space as a manifestation of time and not the other way around. 
Time differentiates a memory of a past action, which is simply a fossil record within the matter of our brain, from the imagining of a possible future action, which is an action of neural impulses. Therefore time is the progression or sequence of action in our lives and so just as space is time-like, our consciousness is also time-like. Our memory of the count of moments is how we keep track of time and we project our past and future actions as a calendar of events. In all past action, time was equivalent to a distance between sources as well and when we imagine possible future actions, we also imagine a calendar for future actions in a way very similar to the past. This time order very effectively allows us to remember the past as a progression of events and to imagine and predict future events by projecting that memory of the past. Our science has long known that the speed of light is constant in all frames of reference and this has always been difficult to understand and communicate. If a light source moves, doesn’t the light from that source also then move? In fact, the light from either a traveler or a twin with different relative velocities has the same velocity even though each person’s light will appear as a bluer or redder color depending on whether they are moving closer or apart, respectively. Einstein resolved these various conundrums associated with the speed of light by imagining time as a fourth spatial dimension. According to Einstein, in order for the speed of light to remain constant in all frames of reference, atomic time necessarily varies between two people with different relative velocities. As a result, he also showed that the relative velocity between two people distorts or curves the Cartesian space between them and both of these predictions have been repeatedly verified with observations. But there is another way to resolve the conundrum of constant light speed. 
In matter time, space essentially shrinks at the speed of light, and it is this shrinking of space that determines all force and also makes it appear that the speed of light is constant in a comoving frame of reference. Ironically, the traveler's motion decreases the shrinkage of space ahead and increases the shrinkage of space behind the traveler. This means that the traveler cannot detect a change in light's velocity although the frequency of the light does change. Einstein did not talk about any relation between time and memory and imagination, and he also did not discuss what would happen if space were not axiomatic in our reality. What if space were a projection of time and matter and not axiomatic? The past is not only in our biological memory; the past is also in the fossil memories of past action. There is no action that rewinds reality, and so we cannot go back in time because, until the end of the universe, there is no action without a reaction. Action can only create new memories and new fossils for the future. Nevertheless, we can imagine actions that rewind time because of our Cartesian projection. Poincaré, in fact, proposed that any system of particles in seemingly chaotic motion will still cycle back to the same initial state with some probability, i.e., all systems show a possible reversal in time. However, Poincaré's proposition assumes that the particles and their space do not evolve or change with time. In matter time, space and matter both evolve in time, and that means that Poincaré's hypothesis is based on different axioms from matter time. Time is just one of the three primal axioms of matter, time, and action, and time is simply the quotient of matter by action, the clock count divided by its tick action. Time differentiates memory from imagination, and while we remember a past time as a count of actions, we imagine a future time as a distance in space, a collection of matter ticks on an aether clock.
Thus, our memory of past time is just the marker of matter, while we imagine a future time that has both matter and action. Since we project Cartesian paths with opposing directions as we journey forward and backward, up and down, left and right, it is quite natural to project time with opposing directions as well. We organize time from the present as a past into a future, but we actually project space from time and not the other way around. We project our memory of past sources and actions into a path in space, typically a straight line, and we project a future action into the opposite direction in space from our past. Any future action between sources involves a change in the time distance with action, and so there is no sense to a journey to a memory. A journey always involves actions that are positive time distances in a chosen direction, and a future action is only one of many possible actions, and those possibilities are always in our future and never in our past. Once we experience the single reality of what a source did become, it is that single reality that we remember as a past moment, and the many other possible futures for a source simply decohere away. Time as a dimension is then simply the distance between sources, and time is an accumulation of matter divided by the aether action metric. As our heart beats, the distance between heartbeats defines both a time and what we project as space. Even though we may stand perfectly still on the surface of the earth, the earth rotates about its axis, about the sun, around our galaxy, and within the universe. All action on earth defines time as distance and the loss or gain of matter with action. The events of our past are simply memories or relics or fossils of what did occur even as we imagine the possibilities of what might occur in a possible future action.
From the actions of sources at various time delays, we project a three dimensional universe with sources on continuous time trajectories, and we predict action both very precisely and very accurately with continuous space and time. We have an innate notion of a continuous void of empty space; Euclid defined the first geometric axioms some 2,300 years ago in ancient Greece, and those same axioms are a fixture of our science and engineering even today. Euclid's right angle is still the cornerstone of our Cartesian reality even though Cartesian space loses meaning for sources at very small and very large scales. The more primitive dimensions of discrete matter, time delay, and action as matter exchange have meaning for all sources in the universe. The primitive reality of matter time augments our understanding of reality for sources that exist in the realities of frozen space and time. We know about a source in either of two complementary ways. A very common and intuitive understanding of physical reality projects a source on an event path relatively unperturbed by other forces, which is a straight line in Cartesian space or a parabolic trajectory on earth. However, we actually sense or perceive a source by what it might become, i.e., by sensing some of its many possibilities, and not by what that source actually is. Our sensations represent just a very small number of a source's possible futures, and the totality of those possibilities is a complementary representation of that source. Yet even with the very small number of possible futures that we actually sense, we imagine quite a large number of possible futures, even those that do not actually make sense. By seeing, hearing, smelling, tasting, and/or touching, we sense a source as a very large number of possible futures as opposed to what the source actually is. That source might not move or it might be moving, it might change color, or it might even disappear or suddenly change its form.
We imagine the reality of a source on the basis of a rather limited number of our sensations of the source’s possibilities, but we relate that source to similar sources from a lifetime of experience with similar sources. Because of our past experience, we do not normally need to sense very many possible futures for a source in order to accurately predict that source’s actual future, but we can be and often are fooled by our sensations. There are in fact many illusions that fool us just as there are also very many unlikely futures for a source that surprise us as well. So our imagining of a source on an event trajectory represents a convenient and succinct way for us to reliably predict that source’s future in our universe. The rigid Cartesian reality that we project in our minds can make it very difficult to understand time since space is a projection of time. When we project a source onto an event trajectory, we also project the context of a Cartesian space as having a forward and a backward and therefore an opposing dimension. Quite naturally we project time backwards as a spatial displacement into the past, but we first projected a Cartesian displacement from time as a useful prediction of action. Action, after all, only ever moves us closer to or further from other sources. We can actually never return to the place where we began a journey because that place no longer exists in the universe. We could imagine getting on a spacecraft and reaching a relative velocity that would maintain a place in a universal or proper space despite the rotation of earth about its axis and about the sun and about the galaxy and through the universe. However, the universe itself is shrinking in size and matter and so the universe would change even if we somehow remained in one place in space. We are so accustomed to return journeys on the surface of earth that we do not realize that every action that we take on earth involves an opposite reaction by the earth. 
When we jump up from earth, she falls down away from us. When we step in one direction, mother earth backsteps in the opposite direction of our stride. Our forward is her backward and our backward is her forward. The actions of our footsteps and of our heartbeats represent not only the duration or time of a past journey, but actions also represent a Cartesian distance for that past journey. Each footstep is an action for us as an observer on an event trajectory that was a part of a past journey. The memory of footsteps as an accumulation of space allows us to imagine a future journey among a large number of possible journeys given a variation of our future footsteps. During a walk or run, we can turn around and change direction or we can speed up or slow down to avoid obstacles, all without any concern for the effect our stride has on earth's rotation about its axis or earth's orbit about the sun or earth's place in the galaxy or indeed earth's place in the universe. And yet all of our choices during a walk do affect the earth's rotation as well as earth's orbit about the sun as well as the sun's path through the galaxy, not to mention our galaxy's journey in the universe. Although the impact of our stride on the earth is quite small, we can think of changes in time instead of changes in distance. When we look up in the sky at night, we see only the fossil light of the past. The distance of a source that we see is the time it takes for its light to reach us, and so the speed of light as a constant defines all distance as time. Every meter that light travels is about three billionths of a second or three nanoseconds, one nanosecond per foot of light travel. That constant speed of light associates an interval time with a distance and is as if everyone walked with the same speed or had exactly the same heartbeat.
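The nanoseconds-per-distance rule of thumb above checks out numerically; a quick sketch in Python:

```python
# Light travel time over everyday distances: roughly 3.3 ns per meter
# and about 1 ns per foot, matching the rule of thumb in the text.
C = 299_792_458.0               # speed of light, m/s
FOOT_M = 0.3048                 # one international foot in meters

ns_per_meter = 1e9 / C          # ~3.34 ns
ns_per_foot = FOOT_M * 1e9 / C  # ~1.02 ns

print(f"{ns_per_meter:.2f} ns per meter, {ns_per_foot:.2f} ns per foot")
```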
A second time dimension, an interval time, represents the perpendicular distance between a source and a reference direction, and along with the rotation or phase of the source around that reference direction projects that source into our Cartesian space. Thus, these two dimensions of time and one dimension of phase provide an equivalent representation of our Cartesian space with a time map instead of a Cartesian map. Our earth frame of reference usually provides us with a reference direction along with many other sources as landmarks. Since the perpendicular distance between a source and a reference direction is always positive, it is the second time dimension, interval time, along with the phase or rotation of a source about the reference direction that determines a source's direction. Cartesian space is, then, just a convenient projection of a two-dimensional time universe with phase. We imagine that there are two opposing directions for each of three Cartesian dimensions when in fact Cartesian space is just a projection of matter, time, and phase. The right angle or 90° of Euclidean geometry is equivalent to the π/2 phase angle between time and matter. The uncertainty principle in quantum mechanics involves a phase relationship between matter and time that is a complex number, -i, which derives from the same 90° phase angle that is the right angle of Euclidean space. In matter time, Euclidean geometry reduces to a basic action equation of our quantum universe, the Schrödinger equation. A time map of a source involves two time dimensions that we project into a Cartesian plane. There is an event time as the distance to a source, an interval time as the separation of that source from a reference direction, and a phase or rotation of that source about that reference direction. We project a three dimensional Cartesian reality from two dimensions of time, event and interval time distances, and one dimension of phase or angle about a reference direction.
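One way to make the time-map idea concrete is to convert the two time distances plus a phase back into Cartesian coordinates. The sketch below is a hypothetical reading, not the author's stated formula: it assumes the event time gives the total light-time distance to the source, the interval time gives the perpendicular light-time offset from the reference direction, and the phase rotates the source about that direction.

```python
import math

# Hypothetical projection: (event time, interval time, phase) -> (x, y, z).
# Assumptions (mine, not the text's): event time = total light-time
# distance, interval time = perpendicular light-time offset, phase =
# rotation about the reference direction, taken here as the x axis.
C = 299_792_458.0  # speed of light, m/s

def time_map_to_cartesian(t_event, t_interval, phase):
    d = C * t_event                         # total distance to the source
    p = C * t_interval                      # perpendicular distance
    a = math.sqrt(max(d * d - p * p, 0.0))  # component along reference axis
    return (a, p * math.cos(phase), p * math.sin(phase))

# A source 5 ns of light travel away, offset 3 ns from the reference direction:
x, y, z = time_map_to_cartesian(5e-9, 3e-9, 0.0)
```

Under these assumptions the total Cartesian distance sqrt(x² + y² + z²) recovers c times the event time, so the two time dimensions and the phase carry the same information as the three Cartesian coordinates.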
Now what exactly does it mean to have two dimensions of time? Very simply put, there is an action time and an interval time: action time is the particular time associated with a universe action, and interval time is an orthogonal atomic time associated with the co-moving source frame of reference. Matter time's reinterpretation of reality with two time dimensions and a phase is very different from Einstein's approach, which begins with Cartesian space and then projects time as a fourth spatial dimension. Einstein imagined a four-dimensional reality called space time with only a single time dimension along with our three Cartesian dimensions. The recursion of time in relativity results in a great deal of complex mathematics called tensor algebra. Since Cartesian space is a projection from time, it is distorted by time, and adding time as a fourth dimension mixes time back with itself in a recursion that forms the basis of space time. Matter time has instead just three dimensions, two of time and one of phase. Although the dilations of time and therefore of space between sources traveling at different relative velocity are identical between matter time and space time science, the existence of two time dimensions in matter time complements the two dimensions for all matter as well. Given a common phase between time and matter, matter and time exist in a kind of holographic reality that defines our universe as a complex Fourier transform between the universe as a pulse of matter in time and the universe as a spectrum of matter amplitudes. In space time, the speed of light is constant and relative velocity distorts time and space between a traveler in motion with respect to a stationary twin. In matter time, light is in a sense stationary and it is actually sources of matter that move away from a light source at the speed of matter.
Sources never move faster than the speed of light because motion in one direction slows the matter collapse along that great circle of the universe. In effect, our co-moving velocity is at the speed of light in all directions, and once matter has completely slowed down, it then becomes light. The collapse of matter in time is a constant of matter time called mdot that determines both gravity and charge force. Along with two other constants and the Schrödinger equation, mdot determines all forces and action for the matter-time universe and amounts to a decoherence of 0.26 ppb/yr, or a gain of about one second every 124 years for the 9-billion-ticks-per-second atomic clock. When the accretion of matter is greater than some amount, Einstein's four-dimensional space collapses into a black hole singularity, which is a very unusual but well accepted characteristic of space time. A black hole represents a singularity of space time where no light can escape, time literally stands still at its surface, and inside of the black hole, the laws of our physical universe no longer apply. It is very clear that astronomers have observed the effects of a number of very large matter accretions that center most galaxies, often termed supermassive black holes. Science has more difficulty observing the much more subtle effects of smaller black holes that should form from the collapse of a class of stars known as supergiants. Moreover, the progression of a collapsed star known as a neutron star into the more massive black holes is very uncertain because of the role of angular momentum. It would appear that heavy, slowly rotating neutron stars might behave like black holes and that lighter, rapidly spinning black holes might behave like neutron stars. Space time physics, then, is still an incomplete story for our universe, and we await an improved story that includes the unification of gravity and charge forces.
Such a story will likely not only close a chapter in our understanding of time and matter, it will open new disciplines for study. In the prevailing paradigm of space time, space exists as an empty void that separates sources. The past memories and future imaginings in our minds represent sources that we separate by space, so consciousness is part of the source that is our mind. Our consciousness would then seem to exist as a source in time, and we could then imagine a disembodied, timeless mind. Consciousness would then be a convenient projection of space time, and the same projection of consciousness would differentiate our memory of the past from our imagination of possible futures. In matter time, the past memories and present thoughts in our minds are not just sources of matter; they are time-like. Memories are matter sources of action embedded in our brains, but the neural recursion of sensation-feeling-action is action-like. Our consciousness is therefore not just a matter source or an action; rather, consciousness is the time-like differential of action with matter. Just like two-dimensional time, consciousness would then also have two dimensions: event consciousness and action consciousness. Just as we have difficulty defining time and space, and for the same reason, we also have trouble defining the two dimensions of consciousness. Time and time-like concepts all share the characteristic that they are axiomatic and not really like anything except combinations of other axioms. However, we do not have similar difficulty defining the axioms of matter and action. Matter is the static substance of all sources and so comprises the air, water, stone, soil, and fire of our alchemy. So all sources are like matter, but matter itself is an axiom and is only explicable as the product of action and time. We can easily imagine matter, or we can just as easily imagine the empty void of space as not matter, as nothing.
Action is the evolution of a source over time, and so action is a very familiar and intuitive dynamic concept, just as matter is a static concept. We can easily imagine either action or its absence as a co-moving source that we think of as immobile or stationary. However, when we imagine time, it is very difficult to imagine a complement to time as timelessness. What is timelessness like? The contrapositives of matter and action are straightforward: the opposite of matter is empty space and the opposite of action is inaction, and it is only with these contrapositives that we can define timelessness. Timelessness is then the inaction of empty space, a definition of eternity, while time is the action of empty space, and once again we find empty space linked to the contrapositive of time. In fact, what we imagine separates sources is the aether, and we project the action of the aether as the empty void of space. We do experience timelessness during sleep, for example, or during other unconscious states. There is a rich language associated with timelessness: eternal, immortal, perpetual, everlasting, and so on. In fact, many of our religious traditions are embedded in the semantics of a timeless and perpetual eternity that addresses various transcendental questions. We know that we are conscious because the sources and actions that we remember from our past are different from the sources and actions that we imagine in our future. That is time. The timeless nature of our dreams mixes memories and imaginings, and in a final dream, the neural impulses of our conscious mind become progressively slower, thereby stretching time out. In effect, the timeless nature of a final dream represents the eternity of a final and fading conscious thought. Our final dream ends either in a point of ecstasy for a life fulfilled or in the circle of despair for a life unfulfilled.
All sources in the universe lie in our possible futures at a certain time distance away from us. Although we remember sources from our past, there is no journey that will take us to those past sources; time only projects sources into our possible futures. One very odd thing about time is embodied in the principle of relativity, which is that atomic clocks tick more slowly as they travel away from or towards a stationary twin clock. If a traveler accelerates to 0.8 c on a journey away from a stationary twin, then in five years according to the stationary twin's clock, the traveler will journey four light years away from the twin. However, during that journey the traveler will only have aged three years, and so after the traveler slows down to the twin's inertial frame, it will seem to the traveler that the journey's effective velocity, about 1.33 c (four light years covered in three years of aging), was faster than the speed of light. In this sense, a traveler's ratio of distance covered to aging can exceed the speed of light relative to the stationary frame of reference left behind. Once the traveler slows back to the twin's inertial frame, the traveler has aged only three years while the twin has aged five years. Of course, it will take four years to communicate that information back to the twin and four more years for the twin to acknowledge the communication, so the traveler will only know that this has occurred eight years after reaching the destination. We can and do describe the distance between sources in space by the time it takes light to journey between those sources. Therefore, we are conscious because there is a time distance between all sources for all actions in our universe, including our neural impulses. The sequence of neural impulses in our minds represents a time distance between neurons, and therefore time is a part of our consciousness.
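The twin-paradox arithmetic above follows directly from the Lorentz factor; a short sketch makes the bookkeeping explicit (the 0.8 c speed and five-year duration are the text's figures):

```python
import math

v = 0.8             # traveler's speed as a fraction of c
twin_years = 5.0    # elapsed time in the stationary twin's frame

gamma = 1.0 / math.sqrt(1.0 - v**2)   # Lorentz factor, here 5/3
distance_ly = v * twin_years          # 4 light years in the twin's frame
traveler_years = twin_years / gamma   # proper time aboard: 3 years

# Distance covered per year of the traveler's own aging:
apparent_speed = distance_ly / traveler_years   # 4/3 c, about 1.33 c

print(gamma, distance_ly, traveler_years, apparent_speed)
```

The ratio of distance covered to proper time aboard comes out to 4/3 ≈ 1.33 c, which is the precise sense in which the traveler seems to have outrun light, even though nothing ever moves faster than c in any single frame.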
In space time, a universe of the lonely nothing of empty space is possible even without any matter, but in matter time, there can be no universe without matter. Just as no universe is possible without time, no universe is possible without matter and action, either.
Web Exclusives: TigersRoar Letter Box Fallacy of the Assumption of Statistical Independence Between Successive Indeterminate Events by Thomas V. Gillman '49 As a preface I would like to mention a concept that bears on the relation between physics, as encompassed by science, and metaphysics, the branch of philosophy that treats of the ultimate nature of existence, reality, and experience. Recent research in cosmology indicates that there exists a universal wave function that determines everything in the universe. This unique field gives being to the recognized physical fields: gravitational and electromagnetic forces, the strong force that, among other things, is responsible for the sun shining, and the weak force instrumental in radioactive decay. The further possibility exists that life is a direct expression of the effects of such a "force" field and that the evolutionary properties of living matter provide evidence of the indeterminate nature of the universal wave function. There is a simple experiment that I believe demonstrates the existence and the workings of such a universal wave function. The results, at the very least, represent an instance of the indeterminate nature of the probability aspect of wave mechanics at the macro level. A Heuristic Experiment The procedure is simply a matter of flipping a coin and recording the result of each toss, head or tail, as well as the sequence of the results, for an arbitrarily large number of tosses, something on the order of 100 tosses. No attempt is made to maintain a uniform time interval between tosses, since time apparently does not enter in. The expectation is that in the long term the number of resulting heads and tails will be approximately equal, since the likelihood of a head or tail is 1/2. According to probability theory, each toss of the coin is an independent event; therefore, there is not supposed to be any relation between successive tosses of the coin.
The probability of any particular sequence of heads or tails is therefore the product of their "independent" probabilities. For example, the probability of tossing seven heads in succession would be (1/2)^7, or 1/128, a fairly unlikely sequence of events but well within the range of expectation. On the other hand, it is common knowledge that in many games of chance players often experience "runs of luck" in which the outcome temporarily favors them. How are such unlikely courses of events to be explained? In conducting this experiment it is common that over the course of 100 or so tosses a sequence of at least six or more heads or tails will occur. Even longer sequences are interrupted by only one or two inverse events, thus establishing a trend or a "run," as it is often described. Graphic Results If one plots the sequence of tosses on the horizontal axis and the algebraic results of the coin tossing on the vertical axis, with the simple assumption that each toss represents a unit gain or loss of some sort of "potential" from one toss to the next, some fascinating patterns emerge. These correspond to the so-called "runs" of good or bad luck that gamblers experience. A more interesting finding is that these deviations tend to propagate or persist. That is, the number of heads or tails sometimes does not even out for long sequences. [Note that these sequences are time-independent and therefore do not represent periods, but they do seem to indicate a "progression."] A question that comes to mind is whether or not the cumulative "potential" indicated on the graphs provides evidence for the existence of a deterministic element that enters into these results. The gambler will tell you that when he is on a "roll" he is able to "influence" the course of events. Who is to say?
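The claim that a run of six or more is routine in 100 fair tosses is easy to check by simulation. The sketch below (a quick Monte Carlo with a fixed seed; the exact fraction will vary slightly with the seed) requires no appeal to a universal wave function:

```python
import random

def longest_run(seq):
    """Length of the longest run of identical consecutive outcomes."""
    best = cur = 1
    for prev, nxt in zip(seq, seq[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

rng = random.Random(42)
trials = 2_000
hits = sum(
    longest_run([rng.randint(0, 1) for _ in range(100)]) >= 6
    for _ in range(trials)
)
fraction = hits / trials
print(f"fraction of 100-toss sequences with a run of 6+: {fraction:.2f}")
```

For a fair coin the fraction comes out around 0.8, i.e. the "runs" described above are exactly what independent tosses predict, not evidence against independence.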
What is obvious is that if one bets in concert with one of these "potential" swings, these apparent "drifts" of the probability function, one is going to be ahead of the odds for an indeterminate but substantial number of events! Whereas in the previous plot the number of opposing events (heads and tails) is not far from the expected proportion, viz., 50:50, this is not always the case, as shown in the following graph. While there is little basis for conclusion at this stage, the results do lead to speculation. The partially determinate nature of the outcomes of events that have traditionally been treated as indeterminate, random, and independent may point to the possible fluctuations of something in the nature of a general field which "determines" the course of events. Events at the macro level are usually analyzed in terms of cause and effect, but there are many events whose outcomes cannot be predicted but can only be described statistically. Another instance is radioactive decay at the subatomic level. Anyone who has listened to a Geiger counter knows that these events occur randomly, in bursts that exhibit no regularity in time. And there is no way of predicting the decay of a particular radioactive atom in terms of time or location. The best we can do is to establish a so-called half-life, a period during which half of the atoms originally present will have decayed. We have no way of knowing what conditions, if any, "cause" the occurrence of the decay event. The most that we know, at present, is something about the elementary particles of matter that are involved in the process. It has been discovered that, in the vernacular of elementary particle physics, the "weak" interaction causes radioactive decay of nuclear constituents and unstable leptons, and is mediated by the massive W and Z bosons. Whatever precipitates (causes) these weak interactions within a space-time frame is unknown, at least as far as I am aware.
Further speculation Now wouldn't it be interesting if it were found that what we call probability is nothing more than the way in which the occurrence of events is modulated by something in the nature of a general wave? Further, wouldn't it be a kick if the effects of a general field were reflected in the activities of living matter, the primary characteristic of which is purposiveness, or goal-oriented behavior? Suppose that living matter has the power to causally influence the outcome of events. This would help to explain the apparent evolutionary discontinuities that are reflected in the geologic record. This leads to the further possibility that evolution occurs not as a result of environmental change but as a reflection of the implicit capability of living matter to effect change as a way of adapting to changing environmental requirements or opportunities. This is in direct contrast to Darwin's theory of the survival of the fittest, or the occurrence of natural selection among the chance variants or "sports" that are speculated to arise spontaneously. Running parallel to such speculation is the heuristic work of the engineer and behaviorist William Powers. (William T. Powers, Living Control Systems: Selected Papers (Gravel Switch, KY: The Control Systems Group, Inc., 1989).) He shows that control in living systems is subject neither to chance nor to the causal control of outside agencies. Behavior is not a direct response to external stimuli but is under the direct and nonprobabilistic control of feedback mechanisms built into the organism. We find that this cybernetic mechanism is typical of living organisms and, therefore, is a major design aspect implicit in the life functions. Returning to a consideration of the results of the coin-toss experiment, the necessary next experiment would be to look to the identification of something in the nature of a bifurcation that will predict the onset of another probability swing or trend in the course of action.
That appears to be the nature of evolutionary change, indeed of all change. If such change can be controlled, as we attempt to do through planning, then we have evidence for the intervention of a life force (willpower?) in the determination of the outcome of events. The Statistical Postulate of Quantum Mechanics In a discussion of quantum mechanics, physicist Victor J. Stenger (Victor J. Stenger, The Unconscious Quantum (Amherst, NY: Prometheus Books, 1995), pp. 56-60) indicates: "In 1926 Max Born proposed what was to become a primary postulate of quantum mechanics in the von Neumann scheme. According to this postulate, the wave function is used to compute the probability P for a particle to be found in a particular state. This probability was to be proportional to |Ψ|², the square of the magnitude of the wave function .... This postulate was extended by Wolfgang Pauli to include the probability for finding a particle at a particular position. "Pauli proposed that the probability P for finding a particle in an infinitesimal volume element ΔV located in a specific region of space is equal to the square of the magnitude of the wave function computed at that point multiplied by ΔV: P = |Ψ|² ΔV. Since we can measure volume in any units we wish, no loss of generality occurs if we assume a unit volume, ΔV = 1, and simply write P = |Ψ|² and understand it to mean probability per unit volume, that is, probability density." Paraphrasing Pauli's postulate in terms of my conjecture about a universal wave: the probability of an event, that is, conversion of the energy of the universal wave into a material state at a particular place (the result of the toss of a coin is thought of as equivalent to the conversion of energy into a particle), is equal to the square of the magnitude of the wave function. Mathematically, taking the squared magnitude of the quantum "state" converts it from complex to real and nonnegative, and this would be analogous to the occurrence of a real event.
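Pauli's probability-density reading of the wave function can be illustrated numerically. The sketch below uses a hypothetical normalized Gaussian wave function (an illustrative choice, not tied to any physical system in the letter) and checks that integrating |ψ|² over space gives total probability 1:

```python
import math

def psi(x, sigma=1.0):
    """A normalized 1D Gaussian wave function (illustrative example)."""
    return (1.0 / (math.pi * sigma**2)) ** 0.25 * math.exp(-x**2 / (2 * sigma**2))

# P = |psi|^2 * dV (Pauli's postulate, here in one dimension): sum the
# probability density over a fine grid and check normalization.
dx = 0.001
xs = [i * dx for i in range(-10_000, 10_000)]
total_probability = sum(abs(psi(x)) ** 2 * dx for x in xs)

# Probability of finding the particle within one sigma-width of the origin:
prob_interval = sum(abs(psi(x)) ** 2 * dx for x in xs if -1 <= x < 1)

print(f"total probability:    {total_probability:.6f}")
print(f"P(-1 <= x < 1):       {prob_interval:.4f}")
```

The total integrates to 1 as required, while the interval probability is less than 1, which is the sense in which the wave function predicts only the statistics of where an event occurs, never the individual outcome.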
A positive value may correspond to a constructive (energy-binding) event typical of the action of living systems, while a negative value would represent a destructive (entropic) event. Stenger mentions that the role of statistics in quantum mechanics was supported by Einstein's calculation of the probabilities for atomic transitions; however, the uncertain nature of its predictions was one of the aspects that Einstein found unsatisfying about quantum mechanics. Einstein is well known for having said, "God does not play dice." What he was really objecting to was the notion that statistics was the final word. He found it hard to accept that no underlying causal laws determined the behavior of individual quantum particles at the most fundamental level. As Stenger explains, "Actually, statistics enters quantum mechanics only in an indirect way. The time-dependent Schrödinger equation predicts the exact value of the wave function at future times given its value at some initial time. Probability enters with the Born postulate when the time comes to make a prediction on the expected value of some measurement." When such a measurement is attempted, however, we are attempting to calculate a value at some instant in time, when in fact the wave function governed by the Schrödinger equation evolves continuously over time. So we are left with a probability distribution rather than a specific value at a given time t. Thus, quantum mechanics is often said to be "deterministic" in that its basic equation, the Schrödinger equation, precisely determines the time evolution of the wave function. However, it is indeterministic in the sense that knowledge of the wave function is not always sufficient to predict the outcome of a measurement, or of an event. By the probability postulate, the wave function allows for the prediction of the average motion of a system [the probable outcome of a coin toss] but not the outcome in any particular instance, which is what the above experiment demonstrates.
I believe that we are approaching the time when some deterministic theory will evolve that goes beyond quantum mechanics and applies to individual quantum systems as well as to causal events at the macro level. According to the recent theoretical development of Dr. Frank Tipler, we have an all-pervasive physical field which gives being to all being and life to all living things, and which itself is generated by the ultimate life which it defines. Through this "physical" field we humans are apparently capable of superimposing our own wills on the ordinarily indeterminate laws of probability and the chaotic physical laws that prevail throughout the universe. This we do when we exercise our intellect and creativity. This is some of the speculative thinking that can serve as a precursor to further theoretical development, and it comes from reaction to a portion of an article by Billy Goodman '80 in the January 29, 2003, issue of PAW entitled "Thinking about Thinking." Now, what are the odds against that development?
Science — Direct measurements of the wave nature of matter The estimated wave function of electrons in solid nitrogen. The heart of quantum mechanics is the wave-particle duality: matter and light possess both wave-like and particle-like attributes. Typically, the wave-like properties are inferred indirectly from the behavior of many electrons or photons, though it's sometimes possible to study them directly. However, those experiments face a fundamental limitation: some information about the wave properties of matter is inherently inaccessible. And therein lies a loophole: two groups used indirect experiments to reconstruct the wave structure of electrons. A.S. Stodolna and colleagues manipulated hydrogen atoms to measure their electron's wave structure, validating more than 30 years of theoretical work on the phenomenon known as the Stark effect. A second experiment by Daniel Lüftner and collaborators reconstructed the electronic structure of individual organic molecules through repeated scanning, with each step providing a higher resolution. In both cases, the researchers were able to match theoretical predictions to their results, verifying some previously challenging aspects of quantum mechanics. Neither a wave nor a particle description alone can account for all experimental results obtained by physicists. Photons interfere with each other and themselves like waves when they pass through openings in a barrier, yet they show up as individual points of light on a phosphorescent screen. Electrons create orbital patterns inside atoms described by three-dimensional waves, yet they undergo collisions as if they were particles. Certain experiments are able to reconstruct the distribution of electric charge inside materials, which appears very wave-like, yet the atoms look like discrete bodies in those same experiments. Researchers typically deal with this behavior using wave functions.
The wave function is a mathematical description of the external attributes of a particle: its position, momentum, and rotational characteristics. Much of quantum mechanics involves calculating wave functions and their evolution using the Schrödinger equation, named for the same guy famous for the cat thought experiment. The wave function contains two pieces: an absolute piece called the amplitude and a relative component called the phase. When the amplitude is squared, it gives the probability of the outcome of certain measurements, but the phase is not directly accessible. In other words, there's always an aspect of the wave character that cannot be obtained experimentally without resorting to some kind of cleverness. That's a disappointing proposition for those of us interested in direct comparisons between theory and measurement. However, full knowledge of the wave function is important for understanding chemical reactions and material properties on the atomic or molecular scale. Understanding at that level of detail is especially significant for the next generation of materials and molecular design. A Stark contrast Hydrogen is the simplest of atoms, consisting of just one proton and one electron. That means its wave function can be calculated exactly, as it is by innumerable physics and chemistry students at universities every year as a class exercise. Since its electron is charged, when a hydrogen atom is placed in a uniform electric field (such as exists inside a large capacitor), its wave functions change. That change results in different responses to light, which is known as the Stark effect. The wave functions in the Stark effect have a peculiar mathematical property, one which Stodolna and colleagues recreated in the lab. They separated individual hydrogen atoms from hydrogen sulfide (H2S) molecules, then subjected them to a series of laser pulses to induce specific energy transitions inside the atoms.
By measuring the ways the light scattered, the researchers were able to recreate the predicted wave functions—the first time this has been accomplished. The authors also argued that this method, known as photoionization microscopy, could be used to reconstruct wave functions for other atoms and molecules. Since the Stark effect is a general response to external influences, the technique would be very handy for studying atoms' responses to other electric and magnetic fields—essential for understanding the behavior of materials under a wide variety of conditions. Just a phase Lüftner and colleagues took a different approach, examining the wave functions of organic molecules chemically attached (adsorbed) to a silver surface. Specifically, they looked at pentacene (C22H14) and the easy-to-remember compound perylene-3,4,9,10-tetracarboxylic dianhydride (or PTCDA, C24H8O6). Unlike hydrogen's, the wave functions for these molecules cannot be calculated exactly; they usually require "ab initio" computer models. The researchers were particularly interested in finding the phase, that bit of the wave function that can't be measured directly. They determined that they could reconstruct it by using the particular way the molecules bonded to the surface, which enhanced their response to photons of a specific wavelength. The experiment involved taking successive iterative measurements by exciting the molecules using light, then measuring the angles at which the photons were scattered away. Reconstructing the phase of the wave function required exploiting the particular mathematical form it took in this system. Specifically, the waves had a relatively sharp edge, allowing the researchers to make an initial guess and then refine the value as they took successive measurements. Even with this sophisticated process, they were only able to determine the phase to a finite precision—something entirely to be expected from fundamental quantum principles.
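The "initial guess, then refine" loop described above is in the same family as classic phase-retrieval schemes such as Gerchberg–Saxton. The sketch below is a generic textbook version of that idea, not the authors' actual algorithm or data: magnitudes are assumed known in both real space and Fourier space, and the loop alternately enforces each constraint while keeping the current phase estimates.

```python
import cmath
import random

def dft(x):
    """Naive discrete Fourier transform (fine for tiny demo signals)."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    """Naive inverse DFT."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

# A "true" signal whose phase we pretend not to know; only its magnitudes
# in the two domains are treated as measured data.
true_signal = [complex(v) for v in [0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0]]
mag_space = [abs(v) for v in true_signal]        # known real-space magnitudes
mag_freq = [abs(v) for v in dft(true_signal)]    # known Fourier magnitudes

def freq_error(est):
    """Squared error between the estimate's Fourier magnitudes and the data."""
    return sum((abs(v) - m) ** 2 for v, m in zip(dft(est), mag_freq))

rng = random.Random(0)
estimate = [m * cmath.exp(1j * rng.uniform(0, 2 * cmath.pi)) for m in mag_space]
initial_error = freq_error(estimate)

for _ in range(200):
    spectrum = dft(estimate)
    # Impose known Fourier magnitudes, keep current phases.
    spectrum = [m * cmath.exp(1j * cmath.phase(v)) for v, m in zip(spectrum, mag_freq)]
    estimate = idft(spectrum)
    # Impose known real-space magnitudes, keep current phases.
    estimate = [m * cmath.exp(1j * cmath.phase(v)) for v, m in zip(estimate, mag_space)]

final_error = freq_error(estimate)
print(initial_error, final_error)
```

The Fourier-magnitude error is non-increasing from iteration to iteration, the same "refine the guess with each measurement" behavior the article describes, though like the real experiment the recovered phase is only determined up to residual ambiguities.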
However, they were able to experimentally reconstruct the entire wave function of a molecule. There was previously no way to check whether our calculated wave functions were accurate or not. Quantum physics of solace When we discuss quantum physics, the weirdness of the theory is often emphasized. However, quantum mechanics is the basis of most of modern technology, and these experiments highlight how much we actually understand about it. The wave functions generated by these experiments are exact matches to theoretical predictions. The physics works as expected. In both the molecular and hydrogen cases, the method used to reconstruct the wave functions could be applied to other systems. As researchers work to understand chemical reactions and material properties on the molecular and atomic levels, such techniques would be very powerful, perhaps leading to new insights about how to control them. Physical Review Letters, 2013. DOI: 10.1103/PhysRevLett.110.213001 and PNAS, 2013. DOI: 10.1073/pnas.1315716110
Thursday, May 1, 2014 May's Planning and Camp NaNoWriMo's Impressive Win. Here is my plan for the month of May, or rather my absence of a plan. I wanted to do the 6 week challenge again and Read More Or Die. But this time, the 6 week challenge is a special Esperanto challenge to prepare a 5-minute presentation in Esperanto in 6 weeks. And I don't want to do that, so I'll be studying Japanese on my own. Or maybe I do want to do it; I mean, the new things people throw at me always look like candy. So I might think about it a bit further (and discuss it with my "coach" lol). And the Read More Or Die contest happened to be in April and not in May, so I just missed it, which hopefully doesn't mean I won't read next month; I'm just not going to go totally crazy about having my daily 40 pages. Camp NaNoWriMo's Impressive Win. In April, I wrote 100,715 words and finished on the 25th before leaving for Hong Kong. That's impressive not only because of the huge word count but also because I managed to finish Blue Angel and write the complete draft of Dark Druid in less than a month. The main reason why I wanted to go up to 100k words was that I'm not feeling the thrill anymore. There are a lot of people participating in NaNoWriMo who can't make it to the basic 50k, people who choose ridiculously small goals during camp because camp allows word goals as small as, I think, 10k, and they still can't make it. Here are the stats for November. I used to be a nice person. I used to think it was just hard, that people were busy, that people were having writer's block and a life, and well, I'm not a nice person anymore. I too have a life, a full time job, a boyfriend in a different city, a trip to plan and go on at the end of April, a daily photography challenge, blog hops and daily blog posts, and writing and editing and learning Japanese and other stuff when I feel like it, and an Etsy store, and so much more.
I too have feelings about not being good enough, thinking that my novel is a crappy idea, that my draft is awful and not worth finishing. I used to sabotage myself with feelings. This time winning was overwhelming, and when I hit 80k and started to think that I wouldn't feel the thrill this time around either, I thought: "What's the point, if the thrill isn't there?" I wrote 6,000 words just to get something on paper, because looking at the screen and the word count was scaring me. But when you win, nobody is being nice to you and telling you that you'll do better next time. You get a "congrats" and others bitching that you did it again and they didn't. You go through the same problems, and all you have is the win. You made it, for yourself, and screw the rest of the world. And you didn't take the win from anybody either; they took it away from themselves. Accepting other people's excuses while not accepting mine is like saying I did it because my life is easier than theirs, but that's not even true, so screw it, I won't be nice to people making excuses anymore. If you didn't make it, you can tell yourself everything you want, give yourself as many excuses as possible. If you didn't win, it's your own fault, and seriously, you won't win the next time around either without a serious change of attitude. Let's call a cat a cat and a loser a loser. Here is a quote from Rounders (1998): "Why do you think the same five guys make it to the final table at the World Series of Poker every single year? What are they, the luckiest guys in Las Vegas? It's a skill game, Jo." And this is so true. It's a game; writing is a game, and it's a skill game. I wrote 77k in November; it was hard, but it was nothing unachievable. I wrote 100k+ this time around, in not even a month. Winning NaNoWriMo has nothing to do with being lucky or having time. You only have the time that you want to have. Winning NaNoWriMo is about telling yourself: "I'm going to do this."
Here is another quote from Rounders (1998): "Listen, here's the thing. If you can't spot the sucker in your first half hour at the table, then you are the sucker. Guys around here'll tell ya... you play for a living." So what about not feeling the thrill? Well, I thought there was something about me that was broken, but in fact it's quite the opposite. I thought that because a lot of people can't make it, I should be overjoyed about verifying my winning word count. But the fact is, I'm not telling myself every time I solve the Schrödinger equation that I'm such an awesome person because very few people in the world can do it. I'm not telling myself every time I write a post in English that it's freaking amazing because it's not my native language. In the same way, winning NaNoWriMo-style contests has become a habit; it's something I do and I'm freaking good at it. I won't be pushing anymore to feel the thrill; I'll get the thrill from something else. I'll just continue writing novels for people to enjoy, because that's just what I do. That's it.
Energy level From Wikipedia, the free encyclopedia Energy levels for an electron in an atom: ground state and excited states. After absorbing energy, an electron may jump from the ground state to a higher energy excited state. A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy. This contrasts with classical particles, which can have any energy. These discrete values are called energy levels. The term is commonly used for the energy levels of electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized. In chemistry and atomic physics, an electron shell, or a principal energy level, may be thought of as an orbit followed by electrons around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called the "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on farther and farther from the nucleus. The shells correspond with the principal quantum numbers (n = 1, 2, 3, 4 ...) or are labeled alphabetically with the letters used in X-ray notation (K, L, M, …). Each shell can contain only a fixed number of electrons: the first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10), and so on. The general formula is that the nth shell can in principle hold up to 2n² electrons.[1] Since electrons are electrically attracted to the nucleus, an atom's electrons will generally occupy outer shells only if the more inner shells have already been completely filled by other electrons.
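The 2, 8, 18 capacities quoted above are just the 2n² rule evaluated for successive shells:

```python
def shell_capacity(n):
    """Maximum number of electrons the nth shell can hold (the 2n^2 rule)."""
    return 2 * n ** 2

# Capacities of the K, L, M, N shells (n = 1..4):
capacities = [shell_capacity(n) for n in range(1, 5)]
print(capacities)  # [2, 8, 18, 32]
```

The 2 comes from the two spin states per orbital and the n² from the number of orbitals in shell n, which is why the per-subshell counts 2 + 6 + 10 + … sum to the same values.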
However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. (See Madelung rule for more details.) For an explanation of why electrons exist in these shells, see electron configuration.[2] If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative potential energy. If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. If more than one quantum mechanical state is at the same energy, the energy levels are "degenerate". They are then called degenerate energy levels. Wavefunctions of a hydrogen atom, showing the probability of finding the electron in the space around the nucleus. Each stationary state defines a specific energy level of the atom. Quantized energy levels result from the relation between a particle's energy and its wavelength. For a confined particle such as an electron in an atom, the wave function has the form of standing waves. Only stationary states with energies corresponding to integral numbers of wavelengths can exist; for other states the waves interfere destructively, resulting in zero probability density. Elementary examples that show mathematically how energy levels come about are the particle in a box and the quantum harmonic oscillator. The first evidence of quantization in atoms was the observation of spectral lines in light from the sun in the early 1800s by Joseph von Fraunhofer and William Hyde Wollaston. The notion of energy levels was proposed in 1913 by Danish physicist Niels Bohr in the Bohr theory of the atom. 
The modern quantum mechanical theory giving an explanation of these energy levels in terms of the Schrödinger equation was advanced by Erwin Schrödinger and Werner Heisenberg in 1926. Intrinsic energy levels In the formulas below for the energy of electrons at various levels in an atom, the zero point for energy is set when the electron in question has completely left the atom, i.e. when the electron's principal quantum number n = ∞. When the electron is bound to the atom at any closer value of n, the electron's energy is lower and is considered negative. Orbital state energy level: atom/ion with nucleus + one electron Assume there is one electron in a given atomic orbital in a hydrogen-like atom (ion). The energy of its state is mainly determined by the electrostatic interaction of the (negative) electron with the (positive) nucleus. The energy levels of an electron around a nucleus are given by: E_n = − h c R Z² / n² (typically between 1 eV and 10³ eV), where R is the Rydberg constant, Z is the atomic number, n is the principal quantum number, h is Planck's constant, and c is the speed of light. For hydrogen-like atoms (ions) only, the Rydberg levels depend only on the principal quantum number n. This equation is obtained by combining the Rydberg formula for any hydrogen-like element (shown below) with E = h ν = h c / λ, assuming that the principal quantum number n above = n1 in the Rydberg formula and n2 = ∞ (the principal quantum number of the energy level the electron descends from when emitting a photon). The Rydberg formula was derived from empirical spectroscopic emission data. An equivalent formula can be derived quantum mechanically from the time-independent Schrödinger equation with a kinetic energy Hamiltonian operator using a wave function as an eigenfunction to obtain the energy levels as eigenvalues, but the Rydberg constant would be replaced by other fundamental physics constants. 
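The hydrogen-like level energies can be evaluated directly, since h c R ≈ 13.6057 eV (the Rydberg unit of energy). A minimal sketch (function name is ours; the constant is the CODATA value):

```python
RYDBERG_EV = 13.605693  # Rydberg unit of energy, h*c*R_infinity, in eV

def level_energy(n: int, Z: int = 1) -> float:
    """Energy of level n for a hydrogen-like ion with atomic number Z, in eV."""
    return -RYDBERG_EV * Z ** 2 / n ** 2

print(level_energy(1))        # hydrogen ground state, about -13.6 eV
print(level_energy(2))        # about -3.4 eV
print(level_energy(1, Z=2))   # He+ ground state, about -54.4 eV
```

Note the n = ∞ limit gives zero, matching the zero-point convention stated above.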
Electron–electron interactions in atoms If there is more than one electron around the atom, electron–electron interactions raise the energy level. These interactions are often neglected if the spatial overlap of the electron wavefunctions is low. For multi-electron atoms, interactions between electrons cause the preceding equation to be no longer accurate as stated simply with Z as the atomic number. A simple (though not complete) way to understand this is as a shielding effect, where the outer electrons see an effective nucleus of reduced charge, since the inner electrons are bound tightly to the nucleus and partially cancel its charge. This leads to an approximate correction where Z is substituted with an effective nuclear charge symbolized as Zeff that depends strongly on the principal quantum number. In such cases, the orbital types (determined by the azimuthal quantum number ℓ) as well as their levels within the molecule affect Zeff and therefore also affect the various atomic electron energy levels. The Aufbau principle of filling an atom with electrons for an electron configuration takes these differing energy levels into account. For filling an atom with electrons in the ground state, the lowest energy levels are filled first, consistent with the Pauli exclusion principle, the Aufbau principle, and Hund's rule. Fine structure splitting Fine structure arises from relativistic kinetic energy corrections, spin–orbit coupling (an electrodynamic interaction between the electron's spin and motion and the nucleus's electric field), and the Darwin term (contact-term interaction of s-shell electrons inside the nucleus). These affect the levels by a typical order of magnitude of 10⁻³ eV. Hyperfine structure This even finer structure is due to electron–nucleus spin–spin interaction, shifting the energy levels by a typical order of magnitude of 10⁻⁴ eV. 
Energy levels due to external fields Zeeman effect There is an interaction energy associated with the magnetic dipole moment, μL, arising from the electronic orbital angular momentum, L, given by U = −μL · B, with μL = −μB L / ħ. Additionally, the magnetic moment arising from the electron spin must be taken into account: due to relativistic effects (Dirac equation), there is a magnetic moment μS = −gS μB S / ħ, with gS the electron-spin g-factor (about 2), resulting in a total magnetic moment μ = μL + μS. The interaction energy therefore becomes U = −μ · B = μB B (mℓ + gS ms). Stark effect An external electric field likewise shifts and splits the energy levels (the Stark effect). Molecules Chemical bonds between atoms in a molecule form because they make the situation more stable for the involved atoms, which generally means the sum energy level for the involved atoms in the molecule is lower than if the atoms were not so bonded. As separate atoms approach each other to covalently bond, their orbitals affect each other's energy levels to form bonding and antibonding molecular orbitals. The energy level of the bonding orbitals is lower, and the energy level of the antibonding orbitals is higher. For the bond in the molecule to be stable, the covalent bonding electrons occupy the lower-energy bonding orbital, which may be signified by such symbols as σ or π depending on the situation. Corresponding anti-bonding orbitals can be signified by adding an asterisk to get σ* or π* orbitals. A non-bonding orbital in a molecule is an orbital with electrons in outer shells which do not participate in bonding, and its energy level is the same as that of the constituent atom. Such orbitals can be designated as n orbitals. The electrons in an n orbital are typically lone pairs.[3] In polyatomic molecules, different vibrational and rotational energy levels are also involved. Roughly speaking, a molecular energy state, i.e. 
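The magnitude of the Zeeman shifts can be illustrated numerically. A sketch under stated assumptions: the field is taken along z, so the first-order shift is μB B (mℓ + gS ms); constants are CODATA values and the function name is ours:

```python
MU_B_EV_PER_T = 5.788381e-5  # Bohr magneton in eV per tesla
G_S = 2.002319               # electron-spin g-factor

def zeeman_shift(m_l: float, m_s: float, B: float) -> float:
    """First-order Zeeman energy shift (eV) for field B (tesla) along z."""
    return MU_B_EV_PER_T * (m_l + G_S * m_s) * B

# In a 1 T field the shifts are of order 1e-4 eV:
print(zeeman_shift(1, 0.5, 1.0))    # about 1.16e-4 eV
print(zeeman_shift(-1, -0.5, 1.0))  # about -1.16e-4 eV
```

This confirms the order of magnitude quoted for fine-structure-scale splittings: roughly 10⁻⁴ eV per tesla.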
an eigenstate of the molecular Hamiltonian, is the sum of the electronic, vibrational, rotational, nuclear, and translational components: E = Eelectronic + Evibrational + Erotational + Enuclear + Etranslational, where Eelectronic is an eigenvalue of the electronic molecular Hamiltonian (the value of the potential energy surface) at the equilibrium geometry of the molecule. The molecular energy levels are labelled by the molecular term symbols. The specific energies of these components vary with the specific energy state and the substance. In molecular physics and quantum chemistry, an energy level is a quantized energy of a bound quantum mechanical state. Energy level diagrams There are various types of energy level diagrams for bonds between atoms in a molecule: molecular orbital diagrams, Jablonski diagrams, and Franck–Condon diagrams. Energy level transitions An increase in energy level from E1 to E2 resulting from absorption of a photon, represented by the red squiggly arrow, whose energy is h ν. A decrease in energy level from E2 to E1 resulting in emission of a photon, represented by the red squiggly arrow, whose energy is h ν. Electrons in atoms and molecules can change (make transitions in) energy levels by emitting or absorbing a photon (of electromagnetic radiation), whose energy must be exactly equal to the energy difference between the two levels. Electrons can also be completely removed from a chemical species such as an atom, molecule, or ion. Complete removal of an electron from an atom can be a form of ionization, which is effectively moving the electron out to an orbital with an infinite principal quantum number, in effect so far away as to have practically no more effect on the remaining atom (ion). For various types of atoms, there are 1st, 2nd, 3rd, etc. ionization energies for removing the 1st, then the 2nd, then the 3rd, etc. of the highest-energy electrons, respectively, from the atom originally in the ground state. 
A corresponding amount of energy can also be released, sometimes in the form of photon energy, when electrons are added to positively charged ions or sometimes atoms. Molecules can also undergo transitions in their vibrational or rotational energy levels. Energy level transitions can also be nonradiative, meaning emission or absorption of a photon is not involved. If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. Such a species can be excited to a higher energy level by absorbing a photon whose energy is equal to the energy difference between the levels. Conversely, an excited species can go to a lower energy level by spontaneously emitting a photon whose energy equals the energy difference. A photon's energy is equal to Planck's constant (h) times its frequency (f) and thus is proportional to its frequency, or inversely proportional to its wavelength (λ):[3] ΔE = h f = h c / λ, since c, the speed of light, equals f λ.[3] Correspondingly, many kinds of spectroscopy are based on detecting the frequency or wavelength of the emitted or absorbed photons to provide information on the material analyzed, including information on the energy levels and electronic structure of materials obtained by analyzing the spectrum. An asterisk is commonly used to designate an excited state. An electron transition in a molecule's bond from a ground state to an excited state may have a designation such as σ → σ*, π → π*, or n → π*, meaning excitation of an electron from a σ bonding to a σ antibonding orbital, from a π bonding to a π antibonding orbital, or from an n non-bonding to a π antibonding orbital.[3][4] Reverse electron transitions for all these types of excited molecules are also possible, returning them to their ground states, which can be designated as σ* → σ, π* → π, or π* → n. 
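The relation ΔE = h f = h c / λ, combined with the Rydberg levels, gives the wavelengths of atomic spectral lines. A minimal sketch for hydrogen (constants are CODATA values; the function name is ours):

```python
H_EV_S = 4.135667696e-15  # Planck constant in eV*s
C = 2.99792458e8          # speed of light in m/s
RYDBERG_EV = 13.605693    # Rydberg unit of energy in eV

def transition_wavelength_nm(n_lower: int, n_upper: int) -> float:
    """Wavelength (nm) of the photon emitted when a hydrogen electron
    drops from level n_upper to level n_lower."""
    delta_e = RYDBERG_EV * (1 / n_lower ** 2 - 1 / n_upper ** 2)  # eV
    return H_EV_S * C / delta_e * 1e9

print(transition_wavelength_nm(1, 2))  # Lyman-alpha, about 121.5 nm
print(transition_wavelength_nm(2, 3))  # H-alpha (Balmer), about 656 nm
```

The 2 → 1 line falls in the ultraviolet and the 3 → 2 line in the visible, matching the frequency ranges discussed below.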
A transition in an energy level of an electron in a molecule may be combined with a vibrational transition and called a vibronic transition. A vibrational and rotational transition may be combined by rovibrational coupling. In rovibronic coupling, electron transitions are simultaneously combined with both vibrational and rotational transitions. Photons involved in transitions may have energy of various ranges in the electromagnetic spectrum, such as X-ray, ultraviolet, visible light, infrared, or microwave radiation, depending on the type of transition. In a very general way, energy level differences between electronic states are larger, differences between vibrational levels are intermediate, and differences between rotational levels are smaller, although there can be overlap. Translational energy levels are practically continuous and can be calculated as kinetic energy using classical mechanics. Higher temperature causes fluid atoms and molecules to move faster, increasing their translational energy, and thermally excites molecules to higher average amplitudes of vibrational and rotational modes (excites the molecules to higher internal energy levels). This means that as temperature rises, translational, vibrational, and rotational contributions to molecular heat capacity let molecules absorb heat and hold more internal energy. Conduction of heat typically occurs as molecules or atoms collide, transferring the heat between each other. At even higher temperatures, electrons can be thermally excited to higher energy orbitals in atoms or molecules. A subsequent drop of an electron to a lower energy level can release a photon, causing a possibly colored glow. 
An electron farther from the nucleus has higher potential energy than an electron closer to the nucleus, and is thus less tightly bound to the nucleus, since its potential energy is negative and inversely dependent on its distance from the nucleus.[5] Crystalline materials Crystalline solids are found to have energy bands, instead of or in addition to energy levels. Electrons can take on any energy within an unfilled band. At first this appears to be an exception to the requirement for energy levels. However, as shown in band theory, energy bands are actually made up of many discrete energy levels which are too close together to resolve. Within a band the number of levels is of the order of the number of atoms in the crystal, so although electrons are actually restricted to these energies, they appear to be able to take on a continuum of values. The important energy levels in a crystal are the top of the valence band, the bottom of the conduction band, the Fermi level, the vacuum level, and the energy levels of any defect states in the crystal.
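How a band emerges from many closely spaced discrete levels can be demonstrated with a toy model (our construction, not from the text): a one-dimensional tight-binding chain of N identical atoms, with an assumed nearest-neighbour hopping amplitude t. Diagonalizing its Hamiltonian yields N discrete levels whose spacing shrinks as N grows, while the band width stays near 4t:

```python
import numpy as np

def chain_levels(n_sites: int, e0: float = 0.0, t: float = 1.0) -> np.ndarray:
    """Energy levels of a 1-D tight-binding chain of n_sites identical atoms.

    A single atomic level e0 splits into n_sites discrete levels spanning
    a band of width approaching 4*t as n_sites grows."""
    h = np.diag(np.full(n_sites, e0)) \
        + np.diag(np.full(n_sites - 1, -t), 1) \
        + np.diag(np.full(n_sites - 1, -t), -1)
    return np.linalg.eigvalsh(h)  # sorted real eigenvalues of a Hermitian matrix

for n in (2, 10, 100):
    levels = chain_levels(n)
    print(n, round(levels.min(), 3), round(levels.max(), 3),
          round(np.diff(levels).max(), 4))  # level count up, spacing down
```

For N of the order of Avogadro's number the spacing is far below any resolvable energy, which is exactly why the band appears continuous.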
f2374d9b557cc262
zbMATH — the first resource for mathematics Semi-classical constructions in solid state physics. (English) Zbl 0732.35079 A Schrödinger equation with weak constant magnetic field is considered. Approximate eigenfunctions are constructed in a neighborhood of a saddle point of the Fermi surface. The construction uses solutions of the Weber equation and complex analysis. The problem was studied for a Fermi surface without stationary points by J. C. Guillot, J. Ralston, E. Trubowitz. Reviewer: J. Asch (Berlin) MSC: 35Q40 PDEs in connection with quantum mechanics; 81Q20 Semi-classical techniques in quantum theory, including WKB and Maslov methods; 35C20 Asymptotic expansions of solutions of PDE
906a4c150b74a394
Quantum mechanics Quantum mechanics (QM – also known as quantum physics, or quantum theory) is a branch of physics which deals with physical phenomena at microscopic scales, where the action is on the order of the Planck constant. It departs from classical mechanics primarily at the quantum realm of atomic and subatomic length scales. Quantum mechanics provides a mathematical description of much of the dual particle-like and wave-like behavior and interactions of energy and matter. It is the non-relativistic limit of quantum field theory (QFT), a later theory that combines quantum mechanics with relativity. In advanced topics of quantum mechanics, some of these behaviors are macroscopic (see macroscopic quantum phenomena) and emerge only at extreme (i.e., very low or very high) energies or temperatures (such as in the use of superconducting magnets). The name quantum mechanics derives from the observation that some physical quantities can change only in discrete amounts (Latin quanta), and not in a continuous (cf. analog) way. For example, the angular momentum of an electron bound to an atom or molecule is quantized.[1] In the context of quantum mechanics, the wave–particle duality of energy and matter and the uncertainty principle provide a unified view of the behavior of photons, electrons, and other atomic-scale objects. The mathematical formulations of quantum mechanics are abstract. A mathematical function known as the wavefunction provides information about the probability amplitude of position, momentum, and other physical properties of a particle. Mathematical manipulations of the wavefunction usually involve the bra-ket notation, which requires an understanding of complex numbers and linear functionals. The wavefunction treats the object as a quantum harmonic oscillator, and the mathematics is akin to that describing acoustic resonance. 
Many of the results of quantum mechanics are not easily visualized in terms of classical mechanics—for instance, the ground state in a quantum mechanical model is a non-zero energy state that is the lowest permitted energy state of a system, as opposed to a more "traditional" system that is thought of as simply being at rest, with zero kinetic energy. Instead of a traditional static, unchanging zero state, quantum mechanics allows for far more dynamic, chaotic possibilities, according to John Wheeler. The earliest versions of quantum mechanics were formulated in the first decade of the 20th century. At around the same time, the atomic theory and the corpuscular theory of light (as updated by Einstein) first came to be widely accepted as scientific fact; these latter theories can be viewed as quantum theories of matter and electromagnetic radiation, respectively. Early quantum theory was significantly reformulated in the mid-1920s by Werner Heisenberg, Max Born and Pascual Jordan, who created matrix mechanics; Louis de Broglie and Erwin Schrödinger (Wave Mechanics); and Wolfgang Pauli and Satyendra Nath Bose (statistics of subatomic particles). Moreover, the Copenhagen interpretation of Niels Bohr became widely accepted. By 1930, quantum mechanics had been further unified and formalized by the work of David Hilbert, Paul Dirac and John von Neumann,[2] with a greater emphasis placed on measurement in quantum mechanics, the statistical nature of our knowledge of reality, and philosophical speculation about the role of the observer. Quantum mechanics has since branched out into almost every aspect of 20th century physics and other disciplines, such as quantum chemistry, quantum electronics, quantum optics, and quantum information science. Much 19th century physics has been re-evaluated as the "classical limit" of quantum mechanics, and its more advanced developments in terms of quantum field theory, string theory, and speculative quantum gravity theories. 
Scientific inquiry into the wave nature of light began in the 17th and 18th centuries when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations.[3] In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper entitled "On the nature of light and colours". This experiment played a major role in the general acceptance of the wave theory of light. In 1838, with the discovery of cathode rays by Michael Faraday, these studies were followed by the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system can be discrete, and the 1900 quantum hypothesis of Max Planck.[4] Planck's hypothesis that energy is radiated and absorbed in discrete "quanta" (or "energy elements") precisely matched the observed patterns of black-body radiation. In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation, known as Wien's law in his honor. Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was valid only at high frequencies, and underestimated the radiance at low frequencies. Later, Max Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law, which led to the development of quantum mechanics. Among the first to study quantum phenomena in nature were Arthur Compton, C.V. Raman, and Pieter Zeeman, each of whom has a quantum effect named after him. Robert A. Millikan studied the photoelectric effect experimentally and Albert Einstein developed a theory for it. At the same time Niels Bohr developed his theory of the atomic structure, which was later confirmed by the experiments of Henry Moseley. 
In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits, a concept also introduced by Arnold Sommerfeld.[5] This phase is known as Old quantum theory. According to Planck, each energy element E is proportional to its frequency ν: E = h ν, where h is Planck's constant. Planck is considered the father of the quantum theory. Planck (cautiously) insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself.[6] In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizeable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. The foundations of quantum mechanics were established during the first half of the 20th century by Max Planck, Niels Bohr, Werner Heisenberg, Louis de Broglie, Arthur Compton, Albert Einstein, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Enrico Fermi, Wolfgang Pauli, Max von Laue, Freeman Dyson, David Hilbert, Wilhelm Wien, Satyendra Nath Bose, Arnold Sommerfeld and others. In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the "Old Quantum Theory". Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons (1926). From Einstein's simple postulation was born a flurry of debating, theorizing, and testing. Thus the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927. The other exemplar that led to quantum mechanics was the study of electromagnetic waves, such as visible and ultraviolet light. 
When it was found in 1900 by Max Planck that the energy of waves could be described as consisting of small packets or "quanta", Albert Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon) with a discrete quantum of energy that was dependent on its frequency.[7] As a matter of fact, Einstein was able to use the photon theory of light to explain the photoelectric effect, for which he won the Nobel Prize in 1921. This led to a theory of unity between subatomic particles and electromagnetic waves, called wave–particle duality, in which particles and waves were neither one nor the other, but had certain properties of both. While quantum mechanics traditionally described the world of the very small, it is also needed to explain certain recently investigated macroscopic systems such as superconductors, superfluids, and larger organic molecules.[8] The word quantum derives from the Latin, meaning "how great" or "how much".[9] In quantum mechanics, it refers to a discrete unit that quantum theory assigns to certain physical quantities, such as the energy of an atom at rest (see Figure 1). The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics dealing with atomic and sub-atomic systems which is today called quantum mechanics. It is the underlying mathematical framework of many fields of physics and chemistry, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational physics, computational chemistry, quantum chemistry, particle physics, nuclear chemistry, and nuclear physics.[10] Some fundamental aspects of the theory are still actively studied.[11] Quantum mechanics is essential to understanding the behavior of systems at atomic length scales and smaller. 
If classical mechanics alone governed the workings of an atom, electrons could not really "orbit" the nucleus. Since bodies in circular motion are accelerating, electrons must emit radiation, losing energy and eventually colliding with the nucleus in the process. This clearly contradicts the existence of stable atoms. However, in the natural world, electrons normally remain in an uncertain, non-deterministic, "smeared", probabilistic, wave–particle wavefunction orbital path around (or through) the nucleus, defying the traditional assumptions of classical mechanics and electromagnetism.[12] Quantum mechanics was initially developed to provide a better explanation and description of the atom, especially the differences in the spectra of light emitted by different isotopes of the same element, as well as subatomic particles. In short, the quantum-mechanical atomic model has succeeded spectacularly in the realm where classical mechanics and electromagnetism falter. Mathematical formulations[edit] In the mathematically rigorous formulation of quantum mechanics developed by Paul Dirac,[13] David Hilbert,[14] John von Neumann,[15] and Hermann Weyl[16] the possible states of a quantum mechanical system are represented by unit vectors (called "state vectors"). Formally, these reside in a complex separable Hilbert space - variously called the "state space" or the "associated Hilbert space" of the system - that is well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system - for example, the state space for position and momentum states is the space of square-integrable functions, while the state space for the spin of a single proton is just the product of two complex planes. 
Each observable is represented by a maximally Hermitian (precisely: by a self-adjoint) linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. If the operator's spectrum is discrete, the observable can attain only those discrete eigenvalues. In the formalism of quantum mechanics, the state of a system at a given time is described by a complex wave function, also referred to as state vector in a complex vector space.[17] This abstract mathematical object allows for the calculation of probabilities of outcomes of concrete experiments. For example, it allows one to compute the probability of finding an electron in a particular region around the nucleus at a particular time. Contrary to classical mechanics, one can never make simultaneous predictions of conjugate variables, such as position and momentum, with accuracy. For instance, electrons may be considered (to a certain probability) to be located somewhere within a given region of space, but with their exact positions unknown. Contours of constant probability, often referred to as "clouds", may be drawn around the nucleus of an atom to conceptualize where the electron might be located with the most probability. Heisenberg's uncertainty principle quantifies the inability to precisely locate the particle given its conjugate momentum.[18] According to one interpretation, as the result of a measurement the wave function containing the probability information for a system collapses from a given initial state to a particular eigenstate. The possible results of a measurement are the eigenvalues of the operator representing the observable — which explains the choice of Hermitian operators, for which all the eigenvalues are real. 
The probability distribution of an observable in a given state can be found by computing the spectral decomposition of the corresponding operator. Heisenberg's uncertainty principle is represented by the statement that the operators corresponding to certain observables do not commute. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr-Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wavefunction collapse" (see, for example, the relative state interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wavefunctions become entangled, so that the original quantum system ceases to exist as an independent entity. For details, see the article on measurement in quantum mechanics.[19] Generally, quantum mechanics does not assign definite values. Instead, it makes a prediction using a probability distribution; that is, it describes the probability of obtaining the possible outcomes from measuring an observable. Often these results are skewed by many causes, such as dense probability clouds. Probability clouds are approximate, but better than the Bohr model, whereby electron location is given by a probability function, the wave function eigenvalue, such that the probability is the squared modulus of the complex amplitude, or quantum state nuclear attraction.[20][21] Naturally, these probabilities will depend on the quantum state at the "instant" of the measurement. Hence, uncertainty is involved in the value. 
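The spectral-decomposition recipe above can be made concrete with a two-level toy system (our example, not from the text): represent an observable as a Hermitian matrix, diagonalize it, and apply the Born rule, where the probability of each outcome is the squared modulus of the state's overlap with the corresponding eigenvector:

```python
import numpy as np

# A toy observable: sigma_x, the spin-x operator, as a 2x2 Hermitian matrix.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

# Its eigenvalues are the only possible measurement outcomes.
eigenvalues, eigenvectors = np.linalg.eigh(sigma_x)

# State vector |0> = (1, 0): "spin up" along z.
state = np.array([1, 0], dtype=complex)

# Born rule: probability of each outcome is |<eigenvector|state>|^2.
probabilities = np.abs(eigenvectors.conj().T @ state) ** 2

print(eigenvalues)    # the discrete outcomes, -1 and +1
print(probabilities)  # each outcome equally likely for this state
```

The eigenvalues are real, as guaranteed for a Hermitian operator, and the probabilities sum to one; a z-up state gives a 50/50 split when spin along x is measured.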
There are, however, certain states that are associated with a definite value of a particular observable. These are known as eigenstates of the observable ("eigen" can be translated from German as meaning "inherent" or "characteristic").[22] In the everyday world, it is natural and intuitive to think of everything (every observable) as being in an eigenstate. Everything appears to have a definite position, a definite momentum, a definite energy, and a definite time of occurrence. However, quantum mechanics does not pinpoint the exact values of a particle's position and momentum (since they are conjugate pairs) or its energy and time (since they too are conjugate pairs); rather, it provides only a range of probabilities for the values that might be obtained. Therefore, it is helpful to use different words to describe states having uncertain values and states having definite values (eigenstates). Usually, a system will not be in an eigenstate of the observable (particle) we are interested in. However, if one measures the observable, the wavefunction will instantaneously be an eigenstate (or "generalized" eigenstate) of that observable. This process is known as wavefunction collapse, a controversial and much-debated process[23] that involves expanding the system under study to include the measurement device. If one knows the corresponding wave function at the instant before the measurement, one will be able to compute the probability of the wavefunction collapsing into each of the possible eigenstates. For example, the free particle in the previous example will usually have a wavefunction that is a wave packet centered around some mean position x0 (neither an eigenstate of position nor of momentum). When one measures the position of the particle, it is impossible to predict with certainty the result.[19] It is probable, but not certain, that it will be near x0, where the amplitude of the wave function is large. 
After the measurement is performed, having obtained some result x, the wave function collapses into a position eigenstate centered at x.[24] The time evolution of a quantum state is described by the Schrödinger equation, in which the Hamiltonian (the operator corresponding to the total energy of the system) generates the time evolution. The time evolution of wave functions is deterministic in the sense that - given a wavefunction at an initial time - it makes a definite prediction of what the wavefunction will be at any later time.[25] During a measurement, on the other hand, the change of the initial wavefunction into another, later wavefunction is not deterministic; it is unpredictable (i.e., random). Time-evolution simulations can be found in the cited Wolfram demonstrations.[26][27] Wave functions change as time progresses. The Schrödinger equation describes how wavefunctions change in time, playing a role similar to Newton's second law in classical mechanics. The Schrödinger equation, applied to the aforementioned example of the free particle, predicts that the center of a wave packet will move through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more uncertain with time. This also has the effect of turning a position eigenstate (which can be thought of as an infinitely sharp wave packet) into a broadened wave packet that no longer represents a (definite, certain) position eigenstate.[28] Fig. 1: Probability densities corresponding to the wavefunctions of an electron in a hydrogen atom possessing definite energy levels (increasing from the top of the image to the bottom: n = 1, 2, 3, ...) and angular momenta (increasing across from left to right: s, p, d, ...). Brighter areas correspond to higher probability density in a position measurement.
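The wave-packet spreading described above can be quantified with the standard result for a free Gaussian packet, whose width grows as σ(t) = σ0·sqrt(1 + (ħt/(2mσ0²))²); the sketch below (parameter values are illustrative) evaluates this for an electron-scale packet:

```python
import math

# Spreading of a free Gaussian wave packet (standard textbook result):
# sigma(t) = sigma0 * sqrt(1 + (hbar * t / (2 * m * sigma0^2))^2)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
m = 9.1093837015e-31    # electron mass, kg
sigma0 = 1e-10          # initial width, m (atomic scale; illustrative)

def packet_width(t):
    return sigma0 * math.sqrt(1.0 + (hbar * t / (2 * m * sigma0 ** 2)) ** 2)

# The position uncertainty grows monotonically with time.
for t in (0.0, 1e-16, 1e-15):
    print(t, packet_width(t))
```

For these values the packet roughly doubles its width within about a femtosecond, which is why spreading is dramatic for electrons yet utterly negligible for macroscopic objects, whose much larger mass appears in the denominator.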
Such wavefunctions are directly comparable to Chladni's figures of acoustic modes of vibration in classical physics, and are modes of oscillation as well, possessing a sharp energy and, thus, a definite frequency. The angular momentum and energy are quantized, and take only discrete values like those shown (as is the case for resonant frequencies in acoustics). Some wave functions produce probability distributions that are constant, or independent of time - such as in a stationary state of constant energy, where the time dependence cancels in the absolute square of the wave function. Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics it is described by a static, spherically symmetric wavefunction surrounding the nucleus (Fig. 1) (note, however, that only the lowest angular momentum states, labeled s, are spherically symmetric).[29] The Schrödinger equation acts on the entire probability amplitude, not merely its absolute value. Whereas the absolute value of the probability amplitude encodes information about probabilities, its phase encodes information about the interference between quantum states. This gives rise to the "wave-like" behavior of quantum states. As it turns out, analytic solutions of the Schrödinger equation are available for only a very small number of relatively simple model Hamiltonians, of which the quantum harmonic oscillator, the particle in a box, the hydrogen molecular ion, and the hydrogen atom are the most important representatives. Even the helium atom - which contains just one more electron than does the hydrogen atom - has defied all attempts at a fully analytic treatment. There exist several techniques for generating approximate solutions, however.
In the important method known as perturbation theory, one uses the analytic result for a simple quantum mechanical model to generate a result for a more complicated model that is related to the simpler model by (for one example) the addition of a weak potential energy. Another method is the "semi-classical equation of motion" approach, which applies to systems for which quantum mechanics produces only weak (small) deviations from classical behavior. These deviations can then be computed based on the classical motion. This approach is particularly important in the field of quantum chaos. Mathematically equivalent formulations of quantum mechanics[edit] There are numerous mathematically equivalent formulations of quantum mechanics. One of the oldest and most commonly used formulations is the "transformation theory" proposed by the late Cambridge theoretical physicist Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics—matrix mechanics (invented by Werner Heisenberg)[30] and wave mechanics (invented by Erwin Schrödinger).[31] Especially since Werner Heisenberg was awarded the Nobel Prize in Physics in 1932 for the creation of quantum mechanics, the role of Max Born in the development of QM was overlooked until the 1954 Nobel award. The role is noted in a 2005 biography of Born, which recounts his role in the matrix formulation of quantum mechanics, and the use of probability amplitudes. Heisenberg himself acknowledges having learned matrices from Born, as published in a 1940 festschrift honoring Max Planck.[32] In the matrix formulation, the instantaneous state of a quantum system encodes the probabilities of its measurable properties, or "observables". Examples of observables include energy, position, momentum, and angular momentum. 
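The perturbative idea described above can be sketched numerically (an illustrative example added here, with units chosen so that ħ = m = ω = 1): for the harmonic-oscillator ground state under a weak quartic perturbation V = λx⁴, the first-order energy shift is λ⟨0|x⁴|0⟩, whose analytic value in these units is 3λ/4:

```python
import math

# First-order perturbation theory: E1 = lam * <0|x^4|0> for the
# harmonic-oscillator ground state, in units with hbar = m = omega = 1.
# Analytically <0|x^4|0> = 3/4, so E1 = 0.75 * lam.

def psi0_sq(x):
    # |ground-state wavefunction|^2 of the harmonic oscillator
    return math.sqrt(1.0 / math.pi) * math.exp(-x * x)

def first_order_shift(lam, xmax=10.0, n=100_000):
    # Midpoint-rule integration of lam * x^4 * |psi0(x)|^2 over [-xmax, xmax]
    dx = 2 * xmax / n
    total = 0.0
    for i in range(n):
        x = -xmax + (i + 0.5) * dx
        total += x ** 4 * psi0_sq(x) * dx
    return lam * total

print(first_order_shift(0.1))  # approx 0.075 = 0.1 * 3/4
```

The point of the method is visible here: the unperturbed problem supplies an exact wavefunction, and the correction requires only an expectation value in that known state rather than solving the perturbed Schrödinger equation.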
Observables can be either continuous (e.g., the position of a particle) or discrete (e.g., the energy of an electron bound to a hydrogen atom).[33] An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible histories between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics. Interactions with other scientific theories[edit] The rules of quantum mechanics are fundamental. They assert that the state space of a system is a Hilbert space, and that observables of that system are Hermitian operators acting on that space—although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system. An important guide for making these choices is the correspondence principle, which states that the predictions of quantum mechanics reduce to those of classical mechanics when a system moves to higher energies or—equivalently—larger quantum numbers, i.e. whereas a single particle exhibits a degree of randomness, in systems incorporating millions of particles averaging takes over and, at the high energy limit, the statistical probability of random behaviour approaches zero. In other words, classical mechanics is simply a quantum mechanics of large systems. This "high energy" limit is known as the classical or correspondence limit. One can even start from an established classical model of a particular system, then attempt to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein-Gordon equation or the Dirac equation.
While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been employed since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical Coulomb potential, -e^2/(4\pi\epsilon_0 r). This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles. Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg. These three men shared the Nobel Prize in Physics in 1979 for this work.[34] It has proven difficult to construct quantum models of gravity, the remaining fundamental force. Semi-classical approximations are workable, and have led to predictions such as Hawking radiation.
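The Hawking radiation just mentioned comes with a concrete semi-classical prediction, the Hawking temperature T = ħc³/(8πGMk_B); the sketch below (added for illustration) evaluates it for a solar-mass black hole:

```python
import math

# Hawking temperature of a black hole (standard semi-classical result,
# added as an illustration): T = hbar * c^3 / (8 * pi * G * M * k_B).
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23       # Boltzmann constant, J/K
M_sun = 1.989e30         # solar mass, kg

def hawking_temperature(mass_kg):
    return hbar * c ** 3 / (8 * math.pi * G * mass_kg * k_B)

print(hawking_temperature(M_sun))  # approx 6.2e-8 K for a solar-mass hole
```

The inverse dependence on mass is the notable feature: astrophysical black holes are colder than the cosmic microwave background, so the effect is far below any current observational threshold.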
However, the formulation of a complete theory of quantum gravity is hindered by apparent incompatibilities between general relativity (the most accurate theory of gravity currently known) and some of the fundamental assumptions of quantum theory. The resolution of these incompatibilities is an area of active research, and theories such as string theory are among the possible candidates for a future theory of quantum gravity. Classical mechanics has also been extended into the complex domain, with complex classical mechanics exhibiting behaviors similar to quantum mechanics.[35] Quantum mechanics and classical physics[edit] Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy.[36] According to the correspondence principle between classical and quantum mechanics, all objects obey the laws of quantum mechanics, and classical mechanics is just an approximation for large systems of objects (or a statistical quantum mechanics of a large collection of particles).[37] The laws of classical mechanics thus follow from the laws of quantum mechanics as a statistical average at the limit of large systems or large quantum numbers.[38] However, chaotic systems do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems. Quantum coherence is an essential difference between classical and quantum theories, and is illustrated by the Einstein-Podolsky-Rosen paradox, Einstein's attempt to disprove quantum mechanics by an appeal to local realism.[39] Quantum interference involves adding together probability amplitudes, whereas for classical "waves" it is the intensities that are added together.
For microscopic bodies, the extension of the system is much smaller than the coherence length, which gives rise to long-range entanglement and other nonlocal phenomena that are characteristic of quantum systems.[40] Quantum coherence is not typically evident at macroscopic scales — although an exception to this rule can occur at extremely low temperatures (i.e. approaching absolute zero), when quantum behavior can manifest itself on more macroscopic scales.[41] This is in accordance with the following observations:
• Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (which consists of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics.[42]
• While the seemingly "exotic" behavior of matter posited by quantum mechanics and relativity theory becomes more apparent when dealing with particles of extremely small size or velocities approaching the speed of light, the laws of classical Newtonian physics remain accurate in predicting the behavior of the vast majority of "large" objects (on the order of the size of large molecules or bigger) at velocities much smaller than the velocity of light.[43]
Relativity and quantum mechanics[edit] Even with the defining postulates of both Einstein's theory of general relativity and quantum theory being indisputably supported by rigorous and repeated empirical evidence, and while they do not directly contradict each other theoretically (at least with regard to their primary claims), they have proven extremely difficult to incorporate into one consistent, cohesive model.[44] Einstein himself is well known for rejecting some of the claims of quantum mechanics.
While clearly contributing to the field, he did not accept many of the more "philosophical consequences and interpretations" of quantum mechanics, such as the lack of deterministic causality. He is famously quoted as saying, in response to this aspect, "God does not play dice". He also had difficulty with the assertion that a single subatomic particle can occupy numerous areas of space at one time. However, he was also the first to notice some of the apparently exotic consequences of entanglement, and used them to formulate the Einstein-Podolsky-Rosen paradox in the hope of showing that quantum mechanics had unacceptable implications if taken as a complete description of physical reality. This was in 1935, but in 1964 it was shown by John Bell (see Bell inequality) that - although Einstein was correct in identifying seemingly paradoxical implications of quantum mechanical nonlocality - these implications could be experimentally tested. Alain Aspect's initial experiments in 1982, and many subsequent experiments since, have definitively verified quantum entanglement. According to the paper of J. Bell and the Copenhagen interpretation—the common interpretation of quantum mechanics by physicists since 1927 - and contrary to Einstein's ideas, quantum mechanics could not be, at the same time, a "realistic" theory and a "local" theory. The Einstein-Podolsky-Rosen paradox shows in any case that there exist experiments by which one can measure the state of one particle and instantaneously change the state of its entangled partner - although the two particles can be an arbitrary distance apart. However, this effect does not violate causality, since no transfer of information happens. Quantum entanglement forms the basis of quantum cryptography, which is used in high-security commercial applications in banking and government.
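The experimentally testable difference Bell identified can be made concrete with the CHSH quantity (an illustrative sketch, not from the original article): local realistic theories bound |S| ≤ 2, while quantum mechanics, using the singlet-state correlation E(a, b) = −cos(a − b), reaches 2√2 at suitable detector angles:

```python
import math

# CHSH value for the singlet state (standard result, illustrative):
# correlation E(a, b) = -cos(a - b); local realism bounds |S| <= 2,
# while quantum mechanics reaches 2*sqrt(2) (Tsirelson's bound).
def E(a, b):
    return -math.cos(a - b)

a, ap = 0.0, math.pi / 2               # Alice's two detector settings
b, bp = math.pi / 4, 3 * math.pi / 4   # Bob's two detector settings

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))  # approx 2.828 > 2, violating the CHSH bound
```

This is exactly the quantity Aspect-type experiments estimate from coincidence counts; measuring |S| significantly above 2 rules out any local realistic account of the correlations.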
Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th and 21st century physics. Many prominent physicists, including Stephen Hawking, have labored for many years in the attempt to discover a theory underlying everything. This TOE would combine not only the different models of subatomic physics, but also derive the four fundamental forces of nature - the strong force, electromagnetism, the weak force, and gravity - from a single force or phenomenon. While Stephen Hawking was initially a believer in the Theory of Everything, after considering Gödel's Incompleteness Theorem, he concluded that one is not obtainable, and stated so publicly in his lecture "Gödel and the End of Physics" (2002).[45] Attempts at a unified field theory[edit] The quest to unify the fundamental forces through quantum mechanics is still ongoing. Quantum electrodynamics (or "quantum electromagnetism"), which is currently (in the perturbative regime at least) the most accurately tested physical theory,[46] has been successfully merged with the weak nuclear force into the electroweak force, and work is currently being done to merge the electroweak and strong force into the electrostrong force. Current predictions state that at around 10^14 GeV the three aforementioned forces are fused into a single unified field.[47] Beyond this "grand unification," it is speculated that it may be possible to merge gravity with the other three gauge symmetries, expected to occur at roughly 10^19 GeV.
However — and while special relativity is parsimoniously incorporated into quantum electrodynamics — general relativity, currently the best theory describing the gravitational force, has not been fully incorporated into quantum theory. One of the leading authorities continuing the search for a coherent TOE is Edward Witten, a theoretical physicist who formulated the groundbreaking M-theory, which is an attempt at describing the supersymmetry-based string theory. M-theory posits that our apparent 4-dimensional spacetime is, in reality, an 11-dimensional spacetime containing 10 spatial dimensions and 1 time dimension, although 7 of the spatial dimensions are - at lower energies - completely "compactified" (or infinitely curved) and not readily amenable to measurement or probing. Another popular theory is loop quantum gravity (LQG), a theory that describes the quantum properties of gravity. It is also a theory of quantum space and quantum time, because in general relativity the geometry of spacetime is a manifestation of gravity. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. The main output of the theory is a physical picture of space in which space is granular. The granularity is a direct consequence of the quantization. It has the same nature as the granularity of the photons in the quantum theory of electromagnetism, or the discrete energy levels of atoms. But here it is space itself which is discrete. More precisely, space can be viewed as an extremely fine fabric or network "woven" of finite loops. These networks of loops are called spin networks. The evolution of a spin network over time is called a spin foam. The predicted size of this structure is the Planck length, which is approximately 1.616×10^−35 m. According to the theory, there is no meaning to length shorter than this (cf. Planck scale energy).
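The Planck length quoted above follows directly from the fundamental constants, l_P = sqrt(ħG/c³); the short check below (added for illustration) reproduces the quoted value:

```python
import math

# The Planck length follows from fundamental constants:
# l_P = sqrt(hbar * G / c^3).
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

planck_length = math.sqrt(hbar * G / c ** 3)
print(planck_length)  # approx 1.616e-35 m
```

The combination is the unique length that can be built from ħ, G, and c, which is why it marks the scale where quantum effects and gravity are expected to become comparably important.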
Therefore, LQG predicts that not just matter, but also space itself, has an atomic structure. Loop quantum gravity was first proposed by Carlo Rovelli. Philosophical implications[edit] Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. Even fundamental issues, such as Max Born's basic rules concerning probability amplitudes and probability distributions, took decades to be appreciated by society and many leading scientists. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics."[48] The Copenhagen interpretation - due largely to the Danish theoretical physicist Niels Bohr - remains the quantum mechanical formalism that is currently most widely accepted amongst physicists, some 75 years after its enunciation. According to this interpretation, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but instead must be considered a final renunciation of the classical idea of "causality". It is also believed therein that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations. Albert Einstein, himself one of the founders of quantum theory, disliked this loss of determinism in measurement. Einstein held that there should be a local hidden variable theory underlying quantum mechanics and, consequently, that the present theory was incomplete. He produced a series of objections to the theory, the most famous of which has become known as the Einstein-Podolsky-Rosen paradox. John Bell showed that this "EPR" paradox led to experimentally testable differences between quantum mechanics and local realistic theories.
Experiments have been performed confirming the accuracy of quantum mechanics, thereby demonstrating that the physical world cannot be described by any local realistic theory.[49] The Bohr-Einstein debates provide a vibrant critique of the Copenhagen Interpretation from an epistemological point of view. The Everett many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.[50] This is not accomplished by introducing some "new axiom" to quantum mechanics, but on the contrary, by removing the axiom of the collapse of the wave packet. All of the possible consistent states of the measured system and the measuring apparatus (including the observer) are present in a real physical - not just formally mathematical, as in other interpretations - quantum superposition. Such a superposition of consistent state combinations of different systems is called an entangled state. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we can observe only the universe (i.e., the consistent state contribution to the aforementioned superposition) that we, as observers, inhabit. Everett's interpretation is perfectly consistent with John Bell's experiments and makes them intuitively understandable. However, according to the theory of quantum decoherence, these "parallel universes" will never be accessible to us. The inaccessibility can be understood as follows: once a measurement is done, the measured system becomes entangled with both the physicist who measured it and a huge number of other particles, some of which are photons flying away at the speed of light towards the other end of the universe. In order to prove that the wave function did not collapse, one would have to bring all these particles back and measure them again, together with the system that was originally measured. 
Not only is this completely impractical, but even if one could theoretically do it, it would destroy any evidence that the original measurement took place (including the physicist's memory!). In light of these Bell tests, Cramer (1986) formulated his transactional interpretation.[51] Relational quantum mechanics appeared in the late 1990s as the modern derivative of the Copenhagen Interpretation. Quantum mechanics has had enormous success[52] in explaining many of the features of our world. Quantum mechanics is often the only tool available that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Quantum mechanics has strongly influenced string theories, candidates for a Theory of Everything (see reductionism). Quantum mechanics is also critically important for understanding how individual atoms combine covalently to form molecules. The application of quantum mechanics to chemistry is known as quantum chemistry. Relativistic quantum mechanics can, in principle, mathematically describe most of chemistry. Quantum mechanics can also provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others, and the magnitudes of the energies involved.[53] Furthermore, most of the calculations performed in modern computational chemistry rely on quantum mechanics. The working mechanism of the resonant tunneling diode, for example, is based on the phenomenon of quantum tunneling through potential barriers. A great many modern technological devices operate at a scale where quantum effects are significant. Examples include the laser, the transistor (and thus the microchip), the electron microscope, and magnetic resonance imaging (MRI). The study of semiconductors led to the invention of the diode and the transistor, which are indispensable parts of modern electronics systems and devices.
Researchers are currently seeking robust methods of directly manipulating quantum states. Efforts are being made to more fully develop quantum cryptography, which will theoretically allow guaranteed secure transmission of information. A more distant goal is the development of quantum computers, which are expected to perform certain computational tasks exponentially faster than classical computers. Another active research topic is quantum teleportation, which deals with techniques to transmit quantum information over arbitrary distances. Quantum tunneling is vital to the operation of many devices - even in the simple light switch, as otherwise the electrons in the electric current could not penetrate the potential barrier made up of a layer of oxide. Flash memory chips found in USB drives use quantum tunneling to erase their memory cells. While quantum mechanics primarily applies to the atomic regimes of matter and energy, some systems exhibit quantum mechanical effects on a large scale - superfluidity, the frictionless flow of a liquid at temperatures near absolute zero, is one well-known example. Quantum theory also provides accurate descriptions for many previously unexplained phenomena, such as black body radiation and the stability of the orbitals of electrons in atoms. It has also given insight into the workings of many different biological systems, including smell receptors and protein structures.[54] Recent work on photosynthesis has provided evidence that quantum correlations play an essential role in this basic fundamental process of the plant kingdom.[55] Even so, classical physics can often provide good approximations to results otherwise obtained by quantum physics, typically in circumstances with large numbers of particles or large quantum numbers. Free particle[edit] For example, consider a free particle. In quantum mechanics, there is wave-particle duality, so the properties of the particle can be described as the properties of a wave. 
Therefore, its quantum state can be represented as a wave of arbitrary shape and extending over space as a wave function. The position and momentum of the particle are observables. The Uncertainty Principle states that both the position and the momentum cannot simultaneously be measured with complete precision. However, one can measure the position (alone) of a moving free particle, creating an eigenstate of position with a wavefunction that is very large (a Dirac delta) at a particular position x, and zero everywhere else. If one performs a position measurement on such a wavefunction, the resultant x will be obtained with 100% probability (i.e., with full certainty, or complete precision). This is called an eigenstate of position—or, stated in mathematical terms, a generalized position eigenstate (eigendistribution). If the particle is in an eigenstate of position, then its momentum is completely unknown. On the other hand, if the particle is in an eigenstate of momentum, then its position is completely unknown.[56] In an eigenstate of momentum having a plane wave form, it can be shown that the wavelength is equal to h/p, where h is Planck's constant and p is the momentum of the eigenstate.[57] 3D confined electron wave functions for each eigenstate in a Quantum Dot. Here, rectangular and triangular-shaped quantum dots are shown. Energy states in rectangular dots are more 's-type' and 'p-type'. However, in a triangular dot, the wave functions are mixed due to confinement symmetry. Step potential[edit] Scattering at a finite potential step of height V0, shown in green. The amplitudes and direction of left- and right-moving waves are indicated. Yellow is the incident wave, blue are reflected and transmitted waves, red does not occur. E > V0 for this figure. The potential in this case is given by:

V(x) = \begin{cases} 0, & x < 0, \\ V_0, & x \ge 0. \end{cases}

The solutions are superpositions of left- and right-moving waves:

\psi_1(x) = \frac{1}{\sqrt{k_1}} \left(A_\rightarrow e^{i k_1 x} + A_\leftarrow e^{-i k_1 x}\right), \quad x < 0,
\psi_2(x) = \frac{1}{\sqrt{k_2}} \left(B_\rightarrow e^{i k_2 x} + B_\leftarrow e^{-i k_2 x}\right), \quad x > 0,

where the wave vectors are related to the energy via k_1 = \sqrt{2mE/\hbar^2} and k_2 = \sqrt{2m(E - V_0)/\hbar^2}, and the coefficients A and B are determined from the boundary conditions and by imposing a continuous derivative on the solution. Each term of the solution can be interpreted as an incident, reflected, or transmitted component of the wave, allowing the calculation of transmission and reflection coefficients. In contrast to classical mechanics, incident particles with energies higher than the size of the potential step are still partially reflected. Rectangular potential barrier[edit] This is a model for the quantum tunneling effect, which has important applications to modern devices such as flash memory and the scanning tunneling microscope. Particle in a box[edit] 1-dimensional potential energy box (or infinite potential well) The particle in a one-dimensional potential energy box is the simplest example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and infinite potential energy everywhere outside that region. For the one-dimensional case in the x direction, the time-independent Schrödinger equation can be written as:[58]

-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\psi.

Writing the differential operator

\hat{p}_x = -i\hbar\frac{d}{dx},

the previous equation is evocative of the classic kinetic energy analogue

\frac{1}{2m}\hat{p}_x^2\,\psi = E\,\psi,

with E as the energy for the state \psi, which in this case coincides with the kinetic energy of the particle. The general solutions of the Schrödinger equation for the particle in a box are

\psi(x) = A e^{ikx} + B e^{-ikx}, \qquad E = \frac{\hbar^2 k^2}{2m},

or, from Euler's formula,

\psi(x) = C \sin(kx) + D \cos(kx).

The infinite potential walls of the box determine the values of C, D, and k: \psi must vanish at x = 0 and x = L. At x = 0, \psi(0) = C \sin 0 + D \cos 0 = D, and so D = 0.
When x = L, \psi(L) = C \sin(kL) = 0. C cannot be zero, since this would conflict with the Born interpretation. Therefore, \sin(kL) = 0, and so it must be that kL is an integer multiple of \pi: kL = n\pi, with n = 1, 2, 3, \ldots. The quantization of energy levels then follows:

E_n = \frac{n^2 \hbar^2 \pi^2}{2mL^2}.

Finite potential well[edit] This is the generalization of the infinite potential well problem to potential wells of finite depth. Harmonic oscillator[edit] Some trajectories of a harmonic oscillator (i.e. a ball attached to a spring) in classical mechanics (A-B) and quantum mechanics (C-H). In quantum mechanics, the position of the ball is represented by a wave (called the wavefunction), with the real part shown in blue and the imaginary part shown in red. Some of the trajectories (such as C, D, E, and F) are standing waves (or "stationary states"). Each standing-wave frequency is proportional to a possible energy level of the oscillator. This "energy quantization" does not occur in classical physics, where the oscillator can have any energy. As in the classical case, the potential for the quantum harmonic oscillator is given by:

V(x) = \frac{1}{2} m \omega^2 x^2.

This problem can be solved either by solving the Schrödinger equation directly, which is not trivial, or by using the more elegant "ladder method", first proposed by Paul Dirac. The eigenstates are given by:

\psi_n(x) = \sqrt{\frac{1}{2^n\,n!}} \cdot \left(\frac{m\omega}{\pi \hbar}\right)^{1/4} \cdot e^{-\frac{m\omega x^2}{2 \hbar}} \cdot H_n\left(\sqrt{\frac{m\omega}{\hbar}}\, x\right), \qquad n = 0, 1, 2, \ldots,

where H_n are the Hermite polynomials:

H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n}\left(e^{-x^2}\right),

and the corresponding energy levels are

E_n = \hbar \omega \left(n + \frac{1}{2}\right).

This is another example which illustrates the quantization of energy for bound states. See also[edit] 1. ^ The angular momentum of an unbound electron, in contrast, is not quantized. 2. ^ van Hove, Leon (1958). "Von Neumann's contributions to quantum mechanics" (PDF). Bulletin of the American Mathematical Society 64, Part 2: 95–99.  3.
^ Max Born & Emil Wolf, Principles of Optics, 1999, Cambridge University Press 4. ^ Mehra, J.; Rechenberg, H. (1982). The historical development of quantum theory. New York: Springer-Verlag. ISBN 0387906428.  5. ^ http://www.ias.ac.in/resonance/December2010/p1056-1059.pdf 6. ^ Kuhn, T. S. (1978). Black-body theory and the quantum discontinuity 1894-1912. Oxford: Clarendon Press. ISBN 0195023838.  7. ^ Einstein, A. (1905). "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" [On a heuristic point of view concerning the production and transformation of light]. Annalen der Physik 17 (6): 132–148. Bibcode:1905AnP...322..132E. doi:10.1002/andp.19053220607.  Reprinted in The collected papers of Albert Einstein, John Stachel, editor, Princeton University Press, 1989, Vol. 2, pp. 149-166, in German; see also Einstein's early work on the quantum hypothesis, ibid. pp. 134-148. 8. ^ "Quantum interference of large organic molecules". Nature.com. Retrieved April 20, 2013.  9. ^ "Quantum - Definition and More from the Free Merriam-Webster Dictionary". Merriam-webster.com. Retrieved 2012-08-18.  10. ^ http://mooni.fccj.org/~ethall/quantum/quant.htm 11. ^ Compare the list of conferences presented here 12. ^ Oocities.com at the Wayback Machine (archived October 26, 2009) 14. ^ D. Hilbert Lectures on Quantum Theory, 1915-1927 16. ^ H.Weyl "The Theory of Groups and Quantum Mechanics", 1931 (original title: "Gruppentheorie und Quantenmechanik"). 17. ^ Greiner, Walter; Müller, Berndt (1994). Quantum Mechanics Symmetries, Second edition. Springer-Verlag. p. 52. ISBN 3-540-58080-8. , Chapter 1, p. 52 18. ^ "Heisenberg - Quantum Mechanics, 1925-1927: The Uncertainty Relations". Aip.org. Retrieved 2012-08-18.  19. ^ a b Greenstein, George; Zajonc, Arthur (2006). The Quantum Challenge: Modern Research on the Foundations of Quantum Mechanics, Second edition. Jones and Bartlett Publishers, Inc. p. 215. ISBN 0-7637-2470-X. , Chapter 8, p. 215 20. 
^ "[Abstract] Visualization of Uncertain Particle Movement". Actapress.com. Retrieved 2012-08-18.  21. ^ Hirshleifer, Jack (2001). The Dark Side of the Force: Economic Foundations of Conflict Theory. Cambridge University Press. p. 265. ISBN 0-521-80412-4.  22. ^ Dict.cc 23. ^ "Topics: Wave-Function Collapse". Phy.olemiss.edu. 2012-07-27. Retrieved 2012-08-18.  24. ^ "Collapse of the wave-function". Farside.ph.utexas.edu. Retrieved 2012-08-18.  25. ^ "Determinism and Naive Realism : philosophy". Reddit.com. 2009-06-01. Retrieved 2012-08-18.  26. ^ Michael Trott. "Time-Evolution of a Wavepacket in a Square Well — Wolfram Demonstrations Project". Demonstrations.wolfram.com. Retrieved 2010-10-15.  27. ^ Michael Trott. "Time Evolution of a Wavepacket In a Square Well". Demonstrations.wolfram.com. Retrieved 2010-10-15.  28. ^ Mathews, Piravonu Mathews; Venkatesan, K. (1976). A Textbook of Quantum Mechanics. Tata McGraw-Hill. p. 36. ISBN 0-07-096510-2. , Chapter 2, p. 36 29. ^ "Wave Functions and the Schrödinger Equation" (PDF). Retrieved 2010-10-15. [dead link] 30. ^ "Quantum Physics: Werner Heisenberg Uncertainty Principle of Quantum Mechanics. Werner Heisenberg Biography". Spaceandmotion.com. 1976-02-01. Retrieved 2012-08-18.  31. ^ http://th-www.if.uj.edu.pl/acta/vol19/pdf/v19p0683.pdf 32. ^ Nancy Thorndike Greenspan, "The End of the Certain World: The Life and Science of Max Born" (Basic Books, 2005), pp. 124-8 and 285-6. 33. ^ http://ocw.usu.edu/physics/classical-mechanics/pdf_lectures/06.pdf 34. ^ "The Nobel Prize in Physics 1979". Nobel Foundation. Retrieved 2010-02-16.  35. ^ Carl M. Bender, Daniel W. Hook, Karta Kooner (2009-12-31). "Complex Elliptic Pendulum". arXiv:1001.0131 [hep-th]. 36. ^ See, for example, Precision tests of QED. The relativistic refinement of quantum mechanics known as quantum electrodynamics (QED) has been shown to agree with experiment to within 1 part in 10^8 for some atomic properties. 37. 
^ Tipler, Paul; Llewellyn, Ralph (2008). Modern Physics (5 ed.). W. H. Freeman and Company. pp. 160–161. ISBN 978-0-7167-7550-8.  38. ^ "Quantum mechanics course iwhatisquantummechanics". Scribd.com. 2008-09-14. Retrieved 2012-08-18.  39. ^ A. Einstein, B. Podolsky, and N. Rosen, Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47 777 (1935). [1] 40. ^ "Between classical and quantum" (PDF). Retrieved 2012-08-19.  41. ^ (see macroscopic quantum phenomena, Bose-Einstein condensate, and Quantum machine) 42. ^ "Atomic Properties". Academic.brooklyn.cuny.edu. Retrieved 2012-08-18.  43. ^ http://assets.cambridge.org/97805218/29526/excerpt/9780521829526_excerpt.pdf 45. ^ Stephen Hawking; Gödel and the end of physics 46. ^ "Life on the lattice: The most accurate theory we have". Latticeqcd.blogspot.com. 2005-06-03. Retrieved 2010-10-15.  48. ^ The Character of Physical Law (1965) Ch. 6; also quoted in The New Quantum Universe (2003), by Tony Hey and Patrick Walters 49. ^ "Action at a Distance in Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Plato.stanford.edu. 2007-01-26. Retrieved 2012-08-18.  50. ^ "Everett's Relative-State Formulation of Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Plato.stanford.edu. Retrieved 2012-08-18.  51. ^ The Transactional Interpretation of Quantum Mechanics by John Cramer. Reviews of Modern Physics 58, 647-688, July (1986) 52. ^ See, for example, the Feynman Lectures on Physics for some of the technological applications which use quantum mechanics, e.g., transistors (vol III, pp. 14-11 ff), integrated circuits, which are follow-on technology in solid-state physics (vol II, pp. 8-6), and lasers (vol III, pp. 9-13). 53. ^ Introduction to Quantum Mechanics with Applications to Chemistry - Linus Pauling, E. Bright Wilson. Books.google.com. 1985-03-01. ISBN 9780486648712. Retrieved 2012-08-18.  54. ^ Anderson, Mark (2009-01-13). "Is Quantum Mechanics Controlling Your Thoughts? 
| Subatomic Particles". DISCOVER Magazine. Retrieved 2012-08-18.  55. ^ "Quantum mechanics boosts photosynthesis". physicsworld.com. Retrieved 2010-10-23.  56. ^ Davies, P. C. W.; Betts, David S. (1984). Quantum Mechanics, Second edition. Chapman and Hall. p. 79. ISBN 0-7487-4446-0. , Chapter 6, p. 79 57. ^ Baofu, Peter (2007-12-31). The Future of Complexity: Conceiving a Better Way to Understand Order and Chaos. Books.google.com. ISBN 9789812708991. Retrieved 2012-08-18.  58. ^ Derivation of particle in a box, chemistry.tidalswan.com
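As a numerical cross-check of the particle-in-a-box spectrum derived above, the Hamiltonian can be discretized on a grid and diagonalized. This is an illustrative sketch (it assumes NumPy is available; the grid size is arbitrary), not part of the article:

```python
import numpy as np

# Sketch: discretize H = -(1/2) d^2/dx^2 on [0, 1] with psi = 0 at the walls
# (units hbar = m = L = 1) using a 3-point stencil, then diagonalize.
# The lowest eigenvalues should approach E_n = n^2 * pi^2 / 2.
N = 500                               # interior grid points (illustrative)
h = 1.0 / (N + 1)
main = np.full(N, 1.0 / h**2)         # diagonal of -(1/2) psi'' stencil
off = np.full(N - 1, -0.5 / h**2)     # off-diagonal of the stencil
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:3]
exact = np.array([1.0, 4.0, 9.0]) * np.pi**2 / 2
print(E)       # close to [4.935, 19.739, 44.413]
print(exact)
```

The finite-difference eigenvalues agree with E_n = n²π²ħ²/(2mL²) to within the O(h²) discretization error.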
Spin–statistics theorem In quantum mechanics, the spin–statistics theorem relates the spin of a particle to the particle statistics it obeys. The spin of a particle is its intrinsic angular momentum (that is, the contribution to the total angular momentum which is not due to the orbital motion of the particle). All particles[citation needed] have either integer spin or half-integer spin (in units of the reduced Planck constant ħ). The theorem states that: • the wave function of a system of identical integer-spin particles has the same value when the positions of any two particles are swapped. Particles with wave functions symmetric under exchange are called bosons; • the wave function of a system of identical half-integer spin particles changes sign when two particles are swapped. Particles with wave functions antisymmetric under exchange are called fermions. In other words, the spin–statistics theorem states that integer spin particles are bosons, while half-integer spin particles are fermions. The spin–statistics relation was first formulated in 1939 by Markus Fierz,[1] and was rederived in a more systematic way by Wolfgang Pauli.[2] Fierz and Pauli argued by enumerating all free field theories, requiring that there should be quadratic forms for locally commuting[clarification needed] observables including a positive definite energy density. A more conceptual argument was provided by Julian Schwinger in 1950. Richard Feynman gave a demonstration by demanding unitarity for scattering as an external potential is varied,[3] which when translated to field language is a condition on the quadratic operator that couples to the potential.[4] General discussion[edit] Two indistinguishable particles, occupying two separate points, have only one state, not two. 
This means that if we exchange the positions of the particles, we do not get a new state, but rather the same physical state. In fact, one cannot tell which particle is in which position. A physical state is described by a wavefunction, or – more generally – by a vector, which is also called a "state"; if interactions with other particles are ignored, then two different wavefunctions are physically equivalent if their absolute value is equal. So, while the physical state does not change under the exchange of the particles' positions, the wavefunction may get a minus sign. Bosons are particles whose wavefunction is symmetric under such an exchange, so if we swap the particles the wavefunction does not change. Fermions are particles whose wavefunction is antisymmetric, so under such a swap the wavefunction gets a minus sign, meaning that the amplitude for two identical fermions to occupy the same state must be zero. This is the Pauli exclusion principle: two identical fermions cannot occupy the same state. This rule does not hold for bosons. In quantum field theory, a state or a wavefunction is described by field operators operating on some basic state called the vacuum. In order for the operators to project out the symmetric or antisymmetric component of the creating wavefunction, they must have the appropriate commutation law. The operator \int \psi(x,y) \phi(x)\phi(y)\,dx\,dy (with \phi an operator and \psi(x,y) a numerical function) creates a two-particle state with wavefunction \psi(x,y), and depending on the commutation properties of the fields, either only the antisymmetric parts or the symmetric parts matter. Let us assume that x \ne y and the two operators take place at the same time; more generally, they may have spacelike separation, as is explained hereafter. If the fields commute, meaning that \phi(x)\phi(y) = \phi(y)\phi(x) holds, then only the symmetric part of \psi contributes, so that \psi(x,y) = \psi(y,x), and the field will create bosonic particles. 
On the other hand, if the fields anti-commute, meaning that \phi has the property that \phi(x)\phi(y) = -\phi(y)\phi(x), then only the antisymmetric part of \psi contributes, so that \psi(x,y) = -\psi(y,x), and the particles will be fermionic. Naively, neither has anything to do with the spin, which determines the rotation properties of the particles, not the exchange properties. A suggestive bogus argument[edit] Consider the two-field operator product R(\pi)\phi(x) \phi(-x) \, where R is the matrix which rotates the spin polarization of the field by 180 degrees when one does a 180 degree rotation around some particular axis. The components of \phi are not shown in this notation; \phi has many components, and the matrix R mixes them up with one another. In a non-relativistic theory, this product can be interpreted as annihilating two particles at positions x \ and -x \ with polarizations which are rotated by \pi relative to each other. Now rotate this configuration by \pi around the origin. Under this rotation, the two points x \ and -x \ switch places, and the two field polarizations are additionally rotated by \pi. So you get R(2\pi)\phi(-x) R(\pi)\phi(x) \, which for integer spin is equal to \phi(-x) R(\pi)\phi(x) and for half-integer spin is equal to -\phi(-x) R(\pi)\phi(x), since R(2\pi) is +1 for integer spin and -1 for half-integer spin. Both the operators \pm \phi(-x) R(\pi)\phi(x) still annihilate two particles at x and - x. Hence we claim to have shown that, with respect to particle states: R(\pi)\phi(x) \phi(-x) = \begin{cases}\phi(-x) R(\pi)\phi(x) & \text{ for integral spins}, \\ -\phi(-x) R(\pi)\phi(x) & \text{ for half-integral spins}.\end{cases} So exchanging the order of two appropriately polarized operator insertions into the vacuum can be done by a rotation, at the cost of a sign in the half-integer case. This argument by itself does not prove anything like the spin/statistics relation. To see why, consider a nonrelativistic spin-0 field described by a free Schrödinger equation. Such a field can be anticommuting or commuting. 
To see where it fails, consider that a nonrelativistic spin 0 field has no polarization, so that the product above is simply: \phi(-x) \phi(x)\, In the nonrelativistic theory, this product annihilates two particles at x and −x, and has zero expectation value in any state. In order to have a nonzero matrix element, this operator product must be between states with two more particles on the right than on the left: \langle 0| \phi(-x) \phi(x) |\psi\rangle \, Performing the rotation, all that you learn is that rotating the 2-particle state |\psi\rangle gives the same sign as changing the operator order. This is no information at all, so this argument does not prove anything. Why the bogus argument fails[edit] To prove spin/statistics, it is necessary to use relativity (though there are a few nice methods[5][6] which do not use field theoretic tools). In relativity, there are no local fields which are pure creation operators or annihilation operators. Every local field both creates particles and annihilates the corresponding antiparticle. This means that in relativity, the product of the free real spin-0 field has a nonzero vacuum expectation value, because in addition to creating particles and annihilating particles, it also includes a part which creates and then annihilates a particle: G(x)= \langle 0 | \phi(-x) \phi(x) | 0\rangle \, And now the heuristic argument can be used to see that G(x) is equal to G(−x), which tells you that the fields cannot be anti-commuting. The essential ingredient in proving the spin/statistics relation is relativity, that the physical laws do not change under Lorentz transformations. The field operators transform under Lorentz transformations according to the spin of the particle that they create, by definition. Additionally, the assumption (known as microcausality) that spacelike separated fields either commute or anticommute can be made only for relativistic theories with a time direction. 
Otherwise, the notion of being spacelike is meaningless. However, the proof involves looking at a Euclidean version of spacetime, in which the time direction is treated as a spatial one, as will be now explained. Lorentz transformations include 3-dimensional rotations as well as boosts. A boost transfers to a frame of reference with a different velocity, and is mathematically like a rotation into time. By analytic continuation of the correlation functions of a quantum field theory, the time coordinate may become imaginary, and then boosts become rotations. The new "spacetime" has only spatial directions, and is termed Euclidean. A π rotation in the Euclidean x–t plane can be used to rotate vacuum expectation values of the field product of the previous section. The time rotation turns the argument of the previous section into the spin/statistics theorem. The proof requires the following assumptions: 1. The theory has a Lorentz invariant Lagrangian. 2. The vacuum is Lorentz invariant. 3. The particle is a localized excitation. Microscopically, it is not attached to a string or domain wall. 4. The particle is propagating, meaning that it has a finite, not infinite, mass. 5. The particle is a real excitation, meaning that states containing this particle have a positive definite norm. These assumptions are for the most part necessary, as the following examples show: 1. The spinless anticommuting field shows that spinless fermions are nonrelativistically consistent. Likewise, the theory of a spinor commuting field shows that spinning bosons are too. 2. This assumption may be weakened. 3. In 2+1 dimensions, sources for the Chern–Simons theory can have exotic spins, despite the fact that the three dimensional rotation group has only integer and half-integer spin representations. 4. An ultralocal field can have either statistics independently of its spin. 
This is related to Lorentz invariance, since an infinitely massive particle is always nonrelativistic, and the spin decouples from the dynamics. Although colored quarks are attached to a QCD string and have infinite mass, the spin-statistics relation for quarks can be proved in the short distance limit. 5. Gauge ghosts are spinless fermions, but they include states of negative norm. Assumptions 1 and 2 imply that the theory is described by a path integral, and assumption 3 implies that there is a local field which creates the particle. The rotation plane includes time, and a rotation in a plane involving time in the Euclidean theory defines a CPT transformation in the Minkowski theory. If the theory is described by a path integral, a CPT transformation takes states to their conjugates, so that the correlation function \langle 0 | R\phi(x) \phi(-x)|0\rangle must be positive definite at x=0, by assumption 5 (the particle states have positive norm). The assumption of finite mass implies that this correlation function is nonzero for x spacelike. Lorentz invariance now allows the fields to be rotated inside the correlation function in the manner of the argument of the previous section: \langle 0 | RR\phi(x) R\phi(-x) |0\rangle = \pm \langle 0| \phi(-x) R\phi(x)|0\rangle where the sign depends on the spin, as before. The CPT invariance, or Euclidean rotational invariance, of the correlation function guarantees that this is equal to G(x). So \langle 0 | ( R\phi(x)\phi(y) - \phi(y)R\phi(x) )|0\rangle = 0 \, for integer spin fields and \langle 0 | R\phi(x)\phi(y) + \phi(y)R\phi(x)|0\rangle = 0 \, for half-integer spin fields. Since the operators are spacelike separated, a different order can only create states that differ by a phase. The argument fixes the phase to be −1 or 1 according to the spin. 
Since it is possible to rotate the space-like separated polarizations independently by local perturbations, the phase should not depend on the polarization in appropriately chosen field coordinates. This argument is due to Julian Schwinger.[7] Spin statistics theorem implies that half-integer spin particles are subject to the Pauli exclusion principle, while integer-spin particles are not. Only one fermion can occupy a given quantum state at any time, while the number of bosons that can occupy a quantum state is not restricted. The basic building blocks of matter such as protons, neutrons, and electrons are fermions. Particles such as the photon, which mediate forces between matter particles, are bosons. There are a couple of interesting phenomena arising from the two types of statistics. The Bose–Einstein distribution which describes bosons leads to Bose–Einstein condensation. Below a certain temperature, most of the particles in a bosonic system will occupy the ground state (the state of lowest energy). Unusual properties such as superfluidity can result. The Fermi–Dirac distribution describing fermions also leads to interesting properties. Since only one fermion can occupy a given quantum state, the lowest single-particle energy level for spin-1/2 fermions contains at most two particles, with the spins of the particles oppositely aligned. Thus, even at absolute zero, the system still has a significant amount of energy. As a result, a fermionic system exerts an outward pressure. Even at non-zero temperatures, such a pressure can exist. This degeneracy pressure is responsible for keeping certain massive stars from collapsing due to gravity. See white dwarf, neutron star, and black hole. Ghost fields do not obey the spin-statistics relation. See Klein transformation on how to patch up a loophole in the theorem. Relation to representation theory of the Lorentz group[edit] The Lorentz group has no non-trivial unitary representations of finite dimension. 
Thus it seems impossible to construct a Hilbert space in which all states have finite, non-zero spin and positive, Lorentz-invariant norm. This problem is overcome in different ways depending on particle spin-statistics. For a state of integer spin the negative norm states (known as "unphysical polarization") are set to zero, which makes the use of gauge symmetry necessary. For a state of half-integer spin the argument can be circumvented by having fermionic statistics.[8] • Markus Fierz: Über die relativistische Theorie kräftefreier Teilchen mit beliebigem Spin. Helv. Phys. Acta 12, 3–17 (1939) • Wolfgang Pauli: The connection between spin and statistics. Phys. Rev. 58, 716–722 (1940) • Ray F. Streater and Arthur S. Wightman: PCT, Spin & Statistics, and All That. 5th edition: Princeton University Press, Princeton (2000) • Ian Duck and Ennackel Chandy George Sudarshan: Pauli and the Spin-Statistics Theorem. World Scientific, Singapore (1997) • Arthur S Wightman: Pauli and the Spin-Statistics Theorem (book review). Am. J. Phys. 67 (8), 742–746 (1999) • Arthur Jabs: Connecting spin and statistics in quantum mechanics. http://arXiv.org/abs/0810.2399 (Found. Phys. 40, 776–792, 793–794 (2010)) 1. ^ M. Fierz "Über die relativistische Theorie kräftefreier Teilchen mit beliebigem Spin" Helvetica Physica Acta 12:3–37, 1939 2. ^ W. Pauli "The Connection Between Spin and Statistics", Phys. Rev. 58, 716–722 (1940), pdf 3. ^ R.P. Feynman "Quantum Electrodynamics", Basic Books, 1961 4. ^ W. Pauli "On the Connection Between Spin and Statistics" Progress of Theoretical Physics vol 5 no. 4, 1950 5. ^ Jabs, Arthur (5). "Connecting Spin and Statistics in Quantum Mechanics". Foundations of Physics. Foundations of Physics 40 (7): 776–792. arXiv:0810.2399. Bibcode:2010FoPh...40..776J. doi:10.1007/s10701-009-9351-4. Retrieved May 29, 2011.  6. ^ Horowitz, Joshua (14). From Path Integrals to Fractional Quantum Statistics.  7. ^ The Quantum Theory of Fields I, Schwinger 1950. 
The only difference between the argument in this paper and the argument presented here is that the operator "R" in Schwinger's paper is a pure time reversal, instead of a CPT operation, but this is the same for CP invariant free field theories which were all that Schwinger considered. 8. ^ Peskin, Michael E.; Schroeder, Daniel V. (1995), An Introduction to Quantum Field Theory, Addison-Wesley, ISBN 0-201-50397-2
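The exchange symmetry at the heart of the theorem can be illustrated with a short numerical sketch. The single-particle states phi_n below are hypothetical (sine states of a unit box, not from the article); the point is only that the antisymmetric combination vanishes when both particles occupy the same state:

```python
import math

def phi(n, x):
    # hypothetical single-particle states: sin(n*pi*x) on a unit box
    return math.sin(n * math.pi * x)

def boson(n1, n2, x, y):
    # symmetric (bosonic) two-particle amplitude
    return phi(n1, x) * phi(n2, y) + phi(n1, y) * phi(n2, x)

def fermion(n1, n2, x, y):
    # antisymmetric (fermionic) two-particle amplitude
    return phi(n1, x) * phi(n2, y) - phi(n1, y) * phi(n2, x)

x, y = 0.2, 0.7
print(boson(1, 2, x, y) - boson(1, 2, y, x))      # 0: symmetric under swap
print(fermion(1, 2, x, y) + fermion(1, 2, y, x))  # 0: antisymmetric under swap
print(fermion(2, 2, x, y))                        # 0: Pauli exclusion
```

Putting both fermions in the same state (n1 = n2) makes the amplitude identically zero, which is exactly the Pauli exclusion principle stated above.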
Suppose we have a time-varying potential $$\left( -\frac{1}{2m}\nabla^2+ V(\vec{r},t)\right)\psi = i\partial_t \psi$$ then I want to know why the general solution is written as $\psi = \displaystyle\sum_n a_n(t)\phi_n(\vec{r})e^{-iE_n t}$. In particular, why do we get a time-dependent coefficient $a_n(t)$? This confuses me because when we have a time-independent potential, we use separation of variables and the usual method to get the general solution $$\psi = \displaystyle\sum_n a_n\phi_n(\vec{r})e^{-iE_n t}$$ However, the time-varying counterpart cannot be reduced this way by separation of variables. EDIT: I could not find a free preview of the book I am using; however, the lectures here, for example, use the same solution. Are you sure there is $\phi_n(t)\exp(-iE_nt)$ and not just $\phi_n\exp(-iE_nt)$? –  Maksim Zholudev Feb 22 '12 at 7:22 @MaksimZholudev That was a typo. Thanks for pointing it out. –  yayu Feb 22 '12 at 10:24 3 Answers The basis functions $\phi_n(\vec{r})$ and the energies $E_n$ are the solutions of the stationary Schrödinger equation: $$ \left( -\frac{1}{2m}\nabla^2+ V_0(\vec{r})\right)\phi_n(\vec{r}) = E_n \phi_n(\vec{r}) $$ If the Hamiltonian depends on time, one cannot even write this equation. But the set of functions $\phi_n(\vec{r})$ is a full basis in the Hilbert space, so one can always expand any function (from this space) over this basis. The snapshot of the wavefunction $\psi(\vec{r},t)$ at the moment $t$ is just a function of coordinates and an element of this Hilbert space. 
So we can expand it: $$ \psi(\vec{r},t) = \sum_n b_n(t) \phi_n(\vec{r}) $$ If the Hamiltonian does not depend on time, the expansion coefficients can be easily derived from the general Schrödinger equation (the one with the time derivative): $$ b_n(t) = a^{(0)}_n e^{-iE_nt} $$ In the case of a time-dependent potential these coefficients are usually considered as unknown functions of time: $$ b_n(t) = a_n(t) e^{-iE_nt} $$ Perturbation theory is used to find approximations for these functions. 1) What OP is looking at is known as time-dependent perturbation theory. Here the energies $E_n$ are eigenvalues of the unperturbed time-independent Hamiltonian $H^{(0)}$. The full Hamiltonian is $$ H ~=~ H^{(0)} + V(t). $$ 2) Imagine for a second that the potential $V$ is time-independent and commutes with $H^{(0)}$. Let $v_n$ be the eigenvalues of $V$. In the time-independent case, the wavefunction solution is then of the form $$\psi(t,\vec{r}) ~=~ \displaystyle\sum_n c_n\phi_n(\vec{r})e^{-i(E_n+v_n) t} ~=~ \displaystyle\sum_n \left(c_n e^{-iv_nt}\right) \phi_n(\vec{r})e^{-iE_n t}.$$ 3) For general time-dependent perturbations $V(t)$, it is hence natural to expect that the coefficients $a_n(t)$ in the eigenfunction expansion $$\psi(t,\vec{r}) ~=~ \displaystyle\sum_n a_n(t)\phi_n(\vec{r})e^{-iE_n t} $$ could depend on time $t$, cf. OP's question (v2). Here $\phi_n(\vec{r})$ denote eigenfunctions of the unperturbed problem, $$ H^{(0)}\phi_n(\vec{r})~=~E_n\phi_n(\vec{r}).$$ It is simply a matter of definition. If the time-dependent coefficients can be reasonably found from the Schrödinger equation, then it is a solution for your time-dependent wave function. One is introducing new variables $a_n(t)$ and determining them from the exact equation. It can always be done.
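To make the answers above concrete, here is a small numerical sketch (a toy two-level example of ours, not taken from any answer). Inserting $\psi = \sum_n a_n(t) e^{-iE_n t}\phi_n$ into the Schrödinger equation gives coupled ODEs $i\,\dot a_m = \sum_n V_{mn}(t)\, e^{i(E_m-E_n)t}\, a_n$, which we integrate with classic RK4 and check that the $a_n(t)$ evolve unitarily:

```python
import math, cmath

E = [0.0, 1.0]                       # unperturbed energies E_1, E_2 (hbar = 1)

def V(t):
    # hypothetical drive: off-diagonal matrix element V_12 = V_21 = 0.1*cos(t)
    v = 0.1 * math.cos(t)
    return [[0.0, v], [v, 0.0]]

def rhs(t, a):
    # i da_m/dt = sum_n V_mn(t) exp(i (E_m - E_n) t) a_n
    v = V(t)
    return [-1j * sum(v[m][n] * cmath.exp(1j * (E[m] - E[n]) * t) * a[n]
                      for n in range(2))
            for m in range(2)]

a = [1.0 + 0j, 0.0 + 0j]             # start in the lower state: a_1(0) = 1
t, dt = 0.0, 0.01
for _ in range(2000):                # classic RK4 up to t = 20
    k1 = rhs(t, a)
    k2 = rhs(t + dt / 2, [a[i] + dt / 2 * k1[i] for i in range(2)])
    k3 = rhs(t + dt / 2, [a[i] + dt / 2 * k2[i] for i in range(2)])
    k4 = rhs(t + dt, [a[i] + dt * k3[i] for i in range(2)])
    a = [a[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
         for i in range(2)]
    t += dt

norm = abs(a[0]) ** 2 + abs(a[1]) ** 2
print(norm)                          # stays ~1: probability is conserved
```

The coefficients $a_n(t)$ genuinely change with time (the drive is resonant, so $|a_2|$ grows), while $|a_1|^2+|a_2|^2$ stays 1, exactly as the expansion requires.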
Legendre polynomials For Legendre's Diophantine equation, see Legendre's equation. Associated Legendre polynomials are the most general solution to Legendre's Equation and Legendre polynomials are solutions that are azimuthally symmetric. In mathematics, Legendre functions are solutions to Legendre's differential equation: {d \over dx} \left[ (1-x^2) {d \over dx} P_n(x) \right] + n(n+1)P_n(x) = 0. They are named after Adrien-Marie Legendre. This ordinary differential equation is frequently encountered in physics and other technical fields. In particular, it occurs when solving Laplace's equation (and related partial differential equations) in spherical coordinates. The Legendre differential equation may be solved using the standard power series method. The equation has regular singular points at x = ±1 so, in general, a series solution about the origin will only converge for |x| < 1. When n is an integer, the solution Pn(x) that is regular at x = 1 is also regular at x = −1, and the series for this solution terminates (i.e. it is a polynomial). These solutions for n = 0, 1, 2, ... (with the normalization Pn(1) = 1) form a polynomial sequence of orthogonal polynomials called the Legendre polynomials. Each Legendre polynomial Pn(x) is an nth-degree polynomial. It may be expressed using Rodrigues' formula: P_n(x) = {1 \over 2^n n!} {d^n \over dx^n } \left[ (x^2 -1)^n \right]. That these polynomials satisfy the Legendre differential equation (1) follows by differentiating n + 1 times both sides of the identity (x^2-1)\frac{d}{dx}(x^2-1)^n = 2nx(x^2-1)^n and employing the general Leibniz rule for repeated differentiation.[1] The Pn can also be defined as the coefficients in a Taylor series expansion:[2] \frac{1}{\sqrt{1-2xt+t^2}} = \sum_{n=0}^\infty P_n(x) t^n. In physics, this ordinary generating function is the basis for multipole expansions. 
Recursive definition[edit] Expanding the Taylor series in Equation (2) for the first two terms gives P_0(x) = 1,\quad P_1(x) = x for the first two Legendre polynomials. To obtain further terms without resorting to direct expansion of the Taylor series, equation (2) is differentiated with respect to t on both sides and rearranged to obtain \frac{x-t}{\sqrt{1-2xt+t^2}} = (1-2xt+t^2) \sum_{n=1}^\infty n P_n(x) t^{n-1}. Replacing the quotient of the square root with its definition in (2), and equating the coefficients of powers of t in the resulting expansion gives Bonnet’s recursion formula (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x).\, This relation, along with the first two polynomials P0 and P1, allows the Legendre polynomials to be generated recursively. Explicit representations include \begin{align}P_n(x)&= \frac 1 {2^n} \sum_{k=0}^n {n\choose k}^2 (x-1)^{n-k}(x+1)^k \\ &=\sum_{k=0}^n {n\choose k} {-n-1\choose k} \left( \frac{1-x}{2} \right)^k \\&= 2^n\cdot \sum_{k=0}^n x^k {n \choose k}{\frac{n+k-1}2\choose n},\end{align} where the latter, which is immediate from the recursion formula, expresses the Legendre polynomials by simple monomials and involves the multiplicative formula of the binomial coefficient. 
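Bonnet's recursion formula can be checked directly in code. The helper below is an illustrative sketch (the function name is ours, not from the article):

```python
def legendre_values(N, x):
    # Generate P_0(x)..P_N(x) via Bonnet's recursion:
    # (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}, seeded with P_0 = 1, P_1 = x.
    vals = [1.0, x]
    for n in range(1, N):
        vals.append(((2 * n + 1) * x * vals[n] - n * vals[n - 1]) / (n + 1))
    return vals[:N + 1]

# Spot-check against the closed forms P_2 = (3x^2 - 1)/2, P_3 = (5x^3 - 3x)/2
x = 0.5
P = legendre_values(3, x)
print(P[2], (3 * x**2 - 1) / 2)   # both -0.125
print(P[3], (5 * x**3 - 3 * x) / 2)
```

The recursion also reproduces the normalization P_n(1) = 1 exactly, since at x = 1 each step gives ((2n+1) − n)/(n+1) = 1.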
The first few Legendre polynomials are:

n    P_n(x)
0    1
1    x
2    \frac12 (3x^2-1)
3    \frac12 (5x^3-3x)
4    \frac18 (35x^4-30x^2+3)
5    \frac18 (63x^5-70x^3+15x)
6    \frac1{16} (231x^6-315x^4+105x^2-5)
7    \frac1{16} (429x^7-693x^5+315x^3-35x)
8    \frac1{128} (6435x^8-12012x^6+6930x^4-1260x^2+35)
9    \frac1{128} (12155x^9-25740x^7+18018x^5-4620x^3+315x)
10   \frac1{256} (46189x^{10}-109395x^8+90090x^6-30030x^4+3465x^2-63)

The graphs of these polynomials (up to n = 5) are shown below: [figure omitted] An important property of the Legendre polynomials is that they are orthogonal with respect to the L2 inner product on the interval −1 ≤ x ≤ 1: \int_{-1}^{1} P_m(x) P_n(x)\,dx = {2 \over {2n + 1}} \delta_{mn} (where δmn denotes the Kronecker delta, equal to 1 if m = n and to 0 otherwise). In fact, an alternative derivation of the Legendre polynomials is by carrying out the Gram–Schmidt process on the polynomials {1, x, x^2, ...} with respect to this inner product. The reason for this orthogonality property is that the Legendre differential equation can be viewed as a Sturm–Liouville problem, where the Legendre polynomials are eigenfunctions of a Hermitian differential operator: {d \over dx} \left[ (1-x^2) {d \over dx} P(x) \right] = -\lambda P(x), where the eigenvalue λ corresponds to n(n + 1). 
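The orthogonality relation can be verified numerically. The sketch below (our own helper names, plain trapezoid quadrature) compares the integral of P_m P_n over [−1, 1] with 2/(2n+1) δ_mn:

```python
def legendre(n, x):
    # P_n(x) via Bonnet's recursion (illustrative helper)
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def inner(m, n, steps=20000):
    # trapezoid-rule approximation of the integral of P_m * P_n on [-1, 1]
    h = 2.0 / steps
    total = 0.0
    for i in range(steps + 1):
        x = -1.0 + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * legendre(m, x) * legendre(n, x)
    return total * h

print(inner(2, 3))          # ~0: distinct polynomials are orthogonal
print(inner(3, 3), 2 / 7)   # both ~0.2857: the squared norm is 2/(2n+1)
```

For m ≠ n the integral vanishes to quadrature accuracy, and for m = n it reproduces 2/(2n+1), consistent with the formula above.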
Applications of Legendre polynomials in physics[edit] The Legendre polynomials were first introduced in 1782 by Adrien-Marie Legendre[3] as the coefficients in the expansion of the Newtonian potential \frac{1}{\left| \mathbf{x}-\mathbf{x}^\prime \right|} = \frac{1}{\sqrt{r^2+r^{\prime 2}-2rr'\cos\gamma}} = \sum_{\ell=0}^{\infty} \frac{r^{\prime \ell}}{r^{\ell+1}} P_{\ell}(\cos \gamma) where r and r' are the lengths of the vectors \mathbf{x} and \mathbf{x}^\prime respectively and \gamma is the angle between those two vectors. The series converges when r>r'. The expression gives the gravitational potential associated to a point mass or the Coulomb potential associated to a point charge. The expansion using Legendre polynomials might be useful, for instance, when integrating this expression over a continuous mass or charge distribution. Legendre polynomials occur in the solution of Laplace's equation of the static potential, \nabla^2 \Phi(\mathbf{x})=0, in a charge-free region of space, using the method of separation of variables, where the boundary conditions have axial symmetry (no dependence on an azimuthal angle). Where \widehat{\mathbf{z}} is the axis of symmetry and \theta is the angle between the position of the observer and the \widehat{\mathbf{z}} axis (the zenith angle), the solution for the potential will be \Phi(r,\theta)=\sum_{\ell=0}^{\infty} \left[ A_\ell r^\ell + B_\ell r^{-(\ell+1)} \right] P_\ell(\cos\theta). A_\ell and B_\ell are to be determined according to the boundary condition of each problem.[4] They also appear when solving the Schrödinger equation in three dimensions for a central force. Legendre polynomials in multipole expansions [Figure 2: a point charge on the z-axis, seen from an observation point P] Legendre polynomials are also useful in expanding functions of the form (this is the same as before, written a little differently): \frac{1}{\sqrt{1 + \eta^{2} - 2\eta x}} = \sum_{k=0}^{\infty} \eta^{k} P_{k}(x) which arise naturally in multipole expansions. 
The left-hand side of the equation is the generating function for the Legendre polynomials.

As an example, the electric potential \Phi(r, \theta) (in spherical coordinates) due to a point charge located on the z-axis at z = a (Figure 2) varies like

\Phi (r, \theta ) \propto \frac{1}{R} = \frac{1}{\sqrt{r^{2} + a^{2} - 2ar \cos\theta}}.

If the radius r of the observation point P is greater than a, the potential may be expanded in the Legendre polynomials

\Phi(r, \theta) \propto \frac{1}{r} \sum_{k=0}^{\infty} \left( \frac{a}{r} \right)^{k} P_{k}(\cos \theta)

where we have defined η = a/r < 1 and x = cos θ. This expansion is used to develop the normal multipole expansion. Conversely, if the radius r of the observation point P is smaller than a, the potential may still be expanded in the Legendre polynomials as above, but with a and r exchanged. This expansion is the basis of the interior multipole expansion.

Legendre polynomials in signal processing

In addition to the above applications in theoretical physics, Legendre polynomials play a role in digital signal processing. The finite Legendre transform makes use of the orthogonality property and allows the decomposition of any function defined on a finite interval and scaled to [−1, 1] as a spectrum of Legendre polynomials.

Additional properties of Legendre polynomials

Legendre polynomials are symmetric or antisymmetric, that is

P_n(-x) = (-1)^n P_n(x). \,[2]

Since the differential equation and the orthogonality property are independent of scaling, the Legendre polynomials' definitions are "standardized" (sometimes called "normalization", but note that the actual norm is not unity) by being scaled so that

P_n(1) = 1. \,

The derivative at the end point is given by

P_n'(1) = \frac{n(n+1)}{2}. \,

As discussed above, the Legendre polynomials obey the three-term recurrence relation known as Bonnet’s recursion formula,

(n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x),

together with

{x^2-1 \over n} {d \over dx} P_n(x) = xP_n(x) - P_{n-1}(x).
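The point-charge expansion above (with η = a/r < 1) converges geometrically in a/r, which a few lines of Python make concrete. A sketch, with all numbers illustrative and helper names my own:

```python
import math

def legendre(n, x):
    """P_n(x) by the standard three-term recurrence."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def potential_exact(r, a, theta):
    """Closed form 1/R for a unit point charge at z = a, observer at (r, theta)."""
    return 1.0 / math.sqrt(r ** 2 + a ** 2 - 2 * a * r * math.cos(theta))

def potential_series(r, a, theta, terms):
    """Truncated Legendre expansion (1/r) sum_k (a/r)^k P_k(cos theta), valid for r > a."""
    x = math.cos(theta)
    return sum((a / r) ** k * legendre(k, x) for k in range(terms)) / r

r, a, theta = 2.0, 0.5, 0.7
exact = potential_exact(r, a, theta)
print(abs(exact - potential_series(r, a, theta, 4)))
print(abs(exact - potential_series(r, a, theta, 12)))  # much smaller: error ~ (a/r)^terms
```

Since a/r = 0.25 here, each extra term cuts the error by roughly a factor of four.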
Useful for the integration of Legendre polynomials is

(2n+1) P_n(x) = {d \over dx} \left[ P_{n+1}(x) - P_{n-1}(x) \right].

From the above one can see also that

{d \over dx} P_{n+1}(x) = (2n+1) P_n(x) + (2(n-2)+1) P_{n-2}(x) + (2(n-4)+1) P_{n-4}(x) + \ldots

or equivalently

{d \over dx} P_{n+1}(x) = {2 P_n(x) \over \| P_n(x) \|^2} + {2 P_{n-2}(x) \over \| P_{n-2}(x) \|^2}+\ldots

where \| P_n(x) \| is the norm over the interval −1 ≤ x ≤ 1

\| P_n(x) \| = \sqrt{\int _{- 1}^{1}(P_n(x))^2 \,dx} = \sqrt{\frac{2}{2 n + 1}}.

From Bonnet’s recursion formula one obtains by induction the explicit representation

P_n(x) = \sum_{k=0}^n (-1)^k \binom{n}{k}^2 \left( \frac{1+x}{2} \right)^{n-k} \left( \frac{1-x}{2} \right)^k.

The Askey–Gasper inequality for Legendre polynomials reads

\sum_{j=0}^n P_j(x)\ge 0\qquad (x\ge -1).

A sum of Legendre polynomials is related to the Dirac delta function for -1\leq y\leq 1 and -1\leq x\leq 1:

\delta(y-x) = \frac12\sum_{\ell=0}^{\infty} (2\ell + 1) P_\ell(y)P_\ell(x)\,.

The Legendre polynomials of a scalar product of unit vectors can be expanded with spherical harmonics using

P_{\ell}({r}\cdot {r'})=\frac{4\pi}{2\ell + 1}\sum_{m=-\ell}^{\ell} Y_{\ell m}(\theta,\phi)Y_{\ell m}^*(\theta',\phi')\,,

where the unit vectors r and r' have spherical coordinates (\theta,\phi) and (\theta',\phi'), respectively.

Asymptotically for \ell\rightarrow \infty, for arguments less than unity,

P_{\ell}(\cos \theta) = J_0(\ell\theta) + \mathcal{O}(\ell^{-1}) = \frac{2}{\sqrt{2\pi \ell \sin \theta}}\cos\left[\left(\ell + \frac{1}{2}\right)\theta - \frac{\pi}{4}\right] + \mathcal{O}(\ell^{-1}),

and for arguments greater than unity

P_{\ell}\left(\frac{1}{\sqrt{1-e^2}}\right) = I_0(\ell e) + \mathcal{O}(\ell^{-1}) = \frac{1}{\sqrt{2\pi \ell e}} \frac{(1+e)^{(\ell+1)/2}}{(1-e)^{\ell/2}} + \mathcal{O}(\ell^{-1})\,,

where J_0 and I_0 are Bessel functions.

Shifted Legendre polynomials

The shifted Legendre polynomials are defined as

\tilde{P_n}(x) = P_n(2x-1).
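The explicit representation obtained from Bonnet's recursion can be sanity-checked against the recurrence itself. A short Python sketch (helper names are illustrative, not from any library):

```python
import math

def legendre_recurrence(n, x):
    """P_n(x) via (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def legendre_explicit(n, x):
    """P_n(x) = sum_k (-1)^k C(n,k)^2 ((1+x)/2)^(n-k) ((1-x)/2)^k."""
    return sum((-1) ** k * math.comb(n, k) ** 2
               * ((1 + x) / 2) ** (n - k) * ((1 - x) / 2) ** k
               for k in range(n + 1))

# The two evaluations agree (to rounding) for a grid of n and x.
for n in range(9):
    for x in (-0.9, -0.3, 0.0, 0.5, 1.0):
        assert abs(legendre_recurrence(n, x) - legendre_explicit(n, x)) < 1e-10
print("explicit representation matches the recurrence")
```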
Here the "shifting" function x\mapsto 2x-1 (in fact, an affine transformation) is chosen such that it bijectively maps the interval [0, 1] to the interval [−1, 1], implying that the polynomials \tilde{P_n}(x) are orthogonal on [0, 1]:

\int_{0}^{1} \tilde{P_m}(x) \tilde{P_n}(x)\,dx = {1 \over {2n + 1}} \delta_{mn}.

An explicit expression for the shifted Legendre polynomials is given by

\tilde{P_n}(x) = (-1)^n \sum_{k=0}^n {n \choose k} {n+k \choose k} (-x)^k.

The analogue of Rodrigues' formula for the shifted Legendre polynomials is

\tilde{P_n}(x) = \frac{1}{n!} {d^n \over dx^n } \left[ (x^2 -x)^n \right].\,

The first few shifted Legendre polynomials are:

n = 0: 1
n = 1: 2x-1
n = 2: 6x^2-6x+1
n = 3: 20x^3-30x^2+12x-1
n = 4: 70x^4-140x^3+90x^2-20x+1

Legendre functions

As well as polynomial solutions, the Legendre equation has non-polynomial solutions represented by infinite series. These are the Legendre functions of the second kind, denoted by Q_n(x). The differential equation

{d \over dx} \left[ (1-x^2) {d \over dx} f(x) \right] + n(n+1)f(x) = 0

has the general solution

f(x) = A\,P_n(x) + B\,Q_n(x),

where A and B are constants.

Legendre functions of fractional order

Main article: Legendre function

Legendre functions of fractional order exist and follow from insertion of fractional derivatives as defined by fractional calculus and non-integer factorials (defined by the gamma function) into the Rodrigues' formula. The resulting functions continue to satisfy the Legendre differential equation throughout (−1, 1), but are no longer regular at the endpoints. The fractional-order Legendre function P_n agrees with the associated Legendre polynomial P_n^0.

See also

1. ^ Courant & Hilbert 1953, II, §8
2. ^ a b George B. Arfken, Hans J. Weber (2005), Mathematical Methods for Physicists, Elsevier Academic Press, p. 743, ISBN 0-12-059876-0
3. ^ M.
Le Gendre, "Recherches sur l'attraction des sphéroïdes homogènes," Mémoires de Mathématiques et de Physique, présentés à l'Académie Royale des Sciences, par divers savans, et lus dans ses Assemblées, Tome X, pp. 411–435 (Paris, 1785). [Note: Legendre submitted his findings to the Academy in 1782, but they were published in 1785.] Available on-line (in French) at: http://edocs.ub.uni-frankfurt.de/volltexte/2007/3757/pdf/A009566090.pdf
4. ^ Jackson, J.D. Classical Electrodynamics, 3rd edition, Wiley & Sons, 1999, p. 103.

External links
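Returning to the shifted Legendre polynomials defined above: the explicit formula and the rescaled orthogonality relation on [0, 1] are easy to verify numerically. A minimal sketch (names are illustrative):

```python
import math

def shifted_legendre(n, x):
    """P~_n(x) = (-1)^n sum_k C(n,k) C(n+k,k) (-x)^k, orthogonal on [0, 1]."""
    return (-1) ** n * sum(math.comb(n, k) * math.comb(n + k, k) * (-x) ** k
                           for k in range(n + 1))

def inner01(m, n, steps=20000):
    """Midpoint-rule approximation of the inner product of P~_m and P~_n on [0, 1]."""
    h = 1.0 / steps
    return sum(shifted_legendre(m, (i + 0.5) * h)
               * shifted_legendre(n, (i + 0.5) * h) * h
               for i in range(steps))

# Matches the table: P~_2(0.5) = 6(0.25) - 6(0.5) + 1 = -0.5.
assert abs(shifted_legendre(2, 0.5) + 0.5) < 1e-12
print(inner01(1, 2))  # ~ 0
print(inner01(2, 2))  # ~ 1/5, i.e. 1/(2n+1) with n = 2
```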
Multiverses and Blackberries

Notes of a Fringe-Watcher

Martin Gardner

Volume 25.5, September / October 2001

There be nothing so absurd but that some philosopher [or cosmologist? -M.G.] has said it.

The American philosopher Charles Sanders Peirce somewhere remarked that unfortunately universes are not as plentiful as blackberries. One of the most astonishing of recent trends in science is that many top physicists and cosmologists now defend the wild notion that universes are not only as common as blackberries, but even more common. Indeed, there may be an infinity of them!

It all began seriously with an approach to quantum mechanics (QM) called "The Many Worlds Interpretation" (MWI). In this view, widely defended by such eminent physicists as Murray Gell-Mann, Stephen Hawking, and Steven Weinberg, at every instant when a quantum measurement is made that has more than one possible outcome, the number specified by what is called the Schrödinger equation, the universe splits into two or more universes, each corresponding to a possible future. Everything that can happen at each juncture happens. Time is no longer linear. It is a rapidly branching tree. Obviously the number of separate universes increases at a prodigious rate.

If all these countless billions of parallel universes are taken as no more than abstract mathematical entities (worlds that could have formed but didn't), then the only "real" world is the one we are in. In this interpretation of the MWI the theory becomes little more than a new and whimsical language for talking about QM. It has the same mathematical formalism and makes the same predictions. This is how Hawking and many others who favor the MWI interpret it. They prefer it because they believe it is a language that simplifies QM talk, and also sidesteps many of its paradoxes. There is, however, a more bizarre way to interpret the MWI.
Those holding what I call the realist view actually believe that the endlessly sprouting new universes are "out there," in some sort of vast super-space-time, just as "real" as the universe we know! Of course, every time a split occurs, each of us becomes one or more close duplicates, each traveling into a new universe. We have no awareness of this happening because the many universes are not causally connected. We simply travel along the endless branches of time’s monstrous tree in a series of universes, never aware that billions upon billions of our replicas are springing into existence somewhere out there. "When you come to a fork in the road," Yogi Berra once said, "take it."

The MWI was first put forth by Hugh Everett III in a Princeton doctoral thesis written for John Wheeler in 1956. It was soon taken up and elaborated by Bryce DeWitt. For several years John Wheeler defended his student’s theory, but finally decided it was "on the wrong track," no more than a bizarre language for QM and one that carried "too much metaphysical baggage." However, recent polls show that about half of all QM experts now favor the theory, though it is seldom clear whether they think the other worlds are physically real or just abstractions such as numbers and triangles. Apparently both Everett and DeWitt took the realist approach.

Roger Penrose is among many famous physicists who find the MWI appalling. The late Irish physicist John S. Bell called the MWI "grotesque" and just plain "silly." Most working physicists simply ignore the theory as nonsense. In an article on "Quantum Mechanics and Reality" (in Physics Today, September 1970), DeWitt wrote with vast understatement about his first reaction to Everett’s thesis: "I still recall vividly the shock I experienced on first encountering the multiworld concept.
The idea of 10^100+ slightly imperfect copies of oneself all constantly splitting into further copies, which ultimately become unrecognizable, is not easy to reconcile with common sense. This is schizophrenia with a vengeance!" In the MWI, most of its defenders agree, there is no room for free will. The multiverse, the universe of all universes, develops strictly along determinist lines, always obeying the deterministically evolving Schrödinger equation. This equation is a monstrous wave function which never collapses unless it is observed and collapsed by an intelligence outside the multiverse, namely God. In recent years David Deutsch, a quantum physicist at Oxford University, has become the top booster of the MWI in its realist form. He believes that quantum computers, using atoms or photons and operating in parallel with computers in nearby parallel worlds, can be trillions of times faster than today’s computers. He is convinced that many famous QM paradoxes, such as the double slit experiment and a similar one involving two half-silvered mirrors, are best explained by assuming an interaction with twin particles in a parallel world almost identical with our own. For example, in the double slit experiment, when both slits are open, our particle goes through one slit while its twin from the other world goes through the other slit to produce the interference pattern on the screen. Deutsch calls our particle the "tangible" one, and the particle coming from the other world a "shadow" particle. Of course in the adjacent universe our particle is the shadow of their tangible particle. Because communication between universes is impossible, it is hard to imagine why a particle would bother to jump from one universe to another just to produce interference. Deutsch believes that the results of calculating simultaneously in parallel worlds can somehow be brought back here to coalesce.
Critics argue that QM paradoxes, as well as quantum computers, are just as easily explained by conventional theory or by such rivals as the pilot wave theory of David Bohm. In any case, Deutsch’s 1997 book The Fabric of Reality: The Science of Parallel Universes-and Its Implications is the most vigorous defense yet of a realistic MWI. Deutsch is fully aware that the MWI forces him to accept the reality of endless copies of himself out there in the infinity of other worlds. "I may feel subjectively," he writes (p. 53), "that I am distinguished among the copies as the 'tangible' one, because I can directly perceive myself and not the others, but I must come to terms with the fact that all the others feel the same about themselves. Many of those Davids are at this moment writing these very words. Some are putting it better. Others have gone for a cup of tea." And he is puzzled by the fact that so few physicists are as enthralled as he about the MWI!

Theoretical and experimental work on quantum computers is now a complex, controversial, rapidly growing field with Deutsch as its pioneer and leading theoretician. You can keep up with this research by clicking on Oxford’s Centre for Quantum Computation’s Web site.

The MWI should not be confused with a more recent concept of a multiverse proposed by Andrei Linde, a Russian physicist now at Stanford University, as well as by a few other cosmologists such as England’s Martin Rees. This multiverse is essentially a response to the anthropic argument that there must be a Creator because our universe has so many basic physical constants so finely tuned that, if any one deviated by a tiny fraction, stars and planets could not form, let alone life appear on a planet. The implication is that such fine tuning implies an intelligent tuner. Linde’s multiverse goes like this. Every now and then, whatever that means, a quantum fluctuation precipitates a Big Bang.
A universe with its own space-time springs into existence with randomly selected values for its constants. In most of these universes those values will not permit the formation of stars and life. They simply drift aimlessly down their rivers of time. However, in a very small set of universes the constants will be just right to allow creatures like you and me to evolve. We are here not because of any overhead intelligent planning but simply because we happen by chance to be in one of the universes properly tuned to allow life to get started.

We come now to a third kind of multiverse, by far the wildest of the three. It has been set forth not by a scientist but by a peculiar philosopher, now at Princeton University, named David Lewis. In his best-known book, The Plurality of Worlds (Oxford, 1986), and other writings, Lewis seriously maintains that every logically possible universe (that is, one with no logical contradictions such as square circles) is somewhere out there. The notion of logically possible worlds, by the way, goes back to Leibniz’s Theodicy. He speculated that God considered all logically possible worlds, then created the one He deemed best for His purposes.

Both the MWI and Lewis’s possible worlds allow time travel into the past. You need never encounter the paradox of killing yourself and yet still being alive, because as soon as you enter your past the universe splits into a new one in which you and your duplicate coexist. Most of Lewis’s worlds do not contain any replicas of you, but if they do they can be as weird as you please. You can't, of course, simultaneously have five fingers on each hand and seven on each hand because that would be logically contradictory. But you could have a hundred fingers, and a dozen arms, or seven heads. Any world you can think of without contradiction is real. Can pigs fly? Certainly. There is nothing contradictory about pigs with wings.
In an infinity of possible worlds there are lands of Oz, Greek gods on Mount Olympus, anything you can imagine. Every novel is a possible world. Somewhere millions of Ahabs are chasing whales. Somewhere millions of Huckleberry Finns are floating down rivers. Every kind of universe exists if it is logically consistent.

David Lewis’s mad multiverse was anticipated by hordes of science-fiction writers long before the MWI of QM came from Everett’s brain. More recent examples include Larry Niven’s 1969 story "All the Myriad Ways" and Frederik Pohl’s 1986 novel The Coming of the Quantum Cats. Jorge Luis Borges played with the theme in his story "The Garden of Forking Paths." There is a quotation from this tale at the front of The Many Worlds Interpretation of Quantum Mechanics (1973), a standard reference by DeWitt and Neill Graham. For other examples of multiverses in science fiction and fantasy see the entry on "Parallel Worlds" in The Encyclopedia of Science Fiction (1995) by John Clute and Peter Nicholls.

Fredric Brown, in What Mad Universe (1950), described Lewis’s multiverse this way:

There are, then, an infinite number of coexistent universes. "They include this one and the one you came from. They are equally real, and equally true. But do you conceive what an infinity of universes means, Keith Winton?" "Well-yes and no." "It means that, out of infinity, all conceivable universes exist. "There is, for instance, a universe in which this exact scene is being repeated except that you-or the equivalent of you-are wearing brown shoes instead of black ones. "There are an infinite number of permutations of that variation, such as one in which you have a slight scratch on your left forefinger and one in which you have purple horns and-" "But are they all me?" Mekky said, "No, none of them is you-any more than the Keith Winton in this universe is you. I should not have used that pronoun. They are separate individual entities.
As the Keith Winton here is; in this particular variation, there is a wide physical difference-no resemblance, in fact." Keith said thoughtfully, "If there are infinite universes, then all possible combinations must exist. Then, somewhere, everything must be true." "And there are an infinite number of universes, of course, in which we don't exist at all, that is, in which no creatures similar to us exist at all. In which the human race doesn't exist at all. There are an infinite number of universes, for instance, in which flowers are the predominant form of life, or in which no form of life has ever developed or will develop. "And infinite universes in which the states of existence are such that we would have no words or thoughts to describe them or to imagine them."

I have here looked at only the three most important versions of a multiverse. There are others, less well known, such as Penn State’s Lee Smolin’s universes, which breed and evolve in a manner similar to Darwinian theory. For a good look at all the multiverses now being proposed, see British philosopher John Leslie’s excellent book Universes (1989). I find it hard to believe that so many academics take Lewis’s possible worlds seriously. As poet Armand T. Ringer has put it in a clerihew:

David Lewis
Is a philosopher who is
Crazy enough to insist
That all logically possible worlds actually exist.

Alex Oliver, reviewing Lewis’s Papers in Metaphysics and Epistemology in The London Times Literary Supplement (January 7, 2000), closes by calling Lewis "the leading metaphysician at the start of this century, head and beard above his contemporaries." The stark truth is that there is not the slightest shred of reliable evidence that there is any universe other than the one we are in. No multiverse theory has so far provided a prediction that can be tested. In my layman’s opinion they are all frivolous fantasies. As far as we can tell, universes are not as plentiful as even two blackberries.
Surely the conjecture that there is just one universe and its Creator is infinitely simpler and easier to believe than that there are countless billions upon billions of worlds, constantly increasing in number and created by nobody. I can only marvel at the low state to which today’s philosophy of science has fallen. Martin Gardner
Noether Theorem

Emmy Noether (1882-1935) was an influential mathematician known for her groundbreaking contributions to abstract algebra and theoretical physics.

Basic illustrations and background

Informal statement of the theorem

All fine technical points aside, Noether's theorem can be stated informally: if a system has a continuous symmetry property, then there are corresponding quantities whose values are conserved in time.

A more sophisticated version of the theorem involving fields states that: if an integral I is invariant under a continuous group G_ρ with ρ parameters, then ρ linearly independent combinations of the Lagrangian expressions are divergences.

Historical context

A conservation law states that some quantity X in the mathematical description of a system's evolution remains constant throughout its motion; it is an invariant. Mathematically, the rate of change of X (its derivative with respect to time) vanishes,

dX/dt = 0.

The earliest constants of motion discovered were momentum and energy, which were proposed in the 17th century by René Descartes and Gottfried Leibniz on the basis of collision experiments, and refined by subsequent researchers. Isaac Newton was the first to enunciate the conservation of momentum in its modern form, and showed that it was a consequence of Newton's third law.

According to general relativity, the conservation laws of linear momentum, energy and angular momentum are only exactly true globally when expressed in terms of the sum of the stress-energy tensor (non-gravitational stress-energy) and the Landau-Lifshitz stress-energy-momentum pseudotensor (gravitational stress-energy). The local conservation of non-gravitational linear momentum and energy in a free-falling reference frame is expressed by the vanishing of the covariant divergence of the stress-energy tensor.
Another important conserved quantity, discovered in studies of the celestial mechanics of astronomical bodies, is the Laplace-Runge-Lenz vector.

Hamilton's principle states that the physical path q(t), the one actually taken by the system, is a path for which infinitesimal variations in that path cause no change in I, at least up to first order. This principle results in the Euler-Lagrange equations, where the momentum is conserved throughout the motion (on the physical path).

Mathematical expression

Simple form using perturbations

The essence of Noether's theorem is generalizing the ignorable coordinates outlined above,

where the perturbations δt and δq are both small, but variable. For generality, assume there are (say) N such symmetry transformations of the action, i.e. transformations leaving the action unchanged, labelled by an index r = 1, 2, 3, ..., N,

where ε_r are infinitesimal parameter coefficients corresponding to each. Using these definitions, Noether showed that the N corresponding quantities are conserved.

Time invariance

For illustration, consider a Lagrangian that does not depend on time, i.e., that is invariant (symmetric) under changes t -> t + δt, without any change in the coordinates q. In this case, N = 1, T = 1 and Q = 0; the corresponding conserved quantity is the total energy H.[6]

Translational invariance

Consider a Lagrangian which does not depend on an ("ignorable", as above) coordinate q_k; so it is invariant (symmetric) under changes q_k -> q_k + δq_k. In that case, N = 1, T = 0, and Q_k = 1; the conserved quantity is the corresponding momentum p_k.[7]

In special and general relativity, these apparently separate conservation laws are aspects of a single conservation law, that of the stress-energy tensor,[8] that is derived in the next section.
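The time-invariance case above can be illustrated numerically: for a Lagrangian with no explicit time dependence, such as a harmonic oscillator with L = m v^2/2 - k q^2/2, the conserved quantity is the total energy H = m v^2/2 + k q^2/2. A minimal sketch (all parameters illustrative; leapfrog integration keeps H essentially constant):

```python
# Illustration of time-translation symmetry -> energy conservation.
# Harmonic oscillator, L = m v^2/2 - k q^2/2, no explicit t-dependence,
# integrated with a leapfrog (kick-drift-kick) scheme.
m, k = 1.0, 4.0           # illustrative mass and spring constant
dt, steps = 1e-3, 10000
q, v = 1.0, 0.0           # initial conditions

def energy(q, v):
    """The Noether charge for time translations: H = m v^2/2 + k q^2/2."""
    return 0.5 * m * v ** 2 + 0.5 * k * q ** 2

e0 = energy(q, v)
for _ in range(steps):
    v += (-k * q / m) * dt / 2   # half kick
    q += v * dt                  # drift
    v += (-k * q / m) * dt / 2   # half kick

drift = abs(energy(q, v) - e0)
print(drift)  # tiny: H is conserved up to the integrator's O(dt^2) error
```

Making the Lagrangian explicitly time-dependent (say, k = k(t)) breaks the symmetry, and the same experiment then shows H drifting.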
Rotational invariance

The conservation of the angular momentum L = r × p is analogous to its linear momentum counterpart.[9] It is assumed that the symmetry of the Lagrangian is rotational, i.e., that the Lagrangian does not depend on the absolute orientation of the physical system in space. For concreteness, assume that the Lagrangian does not change under small rotations of an angle δθ about an axis n; such a rotation transforms the Cartesian coordinates by the equation

r -> r + δθ n × r.

Since time is not being transformed, T = 0. Taking δθ as the ε parameter and the Cartesian coordinates r as the generalized coordinates q, the corresponding Q variables are given by

Q = n × r.

Then Noether's theorem states that the following quantity is conserved,

Field theory version

A continuous transformation of the fields can be written infinitesimally in terms of its generators, and the Lagrangian density transforms in the same way; Noether's theorem then corresponds to the conservation law for the stress-energy tensor T_{μν}.[8] To wit, by using the expression given earlier, and collecting the four conserved currents (one for each μ) into a tensor T, Noether's theorem gives

∂_μ T^μ_ν = 0.

The conservation of electric charge, by contrast, can be derived by considering a variation linear in the fields rather than in the derivatives.[10] In quantum mechanics, the probability amplitude ψ(x) of finding a particle at a point x is a complex field, because it ascribes a complex number to every point in space and time. The probability amplitude itself is physically unmeasurable; only the probability p = |ψ|^2 can be inferred from a set of measurements. Therefore, the system is invariant under transformations of the ψ field and its complex conjugate field ψ* that leave |ψ|^2 unchanged, such as a complex rotation

ψ -> e^{iθ} ψ, ψ* -> e^{-iθ} ψ*.

In the limit when the phase θ becomes infinitesimally small, δθ, it may be taken as the parameter ε, while the corresponding field variations are iψ and -iψ*, respectively.
A specific example is the Klein-Gordon equation, the relativistically correct version of the Schrödinger equation for spinless particles. In this case, Noether's theorem states that there is a conserved current j^ν, satisfying ∂_ν j^ν = 0.

One independent variable

Consider a system with one independent variable, time, whose action integral is invariant under brief infinitesimal variations in the dependent variables. In other words, they satisfy the Euler-Lagrange equations

d/dt (∂L/∂q̇) = ∂L/∂q.

And suppose that the integral is invariant under a continuous symmetry. Mathematically such a symmetry is represented as a flow, φ, which acts on the variables. The action integral flows to a transformed value, which may be regarded as a function of ε. Calculating the derivative at ε = 0 and using Leibniz's rule, we get an expression that the Euler-Lagrange equations simplify; substituting, and again using the Euler-Lagrange equations, one can see that the resulting expression is a constant of the motion, i.e., it is a conserved quantity. Since φ[q, 0] = q, the conserved quantity simplifies accordingly.

Field-theoretic derivation

Noether's theorem may also be derived for tensor fields φ^A, where the index A ranges over the various components of the various tensor fields. These field quantities are functions defined over a four-dimensional space whose points are labeled by coordinates x^μ, where the index μ ranges over time (μ = 0) and three spatial dimensions (μ = 1, 2, 3). These four coordinates are the independent variables; and the values of the fields at each event are the dependent variables. Under an infinitesimal transformation, the variation in the coordinates is written

x^μ -> ξ^μ = x^μ + δx^μ,

whereas the transformation of the field variables is expressed correspondingly. By this definition, the field variations δφ^A result from two factors: intrinsic changes in the fields themselves and changes in coordinates, since the transformed field depends on the transformed coordinates ξ^μ.
To isolate the intrinsic changes, the field variation at a single point x^μ may be defined. If the coordinates are changed, the boundary of the region of space-time over which the Lagrangian is being integrated also changes; the original boundary and its transformed version are denoted as Ω and Ω′, respectively. Since ξ is a dummy variable of integration, and since the change in the boundary Ω is infinitesimal by assumption, the two integrals may be combined using the four-dimensional version of the divergence theorem. Using the Euler-Lagrange field equations, the difference in Lagrangians can be written neatly as a divergence, and thus so can the change in the action. Since this holds for any region Ω, the integrand must be zero; the terms involve the Lie derivative of φ^A in the X^μ direction. When φ^A is a scalar, these equations imply that the field variation taken at one point simplifies. Differentiating the resulting divergence with respect to ε at ε = 0 and changing the sign yields the conservation law for the conserved current.

Manifold/fiber bundle derivation

Examples of this M in physics include:

Now suppose there is a functional, the action, given by integrating a Lagrangian density that depends on φ, its derivatives and the position. Suppose we are given boundary conditions, i.e., a specification of the value of φ at the boundary if M is compact, or some limit on φ as x approaches ∞. Then the subspace of field configurations consisting of functions φ such that all functional derivatives of the action at φ are zero and φ satisfies the given boundary conditions is the subspace of on-shell solutions. (See principle of stationary action.) The symmetry condition must hold for all compact submanifolds N, or in other words, for all x. Now, for any N, because of the Euler-Lagrange theorem, on shell (and only on-shell), a divergence identity holds; since this is true for any N, the conservation law follows. Noether's theorem is an on-shell theorem: it relies on use of the equations of motion, the classical path.
It reflects the relation between the boundary conditions and the variational principle. Assuming no boundary terms in the action, Noether's theorem implies the conservation law above. The quantum analogs of Noether's theorem involving expectation values (e.g., ⟨∫ d^4x ∂_μ J^μ⟩ = 0), probing off-shell quantities as well, are the Ward-Takahashi identities.

Generalization to Lie algebras

Here f_{12} = Q_1[f_2^ε] - Q_2[f_1^ε].

Generalization of the proof

This applies to any local symmetry derivation Q satisfying QS ≈ 0, and also to more general local functional differentiable actions, including ones where the Lagrangian depends on higher derivatives of the fields. Let ε be any arbitrary smooth function of the spacetime (or time) manifold such that the closure of its support is disjoint from the boundary; ε is a test function. Then, because of the variational principle (which does not apply to the boundary, by the way), the derivation distribution q generated by q[ε][φ(x)] = ε(x)Q[φ(x)] satisfies q[ε][S] ≈ 0 for every ε, or more compactly, q(x)[S] ≈ 0 for all x not on the boundary (but remember that q(x) is a shorthand for a derivation distribution, not a derivation parametrized by x in general). This is the generalization of Noether's theorem.

To see how the generalization is related to the version given above, assume that the action is the spacetime integral of a Lagrangian that only depends on φ and its first derivatives. More generally, the Lagrangian may depend on higher derivatives.

Example 1: Conservation of energy

The quantity we can set aside (called the Hamiltonian) is conserved.

Example 2: Conservation of center of momentum

Still considering 1-dimensional time, an analogous conserved quantity arises.

Example 3: Conformal transformation

This has the form required by Noether's theorem (as one may explicitly check by substituting the Euler-Lagrange equations into the left hand side).
Note that if one tries to find the Ward-Takahashi analog of this equation, one runs into a problem because of anomalies. In quantum field theory, the analog to Noether's theorem, the Ward-Takahashi identity, yields further conservation laws, such as the conservation of electric charge from the invariance with respect to a change in the phase factor of the complex field of the charged particle and the associated gauge of the electric potential and vector potential.

See also

1. ^ See also Noether's second theorem.
5. ^ Nina Byers (1998) "E. Noether's Discovery of the Deep Connection Between Symmetries and Conservation Laws." in Proceedings of a Symposium on the Heritage of Emmy Noether, held on 2-4 December 1996, at the Bar-Ilan University, Israel, Appendix B.
6. ^ Lanczos 1970, pp. 401-3
7. ^ Lanczos 1970, pp. 403-4
8. ^ a b Goldstein 1980, pp. 592-3
9. ^ Lanczos 1970, pp. 404-5
10. ^ Goldstein 1980, pp. 593-4
12. ^ Vivek Iyer; Wald (1995). "A comparison of Noether charge and Euclidean methods for Computing the Entropy of Stationary Black Holes". Physical Review D. 52 (8): 4430-9. arXiv:gr-qc/9503052. Bibcode:1995PhRvD..52.4430I. doi:10.1103/PhysRevD.52.4430.

External links
Monday, 13 May 2013

Coursera - Quantum Mechanics

A course on quantum mechanics! What am I thinking? So I signed up for a course on quantum mechanics. I mean, how hard can it be? Answer - *!?**!!* hard! I brushed off my knowledge of imaginary numbers, I went through the introductory maths materials - they didn't seem too hard. OK - I struggled to remember complex conjugates, and one or two other things. I thought there might be a reasonable introduction, and an explanation about things - which there was. However the learning curve was incredibly steep, which was sort of emphasised by the first homework.

Q1 For what was Albert Einstein awarded the Nobel prize?
• General Relativity
• The expansion of the universe
• The photo-electric effect
• Electron diffraction

OK - I actually knew that one - although it was in the course materials too.

Q2 Recall how the Schrödinger equation was motivated by the non-relativistic dispersion relation E = p²/2m. If we follow the same procedure for the case of a relativistic dispersion relation (E² = p²c² + m²c⁴), what equation do we arrive at? (For simplicity consider the one-dimensional case.)

Ouch! The gloves are off! The lectures also had a grading system. No stars was for everyone, 1 star had some maths in it, two stars extensive maths, and three stars - mega maths. Most of the videos were in the 2/3 star range. I actually enjoyed doing some of the integration - but realised I was gradually losing the plot as the course went on. I never really got a good handle on the bra-ket notation - I still don't really get its power - I'm missing something I'm sure, but they didn't spend very long on it, and the books I got didn't help. Then it was onto Dirac deltas, Levi-Civita notation and stuff about spin. By now I was really struggling with the weekly homeworks, and guessing as many as I was solving - I was no longer learning and close to drowning.
I did think about giving up on the course, but I stayed the distance, and finished all the videos and all the homeworks. This course had an exam - one chance to answer each question - 6 hour time limit. A couple of questions I could answer, the rest I guessed at, except those that required a numeric answer - which I couldn't do. I got 42%, which I consider more than fair. This gave me a total course mark of 72% - again more than I deserve. So I probably got half way before I couldn't keep up, and for me it was hard to turn all that maths back into what it meant in the real world - even in the abstract. I guess that's not unusual in quantum mechanics!

Sayan Datta said... Hello Julian, Thanks for this post. I am doing this course at its present iteration. Can you give me a rough idea about what percentage of homework and exam questions require entering numeric answers? I have trouble entering numeric answers...that's why I am asking the question.

Julian Onions said... Quite a number of the homeworks require you to either work out maths expressions, or solve for numbers. Good luck!

Sayan Datta said... Thanks for the reply...
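Coming back to Q2: for anyone curious, here is (I believe) how it works out. Applying the same quantization substitutions used to motivate the Schrödinger equation, E → iħ ∂/∂t and p → −iħ ∂/∂x, to the relativistic dispersion relation gives:

```latex
E^2 = p^2 c^2 + m^2 c^4
\quad\longrightarrow\quad
-\hbar^2 \frac{\partial^2 \psi}{\partial t^2}
  = -\hbar^2 c^2 \frac{\partial^2 \psi}{\partial x^2} + m^2 c^4\,\psi ,
```

which is the one-dimensional Klein-Gordon equation, often rearranged as (1/c²) ∂²ψ/∂t² − ∂²ψ/∂x² + (m²c²/ħ²) ψ = 0.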
BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Nonequilibrium quantum and statistical physics - ECPv4.5.3//NONSGML v1.0//EN CALSCALE:GREGORIAN METHOD:PUBLISH X-WR-CALNAME:Nonequilibrium quantum and statistical physics X-ORIGINAL-URL: X-WR-CALDESC:Events for Nonequilibrium quantum and statistical physics BEGIN:VEVENT DTSTART;TZID=Europe/Ljubljana:20171128T160000 DTEND;TZID=Europe/Ljubljana:20171128T180000 DTSTAMP:20180223T195403 CREATED:20171121T103051Z LAST-MODIFIED:20171209T102939Z SUMMARY:Marko Robnik: Application of the WKB method in 1D linear and nonlinear time-dependent oscillators DESCRIPTION:The WKB method is an important analytic tool for solving numerous problems in\nmathematical physics of 1D systems\, for example the stationary (time-independent)\nSchrödinger equation in one dimension\, or the classical dynamics of one-dimensional\ntime-dependent (nonautonomous) Hamilton oscillators. I shall review the standard\nWKB method including the exact explicit solutions to all orders\, published by Robnik and Romanovski (2000)\, and applied in a series of papers. Among other results\nwe have shown that the application of the method in cases of the Schrödinger equation with exactly solvable potentials leads to an infinite series to all orders\, that the\nseries converges and the sum reproduces the known exact eigenenergies. We shall\nlook in particular at the case of the time-dependent one-dimensional linear Hamiltonian oscillator\, and then I shall present the approach towards generalizing the WKB\nmethod for the case of one-dimensional time-dependent nonlinear Hamiltonian oscillators having quadratic kinetic energy and homogeneous power law potential\, which\nincludes e.g. the quartic oscillator\, and of course also the linear oscillator. I will\nshow that the nonlinear method\, although only in the leading approximation\, is very\nuseful and accurate. 
We also shall touch upon possible generalizations.\n\n* Marko Robnik\, CAMTP\, Univerza v Mariboru URL: LOCATION:Jadranska 19\, Ljubljana\, Slovenia END:VEVENT END:VCALENDAR
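The leading-order WKB quantization that the abstract builds on is easy to see in action numerically. A minimal sketch, with the illustrative choices ħ = m = ω = 1 and the harmonic oscillator V(x) = ½x², for which the leading (Bohr-Sommerfeld) order ∮ p dx = 2πħ(n + ½) happens to reproduce the exact spectrum E_n = n + ½:

```python
import math

# Leading-order WKB (Bohr-Sommerfeld) quantization: ∮ p dx = 2πħ(n + ½).
# Applied to the harmonic oscillator V(x) = ½x² in units ħ = m = ω = 1
# (an illustrative choice), it reproduces the exact E_n = n + ½.

def action(E, n_steps=2000):
    """J(E) = ∮ p dx = 2 ∫_{-a}^{a} sqrt(2E - x²) dx, with turning points
    a = sqrt(2E). The substitution x = a sin(θ) removes the square-root
    endpoint singularity, so a midpoint rule converges quickly."""
    a = math.sqrt(2.0 * E)
    h = math.pi / n_steps
    total = 0.0
    for i in range(n_steps):
        theta = -math.pi / 2 + (i + 0.5) * h
        x = a * math.sin(theta)
        p = math.sqrt(max(2.0 * E - x * x, 0.0))
        total += p * a * math.cos(theta) * h
    return 2.0 * total

def wkb_energy(n):
    """Solve J(E) = 2π(n + ½) for E by bisection (J is increasing in E)."""
    target = 2.0 * math.pi * (n + 0.5)
    lo, hi = 1e-9, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if action(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for n in range(4):
    print(n, round(wkb_energy(n), 6))  # ≈ n + 0.5
```

For anharmonic potentials such as the quartic oscillator mentioned in the abstract, the same leading-order condition gives only approximate eigenenergies, which is where the higher-order corrections reviewed in the talk come in.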
Electron orbital

From Citizendium, the Citizens' Compendium

In quantum chemistry, an electron orbital (or more often just orbital) is a synonym for a one-electron function, i.e., a function of a single vector r, the position vector of the electron.[1] The majority of quantum chemical methods expect that an orbital has a finite norm, i.e., that the orbital is normalizable (quadratically integrable), and hence this requirement is often added to the definition. In other branches of chemistry, an orbital is often seen as a wave function of an electron, meaning that an orbital is seen as a solution of an (effective) one-electron Schrödinger equation. This point of view is a narrowing of the more general quantum chemical definition, but not contradictory to it. In the past quantum chemists, too, distinguished one-electron functions from orbitals; one-electron functions were fairly arbitrary functions of the position vector r, while orbitals were solutions of certain effective one-electron Schrödinger equations. This distinction faded out in quantum chemistry, resulting in the present definition of "normalizable one-electron function", not necessarily eigenfunctions of a one-electron Hamiltonian. Usually one distinguishes two kinds of orbitals: atomic orbitals and molecular orbitals. Atomic orbitals are expressed with respect to one Cartesian system of axes centered on a single atom. Molecular orbitals (MOs) are "spread out" over a molecule. Usually this is a consequence of an MO being a linear combination (weighted sum) of atomic orbitals centered on different atoms.

Definitions of orbitals

Several kinds of orbitals can be distinguished. 
Atomic orbital (AO) See the article atomic orbital

Molecular orbital (MO) See the article molecular orbital

The AOs and MOs defined so far depend only on the spatial coordinate vector r of a single electron. In addition, an electron has a spin coordinate μ, which can have two values: spin-up or spin-down. A complete set of functions of μ consists of two functions only, traditionally these are denoted by α(μ) and β(μ). These functions are eigenfunctions of the z-component sz of the spin angular momentum operator with eigenvalues ±½.

Spin atomic orbital The most general spin atomic orbital of an electron is of the form

χ(r, μ) = φ⁺(r)α(μ) + φ⁻(r)β(μ),

where r is a vector from the nucleus of the atom to the position of the electron. In general this function is not an eigenfunction of sz. More common is the use of either

φ(r)α(μ) or φ(r)β(μ),

which are eigenfunctions of sz. Since it is rare that different AOs are used for spin-up and spin-down electrons, we dropped the superscripts + and −.

Spin molecular orbital A spin molecular orbital is usually either

φ⁺(r)α(μ) or φ⁻(r)β(μ).

Here the superscripts + and − might be necessary, because some quantum chemical methods distinguish the spatial wave functions of electrons with different spins. These are the so-called different orbitals for different spins (DODS) (or spin-unrestricted) methods. However, many quantum chemical methods apply the spin-restriction:

φ⁺(r) = φ⁻(r) ≡ φ(r).

Chemists express this spin-restriction by stating that two electrons [an electron with spin up (α), and another electron with spin down (β)] are placed in the same spatial orbital φ. This means that the total N-electron wave function contains a factor of the type φ(r1)α(μ1)×φ(r2)β(μ2). Before the advent of electronic computers, orbitals (molecular as well as atomic) were used extensively in qualitative arguments explaining all kinds of properties of atoms and molecules. 
Orbitals still play this role in introductory texts and also in organic chemistry, where orbitals serve in the explanation of some reaction mechanisms, for instance in the Woodward-Hoffmann rules. In modern computational quantum chemistry the role of atomic orbitals is different; they serve as a convenient expansion basis, comparable to powers of x in a Taylor expansion of a function f(x), or sines and cosines in a Fourier series. Atomic orbitals Originally AOs were defined as approximate solutions of atomic Schrödinger equations, see the section solution of the atomic Schrödinger equation in atomic orbital for more on this. Very often atomic orbitals are depicted in polar plots. In fact, the angular parts (functions depending on the spherical polar angles θ and φ) are commonly plotted. The angular parts of atomic orbitals are functions known as spherical harmonics. As stated, quantum chemists see AOs simply as convenient basis functions for quantum mechanical computations. See the section AO basis sets in the article atomic orbital for more on the use of atomic orbitals in quantum chemistry. See Slater orbital for the explicit analytic form of orbitals and see Gauss type orbitals for a discussion of the type of AOs that are most frequently used in computations. Molecular orbitals To date the most widely applied method for computing molecular orbitals is a Hartree-Fock method in which the MO is expanded in atomic orbitals. Since the term "Hartree-Fock" (HF) is in quantum chemistry almost synonymous with the term "self-consistent field" (SCF), such an MO is often referred to as an SCF-LCAO-MO. See the section computation of molecular orbitals in molecular orbital for a simple example of a case where an MO is almost completely determined by symmetry. In more complicated cases it is necessary to solve the Hartree-Fock-Roothaan equations, which have the form of a generalized eigenvalue problem. 
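The generalized eigenvalue problem just mentioned, of the form F C = ε S C (the Hartree-Fock-Roothaan equations, with Fock matrix F and overlap matrix S), can be made concrete in the smallest possible case of two overlapping basis AOs. The numbers α, β, s below are illustrative placeholders, not computed integrals; by symmetry the eigenvectors are the bonding and antibonding combinations c = (1, ±1):

```python
import math

# Toy Hartree-Fock-Roothaan-style generalized eigenvalue problem F c = ε S c
# in a minimal two-AO basis. α (on-site), β (coupling) and s (overlap) are
# illustrative numbers, not computed integrals.

alpha, beta, s = -1.0, -0.5, 0.25

F = [[alpha, beta], [beta, alpha]]
S = [[1.0, s], [s, 1.0]]

# By symmetry the eigenvectors are the (un-normalized) bonding and
# antibonding combinations c = (1, ±1); substituting them into F c = ε S c
# gives the two orbital energies:
eps_bonding = (alpha + beta) / (1 + s)
eps_antibonding = (alpha - beta) / (1 - s)

# Verify F c = ε S c component-wise for both combinations.
for c, eps in (([1.0, 1.0], eps_bonding), ([1.0, -1.0], eps_antibonding)):
    for row in range(2):
        lhs = sum(F[row][k] * c[k] for k in range(2))
        rhs = eps * sum(S[row][k] * c[k] for k in range(2))
        assert math.isclose(lhs, rhs)

print(eps_bonding, eps_antibonding)
```

In a realistic calculation the matrix dimension grows with the basis and the eigenvectors are no longer fixed by symmetry, so the equations are solved iteratively to self-consistency, but the algebraic structure is exactly this one.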
The dimension of this problem is equal to the number of atomic orbitals included in the AO basis (see AO basis sets in atomic orbital for more details).

History of the term orbital

Orbit is an old noun introduced by Johannes Kepler in 1609 to describe the trajectories of the earth and the planets. The adjective "orbital" had (and still has) the meaning "relating to an orbit". When Ernest Rutherford in 1911 postulated his planetary model of the atom (the nucleus as the sun, and the electrons as the planets) it was natural to call the paths of the electrons "orbits". Bohr used the word as well, although he was the first to recognize (1913) that an electron orbit is not a trajectory, but a stationary state of the hydrogen atom. After Schrödinger (1926) had solved his wave equation for the hydrogen atom (see hydrogen-like atoms for details), it became clear that the electronic orbits did not resemble planetary orbits at all. The wave functions of the hydrogen electron are time-independent and smeared out; they are more like unmoving clouds than planetary orbits. As a matter of fact, the angular parts of the hydrogen wave functions are spherical harmonic functions and hence they have the same appearance as these functions. (See spherical harmonics for a few graphical illustrations). In the 1920s electron spin was discovered, whereupon the adjective "orbital" started to be used in the meaning of "non-spin", that is, as a synonym of "spatial". In scientific papers of around 1930 one finds discussions about "orbital degeneracy", meaning that the spatial (non-spin) parts of several one-electron wave functions have the same energy. Also the terms orbital- and spin-angular momentum date from these days. In 1932 Robert S. Mulliken coined the noun "orbital". Mulliken wrote:[2] From here on, one-electron orbital wave functions will be referred to for brevity as orbitals. 
Note that here, evidently, Mulliken uses "orbital" as relating to "spatial" (non-spin) and defines an orbital as what is now called a "spatial orbital". Then Mulliken went on in the same article to distinguish atomic and molecular orbitals. Later the somewhat unfortunate term "spinorbital" was introduced for the product functions φ(r)α(μ) and φ(r)β(μ) in which φ(r) has the tautological name "spatial orbital" and α(μ) and β(μ) are called "one-electron spin functions". The term "spinorbital" is unfortunate because it merges in one word the concepts of spin and orbital, which were distinguished carefully by early writers on quantum mechanics. For instance, one of the pioneers of theoretical chemistry, Walter Heitler, juxtaposes two-electron spin functions and two-electron orbital functions.[3] In the phrase "two-electron orbital function", Heitler uses orbital as an adjective synonymous with spatial (non-spin). Note, parenthetically, that Heitler does not refer to a "two-electron orbital", (there is no such thing as a two-electron orbital !) and that an inexperienced reader may easily—and erroneously—interpret the term "two-electron orbital function" as "two-electron orbital". References and notes 1. Here "orbital" is used as a noun. In quantum mechanics, the adjective orbital is often used as a synonym of "spatial" (as in orbital angular momentum), in contrast to spin (as in spin angular momentum). 2. R. S. Mulliken, Electronic Structures of Molecules and Valence. II General Considerations, Physical Review, vol. 41, pp. 49-71 (1932) 3. W. Heitler, Elementary Wave Mechanics, 2nd edition (1956) Clarendon Press, Oxford, UK.